Welcome to Remo Team

Remo Team gives you all the features of Remo Community, and in addition you can:

  • Invite team members
  • Log in using secure authentication
  • Enjoy multi-processing
  • Store your data on an S3 bucket

More features will be coming soon.

Remo Team is a paid package. To proceed with the installation, you will need to purchase a license.

Installation overview

The rest of this page goes over the steps to install Remo Team using Docker.

We are happy to assist with the installation if needed - just drop us a line at hello AT remo DOT ai.

To complete the installation, you will need the following files, which should have been provided to you:

  • remo-team-docker-access.json: to access our Remo Team Docker Hub
  • docker-compose.yml and docker-compose-separate-postgres.yml: to launch the application
  • .env file: to set up the environment
  • S3-bucket-policy.json: to set up the S3 bucket policy, if you want to use S3

Make sure to save all the files in the same folder before proceeding.
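
Once saved, listing the folder should show something like this (the .env file is hidden, hence the -a flag):

# expected contents of the installation folder
ls -a
# .env  S3-bucket-policy.json  docker-compose-separate-postgres.yml
# docker-compose.yml  remo-team-docker-access.json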

1. Install Docker and Docker Compose

If they are not already installed, install the required Docker components by running the commands below from the command line.

For more information about Docker Compose, you can refer to https://docs.docker.com/compose/install/.

# install docker, docker-compose
sudo apt update
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# if this doesn't work, you might have to logout and login again, or reboot
newgrp docker

sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
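
To confirm both tools are installed and the docker group change took effect, you can run:

# verify the installation
docker --version
docker-compose --version
docker run --rm hello-world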

2. Authenticate access to private Docker registry

You should have received a remo-team-docker-access.json file. Use it to gain access to Remo's private Docker registry:

cat remo-team-docker-access.json | docker login -u _json_key --password-stdin https://eu.gcr.io
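
If the login succeeds, docker prints "Login Succeeded". You can then confirm access by pulling one of the images (the 0.5.7 tag matches the examples below; substitute the latest one):

docker pull eu.gcr.io/premium-remo/backend:0.5.7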

3. Prepare docker-compose.yml

Fill in the docker-compose.yml file with the relevant details:

  • make sure to use the latest tag from our Docker Hub
  • add any folder you want to expose to Remo for the purposes of linking data
  • other changes are needed only if you want to use an external PostgreSQL database.

Linking data

If you want to link data from your machine in Remo, you need to make a path on the host visible inside the container by adding it to the backend service's volumes section.

For example, if you add:

- /home/andrea/data/my_datasets:/datasets

you would make the local path /home/andrea/data/my_datasets visible inside the container as /datasets/.

You can then use the path inside the container to pass folders and files to Remo.
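
In context, the backend service's volumes section would then look like this (the /home/andrea path is just the example above; use your own):

# backend volumes section in docker-compose.yml, with the extra bind mount
  backend:
    # (other backend keys unchanged)
    volumes:
      - media:/app/remo_app/media
      - tmp_files:/tmp
      - /home/andrea/data/my_datasets:/datasets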

Internal PostgreSQL server

If using the internal PostgreSQL server, the installation creates three separate containers running on the same server: one for the backend, one for the frontend and one for PostgreSQL.

Remember to change the tag from 0.5.7 to the latest one from our Docker Hub, under both frontend and backend.

Your docker-compose.yml file is then ready to use.

# docker-compose.yml file

version: '2'

volumes:
  media: {}
  tmp_files: {}
  postgresdb_data: {}

services:
  postgres:
    image: postgres:11-alpine
    restart: always
    volumes:
      - postgresdb_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    env_file: .env

# change the tag for the version number to the latest from our Docker Hub
  backend:
    image: eu.gcr.io/premium-remo/backend:0.5.7
    restart: always
    depends_on:
      - postgres
    volumes:
      - media:/app/remo_app/media
      - tmp_files:/tmp
    ports:
      - "5000:5000"
    env_file: .env

# change the tag for the version number to the latest from our Docker Hub
  frontend:
    image: eu.gcr.io/premium-remo/frontend:0.5.7
    restart: always
    depends_on:
      - backend
    ports:
      - "80:80"

External PostgreSQL server

You have the option to connect Remo to an external PostgreSQL database for increased reliability.

If using this option:

  • use the docker-compose-separate-postgres.yml file
  • rename docker-compose-separate-postgres.yml to docker-compose.yml
  • change the tag for the version number to the latest one from our Docker Hub
  • enter the relevant database details in the .env file (next step)

# docker-compose-separate-postgres.yml file
version: '2'

volumes:
  media: {}
  tmp_files: {}
  postgresdb_data: {}

# change the tag for the version number to the latest from our Docker Hub
services:
  backend:
    image: eu.gcr.io/premium-remo/backend:0.5.7
    restart: always
    volumes:
      - media:/app/remo_app/media
      - tmp_files:/tmp
    ports:
      - "5000:5000"
    env_file: .env

  frontend:
    image: eu.gcr.io/premium-remo/frontend:0.5.7
    restart: always
    depends_on:
      - backend
    ports:
      - "80:80"

4. Prepare .env file

Fill in the following details:

  • admin details: email address, username, password and full name
  • domain: in most cases, this is just the public IP address of your VM
  • optional: if using an external PostgreSQL server, fill in the relevant DB details
  • optional: if you want data to be stored in your S3 bucket, fill in the relevant fields (see also next section)

# .env file

# General settings
DOMAIN=sample.com

# Remo license
REMO_TOKEN=******

# Admin user
REMO_ADMIN_EMAIL=admin@sample.com
REMO_ADMIN_PASS=samplepass
REMO_ADMIN_USERNAME=admin
REMO_ADMIN_FULLNAME=Sample Admin

# Postgres database connection info
POSTGRES_PASSWORD=remopass
POSTGRES_USER=remo
POSTGRES_DB=remo
POSTGRES_HOST=postgres

# AWS
#STORAGE=aws
#DJANGO_AWS_ACCESS_KEY_ID=******       # service account credentials
#DJANGO_AWS_SECRET_ACCESS_KEY=*******  # service account credentials
#DJANGO_AWS_S3_HOST=s3-eu-west-1.amazonaws.com     # need to check, depends on region
#DJANGO_AWS_STORAGE_BUCKET_NAME=sample-bucket
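
If you need to generate strong values for the password fields, one quick option from the command line:

# print a random 24-byte, base64-encoded string
openssl rand -base64 24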

5. Launch application

Run the following command in the same directory where the config files live:

docker-compose up -d

That's it!

Your Remo instance will now be running and accessible by browsing to your DOMAIN.
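
To check that the containers came up correctly:

# list running services and follow the backend logs
docker-compose ps
docker-compose logs -f backend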

6. (Optional) Setup S3 bucket

1) Fill out the relevant details in the .env file

2) Create a private S3 bucket

3) Create a service account

4) Grant the service account access to the S3 bucket using the given JSON policy

# S3-bucket-policy.json file

# create private S3 bucket: sample-bucket
# create AWS user/service account
# grant permission via inline policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::sample-bucket",
        "arn:aws:s3:::*/*"
      ]
    }
  ]
}
5) Download the credentials for the service account and enter the details in the .env file
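
The same steps can be scripted with the AWS CLI. A sketch, assuming the bucket name sample-bucket, the region eu-west-1 and a hypothetical user name remo-s3-user:

# create the bucket, the service account and the inline policy
aws s3api create-bucket --bucket sample-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
aws iam create-user --user-name remo-s3-user
aws iam put-user-policy --user-name remo-s3-user --policy-name remo-s3-access --policy-document file://S3-bucket-policy.json
# prints the access key id and secret to put in the .env file
aws iam create-access-key --user-name remo-s3-user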


Downgrading to Community

If you are using Remo Team, you have the option to downgrade back to Remo Community at any time.
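
Before downgrading, it is prudent to back up the database while the containers are still running. A sketch, assuming the internal PostgreSQL setup and the credentials from the .env file above:

# dump the Team database to a local file
docker-compose exec -T postgres pg_dump -U remo remo > remo_backup.sql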

1) Stop the running Remo application:

docker-compose down

2) Replace your docker-compose.yml file with this one:

version: '3.2'

services:
  remo:
    image: rediscoveryio/remo:0.5.6
    volumes:
      - ./remo_home:/root/.remo
    ports:
      - "80:8123"
    env_file: .env

3) In your existing .env file, add this DB_URL value:

DB_URL=postgres://db_user:db_password@db_host:5432/db_name

This change allows Remo to use the existing database without losing data. However, any data stored in your S3 bucket will no longer be available.

4) Run the community version:

docker-compose up -d