SDK
The SDK class contains some core functionalities of Remo. It mainly acts as a wrapper around our API endpoints.
Use the SDK class to:
- create a dataset and annotation set
- list and retrieve datasets
- export annotations without needing to initialize a dataset
Most of the functions documented below can be called from Python by doing
import remo
remo.function_name()
class remo.sdk.SDK¶
Creates an SDK object and checks the connection to the server
documentation
class remo.sdk.SDK(server: str, email: str, password: str, viewer: str = 'browser')
Parameters
- server – server host name, e.g. http://localhost:8123/
- email – user credentials
- password – user credentials
- viewer – allows choosing between the browser, electron and jupyter viewers. To change the viewer, use the set_viewer() function. See the example below.
Example:
import remo
remo.set_viewer('browser')
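A minimal connection sketch based on the constructor signature above; the server address and credentials are placeholder values:
from remo.sdk import SDK
sdk = SDK(server='http://localhost:8123/', email='user@example.com', password='secret')  # placeholder credentials
sdk.list_datasets()  # quick check that the connection works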
add_annotations_to_image¶
Adds annotations to a given image
documentation
add_annotations_to_image(annotation_set_id: int, image_id: int, annotations: List[remo.domain.annotation.Annotation])
Parameters
- annotation_set_id – annotation set id
- image_id – image id
- annotations – list of Annotation objects
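A short sketch of attaching a single bounding-box annotation to an image; the Annotation attribute names (img_filename, classes, bbox) and the ids used here are illustrative assumptions, not part of the signature above:
annotation = remo.Annotation()
annotation.img_filename = 'image_1.jpg'   # assumed attribute name, for illustration only
annotation.classes = 'Dog'
annotation.bbox = [227, 284, 678, 674]
remo.add_annotations_to_image(annotation_set_id=1, image_id=10, annotations=[annotation])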
add_data_to_dataset¶
Adds images and/or annotations to an existing dataset.
Use local_files to link (rather than copy) images. Use paths_to_upload if you want to copy image files or archive files. Use urls to download images, annotations or archives from the web.
Adding images: support for jpg, jpeg, png, tif
Adding annotations: to add annotations, you need to specify the annotation task and make sure the file format is one of those supported. See documentation here: https://remo.ai/docs/annotation-formats/
Adding archive files: support for zip, tar, gzip
documentation
add_data_to_dataset(dataset_id: int, local_files: List[str] = None, paths_to_upload: List[str] = None, urls: List[str] = None, annotation_task: str = None, folder_id: int = None, annotation_set_id: int = None, class_encoding=None, wait_for_complete=True)
Parameters
- dataset_id – id of the dataset to add data to
- local_files – list of files or directories containing annotations and image files. Remo will create smaller copies of your images for quick previews, but it will point at the original files to show images at their original resolution. Folders will be recursively scanned for image files.
- paths_to_upload – list of files or directories containing images, annotations and archives. These files will be copied inside the .remo folder. Folders will be recursively scanned for image files. Unpacked archives will be scanned for images, annotations and nested archives.
- urls – list of urls pointing to a downloadable target, which can be an image, annotation file or archive.
- annotation_task – annotation tasks tell Remo how to parse annotations. See also: remo.task.
- folder_id – specifies the target virtual folder in the Remo dataset. If None, data is added at the root level.
- annotation_set_id – specifies the target annotation set in the dataset. If None: if no annotation set exists, one will be automatically created; if exactly one annotation set already exists, annotations will be added to that annotation set, provided the task matches.
- class_encoding – specifies how to convert labels in annotation files to readable labels. If None, Remo will try to interpret the encoding automatically, which for standard words means they will be read as they are. See also: remo.class_encodings.
- wait_for_complete – blocks the function until the data upload completes
Returns
Dictionary with results for linking files, uploading files and uploading urls:
{ 'files_link_result': ..., 'files_upload_result': ..., 'urls_upload_result': ... }
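A brief sketch of adding local images and an annotation file to an existing dataset; the dataset id, file paths and task name are placeholder values:
result = remo.add_data_to_dataset(dataset_id=1,
                                  paths_to_upload=['./images/', './annotations.csv'],  # placeholder paths
                                  annotation_task='Object detection')                  # placeholder task
print(result['files_upload_result'])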
create_annotation_set¶
Creates a new annotation set within the given dataset
documentation
create_annotation_set(annotation_task: str, dataset_id: int, name: str, classes: List[str] = [])
Parameters
- annotation_task – specified task for the annotation set. See also: remo.task
- dataset_id – dataset id
- name – name of the annotation set
- classes – list of classes. Default is no classes
Returns
remo.AnnotationSet
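A short sketch of creating an empty annotation set for a detection task; the dataset id, task name and class list are placeholder values:
annotation_set = remo.create_annotation_set(annotation_task='Object detection',   # placeholder task
                                            dataset_id=1,
                                            name='detection_set',
                                            classes=['Dog', 'Cat'])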
create_dataset¶
Creates a new dataset in Remo and optionally populates it with images and annotations. To add annotations, you need to specify an annotation task.
documentation
create_dataset(name: str, local_files: List[str] = None, paths_to_upload: List[str] = None, urls: List[str] = None, annotation_task: str = None, class_encoding=None, wait_for_complete=True)
Parameters
- name – name of the dataset.
- local_files – list of files or directories. These files will be linked. Folders will be recursively scanned for image files: jpg, png, tif.
- paths_to_upload – list of files or directories. These files will be copied. Supported files: images, annotations and archives.
  - image files: jpg, png, tif.
  - annotation files: json, xml, csv.
  - archive files: zip, tar, gzip. Unpacked archives will be scanned for images, annotations and nested archives.
- urls – list of urls pointing to a downloadable target, which can be an image, annotation file or archive.
- annotation_task – specifies the annotation task. See also: remo.task.
- class_encoding – specifies how to convert class labels in annotation files to classes. See also: remo.class_encodings.
- wait_for_complete – blocks the function until the data upload completes
Returns
remo.Dataset
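A brief sketch of creating a dataset from a local folder of images; the dataset name, path and task are placeholder values:
dataset = remo.create_dataset(name='my_dataset',
                              paths_to_upload=['./my_images/'],           # placeholder path
                              annotation_task='Image classification')     # placeholder task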
delete_dataset¶
Deletes dataset
documentation
delete_dataset(dataset_id: int)
Parameters
dataset_id – dataset id
export_annotations_to_file¶
Exports annotations in a given format and saves them to a file. If export_tags = True, output_file needs to be a .zip file.
It offers some convenient export options, including:
- appending the full path to image filenames,
- choosing between coordinates in pixels or percentages,
- exporting tags to a separate file,
- exporting annotations filtered by user-determined tags.
Example:
# Download and unzip this sample dataset: https://s-3.s3-eu-west-1.amazonaws.com/dogs_dataset.json
dogs_dataset = remo.create_dataset(name='dogs_dataset',
                                   local_files=['dogs_dataset.json'],
                                   annotation_task='Instance Segmentation')
dogs_dataset.export_annotations_to_file(output_file='./dogs_dataset_train.json',
                                        annotation_format='coco',
                                        append_path=False,
                                        export_tags=False,
                                        filter_by_tags='train')
documentation
export_annotations_to_file(output_file: str, annotation_set_id: int, annotation_format: str = 'json', export_coordinates: str = 'pixel', append_path: bool = True, export_tags: bool = True, filter_by_tags: list = None)
Parameters
- output_file – output file to save. Includes file extension and can include file path. If export_tags = True, output_file needs to be a .zip file
- annotation_set_id – annotation set id
- annotation_format – can be one of ['json', 'coco', 'csv']. Default: 'json'
- append_path – if True, appends the path to the filename (e.g. local path). Default: True
- export_coordinates – converts output values to percentages or pixels, can be one of ['pixel', 'percent']. Default: 'pixel'
- export_tags – if True, also exports all the tags to a CSV file. Default: True
- filter_by_tags – allows exporting annotations only for images containing certain image tags. It can be of type List[str] or str. Default: None
generate_annotations_from_folders¶
Creates a CSV annotation file associating images with labels, starting from folders named with labels (a common folder structure for Image Classification tasks). The CSV file is saved in the same input directory where images are stored.
Example of data structure for a dog / cat dataset:
- cats_and_dogs
  - dog
    - img1.jpg
    - img2.jpg
    - …
  - cat
    - img199.jpg
    - img200.jpg
    - …
Example:
# Download and unzip this sample dataset: s-3.s3-eu-west-1.amazonaws.com/cats_and_dogs.zip
data_path = "cats_and_dogs"
remo.generate_annotations_from_folders(path_to_data_folder=data_path)
documentation
generate_annotations_from_folders(path_to_data_folder: str, output_file_path: str = './annotations.csv', append_path: bool = True)
Parameters
- path_to_data_folder – path to the source folder where data is stored
- output_file_path – location and filename where to store the file. Default: './annotations.csv'
- append_path – if True, file paths are appended to filenames in the output file, otherwise the filename alone is used. Default: True
Returns
string, path to the generated CSV annotation file. Format: 'file_name', 'class_name'
Return type
output_file_path
generate_image_tags¶
Creates a CSV annotation file associating tags to images, as defined in the tags_dictionary. The CSV file is saved in the current working directory.
Example of a dictionary: {'train': ['img1.jpg', 'img2.jpg'], 'test': ['img3.jpg', 'img4.jpg'], 'val': ['img5.jpg', 'img6.jpg']}
Example:
# Download and unzip this sample dataset: https://s-3.s3-eu-west-1.amazonaws.com/small_flowers.zip
import glob
import os
import random
im_list = [os.path.basename(i) for i in glob.glob('./small_flowers/images/**/*.jpg', recursive=True)]
im_list = random.sample(im_list, len(im_list))
tags_dict = {'train': im_list[0:121], 'test': im_list[121:131], 'valid': im_list[131:141]}
remo.generate_image_tags(tags_dict)
documentation
generate_image_tags(tags_dictionary: dict, output_file_path: str = './images_tags.csv', append_path: bool = True)
Parameters
- tags_dictionary – dictionary where each key is a tag and the value is a list of image filenames (or folder paths containing images) to which we want to assign the tag.
- output_file_path – location and filename where to store the file. Default: './images_tags.csv'
- append_path – set to True if the absolute path to images is required. Default: True
Returns
string, path to the generated CSV tags file. Format: 'file_name', 'tag'
Return type
output_file_path
get_annotation_info¶
Returns current annotations for the image
documentation
get_annotation_info(dataset_id: int, annotation_set_id: int, image_id: int)
Parameters
- dataset_id – dataset id
- annotation_set_id – annotation set id
- image_id – image id
Returns
annotations info - list of annotation objects or classes
get_annotation_set¶
Retrieves annotation set
documentation
get_annotation_set(annotation_set_id: int)
Parameters
annotation_set_id – annotation set id
Returns
remo.AnnotationSet
get_dataset¶
Retrieves a dataset with the given dataset id.
documentation
get_dataset(dataset_id: int)
Parameters
dataset_id – dataset id
Returns
remo.Dataset
get_image¶
Retrieves image by a given image id
documentation
get_image(image_id: int)
Parameters
image_id – image id
Returns
remo.Image
get_image_content¶
Gets image file content by url
documentation
get_image_content(url: str)
Parameters
url – image url
Returns
image binary data
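A small usage sketch that writes the returned binary data to disk; the image url and output filename are placeholder values:
content = remo.get_image_content('http://localhost:8123/media/image_1.jpg')  # placeholder url
with open('image_copy.jpg', 'wb') as f:
    f.write(content)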
list_annotation_set_classes¶
Lists classes within the annotation set
documentation
list_annotation_set_classes(annotation_set_id: int)
Parameters
annotation_set_id – annotation set id
Returns
list of classes
list_annotation_sets¶
Returns a list containing all the AnnotationSets of a given dataset
documentation
list_annotation_sets(dataset_id: int)
Parameters
dataset_id – dataset id
Returns
List[remo.AnnotationSet]
list_annotations¶
Returns all annotations for a given annotation set
documentation
list_annotations(dataset_id: int, annotation_set_id: int)
Parameters
- dataset_id – dataset id
- annotation_set_id – annotation set id
Returns
List[remo.Annotation]
list_dataset_images¶
Returns a list of images within the dataset with the given dataset_id
documentation
list_dataset_images(dataset_id: int, limit: int = None, offset: int = None)
Parameters
- dataset_id – dataset id
- limit – limits the number of returned images
- offset – specifies the offset
Returns
List[remo.Image]
list_datasets¶
Lists the available datasets
documentation
list_datasets()
Returns
List[remo.Dataset]
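A quick sketch that iterates over all datasets and prints the annotation sets of each one; it assumes remo.Dataset exposes an id attribute:
for dataset in remo.list_datasets():
    print(dataset)
    for annotation_set in remo.list_annotation_sets(dataset.id):  # assumes Dataset has an id attribute
        print(annotation_set)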
list_image_annotations¶
Returns annotations for a given image
documentation
list_image_annotations(dataset_id: int, annotation_set_id: int, image_id: int)
Parameters
- dataset_id – dataset id
- annotation_set_id – annotation set id
- image_id – image id
Returns
List[remo.Annotation]
open_ui¶
Opens the main page of Remo
documentation
open_ui()
search_images¶
Searches images by classes and tags
Examples:
remo.search_images(dataset_id=1, classes=["dog", "person"])
remo.search_images(dataset_id=1, image_name_contains="pic2")
documentation
search_images(dataset_id: int, annotation_sets_id: int = None, classes: str = None, classes_not: str = None, tags: str = None, tags_not: str = None, image_name_contains: str = None, limit: int = None)
Parameters
- dataset_id – the ID of the dataset to search into
- annotation_sets_id – the annotation set ID(s) to search into (can be multiple, e.g. [1, 2]). There is no need to specify it if the dataset has only one annotation set
- classes – string or list of strings - search for images which have objects of all the given classes
- classes_not – string or list of strings - search for images excluding those that have objects of all the given classes
- tags – string or list of strings - search for images having all the given tags
- tags_not – string or list of strings - search for images excluding those that have all the given tags
- image_name_contains – search for images whose name contains the given string
- limit – limits the number of search results (by default, all results are returned)
Returns
List[remo.AnnotatedImage]
set_public_url¶
documentation
set_public_url(public_url: str)
set_viewer¶
Allows choosing one of the available viewers
documentation
set_viewer(viewer: str)
Parameters
viewer – choose between the 'browser', 'electron' and 'jupyter' viewers
view_annotate_image¶
Opens browser on the annotation tool for the given image
documentation
view_annotate_image(annotation_set_id: int, image_id: int)
Parameters
- annotation_set_id – annotation set id
- image_id – image id
view_annotation_stats¶
Opens browser in annotation set insights page
documentation
view_annotation_stats(annotation_set_id: int)
Parameters
annotation_set_id – annotation set id
view_annotation_tool¶
Opens browser in annotation view for the given annotation set
documentation
view_annotation_tool(id: int)
Parameters
id – annotation set id
view_dataset¶
Opens browser for the given dataset
documentation
view_dataset(id: int)
Parameters
id – dataset id
view_image¶
Opens browser on the image view for the given image
documentation
view_image(image_id: int, dataset_id: int)
Parameters
- image_id – image id
- dataset_id – dataset id
view_search¶
Opens browser in search page
documentation
view_search()