
Task manager (Queue)

The task manager handles all annotation and review tasks for labeling queues and workflows. It is built to optimize labeling and quality control workflows and is designed to scale to thousands of annotators, reviewers, team managers, and administrators working concurrently on the same project. The task manager is enabled by default but can be switched on and off under the 'Options' tab in project settings.

Annotation and review tasks are distributed automatically using the first-in, first-out (FIFO) method: tasks that have been in the queue the longest are served first. Annotation tasks are generated and added to the label queue when datasets are attached to a project and when new data is added to attached datasets; review tasks are generated and added to the review queue once an annotator submits an annotation task. Task generation and distribution are described in more detail below.

Team managers and administrators can also assign tasks explicitly to individual annotators and reviewers. Once an annotation or review task is distributed to an annotator or reviewer, it is reserved by that individual, prohibiting other team members from accessing that task. Both annotation and review tasks are accessible in the 'Queue' pane in the 'Labels' tab.


Task generation

Annotation tasks are generated and added to the label queue when a dataset or a set of datasets is attached to a project and when a new data asset is added to attached datasets. Review tasks are generated and added to the review queue once an annotator submits an annotation task. Detaching a dataset removes any associated annotation and review tasks, so exercise caution before doing so.

By default, each data asset is labeled once, and each label submitted for review is reviewed once. You can create additional review tasks by clicking the + Add reviews button and following the steps in the window. To send a data asset back into the queue for further labeling, reopen the submitted annotation task by selecting the relevant assets and clicking the Reopen button.
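
To make the generation rules above concrete, here is a minimal, illustrative sketch in plain Python. The class and method names are invented for this example and are not part of Encord's implementation; the sketch simply mirrors the rules described in this section.

```python
from dataclasses import dataclass, field


@dataclass
class ProjectQueues:
    """Illustrative task generation: one annotation task per data asset,
    and one review task per submitted annotation task (times the
    configured number of reviews)."""

    reviews_per_label: int = 1  # default: each submitted label is reviewed once
    label_queue: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def attach_dataset(self, data_assets):
        # Attaching a dataset generates one annotation task per data asset.
        self.label_queue.extend({"asset": a, "type": "annotation"} for a in data_assets)

    def detach_dataset(self, data_assets):
        # Detaching a dataset removes its annotation and review tasks.
        assets = set(data_assets)
        self.label_queue = [t for t in self.label_queue if t["asset"] not in assets]
        self.review_queue = [t for t in self.review_queue if t["asset"] not in assets]

    def submit_annotation(self, asset):
        # Submitting an annotation task generates the configured review tasks.
        for _ in range(self.reviews_per_label):
            self.review_queue.append({"asset": asset, "type": "review"})


queues = ProjectQueues()
queues.attach_dataset(["image_001.jpg", "image_002.jpg"])
queues.submit_annotation("image_001.jpg")   # one review task is now queued
```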


Task distribution

Annotation and review tasks are distributed automatically using the first-in, first-out (FIFO) method (illustrated below): tasks that have been in the queue the longest are served first. Once an annotator or reviewer clicks on the Start labeling or Start reviewing button, the next available free task in the queue is reserved by that individual, preventing other team members from accessing the task. Once the task is fetched, the annotator or reviewer is taken to the label editor to complete the task.

Project administrators and team managers can override the automated distribution of tasks by explicitly assigning tasks to individuals in the 'Queue' pane in the 'Labels' tab. Assignments can be done on a task-by-task basis or in bulk by selecting the relevant tasks and clicking the Assign button.

Tasks can be released by pressing the icon next to the task and clicking the Release task button. Reserved tasks do not expire and remain assigned to an individual until they are submitted, released, or skipped.
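
The distribution behaviour described in this section amounts to a FIFO queue with per-user reservations. The following is a minimal sketch under that reading, in plain Python; the class, its methods, and the re-entry position of released tasks are assumptions for illustration, not Encord's implementation.

```python
from collections import deque


class TaskQueue:
    """Illustrative FIFO distribution with reservations: the oldest
    unreserved task is served first, and a reserved task stays with its
    user until it is submitted, released, or skipped (no expiry)."""

    def __init__(self, tasks):
        self.pending = deque(tasks)   # oldest task on the left
        self.reserved = {}            # task -> user

    def start_task(self, user):
        """What happens when a user clicks Start labeling / Start reviewing."""
        if not self.pending:
            return None
        task = self.pending.popleft()  # first in, first out
        self.reserved[task] = user     # other team members cannot access it
        return task

    def assign(self, task, user):
        """Explicit assignment by a team manager or administrator."""
        if task in self.pending:
            self.pending.remove(task)
        self.reserved[task] = user

    def release(self, task):
        """Releasing puts the task back into the queue. Re-entering at the
        front is an assumption of this sketch, not documented behaviour."""
        self.reserved.pop(task, None)
        self.pending.appendleft(task)


queue = TaskQueue(["task_001", "task_002", "task_003"])
print(queue.start_task("annotator_a"))   # 'task_001', the oldest task
```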

tip

Annotation tasks can be submitted programmatically via our API or using Encord's Python SDK.
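
For example, a submission script using Encord's Python SDK might look roughly like the sketch below. The key path and project hash are placeholders, and the exact call used to submit labels for review can differ between SDK versions and project types, so treat that part as an assumption and check the SDK reference.

```python
from pathlib import Path

from encord import EncordUserClient

# Authenticate with an SSH private key registered with your Encord account.
# The key path and project hash below are placeholders.
user_client = EncordUserClient.create_with_ssh_private_key(
    Path("~/.ssh/encord_key").expanduser().read_text()
)
project = user_client.get_project("<project_hash>")

# Fetch a label row, add or edit labels, then save it.
label_row = project.list_label_rows_v2()[0]
label_row.initialise_labels()
# ... create or modify object / classification instances here ...
label_row.save()

# Submitting for review: the exact call depends on your SDK version and
# project type, so verify it against the SDK reference before relying on it.
project.submit_label_row_for_review(label_row.label_hash)
```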


Task completion

An annotation task is completed once all outstanding labels subject to review have been reviewed. Completed annotation tasks and annotation tasks currently in the review stage are visible in the 'Activity' pane in the 'Labels' tab.

Task status

As tasks move through the Task Management System, their status evolves from 'Queued' for annotation to 'In review' and then 'Complete'. If labels are rejected or the task is otherwise judged to need further annotation work, it is marked as 'Returned'. The most comprehensive view of task statuses is available to project administrators and team managers in a project's labels dashboard under the data tab.
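
The lifecycle above can be summarized as a small state machine. The sketch below is illustrative only: it uses the status names from this page, and the transition table is an interpretation of the description rather than Encord's implementation.

```python
from enum import Enum


class TaskStatus(Enum):
    QUEUED = "Queued"        # waiting for annotation
    IN_REVIEW = "In review"  # submitted; labels are under review
    RETURNED = "Returned"    # rejected or judged to need further annotation work
    COMPLETE = "Complete"    # all outstanding reviews finished


# Transitions implied by the description above (illustrative only).
TRANSITIONS = {
    TaskStatus.QUEUED: {TaskStatus.IN_REVIEW},                         # annotator submits
    TaskStatus.IN_REVIEW: {TaskStatus.COMPLETE, TaskStatus.RETURNED},  # reviews pass or fail
    TaskStatus.RETURNED: {TaskStatus.IN_REVIEW},                       # annotator resubmits
    TaskStatus.COMPLETE: set(),
}
```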


Annotation

Annotation tasks are completed in the label editor. Hit the Submit button once labeling work is complete and you are confident that the quality and coverage of the produced labels meet the accuracy threshold for the project. Depending on the project's quality configuration, the submitted labels may then be selected for review.

Annotators can skip tasks by clicking the Skip button. The next available task is automatically distributed and assigned if a task is skipped.

caution

To prevent annotator work from being overwritten, it is critical that all annotations are made via the Task Management System's 'Queue' pane, and that only the person assigned to a task makes annotations at any given time.


Review

Review tasks are completed in a purpose-built user interface in the label editor.

Review mode components:

  • A. Single label review toggle
  • B. Edit labels
  • C. Pending reviews pane
  • D. Completed reviews pane
  • E. Reject and Approve buttons
  • F. Approve and Reject all in frame buttons

note

All labels are reviewed on an instance level. This means that if an instance is rejected on one frame, it is rejected across all frames. This includes using the Approve all in frame and the Reject all in frame buttons.
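
In data-model terms, the note above means that a review decision attaches to the instance rather than to individual frames. A minimal sketch of that idea, with invented field names:

```python
# An object instance that appears on several frames of a video.
instance = {
    "instance_id": "pedestrian_01",
    "frames": [12, 13, 14, 15],
    "review_status": "pending",
}


def reject_instance(inst):
    # Rejecting the instance on any one frame rejects it on every frame it
    # appears in; there is no per-frame review status for an instance.
    inst["review_status"] = "rejected"


reject_instance(instance)                        # reviewer rejects it on frame 13
assert instance["review_status"] == "rejected"   # applies to frames 12-15 alike
```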

'Pending' and 'Completed' review panes

All labels for review for a particular data asset assigned to the reviewer are automatically loaded into the 'Pending reviews' pane. Completed reviews are displayed in the 'Completed reviews' pane. You can click on specific objects to highlight them. Labels can be selected and then approved or rejected for a given instance or in bulk using the Reject and Approve buttons or the matching hotkeys, b for reject and n for approve.

'Single label review' toggle

You can enter the 'Single label review' mode by toggling the switch at the top. The single label review mode automatically highlights and hides other objects, allowing you to review and approve or reject a single label at a time and quickly browse through individual labels using the Up and Down keys on your keyboard.

note

The reviewer is automatically taken to the next set of requested label reviews once all labels in a particular review task have been reviewed.

Edit labels

Reviewers can conveniently edit labels and make small adjustments without needing to return the entire set of labels to the annotator. Press the Edit labels button and make any necessary changes before switching back to review mode. Currently, only a subset of label edit operations is supported:

  • objects: moving the object or individual vertices
  • classifications: changing the classification value
  • objects and classifications: changing any nested classifications

Approve/Reject all in frame buttons

In addition to being able to review all labels for a given instance, you can review labels grouped by frame as well. For review workflows that focus on progressing through video by frame rather than by instance, use the Approve all in frame and Reject all in frame buttons. Of course, you should be sure you want to apply that judgement to all labels in a given frame before using this feature!


Rejected labels

If a reviewer rejects a label during the review stage, it will be marked as Returned in the 'Queue' pane in the 'Labels' tab. By default, rejected annotation tasks are returned and assigned to the queue of the person who submitted the task.

Returned tasks are resolved in a purpose-built user interface in the label editor. Click the icon on the right-hand side of the screen to open the drawer containing rejected labels. Once the reviewer comments have been addressed, click the icon to mark the label as resolved.

Annotation tasks cannot be resubmitted until all issues have been marked as resolved. Once a task is resubmitted, the labels marked as resolved are sent back for an additional review. There is no limit on how many times a label can be rejected and sent back for correction.

Returned task label editor view
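
The resubmission rule can be expressed as a simple check. The sketch below is illustrative only; the data structure and field names are assumptions, not Encord's implementation.

```python
def can_resubmit(rejected_labels):
    """A returned task can only be resubmitted once every reviewer
    comment has been marked as resolved."""
    return all(label["resolved"] for label in rejected_labels)


task_issues = [
    {"label": "car_03", "comment": "Tighten the bounding box", "resolved": True},
    {"label": "sign_01", "comment": "Wrong class selected", "resolved": False},
]

print(can_resubmit(task_issues))   # False until 'sign_01' is marked as resolved
```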

Missing labels

If a reviewer determines that a label is missing entirely, they can use the report missing labels feature to indicate labels are missing in a given frame or image. Missing label reports will be sent back to the annotator via the same queue as rejected labels.

Submit missing label report

Quality Assurance

Automated Quality Assurance

Automated quality assurance reviews annotation tasks by comparing labels against a benchmark manual QA project which acts as a 'gold standard' - the 'benchmark project source'. This process completely removes the need for reviewers and improves the speed at which projects with the same ontology as the benchmark can be reviewed.

note

Automated QA projects need to have the same ontology as their benchmark project source.

  • The Benchmark project source allows you to see which project is being used as the gold standard benchmark against which QA is done.

  • Click Edit next to the Benchmark scoring function to adjust the relative weights of the evaluation metrics used to score labels (a short scoring sketch follows this list). This is only available for dynamic scoring functions.

    • Intersection over Union (IoU) is an evaluation metric that assesses the accuracy of labels compared to the ground truth (gold standard). If labels fully overlap with those in the ground truth, full points are awarded; if there is no overlap between a label and the ground truth labels, no points are awarded. In the example below, the scale ranges from 0 to 200.

    • Category is an evaluation metric based on correctly identifying the ontology category. In the example above, correctly identifying an 'Apple' awards 100 points, while correctly identifying the nested classifications 'Colour' and 'Size' awards 50 points each. In total, the 'Category' metric ranges from 0 to 200 points.

  • A benchmark QA walkthrough can be found here.
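
As a rough illustration of how the two metrics described above could combine, the sketch below scores a single label against its gold-standard counterpart on the 0 to 200 scales from the apple example. The IoU helper, point values, and data structures are assumptions for illustration and do not represent Encord's actual scoring function.

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def benchmark_score(label, gold):
    # IoU metric: full overlap earns the full 200 points, no overlap earns 0.
    iou_points = 200 * iou(label["box"], gold["box"])

    # Category metric: 100 points for the correct top-level class, plus
    # 50 points for each correctly answered nested classification
    # ('Colour' and 'Size' in the apple example above), up to 200 in total.
    category_points = 100 if label["class"] == gold["class"] else 0
    for attr in ("Colour", "Size"):
        if label["attributes"].get(attr) == gold["attributes"].get(attr):
            category_points += 50

    return iou_points + category_points


gold = {"box": (10, 10, 60, 60), "class": "Apple",
        "attributes": {"Colour": "Red", "Size": "Small"}}
label = {"box": (12, 12, 58, 62), "class": "Apple",
         "attributes": {"Colour": "Red", "Size": "Large"}}

print(benchmark_score(label, gold))   # IoU points plus 150 category points
```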


Manual Quality Assurance

Manual quality assurance relies on reviewers manually reviewing annotation tasks.

The task manager allows you to set parameters for manual quality control workflows in the 'Settings' tab of your project:

  • The percentage of labels that are to be manually reviewed.
  • Rules for distribution of review tasks.
  • Common rejection reasons that can be used to identify and systematize errors in your labels.
  • Reviewer to class and annotator mapping (e.g. label X with class Y should always be reviewed by reviewer Z).
  • Expert review for tasks that are rejected after a specific number of review cycles.

  • A. Sampling rate
  • B. Multi review assignment
  • C. Default rejection reasons
  • D. Reviewer mapping
  • E. Expert review

Sampling rate

Project administrators can dynamically change the sampling rate applied to submitted annotation tasks. The sampling rate determines the proportion of the submitted labels that a reviewer should review. This can be modified with the slider.

Sampling rates can also be configured by annotation type and annotator (e.g. class Y should have a sampling rate of 50%, class Z should have a sampling rate of 80%, annotator A should have a sampling rate of 70%, annotator B should have a sampling rate of 95%) by clicking the Configure button (this feature is only available to paying users).
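
A hedged sketch of how a per-label sampling decision could work is shown below. The lookup precedence between class, annotator, and project-wide rates is not specified on this page, so the order used here is an assumption, as are the rate values and names.

```python
import random

PROJECT_SAMPLING_RATE = 0.25            # default: review 25% of submitted labels
CLASS_RATES = {"Y": 0.5, "Z": 0.8}      # per-class overrides (paid feature)
ANNOTATOR_RATES = {"annotator_a": 0.7}  # per-annotator overrides (paid feature)


def needs_review(label_class, annotator):
    # Assumed precedence for this sketch: class override, then annotator
    # override, then the project-wide slider value.
    rate = CLASS_RATES.get(
        label_class,
        ANNOTATOR_RATES.get(annotator, PROJECT_SAMPLING_RATE),
    )
    return random.random() < rate


# Roughly 50% of class-"Y" labels are sampled into the review queue.
sampled = sum(needs_review("Y", "annotator_a") for _ in range(1000))
print(f"{sampled} of 1000 submissions sampled for review")
```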

Multi review assignment

Annotation tasks with many labels across one data asset might get partitioned into review tasks that are distributed to different reviewers. Enabling multi review assignment means that all review tasks generated through the submission of one annotation task are assigned to the same reviewer.

Default rejection reasons

The default rejection reasons setting allows an admin to create default responses a reviewer can select when rejecting annotation tasks. Pressing the + New button and entering a response saves it for future reviews. Setting default rejection reasons can help you identify and systematize errors in your labels.

Reviewer mapping

You can configure rules that automatically assign specific reviewers to classes and annotators (e.g. label X with class Y should always be reviewed by reviewer Z). The setting can be configured by toggling the 'Reviewer mapping enabled' option.

Clicking the Configure button opens up a window where you can assign reviewers to specific annotators or classes. Assigning a reviewer to classes (objects or classifications) can be done under the 'Class mapping' tab, and assigning a reviewer to annotators under the 'Annotator mapping' tab. Any number of reviewers can be assigned to annotators and classes; one of the mapped reviewers is selected for each submitted task.

note

If an annotator is mapped to one or more reviewers and they create labels with specific classes that are also mapped to a reviewer, the class mapping takes precedence over the annotator mapping.
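
A minimal sketch of the precedence rule in the note above, in plain Python. The mapping tables are invented for illustration, and the random choice between mapped reviewers is an assumption about how one of them is selected per task.

```python
import random

CLASS_REVIEWERS = {"Apple": ["reviewer_z"]}                          # class mapping
ANNOTATOR_REVIEWERS = {"annotator_a": ["reviewer_x", "reviewer_y"]}  # annotator mapping


def pick_reviewer(label_class, annotator):
    # Class mapping takes precedence over annotator mapping; if neither
    # applies, the task falls back to normal queue distribution (None here).
    candidates = CLASS_REVIEWERS.get(label_class) or ANNOTATOR_REVIEWERS.get(annotator)
    return random.choice(candidates) if candidates else None


print(pick_reviewer("Apple", "annotator_a"))   # 'reviewer_z' (class mapping wins)
print(pick_reviewer("Pear", "annotator_a"))    # one of reviewer_x / reviewer_y
```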


Expert review

Many industries and domains require years of training or experience to accurately recognize and classify examples, and an expert's time can often be expensive or hard to schedule. In other cases, there may be additional requirements on your data quality assurance processes depending on the regulatory environment. To help customers speed up their data annotation processes in these complex environments, Encord provides an expert review feature that gives designated expert reviewers an additional layer of oversight in the review process.

Expert reviews differ from normal reviews in the following ways:

  • Expert reviews are initiated following a normal review, not direct annotator submission.
    • Therefore, rules for forwarding to expert review do not clash with normal annotator or class reviewer mappings.
    • The sample rate for expert review is configured according to the expert review stage config, not normal annotation sample rates. Sample rates apply to the review judgements indicated in the expert review configuration.
  • Instance annotations rejected by an expert are permanently rejected instead of being returned to the annotator.

An example expert review configuration looks as follows.

Set up an expert review configuration by specifying the following parameters:

  1. After X reviews: Choose X such that after X cycles of submission and rejection by a normal reviewer, all rejected reviews are forwarded to expert review. This may sometimes be known as the review count threshold. Because 2 is the review count threshold in the above sample configuration, all reviews rejected for a second time will be sent to expert review.
  2. Expert reviewers: Choose the pool of possible expert reviewers. There is no requirement to designate a user as an expert reviewer, other than that they have at least reviewer permissions in the project. Users can be made expert reviewers regardless of their placement within normal annotator or class reviewer mappings.
  3. Expert review stages: Stages, or iterations, indicate how review results are forwarded to expert review after each normal review. In the above sample configuration, 10% of all first reviews will be sent to expert review, and 50% of approved second reviews will be sent to expert review.
    • At each stage you configure the review iteration, the sampling rate, and the action to which the sampling rate applies. Note that because all rejected reviews are always forwarded to expert review at the threshold count, you can only choose 'Approved' as the possible extra action when configuring the final stage. A sketch of these forwarding rules follows this list.
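
As referenced above, the following sketch expresses the forwarding rules using the sample values from this walkthrough (a review count threshold of 2, 10% of first reviews, 50% of approved second reviews). The function and data structures are invented for illustration and do not represent Encord's implementation.

```python
import random

REVIEW_COUNT_THRESHOLD = 2          # 'After X reviews' in the configuration
STAGES = {
    # review iteration -> (action to sample, sampling rate)
    1: ("any", 0.10),               # 10% of all first reviews
    2: ("Approved", 0.50),          # 50% of approved second reviews
}


def forward_to_expert_review(review_iteration, judgement):
    # All reviews rejected at the threshold count always go to expert review.
    if judgement == "Rejected" and review_iteration >= REVIEW_COUNT_THRESHOLD:
        return True

    # Otherwise, sample according to the stage configuration.
    stage = STAGES.get(review_iteration)
    if stage is None:
        return False
    action, rate = stage
    if action != "any" and judgement != action:
        return False
    return random.random() < rate


print(forward_to_expert_review(2, "Rejected"))  # always True at the threshold
print(forward_to_expert_review(1, "Approved"))  # True about 10% of the time
```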

The above configuration can be visualized as follows: