
Working with training projects

Start here if you want to learn how to run a successful annotator training project. If you don't have a training project yet, head over to creating a training project to get started.

Roles and permissions

Permission | Admin | Team Manager | Annotator
--- | --- | --- | ---
View benchmark project source | X | |
Edit benchmark scoring function | X | |
Add annotation instructions | X | |
Delete | X | |
Invite team members | X | X |
Manage team permissions | X | |
Manage admins | X | |
Annotate tasks in the task management system | | | X
Control assignments & status in the task management system | X | X |

How to run a training project

After you've created a training project, training normally proceeds through the lifecycle steps below. We're actively expanding the documentation for our annotator training module, so please reach out to us at support@encord.com with any unanswered questions.

OK, let's get started!

1. Onboard your annotators

You can add annotators during the creation phase, or by going to 'Settings > Team' and inviting new annotators. Remember that, unlike annotation projects where each piece of data is only seen by one annotator at a time, training projects score every annotator against the same set of benchmark tasks. A copy of each benchmark task is therefore added to the project for each annotator you add.

You can confirm that annotators and tasks are ready to go by checking the summary screen. In this case, our source project had 4 tasks and we have 4 annotators assigned, so we should expect a total of 16 tasks.
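As a quick sanity check, the expected task count is simply the number of benchmark tasks multiplied by the number of annotators. A minimal sketch, using the numbers from the example above:

```python
# Illustrative only: a training project creates one copy of every
# benchmark task per annotator, so the expected total is a simple product.
benchmark_tasks = 4   # tasks in the source benchmark project
annotators = 4        # annotators invited to the training project

expected_total_tasks = benchmark_tasks * annotators
print(expected_total_tasks)  # 16 -- the number shown on the summary screen
```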

note

Training projects exist to train annotators. Tasks are therefore not created for admins assigned to the project, and administrators cannot access annotator tasks via the 'Labels > Queue' tab. This prevents administrators from accidentally completing training tasks meant for annotators. Administrators can still confirm annotator submissions using the 'Activity' and 'Data' tabs on the labels page as needed.

Once you've prepared the project with your intended team of trainee annotators, send the project URL to each team member so they can join and start training.

2. Annotators proceed through the benchmark tasks

Annotators can access the training project at the URL you share with them. They see a simplified interface which shows only their own tasks on both the summary and labels queue pages. Annotators can start their evaluation tasks by clicking the 'Start labelling' button in the upper right, or by clicking 'Initiate' next to any given labelling task.


Annotation in a training project works the same as in an annotation project. Point your team to the label editor documentation to get them started. Once an annotator has submitted a task, it cannot be re-opened. We're working on adding greater flexibility to the benchmark task lifecycle, so please let us know at product@encord.com if you have any related requests or interests!

3. Evaluate annotator performance

Submitted tasks are automatically run through the benchmark function, and the annotator's performance on the task is computed. Project administrators can confirm annotator progress and performance on the summary page, as shown below. Use the overview tab for quick insights into overall annotator performance, and use the 'Annotator submissions' tab to confirm individual task submissions on a per-label basis.
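To make the idea concrete, here is a purely illustrative sketch of scoring a submission against benchmark labels. This is not Encord's benchmark function (see the note further down); it is a generic intersection-over-union (IoU) agreement score over hypothetical bounding boxes:

```python
# Purely illustrative -- NOT Encord's actual benchmark function.
# Each submitted box is compared to the benchmark box for the same object,
# and agreement is measured with intersection-over-union (IoU).

def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical benchmark labels and an annotator's submission, keyed by object id.
benchmark = {"car_1": (10, 10, 110, 60), "person_1": (200, 40, 240, 160)}
submission = {"car_1": (12, 14, 108, 58), "person_1": (190, 45, 235, 150)}

per_object = {obj: iou(benchmark[obj], submission[obj]) for obj in benchmark}
task_score = sum(per_object.values()) / len(per_object)
print(per_object, round(task_score, 3))
```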

At this stage, communicate with your annotators in whichever manner is easiest for you and your team. Use the CSV download to export the entire set of results and share it with relevant team members. Alternatively, it may make more sense to schedule a live review, using the 'View' functionality on the 'Annotator submissions' tab to compare the benchmark labels with a given annotator's submission in the label editor.

For projects with hundreds of evaluation labels per annotator, where an 'evaluation label' is defined as one annotation per frame, we limit the number of evaluation labels displayed in the dashboard for performance reasons. The labels displayed are a random sample of the submitted labels. You can always access the full set of evaluation labels by downloading the CSV. Larger downloads may take significant time, and you may be prompted to run them in a separate tab so the download can proceed while you continue working in the current tab.
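If you prefer to analyse the exported results outside the dashboard, a short script over the CSV can summarise per-annotator performance. A minimal sketch, assuming the export contains 'annotator' and 'score' columns and is saved as 'training_project_results.csv' (check your actual export's header and adjust accordingly):

```python
# Sketch for summarising a downloaded results CSV with pandas.
# The file name and column names ("annotator", "score") are assumptions --
# inspect the header of your actual export and adjust as needed.
import pandas as pd

results = pd.read_csv("training_project_results.csv")

# Aggregate per-annotator performance from the per-label rows.
summary = (
    results.groupby("annotator")["score"]
    .agg(["count", "mean"])
    .rename(columns={"count": "labels", "mean": "avg_score"})
    .sort_values("avg_score", ascending=False)
)
print(summary)
```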

note

Some teams may need further insight into the details of the benchmark function in order to devise an accurate scoring setup. However, detailed knowledge of the benchmark function may unduly influence trainees' behavior. Please contact Encord directly at support@encord.com and we'll be more than happy to provide further material on the benchmark process to your administration team. This allows us to empower our customers while protecting the integrity of the benchmarking process.

4. Adjust the benchmark function and re-calculate scores

If, after evaluating annotator performance, you feel that the annotator score distribution doesn't correctly reflect the skill displayed, or doesn't properly reward and penalize annotators for the types of annotations they made correctly or incorrectly, you can always adjust the benchmark function and recalculate.

Go to the 'Settings' page and find the section marked 'Benchmark scoring function'. Press the 'Edit' button to enable the function's weight editor, and change the values to match your new plan. Finally, press 'Save' in the upper right to persist the new function configuration.

To see the changes applied to previous submissions, return to the 'Summary' page and press the 'Re-calculate scores' button. If a given annotator's annotations were affected by the weighting change, the 'Benchmark results' column updates to reflect their new score under the new weights. In this example, the left and right show an annotator's score before and after we changed the scoring function (as above) and pressed the 'Re-calculate scores' button. The change in score is noticeable, but doesn't move the annotator's performance from unskilled to skilled; this annotator should likely undergo another round of training.
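To illustrate why re-calculation changes scores without anyone re-annotating, here is a hypothetical sketch: the per-class agreement values stay fixed and only the weights change. The classes, agreement values, and weights below are made up for illustration and are not taken from Encord's scoring function:

```python
# Hypothetical sketch: re-weighting shifts the overall score even though
# the underlying per-class agreement values are unchanged.
agreement = {"car": 0.90, "person": 0.40, "traffic_light": 0.75}

old_weights = {"car": 1.0, "person": 1.0, "traffic_light": 1.0}
new_weights = {"car": 1.0, "person": 3.0, "traffic_light": 1.0}  # weight weak class more heavily

def weighted_score(agreement, weights):
    total_weight = sum(weights.values())
    return sum(agreement[c] * weights[c] for c in agreement) / total_weight

print(round(weighted_score(agreement, old_weights), 3))  # ~0.683 under the old weights
print(round(weighted_score(agreement, new_weights), 3))  # ~0.570 after re-calculation
```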

5. Repeat until finished

You can continue to adjust scores even after all the annotators have finished all their tasks, until you feel the score distribution matches your intent.

You can also add new annotators to existing projects, as you did in step #1. However, when adding a new group or a significant number of annotators, we recommend creating a separate new training project; this makes it easier to manage the new cohort of annotators all at once.