Aggregation of results

If tasks were issued with an overlap of 2 or higher, run aggregation of results. Toloka will process all the performers' responses for each task and output the resulting response along with its confidence level.
Note. If you run the pool with the assignment review, make sure that all responses are accepted.
  1. Open the pool.
  2. Click the arrow next to the Download results button.
  3. Choose the aggregation method: the Dawid-Skene model or aggregation by skill. Both methods are described below.

Aggregation takes from several minutes to several hours. Track the progress on the Operations page. When aggregation is complete, download the TSV file with the results.
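
You can also start aggregation through the API. Below is a rough sketch using the toloka-kit Python library; the token, pool ID, and field name are placeholders, and the parameter names follow toloka-kit's documentation.

import toloka.client as toloka

toloka_client = toloka.TolokaClient('YOUR_OAUTH_TOKEN', 'PRODUCTION')

# Start Dawid-Skene aggregation for the pool's 'result' output field.
operation = toloka_client.aggregate_solutions_by_pool(
    type=toloka.aggregation.AggregatedSolutionType.DAWID_SKENE,
    pool_id='12345678',
    fields=[toloka.aggregation.PoolAggregatedSolutionRequest.Field(name='result')],
)

# Aggregation runs as an asynchronous operation: wait for it, then read the results.
operation = toloka_client.wait_operation(operation)
for solution in toloka_client.get_aggregated_solutions(operation.id):
    print(solution.task_id, solution.output_values)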

To be notified when results aggregation is completed, set up notifications:

  1. Log in to your account.
  2. Go to Profile → Notifications → Pool or aggregation completed.
  3. Choose the notification method:
    • Email: Messages will be sent to your email address.
    • Messages: Notifications will be displayed under Messages in your account. Besides you, they can be seen by users who have shared access to your account.
    • Browser: Notifications will be sent to the devices that you logged in to your account from.

Dawid-Skene aggregation model

Analyzes all performers' responses and returns the final response and its statistical significance.

The Dawid-Skene aggregation model automatically estimates |L|² parameters for each performer, where |L| is the number of different response values being aggregated. These parameters form the performer's error matrix.

Note that these parameters are determined automatically and are only used in calculations.

Important.

Because the Dawid-Skene method estimates |L|² parameters for each performer, we don't recommend using it when a performer labels fewer than |L|² tasks: in this case, the quality of aggregation may be poor. For example, with 10 possible response values, each performer should label at least 100 tasks.

The result of aggregation is a TSV file with responses. The CONFIDENCE:<output field name> column indicates the response significance as a percentage.
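
For example, for a project with an output field named result, a fragment of the file might look like this (the input column and the exact values are hypothetical):

INPUT:image	OUTPUT:result	CONFIDENCE:result
https://example.com/image1.jpg	OK	99.50
https://example.com/image2.jpg	BAD	84.00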

Benefits
  • Tasks can be uploaded any way you want.
Features
  • The Dawid-Skene aggregation model works with control and training tasks as well as with main tasks. There is a possibility that the OUTPUT:result field for a control task in the TSV file won't match the known correct response for this task (GOLDEN:result).

  • If your project contains an output data field marked with "required": false that performers don't fill in, this field won't be included in aggregation.

    For example, you have 1000 tasks; in 999 of them, performers didn't fill in the label field, and in one task a performer set label=x. After aggregation, this field will have CONFIDENCE = 100%, because only one task out of a thousand meets the aggregation conditions.

How it's calculated

The Dawid-Skene method builds an error matrix for each performer and estimates the popularity of each response. It uses the expectation-maximization (EM) algorithm.

The idea is to find the aggregated responses, performer error matrices, and response popularities that best explain all the collected responses. The process runs in several stages that repeat until the estimates stop changing: initially, the majority opinion is taken as the estimate of the correct response for each task; the error matrices and popularities are then computed from these estimates, and the estimates of the correct responses are recomputed from the matrices and popularities.
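
As an illustration of these stages, here is a minimal, self-contained sketch of the Dawid-Skene EM loop in Python. It is not Toloka's production implementation; the data layout and names are illustrative.

def dawid_skene(answers, labels, n_iter=20):
    # answers: dict mapping (task, performer) -> observed label; labels: list of possible labels
    tasks = sorted({t for t, _ in answers})
    performers = sorted({p for _, p in answers})

    # Stage 1: the majority opinion gives the initial estimate of P(correct label | task).
    probs = {t: {l: 0.0 for l in labels} for t in tasks}
    for (t, _), label in answers.items():
        probs[t][label] += 1.0
    for t in tasks:
        total = sum(probs[t].values())
        probs[t] = {l: v / total for l, v in probs[t].items()}

    for _ in range(n_iter):
        # M-step: re-estimate response popularity and each performer's error matrix
        # from the current label estimates (with light additive smoothing).
        popularity = {l: sum(probs[t][l] for t in tasks) / len(tasks) for l in labels}
        errors = {p: {true: {obs: 1e-6 for obs in labels} for true in labels} for p in performers}
        for (t, p), obs in answers.items():
            for true in labels:
                errors[p][true][obs] += probs[t][true]
        for p in performers:
            for true in labels:
                row = sum(errors[p][true].values())
                errors[p][true] = {obs: v / row for obs, v in errors[p][true].items()}

        # E-step: re-estimate P(correct label | task) from the popularities and error matrices.
        for t in tasks:
            post = {}
            for true in labels:
                likelihood = popularity[true]
                for (task, p), obs in answers.items():
                    if task == t:
                        likelihood *= errors[p][true][obs]
                post[true] = likelihood
            norm = sum(post.values())
            probs[t] = {l: v / norm for l, v in post.items()}

    # The aggregated response for each task is the label with the highest probability.
    return {t: max(probs[t], key=probs[t].get) for t in tasks}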

Description of the Dawid-Skene method.

If you want to learn how the Dawid-Skene method is implemented in Toloka, check out the open-source code.
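
Toloka's open-source crowd-kit library, for instance, provides a DawidSkene aggregator. A minimal usage sketch, assuming crowd-kit and pandas are installed and using the task, worker, and label column names from crowd-kit's documentation:

import pandas as pd
from crowdkit.aggregation import DawidSkene

# One row per performer response.
answers = pd.DataFrame(
    [
        ('task1', 'performer1', 'OK'),
        ('task1', 'performer2', 'OK'),
        ('task1', 'performer3', 'BAD'),
        ('task2', 'performer1', 'BAD'),
        ('task2', 'performer2', 'BAD'),
        ('task2', 'performer3', 'BAD'),
    ],
    columns=['task', 'worker', 'label'],
)

aggregated = DawidSkene(n_iter=100).fit_predict(answers)  # aggregated label per task
print(aggregated)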

Note.

Aggregation only includes accepted tasks.

The main requirement for this aggregation concerns the output data fields (an example specification is given after the lists below):

Fields that can be aggregated
  • Strings and numbers with allowed values.

    The allowed value must match the value parameter in the corresponding interface element.

  • Boolean.
  • Integers with minimum and maximum values. The maximum difference between them is 32.

    If there are too many possible responses in the output field, the dynamic overlap mechanism won't be able to aggregate the data.

Fields that can't be aggregated
  • Array.
  • File.
  • Coordinates.
  • JSON object.
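
For example, a string output field that meets these requirements could be described in the project's output data specification like this (the field name result and its values are hypothetical):

"result": {
  "type": "string",
  "required": true,
  "allowed_values": ["OK", "BAD"]
}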

Aggregation by skill

Analyzes responses based on the level of confidence in the performer. The confidence level is determined by the skill you choose. Skills measure the probability of the performer completing the task correctly.

Benefits
  • If your project processes a large amount of data, the aggregation results will be more accurate compared to the Dawid-Skene method.
  • You can choose the output data fields you want to aggregate.
Features

Each performer's skill acts as a “weight”: the higher the skill, the more we trust the performer and the more likely we consider their responses to be correct.

The result of aggregation is a TSV file with responses. The CONFIDENCE:<output field name> column indicates the confidence in the aggregated response. In this case, it shows the probability that the response is correct.

Example

Tasks were labeled by three performers with different “My skill” values: the first performer has a skill of 70, the second has 80, and the third has 90.

All three performers responded to the first task with OK. In this case, we are 100% sure that OK is the correct response.

On the second task, the first and third performers responded with OK, and the second performer responded with BAD. In this case, we'll compare the performers' skills and determine the confidence based on the result.

How it's calculated

Terms:

  • q — a performer's accuracy
  • s — smoothing constant
  • Z — the most popular response
  • ε — the probability that the estimate is correct

A performer's accuracy q is calculated as follows:

q = (k + s) / (n + 2 × s),

where:

k is the number of the performer's correct responses to control tasks;

n is the total number of their responses to control tasks;

s is a smoothing constant (starting from 0.5) that is used if there are not enough responses to control tasks.

If there are several estimates, the most popular response is determined by adding together the accuracies q of the performers who selected each response option. The response with the largest total is considered more correct. Let's call this estimate Z.

Using Bayes' theorem, we calculate the posterior probability ε that the estimate Z is correct.

A uniform distribution of estimates is assumed a priori. For the estimate Z_j, the a priori probability is calculated as

P(Z_j) = 1 / Y,

where:

Y is the number of response options.

Next, we calculate the probability that the estimate Z_j is correct.

If the performer responded Z_j, then the probability of this is equal to the performer's accuracy q. If they responded differently, then the probability of this is:

(1 − q) / (Y − 1),

where:

1 − q is the remaining probability;

Y − 1 is the number of remaining responses.

This ensures that the probability of an error is distributed evenly among the remaining estimates.

We take all the performers' responses and, for example, the option Z_x, and calculate the probability that the performers give exactly these responses, provided that the correct response is Z_x (in the code below, z[x] stands for Z_x and q[i] for the accuracy of the i-th performer):

def z_prob(x):
    # The probability of observing all the performers' responses,
    # assuming that option z[x] is the correct one.
    d = 1.0
    for i, w in enumerate(workers):
        if answers[w] == z[x]:
            d *= q[i]                    # the performer chose the assumed correct option
        else:
            d *= (1 - q[i]) / (Y - 1)    # the error is spread evenly over the other options
    return d

Next, using Bayes' theorem, we calculate the probability ε that the response Z_j is correct:

# The total probability of the observed responses over all answer options,
# each taken with the uniform prior 1 / Y; then the posterior for option z[j].
r = 0.0
for i in range(len(answer_options)):
    r += z_prob(i) * (1 / Y)

eps = z_prob(j) * (1 / Y) / r
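
As a worked sketch of this calculation, take the second task from the example above: performers with skill values 70, 80, and 90 responded OK, BAD, and OK. For simplicity, each performer's accuracy is taken here as the skill value divided by 100; in practice it is the smoothed accuracy q described above.

q = [0.70, 0.80, 0.90]           # the performers' accuracies
answers = ['OK', 'BAD', 'OK']    # their responses to the task
options = ['OK', 'BAD']
Y = len(options)                 # the number of response options

def responses_prob(option):
    # The probability of the observed responses if `option` is the correct one.
    p = 1.0
    for accuracy, answer in zip(q, answers):
        p *= accuracy if answer == option else (1 - accuracy) / (Y - 1)
    return p

total = sum(responses_prob(o) * (1 / Y) for o in options)
confidence_ok = responses_prob('OK') * (1 / Y) / total
print(round(confidence_ok, 2))   # 0.84: the aggregated response OK gets 84% confidence
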
Note.

Aggregation only includes accepted tasks.

Aggregation requirements:

Pool with dynamic overlap

To run aggregation, you must correctly set up dynamic overlap. To do this:

  1. Select a skill. We recommend selecting a skill calculated as the percentage of correct responses in control tasks. This will give you the most accurate aggregation results.
  2. Select the output data fields.
    Output data fields that can be aggregated:
    • Strings and numbers with allowed values.

      The allowed value must match the value parameter in the corresponding interface element.

    • Boolean.
    • Integers with minimum and maximum values. The maximum difference between them is 32.

      If there are too many possible responses in the output field, the dynamic overlap mechanism won't be able to aggregate the data.

Pools without dynamic overlap

You can run aggregation by skill if the pool meets the following requirements:

  1. You set a skill that defines the level of confidence in the performer's responses. We recommend using a skill calculated as the percentage of correct responses in control tasks.
  2. The output data fields have allowed values.
    Output data fields that can be aggregated:
    • Strings and numbers with allowed values.

      The allowed value must match the value parameter in the corresponding interface element.

    • Boolean.
    • Integers with minimum and maximum values. The maximum difference between them is 32.

      If there are too many possible responses in the output field, the dynamic overlap mechanism won't be able to aggregate the data.

  3. The tasks were uploaded to the pool with “smart mixing”.
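
When these requirements are met, aggregation by skill can also be started through the API. A rough sketch with toloka-kit, analogous to the earlier one but with the weighted type and the skill that defines confidence in the performers (the IDs are placeholders):

operation = toloka_client.aggregate_solutions_by_pool(
    type=toloka.aggregation.AggregatedSolutionType.WEIGHTED_DYNAMIC_OVERLAP,
    pool_id='12345678',
    answer_weight_skill_id='98765',  # the skill used as the confidence weight
    fields=[toloka.aggregation.PoolAggregatedSolutionRequest.Field(name='result')],
)
operation = toloka_client.wait_operation(operation)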

Troubleshooting

What is the difference between the confidence in the aggregated response in the Dawid-Skene aggregation model and the confidence in aggregation by skill?

Only in the way it's calculated. In both aggregations, confidence means the same thing: the probability that the aggregated response is correct.

Does aggregation use the performer's rating?
No, it doesn't.
How does the Dawid-Skene aggregation model work?
The Dawid-Skene aggregation model analyzes the performers' responses and creates an error matrix for each performer. This lets us evaluate how reliable each performer's responses are in the context of each assignment. Learn more about the model.
Where do I see the aggregation progress?

Click List of Operations on the pool page.

Why might aggregation by performer skill be unavailable?

You cannot aggregate project fields that have no allowed values. Specify the possible values for all the fields of all types.

You can't aggregate by skill. When running via the API, I get the error code ONLY_FOR_POOL_WITH_MIXER. Why?

You need to use smart mixing.