To publish tasks in Toloka, you first create and debug them in the testing environment (the sandbox). If all the settings work correctly, you can move them to the production version of Toloka and start distributing tasks to users.
To start tasks and get responses:
Register in the sandbox as a requester and set up tasks:
Try completing your tasks in the sandbox as a user:
Create user accounts (see the instructions in the user documentation).
Log in to the sandbox with the requester username and click Add trusted users on the Users page.
Click Add user and enter the usernames of the accounts you created.
Only trusted users can access your tasks.
Check the tasks in the sandbox with the username of a trusted user. Make sure that the buttons and response validation are set up correctly, and assess how long it takes to complete the tasks. Check how user selection and quality control work.
Move the project, pool, and settings to the production version of Toloka.
Add the OAuth token from the production version on the sandbox profile page (the External Services Integration tab).
Go to the project page in the sandbox and click the Edit → Export button.
Select pools for export.
When you select a pool with main tasks, the pool with training tasks is exported automatically (if they are linked in the settings).
Upload the tasks as a TSV file to the production version of Toloka and start the pool.
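The task file is a plain tab-separated table whose header names the input fields. As a minimal sketch (the column name `INPUT:image_url` is illustrative; the actual columns depend on the input fields defined in your project):

```python
import csv

# Hypothetical tasks: each row fills one input field of the project.
rows = [
    {"INPUT:image_url": "https://example.com/img1.jpg"},
    {"INPUT:image_url": "https://example.com/img2.jpg"},
]

with open("tasks.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["INPUT:image_url"], delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```

Generating the file programmatically like this avoids the stray quoting and delimiter errors that spreadsheet exports sometimes introduce.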
Before starting all the tasks, send out a small trial pool for completion. The trial pool can contain from 10 to 100 tasks. Start the pool, get responses, and analyze them. If the responses aren't satisfactory, try to figure out why. Perhaps you need to edit the instructions, add training tasks, or change something else.
After starting the tasks, watch for incoming messages from users during the first hour. Users normally react quickly to problems and ask questions if something isn't right.
When the tasks are completed, download the received responses.
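The downloaded responses are also a TSV file, with one row per completed assignment. A small sketch for grouping workers' answers by task before reviewing them (the column names passed in are illustrative and depend on your project's input and output fields):

```python
import csv
from collections import defaultdict

def group_responses(path, input_col, output_col):
    """Group workers' answers by task from a downloaded TSV of assignments."""
    grouped = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            grouped[row[input_col]].append(row[output_col])
    return dict(grouped)
```

For example, `group_responses("assignments.tsv", "INPUT:image_url", "OUTPUT:result")` returns a mapping from each task to the list of answers it received.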
Check the responses if you set up offline acceptance when creating the pool.
To get more accurate responses, think through the structure and wording of the tasks and the settings for responses:
Break a complex task into several simpler ones. This helps reduce the number of errors, since users do better with short, uniform tasks. For example, suppose you need to collect product information: the name, manufacturer, price, photo, and description. It's best to create a separate project for each of these items.
When writing the instructions for completing a task:
Explain all the possible situations the user might encounter.
Use concise wording and simple sentences.
Give step-by-step instructions (if possible). Use lists and formatting to make the text easier to read.
Provide examples. Add images to them (if the tasks have them). For example, if a task requires evaluating the quality of an image, put both high-quality and low-quality images in the instructions. If the task is to identify the type of clothing in a picture, describe the possible options in the instructions and illustrate the text with images.
If you are using offline acceptance, the instructions should briefly and clearly list the acceptance criteria.
Haste lowers the quality of responses, so you should allow a little extra time to complete the task.
Ask users to go through the training tasks before beginning the pool tasks. Include disputable cases in the training tasks.
If the task contains a simple question with a multiple-choice response and is completed fairly quickly (1-10 minutes), it is best to run the task with overlap and use majority-vote checking, a golden set, a captcha, and a restriction on overly fast responses.
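Majority-vote checking itself is simple to reason about. A minimal sketch of aggregating the overlapping responses for one task, where a tie means the task needs extra review (the labels are hypothetical):

```python
from collections import Counter

def majority_vote(responses):
    """Return the most common label among overlapping responses,
    or None if the top labels tie (send such tasks for extra review)."""
    counts = Counter(responses).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

# Hypothetical answers for one task completed with an overlap of 3:
print(majority_vote(["cat", "cat", "dog"]))  # cat
print(majority_vote(["cat", "dog"]))         # None (tie)
```

An odd overlap (3 or 5) avoids most ties; higher overlap costs more but makes the aggregated answer more reliable.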
If the task doesn't have clearly defined response options (for example, it requires creating or translating a text or transcribing an audio recording), you can use the following verification methods: