Features

Flexible Annotations

Research by definition means experimentation, and with experimentation comes the need for a flexible design, or knobs. Defining a project in AISpotters supports just that: we have you covered while you search for the right method to annotate your data. In AISpotters, you can specify the number of responses needed per data point, and whether the same group of annotators should annotate the entire dataset or a variety of annotators should be used to capture the variance. For the data type, you can request responses to a single data point, or responses to a pairwise comparison, such as whether data1 is better than data2 in certain aspects.
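These knobs could be captured in a small project configuration. Below is a minimal sketch with hypothetical field names chosen for illustration; AISpotters' actual project settings may be named differently:

```python
from dataclasses import dataclass

@dataclass
class ProjectConfig:
    """Hypothetical sketch of the annotation knobs described above."""
    data_type: str      # "single" or "pairwise"
    repetitions: int    # responses collected per data point
    fixed_group: bool   # True: one fixed annotator group covers all data
    num_questions: int  # questions posed for each data point

# Example: pairwise comparison, 5 responses per pair, random annotators
config = ProjectConfig(data_type="pairwise", repetitions=5,
                       fixed_group=False, num_questions=3)
```

Each field maps to one of the knobs discussed in the sections that follow.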

Multiple Group Annotations

Getting the experimentation right

Setting up a project in AISpotters is simple and highly flexible. You can run several trials, each allowing free annotation for 1 hour, while varying the number of repetitions and assigning anywhere from a distinct group of annotators to any number of annotators that meets the repetition requirement. Once you have identified the right recipe for your algorithm or survey, you can copy the project settings to run a full-fledged production project. At AISpotters, we like to think of these flexibilities as knobs.

1. Single Data

This knob indicates that your research problem requires insights from each individual data point; no comparison with other data is needed for the given project. Each data point should be self-sufficient for the human annotators to respond to the posted questions. Examples of such problems include “story completion” in NLP, “fact checking” in audio, and more.

2. Pairwise Data

Some problems are so subjective in nature that responses to a single data point would produce mostly noise due to randomness in the responses, for example “Rate the quality of the input image on a scale of 1-10”. In these scenarios, one needs to compare two data points to answer the question. AISpotters supports such projects: annotators see both data points side by side and respond to the posted question. A few examples of such projects would be “content curation” in video, “image quality comparison” in images, and more.
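A pairwise task could be represented roughly as follows. This is an illustrative sketch only; the keys and option names are hypothetical, not AISpotters' actual data format:

```python
# Hypothetical shape of a pairwise annotation task: the annotator sees
# two items side by side and answers a comparative question.
task = {
    "data1": "image_001.png",
    "data2": "image_002.png",
    "question": "Which image has better quality?",
    "options": ["data1", "data2"],
}

# A response simply names the preferred item.
response = {"task": task, "answer": "data1"}
```

The single-data case is the degenerate version of this, with one item and no comparative options.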

3. Number of Repetitions

In several projects, researchers want to capture the variation in responses, mimicking sampling from the response space. In such cases, it is necessary to obtain multiple responses for each data point. AISpotters allows you to specify the number of repetitions needed, where a repetition refers to one response per data point. For example, when a project with 1000 data points requests 5 repetitions, the user receives 5000 responses in total, with every data point having 5 responses to all the questions posted in that project.
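The arithmetic behind the example above is straightforward; a one-line sketch for clarity:

```python
def total_responses(num_data_points: int, repetitions: int) -> int:
    """Total responses a project yields: one response per data point
    per repetition."""
    return num_data_points * repetitions

# The example from the text: 1000 data points with 5 repetitions.
print(total_responses(1000, 5))  # 5000
```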

4. Group Annotators

In certain cases, these repetitions, or multiple answers, are needed from the same group of individuals, for instance when you want to model the inherent bias of each annotator. Our tool provides an option to specify exactly that requirement. When this setting is applied to a project with 1000 data points and 5 repetitions, the user receives 5000 responses annotated by exactly 5 annotators, each of whom has answered all 1000 data points.
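The fixed-group setting can be sketched as a full cross product of annotators and data points. Annotator IDs here are hypothetical placeholders:

```python
# Sketch of the fixed-group setting: with repetitions equal to the
# group size, every annotator in the group answers every data point.
annotators = ["A1", "A2", "A3", "A4", "A5"]   # hypothetical IDs
data_points = range(1000)

responses = [(a, d) for a in annotators for d in data_points]

assert len(responses) == 5000                  # 1000 data points x 5 repetitions
per_annotator = len(responses) // len(annotators)
assert per_annotator == 1000                   # each annotator covers all data
```

Because every annotator sees the full dataset, per-annotator bias can later be estimated from their complete response history.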

5. Random Annotators

In certain cases, although variation in the responses is needed, it need not come from the same group; in fact, it is often better for it to come from a wider mix of the population. AISpotters allows you to specify that as well, and it is the default setup. Taking the same example, you would still receive 5000 responses, and each data point would have exactly 5 responses, corresponding to the number of repetitions, but some annotators may have annotated all 1000 data points and others fewer.
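The random-annotator setting could be sketched as drawing, for each data point, a fresh sample of annotators from a wider pool. The pool size and IDs below are assumptions for illustration:

```python
import random
from collections import Counter

random.seed(0)
annotators = [f"A{i}" for i in range(20)]   # hypothetical wider pool
repetitions = 5

responses = []
for d in range(1000):
    # Each data point still receives exactly `repetitions` responses,
    # but from 5 distinct annotators drawn from the wider pool.
    for a in random.sample(annotators, repetitions):
        responses.append((a, d))

assert len(responses) == 5000
# Every data point has exactly 5 responses...
per_data = Counter(d for _, d in responses)
assert set(per_data.values()) == {5}
# ...while annotator workloads vary across the pool.
per_annotator = Counter(a for a, _ in responses)
assert len(set(per_annotator.values())) > 1
```

The per-data guarantee is the same as in the fixed-group case; only the per-annotator workload becomes uneven.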

6. Number of Questions

Every project is unique in its own way: the number of insights to be derived from each data point is predominantly project dependent, and AISpotters can cover any project's needs in terms of insights. As projects are billed on the total hours spent in annotation, you have full control over your budget, which you can manage by reducing the number of questions per data point, changing the question types, or reducing the total data count.
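Since billing is based on total annotation hours, a rough budget estimate follows directly from these knobs. A back-of-the-envelope sketch; the seconds-per-question figure is an assumption for illustration, not an AISpotters rate:

```python
def estimated_hours(num_data_points: int, repetitions: int,
                    num_questions: int, seconds_per_question: float) -> float:
    """Rough annotation-hours estimate: every data point is answered
    `repetitions` times, each pass covering all questions."""
    total_answers = num_data_points * repetitions * num_questions
    return total_answers * seconds_per_question / 3600

# e.g. 1000 data points, 5 repetitions, 3 questions, ~20 s per question
print(round(estimated_hours(1000, 5, 3, 20), 1))  # 83.3
```

Halving the question count or the repetitions halves the estimate, which is exactly the budget lever described above.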