Own team, crowdsourcing, or outsourcing: which is better for image annotation?
Balancing data quality, cost optimization, and scalability is one of the main challenges in image annotation for computer vision developers. In this article, we look at how best to strike that balance.
In artificial intelligence teams, working with data takes at least 80% of the time, while only 20% is spent on model architecture. Companies that build and develop products based on artificial intelligence have known this for a long time, and computer vision is no exception. To optimize project cost, release time, or scaling, companies start looking for ways to streamline their work with data. In practice, there are only three options: image annotation by your own team, image annotation outsourcing, and crowdsourcing. Each option has its advantages and disadvantages and suits companies under different conditions and at different stages of development. Let's consider them in detail.
Image annotation process
The image annotation process consists of the following stages:
- Forming requirements for image annotation
- Creating instructions for annotators
- Image annotation
- Quality check
- Repeat steps 2–4 if necessary
During the requirements stage, consultations are held with the neural network development team to determine which type of annotation best suits the chosen problem (bounding box, polygon, line, key points, etc.) and what format the annotations should be delivered in (COCO, Pascal VOC, YOLO, etc.). Requirements for minimum object size, image composition, and other important parameters may also be added.
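To make the difference between these delivery formats concrete, here is a minimal sketch in plain Python, not tied to any particular annotation tool, that expresses the same hypothetical bounding box in the COCO, Pascal VOC, and YOLO conventions. The image size and coordinates are illustrative assumptions, not values from a real dataset.

```python
# One hypothetical "car" box on a 640x480 image, written out in three
# common annotation formats.

img_w, img_h = 640, 480
x_min, y_min, x_max, y_max = 100, 120, 300, 280  # absolute pixel corners

# COCO: [x, y, width, height] in absolute pixels
coco_bbox = [x_min, y_min, x_max - x_min, y_max - y_min]  # [100, 120, 200, 160]

# Pascal VOC: corner coordinates in absolute pixels
voc_bbox = {"xmin": x_min, "ymin": y_min, "xmax": x_max, "ymax": y_max}

# YOLO: class_id x_center y_center width height, all normalized to [0, 1]
yolo_line = "0 {:.6f} {:.6f} {:.6f} {:.6f}".format(
    (x_min + x_max) / 2 / img_w,   # x_center
    (y_min + y_max) / 2 / img_h,   # y_center
    (x_max - x_min) / img_w,       # width
    (y_max - y_min) / img_h,       # height
)

print(coco_bbox)
print(voc_bbox)
print(yolo_line)
```

The practical difference is that COCO and Pascal VOC store absolute pixel coordinates, while YOLO stores normalized center coordinates and sizes, so the image dimensions must be known to convert between them.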
Once the requirements are received, instructions for the annotators must be written, specifying all the nuances of the markup needed to obtain the highest-quality dataset. A clear example of such a nuance is whether or not car mirrors should be included in the annotation.
Next, the tasks are distributed to annotators who perform the annotation. To ensure maximum quality, the same tasks can be duplicated across several annotators and the results compared.
After receiving the annotated dataset, the quality of the annotators' work must be checked. This stage is mandatory for catching systematic errors, which significantly degrade the results of the neural network.
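As an illustration of how duplicated annotations can be compared during quality control, here is a minimal sketch that computes Intersection over Union (IoU) between two annotators' boxes and flags large disagreements for manual review. The box coordinates and the 0.8 threshold are assumptions made for the example, not fixed values from the process described above.

```python
# Compare two annotators' boxes for the same object and flag low agreement.
# Boxes are assumed to be (x_min, y_min, x_max, y_max) in pixels.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Hypothetical results from two annotators for the same object
annotator_1 = (100, 120, 300, 280)
annotator_2 = (105, 118, 310, 285)

agreement = iou(annotator_1, annotator_2)
# Send the image for manual review if the annotators disagree too much;
# the 0.8 threshold is an illustrative choice, not a universal rule.
needs_review = agreement < 0.8
print(f"IoU = {agreement:.3f}, needs manual review: {needs_review}")
```

If many images of the same kind show low agreement, that is usually a sign of a systematic error or an ambiguous instruction rather than individual carelessness, which is exactly what this stage is meant to catch.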
Creating your own annotation team
Your own team always seems to be the best option: full control over the development process, rapid implementation of changes and new approaches, and, most importantly, maximum data security… but is all of that true?
To organize data labeling within the company, you first need to assign a certain number of employees to this work and provide them with the necessary tools and instructions. Staff training is equally important, because an annotator is first and foremost a qualified specialist with a certain amount of work experience. Even with perfect instructions, a novice annotator can at first produce a significant amount of incorrectly labeled data.
The cost of providing the team with software tools can also add significantly to the data labeling budget. In addition, scaling is a painful issue for small teams, because the business often grows much faster than the team can expand.
At the same time, an in-house team gives you great flexibility in the development process. Requirements can be changed after each training iteration, which allows for the fastest possible progress in the early stages of development.
Security is undoubtedly also one of the key issues, especially when it concerns extremely sensitive banking or medical information, but choosing a reliable image annotation partner can significantly reduce the possible security risks.
So let's summarize the pros and cons of creating your own annotation team: you get full control, flexibility, and data security, but at the cost of hiring and training, tooling expenses, and limited scalability.
Crowdsourcing
As Wikipedia notes, "Crowdsourcing involves a large group of dispersed participants contributing or producing goods or services — including ideas, votes, micro-tasks, and finances — for payment or as volunteers."
In other words, the entire annotation process is divided into a huge number of small tasks of 1–10 photos each and performed by a large number of people. This has several consequences. First, systematic errors appear in your dataset only if an error was made in the instructions. Second, it is one of the cheapest labeling options: a typical task costs 5–7 cents and can include from 2 to 5 images.
On the other hand, the people performing the tasks almost certainly have no knowledge of the subject area you are working in. Therefore, using crowdsourcing to label domain-specific datasets, such as X-ray images, butterfly species, or machine parts, is almost impossible.
There is one more thing that is rarely talked about: the low motivation of workers on crowdsourcing platforms. The reward for their work is very low, and their motivation matches it. Low motivation leads to a large number of errors in the dataset, which increases the cost of quality control.
Outsourcing
Outsourcing image annotation to a specialized company is in many cases the only viable step for further development. The main advantages of handing image annotation to a specialized team are its experience with similar work and its large pool of annotators, which allows scaling and processing of really large volumes of data. Quality control of the annotators' work is also handed over to the partner company's management, which frees up maximum resources for product development.
Outsourcing image annotation combines the advantages of an in-house annotation team and crowdsourcing: above all, the data security provided by the outsourcing company's team, along with cost optimization and the ability to scale. Of course, with this option some flexibility and speed of change are lost, because every change requires additional communication and coordination, but it makes it possible to obtain large volumes of data with minimal human and monetary resources on your side.
Conclusions
So which of these options should you choose?
If you are just starting out and your development process requires a lot of experimentation, including with data annotation, then an internal team will be the best option for you.
If you need to scale the amount of data required for model training, an outsourcing partner will be able to provide the required volume of data at excellent quality.
Crowdsourcing is suitable only for experienced users who know crowdsourcing platforms well and understand how to build photo annotation processes on them (we will definitely cover this interesting topic in future publications).
Aikolo will help you annotate photos and videos, as well as set up processes on a crowdsourcing annotation platform. Tell us about your project.