Google's object-recognition technology performs better as it gains more experience, and this challenge is designed to supply that experience: it improves evaluation datasets for machine learning by encouraging participants to search existing ML benchmarks for difficult examples.

CATS4ML (Crowdsourcing Adverse Test Sets for Machine Learning) challenges machine-learning models on object-recognition tasks. Its test set contains many examples that are difficult for algorithms to solve correctly. By contrast, many existing evaluation datasets are full of easy-to-evaluate items and miss the natural ambiguity of real-world context.

Without such real-world examples, it is hard to evaluate machine-learning performance reliably, and models develop weak spots as a result. Google AI's CATS4ML Data Challenge, presented at HCOMP 2020, shows just how difficult it is to identify an ML model's weaknesses.

The outcomes of this challenge will help identify and avoid future errors. Weak spots are examples that a model finds difficult to evaluate correctly, typically because the dataset does not include enough examples of those classes.
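One simple way to surface such weak spots is to break evaluation accuracy down per class and flag classes that are both rare in the training data and poorly predicted. The sketch below is a minimal illustration of that idea, not the challenge's actual tooling; the class names and thresholds are hypothetical:

```python
from collections import Counter, defaultdict

def find_weak_spots(train_labels, eval_labels, eval_preds,
                    min_train_count=50, max_accuracy=0.6):
    """Flag classes that are rare in training AND poorly predicted.

    Thresholds are illustrative, not taken from CATS4ML.
    """
    train_counts = Counter(train_labels)
    correct = defaultdict(int)
    total = defaultdict(int)
    for true, pred in zip(eval_labels, eval_preds):
        total[true] += 1
        correct[true] += int(true == pred)
    weak = []
    for cls in total:
        acc = correct[cls] / total[cls]
        if train_counts[cls] < min_train_count and acc < max_accuracy:
            weak.append((cls, train_counts[cls], round(acc, 2)))
    return weak

# Toy data with a hypothetical "jaguar" class that is rare in training.
train = ["cat"] * 100 + ["dog"] * 100 + ["jaguar"] * 5
eval_true = ["cat", "dog", "jaguar", "jaguar", "cat"]
eval_pred = ["cat", "dog", "cat", "cat", "cat"]
print(find_weak_spots(train, eval_true, eval_pred))
# → [('jaguar', 5, 0.0)]
```

Classes that are well represented in training, like "cat" and "dog" here, pass the check even when an occasional prediction is wrong; only the rare, consistently misclassified class is flagged.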

Researchers continue to study these "known unknowns" in the active-learning setting. There, the community's solution is to ask people for new labels on examples the model is uncertain about; if the model is confident about a photo, no one is asked.
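The labeling loop described above can be sketched as a simple confidence filter. This is a hedged illustration of the general technique, not code from the challenge; the threshold and the model outputs are hypothetical:

```python
def select_for_human_labeling(predictions, confidence_threshold=0.8):
    """Return indices of examples whose top prediction falls below the
    confidence threshold -- only these are sent to human labelers.

    `predictions` is a list of (predicted_label, confidence) pairs;
    the 0.8 threshold is an arbitrary example value.
    """
    return [i for i, (_label, confidence) in enumerate(predictions)
            if confidence < confidence_threshold]

# Hypothetical model outputs: (predicted label, confidence).
preds = [("cat", 0.98), ("dog", 0.55), ("cat", 0.91), ("bird", 0.40)]
print(select_for_human_labeling(preds))  # → [1, 3]
```

Only the two low-confidence examples are routed to people, which keeps human labeling effort focused where the model is least sure.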
Real-world examples like these give a much clearer picture of where and why a model fails.
The CATS4ML Data Challenge is open to researchers and developers worldwide until 30 April 2021. Participants can register on the Challenge website, download the target images and dataset, and submit the images they find.