Object detection and face recognition. How does neural network detection work?

RoboTech Vision presentation

June 29, 2021  Development

The development of artificial intelligence is advancing at a rapid pace. Many neural networks no longer have a problem distinguishing a person, their face or other objects in an image. RoboTech Vision also deals with object recognition. The company uses its so-called ORC algorithm mainly in robotics, for detecting static and dynamic obstacles in front of a robot and for segmenting a path for autonomous navigation, as well as for projects in the field of security systems.

Using the ORC algorithm, it is possible to detect people, recognise faces, dynamic and static objects or the environment, or to teach the neural network to distinguish custom objects. It is also possible to determine details such as gender, age or even mood, depending on the size and quality of the dataset. The ORC algorithm can thus be used for autonomous navigation or in a security system. RoboTech Vision uses its own datasets for object recognition. “In the project in which we had to detect a person with a weapon, we used a combination of these inputs. We used some data from existing datasets, the rest we created ourselves,” explains Ing. Patrik Štefka, a RoboTech Vision robotics engineer who specialises in visual systems.
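
The ORC algorithm itself is RoboTech Vision's own solution, but the basic input/output idea of face detection (an RGB image in, bounding boxes out) can be illustrated with a short, generic sketch. The example below uses OpenCV's pre-trained Haar cascade, a classical detector rather than a neural network, and the file name input.jpg is only a placeholder:

```python
# Generic illustration of face detection on an RGB image with OpenCV.
# This is not the ORC algorithm, only an example of the input/output idea;
# the file name "input.jpg" is a placeholder.
import cv2

# Load a pre-trained Haar cascade shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("input.jpg")                # image from a photo or a video frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; each detection is an (x, y, w, h) bounding box
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)
```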

The basis is quality data

In both cases, good quality data is important. When recognising objects, the input data are RGB images, which can be obtained from a photo or a video. The larger the image, the better the output, but also the longer the detection time. “With larger shots, the structure of the network is larger and object recognition takes longer, which is not suitable, for example, when navigating a robot, where the device must make decisions as quickly as possible,” says Štefka. The input format, an equal number of samples for each object and the originality of the images are important too. “It is not enough to pick a lot of photos from one video; you have to make sure that the object is captured from different angles, in different environments and is really precisely marked and classified. It should be noted that the neural network works similarly to the human brain. It only works with what it learns,” adds Štefka. Learning is the second, often key phase of the ORC algorithm. It consists of dataset creation, training and subsequent verification of the network.
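
The trade-off Štefka describes between input resolution and detection time usually shows up already at the preprocessing step. The sketch below is only illustrative; the resolutions and the file name frame.jpg are example values, not RoboTech Vision's actual settings:

```python
# Sketch of typical input preparation: a larger input resolution generally
# improves the output but increases detection time. The sizes below are
# illustrative example values.
import cv2

def prepare_input(path, size):
    """Load an image and resize it to the network's input resolution."""
    image = cv2.imread(path)                        # BGR image from a photo or video
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert to RGB
    return cv2.resize(image, (size, size))          # smaller size -> faster detection

fast_input = prepare_input("frame.jpg", 320)        # quicker, e.g. for robot navigation
accurate_input = prepare_input("frame.jpg", 640)    # slower, but captures finer detail
```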

There are two basic ways to teach a neural network: learning with a teacher and self-learning. RoboTech Vision mainly uses the teacher method. “Object recognition uses convolution, when a neural network searches for typical features of objects in an image. However, if there is no freely available dataset the neural network could use, it is necessary to teach it to recognise new objects using special software,” says Štefka. This is done by marking objects in roughly 90 percent of the data and using those images for training. The training time depends on the required success rate, the size of the model and the computing power of the graphics card of the device used for training. The trained model is then tested on the remaining 10 percent of the data, which the neural network has not previously seen. “This phase is called validation and we use it to test whether the neural network is working properly. We draw the result from it and evaluate whether the solution is applicable in practice,” explains Štefka.
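
A minimal sketch of this teacher-based workflow could look like the PyTorch example below: labelled images are split roughly 90/10 into training and validation sets, a small convolutional network is trained on the first part and then verified on the held-out part. Random tensors stand in for a real, manually labelled dataset, and the tiny network is purely illustrative:

```python
# Minimal sketch of learning with a teacher: ~90 % of the labelled data is
# used for training, the remaining ~10 % for validation on unseen samples.
# Random tensors are placeholders for a real labelled dataset.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

images = torch.rand(1000, 3, 64, 64)           # placeholder RGB images
labels = torch.randint(0, 2, (1000,))          # placeholder class labels
dataset = TensorDataset(images, labels)

train_set, val_set = random_split(dataset, [900, 100])   # ~90 % / 10 % split

model = nn.Sequential(                          # a tiny convolutional classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                          # training phase
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

correct = 0                                     # validation on unseen data
with torch.no_grad():
    for x, y in DataLoader(val_set, batch_size=32):
        correct += (model(x).argmax(1) == y).sum().item()
print(f"validation accuracy: {correct / len(val_set):.2%}")
```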

The second method is the so-called teacherless learning, or self-learning. No dataset is required with this method. What is important is a well-modelled simulation environment for the problem to be solved, for example collision-free control of a mobile robot in an unknown environment. “During learning, the neural network performs actions, such as controlling a robot, which are rewarded (no collision) or penalised (a collision) according to their correctness. As a result, the network behaves in such a way that it solves the problem optimally, with the greatest possible reward,” explains Štefka. If the simulation environment is well modelled, the trained solution can be applied in a real environment.
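
The reward-and-penalty idea can be illustrated with a toy Q-learning example: an agent moving along a one-dimensional corridor is rewarded for collision-free steps and penalised when it hits a wall. This is only a didactic sketch, not RoboTech Vision's simulation environment or training method:

```python
# Toy sketch of learning without a teacher (reinforcement learning):
# collision-free steps are rewarded, collisions are penalised, and the
# agent gradually learns a policy that maximises the total reward.
import random

N_CELLS, ACTIONS = 5, [-1, +1]                  # corridor cells, move left/right
q_table = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2           # learning rate, discount, exploration

for episode in range(500):
    state = N_CELLS // 2                        # start in the middle of the corridor
    for step in range(20):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = state + action
        if next_state < 0 or next_state >= N_CELLS:
            reward, next_state, done = -10.0, state, True    # collision: penalty
        else:
            reward, done = 1.0, False                        # no collision: reward
        # standard Q-learning update
        target = reward if done else reward + gamma * max(
            q_table[(next_state, a)] for a in ACTIONS
        )
        q_table[(state, action)] += alpha * (target - q_table[(state, action)])
        state = next_state
        if done:
            break
```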

Mechanisms against abuse

The evaluated data is then processed further so that the result is as accurate as possible. Unnecessary files and false detections are filtered out, two identical faces cannot appear in one photo, and the authenticity of objects is assessed. “It would be easy to fool the system with a photo, for example, so there are algorithms that evaluate the liveness of the face, for example by watching whether a person blinks or moves their face,” adds Štefka. Deepfake videos or humanoids could otherwise be a bait for the neural network.
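
A simplified sketch of such post-processing might discard low-confidence detections and reject duplicate identities within one photo. The detection tuples and the confidence threshold below are made-up example values, not RoboTech Vision's actual pipeline:

```python
# Sketch of typical detection post-processing: probable false detections
# are filtered out and the same identity is not allowed twice in one photo.
def postprocess(detections, min_confidence=0.6):
    """detections: list of (identity, confidence, bounding_box) tuples."""
    seen_identities = set()
    kept = []
    for identity, confidence, box in sorted(detections, key=lambda d: -d[1]):
        if confidence < min_confidence:
            continue                  # filter out probable false detections
        if identity in seen_identities:
            continue                  # the same face cannot appear twice in one photo
        seen_identities.add(identity)
        kept.append((identity, confidence, box))
    return kept

raw = [("person_A", 0.91, (10, 10, 60, 60)),
       ("person_A", 0.45, (200, 15, 55, 55)),   # weaker duplicate of the same face
       ("person_B", 0.30, (120, 80, 50, 50))]   # likely a false detection
print(postprocess(raw))
```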

“There are now neural networks that can distinguish an artificial face from a living one. The most suitable method is learning without a teacher, where two neural networks stand opposite each other and try to deceive each other. One creates an artificial face and tries to convince the other that it is a living object. Thanks to the poor visualisation or unnaturalness that is characteristic of some deepfake videos, the neural network can reveal that it is only a simulation. Sometimes it also detects fraud from the pixel density, or from a filter that constantly changes its size as it tries to adapt to its background and therefore does not look natural.”

Ing. Patrik Štefka

Robotics engineer, RoboTech Vision
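
The adversarial setup Štefka describes above corresponds to the well-known generator versus discriminator (GAN) scheme. A minimal PyTorch sketch, with tiny fully connected networks and random vectors standing in for real face images, could look like this:

```python
# Minimal sketch of the adversarial idea: a generator produces artificial
# "faces" and tries to fool a discriminator, which learns to tell artificial
# faces from real ones. Random vectors are placeholders for face images.
import torch
import torch.nn as nn

FACE_DIM, NOISE_DIM = 64, 16
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, FACE_DIM), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(FACE_DIM, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real_faces = torch.rand(32, FACE_DIM) * 2 - 1        # placeholder "living" faces
    fake_faces = generator(torch.randn(32, NOISE_DIM))   # artificial faces

    # Discriminator: label real faces 1, artificial faces 0
    d_loss = bce(discriminator(real_faces), torch.ones(32, 1)) + \
             bce(discriminator(fake_faces.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to convince the discriminator its faces are real
    g_loss = bce(discriminator(fake_faces), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```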

We can detect objects with a neural network and classify them, group them, or mark the pixels that form a mask and thus, for example, segment a path. These recognition methods can be used for various automation solutions, such as sorting large amounts of data, for example photographs, detecting harmful microbes that the human eye may miss, or unlocking door locks based on facial features. Object recognition is currently often used to detect masks on faces and is also an important part of autonomous car navigation. In combination with data post-processing, which increases its accuracy, and mechanisms against its misuse, the ORC algorithm has a wide range of uses, not only in robotics but also in various smart systems.
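
Pixel-wise masking of this kind can be illustrated with a publicly available pre-trained segmentation model. The sketch below uses torchvision's DeepLabV3 and a placeholder file name road.jpg; it is a generic example, not RoboTech Vision's own network:

```python
# Sketch of per-pixel segmentation with a publicly available pre-trained
# model: every pixel of the image is assigned a class index, which forms
# the segmentation mask. "road.jpg" is a placeholder file name.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("road.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)           # shape (1, 3, H, W)

with torch.no_grad():
    output = model(batch)["out"][0]              # per-class scores for every pixel
mask = output.argmax(0)                           # class index per pixel: the mask
print(mask.shape, mask.unique())
```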

Author of the post

Dominika Krajčovičová

Marketing manager

Related articles

We tested our map-based autonomous navigation in three different environments

Androver II robot drove 1.5 kilometers autonomously using our algorithm

Husky A200 robot recognizes and autonomously follows objects
