Robots are neither colleagues nor superiors
The media is currently full of articles about our future coexistence with robots, predicting bleak scenarios on the principle that bad news sells. However, in this expert article, we want to consider the current state of robots – more akin to problem children than intelligent individuals.
Today’s robots don't simply learn by themselves – the learning method has to be carefully chosen. If neural networks are used, their type and topology have to be determined, usually via trial and error – a lot of trial and error. It can also be difficult to decide what to feed these tin men. Some deep neural nets are able to identify key features themselves, but this ability often fails due to a lack of training data. This scenario demands smart feature engineering: time-consuming analysis methods are used to extract textual, aural, visual or other features, which can be used to characterize the object that the machine needs to learn to recognize.
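To make the idea of feature engineering concrete, here is a minimal sketch of hand-crafted textual features. The function name and the particular features (token count, average token length, digit ratio) are illustrative choices, not a prescription from the article:

```python
def extract_features(text: str) -> dict:
    """Hand-crafted textual features, as a stand-in for smart feature engineering.

    These are deliberately simple examples; real pipelines would extract
    far richer textual, aural, or visual descriptors.
    """
    tokens = text.lower().split()
    n_tokens = len(tokens)
    return {
        "n_tokens": n_tokens,                                  # how many words
        "avg_token_len": sum(map(len, tokens)) / max(n_tokens, 1),
        "digit_ratio": sum(c.isdigit() for c in text) / max(len(text), 1),
    }

features = extract_features("Robot 3000 reporting")
```

Such a feature vector, rather than raw input, is what characterizes the object the machine has to learn to recognize when training data is too scarce for a deep net to find the features itself.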
Machine learning methods
It’s not true that machine learning methods represent a kind of black box. Rather, problems are translated into an abstract space in which they can be solved and explained. What is true is that these explanations can rarely be translated back into the original form.
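The classic illustration of such a translation is the XOR problem: not linearly separable in its original two dimensions, but trivially separable once lifted into an abstract space with a product feature. The lifting function and weights below are a minimal sketch of that idea, not part of the original article:

```python
def lift(x: int, y: int) -> tuple:
    """Translate the problem into an abstract 3-D space by adding x*y."""
    return (x, y, x * y)

def xor_predict(x: int, y: int) -> int:
    """A single linear threshold on the lifted features solves XOR.

    Weights (1, 1, -2) are one valid separating choice in the lifted space.
    """
    fx, fy, fxy = lift(x, y)
    score = 1 * fx + 1 * fy - 2 * fxy
    return 1 if score > 0.5 else 0
```

The explanation in the abstract space is crisp ("weight −2 on the product coordinate"), but that product coordinate has no direct counterpart in the original problem, which is exactly why such explanations rarely translate back.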
The learning phase described above is followed by a no less challenging monitoring phase. Robots are stubborn and systematic, even when it comes to undesirable behavior. IT novices will be familiar with this problem from attempts to use a web robot to load webpages. Such robots quickly run amok, and the operators of the overwhelmed websites are quick to respond. The monitoring problem is even more apparent with large industrial robots, which humans aren't permitted to approach too closely.
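Keeping a web robot from running amok usually starts with honoring the site's robots.txt rules before fetching anything. A minimal sketch using Python's standard-library `urllib.robotparser` (the function name and example rules are illustrative):

```python
from urllib import robotparser

def allowed_urls(rules_lines: list, urls: list, user_agent: str = "*") -> list:
    """Filter a crawl list down to the URLs that robots.txt permits.

    `rules_lines` is the robots.txt content as a list of lines; a real
    crawler would download it from the site and also pause between requests.
    """
    rp = robotparser.RobotFileParser()
    rp.parse(rules_lines)  # parse the rules from the given lines
    return [u for u in urls if rp.can_fetch(user_agent, u)]

rules = ["User-agent: *", "Disallow: /private/"]
plan = allowed_urls(rules, [
    "http://example.com/index.html",
    "http://example.com/private/data.html",
])
```

A well-behaved robot would additionally throttle itself, e.g. sleeping between requests, so that it never monopolizes the target server.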