Aping Is Not Learning
This observation was written in 2018, based on the most successful Machine/Deep Learning applications and frameworks of the day.
In short: all approaches I’ve seen so far merely imitate the process of learning, and pride themselves on providing the answers humans would have given, mostly without any actual ‘understanding’. Or, as I put it in 2019:
A consequence of chasing and measuring only the digital outcome, with real structural insight into the problem domain not even on the radar, is that the neural networks brute-force themselves into providing the answers that will make their masters beam with pride, yet gain no actual knowledge about the domain, only about the mechanics of input and output. The machine will learn what the machine was instructed to learn, and if that is providing the correct answers, it will deliver those.
In plain language we call this cheating.
The equally mind-blowing and worrying adversarial attacks on neural-network-based vision and cognitive systems are a good indicator that we’re not on the right track with this approach. It is scary how easy it is to derail such systems, and it should be obvious that the problem won’t be solved the way we are going. To me it is clear that for Artificial Intelligence to get significantly better, we will need to consider actual learning the prime objective and the answers only a collateral benefit.
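To make concrete how little it takes to derail such a system, here is a minimal sketch of the fast-gradient-sign idea behind many adversarial attacks, applied to a hypothetical toy linear classifier (the weights, input, and step size are illustrative assumptions, not taken from any real system): nudging every input coordinate slightly in the direction that hurts the score flips the answer, even though nothing a human would call understanding has changed.

```python
# Toy linear "classifier": positive score -> class 1, else class 0.
# All numbers here are made up purely for illustration.
w = [1.0, -1.0, 0.5, 0.5]   # hypothetical learned weights
x = [0.3, 0.1, 0.2, 0.1]    # hypothetical input, correctly classified as 1

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(v):
    return 1 if dot(w, v) > 0 else 0

# Fast-gradient-sign step: for a linear score the gradient w.r.t. the
# input is just w, so stepping each coordinate by -eps * sign(w_i)
# lowers the score as fast as possible under a small per-pixel budget.
eps = 0.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1: the clean input is classified correctly
print(predict(x_adv))  # 0: a tiny, structured nudge flips the answer
```

The same mechanism scales up to deep networks (there the gradient is computed by backpropagation instead of being the weight vector itself), which is why imperceptible perturbations can fool image classifiers.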
Recently I read about a newfangled learning approach that actually extracts structural insights from the data and is then capable of connecting those pieces of metadata, building the extensive and intricate web of semantics that can genuinely be called a knowledge network. A bunch of magic numbers (weights) hung into a neural network is not that. But such attempts are few and far between, and it does look like we’re going to be driven around by cars with nigh-zero actual knowledge, only a mastery of cheating.
Here’s hoping the next generation will arrive soon.