Tuesday, 23 August 2022

Are neural networks inherently lazy (including our brains)?

This possibility struck me after reading the book "Hello World" by Hannah Fry. The book is a wonderful quick read, and quite factual in its narration: not much drama or opinion, while bringing out the state of (supposed) artificial intelligence and machine learning, along with related research, progress, and prospects. Back to the question: why do I ask this question about lazy decision making? A couple of examples from the book make me wonder.

Example 1 - Chapter - Medicine

Neural networks for image recognition have been at the forefront of progress. In the 2020s, image recognition is used in pretty much every area where a computer, mobile, or camera is involved. About a decade ago, a team trained an algorithm to distinguish between wolves and pet huskies by feeding it lots of images. They were able to show that, given the way the learning happened, the algorithm was using totally unrelated information to decide whether a picture showed a pet husky or a wolf: snow in the background meant wolf, and no snow meant pet husky!

A Pet

The author also compares this with a child who, when asked whether the animal the family had passed while walking in the park was a wolf or a husky, remarked that it was a husky because it was on a lead!

Is this type of learning to decide a lazy one, or an intelligent correlation on the part of humans?
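The husky-versus-wolf story can be reproduced as a toy experiment. The sketch below (my own illustration, not from the book, with made-up features and numbers) trains a tiny logistic-regression "classifier" on two features: a "snow in background" flag that perfectly matches the label during training, and a noisy "animal appearance" feature that is right only 80% of the time. The model leans on the easy snow feature, so when the snow correlation is flipped at test time, accuracy collapses.

```python
import math
import random

random.seed(0)

def make_data(n, snow_matches_label):
    """Label 1 = wolf, 0 = husky. Two binary features per picture."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        # Spurious feature: snow in the background.
        snow = label if snow_matches_label else 1 - label
        # Genuine but noisy feature: animal appearance, right 80% of the time.
        animal = label if random.random() < 0.8 else 1 - label
        data.append(((snow, animal), label))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    """Plain stochastic gradient descent on logistic loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y) in data:
            err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def accuracy(data, w, b):
    correct = sum(1 for (x, y) in data
                  if (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1))
    return correct / len(data)

train_set = make_data(500, snow_matches_label=True)   # snow always matches label
test_set = make_data(500, snow_matches_label=False)   # correlation flipped
w, b = train(train_set)
print("snow weight:", round(w[0], 2), "animal weight:", round(w[1], 2))
print("train accuracy:", accuracy(train_set, w, b))
print("flipped-snow test accuracy:", accuracy(test_set, w, b))
```

Nothing forces the model to prefer the snow feature except that it is the easier (perfectly predictive) signal in the training data, which is exactly the "laziness" the example suggests.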

Example 2 - Chapter - Cars

In one of the early attempts at using neural networks and machine learning for driving, in the 1990s, academics tried to let the machine learn from hours of human driving. They fed the camera inputs, along with the actions of the human drivers, into the system, drove for hours and hours on real roads, and let the neural network learn from the training data.

When they let the system drive, it seemed to work until it came upon a bridge. There the car swerved badly and they had to intervene manually. It turned out the network had learned to use the grass along the roadside as the guide for its driving.

Does it seem like the neural network found a lazy way of making decisions?

Drive by lines... Is that a correct Exit sign? Human mistake?

These examples set me thinking: are neural networks, by design (of the Omnipotent creator), lazy enough to settle for easy answers? Do we humans take the easy path most of the time in our decisions?

  • Searching the internet for bits of information - vs - learning a topic in some depth
  • Getting swung by headlines - vs - reading the entire article (many times contradictory information is found in the detail)
  • Jumping to conclusions (about individuals, groups, nations) - vs - looking up diverse sources of information

Finally,

Quick acceptance of points reinforcing our own experiences, beliefs, and conclusions - vs - objective learning

Can you add examples and/or counter-examples for this thought / question?

PS: The most thought-provoking thing in our thought-provoking time is that we are still not thinking. - Martin Heidegger (in his book "Was heißt Denken?" - 'What Is Called Thinking?')