
Showing posts with label Artificial Intelligence. Show all posts

Tuesday, 23 August 2022

Are neural networks inherently lazy (including our brains)?

This possibility struck me after reading the book "Hello World" by Hannah Fry. The book is a wonderful quick read, quite factual in its narration: not much drama or opinion, while bringing out the state of (supposed) artificial intelligence and machine learning, along with related research, progress, and what the future may hold. Back to the question - why do I ask this question about lazy decision making? A couple of examples from the book make me wonder.

Example 1 - Chapter - Medicine

Neural networks for image recognition have been at the forefront of progress. Now, in the 2020s, we know that image recognition is used in pretty much every area where a computer, mobile, or camera is involved. A decade ago, a team training an algorithm to distinguish between wolves and pet huskies fed lots of images to the machine. They were able to show that, with the way the machine learning happened, the algorithm was using totally unrelated information to decide whether a picture showed a pet husky or a wolf: snow in the background meant wolf, and no snow meant pet husky!

[Image: A Pet]

The author also compares this with a child who, when asked whether the animal they passed while walking in the park was a wolf or a husky, remarked that it was a husky, because it was on a lead!

Is this type of learning to decide a lazy one, or an intelligent correlation on the part of humans?
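The wolf/husky story is a classic case of what researchers call "shortcut learning". As a rough illustration (the data and features below are entirely made up, not from the book or the original study), here is a toy "one-rule" learner that picks the single feature giving the best training accuracy. Because the hypothetical background feature "snow" happens to correlate perfectly with the label, the learner latches onto it and then misclassifies a husky photographed in snow:

```python
from collections import Counter

# Hypothetical training set: each animal is described by two binary features.
# "snow" (background) is spuriously correlated with the label, while
# "pointy_ears" is shared by both animals and carries no signal.
train = [
    ({"snow": 1, "pointy_ears": 1}, "wolf"),
    ({"snow": 1, "pointy_ears": 1}, "wolf"),
    ({"snow": 1, "pointy_ears": 0}, "wolf"),
    ({"snow": 0, "pointy_ears": 1}, "husky"),
    ({"snow": 0, "pointy_ears": 1}, "husky"),
    ({"snow": 0, "pointy_ears": 0}, "husky"),
]

def train_stump(data):
    """Pick the single feature/value rule with the best training accuracy
    (a one-rule 'decision stump' -- the laziest possible model)."""
    best = None
    for f in data[0][0].keys():
        for v in (0, 1):
            with_v = Counter(lbl for x, lbl in data if x[f] == v)
            without = Counter(lbl for x, lbl in data if x[f] != v)
            if not with_v or not without:
                continue
            # Training examples answered correctly by the majority rule
            correct = with_v.most_common(1)[0][1] + without.most_common(1)[0][1]
            if best is None or correct > best[0]:
                best = (correct, f, v,
                        with_v.most_common(1)[0][0],
                        without.most_common(1)[0][0])
    return best  # (train_correct, feature, value, label_if_match, label_else)

_, feature, value, if_match, otherwise = train_stump(train)
print(f"learned rule: {feature}=={value} -> {if_match}, else {otherwise}")

# A husky photographed in the snow fools the shortcut learner:
husky_in_snow = {"snow": 1, "pointy_ears": 1}
pred = if_match if husky_in_snow[feature] == value else otherwise
print("husky in snow classified as:", pred)  # -> wolf
```

The learner scores 100% on its training data, which is exactly why the shortcut looks "intelligent" until it meets an example where the spurious correlation breaks.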

Example 2 - Chapter - Cars

In one of the early attempts at using neural networks and machine learning for driving, in the 1990s, researchers tried to let the machine learn from hours of human driving. They fed the camera inputs, along with the actions of the human drivers, into the system. They drove for hours and hours on real roads with the system on, and let the neural network learn from this training data.

When they let the system drive, it seemed to work, until it came upon a bridge. There the car swerved badly and they had to intervene manually. It turned out the network had learned to use the grass along the roadside as its guide for driving.

Does it seem like the neural network found a lazy option of decision making?

[Image: Drive by lines... Is that a correct Exit sign? Human mistake?]

These examples set me thinking: are neural networks, by design (of the Omnipotent creator), lazy, settling for the easy answer? Do we humans take the easy path in most of our decisions?

  • Searching the internet for bits of information - vs - learning a topic in some depth
  • Getting swayed by headlines - vs - reading the entire article (the details often contradict the headline)
  • Jumping to conclusions (about individuals, groups, nations) - vs - looking up diverse sources of information

Finally,

Quick acceptance of points reinforcing our own experiences, beliefs, and conclusions - vs - objective learning

Can you add examples and/or counter-examples for this thought / question?

PS: "The most thought-provoking thing in our thought-provoking time is that we are still not thinking." - Martin Heidegger (in his book "Was heisst Denken" - 'What is called thinking?')

Saturday, 12 August 2017

Should we worry about machines with AI taking over?

No worries, I guess!

With the recent flurry of articles, including one by Elon Musk, saying that we should control research work on Artificial Intelligence, I thought about the past and the present as well.

I feel that we don't have to worry, because it would be no change from the status quo. How many of us really make intelligent decisions for ourselves, rather than letting others dictate them for us? Are those decisions by others in our best interest, or in the best interest of the decision maker? How many of us outsource our decisions? Here are some examples along this line of thought...

1. Elders in many places dictate the decisions. Seniors in places of study or work dictate the path of progress. The schools, colleges, and courses we choose - or even our life partners - many have been decided by others.

2. Some of us want to shirk the "responsibility" of dealing with the outcomes of our moves by letting others decide for us.

3. Many of us prefer that others take decisions for us, though it may be couched in language suggesting that we are merely taking others' inputs / advice.

4. If some people have become successful in our eyes, we want to take up similar paths. We want to try their methods. We even profess their methods to others, even if we don't follow them ourselves. Even if our gut says they may not fit us, we have to try, and sometimes also boast that we follow X or Y.

5. Parents, community, or society have decided many things for us - from the small issues to the big moves.

6. In the early days of the 20th century, we gave up the freedom of living our lives and our emotions by waiting for the postman and for what he delivered (or did not). The modern avatars of this are checking email twenty times a day, peeking at social media constantly, and opening messengers and chats time and again every day.

7. Our moods are dictated by others' expectations, by the happenings around us, by our projections of others' future reactions to outcomes, by how we want to manipulate the outcome to satisfy X or Y or Z (or many), etc.

In other words, the "free will" we have seems to be still tempered by complex calculations on expectations, outcomes, reactions, repercussions, etc. Where is the freedom?

If so much of what we do has already been outsourced to other "supposedly" sentient / intelligent beings, why can't we outsource decisions to machines as well? If a machine develops better decision-making capabilities over time, why not? What are the arguments for or against a machine being an inferior target for our outsourcing?

PS: This is not about Automation - but AI. Automation has always been there - every new tool (we invent or discover) reduces our effort and improves efficiency.