Exploring AI Dependence Upon ‘Artificial Stupidity’ For Autonomous Cars

By Lance Eliot, the AI Trends Insider 

We all generally seem to know what it means to say that someone is intelligent. 

In contrast, when you label someone as “stupid,” the question arises as to what exactly that means. For example, does stupidity imply the lack of intelligence in a zero-sum fashion, or does stupidity occupy its own space and sit adjacent to intelligence as a parallel equal? 

Let’s do a thought experiment on this weighty matter. 

Suppose we somehow had a bucket filled with intelligence. We are going to pretend that intelligence is akin to something tangible and that we can essentially pour it into and possibly out of a bucket that we happen to have handy. Upon pouring this bucket filled with intelligence onto say the floor, what do you have left? 

One answer is that the bucket is now entirely empty and there is nothing left inside the bucket at all. The bucket has become vacuous and contains absolutely nothing. Another answer is that the bucket upon being emptied of intelligence has a leftover that consists of stupidity. In other words, once you’ve removed so-called intelligence, the thing that you have remaining is stupidity. 

I realize this is a seemingly esoteric discussion but, in a moment, you’ll see that the point being made has a rather significant ramification for many important things, including and particularly for the development and rise of Artificial Intelligence (AI). 

Can intelligence exist without stupidity, or in a practical sense is there always some amount of stupidity that must exist if there is also the existence of intelligence? 

Some assert that intelligence and stupidity are Zen-like yin and yang. In this perspective, you cannot grasp the nature of intelligence unless you also have a semblance of stupidity as a kind of measuring stick. 

It is said that humans become increasingly intelligent over time, and thus are reducing their levels of stupidity. You might suggest that intelligence and stupidity are playing a zero-sum game, namely that as your intelligence rises you are simultaneously reducing your level of stupidity (similarly, if your stupidity rises, this implies that your intelligence lowers). 

Can humans arrive at a 100% intelligence and a zero amount of stupidity, or are we fated to always have some amount of stupidity, no matter how hard we might try to become fully intelligent? 

Returning to the bucket metaphor, some would claim that there will never be the case that you are completely and exclusively intelligent and have expunged stupidity. There will always be some amount of stupidity that’s sitting in that bucket. 

If you are clever and try hard, you might be able to narrow down how much stupidity you have, though there is still some amount of stupidity in that bucket. 

Does having stupidity help intelligence or is it harmful to intelligence? 

You might be tempted to assume that any amount of stupidity is a bad thing and therefore we must always be striving to keep it caged or otherwise avoid its appearance. But we need to ask whether that simplistic view of tossing stupidity into the “bad” category and placing intelligence into the “good” category is potentially missing something more complex. You could argue that being stupid, at times and in limited ways, offers a means for intelligence to get even better. 

When you were a child, suppose you stupidly tripped over your own feet, and after doing so, you came to the realization that you were not carefully lifting your feet. Henceforth, you became more mindful of how to walk and thus became intelligent at the act of walking. Maybe later in life, while walking on a thin curb, you managed to save yourself from falling off the edge, partially due to that early lesson that was sparked by stupidity and became part of your intelligence. 

Of course, stupidity can also get us into trouble. 

Despite having learned via stupidity to be careful as you walk, one day you decide to strut on the edge of the Grand Canyon. While doing so, oops, you fall off and plunge into the chasm.  

Was it an intelligent act to perch yourself on the edge like that? Apparently not. 

As such, we might want to note that stupidity can be a friend or a foe, and it is up to the intelligence portion to figure out which is which in any given circumstance and any given moment. 

You might envision that there is an eternal struggle going on between the intelligence side and the stupidity side. 

On the other hand, you might equally envision that the intelligence side and stupidity side are pals, each of which tugs at the other, and therefore it is not so much a fight as it is a delicate dance and form of tension about which should prevail (at times) and how they can each moderate or even aid the other. 

This preamble provides a foundation to discuss something increasingly becoming worthy of attention, namely the role of Artificial Intelligence and (surprisingly) the role of Artificial Stupidity. 

For my indication of the grand convergence that has led to today’s AI, see this link: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/ 

For the importance of AI having self-awareness, see my article here: https://aitrends.com/ai-insider/self-awareness-self-driving-cars-know-thyself/ 

For why it is crucial to have AI algorithmic transparency, see my review here: https://aitrends.com/ai-insider/algorithmic-transparency-self-driving-cars-call-action/ 

For my assessment of whether AI can have motivation, see the article here: https://aitrends.com/ai-insider/motivational-ai-bounded-irrationality-self-driving-cars/ 

Thinking Seriously About Artificial Stupidity 

We hear every day about how our lives are being changed via the advent of Artificial Intelligence. 

AI is being infused into our smartphones, and into our refrigerators, and into our cars, and so on. 

If we are intending to place AI into the things we use, the question arises as to whether we need to consider the yang of the yin, specifically, do we need to be cognizant of Artificial Stupidity? 

Most people snicker upon hearing or seeing the phrase “Artificial Stupidity,” and they assume it must be some kind of insider joke to refer to such a thing. 

Admittedly, the conjoining of the words artificial and stupidity seems, well, perhaps stupid in and of itself. 

But, by going back to the earlier discussion about the role of intelligence and the role of stupidity as it exists in humans, you can recast your viewpoint and likely see that whenever you carry on a discussion about intelligence, one way or another you inevitably need to also be considering the role of stupidity. 

Some suggest that we ought to use another way of expressing Artificial Stupidity to lessen the amount of snickering that happens. Floated phrases include Artificial Unintelligence, Artificial Humanity, Artificial Dumbness, and others, none of which have caught hold as yet. 

Please bear with me and accept the phrasing of Artificial Stupidity and also go along with the belief that it isn’t stupid to be discussing Artificial Stupidity. 

Indeed, you could make the case that not discussing Artificial Stupidity is itself the stupid approach: if you are unwilling to accept the realization that stupidity exists in the real world, then in the artificial world of computer systems in which we are attempting to recreate intelligence, you would be ignoring or blind to what is essentially the other half of the overall equation. 

In short, some say that true Artificial Intelligence requires a combination of the “smart” or good AI that we think of today and the inclusion of Artificial Stupidity (warts and all), though the inclusion must be done in a smart way. 

Indeed, let’s deal with the immediate knee-jerk reaction that many have to this notion by dispelling the argument that including Artificial Stupidity in Artificial Intelligence inherently and irrevocably introduces stupidity and is, presumably, therefore aiming to make AI stupid. 

Sure, if you stupidly add stupidity, you have a solid chance of undermining the AI and rendering it stupid. 

On the other hand, in recognition of how humans operate, the inclusion of stupidity, when done thoughtfully, could ultimately aid the AI (think about the story of tripping over your own feet as a child). 

Here’s something that might really get your goat. 

Perhaps the only means to achieve true and full AI, which to date is nowhere near human intelligence levels, consists of infusing Artificial Stupidity into AI; thus, as long as we keep Artificial Stupidity at arm’s length or treat it as a pariah, we trap ourselves into never reaching the nirvana of utter and complete AI that is seemingly as intelligent as humans are. 

Ouch, by excluding Artificial Stupidity from our thinking, we might be dooming ourselves to never arriving at the pinnacle of AI. 

That’s a punch to the gut and so counterintuitive that it often stops people in their tracks. 

There are emerging signs that revealing and harnessing artificial stupidity (or whatever it ought to be called) can be quite useful. 

One such area, I assert, involves the inclusion of artificial stupidity in the advent of true self-driving cars. 

Shocking? 

Maybe so. 

Let’s unpack the matter. 

For my framework about AI self-driving autonomous cars, see this link: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

On the dangers of AI becoming a Frankenstein, see my analysis: https://aitrends.com/ai-insider/frankenstein-and-ai-self-driving-cars/ 

To understand the cognitive elements of autonomous cars, see my explanation here: https://aitrends.com/ai-insider/cognitive-timing-for-ai-self-driving-cars/ 

Exploiting Artificial Stupidity For Gain 

When referring to true self-driving cars, I’m focusing on Level 4 and Level 5 of the standard scale used to gauge autonomous cars. These are self-driving cars that have an AI system doing the driving and there is no need and typically no provision for a human driver. 

The AI does all the driving and any and all occupants are considered passengers. 

On the topic of Artificial Stupidity, it is worthwhile to quickly review the history of how the terminology came about. 

In the 1950s, the famous mathematician and pioneering computer scientist Alan Turing proposed what has become known as the Turing test for AI. 

Simply stated, suppose you could interact with a computer system imbued with AI and, at the same time, separately interact with a human, without being told beforehand which is which (let’s assume they are both hidden from view). Upon making inquiries of each, you are tasked with deciding which one is the AI and which one is the human. 

We could then declare the AI a winner as exhibiting intelligence if you could not distinguish between the two contestants. In that sense, the AI is indistinguishable from the human contestant and must ergo be considered equal in intelligent interaction. 

There is a twist to the original Turing test that many don’t know about. 

One qualm expressed was that an inquirer might craftily ask the two contestants to calculate, say, pi to the thousandth digit. 

Presumably, the AI would do so wonderfully and readily tell you the answer in the blink of an eye, doing so precisely and abundantly correctly. Meanwhile, the human would struggle to do so, taking quite a while to answer if using paper and pencil to make the laborious calculation, and ultimately would be likely to introduce errors into the answer. 

Turing realized this aspect and acknowledged that the AI could be essentially unmasked by asking such arithmetic questions. 

He then took the added step, one that some believe opened a Pandora’s box, and suggested that the AI ought to avoid giving the right answers to arithmetic problems. 

In short, the AI could try to fool the inquirer by appearing to answer as a human might, including incorporating errors into the answers given and perhaps taking the same length of time that doing the calculations by hand would take. 

Starting in the early 1990s, a competition akin to the Turing test was launched, offering a modest cash prize; it has become known as the Loebner Prize. In this competition, the AI systems are typically infused with human-like errors to aid in fooling the inquirers into believing the AI is the human. There is controversy underlying the competition, but I won’t go into that herein. A now-classic article about the competition appeared in The Economist in 1991. 

Notice that once again we have a bit of irony that the introduction of stupidity is being done to essentially portray that something is intelligent. 

This brief history lesson provides a handy launching pad for the next elements of this discussion. 

Let’s boil down the topic of Artificial Stupidity into two main facets or definitions: 

1) Artificial Stupidity is the purposeful incorporation of human-like stupidity into an AI system, done to make the AI seem more human-like, not to improve the AI per se but instead to shape humans’ perception of the AI as being seemingly intelligent. 

2) Artificial Stupidity is an acknowledgment of the myriad human foibles and the potential inclusion of such “stupidity” into or alongside the AI in a conjoined manner that can potentially improve the AI when properly managed. 

One common misconception that I’d like to dispel about the first definition involves the somewhat false assumption that the computer is going to purposefully miscalculate something. 

There are some who shriek in horror and disdain at the suggestion that the computer would intentionally do a calculation incorrectly, such as figuring out pi in a manner that is inaccurate. 

That’s not what the definition necessarily implies. 

It could be that the computer might correctly calculate pi to the thousandth digit, then opt to tweak some of the digits (keeping track of which ones were altered), do all of this in the blink of an eye, and then wait to display the result until an amount of time equivalent to a human-by-hand calculation has elapsed. 

In that manner, the computer has the correct answer internally and has only displayed something that seems to have errors. 
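To make this concrete, here is a minimal Python sketch of that first definition of Artificial Stupidity: the system computes the right answer internally, perturbs a few displayed digits while privately recording exactly what it changed, and paces its reply as a human would. The function name and parameters are invented for illustration, not taken from any actual Turing-test entrant.

```python
import random
import time

def humanlike_answer(correct_digits, error_rate=0.05, delay_seconds=0.0):
    """Show a deliberately imperfect version of a correctly computed answer,
    while privately recording every digit that was altered (hypothetical helper)."""
    displayed = list(correct_digits)
    alterations = {}  # position -> (true digit, shown digit)
    for i, ch in enumerate(displayed):
        if ch.isdigit() and random.random() < error_rate:
            wrong = random.choice([d for d in "0123456789" if d != ch])
            alterations[i] = (ch, wrong)
            displayed[i] = wrong
    time.sleep(delay_seconds)  # mimic the pace of a human working by hand
    return "".join(displayed), alterations

# The system "knows" pi correctly and instantly...
pi_digits = "3.14159265358979323846"
# ...yet reveals an error-laden answer, after a human-paced delay.
shown, record = humanlike_answer(pi_digits, error_rate=0.1, delay_seconds=2.0)
print(shown)   # what the inquirer sees
print(record)  # the internally retained truth about the tweaks
```

Note that the correct answer is never lost; only the presentation is degraded, which is precisely the distinction being drawn here.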

Now, that certainly could be bad for the humans who are relying upon what the computer has reported, but note that this is decidedly not the same as the computer having in fact miscalculated the number. 

There’s more that can be said about such nuances, but for now, let’s continue forward. 

Both of those variants of Artificial Stupidity can be applied to true self-driving cars. 

Doing so carries a certain amount of angst and will be worthwhile to consider. 

For my detailed review of the Turing Test, see this link: https://aitrends.com/ai-insider/turing-test-ai-self-driving-cars/ 

On the problems of probabilistic reasoning in AI, take a look at my indication: https://aitrends.com/ai-insider/probabilistic-reasoning-ai-self-driving-cars/ 

Common sense reasoning is an open-ended challenge and needs to be considered, see my article: https://aitrends.com/ai-insider/common-sense-reasoning-and-ai-self-driving-cars/ 

A controversial perspective is that perhaps we need to restart our understanding and approach to AI, see this discussed here: https://aitrends.com/ai-insider/starting-over-on-ai-and-self-driving-cars/ 

Artificial Stupidity And True Self-Driving Cars 

Today’s self-driving cars that are being tried out on our public roadways have already gotten a reputation for their driving prowess. Overall, driverless cars to-date are akin to a novice teenage driver who is timid and somewhat hesitant about the driving task. 

When you encounter a self-driving car, it will often try to create a large buffer zone between itself and the car ahead, attempting to abide by the car-lengths rule of thumb that you were taught when first learning to drive. 

Human drivers generally don’t care about the car lengths safety zone and edge up on other cars, doing so to their own endangerment. 

Here’s another example of driving practices. 

Upon reaching a stop sign, a driverless car will usually come to a full and complete stop. It will wait to see that the coast is clear, and then cautiously proceed. I don’t know about you, but I can say that where I drive, nobody makes complete stops anymore at stop signs. A rolling stop is the norm nowadays. 

You could assert that humans are driving in a reckless and somewhat stupid manner. By not having enough car lengths between your car and the car ahead, you are increasing your chances of a rear-end crash. By not fully stopping at a stop sign, you are increasing your risks of colliding with another car or a pedestrian. 

In a Turing test manner, you could stand on the sidewalk and watch cars going past you, and by their driving behavior alone you could likely ascertain which are the self-driving cars and which are the human-driven cars. 

Does that sound familiar? 

It should, since this is roughly the same as the arithmetic precision issue earlier raised. 

How to solve this? 

One approach would be to introduce Artificial Stupidity as defined above. 

First, you could have the on-board AI purposely shorten the car’s length buffer to appear as though it is driving in the same manner as humans. Likewise, the AI could be modified to roll through stop signs. This is all rather easily arranged. 
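As a rough illustration of how easily such behavior could be arranged, consider this hedged Python sketch of tunable driving-style parameters. The class and field names are hypothetical, invented for this example rather than drawn from any real autonomous driving stack.

```python
from dataclasses import dataclass

@dataclass
class DrivingStyleParams:
    """Hypothetical knobs for a self-driving policy (invented names)."""
    following_gap_seconds: float    # time gap maintained to the car ahead
    stop_sign_min_speed_mps: float  # 0.0 means a full and complete stop

# Cautious, textbook behavior, akin to today's driverless cars
textbook = DrivingStyleParams(following_gap_seconds=3.0,
                              stop_sign_min_speed_mps=0.0)

# Human-mimicking "Artificial Stupidity": tighter gaps and rolling stops
humanlike = DrivingStyleParams(following_gap_seconds=1.0,
                               stop_sign_min_speed_mps=2.0)
```

The point of the sketch is that the "stupid" behavior is literally just a different setting of the same dials, which is what makes it so easily arranged.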

Humans watching a driverless car and a human-driven car would no longer be able to discern one such car from the other since they both would be driving in the same error-laden way. 

That seems to solve one problem as it relates to the perception that we humans might have about whether the AI of self-driving cars is intelligent or not. 

But wait a second, aren’t we then making the AI into a riskier driver? 

Do we want to replicate and promulgate these crash-causing, risky human driving behaviors? 

Sensibly, no. 

Thus, we ought to move to the second definition of Artificial Stupidity, namely incorporating these “stupid” ways of driving into the AI system in a substantive way that allows the AI to leverage those aspects when applicable and yet also be aware enough to avoid or mitigate them when needed. 

Rather than having the AI drive in human error-laden ways and do so blindly, the AI should be developed so that it is well-equipped enough to cope with human driving foibles, detecting those foibles and being a proper defensive driver, along with leveraging those foibles when the circumstances make sense to do so (for more on this, see my posting here). 
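Continuing the earlier sketch, the second definition might look like a context-aware selector that invokes the human-mimicking style only in benign situations and otherwise stays defensive. Every threshold here is an assumption made up for illustration, reusing the textbook and humanlike parameters from the sketch above.

```python
def choose_style(traffic_density, pedestrians_nearby, visibility_good):
    """Toy decision rule: leverage human-like driving foibles only when
    circumstances are benign, falling back to cautious, defensive driving
    otherwise. Thresholds and inputs are invented for this sketch."""
    if pedestrians_nearby or not visibility_good:
        return textbook   # mitigate: never cut corners near people or in poor visibility
    if traffic_density > 0.7:
        return humanlike  # blend in with dense, human-paced traffic flow
    return textbook       # default to the cautious style
```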

On the pranking of AI autonomous cars, see my assessment here: https://aitrends.com/ai-insider/pranking-of-ai-self-driving-cars/ 

One outside-the-box approach to AI includes child-learning, see my recap at this link: https://www.aitrends.com/ai-insider/ai-machine-child-deep-learning-the-case-of-ai-self-driving-cars/ 

Applying these topics to one-shot learning is an intriguing opportunity, see my analysis: https://www.aitrends.com/ai-insider/seeking-one-shot-machine-learning-the-case-of-ai-self-driving-cars/ 

For my comments about the infamous AI paperclip problem, see the link here: https://aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/ 

Conclusion 

One of the big unspoken secrets about today’s AI is that it does not have any semblance of common-sense reasoning and in no manner whatsoever has the capabilities of overall human reasoning (AI with full human-level reasoning is often referred to as Artificial General Intelligence, or AGI). 

As such, some would suggest that today’s AI is closer to the Artificial Stupidity side of things than it is to the true Artificial Intelligence side of things. 

If there is a duality of intelligence and stupidity in humans, presumably you will need a similar duality in an AI system if it is to be able to exhibit human intelligence (though, some say that AI might not have to be so duplicative). 

On our roads today, we are unleashing so-called AI self-driving cars, yet the AI is not sentient and not anywhere close to being sentient. 

Will self-driving cars only be successful if they can climb further up the intelligence ladder? 

No one yet knows, and it’s certainly not a stupid question to be asked. 

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends. 

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] 

