The Stupidity of Technologies


Applying Asimov’s Laws

Giancarlo Livraghi – stupidity.it – gandalf.it

April 2007



I wonder why (silly me) I never thought of this until I saw a series of cartoons about applying Asimov’s “Laws” not only to robots, but to all sorts of technologies – by the same author whom I had quoted in other articles on different subjects.

I am placing the “funny stories” at the end of this page. Let’s start by taking a look at what those “laws” are and how they may be applied in a world where robots, of course, exist, but are not (so far) the “humanoid” beings imagined by science fiction.

The Three Laws of Robotics were conceived by Isaac Asimov and followed in his own developments, as well as practically everyone else’s, as basic rules for the behavior of machines that are assumed to be programmed so that they can do some sort of “thinking”. (See the explanation in Wikipedia).

For over sixty years, they have been generally accepted not only in science fiction, but also in scientific studies of cybernetics and hypothetical “artificial intelligence”.

 
“Intelligent” machines, outside science fiction, are either
an unrealistic assumption or a questionable definition.
But that’s another story.
See chapter 19 of The Power of Stupidity and
Machines aren’t “bad”, but they are very stupid – 1999.
 

The “laws” were defined by Isaac Asimov in a short story, Runaround, in 1942, and later applied in his many developments on robots, conceived not as “androids” (which look like people) but as visibly mechanical devices with an approximately anthropomorphic appearance – as we see, for instance, in this picture, which is the front cover of the first edition of I, Robot, a collection of the first series of Asimov’s stories on this subject, from 1940 to 1950.


[image: front cover of the first edition of I, Robot]


This other image is much more recent – 2001.
It’s a prototype of a robot called “Asimo”,
named in Isaac Asimov’s honor.

[image: the Asimo robot]

The style is different. But the concept is the same.


Of course robots exist, in many different forms, from sophisticated scientific or industrial equipment to everyday home appliances. But there are no extended applications of the hypothesis that has been considered “possible” for over half a century (and, in some ways, much earlier): machines that look, more or less, like these and aren’t just toys or experiments, but are actually in the service of humanity, replacing people in unpleasant, dangerous, fatiguing or otherwise “servile” tasks.

It’s pretty obvious that, if such machines existed and were widely used, there would have to be “universally” shared, and clearly defined, basic criteria to regulate their behavior. These can be identified as the “three laws of robotics”.

The idea is that they must be built into the programming of any such machine and that they are in strictly “hierarchic” order: the first prevails over the other two, the second over the third, and all three over everything else.
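
To make this strictly hierarchic order concrete, here is a minimal sketch (in Python, an illustration of mine, not anything Asimov specified) of how a robot’s program might check a proposed action against the three laws in order of priority. All the names – Action, harms_human and so on – are invented for the example.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # hypothetical flag: would this action injure a human being?
    ordered_by_human: bool  # hypothetical flag: was this action ordered by a human being?
    endangers_robot: bool   # hypothetical flag: would this action damage the robot itself?

def allowed(action: Action) -> bool:
    # First Law: never injure a human being. Highest priority, checked first.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless the First Law forbids it
    # (that case has already been ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: protect the robot's own existence, subordinate to the two laws above.
    return not action.endangers_robot

The point of the ordering is simply that an earlier check always overrides a later one: no human order can make the robot harm a human being, and no instinct of self-preservation can make it disobey an order.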


1 – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Let’s move away from an imaginary world populated by billions of more or less humanoid robots – and let’s take a look at what is happening in our everyday life. Do machines (or “automatic” behaviors, even when performed by human beings) obey the “first law” as often as they should? It’s sadly obvious that they don’t. Not only computers, but also simpler devices, aren’t “good servants” for the human beings they are supposed to help.

They pester us every day with all sorts of problems. And they do little, or nothing, to prevent anything or anyone from harming us. The consequences range from small inconveniences to extremely serious damage.

These devices are essentially stupid. If they accurately executed simple and coherent tasks, they could be helpful and cause little trouble. The problem is an ever-growing, idiotic trend to make them more and more complicated, cramming them with useless, or uselessly unmanageable, functions.

The development and application of all sorts of technologies would considerably improve if programming and engineering strictly obeyed the First Law of Robotics.


2 – A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Technologies (as well as bureaucratic routines, even when applied by human beings) obey the orders (and needs) of the people they are supposed to serve much less than they should. They are dominated by the intentions of programmers or project managers who care about their technical fantasies (or category privileges and prejudices), rarely satisfying, or even considering, the real needs of users.

To make things even worse, technicians (and bureaucrats) often don’t know what they are doing, because by combining more and more complicated routines they “import” devices that they don’t understand or procedures that were developed for a different purpose. The resulting mess becomes unmanageable, with a multiplication of malfunctions that even the most experienced technicians can’t understand – and therefore can’t fix.

The result is that technologies no longer obey human beings – and, more and more, people are expected to obey technologies. Before this drives a large part of humankind into needing psychiatric help, or causes disasters even more catastrophic than those that have already happened, it would be very desirable to subject every technical development (or bureaucratic routine) to a strict and rigorous application of the Second Law of Robotics.

Many of today’s technologies should be dumped and replaced by much simpler and more functional systems. (In many cases this could be done by restoring those that existed, but were stupidly “upgraded” to less effective solutions.)

This may be embarrassing for those who profit, or exert power, by plaguing us with messy complications. But it would be a considerable improvement for the rest of humankind.


3 – A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

In today’s reality, do machines, at least, know how to protect themselves? Facts prove that they don’t. In the overwhelming process of complication they are more and more subject to breakdowns, malfunctions and internal conflicts (added functions interfere with each other).

The idiotic chase for fake “innovation” makes it very difficult to fix them. When something doesn’t work, we are often told that the only solution is to throw away whatever we are using and replace it with something “new” (which will be more complicated, and therefore work worse and break down more often). Machines and technologies, in addition to not obeying us, are heading at increasing speed for collective suicide. The clutter of poorly managed scrap is already causing serious environmental problems.

Continuing on this course isn’t intelligence – artificial or natural. It’s stupidity – with a streak of human and mechanical masochism. Before it’s too late, let’s reconsider Asimov’s Laws and see how they can be applied.


In one of his later developments on this subject, Isaac Asimov realized that there was a need for a more general principle. So he added, at the top of the hierarchy, the “Zeroth Law”: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. The rest of the laws are modified sequentially to acknowledge this.
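
Continuing the same hypothetical sketch from above, adding the Zeroth Law means nothing more than putting one further check before all the others:

def allowed_with_zeroth(action: Action, harms_humanity: bool) -> bool:
    # Zeroth Law: the interest of humanity as a whole comes before everything else.
    if harms_humanity:
        return False
    # The three original laws then apply, in their usual order.
    return allowed(action)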

Right. Let’s do so. Let’s put real human needs above everything else. And let’s see if, by doing so, we can finally bring to obedience all these machines and devices, which would be much more comfortable and pleasant to use if they were really conceived and made to be at our service.




A footnote: is someone trying?

According to a news report in March 2007, the government of South Korea intended to set a rule that all robots being produced or designed should be programmed to obey Asimov’s Laws. There are no indications of that actually being done in Korea or anywhere else. And, of course, it shouldn’t apply only to “robots”, but to all technologies and procedures. Unfortunately it isn’t easy, considering how far things have gone in the wrong direction. But that isn’t a good excuse for not even trying.




These are the “funny stories” that made me think about this subject. They are interesting from that perspective, but also quite intriguing for other reasons why “things don’t work” – and for how even Asimov’s Laws can backfire when they aren’t properly applied or get unexpectedly mixed up with other programming. As happens quite often with all sorts of technologies.

 
(Isaac Asimov was quite aware of such potential problems. Some of his stories include intriguing examples of unexpected and embarrassing robot behaviors.)
 

A first series of five cartoons on this subject was published by J.D. Frazer (“Illiad”) from March 19 to 23, 2007.

[cartoons 1–5]
 
Copyright © J.D. Frazer “Illiad” – 2007


A single cartoon appeared ten days later, on April 3, 2007.

[cartoon 6]
 
Copyright © J.D. Frazer “Illiad” – 2007

But it wasn’t the end of this little saga. Five more cartoons followed on April 17-21, 2007.

[cartoons 7–11]
 
Copyright © J.D. Frazer “Illiad” – 2007
 

The two characters in the last cartoon, who appear in several “Illiad” episodes, are called Cthulhu and Hastur (derived from “horror” stories by Howard Phillips Lovecraft). They are ironic incarnations of “evil”, and here their role is somewhat confused.
 

At this point the story comes to an end. It seems that “Illiad” wants to leave the conclusion to the reader. What is going to happen? Whatever the outcome, was this just a mistake? Or was it a misguided “artificial intelligence” setting the machine to behavior unfit for humans? Who is going to get hurt? Are the “evil” monsters going to be victims of some misfiring scheme?

If we come back from fantasy to reality, all those options are possible. It’s easy to find, in many circumstances, that mischief, carelessness and stupidity often combine, causing all sorts of unexpected problems.

