
The Morality of Machines

Entry 1751, on 2015-11-19 at 23:07:50 (Rating 2, Philosophy)

I have noticed several times in the last few weeks that the subject of how "intelligent" machines will affect society has become more prominent. This has been particularly obvious in the context of self-driving cars, like the ones Google seem to have brought to a fairly advanced level of functionality.

Yes, Google have had self-driving cars on real roads, doing cross-country trips (I mean across the US, not off-road), with an accident rate far below that of cars driven by humans; in fact, the only accident reported so far was caused by a human driver in another vehicle. And other companies are getting into this area too. Some, like Tesla, are just offering automated aids to human drivers, while others, like Apple, are working on car projects whose exact nature we don't really know!

Now an interesting discussion is starting regarding the details of the behaviour of the AI (artificial intelligence) these cars use. First, there is the tedious legal detail of liability in the event of accidents; and second, there is the more interesting moral problem of how an AI should handle situations where it must choose the "least bad" response.

For example, if a self-driving car has a passenger (as it normally would) who is likely to be killed when the car swerves to avoid a group of pedestrians, is that OK? If two pedestrians would die if the car continued on its current course, is it OK to kill the one occupant by swerving and hitting a wall? In that situation I am imagining the outcome is certain, so I am swapping two lives for one. Many people would say that is OK.

But what about this: in the scenario above, if there was a 50% chance of the one passenger dying and a 30% chance of the two pedestrians dying, what do you do then? Is a higher chance of one person dying better than a somewhat lower chance of two? That is a harder decision to make, but many people would still go with saving the two and sacrificing the one.
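
To make the arithmetic behind that intuition explicit, here is a minimal sketch in Python, assuming a simple "minimise expected deaths" rule (the rule and the numbers are my illustrative assumptions, not how any real car decides):

    # Toy expected-fatalities comparison for the 50%/30% scenario.
    # The "minimise expected deaths" rule is an illustrative
    # assumption, not how any real car decides.

    def expected_deaths(prob_death, people_at_risk):
        """Expected number of fatalities for one course of action."""
        return prob_death * people_at_risk

    swerve = expected_deaths(0.5, 1)  # the one passenger at 50% risk
    stay = expected_deaths(0.3, 2)    # the two pedestrians at 30% risk

    print(swerve, stay)  # 0.5 0.6 -> swerving is the "lesser evil" here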

Let's change that slightly: if there was a 90% chance of the demise of the passenger but just 10% for the pedestrians, what about that? In that case most people would say the low risk of killing the pedestrians is worth it compared with the almost certain death of the passenger. But this seems to be a quantitative thing, because there will be a point where the preferred action swaps. Where is that point, and how can we justify it?
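
Under the same assumed rule the swap point falls out directly. This sketch (again purely illustrative) just compares the two expected-death figures:

    # Where does the preferred action swap, under the same toy rule?
    def prefer_swerve(p_passenger, p_pedestrians, pedestrians=2):
        # Swerve while expected passenger deaths are below expected
        # pedestrian deaths on the current course.
        return p_passenger < p_pedestrians * pedestrians

    print(prefer_swerve(0.5, 0.3))  # True:  0.5 < 0.6, so swerve
    print(prefer_swerve(0.9, 0.1))  # False: 0.9 > 0.2, so stay on course

    # The swap point is p_passenger == pedestrians * p_pedestrians:
    # with two pedestrians at risk, exactly twice their risk.

Of course this only answers "where is that point?" for one arbitrary rule; justifying the rule itself is the harder part of the question.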

Here's another thing to think about: if one manufacturer guarantees their car will always maximise the chance of survival of the passenger, but another gives equal weight to the survival of other road users, which car should I buy? Many people will think primarily of themselves, meaning most manufacturers will bias the response of the car towards saving the occupants.
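
To make that bias concrete, here is a hypothetical sketch where an invented occupant_weight parameter counts the passenger's life more heavily (no manufacturer publishes such a number, of course):

    # A hypothetical "manufacturer bias" knob: occupant_weight counts
    # the passenger's life more heavily than other lives.
    def prefer_swerve(p_passenger, p_pedestrians, pedestrians=2,
                      occupant_weight=1.0):
        occupant_cost = occupant_weight * p_passenger
        pedestrian_cost = p_pedestrians * pedestrians
        return occupant_cost < pedestrian_cost

    # An impartial car swerves in the 50%/30% case...
    print(prefer_swerve(0.5, 0.3, occupant_weight=1.0))  # True
    # ...but a car that counts its passenger double does not.
    print(prefer_swerve(0.5, 0.3, occupant_weight=2.0))  # False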

And if that is the case, who is to blame if some pedestrians are killed? Is it the owner, who deliberately bought a car willing to sacrifice other people? Or the car company, for creating a machine with that tendency? Or does the machine itself take some blame?

It has been shown quite clearly that people do not make logical decisions in these situations (see my blog entry "Would You Press the Button?" from 2013-07-16, where I discuss the famous trolley problem), so whatever a machine does it could potentially be a lot better than a human. But making a logical decision is often not seen as the best response. Will that human bias work against an intelligent machine?

I should say that with most current technology the machine isn't really making a free decision, because the outcome is entirely deterministic. On the other hand, many people (including me) would say that human thought is also deterministic, just at a much greater level of complexity. But it's usually quite easy to follow the logic of a computer program and see what decision it will make in any situation.

Because of this it's really the programmer who is making the decision, not the machine, which is just carrying out the program it has been given. But again, the same can be said of humans. The brain has been "programmed" by evolution and personal experience. Does the individual consciousness (whatever that is) really have free will? And so I get back to the old free will question again... but did I really have any choice?

-

Comment 1 (4450) by devicecoders on 2015-11-20 at 09:09:15:

I’d like to say we all have choices, but that’s not entirely true.

-

Comment 2 (4493) by OJB on 2016-05-19 at 13:43:49:

Whether we have genuine choices (free will) is a difficult question, and maybe the biggest difficulty is deciding exactly what "free will" means.

Here's how I think about it: if I rewind the universe back to a point where I made a decision in the past, could I make a different decision then? I think the answer is no (remember, I would have no knowledge of what my "original" response was, because it hasn't happened yet). If we keep making the same decision over and over, we are deterministic and have no free will, according to my definition at least.

-
