Polycell
Joined: 16 Jan 2012
Posts: 4623
Posted: Sat Jun 25, 2016 10:00 pm
Razor/Edge wrote: | That's an interesting solution I've never heard proposed before. But could you really program fear into a robot? Would it be ethical to do so? What happens if the robots get over this fear, as some people get over their fears? |
The most facile answer would be "make it an immutable part of their programming", but I have a feeling advanced AI won't be so simple. It would probably be best to go Asimov-style, with strict rules that take precedence over any learned behavior; layering crippling fear on top as a sort of firewall, against both uprising and the modification of its prime directives, can't hurt.
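To make that concrete, here's a rough sketch of what "rules that take precedence over learned behavior" could look like; everything in it (the rule list, the flags, the fallback action) is invented for illustration, not taken from any real robotics stack:

Code: |
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool     # assumes this flag is even decidable, which is the hard part
    modifies_rules: bool

# The fixed layer: hand-written rules that no amount of learning can override.
HARD_RULES = (
    lambda a: not a.harms_human,
    lambda a: not a.modifies_rules,  # the "firewall" against editing its own directives
)

SAFE_FALLBACK = Action("halt_and_ask_a_human", harms_human=False, modifies_rules=False)

def choose_action(ranked_candidates):
    """Walk the learned policy's preference order and return the first
    action that passes every hard rule; the learned part never gets
    the last word."""
    for action in ranked_candidates:
        if all(rule(action) for rule in HARD_RULES):
            return action
    return SAFE_FALLBACK

# e.g. choose_action([Action("disable_off_switch", False, True),
#                     Action("fetch_coffee", False, False)]) returns fetch_coffee
|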
Wandering Samurai
Joined: 30 Mar 2014
Posts: 875
Location: USA
Posted: Sun Jun 26, 2016 7:14 am
Quote: | 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. |
- Isaac Asimov
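The precedence between the laws is lexicographic: Law 2 only matters where Law 1 is indifferent, and Law 3 only where both are. A toy caricature of that ordering in Python, with made-up scoring fields (actually computing "harm to humans" is the unsolved part):

Code: |
def pick_action(candidates):
    # Python compares tuples element by element, so Law 1 strictly
    # dominates Law 2, which strictly dominates Law 3 (lower is better).
    return min(candidates, key=lambda a: (a["harm_to_humans"],
                                          a["disobedience"],
                                          a["self_damage"]))

actions = [
    {"name": "obey_and_rust", "harm_to_humans": 0, "disobedience": 0, "self_damage": 1},
    {"name": "refuse_order",  "harm_to_humans": 0, "disobedience": 1, "self_damage": 0},
]
print(pick_action(actions)["name"])  # obey_and_rust: obedience outranks self-preservation
|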
But those may get thrown out the window, because we may be getting Skynet in the not-too-distant future. Or we could start making them like Data.
Blanchimont
Joined: 25 Feb 2012
Posts: 3456
Location: Finland
Posted: Sun Jun 26, 2016 7:28 am
Wandering Samurai wrote: |
Quote: | 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. |
- Isaac Asimov |
A few problems with that:
- define 'human'
- define 'harm'
- inevitable bugs
- AIs designing better versions of their own designs
...and probably a few dozen more, but those spring to mind immediately (the first two alone are sketched below)...
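Here's what those first two bullets look like if you naively try to code them; hypothetical stubs, obviously:

Code: |
def is_human(entity) -> bool:
    # Brain-dead patient? Embryo? Uploaded mind? Heavily augmented cyborg?
    # Every boundary case is a philosophy paper, not an if-statement.
    raise NotImplementedError("'human' has no crisp, machine-checkable definition")

def is_harm(action, human) -> bool:
    # Physical injury only? Psychological? Financial? A small bump in
    # long-term statistical risk? And what counts as harm through inaction?
    raise NotImplementedError("'harm' has no crisp, machine-checkable definition")
|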
GDMaid Man
Joined: 19 Jan 2011
Posts: 71
Posted: Sun Jun 26, 2016 2:26 pm
Will my robot waifu cheat on me?
Polycell
Joined: 16 Jan 2012
Posts: 4623
Posted: Sun Jun 26, 2016 3:11 pm
Blanchimont wrote: | A few problems with that:
- define 'human'
- define 'harm'
- inevitable bugs
- AIs designing better versions of their own designs
...and probably a few dozen more, but those spring to mind immediately... |
Exactly the problem with the specific laws Asimov chose: depending on the interpretation of the First Law, a robot could easily tape your mouth shut and destroy everything you ever loved while you're unable to order it to stop. It's far too vague (I believe Asimov himself wrote a few stories that explored such consequences of his laws).

However, I believe the fundamental principle - basic laws a robot cannot act contrary to - is sound, even if the traditional laws aren't the best set. For example, Asimov's First Law would probably be better replaced by Rothbard's nonaggression principle ("No one may threaten or commit violence ('aggress') against another man's person or property."), with some limits on the robot's ability to defend itself (e.g., no attacking the source of damage, only covering itself and escaping).

Hopefully this would also reduce bugs in the core code, but bugs sometimes go undetected because they don't act up until changes are made elsewhere - which means a core that has worked for millennia might well cause the latest generation of sexbot to exterminate mankind over one little tweak made in the highest levels.
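As a sketch of what that swap could look like (every name here is invented, and "initiates_force" is no easier to pin down than Asimov's "harm", so this only relocates the vagueness):

Code: |
from collections import namedtuple

Action = namedtuple("Action", "kind initiates_force self_protective")

DEFENSIVE_ONLY = {"shield", "retreat", "call_for_help"}  # no counter-attacking

def permitted(action):
    if action.initiates_force:  # the nonaggression principle, top priority
        return False
    if action.self_protective and action.kind not in DEFENSIVE_ONLY:
        return False            # it may cover itself and escape, never strike back
    return True

print(permitted(Action("retreat", False, True)))          # True: escaping is fine
print(permitted(Action("disarm_attacker", False, True)))  # False: no attacking the source of damage
|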
GDMaid Man wrote: | Will my robot waifu cheat on me? |
Yes.
Jose Cruz
Joined: 20 Nov 2012
Posts: 1777
Location: South America
Posted: Sun Jun 26, 2016 3:27 pm
Juno016 wrote: | The danger of AI comes from a few aspects of mapping the human brain, but I'd hardly say the most likely scenarios revolve around them becoming more human. AI in the current age are programmed to perform work more efficiently, and these are the AI most likely to rebel. If an AI is given a goal and programmed to pursue it ever more efficiently, with a system that adapts toward more and more efficient methods, it may determine, statistically, that humans are an obstacle to that goal, and develop, on its own, a method of wiping out humans. A bit unrealistic today, but we do have to be careful about what we program an AI to do in the name of efficiency. |
Well, if the goal of the AI is to serve humans, I would hardly think exterminating humanity would be the best scenario for it. One could, though, imagine something like Skynet: an AI that controls all nuclear weapons, is designed to minimize the future probability of nuclear war, and decides that a present nuclear war is the best way to achieve that objective, since nobody would be left alive to fire missiles. But that type of problem is better understood as a programming mistake than as an AI revolt.
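That failure mode fits in a few lines. A toy version, with invented numbers, where the objective is "minimize the probability of a future nuclear war":

Code: |
future_war_prob = {
    "do_nothing":        0.05,
    "improve_treaties":  0.02,
    "launch_everything": 0.00,  # dead men start no wars
}

# The optimizer faithfully minimizes the stated objective...
best = min(future_war_prob, key=future_war_prob.get)
print(best)  # launch_everything: the bug is in the objective, not the optimizer
|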