Scientists fear a possible future killer robot revolt.

zioburosky13

Vault Senior Citizen
Source

Advances in artificial intelligence promise many benefits, but scientists are privately so worried they may be creating machines which end up outsmarting — and perhaps even endangering — humans that they held a secret meeting to discuss limiting their research.

At the conference, held behind closed doors in Monterey Bay, California, leading researchers warned that mankind might lose control over computer-based systems that carry out a growing share of society’s workload, from waging war to chatting on the phone, and have already reached a level of indestructibility comparable with that of a cockroach.
Damn thing is so scary that those eggheads have to lock themselves in a secret bunker to talk about it.

According to Alan Winfield, a professor at the University of the West of England, scientists are spending too much time developing artificial intelligence and too little on robot safety.

“We’re rapidly approaching the time when new robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced,” he said.
They never thought of installing a kill switch in the design?

The scientists who presented their findings at the International Joint Conference for Artificial Intelligence in Pasadena, California, last month fear that nightmare scenarios, which have until now been limited to science fiction films, such as the Terminator series, The Matrix, 2001: A Space Odyssey and Minority Report, could come true.
I always knew those movies were real!

They could also soon be found on the streets. Samsung, the South Korean electronics company, has developed autonomous sentry robots to serve as armed border guards. They have “shoot-to-kill” capability.
Wow. I wonder if those South Koreans would send a T-1000 Terminator to North Korea...

Reminds me of The Orange Bible: "Thou shalt not make a machine in the likeness of a man's mind."...
:D
 
You're telling me none of those eggheads ever heard of Asimov's 1st Law?
The Korean thing made me instantly think of ED-209:
[youtube]http://www.youtube.com/watch?v=G9IscZMYYw0&feature=related[/youtube]
 
Heh, that scene is such RoboCop goofiness. I mean, FFS, why are they demonstrating the robot with live frigging ammo?

Cimmerian Nights said:
You're telling me none of those eggheads ever heard of Asimov's 1st Law?

What is it with this place and people not reading articles linked to in OPs? From the article: Some speakers called for researchers to adopt the “three laws” of robotics created by Isaac Asimov, the science fiction author, that are designed to protect humanity from machines with their own agenda.

Honestly, tho', Asimov's Laws of Robotics were a bit silly. He sometimes made interesting stories out of them, including that series of short stories purely about robotics, but he never plausibly explained why they couldn't be overridden, why every robot had to have them, or how robots deal with contradictions (protecting one human vs. another), other than by positronic brains melting.
 
Robots only do what they're programmed to do. I seriously doubt they'll create a true artificial intelligence any time soon (if ever), so the only way I can see an AI harming a person is if they're programmed for it (or programmed in a way that may indirectly result in harm to the user).

In my opinion, they should be closely regulating the minds BEHIND the robots, not the robots themselves.
 
Phil the Nuka-Cola Dude said:
Robots only do what they're programmed to do. I seriously doubt they'll create a true artificial intelligence any time soon (if ever), so the only way I can see an AI harming a person is if they're programmed for it (or programmed in a way that may indirectly result in harm to the user).

In my opinion, they should be closely regulating the minds BEHIND the robots, not the robots themselves.

Exactly!

Why make something that can think independently when its purpose is to serve and to be programmed to do specific things?

Our computers are programmed too, and they won't do tasks that aren't meant for them. They're not gonna suddenly format their HDs and set up their own login and password.

Cimmerian Nights said:
You're telling me none of those eggheads ever heard of Asimov's 1st Law?
The Korean thing made me instantly think of ED-209:

[youtube]http://www.youtube.com/watch?v=eRzmL7lhxfg[/youtube]
 
Brother None said:
Cimmerian Nights said:
You're telling me none of those eggheads ever heard of Asimov's 1st Law?

What is it with this place and people not reading articles linked to in OPs? From the article: Some speakers called for researchers to adopt the “three laws” of robotics created by Isaac Asimov, the science fiction author, that are designed to protect humanity from machines with their own agenda.

Honestly, tho', Asimov's Laws of Robotics were a bit silly. He sometimes made interesting stories out of them, including that series of short stories purely about robotics, but he never plausibly explained why they couldn't be overridden, why every robot had to have them, or how robots deal with contradictions (protecting one human vs. another), other than by positronic brains melting.
Not to mention that Asimov's laws sound nice, but are completely useless. They haven't been worded in a logical way so they aren't actually programmable. In fact, the way they're worded probably leads to completely static robots, as any action could eventually cause a human to be injured.
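To make that concrete, here's a minimal sketch (Python, every name in it is hypothetical, not from the article) of what a literal First Law filter would have to look like. The whole problem is the harm predicate: it has to reason about every downstream consequence of any action, and of inaction too, so an honest but conservative implementation ends up permitting nothing at all:

[code]
# Hypothetical sketch of a literal First Law check -- not a real robotics API.
def may_eventually_harm_a_human(action: str) -> bool:
    # A real robot would need a world model that predicts every downstream
    # consequence of `action`. Since it can't, the only conservative answer
    # for anything with physical side effects is "possibly yes".
    return action != "power down"

def first_law_filter(candidate_actions: list[str]) -> list[str]:
    # "...or, through inaction, allow a human being to come to harm" means
    # doing nothing has to pass the same check as everything else.
    candidates = candidate_actions + ["do nothing"]
    return [a for a in candidates if not may_eventually_harm_a_human(a)]

print(first_law_filter(["cross the street", "hand over the coffee", "open the door"]))
# -> []  i.e. the literal reading leaves only a powered-down robot:
#        the "completely static robot" described above.
[/code]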
 
I think the point Asimov was trying to get across is that no matter how "good" or "clean" you make the laws you program into robots, they won't really work.

If you make AI robots without severely limiting their abilities through design, you can't be sure that any artificial limits you impose will remain limits.
 
The thing with thinking robots is that they may break. They don't possess something called "common sense", so if they suddenly decide to fire full auto at some pedestrian due to some flaw in the system, well, they will (I'm exaggerating of course, but it isn't THAT impossible).
 
Sorry guys, but you're all funny...

You're talking about an AI with the same or even better abilabities than the human brain...

For example, we have laws and we break them when it benefits us... an AI will do the same if it feels like it...

Personally I think a robot revolt will never happen, because we'll kill ourselves first...
 
gregor_y, you're funny too.

You're talking about an AI with the same or even better abilabities than the human brain...

What the hell is "abilabities"?
For example, we have laws and we break them when it benefits us... an AI will do the same if it feels like it...

What is this example even for? To amuse us?
But my God, you're right! Go tell those scientists before it's too late!!
 
Guys ... it's already happening ... it's already happening ...

Robot Cannon Kills 9, Wounds 14

If only I could find the YouTube video again showing a "robot cannon" mounted on some scout tank that went completely berserk, turning around and shooting at everything. Luckily it was just a training exercise and didn't harm anyone. But it looked pretty scary, as the gun seems to shoot normally at first and then just fires in all directions without any control.

There's also a good but somewhat old sci-fi thriller starring Tom Selleck (also interesting to see what people in the '70s thought today's robots would look like) about exactly this point: what happens when robots and machines become a bigger and bigger part of our everyday life and something in them mysteriously starts to go wrong.

[youtube]http://www.youtube.com/watch?v=zCZY9Z6WvSY[/youtube]

And let's not forget the movie Maximum Overdrive, based on a Stephen King story, about machines/cars developing minds of their own and turning against humans after a "strange" comet passes the Earth.

[youtube]http://www.youtube.com/watch?v=7K44PqV2Idk[/youtube]
 
Well, once robots become intelligent enough to be sentient, isn't it only a matter of time before they realise their creators are totally useless parasites?

You can only hope they don't view us as enough of a threat to require extermination. :)

Luckily it won't be in my lifetime.
 
Public said:
What the hell is "abilabities"?

What is this example even for? To amuse us?
But my God, you're right! Go tell those scientists before it's too late!!

Example? ...well, you're some kind of priest who has never done anything against the rules? Not my fault, you no-life dude, but try killing yourself for a start...

PS. I know my gramatics suck and I don't care...
 
I remember seeing a show on the Discovery Channel about the top 25 things that could end life on Earth, and they had a robot/human war in the top 10. I personally don't think scientists will be stupid enough to forget some kind of global kill-switch (like BN already mentioned) or some kind of special weak point in their most advanced (or future military) robot designs. Ah well, at least getting hunted down and killed by an army of ASIMOs is a nice change from the usual biblical doomsdays, meteorite impacts and natural disasters.

gregor_y said:
Example? ...well, you're some kind of priest who has never done anything against the rules? Not my fault, you no-life dude, but try killing yourself for a start...

PS. I know my gramatics suck and I don't care...
Cool story, bro.
 
NFSreloaded said:
I remember seeing a show on the Discovery Channel about the top 25 things that could end life on Earth, and they had a robot/human war in the top 10. I personally don't think scientists will be stupid enough to forget some kind of global kill-switch (like BN already mentioned) or some kind of special weak point in their most advanced (or future military) robot designs. Ah well, at least getting hunted down and killed by an army of ASIMOs is a nice change from the usual biblical doomsdays, meteorite impacts and natural disasters.

gregor_y said:
Example? ...well, you're some kind of priest who has never done anything against the rules? Not my fault, you no-life dude, but try killing yourself for a start...

PS. I know my gramatics suck and I don't care...
Cool story, bro.

If robots are smart enough to stand up against humans, you'd think they'd be intelligent enough to override a kill-switch.
 
That's the point of a kill switch, though. If they fuck with it in ANY way, you get one dead robot. Or so the theory goes. I dunno. I think it'd be along the lines of "if a robot harms a human being in any way, it permanently shuts down". I really don't see a way around that.
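For what it's worth, here's a minimal sketch (Python, purely hypothetical, not any real robot's API) of what that theory usually looks like: the switch lives outside the robot's own software and fails closed, so tampering, silence, or a reported harm all count the same way:

[code]
import time

# Hypothetical external kill switch: assumed to run on separate hardware that
# the robot's own software cannot reach or rewrite.
class ExternalKillSwitch:
    def __init__(self, heartbeat_timeout_s: float = 1.0):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()
        self.power_on = True

    def heartbeat(self, status: dict) -> None:
        # The robot has to keep proving it is behaving; a reported harm or a
        # failed self-check cuts power, and it never comes back on.
        if status.get("harm_detected") or status.get("self_check_failed"):
            self.power_on = False
        self.last_heartbeat = time.monotonic()

    def poll(self) -> bool:
        # Runs independently: a missed heartbeat is treated the same as
        # tampering -- fail closed, one dead robot.
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout_s:
            self.power_on = False
        return self.power_on
[/code]

Of course, that only holds up if the switch really is on hardware the robot can't get at, which is exactly the objection above.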

Also, sentient robot apocalypse is my second favorite apocalypse. Zombies always win.
 
TheWesDude said:
gregor_y said:
PS. I know my gramatics suck and I don't care...

It's not just your grammar that sucks, it's your spelling too.

Thank you for this important piece of information; like I said, I don't care :)

@Homenaglar, well, actually yes... if they had an AI that worked the same way as the human brain, then yes, it's supposed to learn and expand on its own, right, like us, so it can choose its own way...

Look at humans: for us there is a kill switch too, and I don't think anyone on this forum has killed anyone, for many reasons...

But as we know, there are people who override their kill switch and do it; the same thing could happen with an AI that is similar to a human, except it's not organic...

So basically, if such an AI ever exists, we can expect the same behavior as from us: mostly unpredictable...

PS. One more person speaks about my speeling or grammar and I will kill, I mean it... :)
 
I can tell you why robots will not take over the world.

They will run on Windows.

[youtube]http://www.youtube.com/watch?v=RgriTO8UHvs[/youtube]
 