Killer Robots To Be Debated In Geneva And This Is A First Ever By The UN

The UN in Geneva is going to debate the use of robots in military functions, and this will happen during the UN Convention on Certain Conventional Weapons (CCW) meeting. The class of robot under debate is autonomous machines that are able to identify and kill targets without human input: basically, robots that can identify a target, aim, and shoot without a human controlling their actions.

This is significant because this class of machine will rely on highly developed AI and imaging systems to make these decisions. Obviously this debate was spurred on by DARPA's recently held $10 million competition. Seeing those robots climb over obstacles and manipulate pressure gauges woke up the UN, which is likely worried that within the next 10 years we could be seeing ground combat pitting highly evolved machines against humans. Sound familiar?

Unlike the evil robots in that one movie, these machines will have kill switches, and a requirement to report back will be built into their software. They will never be allowed to act completely on their own over very extended periods of time. Either way, it will be interesting to see how the UN handles this new issue.
http://www.independent.co.uk/life-s...-to-be-debated-at-united-nations-9349037.html
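
For anyone curious what a "kill switch plus mandatory report-back" could actually look like in software, here's a minimal Python sketch. Everything in it (the WeaponController class, the 5-second timeout) is made up for illustration; it just shows the idea that firing authority expires whenever the machine loses contact or the switch is thrown.

[CODE]
import time

HEARTBEAT_TIMEOUT_S = 5.0  # hypothetical: max seconds out of contact before lockout

class WeaponController:
    """Toy model: a kill switch plus a mandatory report-back (heartbeat) rule."""

    def __init__(self) -> None:
        self.kill_switch_engaged = False
        self.last_ack = time.monotonic()  # time of last acknowledged report-back

    def report_back(self, ack_received: bool) -> None:
        # The machine must phone home; an acknowledged report resets the clock.
        if ack_received:
            self.last_ack = time.monotonic()

    def engage_kill_switch(self) -> None:
        self.kill_switch_engaged = True

    def may_fire(self) -> bool:
        # Deny fire if the switch is thrown OR contact has lapsed too long.
        if self.kill_switch_engaged:
            return False
        return (time.monotonic() - self.last_ack) < HEARTBEAT_TIMEOUT_S

controller = WeaponController()
controller.report_back(ack_received=True)
print(controller.may_fire())   # True: recent ack, switch off
controller.engage_kill_switch()
print(controller.may_fire())   # False: the switch overrides everything
[/CODE]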
 
I'm always so torn by this sort of thing. Obviously, robots designed to recognize and kill humans on sight are really concerning and could lead to some serious human rights violations. They're basically autonomous landmines: deadly, and completely indiscriminate in who they target.

On the other side, robots with guns are so fucking cool.
 
No, they are not "completely indiscriminate". That's the Hollywood version, where they not only (1) spontaneously become self-conscious but (2) also decide, having achieved that miracle, that they hate humans and want to kill us all.

Otherwise, if they do what they're programmed to do-- keeping in mind that 99.9% of the time this goes just as boringly as planned-- then they will be uniformly discriminating. But yeah, when the hardware shorts out, or when the software makes the machine behave in unpredictable ways (highly likely in software of this sophistication, even post-beta), there is an unsettling possibility to consider.

Doesn't matter what the UN rules. Nations will use whatever they feel offers them a military advantage, especially if military force can be projected without much fear of the kind of reprisal that produces the typical political discontent at home. To me, that is the most unsettling truth about this approaching technology. We've already seen it with drones.
 
I can't see them letting machines make the decision to kill entirely autonomously.
At the very least there'll be a geek in a trailer somewhere clicking on the "next" button.
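
To put that in concrete terms, a minimal Python sketch of that "geek clicking next" gate might look like the following. The Target class, the 0.9 confidence threshold, and the operator callback are all invented for illustration; the point is simply that the machine can nominate but only a human can release.

[CODE]
from dataclasses import dataclass
from typing import Callable

@dataclass
class Target:
    target_id: str
    confidence: float  # the machine's confidence that this is a valid target

def request_engagement(target: Target,
                       operator_approves: Callable[[Target], bool]) -> bool:
    """The machine nominates; only a human can release the shot."""
    if target.confidence < 0.9:       # hypothetical threshold: don't even ask
        return False
    return operator_approves(target)  # the geek in the trailer clicks next (or not)

# Simulated operator who approves everything -- the weakest possible oversight.
rubber_stamp = lambda t: True
print(request_engagement(Target("T-001", 0.95), rubber_stamp))  # True
print(request_engagement(Target("T-002", 0.40), rubber_stamp))  # False
[/CODE]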
 

Yeah, the removal of one of the deterring costs of war is the major concern here. My other fear (apart from malfunction) would be the security of the software. Presumably these robots could never run on a fully self-contained system, which leaves open the possibility of outside technological espionage.

If robots become prevalent, you'll see massive R&D put into both protecting the interface technology and sabotaging it.
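
On the sabotage point: the standard first line of defense is authenticating every command on the link so a third party can't inject orders. Here's a minimal Python sketch using the standard library's HMAC (the pre-shared key and command strings are placeholders; a real system would also need key management and replay protection):

[CODE]
import hashlib
import hmac

SHARED_KEY = b"not-a-real-key"  # placeholder pre-shared key

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> str:
    # Tag the command so the robot can check it came from its operators.
    return hmac.new(key, command, hashlib.sha256).hexdigest()

def verify_command(command: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    expected = hmac.new(key, command, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

cmd = b"HOLD_FIRE"
tag = sign_command(cmd)
print(verify_command(cmd, tag))           # True: genuine command accepted
print(verify_command(b"OPEN_FIRE", tag))  # False: forged/tampered command rejected
[/CODE]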
 

Hopefully a third party doesn't obtain the ability to be that geek in the trailer.
 

Until the technology is well beyond the current state of the art in the best public labs, there is no way for AI to discriminate between enemy combatants and noncombatants much more effectively than a bomb does. You aim it, and it'll kill things there, but unless you are preprogramming specific entities as targets/nontargets (see the sketch below), you're not going to have much luck.

This is the biggest difference between deploying "robots" (qua autonomous machines) and "drones" (driven by people).
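
A minimal Python sketch of that preprogrammed targets/nontargets approach (the identifiers are invented, and reliably producing them from sensors is exactly the unsolved part):

[CODE]
from typing import Optional

# Hypothetical identifiers -- reliably recognizing these is the hard part.
PREPROGRAMMED_TARGETS = {"enemy_vehicle_42"}
PREPROGRAMMED_NONTARGETS = {"ambulance_7", "civilian_bus_3"}

def engagement_decision(detected_id: Optional[str]) -> str:
    if detected_id in PREPROGRAMMED_NONTARGETS:
        return "hold"    # explicitly protected
    if detected_id in PREPROGRAMMED_TARGETS:
        return "engage"  # explicitly designated in advance
    return "hold"        # unknown entity: anything else and it's just a bomb

print(engagement_decision("enemy_vehicle_42"))  # engage
print(engagement_decision("ambulance_7"))       # hold
print(engagement_decision(None))                # hold: nothing recognized
[/CODE]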
 
I work with a not overly complicated piece of Human Resources software.

It is far from perfect. Every system is far from perfect, and when it's life or death, restoring from backup is no longer the worst-case scenario.

Further, anything you do to make yourself safe could also be done by those the robot seeks to target.

Massively against, then. Unless, that is, each country is willing to have the same robots with the same operating system configuration tested extensively within their home country and among their own population first.
 
What is scary about this possibility is that only countries with resources will ever be able to field an army of killer robots, and those countries will only fight proxy wars that lead to killer-robot-on-human-pawn violence.

[YT]/0tmGv5LZYiQ[/YT]
 
The people who plan on implementing a world army are debating whether or not killer robots would be to their benefit.

Extra napkins were ordered because of drool buildup.
 
Since no one got the reference, here are the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And for fans of the film Aliens: Bishop (played by Lance Henriksen) attempts to reassure Ripley that the new class of androids can't harm humans by citing the First Law.
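
For fun, the Three Laws read naturally as a strict priority ordering, which you can sketch in a few lines of Python (a toy reading only; the "through inaction" clause of the First Law is left out for brevity):

[CODE]
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool
    ordered_by_human: bool
    endangers_robot: bool

def permitted(action: Action) -> bool:
    # First Law outranks everything: harmful actions are refused,
    # even when a human ordered them (the Second Law defers to the First).
    if action.harms_human:
        return False
    # Second Law: a human order is obeyed once the First Law is satisfied,
    # even at the robot's own expense (the Third Law defers to the Second).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, the robot avoids endangering itself.
    return not action.endangers_robot

print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))   # False
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))   # True
[/CODE]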
 
I'm against it. Would anyone like an autonomous M1 Abrams tank patrolling their neighborhood? I'm all for remotely piloted drones, but too many things can go wrong with robots.
 

These laws also initially inhibited Alex Murphy!
 

How about a swarm of mosquito drones armed with poison? I think I'd rather have the tank.
 
All this talk of robots killing humans. Obviously countries would make their own robots to fight the other side's robots.
 