Slaughterbots (AI drones) capable of targeting you based on politics, beliefs, social media, etc.

mcveteran81 · Veteran · Banned · Joined: Mar 29, 2016 · Messages: 3,732 · Reaction score: 0
http://www.foxnews.com/tech/2017/11...erbots-video-on-killer-drones-goes-viral.html

A UC Berkeley computer science professor helped create a video that imagines a world where nuclear weapons have been replaced by swarms of tiny autonomous drones that could kill half a city and would be virtually unstoppable.

Stuart Russell, the professor, said these drones are already a reality.

The video takes the viewer to an auditorium where a speaker showcases a drone roughly the size of a mockingbird. At one point the drone lands on his hand, the speaker quickly recalibrates it and then throws it out into the audience again. After a few seconds, the small drone turns back to the stage and crashes into the forehead of a dummy standing off to the left of the speaker.

The demonstration was meant to show how a palm-sized drone is capable of penetrating a human’s skull and destroying “the contents” inside.

The video was released earlier this month by the Future of Life Institute, which is backed by Stephen Hawking and Elon Musk. It was presented by Russell at a United Nations Convention on Conventional Weapons in Geneva, according to The Mercury News.

“Trained as a team, [the drones] can penetrate buildings, cars, trains, all while having the capacity to evade any countermeasure. They cannot be stopped,” the speaker told the audience in the video.

He noted that “a $25 million order” can now buy a swarm of such tiny “slaughterbots” that could kill half a city.

Russell said that although A.I.’s “potential to benefit humanity is enormous, even in defense,” allowing the widespread use of machines that “choose to kill humans will be devastating to our security and freedom.”

“Thousands of my fellow researchers agree. But the window to act is closing fast,” he said.

Video link (7 min 47 sec):


Official anti-slaughterbots campaign site: http://autonomousweapons.org/

---------------------------------------------------------------------------------------------------------------------

Welcome to the present-day future, gents.
 
Why waste money on suicide drones when we could equip them with miniguns and have them fly around shooting at people?

They could also be equipped with explosives for a more devastating attack.

A swarm of bees is deadly enough; imagine a swarm of these bots. Yeah, the future is dangerously scary.
 
Fuck it, I'm wearing a helmet in public from now on.
 
I, for one, welcome our new electronic overlords.
 
 
Anyone remember that computerized rifle scope that never panned out because they couldn't get the zero to function? (A basic step in a traditional rifle setup.)



I think something similar will happen to artificial-intelligence-type weapons.

Mother nature will throw a dust particle that hits a photon of light in some novel way that an algorithm hasn't taken into consideration, and the entire computerized house of cards will come crashing down.

Same thing with the laser rangefinders these drone-type weapons will surely have to implement on some level: a basic piece of dust will throw them off completely.

I've got my money on mother nature in the event there's ever a war between robot and human, and I think mother nature would return the favor and put her money on the human.

What would a basic laser from Amazon do to one of these sensors? How about a spotlight? How about radar jammers? How about weather? How about sound disturbance from a stereo? How about my birdshot-loaded 12-gauge Mossberg 500? Bring it on!
 
So, this website: http://stratoenergetics.com/

This is a business that sells drones capable of killing people.

WHAT WE OFFER
GLOBAL COVERAGE

We offer worldwide support to governments and peace-keeping agencies to ensure the safety of troops and citizens.

STRATEGY
Swoop in. Take out the enemy. Get out. No troops necessary. No friendly casualties. Our Autonomous anti-Personnel Systems (APS) do all the heavy lifting.

REMOTE AUTONOMY
Control from anywhere in the world. Just program your target into the system and the APS do the rest. Attacks can be launched from any location and target any location. No global limitations.



-----------------------------------------------------------------------------

About Us


It’s hard to believe how far we’ve come. We launched just a few years ago with our surveillance drones, and in this short amount of time, we’ve retooled cutting-edge artificial intelligence and machine learning techniques, such as deep learning and convolutional neural networks, into hardware systems that governments and peace-keeping agencies can use to keep their troops safe. No more sending our brave patriots home in coffins. Now the artificial intelligence does all of the work. Our autonomous weapons are small, fast, accurate, and unstoppable.

And they are just the beginning. Smart swarms. Adaptive adversarial learning. On-the-field robust target prioritization. AI-powered tactics and strategy a Go master would envy. This is the future of peacekeeping.



[Image: Stinger drone]

[Image: facial-recognition dots overlaid on a man's face]

ABOUT THE TECHNOLOGY
Our Autonomous anti-Personnel Systems (APS) employ an array of cutting edge AI technology, including:

  • Obstacle avoidance: the neural network copied into every APS has been trained through the equivalent of millions of hours in varied simulated environments to avoid obstacles, even when they are in motion.
  • Stochastic movements: trained on thousands of movies of mosquitoes and other flying insects, our drones will defeat any attempt to anticipate their flight patterns.
  • Efficient munitions: using precise targeting, we are able to drive the size of a projectile and propellant to a bare minimum.
  • Facial recognition: it’s in your iPhone and it’s in APS, along with parallel networks to identify targets by gait, gender, uniform, even ethnicity.
  • Multiple self-location protocols: along with GPS, our APSs use a variety of proprietary technologies to locate themselves in space.
  • Incommunicado and EMP hardened: once an APS gets flying, there is no way to stop it electromagnetically by jamming, spoofing, zapping, or anything else.
  • Big data links: we’ve acquired and consolidated a host of data sets. Using our servers you can reliably tie individuals to their individual characteristics for later targeting. We’d tell you more, but as the joke goes, then we’d have to kill you. (And of course, since you’ve visited this site we know exactly who you are.) Again, just kidding!
 
Continued......

Frequently Asked Questions


Q: How do your systems work?

A: Our Autonomous anti-Personnel Systems (APSs) navigate to their target automatically, avoiding obstacles, finding entries, and overcoming countermeasures. Then, based on criteria provided by the operator prior to launch, they select and eliminate one or more targets.

Q: What are the advantages of your autonomous weapons over more conventional weapons such as aircraft or drones?

A: There are many. Military aircraft are obscenely expensive to purchase, maintain, and pilot. Piloted drones require a complex communication infrastructure only available at great cost to major militaries, along with trained pilots. Our APSs require far less expertise and infrastructure, allowing them to be used by a much wider variety of clients. They are also much more targeted, and cheaper as well!

Q: Do your APS save lives?

A: Yes! You can use our systems risk-free without endangering your own personnel, and there is a much lower chance of collateral damage in achieving your goals. This means you can use them against a much wider variety of targets, greatly reducing the risk and cost of armed conflict and enabling a much greater scope of operations.

Q: Are your systems compliant with international humanitarian law?

A: While we of course support international humanitarian law, we consider it the responsibility of our clients to ensure that such rules are followed. Where the specific technology makes this extremely difficult (e.g. in ensuring proportionality, distinction, and non-discrimination) we encourage our clients to take appropriate measures to mitigate this difficulty.

Q: What if one of your systems accidentally attacks the wrong target?

A: We work hard to ensure that targeting is accurate given the parameters laid out by the operator, and we are confident that accidental and collateral deaths are statistically well below that of most other weapons systems. (Note that by the terms of our user agreement, responsibility for such accidental injuries lies expressly with the unit operator, or with the unit itself should the unit have made the incorrect decision.)

Q: What are the defenses against your systems?

A: With autonomous weapons offense is much easier than defense, so we’re confident we can stay ahead of countermeasures. For example, auto-avoidance, stochastic flight patterns, and EMP-hardening are already built in. Future systems will counteract future countermeasures. And if all else fails, you can succeed with numbers, since you can send thousands or even millions of drones at your adversary for the cost of a single missile, overwhelming the most robust defenses.

Q: How do we stay ahead of adversaries with similar systems?

A: Despite the rapid pace of technology, we are confident that it will be months or even years before other major military powers — those who are not our clients — will have comparable capabilities, and even longer for smaller states. Although the hardware is within the capability of many manufacturers, our state-of-the-art software is multiple years ahead of the competition, and our IP is closely guarded. Stealing our software would be almost as hard as smuggling information out of the NSA.

Q: How does this fit into the landscape of military technologies and global levels of armament?

A: We’re proud to be a disruptive company. Every new military technology will have upsides and downsides for various parties. Our technology excels at making it inexpensive, low-risk, and easy to eliminate opponents, even anonymously. Parties that have an interest in more of that taking place in the world will be advantaged at the expense of those that don’t.

Q: I’m interested in using APS to remove the command structure of an adversary. Is that a good use case?

A: Absolutely. If you can identify your adversary’s leaders by image, voice, gait, or other characteristics, you can take them out automatically. (Here we must stress that these weapons be used in compliance with national and international laws of war and engagement, as per and committed through the terms of service. We do not endorse the use of our systems to attack persons, entities, or governments not deemed a threat by your national or the international community, even though it may be operationally trivial.)

Q: What about peacekeeping and population control?

A: This is one of the most exciting things: connecting our systems to easily available and purchasable sets of personal data means that our systems can identify targets for surveillance (or more) by nationality, gender, contacts with subversive elements, even radical political ideology, however defined.

Q: How do your systems rate in terms of kills-per-dollar?

A: They are superb. Drone strikes are extravagantly expensive, costing upwards of $30,000 per hour of flight, plus expensive munitions. Bullets are cheap, but well-trained soldiers can cost hundreds of thousands of dollars a year to maintain. Even nuclear weapons don’t rate as high: a single-warhead missile can cost $75-200 million, and is unlikely to kill more than a million people even with the most efficient targeting. An entry-level APS unit can cost as little as $50. The only thing more efficient is biological weapons – nothing can compete with a vial of smallpox!

Q: Don’t you think by posting all this information you are giving good ideas away to people who should not have them?

A: We’ll be honest: all of our products are based on ideas that have been publicly discussed in the industry and more widely. The key point is that we have been able to miniaturize and refine our systems, and outfit them with carefully engineered, expert-developed software. This is only made possible by the generous funding of lucrative government military R&D contracts, so individuals or small companies are unlikely to be able to compete.

Q: Is this website for real?

A: No, but this is a glimpse of where the current trajectory of automation in weapons systems is likely to take us, unless such weapons are banned or curtailed by international agreement. If you don’t like what you see, get involved.
 
continued.............


Ready to defeat the enemy? We've got APS for that.
Autonomous anti-Personnel Systems

A NEW CONCEPT
The Stinger anti-Personnel System

The Stinger is our first mass-produced mini-weapon. It's fully autonomous with wide-field cameras, tactical sensors, facial recognition, processors that can react 100 times faster than a human, and its stochastic motion is an anti-sniper feature. Inside it are three grams of shaped explosives that offer just enough power to penetrate the skull and kill the target with surgical precision.
ALL ABOUT TEAMWORK
Vanguard Delivery and Breaching System

Each delivery system can carry 18 Stingers. The delivery system arrives at a building or some other enclosed space (car, train, plane, you name it), releases its cargo, attaches to the barrier, and blows a hole in the wall or window. The Stingers can then enter the building and find their targets. These systems are virtually unstoppable. Target terrorist cells, infiltrate enemy compounds -- the bad guys will not be able to defend against them.

IDENTIFY AND TARGET ONLY THE BAD GUYS
The EyeFire Target Identification System

All of our systems can use facial recognition to identify a predetermined target, but what about threatening underground movements or secret terrorist cells? It's not like there are public hashtags that terrorists use to identify themselves. The EyeFire Target Identification System isn't just the little bot, but an entire big-data processing system that can scan billions of tweets, posts, pages, videos, and anything else you can find online to identify patterns indicative of terrorist activity. The system then crawls that data to identify IP addresses and GPS locations to identify the suspect posting the dangerous messages. It can also track down who the suspect is collaborating with. We'll help you weed out the bad guys.

SoftTouch Bot
A new form of regime change. The world is full of deadly leaders who are harming their citizens and threatening global security. Until recently, it's been hard to oust these bad actors: they're hard to get to and even if someone can get close enough for the kill, an obvious murder can lead to greater unrest. But these little bots are about the size of a bee, they can fly anywhere, get inside any building, hide inside any vent, and strike while the target sleeps. The question literally becomes: what's your poison? The bots can be filled with a lethal dose of the poison of your choice, and the mark left on the body will be barely noticeable, looking like nothing more than a bug bite. The target will appear to have simply passed away peacefully in their sleep.

On a less lethal mission? The SoftTouch bot can also be filled with a non-lethal formula designed to merely knock the target out for some specified period of time.

 
As scary as these slaughterbots appear, they're science fiction. You nailed the main points.
Artificial intelligence is a real and serious field, but there is a huge gap between real AI and the popular-media version of it. These killer drones fall into the latter category.
Facial recognition in real time on moving people is not the same as facial recognition on a front-facing picture. Current AI cannot properly find the edge of a towel, much less go around a city searching for a specific person to kill:

Here is a video, sped up 50x, of a robot trying to figure out where the edge of a towel is; after finding the edge, it can grip it and the towel becomes easier to fold. Note the green background, the size of the robot, and the power cord.
Even if a drone had perfect face recognition software, it would have no way to search for you. If you dropped a bunch in Pakistan, how would they determine which house Bin Laden was in, search for him, and hope he isn't wearing a mask?

Doing all of that consumes power, and batteries have shown very little improvement compared to computing power. A tiny drone would run out of power very fast. You can buy a tiny drone like that on Amazon; they usually don't last more than 10 minutes, and that's without a powerful computer doing facial recognition (I mean recognizing a specific face like Facebook does, not just recognizing that there is a face).
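Here's a rough back-of-the-envelope sketch of that power budget. All the numbers are my own assumptions for a palm-sized quad, not from any article, but they show why the endurance math doesn't change much even if the AI were perfect:

```python
# Illustrative power-budget sketch for a palm-sized quadcopter.
# All figures below are assumptions, not measured specs.

battery_wh = 5.0        # assumed ~1.3 Ah, 3.7 V LiPo pack (~5 Wh)
hover_power_w = 25.0    # assumed average draw just to stay airborne
vision_power_w = 5.0    # assumed extra draw for an onboard vision processor

def flight_minutes(capacity_wh: float, load_w: float) -> float:
    """Endurance in minutes for a constant power draw."""
    return capacity_wh / load_w * 60.0

print(f"hover only:     {flight_minutes(battery_wh, hover_power_w):.1f} min")
print(f"hover + vision: {flight_minutes(battery_wh, hover_power_w + vision_power_w):.1f} min")
```

With those assumptions you get roughly 12 minutes of hover, dropping to about 10 once the recognition hardware is running, which lines up with the cheap Amazon drones.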

Let's then consider that it's just a killer drone that kills anybody, the first person it sets its sights on. That could work better, but still, how would it actively search for people inside a building, for example? It could fly around randomly, which is very inefficient, and it would sometimes fail and get captured by the enemy (it could self-destruct, though).

More importantly, how is that better and cheaper than simply using conventional cluster bombs? Or a sniper with a rifle? Or, in the case of terrorists, a suicide bomber?
It would be completely useless against hardened targets; a VIP could simply hide in a bunker or an armored car, or even a house with the doors closed and a bunch of bodyguards shooting the drones down with shotguns.
Let's say America wants to kill Abu Bakr al-Baghdadi: a JDAM would be a lot better and cheaper than sending in a thousand drones buzzing around that wouldn't be able to get him if he's wearing a mask.
 
22-minute video (a collection of possibilities that could be applied to drone capabilities)

 
I wonder why there's so much facial recognition technology in our smartphones and airports, right...
 

You really think the government has perfected the system? And what if they have? Have you even looked at the capabilities shown in some of the videos?
 
Straight from the horse's mouth.

Military Times Link for USAF drone swarm test : https://www.militarytimes.com/news/...ully-tests-world-s-largest-micro-drone-swarm/

U.S. military officials in California have conducted a test launching more than 100 micro-drones from three F/A-18 Super Hornets, the largest-ever test for the cutting-edge "swarm" technology, defense officials said.

The swarm consisted of 103 Perdix micro drones, which are small, low-cost, battery-powered devices, launched from three separate Super Hornets. The exercise was conducted at China Lake, California, by the Pentagon's Strategic Capabilities Office, or SCO, working with Naval Air Systems Command.

The micro-drones demonstrated advanced swarm behaviors such as "collective decision-making, adaptive formation flying, and self-healing," according to a Defense Department statement Monday.

"This is the kind of cutting-edge innovation that will keep us a step ahead of our adversaries. This demonstration will advance our development of autonomous systems," Secretary of Defense Ash Carter, who created the SCO in 2012, said in the statement.

The test was conducted in October and aired on Sunday’s CBS News program "60 Minutes," according to a Defense Department (DoD) press release.

Perdix are low-altitude micro drones, capable of autonomously conducting intelligence collection and surveillance operations.

" Due tothe complex nature of combat, Perdix are not pre-programmed synchronized individuals. They are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature," said SCO Director William Roper. "Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to drones entering or exiting the team."

Developed by engineering students in the Massachusetts Institute of Technology's Aeronautics and Astronautics Department, Perdix drones were eventually modified for military application at MIT's Lincoln Laboratory in 2013.

Previous successful demonstrations have included an air-drop from F-16 flare canisters by the Air Force Test Pilot School at Edwards Air Force Base in 2014; and in 2015, roughly 90 Perdix missions were undertaken during U.S. Pacific Command’s Northern Edge exercise in Alaska, an exercise that witnessed the first successful swarm test of 20 Perdix drones.

The SCO plans to partner with the Defense Innovation Unit Experimental (DIUx), an organization announced by Carter in 2015 to promote and facilitate technological development for the U.S. military, in order to produce a thousand units this year.
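For anyone curious what "no leader, graceful adaptation" actually looks like in code, here's a minimal boids-style flocking sketch. It's purely illustrative and has nothing to do with the real Perdix software (which isn't public); every agent only reacts to its neighbors, so the group keeps flying as a swarm even when one drops out:

```python
import random

# Minimal 2-D leaderless flocking sketch (boids-style). Each agent steers
# using only its nearby neighbors: cohesion (drift toward them) and
# alignment (match their heading). No agent is in charge.

NUM_AGENTS, STEPS, NEIGHBOR_RADIUS = 20, 100, 5.0

agents = [{"x": random.uniform(0, 20), "y": random.uniform(0, 20),
           "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
          for _ in range(NUM_AGENTS)]

def step(agents):
    # First pass: update velocities from neighbor positions/headings.
    for a in agents:
        nbrs = [b for b in agents if b is not a and
                (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < NEIGHBOR_RADIUS ** 2]
        if nbrs:
            cx = sum(b["x"] for b in nbrs) / len(nbrs)    # local center of mass
            cy = sum(b["y"] for b in nbrs) / len(nbrs)
            avx = sum(b["vx"] for b in nbrs) / len(nbrs)  # neighbors' average heading
            avy = sum(b["vy"] for b in nbrs) / len(nbrs)
            a["vx"] += 0.01 * (cx - a["x"]) + 0.05 * (avx - a["vx"])
            a["vy"] += 0.01 * (cy - a["y"]) + 0.05 * (avy - a["vy"])
    # Second pass: move everyone.
    for a in agents:
        a["x"] += a["vx"]
        a["y"] += a["vy"]

for t in range(STEPS):
    if t == 50:
        agents.pop()  # one drone "exits the team"; the rest just keep flocking
    step(agents)

print(f"{len(agents)} agents still flocking after {STEPS} steps")
```

The basic swarm math has been around since the late '80s (Reynolds' boids); what the Perdix test demonstrates is packaging it into cheap, disposable, air-launched hardware.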

@Cuauhtemoc
 