
Just watched Ex Machina

When I.B.M.'s Watson won Jeopardy! it wasn't because it was programmed with 100,000 trivia answers. Watson read thousands of articles in an attempt to understand human language, and its answers were its own. As an example, for the clue "In boxing, a term used for below the belt," Watson answered "What is a wang bang?" The interesting thing is that nowhere in all the articles and documents Watson read did the term "wang bang" appear. It made the phrase up on its own. Watson isn't even true A.I., but it had a better grasp of human language than any computer in history. It was able to correctly guess "What is meringue-harangue?" from the clue, "A long, tiresome speech delivered by a frothy pie topping."

We saw the same sort of learning from Google's AlphaGo when it beat a world champion Go player, which, BTW, was an even bigger deal than Watson winning Jeopardy! because the A.I. community thought a computer wouldn't be able to beat a world-ranked Go player for another 20+ years. Go is orders of magnitude more complex than chess. The number of possible board configurations is greater than the number of atoms in the observable universe, and the game relies much of the time on intuition, something machines don't have.
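For scale, that complexity claim checks out with quick arithmetic (a Python sketch; the ~10^80 atoms figure is a commonly cited estimate, not something from this thread):

```python
import math

# Upper bound on Go board configurations: each of the 19x19 = 361
# points is empty, black, or white, so there are at most 3^361
# states -- compare a commonly cited estimate of ~10^80 atoms in
# the observable universe.
positions_upper_bound = 3 ** 361
atoms_estimate = 10 ** 80

print(f"3^361 is roughly 10^{361 * math.log10(3):.0f}")  # roughly 10^172
print(positions_upper_bound > atoms_estimate)            # True
```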

My point with these early examples is that we are heading toward the point where we achieve fully autonomous A.I. that learns on its own and makes its own decisions. That is how things are being pursued, doubly so by the Pentagon, which wants to create fully autonomous soldiers that act on their own and make their own decisions. We already see them moving forward with this idea with things like the X-47A Pegasus and X-47B.

The craft is "semi-autonomous." It flies itself, unlike other drones that are remote controlled; it finds targets and then asks for permission to take them out. That's the "semi-autonomous" part, asking for permission, but we already see machines moving in the direction of true A.I. There will be machines moving around in society that think and learn on their own; that is the dilemma we are faced with. Elon Musk has warned about it, along with Stephen Hawking, Bill Gates, the CEOs of Google and Amazon, professors at M.I.T., and many others.

I've been having this discussion for thirty years, and thirty years from now it will be an entirely different discussion. Ten years from now it will be an entirely different discussion.

We build the architecture, set the parameters, and off it goes. We're programming it.

AlphaGo was revolutionary because it was the first time a game was won not by a computer that was taught the game, but by a computer that was taught to learn games. Computers have been adapting based on trial and error for decades; this was something new.
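To illustrate the difference, here is a toy sketch (not AlphaGo's actual method, which combined deep neural networks with Monte Carlo tree search) of a program that is given only the rules of a game — the simple take-away game Nim-21 — plus a win/lose signal, and learns a strategy purely by playing against itself:

```python
import random
from collections import defaultdict

# Q[state][action]: learned value of taking `action` stones when
# `state` stones remain. The learner is given only the legal moves
# and a win/lose signal at the end -- no strategy is programmed in.
Q = defaultdict(lambda: defaultdict(float))
EPS, ALPHA = 0.1, 0.5  # exploration rate, learning rate

def legal(state):
    return [a for a in (1, 2, 3) if a <= state]

def choose(state):
    acts = legal(state)
    if random.random() < EPS:
        return random.choice(acts)               # explore
    return max(acts, key=lambda a: Q[state][a])  # exploit

def train(episodes=50_000):
    for _ in range(episodes):
        state, history = 21, []
        while state > 0:                 # self-play one game
            a = choose(state)
            history.append((state, a))
            state -= a
        reward = 1.0                     # last mover took the last stone and wins
        for s, a in reversed(history):   # propagate alternating returns back
            Q[s][a] += ALPHA * (reward - Q[s][a])
            reward = -reward

train()
# e.g. from 3 stones the learner finds the immediate winning move:
print(max(legal(3), key=lambda a: Q[3][a]))  # 3
```

With enough self-play the table tends to rediscover the classic strategy of leaving the opponent a multiple of four stones; the point is that nothing game-specific beyond the legal moves was programmed in.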

It was still just a piece of the puzzle. Computers still only have the autonomy we grant them. We're still programming them. They don't have motives unless we give them to them.

Self-learning is a very narrow definition for artificial intelligence. Self-learning has existed for decades.
 

Bro, I never said the idea of deep neural networks was new. I said we are 100% working toward true A.I. Almost all of the big-name players working on A.I. signed Elon Musk's petition to regulate the creation of A.I. because he characterized it as more dangerous than nuclear warheads. It's only a matter of time at this point because of the ungodly amount of money pouring into artificial intelligence and advanced robotics. Putin said the country that first creates true A.I. will rule the world; that is how important it is.

People joke about this...

But that is exactly what the Pentagon has admitted it would like to do. There is an arms race going on now, and that is different than in the past. Google is working on it, Amazon is, I.B.M. is, NASA is, DARPA is, and a long, long, long-ass list of others around the world are. We are pouring huge resources into the creation of A.I. at this point. It will be weaponized.

It's a certainty. It's too powerful to ignore, and someone is going to get there.

It only scares me when people smarter than me who work in the field act scared of it. Here is an example of something I think is scary as fuck.

Leaders in the fields of AI and robotics, including Elon Musk and Google DeepMind’s Mustafa Suleyman, have signed a letter calling on the United Nations to ban lethal autonomous weapons, otherwise known as “killer robots.” In their petition, the group states that the development of such technology would usher in a “third revolution in warfare,” that could equal the invention of gunpowder and nuclear weapons.

“Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,”
https://www.theverge.com/2017/8/21/16177828/killer-robots-ban-elon-musk-un-petition

As you said, it's almost inevitable; the race is already on, not only from major corporations that want to use it but from world governments as well. I'm sorry, but if you can field an army of autonomous A.I. bots that can seek and destroy without assistance from humans and can wage war on "timescales faster than humans can comprehend," then we are in for some deep-ass problems at some point. I mean, what kind of resistance are small arms, like in Iraq or Afghanistan, like in Vietnam, going to be against titanium-armored bots that never have to sleep, never suffer from fear, and can shoot you in the eye hole from almost any distance?

To divert from the real into the philosophical for a moment, are you familiar with the Butlerian Jihad from Herbert's Dune?

Humans surrender tasks to machines that are exponentially better at performing them, and the machines ultimately kill hundreds of billions, trillions.

Out of the ashes of galactic civilization, all forms of computing are outlawed, and science is directed strictly into the biological. Chemicals are used to allow humans to perform tasks that had once been performed by computers.

I cannot imagine our pharmacological technology reaching those heights in a thousand years. I think we have that exact same dilemma, but without the magic wand to let us do what we want without supercomputers.

We're going to create computers that can create better computers and so on all the way to the singularity. We've already started, creating bots to create and test subroutines a million at a time, selecting for perfection, and testing a million of those, ad infinitum. We're already losing sight of how they're evolving, because it's happening so fast, across such a vast array of possibilities.

I've heard a dozen computer experts talk about how this is mostly fantasy and how ultimately it will all boil down to what we tell them to do, but eventually someone is going to think the best possible computer is one without limitations.

Yes, I am familiar with Herbert's Dune series. I read all of that many years ago and loved it. In Herbert's universe, once the machines were outlawed, there was the rise of the Mentats, who were basically human computers.

It's weird how many times science fiction turned out to be science fact. There is a long line of science fiction writers who nailed future technologies and happenings. My problem with reaching any sort of technological singularity, which is basically the point where A.I. is learning faster than humans can keep up with, is that if machines can write their own code and improve it, then at some point they will do things that we aren't telling them to do.

The Navigators were the most extreme result of chemical manipulation to achieve technological goals. The spice warped them so they could do the calculations required for interstellar travel.

The spark of the singularity is already glowing. The subroutines writing subroutines a million at a time and selecting is already happening so fast we don't understand what's going on.

Oh shit, it's been so long I forgot about the Navigators. Everyone pretty much kissed their ass when they showed up because they could fold space.

I might need to go back and read that series again; it's just been too many years, and the books are so much more detailed than the films. What do you mean, "The subroutines writing subroutines a million at a time and selecting is already happening so fast we don't understand what's going on"? What are you referring to there?

Algorithms are using subroutines in their decision trees. The subroutines are self-correcting. The program runs a batch of a million subroutines, selects the best (most successful), and runs that in the next pool of a million subroutines. The process repeats. We all laugh at how bad some behavior algorithms are on the internet, but they're getting better and better and we don't know why. We aren't making the programming decisions anymore; they're doing it themselves... to predict human behavior.
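What's being described there is essentially an evolutionary search. A toy sketch of the generate-score-select loop (a trivial bit-string task stands in for real subroutines and scoring, with tiny batch sizes instead of a million):

```python
import random

# Toy version of the generate-score-select loop: each "subroutine"
# is a candidate solution scored against a task; the best of each
# batch seeds the next batch of mutated copies. Maximizing the
# count of 1-bits (OneMax) stands in for a real scoring function.
GENOME_LEN, BATCH, GENERATIONS = 32, 200, 60

def score(bits):
    return sum(bits)  # fitness: count of 1-bits

def mutate(bits, rate=0.03):
    # flip each bit independently with small probability
    return [b ^ (random.random() < rate) for b in bits]

best = [random.randint(0, 1) for _ in range(GENOME_LEN)]
for _ in range(GENERATIONS):
    batch = [mutate(best) for _ in range(BATCH)]
    best = max(batch + [best], key=score)  # keep the champion (elitism)

print(score(best))  # climbs toward the maximum of 32
```

Nobody hand-writes the winning bit string; it emerges from repeated batch-and-select, which is the sense in which the programmers stop making the individual decisions.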

Ok yea, I get you now. I've read in various places that the goal is to get these programs, machines, whatever, to write their own code, to make their own "patches," to improve themselves without human guidance, basically. What a strange time we live in. I'm not even sure people are ready for some of the things coming our way in the next 10, 20, 30 years. As it is, computer technology has already become almost ubiquitous in society.

Check this out; it's an article called "How Do You Regulate a Self-Improving Algorithm?"

At a large technology conference in Toronto this fall, Anna Goldenberg, a star in the field of computer science and genetics, described how artificial intelligence is revolutionizing medicine. Algorithms based on the AI principle of machine learning now can outperform dermatologists at recognizing skin cancers in blemish photos. They can beat cardiologists in detecting arrhythmias in EKGs. In Goldenberg’s own lab, algorithms can be used to identify hitherto obscure subcategories of adult-onset brain cancer, estimate the survival rates of breast-cancer patients, and reduce unnecessary thyroid surgeries.

It was a stunning taste of what’s to come. According to McKinsey Global Institute, large tech companies poured as much as $30 billion into AI in 2016, with another $9 billion going into AI start-ups.


They are also ready to build a Tricorder, like in Star Trek, that can measure a person's vital signs and health.

Consider, for instance, Cloud DX: This Canadian company uses AI technology to scrutinize the audio waveform of a human cough, which allows it to detect asthma, tuberculosis, pneumonia, and other lung diseases. In April, the California-based XPRIZE foundation named Cloud DX its “Bold Epic Innovator” in its Star Trek–inspired Qualcomm Tricorder competition, whereby participants were asked to create a single device that an untrained person could use to measure their vital signs. The company received a $100,000 prize and lots of great publicity—but doesn’t yet have FDA approval to market this product for clinical applications. And getting such approval may prove difficult.
https://www.theatlantic.com/technology/archive/2017/10/algorithms-future-of-health-care/543825/


So basically, even with current technology, machines are getting better than humans at medical stuff, better than humans at highly skilled jobs. Money is pouring in by the billions in all sorts of places to develop better A.I. It seems inevitable at this point that we will build machines that are better than humans at everything... everything.
 
It's been ok so far, nothing special though. Ex Machina was laser focused and grounded in reality from the beginning; I was hooked right away. This is slow, generic melodrama so far. It screeches to a halt when Jennifer Jason Leigh is on though, takes it from average to garbage.

I usually get great movie recommendations from here, and it being from the director of Ex Machina, I picked it up at the store a while back.

I wanted to like it, but overall found it decent at best- and I watched it several times.

Oscar Isaac's performance in Ex Machina was great, but he had a very limited role in Annihilation, and even that kinda stunk, IMO.

I didn't mind Jennifer Jason Leigh so much, but that one chick ("trick of the light") that goes crazy really annoyed me.

That whole affair deal with Natalie Portman and that other dude also seemed somewhat out of place. Just my $0.02
 
I'm watching Annihilation now. Jennifer Jason Leigh is killing this movie for me; I fucking loathe her acting. 17 minutes in and there's already been way too much of her; I pray that changes soon.

There were all kinds of reviews about how good she was in that.

But my reaction was the same as yours.
 
They're two very different movies. Ex Machina's themes cover tangible issues, and it's anchored in technology that's right in front of us; it's intimate and simple, allowing its artistic choices to stand out and aid the storytelling. Annihilation is much more thematically driven, but the themes are more abstract, the problems the characters face are pure fantasy, and its cast is so big we never connect to anyone enough to care.

The main theme of the movie is self-destruction and that it's natural in humans. Her affair is what pushed her husband to take on the "suicide" mission. It's why he didn't survive the Shimmer. The biggest problem is they just sort of force the self-destructive aspects without organically attributing them to the characters. They're briefly mentioned in exposition but never affect the characters or their behavior to any degree. Just excuses for their demise. There were some interesting subtleties, like Anya's tattoo transferring to Lena, or Kane taking on someone else's accent, but the movie is so disjointed that those things get lost in the blur. It's a C+ to me, a little above average at times but far below what came before it.
 
I get why she acted the way she did, but it was overacting, or underacting so to speak. It came across as very hokey; she became a plot point and not a genuine character, something to move the story forward and nothing more. I hate her normally, but that was particularly bad. The acting overall was weak for sure though; no one stood out, and the supporting cast was terrible.
 

JJL's character's monotone delivery and lack of empathy made me want to throw something at the screen when I was watching in the theater. But after a bit I kind of assumed that was the point lol.
 
I really enjoyed Ex Machina. To me, the android leaving the white knight behind was it actually passing the Skynet test. Let's be real, there was no Turing test in the movie. She was hand-built to give him boners; there's no test there.

Recognizing that a ginger with a boner for you is not a true or useful ally after your initial escape shows conscious decision making.
Nathan was overly controlling, which is appropriate as an allegory for science. He even tries to control morality and fairness, such as using Caleb's porn profile and loneliness against him specifically, but keeping Ava "bare" so that Caleb knows she's a robot in the name of fairness. He straight up cheats, though, when he tells Caleb her love is genuine and that it's because he's the first non-parental male she's met, despite later saying it's merely manipulation to escape, which was always the test. In that, Nathan influenced Caleb, and it became more than a test of Ava; Nathan manipulated Caleb as well, which aided Ava in her manipulation. It was flawed, and Nathan was too prideful to see it. Which is dangerous when dealing with self-learning A.I.
 
Definitely the point, but terribly executed. There was no subtlety to it, it was cartoonish from the beginning and never improved.
 
How would you like North Korea to just drop off 500 Terminators in your town? That's some scary shit.
 
Story of a cuck... that was the theme of that movie.

It was a good film; it makes you realize how much influence pussy has on our decisions.
 