When I.B.M.'s Watson won Jeopardy, it wasn't because it had been programmed with 100,000 trivia answers. Watson read thousands of articles in an attempt to understand human language, and its answers were its own. For example, given the Jeopardy clue "In boxing, a term used for below the belt," Watson answered "What is a wang bang?" The interesting thing is that none of the articles and documents Watson read contained the term "wang bang." It made the phrase up on its own. Watson isn't even true A.I., yet it had a better grasp of human language than any computer in the history of computers. It was also able to correctly answer "What is meringue-harangue?" from the clue "A long, tiresome speech delivered by a frothy pie topping."
We saw the same sort of learning from Google's AlphaGo when it beat a world champion Go player, which, BTW, was an even bigger deal than Watson winning Jeopardy, because the A.I. community had expected that a computer would not beat a world-ranked Go player for another 20+ years. Go is orders of magnitude more complex than chess. The number of possible board positions exceeds the number of atoms in the observable universe, and much of the game relies on intuition, something machines don't have.
My point with these early examples is that we are headed toward fully autonomous A.I. that learns on its own and makes its own decisions. That is how things are being pursued, doubly so by the Pentagon, which wants to create fully autonomous soldiers that act on their own and make their own decisions. We already see them moving forward with this idea with things like the X-47A Pegasus and X-47B, developed under DARPA and U.S. Navy programs.
The craft is "semi-autonomous." It flies itself, unlike other drones that are remote controlled; it finds targets, and then it asks for permission to take them out. That asking for permission is the "semi-autonomous" part, but we already see machines moving in the direction of true A.I. There will be machines moving around in society that think and learn on their own; that is the dilemma we are faced with. Elon Musk has warned about it, along with Stephen Hawking, Bill Gates, the CEOs of Google and Amazon, professors at M.I.T., and many others.
I've been having this discussion for thirty years, and thirty years from now it will be an entirely different discussion. Ten years from now it will be an entirely different discussion.
We build the architecture, set the parameters, and off it goes. We're programming it.
AlphaGo was revolutionary because it was the first time a game was won not by a computer that was taught the game, but by a computer that was taught to learn games. Computers have been adapting based on trial and error for decades; this was something new.
It was still just a piece of the puzzle. Computers still have only the autonomy we grant them. We're still programming them. They don't have motives unless we give them motives.
Self-learning is a very narrow definition for artificial intelligence. Self-learning has existed for decades.
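To make that concrete: here is a toy sketch of decades-old self-learning, using tabular Q-learning (a reinforcement learning method dating to the late 1980s). This is not how AlphaGo or Watson actually work; the corridor environment, reward value, and hyperparameters below are all made up for the demo. The agent is never told the winning strategy, only given a reward at the goal, and it discovers the "always move right" policy purely from trial and error.

```python
import random

# Toy environment: a 1-D corridor of 5 cells. The agent starts at cell 0
# and earns a reward of 1.0 only upon reaching cell 4 (the goal).
# Actions: 0 = move left, 1 = move right (clipped at the corridor ends).
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table: q[state][action]

def step(state, action):
    """Deterministic transition: move left or right within the corridor."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for _ in range(200):                       # 200 training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: explore on ties or with small probability,
        # otherwise exploit the best-known action.
        if random.random() < EPSILON or q[s][0] == q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward
        # (immediate reward + discounted best future value).
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

# The greedy policy learned from reward alone: always move right.
policy = [q[s].index(max(q[s])) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

AlphaGo layered deep neural networks and tree search on top of this basic idea, which is vastly more powerful, but the principle is the same: behavior learned from reward signals rather than spelled out by a programmer. That is why "self-learning" by itself doesn't make something true A.I.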
