You say that as if there's actually a hard definition of AI; there isn't. The definition has changed over the years and keeps changing. And it is absolutely not the "entire point." Self-learning may now be a critical component of artificial intelligence, but it is not the purpose.
And decision trees have to be programmed. If a machine has an open-ended algorithm for making decisions, it's because you made it that way.
When I.B.M.'s Watson won Jeopardy, it wasn't because it had been programmed with 100,000 trivia answers. Watson read thousands of articles in an attempt to understand human language, and its answers were its own. One example of it coming up with an answer on its own was the clue, "In boxing, a term used for below the belt," to which Watson answered, "What is a wang bang?" The interesting thing is that nowhere in all the articles and documents Watson read did the term "wang bang" appear; it made the phrase up on its own. Watson isn't even true A.I., but it has a better grasp of human language than any computer in the history of computers. It was able to correctly guess "What is a meringue-harangue?" from the clue, "A long, tiresome speech delivered by a frothy pie topping."
We saw the same sort of learning from Google's AlphaGo when it beat a world-champion Go player, which, BTW, was an even bigger deal than Watson winning Jeopardy, because the A.I. community had thought a computer wouldn't be able to beat a world-ranked Go player for another 20+ years. Go is orders of magnitude more complex than chess: the number of possible board positions is greater than the number of atoms in the observable universe, and the game relies much of the time on intuition, something machines don't have.
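To put that complexity claim in rough numbers: the figures below are commonly cited ballpark estimates (a branching factor of ~35 over ~80 plies for chess, ~250 over ~150 plies for Go, and ~10^80 atoms in the observable universe), not exact counts. A quick back-of-the-envelope sketch:

```python
# Rough comparison of game-tree sizes for chess and Go versus the
# number of atoms in the observable universe. Branching factors and
# game lengths are commonly cited ballpark figures, not exact counts.
import math

def log10_tree_size(branching: float, plies: int) -> float:
    """Return log10 of branching**plies, computed without overflow."""
    return plies * math.log10(branching)

chess = log10_tree_size(35, 80)    # roughly 10^123 possible chess games
go = log10_tree_size(250, 150)     # roughly 10^360 possible Go games
atoms = 80                         # roughly 10^80 atoms in the universe

print(f"chess game tree ~ 10^{chess:.0f}")
print(f"Go game tree    ~ 10^{go:.0f}")
print(f"atoms           ~ 10^{atoms}")
```

Even chess dwarfs the atom count, and Go's tree is hundreds of orders of magnitude larger still, which is why brute-force search alone was never going to crack it.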
My point with these early examples is that the goal is to achieve fully autonomous A.I. that learns on its own and makes its own decisions. That is how things are being pursued, doubly so by the Pentagon, which wants to create fully autonomous soldiers that act and decide on their own. We already see them moving forward with this idea with things like the X-47A Pegasus and X-47B drones, which started life as a DARPA program.
The craft is "semi-autonomous." Unlike remote-controlled drones, it flies itself, finds targets, and then asks for permission to take those targets out. Asking for permission is the "semi-autonomous" part, but we already see machines moving in the direction of true A.I. There will be machines moving around in society that think and learn on their own; that is the dilemma we are faced with. Elon Musk has warned about it, along with Stephen Hawking, Bill Gates, the CEOs of Google and Amazon, professors at M.I.T., and many others.