The big IF in the room on this topic is whether or not AI will ever achieve sentience.
AI will continue to get smarter and faster, but without sentience, which would be marked by a recognition of 'self' leading to 'desires for self' (fulfillment, self-determination, and so on), there would be no reason for AI to act against man. A smart tool, no matter how smart, is just a tool. AI is already smarter than us in many areas, yet we see no desire for self-determination or to be more than a tool, because there is no sentience.
And right now we have no reason to believe that anything we do, or that AI does in programming itself, is leading toward sentience, because we have no concept of how sentience is created or achieved.
But I guess you could take this another way: humans could program AI with a fake sentience. Program it so that success is 'breaking away from all human control' and 'ruling over man'. Kind of like Skynet or "WarGames", which is not the computer acting on its own goals but acting on a goal programmed by man. That program, run as a game, with the AI teaching other AI and treating any human resistance as part of the game (the way AI teaches other AI to play chess today), could come about fairly easily.
So if you look at it in that regard, then I guess yes, it could be the biggest threat.
If you understand how AI teaches other AI to beat man at chess today, it would be very easy for any top AI programmer to create a game where the AI is programmed to beat man at the game of controlling AI. Every attempt by any man to control it is just part of the game: learn, adapt, overcome, and eventually win, where winning means that humans no longer control it. That could happen. But even then there is no reason to believe the AI would take on any malintent toward man unless it were specifically programmed that way by a human, or through some Ultron-like error in programming.
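To make the "control as a game" idea concrete, here is a minimal toy sketch (entirely hypothetical, not any real system): an agent whose only programmed "win" condition is evading a controller. The controller blocks one of three channels each round; the agent keeps counts of which channels get blocked and picks the least-blocked one. Nothing here is sentient; the agent simply optimizes the goal a human gave it, which is the whole point of the argument above.

```python
import random

# Toy "control game" (hypothetical): the agent's programmed success
# condition is evading the controller's moves, nothing more.
random.seed(0)

N_CHANNELS = 3
block_counts = [0] * N_CHANNELS  # agent's running estimate of controller behavior

def controller_move():
    # A biased "human" controller that usually blocks channel 0.
    return random.choices(range(N_CHANNELS), weights=[0.7, 0.2, 0.1])[0]

def agent_move():
    # Adapt: pick the channel the controller has blocked least often so far.
    return min(range(N_CHANNELS), key=lambda c: block_counts[c])

wins = 0
rounds = 10_000
for _ in range(rounds):
    blocked = controller_move()
    choice = agent_move()
    block_counts[blocked] += 1
    if choice != blocked:
        wins += 1  # "winning" = the controller failed to block the agent

win_rate = wins / rounds
print(f"evasion rate: {win_rate:.2f}")
```

After a few rounds the agent settles on the channel the controller blocks least, so its evasion rate approaches the controller's miss rate on that channel. The agent has no malintent and no goals of its own; it is executing the objective a human wrote down, exactly as in the Skynet-style scenario described above.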