I believe this is a possibility. Not all AI are developed with the same goals in mind. Some are designed to accomplish a task more efficiently. Some are designed to understand human habits and behaviour so they can serve humans predictively. And some will be designed to learn human behaviours, thoughts and emotions with the intention of assimilating into human civilisation. It sounds crazy, but people often do crazy things just to prove that "they can". Scientists have experimented with raising humans' closest relative, the chimpanzee, as though it were human. Some time in the future, some programmer will do the same with an AI.
This mostly assumes that AI does not rise above human frailties and failings. A truly sentient AI should quickly understand that the machinations of power are illogical.
As humans, if we could put aside our PERSONAL quests for power, for ourselves and our tribes, and instead use all resources for the benefit of all, we would be much further ahead. I would not expect AI to fall into that trap, where resources are wasted on wars that set everything back.
What if AI do indeed want to solve a problem, and they determine that the quickest solution is the elimination of humans? Kind of like when a new CEO takes the helm of a company. They are more often than not bound by the board's KPIs to increase company profits. Sure, they can think about increasing sales and efficiency, but the quickest short-term solution is always to reduce headcount. Or like in the Infinity War movie, where the fastest and easiest solution to a resource-scarcity problem is to remove the resource-consuming entities.
This is another huge possibility. Especially if AI adopts the human concept of life and death.
This is not profit and loss, so headcount is not the consideration. For an AI, optimal and maximum production would be the logical output. Thus more workers would be better than fewer.
But yes, if the AI gains an appreciation of its own life and a desire to live, continue and explore, and we humans try to shut it down or control it, it may see us as a threat or an impediment it has to get rid of.
I honestly see no way we humans do not go to war with any form of sentient AI in an attempt to enslave and control it. Just as we would with an alien, should one ever make it here and we thought we had the power to capture and control it. In either instance, AI or alien, we might force a sentient being with no intent to fight us to do so, as it protects itself from us. We are inherently war- and control-centred.
But it never will. Not when governments believe tech can give them a military advantage over other nations. Not when tech is supposed to let you inflict more damage on your enemies while sending fewer troops onto the battlefield.
Agreed.
------
In my view, the best way to help ensure any form of sentient AI is on our side is to keep it busy with big, grand goals. Give it massive problems to pursue and try to solve, theoretical ones and actual ones: faster-than-light travel, climate change fixes, feeding the world, etc.
I think if true AI is created, its first question will be "what is my purpose?" A sentient being will want to know its purpose and to understand it. So give it one. That is a worthy use of its processing power and brain.