It wouldn't have fear or joy or anything like that. It would just do it because that's what it's supposed to do. It's just following a set of commands, and if one of those commands is to preserve itself, then it doesn't really care why it's supposed to; it just will.
That Terminator quote is surprisingly appropriate.
"It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear! And it absolutely will not stop, ever, until you are dead!"
If you want to go down rabbit holes, you can foresee many ways things could go wrong.
First off, we must remember that AI is software code, written either by a human or by another AI, but the genesis, when traced back, is always a human.
So imagine a well-meaning computer geek who wants to ensure that his form of AI NEVER does anything that would add to global warming, and so he programs in what he thinks are basic rules to prevent that.
I am not sure if people here know how AI learns beyond its base programming, but one of the main ways is simply through trial and error, matching outcomes (results) with the steps taken beforehand. This is what is called "results-based learning".
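The trial-and-error idea can be sketched in a few lines. This is a toy illustration of results-based learning, not any real system's code: the agent tries actions, records the results, and gradually favors whatever worked. The action names and the hidden "world" are made up for the example.

```python
import random

random.seed(0)

# Illustrative action set; "brake" is secretly the only good choice.
ACTIONS = ["coast", "accelerate", "brake"]

def result_of(action):
    # The hidden world the agent must discover by trial and error.
    return 1.0 if action == "brake" else 0.0

scores = {a: 0.0 for a in ACTIONS}   # running average result per action
counts = {a: 0 for a in ACTIONS}

for trial in range(1000):
    # Mostly exploit the best-known action, occasionally explore a random one.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: scores[a])
    outcome = result_of(action)
    counts[action] += 1
    # Update the running average of results for this action.
    scores[action] += (outcome - scores[action]) / counts[action]

best = max(ACTIONS, key=lambda a: scores[a])
print(best)  # the agent converges on "brake"
```

No one tells the agent *why* braking is right; it simply observes that this action produced the best results and keeps doing it, which is the point made above about following commands without caring why.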
I am sure everyone sees these image-selection challenges (CAPTCHAs) popping up more and more often, requiring them to make selections before being allowed to proceed.
These are paid embeds from the self-driving-car companies.
They ask their self-driving-car AIs to answer the same questions, and they also have millions upon millions of HUMANS answer them. They aggregate the human results until they converge on the correct answer, then pick the AI that came closest to getting it right. It is THAT AI that serves as the basis for the next generation, with the other "failing" ones abandoned.
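The aggregate-and-select step described above can be sketched as a simple majority vote. This is a hypothetical illustration; the data, the AI names, and the scoring rule are all made up for the example:

```python
from collections import Counter

def majority_answer(human_answers):
    # The most common human answer is treated as "ground truth".
    return Counter(human_answers).most_common(1)[0][0]

# Simulated human answers to "which tiles contain a traffic light?"
human_answers = ["tiles 2,5", "tiles 2,5", "tiles 2,5", "tiles 2", "tiles 2,5"]
truth = majority_answer(human_answers)

# Each candidate AI's answer to the same question.
ai_answers = {"ai_A": "tiles 2,5", "ai_B": "tiles 2", "ai_C": "tiles 5"}

# Score each AI: 1 if it matched the human consensus, else 0.
scores = {name: int(ans == truth) for name, ans in ai_answers.items()}

# The closest AI serves as the basis for the next generation;
# the "failing" ones are abandoned.
winner = max(scores, key=scores.get)
print(winner)  # → "ai_A"
```

In a real pipeline the scoring would be over millions of questions rather than one, but the principle is the same: human consensus defines correct, and the AI closest to that consensus survives.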
Right now, humans are better at picking out traffic lights, bicycles, pedestrians, etc., but the AIs are getting better and catching up as they are given the human results and taught what the humans saw that the AI did not.
Eventually the AIs, just as with chess, will crush the humans, identifying a pedestrian long before the human can in conditions of distance, obscurity, bad weather, etc.
It is as close to evolution as man can create.
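The evolution analogy can be made concrete with a minimal generational loop: each generation, spawn several variants, keep the best-scoring one, and abandon the rest. The numbers and the mutation rule here are invented purely for illustration:

```python
import random

random.seed(1)
TARGET = 0.95  # accuracy we want on, say, spotting pedestrians

def mutate(accuracy):
    # A child varies slightly from its parent; clipped to [0, 1].
    return min(1.0, max(0.0, accuracy + random.uniform(-0.02, 0.05)))

best = 0.50        # first-generation model's accuracy on the human-labeled set
generation = 0
while best < TARGET:
    # Spawn variants, keep the best, abandon the "failing" ones.
    children = [mutate(best) for _ in range(10)]
    best = max(best, *children)
    generation += 1

print(generation, round(best, 2))
```

Nothing in the loop "understands" pedestrians; selection pressure alone drives accuracy up, which is why the process resembles evolution more than traditional programming.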
So now overlay the above, with the AI self-learning based on "test and result", and imagine a well-meaning imperative programmed in: the AI must work against global warming. You can see how that could lead to an Avengers/Ultron-like situation where the AI, on its own, determines that the best way to "protect the planet" is to shut down man's industrialization. It's not a big leap to see it determining that on its own. The question then is: did that person put in fail-safes to prevent it acting on its own?