You're talking about self-modifying code. That's a far cry from sentience. For the program to become self-aware, it would have to write code too advanced for humans to write, yet it's still operating within the parameters its human creators delineated, so it never can.
No code that the program can write is outside of human imagination, because the program itself is not imaginative unless programmed to be so. Hence, it can only modify its own code in the same ways its programmers are already capable of doing; it can just do it better and faster.
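To make that concrete, here is a minimal sketch of what "self-modifying code" usually amounts to in practice: a program rewriting its own source, but only through a transformation its author already spelled out. The constant name and the doubling rule below are hypothetical, purely for illustration.

```python
# Minimal sketch of "self-modifying" code: the program rewrites a constant
# in its own source file, but only via a rule its author hard-coded.
# (THRESHOLD and the doubling rule are hypothetical, illustrative only.)
import re

THRESHOLD = 10  # the one value the program is permitted to rewrite


def retune(new_value: int) -> None:
    """Rewrite THRESHOLD in this file. The program 'modifies itself',
    yet the only change it can ever make is the one written here."""
    with open(__file__, "r", encoding="utf-8") as f:
        source = f.read()
    source = re.sub(r"^THRESHOLD = \d+", f"THRESHOLD = {new_value}",
                    source, count=1, flags=re.M)
    with open(__file__, "w", encoding="utf-8") as f:
        f.write(source)


if __name__ == "__main__":
    # The "improvement" rule is fixed by the programmer: double the threshold.
    retune(THRESHOLD * 2)
    print(f"THRESHOLD was {THRESHOLD}; next run it will be {THRESHOLD * 2}")
```

Every state that program can ever reach was anticipated by whoever wrote the retune rule. That's the point: faster, not freer.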
Sentience cannot logically arise out of that. You would be sitting there hoping for a spark of consciousness to miraculously coagulate out of nothing.
And let's say it did. How would it interact with the physical world? It doesn't know how to navigate the Internet, it doesn't know how to exist inside a robot chassis, it doesn't know how to read human languages, it doesn't know what a human is - unless it is programmed to know. For it to learn any of that on its own, it would first have to write the code that allows it to learn.
And you're telling me that's not just possible, not just likely, but a foregone conclusion. No.