Programmer, and no. I've been following it closely, integrating it into my workflows where I can, trying to get all the efficiency gains possible, etc.
Also following the business and R&D sides of it.
I have no fear of being replaced; actually, quite the opposite. It seems to be severely hampering the development of new programmers, and there is already a shortage of good devs who really know what they're doing, which I believe is going to get worse.
I've mentioned it here before, but I've already been on, or heard about, quite a few projects that doubled their expenses having to redo a bunch of LLM sludge they thought they could get away with in prod.
*edit*
Forgot to say - it's also quite shit as it is, and I don't see it getting much better. I think we've already seen the curve start to flatten with respect to LLMs for code.
The latest GPT is actually worse than the previous iteration, which quite a few people predicted would be the case.
A lot of people who don't know comp sci / programming think that because they can spin up the front end of a website or some simple Python scripts, it's turned them into real programmers.
Real devs regularly work on problems that it simply can't solve, something I run into a lot.
It will proudly state the theory and methods of execution but utterly fail to make it work, sometimes because what it proposed was never possible in the first place.
(I keep adding to this list as I think of things, but I'll leave it for now; there is much more):
They're also horrible with context on large projects:
- recreating already existing functionality
- tightly coupling large classes/objects
- disrespecting scope
- adding mountains of auxiliary bloat that was never needed
- leaving glaring security flaws
- endless race conditions in certain types of projects (games, streaming data, etc.) - see the sketch below
- no spatial reasoning, which is an extreme issue for engineering applications
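To make the race-condition point concrete, here's a minimal, hypothetical sketch (not from any real project) of the kind of unsynchronized shared state this tends to look like in Python:

```python
import threading

counter = 0  # shared mutable state with no lock protecting it

def add_score(n: int) -> None:
    """Increment the shared counter n times (deliberately racy)."""
    global counter
    for _ in range(n):
        current = counter   # read
        current += 1        # modify
        counter = current   # write back; if another thread wrote in
                            # between, its increments are overwritten

threads = [threading.Thread(target=add_score, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With proper locking (e.g. a threading.Lock around the update) this
# would always print 400000; with the unsynchronized read-modify-write
# it can come up short depending on thread scheduling.
print(counter)
```

It compiles, it runs, it even gives the right answer most of the time on a quiet machine, which is exactly why this stuff slips into prod unnoticed.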
They're really, really bad, but it's hard to explain that to people who don't have the level of knowledge required.
It's like you need a really accurate metaphor just to explain why it isn't good.