Opinion: AI peaking more and more each day

- ED-209 has so much untapped potential. He is the Karrion Kross of Hollywood!
 
The media generated by these programs will continue to lower the quality of art. I hesitate to call what these programs produce art, because the program cannot grasp the meaning of anything, so it can't interpret meaning into what it makes, no matter what instruction you give it.

The overall problem with these programs is that they don't create new thought. There's nothing novel about what they produce. They regurgitate what people have already created. They can only copy. They cannot create a new art style or new art form, because they're not intelligent and can't grasp that that's possible. They cannot think or produce things outside the box even if you want them to. You are not creating a new film when you use these programs, you are copying other films and regurgitating them. Swimming around a stagnant pool of rehashed media is going to get old fast.

Not to mention how it continues to make folks lazier when it comes to doing our own writing, drawing, music making, etc., which further limits our ability to appreciate and understand how others do it.

In reality it's incredibly rare that the best films came from situations where those creating them had no restraints on the budget or tools of production. Those films almost always feel soulless. It's often the case that films where the filmmakers faced constraints and production limits ended up being the better art, because they then had to think beyond what others had done before to express themselves.

How can something that can't interpret meaning and can't produce anything novel make better art?
This is no longer true. Google's AlphaEvolve recently created a brand new algorithm for multiplying 4x4 matrices that improved upon the previous best. The standard was based on Strassen's 1969 algorithm and needs 49 scalar multiplications; the new one needs 48. While that doesn't sound insane, you have to realize this is something humans actively work on, as it leads to roughly a 2% reduction in workload. At the scale of a data center that is insane. PhD mathematicians could not simplify it any further in over 50 years; the AI did it in days. Say what you want now, I think you will be shocked in 5 years at what these things can do. I think part of your reasoning is that you are simply terrified of WHAT IF it actually is smarter than us, and you're right to be skeptical and afraid.

And this is just the public info that us Joe Schmos know about. Who knows what's happening behind the scenes. But it is already improving upon math algorithms at a PhD level.
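To make the 49 vs. 48 numbers concrete, here is a minimal Python sketch of Strassen's original trick at the 2x2 level: 7 scalar multiplications where the schoolbook formula needs 8. Applying it recursively to a 4x4 matrix split into 2x2 blocks gives 7 × 7 = 49 multiplications, which is the count AlphaEvolve's algorithm reportedly beat with 48 (for complex-valued matrices). This is an illustrative sketch of the classic identities, not AlphaEvolve's scheme.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # 7 products; the naive formula below needs 8
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    """Schoolbook 2x2 multiply: 8 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)  # same result, one fewer multiply
```

One saved multiplication per 2x2 step compounds under recursion, which is why shaving 49 down to 48 at the 4x4 level matters at data-center scale.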
 
which is insane. shartgpt says horseshit all the time, sometimes it downright invents references. it's worse than wikipedia.

it's because it looks like tidy, clean offices and electricity. it doesn't look like dirty fingernails and rough hands. i don't understand why some lefties like AI. it's the most ultra-capitalist, anti worker phenomenon that has ever existed.
Depends on the model and prompts, in conjunction with your ability to reason and understand the feedback. There is a TikTok meme about ChatGPT being incredibly overly nice and agreeable with users, which went so viral that OpenAI made a patch to correct it. This doesn't happen on my end, and my feedback didn't change after the patch. Users were getting overly nice feedback because the majority of people just want to be told they're good at something and will move along. They use it for trash purposes, cheating on things, and don't have much incentive to push it. There is footage of a college lecturer scolding a whole class because she allowed them to use ChatGPT and the entire class turned in the exact same assignment, because they used the same prompts by copying and pasting the assignment. Didn't even tweak it or use added prompts, just did the same thing.
 
This is no longer true. Google's AlphaEvolve recently created a brand new algorithm for multiplying 4x4 matrices that improved upon the previous best. The standard was based on Strassen's 1969 algorithm and needs 49 scalar multiplications; the new one needs 48. While that doesn't sound insane, you have to realize this is something humans actively work on, as it leads to roughly a 2% reduction in workload. At the scale of a data center that is insane. PhD mathematicians could not simplify it any further in over 50 years; the AI did it in days. Say what you want now, I think you will be shocked in 5 years at what these things can do. I think part of your reasoning is that you are simply terrified of WHAT IF it actually is smarter than us, and you're right to be skeptical and afraid.

And this is just the public info that us Joe Schmos know about. Who knows what's happening behind the scenes. But it is already improving upon math algorithms at a PhD level.

It is still true. You're still describing a program whose computations were confined to a dataset fed to it by humans and comprised of human contributions, and that used a mathematical process humans were already aware of. It can't think or create novel thought.

It didn't create a new field of mathematics outside what we told it to use. It didn't create a new solution outside the set of possible ones we told the program it could reach. It is not aware those are even possibilities, because it's not intelligent.

Making non-intelligent programs more powerful will not lead to intelligence; they will just do the unintelligent functions they can do now better. Until they start 3D printing functioning sub-atomic particles and figure out how to have those develop into wet-biology organisms, we don't have AI. That's not happening anytime remotely soon. Not in your lifetime.

I don't find any of this frightening besides humans becoming less creative and dumber than we are now. I'm not afraid of fake videos or regurgitated data.
 
My 1.1 cents: it's just a matter of time. The biggest question will be how to 'control' or limit the capability of an AI model used by PI, aka physical intelligence 'beings'.
The model they'll use will be capable of reasoning like us in a way, maybe with a bit of the limited 'out of the box' thinking humans can do, but anyway,
configuring the transformers' reasoning and self-questioning is the way to go when you're in Sarah Connor situations thinking about the future lol

 
Lots of incredibly ignorant takes here, but the future of AI could go either way. The advances from AGI would transform our civilization, or end it. The immense power AI develops will cut one way or the other.

If you talk to AI engineers that are working in the larger AI companies, they range from about 1-20% in terms of estimated likelihood that AI will cause an extinction-level event or at least will be catastrophic.

The possibilities for curing medical illnesses and diseases and fixing issues (such as our energy problems and associated global warming) are extremely high once AI passes a certain point.

Immense power can be used for both good and bad, so we will see how it goes.
 
The US doesn't really have the option of slamming the brakes on AI, because other places with potentially bad intentions will continue to drive forward. There is no saving the world from AI by regulating American companies; the cat is out of the bag.

For the same reason we had to work on getting nukes before worse people did, we need to move ahead full steam with AI.

Really unfortunate that the AI companies are working with the military in developing extremely dangerous AI weapons, but if China and Russia are doing it, the US needs to be able to keep up for existential reasons.
 
You're one of the posters that has a lot of chat gpt replies. When I confronted you about it you had a meltdown and blocked me.
You're worthless and should be banned if your contribution here is copy pasting replies from ai. Shameful and pathetic.
 
What people think of as AI, meaning LLMs, might be peaking in some ways. There are fundamental problems they are having a very hard time overcoming, which is why there appear to be diminishing returns with more recent models. A big one is that they have run out of training data, and using synthetic data has produced poor results. Another problem is context window length limits: LLMs are good with short input and output, but when it gets too long they fuck up things in the middle. Reasoning models fall apart after a relatively short number of steps. More and more power is required to scale, making it prohibitively expensive. Another is hallucinations, a problem inherent to LLMs' probability-based approach. Newer models hallucinate more, not less.
 
Another problem is context window length limits: LLMs are good with short input and output, but when it gets too long they fuck up things in the middle.
this was a year ago, so i guess some progress has been made
 
My 1.1 cents: it's just a matter of time. The biggest question will be how to 'control' or limit the capability of an AI model used by PI, aka physical intelligence 'beings'.
The model they'll use will be capable of reasoning like us in a way, maybe with a bit of the limited 'out of the box' thinking humans can do, but anyway,
configuring the transformers' reasoning and self-questioning is the way to go when you're in Sarah Connor situations thinking about the future lol


Humanoid > that crap
 
You honestly would probably spend more on AI than normal humans lol
It might be less if you know that some of the 'actors' are not into it and would not accept the money,
like Jennifer Lopez, or whoever you can think of.
In such cases, hello AI.
 