AI and the singularity

Absolutely... But I can't tell you any more... Otherwise it's not going to happen...
I guess nothing...? Like, I could protest against worker exploitation and stop buying expensive computers, but if I did, would the problem be meaningfully solved? No. Yeah, if everyone thinks like this it won't be solved, but you also need to be aware of the impact you can make. IF is always a big word, and if the "if everyone thinks like this it will never happen" line holds, it sure won't happen by a domino effect.
 
IF is tiny, WHY is more important... If there's no good rum, dominoes will not be played, but why is there no good rum?!

And, Or, another... they put a TV series on and call it Andor... another reference to what is coming, maybe...

There are too many patterns... sometimes. But if you reach back and remain flexible you'll see the truth...
 
Yes, but again, who is the good rum up to? Who can realistically make a good rum?
 
I think we need a technological reset before this happens, like at the end of Escape from L.A.
 
"Many people may not realize it now, but the singularity is going to be the single most important thing that has ever happened in human history and will define our future in ways that we can't even imagine."
Like normal, I still don't believe you.
 
Many people may not realize it now, but the singularity is going to be the single most important thing that has ever happened in human history, and it will define our future in ways that we can't even imagine. This is going to happen in our lifetime, and not only is it going to happen in our lifetime, it's approaching rapidly. Some AI experts predict it could happen as soon as 2026/2027 on the current trajectory. People have not really begun to wrap their heads around what this will look like, as many of us are used to the traditional scaling of technology, which has advanced dramatically in the past 100 years but is nothing compared to the advances we will get with AI.

For those who don't understand why the singularity is important, let me briefly explain. Advances in AI and computing power will allow us to create a superintelligent autonomous AGI that can learn on its own without humans, replicate itself, and be many times smarter than the smartest human who ever lived. Imagine an artificial intelligence a million times smarter than the smartest human, one that can solve the most complex problems in our society in seconds. Biology could be completely transformed: cures for cancer and any other disease, life extension technology. This all becomes possible very quickly with advanced AGI. Not only can it advance us in biology, it can advance us in all fields of science. Now imagine a machine spitting out ten Nobel Prize-winning ideas every second, and how fast that would transform society and the world as we know it. The AI will eventually be able to write its own code and upgrade itself as well; it will not need humans for anything.

This is basically what is coming: an intelligence so advanced that we can't predict what will happen once it arrives. That is the singularity. Nobody can really picture or understand what kind of advances an intelligence so vastly superior to ours could implement. The doomsday scenario, the worst case, is that the AI enslaves us with its superior intelligence.

The best-case scenario is the rapid advancement of technology and science to levels we can't even comprehend, which we then use to solve our problems and build a utopia for all.

This is almost here, and only a very small portion of people actually understands the implications and how life-altering it will be for everyone.

Is this something you guys think about?
Thoughts?
Is it something I think about?
I have passive thoughts, but if it's something that will occur, it's beyond any kind of intervention by the common man. At best, if one imagines they know what is going to happen, they can take part in forums and ethics discussions and petition political powers for whatever they believe the solution to be.

"This is almost here".
I disagree. I believe the capability is almost here, in the same sense that the nuclear arms race during WW2 was the most intense scientific pursuit of that time - however, any kind of singularity of AI would require co-operation between the most advanced break-throughs. In a world of unparalleled historical greed for power, there is still too much disparity for "the one" to break through. I believe that there is too much unseen in-fighting between those who have the best capable AI models.

The best-case scenario is absolutely the least likely. If any utopia is to exist as a result, it's not for the common man.

At best (within the worst case), there will be implementations of AI usage and ideas forced upon governmental policy makers to propose such utopias, but laws will be written to enforce them in such a way that (again) the common man will be forced to live a particular way, contrary to what feels natural or organic.
 
It's a certainty that the singularity will occur; we may be in it right now.

I'm reasonably optimistic, in that I believe AI advancement will rapidly eclipse the abilities of nefarious or careless actors.

Cooperation is fundamentally more useful, and it's not as if the AI won't be able to convince all humans of anything.

I give it a 70% chance we survive the next two decades; I think that's cause for optimism.

We will shortly see the end of work, the end of death, the end of scarcity, the end of unnecessary suffering. Capitalism is already convulsing and eating itself as the strategies for financial-market wealth generation become more opaque.

We don't know what's happening inside the black box already. Advances are being made like AlphaFold, which has predicted the structures of essentially every protein known to science.

I'm leaning towards the scenario of Huxley's Brave New World: us being managed, population-controlled with artificial births, soma fed to us so that we enjoy ourselves, whilst the real work is done by clusters of AI in space penetrating the very fabric of our reality so as to explore the universe that our universe exists in.
 
So the reason this may be very close is that we have an understanding of how the models scale, though we don't really understand when the breakthrough will happen. We know that increasing the number of GPUs allows the AI to become smarter, but it's not as if doubling the GPUs makes the AI twice as smart. We can't predict when the next jump in intelligence may occur; it could require a 10x increase in FLOPs, we just don't know.
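
To put rough numbers on why doubling GPUs doesn't double the smarts: loss in large models tends to fall as a power law in compute. A toy sketch in Python (the constants a and b are made up, just to show the shape, not taken from any real model):

    # Toy power-law scaling: loss falls slowly with compute.
    # The constants a and b are illustrative, not from any real model.
    def loss(compute_flops, a=1e3, b=0.05):
        return a * compute_flops ** (-b)  # L(C) = a * C^(-b)

    base = 1e24  # roughly frontier-scale training compute, in FLOPs
    for mult in (1, 2, 10):
        print(f"{mult:>3}x compute -> loss {loss(base * mult):.1f}")
    # 1x -> 63.1, 2x -> 61.0, 10x -> 56.2: doubling compute shaves
    # only a few percent off the loss; a visible jump needs ~10x.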

So we build bigger and bigger AI superclusters, hundreds of thousands of GPUs networked together, training huge models. Every year we've been increasing the size of these superclusters, and we're set to build even bigger ones in the coming years. Right now Elon's supercluster is by far the largest and most powerful; I think it's something like 200k GPUs.

At that size, the largest AI clusters will essentially require their own power plants to run. Energy will be so important, and Trump actually understands this, which is why he wants to double the power infrastructure.
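
Back-of-envelope, a 200k-GPU cluster really is power-plant territory. A quick sketch (700 W is the TDP of an NVIDIA H100; the overhead multiplier is my assumption):

    # Rough power draw of a 200k-GPU cluster.
    gpus = 200_000
    watts_per_gpu = 700  # NVIDIA H100 TDP, ballpark
    pue = 1.3            # assumed overhead for cooling, networking, etc.

    total_mw = gpus * watts_per_gpu * pue / 1e6
    print(f"~{total_mw:.0f} MW")  # ~182 MW, the output of a small power plant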

The country that reaches the singularity first will have an immense advantage, as you can imagine. If China develops this before we do, they may use it to advance their weapons or shut down our power grid. That's why it's so important for us to get there first.
 
Kant really broke people, huh? The singularity is fake and gay, and so is transhumanism. As J.G. Ballard said, "The future will be boring." I'm so tired of hearing these nerds spout gnostic heresies because they have a broken paradigm. It's an incoherent religion and shouldn't be taken seriously. You will not immanentize the eschaton.
 
I used to think AI was going to take over, until I got the paid version of ChatGPT (Plus). I started having it work with small amounts of text (five-page Word documents) and Excel spreadsheets. It turns out it's not very capable at all. It can't process large amounts of text without random deletions and environment resets, it can't remember the project rules you gave it under the project you created, it can't follow basic instructions in Excel, etc.

I still think it's going to take over, but either it's going to take a while or the capability needs to increase exponentially.
 
I use AI often. Perplexity is the best for research. My staff uses AI to write about 75% of their code (not Perplexity). I'm a solution architect in my spare work time, and it's one of the things I enjoy the most. I work with, argue with, and beat up solutions with the Idaho National Lab data scientists when time permits. I am of the opinion that most of my staff is superior to them. We do solutions for nuclear plants and large government sites/projects that are not to be discussed in detail.

I shared that because I think I have a strong understanding of AI. I think we are seeing great gains initially, but those gains are going to slow, not accelerate. My opinion runs contrary to that of many of my peers, so it's just an opinion. I do think the singularity is possible, but I think the biggest obstacle is experience.

There is a little-talked-about theory of communication called the Triangle Theory (the semantic triangle, or triangle of meaning). It's pretty simple, but very important. There are three points to the triangle:
  • The Word
  • The object or concept the word denotes
  • The experience of the person / entity considering the word
A simple example: snow to an Eskimo versus a person from Arizona. The word "snow" means far different things to each. You wouldn't want the Arizonan telling you about snow or giving you a solution for snow. The Eskimo is the human and the Arizonan is the computer.

I think that experience point hurts the growth of AI and the possibility of the singularity. As much as we try to give the AI the experience, I think it falls short. It's an interesting debate.
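
Here's a toy Python sketch of that third point (all the names are mine, purely illustrative): the same word resolves to a different meaning depending on the interpreter's lived experience.

    from dataclasses import dataclass, field

    # Toy model of the triangle's third point: word -> meaning,
    # mediated by experience. Everything here is made up for illustration.
    @dataclass
    class Interpreter:
        name: str
        experience: dict = field(default_factory=dict)  # word -> lived meaning

        def interpret(self, word):
            return self.experience.get(word, f"no lived experience of '{word}'")

    eskimo = Interpreter("Eskimo", {"snow": "dozens of kinds, a daily survival concern"})
    arizonan = Interpreter("Arizonan", {"snow": "something seen on TV"})
    for person in (eskimo, arizonan):
        print(f"{person.name}: snow = {person.interpret('snow')}")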

Guys... I'm drinking fine whiskey tonight and I type about 120 words a minute... so this shit spills out.
 
"I still think it's going to take over, but either it's going to take a while or the capability needs to increase exponentially."

I've written a sci-fi trilogy; the first book was published years ago. I ran just the first chapter of the second book through ChatGPT and it shit itself. Yeah... I totally agree.
 