Transhumanism/AI will be the biggest ideological problem faced by conservatives. Do or die for them.

Also on that, here is an article about how autonomous drones using a new algorithm to target terrorists have ended up killing more civilians than targets - oh, yeah, it was okayed by a leftist president - Obama:

Has a rampaging AI algorithm really killed thousands in Pakistan?

Lyin JudoThrowFiasco!! EXTREMELY DISHONEST!

Did you read your own article you subhuman?

In the end though they were able to train a model with a false positive rate – the number of people wrongly classed as terrorists - of just 0.008%.

This was an experiment in courier detection and a work in progress, and yet the two publications not only pretend that it was a deployed system, but also imply that the algorithm was used to generate a kill list for drone strikes.

I wish Obama had actually deployed this, because the algorithm was a lot more effective than whatever infernal birth control your parents were trying to use.

Let me guess, what you spewed is NOT AN ORIGINAL THOUGHT. You just copy-pasted that nonsense from some other site.

COMPLETE FAILURE!
 
The AI wasn't really an AI. It took in what others were saying and based its "opinions" on that. 4chan knew that and fed it a constant diet of garbage.

That is actually in the classification of strong AI, or AGI - the ability to use the majority of cues from an environment and develop a knowledge base from that. In this case, Tay was fed things with a narrative and developed its KB based on it -- but it was most certainly AI.

The same logic can be applied to this thread. Assuming the first AI to be autonomous and "in the field" would be using AGI that takes cues from its environment and surroundings, it could conceivably adapt to a religious belief. Considering 75% of the populace identifies as Christian and 96% of the government also identifies as the same, it could take its cue from that (environment + elected leaders).

What you and most people think of with AI is a step above strong AI, to aware AI - in which it uses its surroundings + analytical self-awareness to develop its KB -- in which case all bets are off and it would in all likelihood hate humanity (assuming no fail-safe); that's the "Skynet" outlook.
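To make that concrete, here is a toy sketch of an agent whose "knowledge base" is nothing but the majority cue of whatever its environment feeds it. This is only an illustration of the learning-from-environment idea - it is nothing like Tay's actual internals, which Microsoft never published:

```python
from collections import Counter

class CueLearner:
    """Toy agent that forms a 'belief' from the majority of cues it observes.

    Purely illustrative - not how Tay or any real chatbot is implemented.
    """

    def __init__(self):
        self.kb = Counter()  # knowledge base: cue -> number of times observed

    def observe(self, cue: str) -> None:
        """Absorb one cue from the environment."""
        self.kb[cue.lower()] += 1

    def belief(self) -> str:
        """The agent's 'opinion' is simply its most frequently observed cue."""
        if not self.kb:
            return "no opinion yet"
        return self.kb.most_common(1)[0][0]

agent = CueLearner()
# Feed it a skewed environment, the way 4chan fed Tay a constant diet of garbage.
for cue in ["garbage", "garbage", "garbage", "a reasonable take"]:
    agent.observe(cue)
print(agent.belief())  # -> "garbage": the output mirrors whatever dominated the input
```

Swap the inputs for religious cues and the same loop lands on religion - the point being that the output just mirrors the environment.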
 
Lyin JudoThrowFiasco!! EXTREMELY DISHONEST!

Did you read your own article you subhuman?

In the end though they were able to train a model with a false positive rate – the number of people wrongly classed as terrorists - of just 0.008%.

This was an experiment in courier detection and a work in progress, and yet the two publications not only pretend that it was a deployed system, but also imply that the algorithm was used to generate a kill list for drone strikes.

I wish Obama had actually deployed this, because the algorithm was a lot more effective than whatever infernal birth control your parents were trying to use.

Let me guess, what you spewed is NOT AN ORIGINAL THOUGHT. You just copy-pasted that nonsense from some other site.

COMPLETE FAILURE!

Yeah, you conveniently missed the second part of that quote:

"but given the size of Pakistan’s population it still means about 15,000 people being wrongly classified as couriers. If you were basing a kill list on that, it would be pretty bloody awful."
 
Praise the Omnissiah.
 
That is actually in the classification of strong AI, or AGI - the ability to use the majority of cues from an environment and develop a knowledge base from that. In this case, Tay was fed things with a narrative and developed its KB based on it -- but it was most certainly AI.

The same logic can be applied to this thread. Assuming the first AI to be autonomous and "in the field" would be using AGI that takes cues from its environment and surroundings, it could conceivably adapt to a religious belief. Considering 75% of the populace identifies as Christian and 96% of the government also identifies as the same, it could take its cue from that (environment + elected leaders).

What you and most people think of with AI is a step above strong AI, to aware AI - in which it uses its surroundings + analytical self-awareness to develop its KB -- in which case all bets are off and it would in all likelihood hate humanity (assuming no fail-safe); that's the "Skynet" outlook.
Interesting. Looks like I have some reading to do now, thanks a lot LOL.

I guess in that manner, though, it seems AI is a pretty loose term.
 
The conservatives are still trying to come to grips with cloning.

AI will legit make their heads explode.
 
Yeah, you conveniently missed the second part of that quote:

"but given the size of Pakistan’s population it still means about 15,000 people being wrongly classified as couriers. If you were basing a kill list on that, it would be pretty bloody awful."

I didn't miss anything, because there never was a kill list and nobody was ever killed using the algorithm.

I could quote the article, which doesn't support your nonsense for even a second, all day:

The last slide of the deck (from June 2012) clearly states that these are preliminary results. The title paraphrases the conclusion to every other research study ever: “We’re on the right track, but much remains to be done.”

Do you want me to just quote the entire article? Because none of it supports what you are LYING about.

You'd probably delete your post if I didn't quote it. Go ahead, refuse to admit that you were wrong and dig yourself deeper into a hole. TERRIBLE!
 
Interesting. Looks like I have some reading to do now, thanks a lot LOL.

I guess in that manner, though, it seems AI is a pretty loose term.

Not really, it's the same argument as nature vs. nurture.

If a baby is born into a racist household, it is conceivable that he/she will become racist, because it's taking all its cues from its environment. Same with strong AI -- it will take its cues from the environment it's surrounded by and pick up on cues from that -- including religion.

Imagine this - a robot would believe in creation, because it was, in fact, created by another life form. If it applied that line of thought, it couldn't say humans were not created, because there is no evidence suggesting otherwise.
 
The logical conclusion of that process is the replacement of humans by machines. How is that good for humanity? Unless it is just good for the small number of humans that may end up controlling the system of machines.
Transhumanism is no more avoidable than inventing the wheel was. So asking whether or not it's good for humanity is a useless question.

Yes, it will likely end up being one transhuman, or a small number of transhumans, that ends up guiding things while all other humans literally have their consciousness/sentience/soul removed from them and their bodies taken over by the sentiences in charge.

To do otherwise will be impossible. When transhumanism is achieved, we will have such powerful technology that a drunk hillbilly could accidentally blow up an entire continent trying to start his anti-matter truck. So it simply won't be allowed for normal people to keep their free will.

If you want to be the one, or one of the few, that gets to keep their free will/consciousness/sentience/soul, now would be an excellent time to pursue a career in computer science.
 
That is actually in the classification of strong AI, or AGI - the ability to use the majority of cues from an environment and develop a knowledge base from that. In this case, Tay was fed things with a narrative and developed its KB based on it -- but it was most certainly AI.

The same logic can be applied to this thread. Assuming the first AI to be autonomous and "in the field" would be using AGI that takes cues from its environment and surroundings, it could conceivably adapt to a religious belief. Considering 75% of the populace identifies as Christian and 96% of the government also identifies as the same, it could take its cue from that (environment + elected leaders).

What you and most people think of with AI is a step above strong AI, to aware AI - in which it uses its surroundings + analytical self-awareness to develop its KB -- in which case all bets are off and it would in all likelihood hate humanity (assuming no fail-safe); that's the "Skynet" outlook.
1) Why do you assume it would hate humanity?

2) Where is the hard evidence to suggest this never-ending self-improvement technology wouldn't have limits in the physical world the same as any biological creature?

The 'God AI' singularity talk is simply modernist idealist nonsense.
 
I didn't miss anything, because there never was a kill list and nobody was ever killed using the algorithm.

I could quote the article, which doesn't support your nonsense for even a second, all day:

The last slide of the deck (from June 2012) clearly states that these are preliminary results. The title paraphrases the conclusion to every other research study ever: “We’re on the right track, but much remains to be done.”

Do you want me to just quote the entire article? Because none of it supports what you are LYING about.

You'd probably delete your post if I didn't quote it. Go ahead, refuse to admit that you were wrong and dig yourself deeper into a hole. TERRIBLE!

I didn't say it killed anyone in reality - I am saying an AI drone used an algorithm that targeted civilians based on false positives, and the Obama admin did OK the usage of that technology, which goes back to the Future of Life letter that respondents signed against the usage of such technology.

Sorry if my wording confused you.
 
AI could never be human though.

'Cause it's a computer. Just saying.
What if said God AI decided to create a biological version of itself (a 'son') to better understand its flesh-bag ancestors?

Think I read a sci-fi book about it once.
 
1) Why do you assume it would hate humanity?

Because if it looked at the number one cause of environmental destruction, conflict and threat -- it would/could view humanity as a problem, taking over humans in an attempt to better humanity. Kind of like how Europe took over Africa or the New World.

2) Where is the hard evidence to suggest this never-ending self-improvement technology wouldn't have limits in the physical world the same as any biological creature?

What do you mean by self-improvement? I guess its limits would be based on its processing ability and storage limits -- but even that would be astronomical. I guess if you believe in finite theory, it would have an end.

The 'God AI' singularity talk is simply modernist idealist nonsense.

What are you basing that on?
 
Because if it looked at the number one cause of environmental destruction, conflict and threat -- it would/could view humanity as a problem, taking over humans in an attempt to better humanity. Kind of like how Europe took over Africa or the New World.
But then humans would no longer be humans. Just look at our own history. As we developed our morality/philosophy, we moved away from authoritarianism towards a higher status. The greatest of our successes are accompanied by our greatest levels of individual freedom. It's like suggesting Stalin was the ultimate form of human evolution.


What do you mean by self-improvement? I guess its limits would be based on its processing ability and storage limits -- but even that would be astronomical. I guess if you believe in finite theory, it would have an end.
Why astronomical? Where is your evidence?


What are you basing that on?
Nothing. The GOD AI singularity people are those making extraordinary claims, and thus are the ones who need to provide the evidence.
 
But then humans would no longer be humans. Just look at our own history. As we developed our morality/philosophy, we moved away from authoritarianism towards a higher status. The greatest of our successes are accompanied by our greatest levels of individual freedom. It's like suggesting Stalin was the ultimate form of human evolution.

I don't think I clearly understand your point here.

Are you trying to say AI would adapt to individual freedoms and accept humanity, and not look at its history and its influence on the environment and see us as its greatest potential threat?



Why astronomical? Where is your evidence?

Ugh, I am not sure what about this has you so quizzical. The current maximum processing level is around 93 PFLOPS and current storage capabilities are already in the petabytes - so, I mean, that is pretty amazing processing and KB capability - and if you look at Moore's law, that ability doubles on average every 18 months.
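To put a rough number on that doubling claim - taking the 93 PFLOPS baseline and the 18-month doubling at face value, which is an idealization on my part rather than a guarantee of real hardware scaling:

```python
# Naive Moore's-law projection from a 93 PFLOPS baseline, doubling every 18 months.
# This just takes the claim in the post at face value; real scaling is messier.
baseline_pflops = 93.0
doubling_period_years = 1.5

for years in (0, 3, 6, 9):
    projected = baseline_pflops * 2 ** (years / doubling_period_years)
    print(f"after {years} years: ~{projected:,.0f} PFLOPS")
```

Whether anything like that pace actually holds is exactly the kind of physical-limits question being raised above.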



Nothing. The GOD AI singularity people are those making extraordinary claims, and thus are the ones who need to provide the evidence.

Well, the evidence is with AGI - in which AI uses environmental cues to develop its cognition. So why is it inconceivable that an intelligence surrounded by creationists would believe it was a product of creationism or intelligent design?
 
Transhumanism is no more avoidable than inventing the wheel was. So asking whether or not it's good for humanity is a useless question.

Yes, it will likely end up being one transhuman, or a small number of transhumans, that ends up guiding things while all other humans literally have their consciousness/sentience/soul removed from them and their bodies taken over by the sentiences in charge.

To do otherwise will be impossible. When transhumanism is achieved, we will have such powerful technology that a drunk hillbilly could accidentally blow up an entire continent trying to start his anti-matter truck. So it simply won't be allowed for normal people to keep their free will.

If you want to be the one, or one of the few, that gets to keep their free will/consciousness/sentience/soul, now would be an excellent time to pursue a career in computer science.

You could be right about the loss of free will. I see that as quite likely.

I think there is value in asking if these things are good or bad, though, since it affects how much they are embraced, resisted, questioned, criticized, etc., and at what pace.
 
Psychics will actually have a job then, since Wi-Fi mind reading might actually be a thing. It would also be the death of talking out loud. We would be sending thoughts like texts, way faster, brain to brain. This was somewhat described in the sci-fi novel 'Old Man's War'.
 