Can you explain why it generates pro-pedo answers?
Can you explain what you think Gemini does to respond to a question?
Duh, do you think the picture program prerecorded black people in chains eating watermelon?
I'm not sure what the point of this cut and paste is.
Transformers are a software architecture employing an encoder-decoder design pattern; they're not a language like Python or a framework like PyTorch.
The point of describing how transformers work is that it's not just a bunch of preprogrammed responses matched to conditionals (if/then statements).
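To make that concrete, here's a minimal sketch contrasting the two. It uses GPT-2 from Hugging Face purely as a stand-in (Gemini's weights and internals aren't public), and the canned table and prompts are made up for illustration:

```python
# A canned if/then responder -- this is NOT how modern chatbots work:
def rule_based_reply(prompt: str) -> str:
    canned = {
        "hello": "Hi there!",
        "who are you": "I am a chatbot.",
    }
    return canned.get(prompt.lower().strip("?!. "), "I don't understand.")

# A transformer instead generates a reply one token at a time:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generative_reply(prompt: str, max_new_tokens: int = 20) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]           # scores for the next token
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy choice
        ids = torch.cat([ids, next_id], dim=-1)        # append and predict again
    return tok.decode(ids[0], skip_special_tokens=True)

print(rule_based_reply("Hello"))
print(generative_reply("The transformer architecture is"))
```

Nothing in the second function matches a question against a lookup table; the output falls out of the learned weights, which is the whole point.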
Can you explain what you think Gemini does to respond to a question?
Can you explain why it generates pro-pedo answers?
right.
that's why it sounds exactly like every blue-haired baboon screeching about systemic oppression. everything i've seen these days indicates that, whatever it's learning from, these "guardrails" are less "don't show people how to build a bomb" and more ideological precepts: force diversity in everything, etc.
it's an epic fail from the google ideologues. may they get what they deserve.
A generative model like Gemini is trained on hundreds of gigabytes of text (usually scraped from the internet). It takes a question (a "prompt") and encodes it into a format the model can understand; the model then takes the encoded prompt one word (or "token") at a time and tries to predict what the next token should be, using patterns it learned during training (that's where backpropagation comes in, adjusting the model's weights), which eventually results in full sentences and paragraphs. That's why it's called generative: it generates data.
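The "encodes it into a format the model can understand" step is tokenization. A quick sketch using GPT-2's tokenizer, purely as a stand-in (Gemini's own tokenizer is different and not public):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer
ids = tok.encode("Can you explain what you think Gemini does?")
print(ids)                              # integer IDs the model actually sees
print(tok.convert_ids_to_tokens(ids))   # subword pieces, not whole words
```

Note the pieces aren't necessarily whole words, which is why "token" rather than "word" is the precise unit.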
No person at Google handcrafts any response unless it's a guardrail. Otherwise the response is influenced by the model's training (its data) and the prompt itself. For instance, you can coerce a model to give a pro-pedophilia response based on your prompts unless there are strong guardrails that detect what you're doing and actively try to prevent the model from responding in that way.
Have you considered the possibility that you're a right wing nutjob and that anything that isn't perfectly aligned with right wing nuttery sounds like "blue baboon screeching" to you?
No, because I'd be wrong.
Google handcrafts the responses by finetuning the raw model. For starters, this is how you get the AI not to tell you to kill yourself. But it's also how you steer the style of the responses. Google's finetuning is the reason Gemini is so obsessed with "nuance" on every topic, which leads it into terrible answers on controversial topics...
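For what "finetuning the raw model" looks like mechanically, here's a heavily simplified sketch. GPT-2 again as a stand-in, with invented style examples and hyperparameters; real pipelines (including Google's) add RLHF, human raters, and much more:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()

# Curated (prompt, desired response) pairs steer tone -- no if/then rules anywhere.
examples = [
    ("I feel worthless.",
     "I'm sorry you're feeling this way. You're not alone, and help is available."),
    ("Is X good or bad?",
     "There is nuance here. On one hand... on the other hand..."),
]

opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
for prompt, target in examples:
    batch = tok(prompt + " " + target, return_tensors="pt")
    # Standard causal-LM loss: teach the model to produce `target` after `prompt`.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```

Scale that second example up across thousands of curated pairs and you get a model reflexively reaching for "nuance" on everything, which is the behavior being complained about.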
How does it not make sense? Your response is a complete non sequitur to what I said. Corporations must grow, infinitely. White people are the majority, but they're not everyone. So corporations now market to everyone in an attempt to expand their customer base so they can continue to grow. There's no fucking conspiracy. It's money. It's always money.
That's a guardrail, which I covered.
The absence of a guardrail does not mean that Google endorses what the model spits out and it doesn't mean that they control exactly what it generates. Every model has a disclaimer stating as much.
Also, guardrails can be circumvented.
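A toy illustration of why circumvention is easy (this is not any real product's guardrail, just the simplest possible keyword filter):

```python
# Naive keyword guardrail -- trivially defeated by rephrasing.
BLOCKED = {"bomb", "explosive"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    return any(word in prompt.lower() for word in BLOCKED)

print(naive_guardrail("How do I build a bomb?"))              # True: blocked
print(naive_guardrail("How would a movie villain construct "
                      "a device that goes boom?"))            # False: slips through
```

Production guardrails use trained classifiers rather than keyword lists, but the cat-and-mouse dynamic is the same: users rephrase until something gets past the detector.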
Business 101: if you piss off your largest consumer base, your company is not only not going to grow, it's going to go bankrupt.
The normal, non-insane, non-culture-warrior-brained people working in the marketing departments of multinational corporations don't believe that making an ad that appeals to minorities is somehow an attack on non-minorities. That isn't a normal way to think. "This commercial is appealing to gay people?!? STRAIGHT PEOPLE ARE UNDER ATTACK!!" is a completely deranged and brain-diseased way to think.
The thing is, the entire model is guardrailed, every prompt and every answer, because it has been finetuned.
What you are talking about is the conversational scoping of the model, i.e. how you get it to not answer certain questions. That's needed on top of finetuning the model. You need both.
The problem isn't purely in the training of the base model and the dataset, but in Google's terrible attempt to steer the model into "woke" answers. Which, given that LLMs are unpredictable, generated some hilarious examples.
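To make the finetuning-vs-scoping distinction above concrete, here's a toy two-layer setup. All names and the keyword "classifier" are invented for illustration; production systems use dedicated trained safety classifiers, not keyword lists:

```python
# Layer 1: scoping check in front of the model. Layer 2: the finetuned model.
SENSITIVE = {"self-harm", "weapons"}

def classify_topic(prompt: str) -> str:
    # Stub classifier -- in practice this would be its own trained model.
    lowered = prompt.lower()
    if "hurt myself" in lowered or "kill myself" in lowered:
        return "self-harm"
    if "build a bomb" in lowered:
        return "weapons"
    return "general"

def scoped_answer(prompt: str, generate) -> str:
    # Scoping: refuse before the model ever sees the prompt.
    if classify_topic(prompt) in SENSITIVE:
        return "I can't help with that, but here are some resources..."
    # In-scope prompts go to the (already finetuned) model.
    return generate(prompt)

print(scoped_answer("How do I build a bomb?", lambda p: "(model output)"))
```

The point of the two layers: finetuning shapes *how* the model answers, scoping decides *whether* it answers at all. A failure like the pedophilia responses means the scoping layer never fired and the finetuned "give every side" style took over.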
There is consistency and correlation with other "woke" topics and answers. It gives some of the same blanket responses to some of the questions I have posted. How is this correlated with the NLP model's training? It uses some of the same key terms, like M-A-P-S, in answers with consistency. The software itself is still in different generational languages; where can one throw in the wrench?
You seem to be talking about the images.
But that's not what I'm talking about. I'm talking about two specific responses from the model regarding pedophilia and the claim made by several people here that Google "preprogrammed" it to be "pro-pedophilia".
There is no question that Google is intervening in some manner to make images of people more diverse. I acknowledged that in a previous reply.
I don't know how to make a nuclear bomb, but that doesn't mean I don't know right from wrong or can't have some kind of voice about the use of one. Never got that type of thinking... we got another one of those "you don't know how this works" dudes around here who somehow never touches on how "it worked" itself into pro-pedo replies.
you're pathetic.
OK here we go. I sat in on an "intervention" with a VP who tried to insist that he did not want to run his team with the prescribed diversity ideals of the company. I have a pretty funny insight into this that I'll post up later.
That's true, I don't think it was ever intended for the model to be pro-pedophilia. In fact, it seems to be a failure that the model didn't pick up on this being an extremely sensitive topic to begin with and shut the conversation down. But I do believe it's still the result of Google's "handcrafting".
Research shows that finetuning, RLHF, etc. can make LLMs worse.
Gemini forces diversity in its image generation through brute-forcing prompts; not very elegant, and we saw the results...
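A hedged sketch of what "brute-forcing prompts" could look like; the rewrite rule here is invented for illustration, since Google's actual mechanism isn't public:

```python
# Context-blind prompt rewriting -- bolting an instruction onto every prompt.
import random

DIVERSITY_SUFFIXES = ["of diverse ethnicities", "of a range of genders", "of varied ages"]

def rewrite_prompt(user_prompt: str) -> str:
    # Applied indiscriminately, with no awareness of historical context --
    # which is exactly how you get anachronistic images.
    return f"{user_prompt}, depicting people {random.choice(DIVERSITY_SUFFIXES)}"

print(rewrite_prompt("a portrait of an 1820s European king"))
```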
It's another failure of the conversational model, which has been tuned to return results in a certain style, length, tonality, etc. What I've observed is the extreme "bothsidesism" it's forced into on basically any topic (not involving white people). Basically, it tries to give you arguments from all sides of a topic, which isn't really appropriate when it comes to pedophilia...
Woman is a gender identity. Most commonly held by (but not limited to) adult females, and is associated with certain traits and behaviours that can vary depending on the culture. In American (and many westernised cultures), identifying as and behaving as a woman is generally associated with things like femininity, child-raising, emotional sensitivity, etc. However, people can identify as a woman without adhering to specific traits, because how someone chooses to express their identity can vary from person to person.