Steven Crowder demonetized

Section 230 is intended for impartial platforms/providers/hosts (whatever word @Ruprecht wants us to use). The second YT removes content, they are no longer impartial.
Where do you get the impression that they have to be impartial in the way you are describing? Nothing suggests to me that was the intention of the statute. Here's the exact wording:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
And right under it...
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A)
any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B)
any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).[1]
That's what YT is doing and they are within their rights.
So make the argument that he did . . . the only thing Crowder is guilty of is taunting Maza IMO.

Yet he still wasn't found to have violated their ToS . . .
I don't watch Crowder's content because I think it's ash, but I'd be surprised if he hasn't called trans persons mentally ill, which also violates those guidelines.
 
And users are able to decide for themselves what content they want to view . . . back when I was more active on there I routinely got to validate moderation of comments.

Are you truly against users deciding what we view?

There's a user-based system for moderation based on activity, but their paid editors have unlimited moderation points. Also, even a theoretical complete democratisation of moderation isn't the same as individual user decisions; it just means majority rules. The more focused an audience is, the worse that becomes. That's just going to polarise online media further.
 
There's a user-based system for moderation based on activity, but their paid editors have unlimited moderation points. Also, even a theoretical complete democratisation of moderation isn't the same as individual user decisions; it just means majority rules. The more focused an audience is, the worse that becomes. That's just going to polarise online media further.

Regardless, leaving content up and letting users decide what they want to see and what they want to mute/block/hide makes more sense and seems much more impartial to me.
 

@Kafir-kun your quote code is goofed up so I had to tag you.

Those rules you mention are developed by the platform. They seem to change on a whim and are inconsistently enforced.

That's a big part of the issue IMO.

And good for you being adult enough to decide you aren't watching Crowder's channel. Interesting how that works.
 
Those rules you mention are developed by the platform. They seem to change on a whim and are inconsistently enforced.

That's the issue.
How is that the issue though? They are a platform exercising their rights to moderate their own service. I can understand being unhappy with it; I have been unhappy with YT for years now based on how they manage their platform. But that doesn't mean they aren't legally entitled to the protection of Section 230.
And good for you being adult enough to decide you aren't watching Crowder's channel. Interesting how that works.
I personally just don't watch stuff. The most I'll do is dislike it. But I also understand that a community like YT doesn't work if people don't flag objectionable content. So I am free-riding off of those who do the good moderation work.

And honestly, your position here seems inconsistent. You say that moderation should be performed by users, but also that users should not moderate; they should just avoid what they don't want to watch. Which is it? Should a platform have any moderation in your view, and if so, by whom?
 
Regardless, leaving content up and letting users decide what they want to see and what they want to mute/block/hide makes more sense and seems much more impartial to me.

That means you're mandating that any hosting service (I'm assuming in print, video, or audio) has to democratise their terms and conditions and effectively their business model. It's unworkable.
 
How is that the issue though? They are a platform exercising their rights to moderate their own service. I can understand being unhappy with it; I have been unhappy with YT for years now based on how they manage their platform. But that doesn't mean they aren't legally entitled to the protection of Section 230.

How is a platform playing both sides and no longer remaining impartial not an issue? Isn't Section 230 intended to protect an impartial platform that presents ALL information and gives as much control as possible to the individual users?

I personally just don't watch stuff. The most I'll do is dislike it. But I also understand that a community like YT doesn't work if people don't flag objectionable content. So I am free-riding off of those who do the good moderation work.

I'm definitely not on there like my daughters are for sure . . . I'll subscribe to various gun related channels and watch product reviews, but I'm not out there searching for content that I disagree with or know I don't like/support. And I'm definitely not out there searching for that content to flag it or report it.

And honestly, your position here seems inconsistent. You say that moderation should be performed by users, but also that users should not moderate; they should just avoid what they don't want to watch. Which is it? Should a platform have any moderation in your view, and if so, by whom?

I don't understand what you're not getting here. I've said IF moderation happens it should ONLY be done by the user population, similar to the Slashdot model. You would have the ability to set a default score/rating for content that you wouldn't ever see. You the user would have control of what you see or don't see.
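
To spell out what I mean, here's a rough Python sketch of a Slashdot-style setup. Every name and number here is invented for illustration; it's my reading of the model, not anybody's actual code.

from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    score: int = 0                      # community score, clamped like Slashdot's -1..5
    mod_log: list = field(default_factory=list)

def spend_mod_point(video, delta, reason, points_left):
    # A user spends one of their limited moderation points to nudge a score.
    if points_left < 1:
        raise ValueError("no moderation points left")
    video.score = max(-1, min(5, video.score + delta))
    video.mod_log.append(reason)
    return points_left - 1

def my_view(videos, threshold):
    # Each viewer sets their own floor; low-scored items are hidden from
    # that viewer only, never deleted from the platform.
    return [v for v in videos if v.score >= threshold]

Set your threshold to -1 and you see everything; set it to 5 and you only see what the community rates highest. Either way it's your call, not the platform's.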
 
That means you're mandating that any hosting service (i'm assuming in print, video or audio) has to democratise their terms and conditions and effectively their business model. It's unworkable.

I guess I could be overestimating how simple this could be . . . or maybe we're not on the same page.

Seems to work for AT&T. Seems to work for Verizon. I may get robocalls or other harassing phone calls, but I have the freedom to not answer or block the caller.
 
How is a platform playing both sides and no longer remaining impartial not an issue? Isn't Section 230 intended to protect an impartial platform that presents ALL information and gives as much control as possible to the individual users?
Section 230 does not specify that it only protects impartial platforms that present all information. In fact, right afterwards the law affirms their right to good-faith moderation, even of constitutionally protected material. So merely moderating does not make one ineligible for protection under Section 230.
I'm definitely not on there like my daughters are for sure . . . I'll subscribe to various gun related channels and watch product reviews, but I'm not out there searching for content that I disagree with or know I don't like/support. And I'm definitely not out there searching for that content to flag it or report it.
That's fine, but some people do moderate and flag content, and that is not necessarily bad. In fact, if done in good faith, it is actually a good thing for any site.
I don't understand what you're not getting here. I've said IF moderation happens it should ONLY be done by the user population, similar to the Slashdot model. You would have the ability to set a default score/rating for content that you wouldn't ever see. You the user would have control of what you see or don't see.
So the content doesn't even get removed, just hidden? So neither the users nor the platform has the ability to remove content?

I don't think that system would work very well at all, I am glad YT does not use it. And I see no reason to think that YT is legally obligated to use such a system to maintain its protection under section 230.
 
I guess I could be overestimating how simple this could be . . . or maybe we're not on the same page.

Seems to work for AT&T. Seems to work for Verizon. I may get robocalls or other harassing phone calls, but I have the freedom to not answer or block the caller.

AT&T's and Verizon's phone networks aren't open-access content funded by advertising, and while AT&T's subscription-based HBO might be more relaxed with its content (compared to advertising-driven networks), it's certainly not user produced and determined.
ISPs could easily be treated as utilities, maybe even infrastructure like Amazon S3/Cloudflare, but actual hosts of user-created content? Not so much.
Alternatively, if you require them to be legally responsible for everything published on their product, that may rule out open hosting altogether. I don't think they could even effectively screen for libel on an international basis with current technology and resources. Even if such technology is developed, it will be a huge barrier to entry if it's required.
 
How is that the issue though? They are a platform exercising their rights to moderate their own service. I can understand being unhappy with it; I have been unhappy with YT for years now based on how they manage their platform. But that doesn't mean they aren't legally entitled to the protection of Section 230.
There's an issue I saw this morning where IraqVeteran (dude is a gun channel) showed his log of a video that he hasn't even PUBLISHED yet but that has already been demonetized. Meaning, there was no way for a viewer to flag it, yet it's already been demonetized. That seems like an out-and-out obvious pre-determined stance on stuff.

This is the sort of thing he does:
 
There's an issue I saw this morning where IraqVeteran (dude is a gun channel) showed his log of a video that he hasn't even PUBLISHED yet but that has already been demonetized. Meaning, there was no way for a viewer to flag it, yet it's already been demonetized. That seems like an out-and-out obvious pre-determined stance on stuff.

This is the sort of thing he does:


They use an algorithm at upload to search for controversial topics. Same reason a lot of content gets demonetised.
Just have a look at their guide.
Definitely any links to or mention of firearms retailers gets you demonetised / a strike, though.
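
My guess at the shape of that upload-time screening, as a Python sketch. The topic list and terms below are made up; the real thing is presumably a trained classifier over the whole video, not a keyword match on metadata.

SENSITIVE_TOPICS = {
    "firearms": ["gun", "rifle", "ammo", "firearms retailer"],
    "violence": ["shooting", "combat footage"],
}

def screen_upload(title, description, tags):
    # Scan the creator-supplied metadata before the video is ever public.
    text = " ".join([title, description, *tags]).lower()
    return [topic for topic, terms in SENSITIVE_TOPICS.items()
            if any(term in text for term in terms)]

flagged = screen_upload("Budget AR-15 build", "Parts list below", ["guns"])
if flagged:
    print("demonetised at upload, before any viewer could flag it:", flagged)

Which would explain the IraqVeteran situation above: no viewer reports involved, just whatever the upload itself trips.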
 
They use an algorithm at upload to search for controversial topics. Same reason a lot of content gets demonetised.
Just have a look at their guide.
Definitely any links to or mention of firearms retailers gets you demonetised though.
Tis dumb imo. Oh well...
 
Section 230 does not specify that it only protects impartial platforms that present all information and in fact right afterwards the law affirms their right to good faith moderating, even of constitutionally protected material. So merely moderating does not make one ineligible for protection under section 230

As I understand the various summaries of 230, it was specifically passed to prevent overmoderation driven by a fear of liability for content.

That's fine, but some people do moderate and flag content, and that is not necessarily bad. In fact, if done in good faith, it is actually a good thing for any site.

Yes . . . that should be done by you, me and the user community IMO . . .

So the content doesn't even get removed, just hidden? So neither the users nor the platform has the ability to remove content?

Correct. Obvious exceptions would be child porn, etc.

I don't think that system would work very well at all, I am glad YT does not use it. And I see no reason to think that YT is legally obligated to use such a system to maintain its protection under section 230.

Why wouldn't a system work where you're able to select the categories you're interested in viewing and then adjust your individual account settings to hide videos below a given 1-5 star rating?

I never said they were legally obligated to do anything.

All I'm saying is that YOU should decide for yourself what content you choose to view. Not me. Not YT. Not Twitter.
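
To make that concrete, here's the kind of account setting I'm picturing, as a Python sketch. The field names and numbers are invented; it's an illustration of the idea, not a proposal for YT's actual code.

def my_feed(videos, settings):
    # videos: dicts like {"title": ..., "category": ..., "stars": ...}
    # Anything outside your chosen categories or below your star floor is
    # hidden from you alone; it stays up for everyone else.
    return [v for v in videos
            if v["category"] in settings["categories"]
            and v["stars"] >= settings["min_stars"]]

settings = {"categories": {"firearms", "product reviews"}, "min_stars": 3}

You tune the dial for your own account; nothing gets taken down on anyone else's behalf.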
 
AT&T's and Verizon's phone networks aren't open-access content funded by advertising, and while AT&T's subscription-based HBO might be more relaxed with its content (compared to advertising-driven networks), it's certainly not user produced and determined.
ISPs could easily be treated as utilities, maybe even infrastructure like Amazon S3/Cloudflare, but actual hosts of user-created content? Not so much.
Alternatively, if you require them to be legally responsible for everything published on their product, that may rule out open hosting altogether. I don't think they could even effectively screen for libel on an international basis with current technology and resources. Even if such technology is developed, it will be a huge barrier to entry if it's required.

Look, all I'm advocating for is giving us, the consumers, the control to determine what we view. Let advertisers determine if they want to monetize a firearms video regardless of whether YT or Google supports firearms.

Folks who like various content will support it.
 
Look, all I'm advocating for is giving us, the consumers, the control to determine what we view. Let advertisers determine if they want to monetize a firearms video regardless of whether YT or Google supports firearms.

Folks who like various content will support it.

Well, advertisers do have the option to advertise on "sensitive social issues" (which includes firearms), but they generally don't (although I don't think YouTube accepts advertising from firearms manufacturers or accessory manufacturers).
Advertisers are YouTube's consumers; the viewers are the product.

[Image: YouTube ad placement settings showing excluded content]

[Image: YouTube ad placement settings showing excluded types and labels]
 
Well, advertisers do have the option to advertise on "sensitive social issues" (which includes firearms), but they generally don't (although I don't think YouTube accepts advertising from firearms manufacturers or accessory manufacturers).

Well, there you go . . . I wonder why they choose not to take advertising from firearms-related companies? Those folks would pay a ton to support educational channels or reviews of their products.

Advertisers are YouTube's consumers; the viewers are the product.

Select advertisers maybe . . . so again, is YT a platform/host/provider or a publisher?
 
Well, there you go . . . I wonder why they choose not to take advertising from firearms-related companies? Those folks would pay a ton to support educational channels or reviews of their products.

They still host the manufacturers' channels and advertisements though, strangely enough.

Select advertisers maybe . . . so again, is YT a platform/host/provider or a publisher?

Depends on the context and definition.
 
So do they take their money or not?

I don't think so, although I can't find a definitive statement from them.

Yep. Would be nice if YT could provide that for us . . .

It's going to depend on the legal framework they are responding to.
Their accountability to their advertisers is always going to be the deciding factor in terms of their responsibility over content, though.
Earlier this year Disney, Nestlé, and others pulled their advertising due to pedophiles commenting on children's videos on YouTube.
 