Newspapers do that because they are liable for what they publish. Twitter is not liable for what it publishes. Why should it not be liable? The answer is because they are just providing a platform and others are publishing. But the moment they use their platform to modify and censor what people publish, then they should probably be liable, right?
If someone uploads their library of child porn encoded to base64 split across tons of tweets, do you want Twitter to have a choice between removing that content and continuing to operate?
We have 3 options here:
1. No moderation allowed whatsoever on a site without a court order. That obviously leads to a terrible, toxic community with lots of reprehensible content that the average person wouldn't want to participate in.
2. A good faith effort at moderation. This allows the most reprehensible, highest-impact content to be removed and allows users to participate in the moderation process.
3. No content can be published without moderation, on any site anywhere. Want to post a Facebook status? Have fun paying $20 for the privilege of waiting 48 hours for a human to review it.
All of this is irrelevant though, because this executive order is not targeted at censorship. It's targeted at a private individual who voluntarily, for free, passed on a message from one person to many other people and decided to tell them "this seems fishy, you might want to read up on it."
Twitter is actually required by law to remove child porn; nobody is forcing them to keep it up, that would be insane. This isn't a call for no moderation, it is a call for a neutral platform when companies are on the verge of being monopolies. And they definitely monopolize your followers: switching platforms isn't even a choice. You can spend your entire life building a channel and an audience on a social media site, and they can take it away in a heartbeat without even giving you a chance to let your followers know what happened, email them, or tell them where to find you going forward. That's not the worst part. The worst part is that they can and do do this without any reason whatsoever; it could be just because someone at the company dislikes what you said.

Again, this is not about moderation. This is about companies that were built on being a neutral platform that cannot be sued for liability, like a phone company, but that now use their monopoly over your followers' information as leverage to control what you say. If anything, what really needs to be in legislation is the right to notify your followers where else they can find you if you are banned or censored.
There are many more options than that, and law is notorious for having grey zones where the complexity of context, intent, and outcomes matters.
In the specific example of child porn, is the material itself protected speech or a copyrightable work? To my knowledge, no, it is not.
However, telling someone "this seems fishy, you might want to read up on it", attached to someone else's copyrighted work, is to me speech. It is also a copyrightable work if it's original enough. It could also be defined as a derivative work if it includes major copyrightable elements of the original, which in this context is likely.
The difference between removing child porn and creating a derivative work is one that I don't think courts will have any trouble distinguishing. Both may end up being described as moderation, but the outcome, intent, and context are very different.
Why have a Supreme Court, or even ordinary judges, if laws are only allowed to be this binary? Why not option 4: you are allowed to censor if you give a reason and that reason is in your TOS, but if you, for example, edit ("fact check"), censor, or manipulate the votes of your political opponents, you lose your libel protections.
Twitter isn't liable for illegal content posted by their users, as long as they take it down in time and make good faith efforts to keep it from being posted in the first place. If they weren't free from liability then a service like Twitter would need heavy human moderation and be extremely expensive to operate - perhaps it wouldn't exist at all.
That's the only reason this non-liability exists. It has nothing to do with moderation or censorship. Twitter, like any other web property, has the right to curate its platform and make it pleasant for its other users. It's their private property.
But newspapers aren't required to express support for particular political candidates or viewpoints. They can get into trouble for publishing libel, i.e. maliciously presenting false statements as fact, but the criteria for what constitutes libel are very narrow. A newspaper can certainly publish an opinion like "Politician X is a fool whose proposal should be ignored."
> But the moment they use their platform to modify and censor what people publish, then they should probably be liable, right?
They've always done some editing and removal of certain content.
In this particular case, was anything modified or censored though? It seems more like Trump had his say, and Twitter had theirs. Is Trump saying that Twitter can't also express themselves on their platform?
> But the moment they use their platform to modify and censor what people publish, then they should probably be liable, right?
There are degrees to moderation, but not to liability. That black-or-white approach doesn't seem appropriate: platforms should be liable to a degree proportional to the moderation they introduce.