What moderation policies should Social.Coop adopt toward Fediverse instances run by for-profit corporations?
In brief
Social.Coop has adopted a policy that Threads.net content should be moderated at the "Limit" level, which means that content posted by accounts on Threads.net is hidden from Social.Coop members except those who follow the account in question. All of the content is still there, and it can still be found via search, mentions, and following, but it is not publicly visible.
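(For the technically inclined: "Limit" is what Mastodon's admin API calls a domain block with severity "silence". Here is a minimal sketch, assuming Python with the requests library and a token with admin write permissions; the instance URL and token below are placeholders, not our actual tooling.)

```python
# Minimal sketch: how a Mastodon admin applies the "Limit" level, which
# the admin API represents as a domain block with severity "silence".
# The instance URL and token are placeholders.
import requests

INSTANCE = "https://social.coop"  # placeholder
TOKEN = "ADMIN_ACCESS_TOKEN"      # placeholder; needs an admin write scope

resp = requests.post(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "domain": "threads.net",
        "severity": "silence",  # shown as "Limit" in the Mastodon UI
    },
)
resp.raise_for_status()
print(resp.json())  # the created domain block record
```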
We do not, however, have any policy on what our relationship should be with instances run by for-profit corporations more generally. An explicit policy on how the Community Working Group (CWG) should handle for-profit instances would streamline future decision-making and clarify, for users of differing opinions, whether Social.Coop is the right choice for their long-term home on Mastodon.
It would therefore be valuable to discuss where our membership comes down on this question, and to work toward a written policy.
Longer context
Prior to Facebook's/Meta's creation of Threads.net in 2023, I believe that federated social media consisted almost exclusively of open software platforms run in a distributed fashion, and generally not for profit. Individuals would sometimes release cross-posting tools/robots that used the APIs of centralized corporate platforms to relay content from those platforms onto "Fediverse" platforms, but this was not core functionality.
As such, up until 2023, the question of how Mastodon and other federated platforms would relate to corporate platforms was largely a moot point. Different instances did have different policies on bridges and cross-posting robots, but my sense is that it was generally understood that Mastodon and corporate social media were separate things. To a large extent, I believe that this is still the case; Threads.net has not in fact implemented full federation using ActivityPub. However, Meta's 2022/2023 announcement of plans to adopt ActivityPub prompted immediate and strong responses from the Fediverse community: some saw this as an opportunity to connect with the much larger number of people who use corporate platforms, while others expressed alarm, pointing to a long history of problematic behavior by Facebook/Meta.
Brian Vaughan introduced the question of whether Social.Coop should join Vanta Black's Anti-Meta Fedi Pact in June 2023 (https://www.loomio.com/d/AZcJK6y2/discussion-support-the-anti-meta-fedi-pact). After extensive discussion, we decided not to join the pact, but instead to impose a minimum moderation level of Limit on Threads.net. One of the many themes running through that discussion was the question of how we see the role of for-profit platforms in the Fediverse.
There was also a related discussion of whether we could formalize a policy that captures the objections to Threads.net in a generalizable form, including explicit norms for respect and policies about privacy and users' control of their own data. Several of our members began writing this up in the collaborative document that Eduardo Mercovich links to here: https://www.loomio.com/d/W6tL5cvp/the-bluesky-bridge/34
Most recently, Sam Whited and Flancian have raised questions about how to moderate bridge instances that permit interoperability between Social.Coop and Bluesky (a very new platform that so far has much better moderation policies and a better social track record than the giants Facebook/Meta and Twitter/X, but which is also run by a for-profit public benefit corporation). The discussion about the Bluesky bridges is here: https://www.loomio.com/d/W6tL5cvp/the-bluesky-bridge
Inevitably there will be future questions about moderation of corporate instances, as well as bridges with such instances, and I think it would be valuable not to have to discuss each case individually.
Dynamic Sun 8 Sep 2024 12:37PM
"If individual users must actively opt out (or opt in) is it the responsibility of individuals to keep track of new corporate instances that emerge and to change their settings accordingly, or should Social.Coop work to support a process to track and keep users informed about the emergence of new corporate instances that we federate with?"
I included this question in my list above, but in reality I don't think it's realistic to put this labor onto either individual users or Social.Coop as an organization. I do think the question illuminates part of why these moderation questions are so fraught.
Individual Mastodon users have all the same moderation tools that are available at the instance level, but taking positive action on every new instance that you might [not] want to connect with would be a huge amount of work.
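As a rough illustration of that per-user tooling, here is a minimal sketch of the user-level equivalent, using Mastodon's standard domain block endpoint; the instance URL, token, and example domain are placeholders.

```python
# Minimal sketch: an individual user hiding a whole domain for
# themselves via Mastodon's user-level domain block endpoint, the
# per-user analogue of the instance-wide settings discussed above.
import requests

INSTANCE = "https://social.coop"  # placeholder
TOKEN = "USER_ACCESS_TOKEN"       # placeholder; needs write:blocks scope

resp = requests.post(
    f"{INSTANCE}/api/v1/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"domain": "corporate.example"},  # hypothetical domain
)
resp.raise_for_status()  # success returns an empty JSON object
```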
Justin du Coeur Sun 8 Sep 2024 3:57PM
My problem here is that "for-profit" is a squishier and more arbitrary line than it sounds like on the surface. I mean, sure -- a general policy for billion-dollar multinationals might make sense. But that's just the tiniest fraction of "for-profit" companies, the vast majority of which aren't rapacious giants. (Or even greedy startups run by techbros.)
An example or two that I'm personally involved with:
On the one hand, I'm on the Board (and really, effectively the chair) of a company that is legally a for-profit -- the NE Scala conference. This is a small co-op conference that ran for a number of years; it was interrupted by the pandemic, but we still hope to get it restarted. It's all volunteer-run, and none of the Board ever touch the bank account for personal purposes. It should be a 501(c)(3) non-profit, but it's a huge headache to get to that point, and we're too small to have the resources to do so easily. But it needs to have a corporate identity, because that's necessary for legal safety in the US. So it's currently an LLC, and we just try to run basically at breakeven.
OTOH, there's Querki. That is technically a startup -- an actual C-Corp, registered in Delaware and everything, of which I'm the CEO, and have been for 12 years. That doesn't mean what people think it does, though. I'd love to make some money from that someday, but in practice it's never made a cent in revenue, is completely open source, and has no venture backing because I'm not willing to be unethical enough to go in that direction.
Both of those are, legally, "for-profit companies". But both are very much passion projects run by people who want to see these things happen -- similar in spirit, if not legal structure, to much of the coop movement we're supposed to be supporting here.
(This isn't even a completely idle example. Querki has a built-in conversation system, and in principle I'd love to bridge that into ActivityPub, if I ever figure out what that ought to look like.)
Contrast that with the reality that there are plenty of corrupt not-for-profits that really exist to line the pockets of their C-suite, and which do arguably unsavory things. I mean, let's not forget that OpenAI was (maybe still is, I haven't been tracking those shenanigans) technically owned by a non-profit -- I don't think anybody here particularly wants to give them a free pass just because of that non-profit status.
So while it's a plausible consideration to factor in, I don't think having a firm policy for for-profit corporations and another for non-profits is appropriate. The reality is that "for-profit" doesn't always (or even usually) mean "rent-seeking avaricious capitalist" -- it's just that the biggest for-profit companies mostly lean in that direction.
Dynamic Sun 8 Sep 2024 4:18PM
@Justin du Coeur
Might this instead point toward something akin to the Posifedi Pact document?
I linked to the shared document above, but here's the link again because I feel like links easily get lost in these kinds of conversations: https://cryptpad.fr/pad/#/2/pad/edit/5xxQZvxvjfAP2eQTYenLuWOZ/
Justin du Coeur Sun 8 Sep 2024 4:35PM
@Dynamic That certainly seems like a reasonable guideline, although it still requires a good deal of judgement on the part of the CWG. (Might be as good as we're likely to get; I dunno.)
It does have a clear implication that we're making a distinction between services that have somewhat active moderation vs those that don't -- "free speech" absolutist sites (whether sincere or Xitter-style right-wing BS) clearly fail. That's a bit subtle, but it's an interesting and relevant way to think about the problem.
Dynamic Sun 8 Sep 2024 4:39PM
@Justin du Coeur
FWIW, I believe the Posifedi Pact in its current form is intended to be a document in progress. Perhaps @Eduardo Mercovich can provide additional context.
Eduardo Mercovich Mon 9 Sep 2024 10:54PM
@Dynamic Hi everyone. Yes, it is a work in progress, and it is actually going to become a project proposal in a couple of weeks. It is not public yet since it is still taking form, but the idea is to form a group of instances that share values and norms and to collaborate as a federation to create safer, more welcoming, more diverse, and richer Fediverse spaces while diminishing the moderation effort (to start).
If it helps, a temporary code name is The Ekumen (see Ursula K. Le Guin's Hainish Cycle, https://en.wikipedia.org/wiki/Hainish_Cycle). :)
If someone is interested in participating, please tell me or @Flancian so we can add you when this thing coalesces, hopefully in 1 or 2 weeks. :)
Dynamic Mon 9 Sep 2024 11:04PM
@Eduardo Mercovich
What does it mean to say that it is going to be a project proposal?
Eduardo Mercovich Mon 9 Sep 2024 11:31PM
@Dynamic Sorry I wasn't clear. I never meant a Social.Coop project; rather, I am organizing all the information in order to present it clearly to those who could be interested: mission, background, a plan to go forward, etc. :)
[It is night here and I will be offline almost all day tomorrow; if questions arise, I will happily reply on Wednesday.]
Zane Selvans Sun 8 Sep 2024 4:22PM
I don't think "for-profit" vs. "non-profit" will be a good general purpose discriminator between problematic and good-faith instances. For-profit companies that do a good job of aligning their interests with users -- like a small business without outside investors providing a subscription service so users are actually the customers -- are going to behave very differently from companies with a business model resembling Facebook or Twitter -- where users are the product, being sold out to generate returns for external investors.
Dynamic Sun 8 Sep 2024 12:29PM
Here are some smaller questions that I think are folded into the overarching question above:
- Do we need additional policies, or are our existing policies sufficient?
- If we are to formulate additional policies, should they explicitly address whether instances are run by for-profit corporations, or instead be framed in terms of the behaviors exhibited (as I believe is suggested by the document that some of our members began to draft as the Posifedi Pact here: https://cryptpad.fr/pad/#/2/pad/edit/5xxQZvxvjfAP2eQTYenLuWOZ/)?
- If we do have policies on federation with / moderation of for-profit instances, what default policies do our members want and expect? For example:
  - If some of our users want to be able to interact with for-profit instances and others do not, should the default be to allow interactions for everyone (with the option to opt out) or to disallow interactions for everyone (with the option to opt in)?
  - If individual users must actively opt out (or opt in), is it the responsibility of individuals to keep track of new corporate instances that emerge and to change their settings accordingly, or should Social.Coop work to support a process to track and keep users informed about the emergence of new corporate instances that we federate with?
  - When available technology does not support the preferred degree of flexibility for Social.Coop users, should we err in the direction of caution or in the direction of permissiveness?