Loomio
Fri 13 Jan 2023 6:28AM

Framework for Moderation

ED emi do Public Seen by 255

It’s been exciting to see many new tooters on social.coop, and though each of our timelines may look a little different, we are looking for ways to ensure that the social.coop corner of the fediverse continues to reflect our values and ethos as a cooperative and collaborative space. The CWG Ops team has been hard at work on our moderation duties through this large growth phase, and we have been grappling with some moderation questions.

We would like to add clarification to the encouraged and discouraged behaviours outlined in our CoC and Reporting Guidelines.

In an effort to encourage and model positive and constructive modes of discourse that can run counter to the norms on other platforms, we’d like to introduce nuance to moderation based on the guidelines outlined in this blog post: https://wiki.xxiivv.com/site/discourse.html

In this thread we will discuss different moderation challenges and based on those discussions, will propose amendments to the CoC for approval by the membership.

MS

Matthew Slater Fri 13 Jan 2023 11:17PM

I think the aim is to reduce what are being called 'moments of constriction', and to remind people of the rules and call on them to monitor themselves, because moderation is a limited resource. Calling for moderation should be the last resort.

  • tell the person why you didn't like their toot and invite them to soften it

  • block the offending person to protect yourself

  • call the moderator to protect others.

SW

Sam Whited Fri 13 Jan 2023 11:22PM

FWIW, in all of the things you're complaining about, this is what we did the first time. This isn't about moderators who overreached and deleted things when they should have talked to the user first; this is about users who repeatedly ignored the rules after being politely reminded of them and asked not to post certain content or to soften their tone. These users are hurting others, and we're aiming to protect those others; if a user ignores us, we have to take stronger action than just asking them to consider being nicer or not posting anti-vax content or whatever the case may be.

MP

Michael Potter Fri 13 Jan 2023 11:41PM

Aaron, I didn't get the impression that you were a right-wing type. Also, I like to think that I'm realistic enough that if I find an idea threatening, it's because it is. Anyway, I took anti-fascist from our own CoC: "Let there be no confusion, Social.Coop is anti-racist, anti-fascist, and anti-transphobic."

I see what you mean about groupthink and hard rules not always having the intended effect. Will the rules not apply to all situations, or would they condemn people who aren't doing anything wrong? I do believe that truth is generally verifiable: not philosophical or theological truth, but scientific truth. Certain conspiracy theories are well known, debunked, and not worth further discussion, imo. I don't think the bird app blocked disinformation out of the goodness of their hearts, but to protect themselves from liability. We might do well to consider that.

MP

Michael Potter Sat 14 Jan 2023 12:49AM

Online spaces that become popular will attract trolls. The purpose of moderation is to stop them from ruining the experience for everyone else.

AW

Aaron Wolf Sat 14 Jan 2023 1:05AM

Glad Loomio is threaded. Still, slight constriction about making sure I don't get into over-posting… I do want to clarify, trying to be concise.

I like to think that I'm realistic enough that if I find an idea threatening, it's because it is

I think a far better mental model is: "If I find an idea threatening, I need to trust my intuitions and treat it as a threat, because it's dangerous to ignore our fears." We can hold that attitude while still holding more lightly to whether we are right about the danger. We will, in general, feel righteous about our judgments while feeling threatened. If we can later get to a state of real open curiosity and equanimity, we can review from a distance whether our fears were well calibrated. If we practice this consciously as a pattern, we can grow in our confidence. I wonder if you are recognizing enough the threat that comes from being overconfident about the accuracy of our perceptions of danger.

The model I'm proposing is one that does not support inaction in the face of perceived threats. I think it's essential that potential threats be addressed sooner rather than later. I just think we need to do the minimum to address immediate perceived danger and allow for a more patient, facilitated process to finish resolving the situation after the initial reaction.

from our own CoC: "Let there be no confusion, Social.Coop is anti-racist, anti-fascist, and anti-transphobic."

For transparency, I opposed that when it was added. There were similar tensions in drafting, where people in a state of threat and constriction insisted on these trigger labels being added. I felt, and continue to feel, constriction about their presence. I read that sentence as people saying they are too scared of nuance and interpretation and feel safe only with some aspects of zero-tolerance policies; putting it in was a compromise between that view and the view from myself and others who wanted an effective but less blunt and hard-line approach. So, I want the co-op to have moderators who use best practices and make human judgments to block harm. I don't want the co-op to say that certain debates (e.g. questions of biological sex versus socialized gender, or questions about climate science) are absolutely prohibited. I see that attitude as part of the purist trend in social media that blocks constructive dialogue and growth. I do recognize the serious risk of allowing too many subtly dangerous ideas in the name of dialogue. I want our moderation methods to empower humans to adapt the practical approaches over time as needed. Anyway, this is already way too long here.

Yes, I support any necessary efforts to protect ourselves from liability.

AW

Aaron Wolf Sat 14 Jan 2023 1:11AM

I don't like the term "hate speech" for all these things. Advocating violence isn't hate speech; it's just advocacy of violence. Hate speech is specifically about targeting particular groups, and that does not include politicians as a group. It even has a legal meaning.

The Trump example is the only one I would support without any constriction myself. Some of those might be okay hidden behind a Content Warning labeled something like "rage rant". It's okay for people to express some hyperbole, especially in context. I'd like it to be acknowledged consciously and not supported as the norm of communication.

AW

Aaron Wolf Sat 14 Jan 2023 1:18AM

This [EDIT: Sam's post, Loomio isn't threaded beyond single-level apparently, darn!] is the first in this thread to explain what the OP was referring to. I didn't even notice the issue until now. Can you just give a quick summary? I'm imagining something like: a poster links to mainstream articles that are popular with anti-vaxxers even though the articles do not take an anti-vaxxer position; then people complain; the poster keeps doing it and ignores complaints; moderators step in and delete stuff. Is that right?

If so, this goes to my core points about what I want to see: more onboarding process, where new members are actively (as in conversation with some guide, a real human being) introduced to the social norms, which include making adjustments in light of complaints, not ignoring them. There's almost always some way to improve. And if someone complains egregiously often and is hassling others, moderators can deal with that pattern. I don't see how a community can maintain healthy norms and be welcoming without an active onboarding process.

I fully support moderators taking action if users ignore complaints. Furthermore, I wish there was a way to do something like hide a post behind a Content Warning even if the original poster didn't make one, because that is a medium action between merely asking the poster to take action themselves and just deleting their post.

T

tanoujin Sat 14 Jan 2023 1:27AM

Hi Matt, I'd like to look into that case if there is any documentation available - do you have a link to the evidence or related discussions?

T

tanoujin Sat 14 Jan 2023 1:57AM

I see your point, Aaron, but I think instead of hiding a post, the best practice would be to moderate proactively: adding a public mod warning before a thread gets out of control, locking it, and, yes, deleting toots that violate the CoC (https://wiki.social.coop/docs/Code-of-conduct.html). I take this from experience with forums, though, so I am not sure how to realize it in a microblogging environment.

If you want discussion about questionable toots, you will run into dead ends offering mediation, simply because (qualified!) manpower is limited and such processes will most probably fail if there is no commitment from the parties in conflict.

I could imagine handing in a moderation complaint to a committee of supervisors which handles such cases swiftly backstage, but transparently to our members. Follow-up discussions should use the usual channels, imo. (You can see Matthew Slater trying to initiate something like that above.)

T

tanoujin Sat 14 Jan 2023 2:24AM

@Ana: I have had good experiences (elsewhere) with keeping all posts in accordance with (don't laugh at me) the UDHR (https://www.un.org/en/about-us/universal-declaration-of-human-rights).

Just two examples of this minimal consensus in action:

Matthew's "Too bad Epstein wasnt hung by his balls" -> Art.5 "No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment."

A person tooting an SS flag without any sarcastic or ironic framing, showing a symbol of "the violent elite of the master-race" -> Art. 1 "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood." (and a couple more).

That makes it pretty easy, right? Instead of listing what we do not want to see, we refer to the catalogue of rights we want to see respected, at a minimum, within our reach.

MN

Matt Noyes Sat 14 Jan 2023 2:28AM

It would be great if mods could add CWs; as it is, the user has to do that (which is good in that it requires us to convince the user that it is a good idea, but it leaves us with only the option of deleting the toot if the user does not agree...)

AW

Aaron Wolf Sat 14 Jan 2023 6:31AM

I'm confused about your reply. Hiding a post (if it were possible) is proactive moderation and is specifically about taking action before a thread gets out of control. Oh, I guess you imagined hiding to be like allowing people who already saw it to go on arguing and replying to each other, not even realizing that it's now basically private? I didn't mean that. The hiding I imagine would be a temporary disabling that just allows the post to be edited and reposted (Discourse has that functionality for flagged posts, though they don't apply it proactively enough, and the default notice is way too harsh about shaming someone for getting flagged instead of helpfully encouraging them to edit and be gracious).

The context I imagine is one where parties commit to engage with constructive processes as part of joining the co-op and where norms about conflict-resolution and so on are repeatedly highlighted as part of the culture.

AW

Aaron Wolf Sat 14 Jan 2023 6:36AM

@[email protected] Sat 14 Jan 2023 3:26PM

Totally on a different angle, but having some constant education on how filters work will help here.

There are people whom I would be happy NOT to block, but if they use a twitter cross-poster I would really not like to see the retweets - not because the content is offensive to me (it may or may not be), but because it's part of a larger conversation or thread on a different site.

If I (and others) could figure out how to filter out retweets (for example) on the home, federated, and local feeds, then when some type of post bothers me I could just... not see it.
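
As a concrete illustration (only a sketch: it assumes Mastodon 4.x's v2 filters API, and the instance URL and access token below are placeholders), a personal filter that hides cross-posted retweets from the home and public timelines could be created like this:

    import requests

    INSTANCE = "https://social.coop"   # your home instance
    TOKEN = "YOUR_ACCESS_TOKEN"        # placeholder: a personal token with read/write scope

    # Create a personal filter that hides posts containing "RT @" (the marker
    # most Twitter cross-posters leave on retweets) from the home and
    # public (local/federated) timelines.
    resp = requests.post(
        f"{INSTANCE}/api/v2/filters",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "title": "Hide cross-posted retweets",
            "context": ["home", "public"],
            "filter_action": "hide",
            "keywords_attributes": [{"keyword": "RT @", "whole_word": False}],
        },
    )
    resp.raise_for_status()

The same filter can also be set up by hand under Preferences > Filters, with no code involved; the point is just that this is per-user, so nobody else's timeline is affected.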

T

tanoujin Sat 14 Jan 2023 9:51PM

emi do:

1. <<In this thread we will discuss different moderation challenges and based on those discussions, will propose amendments to the CoC for approval by the membership.>>

2. <<add clarification to encouraged and discouraged behaviours outlined in our CoC / Reporting Guidelines.>>

3. <<...encourage (and model) positive and constructive modes of discourse (that [...] can prove counter to the norms on other platforms.)>>

----------------------

ad 1.

moderation challenges: examples?

propose amendments after discussion: texts?

ad 2.

encouraged:

inclusivity, open participation / care, consideration, respect / avoid assumptions of bad faith / constructive critique / legitimacy of resistance / authenticity / deescalation -> https://git.coop/social.coop/community/docs/-/wikis/Conflict-Resolution-Guidelines-v3.1

note: missing link on the Conflict Resolution Guidelines Page

discouraged ("unacceptable")

violence, threats / harassment (incl. sexual attention, intimidation, stalking, dogpiling) / offensive, harmful, abusive comments (insults) in relation to diversity / advocating or encouraging any of the above

Reporting guidelines https://git.coop/social.coop/community/docs/-/wikis/Reporting-Guide/Reporting-Guide-v3.1

note: how to / procedural work in progress paper: missing image, missing link to CMT (?) for appeals.

ad 3. note: I see this realized in the spirit of the conflict resolution guidelines


Suggestions:

Moderators, please share examples of the challenges you are facing. Let us try to see how your examples relate to the CoC, Reporting Guidelines, and Conflict Resolution Guidelines. Let us quote text passages that we want to improve or complete, make proposals, and discuss.

Questions:

is there any documentation of moderation we can use here?

@[email protected]: does the Community Working Group have anything in the pipeline yet? What grappling with moderation questions has happened? How can the plenum support you?

T

tanoujin Sun 15 Jan 2023 5:55AM

Becoming aware that we are running into permission issues here, I am sharing my moderation experience with Flancian. Good job, @Flancian, thank you again! Let me take the opportunity to invite others to share their moderation experiences, for constructive discussion of if and how the CoC and Reporting/Conflict Resolution Guidelines should be amended.

DB

Doug Belshaw Sun 15 Jan 2023 1:20PM

I'm not sure there's much point in my adding my opinion to this thread, other than to say that I've historically blocked several people who have replied here due to their posts flouting any Code of Conduct worth having.

MP

Michael Potter Sun 15 Jan 2023 3:46PM

That is very interesting.

AW

Aaron Wolf Sun 15 Jan 2023 9:34PM

Interesting. For perspective when reading my question below, note that these cases are all new to me. I've barely posted at all or read anything (as in, I actually go to or use social.coop a couple of times a month at most, though I plan to participate more in the future), so I'm sure I haven't seen any of the relevant toots.

@Doug Belshaw Since you brought it up, can you tell us whether you did anything other than blocking? So, did you consider or were you aware of any options besides blocking? Was there any way in which anyone (you or others, moderators or not) did anything in the direction of consciousness-raising or feedback so that the members in question would have any chance to even know how others like you were reacting to their posts?

DB

Doug Belshaw Mon 16 Jan 2023 6:27AM

Yes, Aaron, blocking is a last resort for me, and I usually mute. Sadly, in the last few months I've had to mute many people on this instance.

AW

Aaron Wolf Mon 16 Jan 2023 6:58AM

Since the purpose of this thread is figuring out whether improvements can be made to conflict resolution and moderation process and policies, can you describe generally what your other efforts tend to look like and how they tend to go?

DB

Doug Belshaw Mon 16 Jan 2023 8:24AM

No, especially given you're one of the people I've historically muted. I find it inordinately tiring to (endlessly) debate this stuff, and find requests to explain why I'm opposed to what I believe to be anti-vaxxers and Nazi enablers very frustrating.

This is how good instances turn into bad ones IMHO; I'm one step away from quitting social.coop ( http://social.coop ) for the second time.

WM

Will Murphy Mon 16 Jan 2023 3:01PM

I did not see the posts in question, but I just looked up Jem Bendell's writings on COVID and this is definitely someone with a history of misrepresenting data to downplay the efficacy of vaccines.

From https://jembendell.com/2022/10/09/theyve-gone-too-far-with-the-children-so-what-do-we-do/

UK data from the Office of National Statistics for the year until the end of January 2022 showed that confidence about the effectiveness of vaccines against hospitalisation and death was premature. Although being doubly vaccinated reduced likelihood of death for most of 2021, after Omicron arrived the death rates in the doubly vaccinated but unboosted rapidly grew to higher than in those who had never been vaccinated (across all age groups). This was graphically represented by the ONS until early April 2022 when they removed the graphs, so we can see them using the wayback engine for their website at the end of March 2022. Is this an anomaly? Official data from USA finds something similar.  Vaccinated Californians had a higher rate of hospitalizations (severe illness) than those who were unvaccinated but had prior immunity from a past infection. The government did not do that analysis of its own data, but it is easy to do for yourself. That is enough reason for us to keep watching this issue – and one of the best places to observe this issue is Australia, because it still collects and releases decent data.

This is a classic data misuse technique where you start with one claim "confidence about the effectiveness of vaccines against hospitalisation and death was premature," change the topic slightly to a different question, "Vaccinated Californians had a higher rate of hospitalizations (severe illness) than those who were unvaccinated but had prior immunity from a past infection," and then cite data that actually refutes the original claim as if it supported it.

The latter citation leads to this graph, which shows that vaccination provided 6-8 fold protection compared to the unvaccinated in the Fall of 2021. Yes, for the Omicron wave specifically (but not any previous wave, BTW), those who had previously suffered COVID did enjoy similar and likely even better protection compared to those who were vaccinated, but that's a different claim. That's the question of "are the vaccines more effective than previous COVID infection?", which is entirely different from the stated claim about "the effectiveness of vaccines against hospitalisation", which the cited data clearly support.

Also consider the conclusion of the article that was cited:

What are the implications for public health practice?

Although the epidemiology of COVID-19 might change as new variants emerge, vaccination remains the safest strategy for averting future SARS-CoV-2 infections, hospitalizations, long-term sequelae, and death. Primary vaccination, additional doses, and booster doses are recommended for all eligible persons. Additional future recommendations for vaccine doses might be warranted as the virus and immunity levels change. 

So my point here is that @Matthew Slater's comment frames it like it is a foregone conclusion that the removal of Jem's posts was an undesirable moderation result, but based on this history I don't think we should accept that framing.

MS

Matthew Slater Mon 16 Jan 2023 3:45PM

Thanks for looking into this Will.

I think it is perfectly acceptable to cite data in an article which draws different conclusions from the data. I've seen papers which draw wrong conclusions from their own data because they are only written for the headlines.

I see how Bendell makes one claim and then switches to a closely related claim, but he says what he is doing and why - because the UK stopped publishing that data. I don't see what is being misrepresented here, and your assertion of 'misinformation' seems very strong. I know Bendell very well, and I know that he acts in good faith and with professionalism.

I would rather be confused and undecided about these technical matters than have them filtered for me by people who, for all I know, might have different standards of truth for different opinions. Wouldn't you agree that claims for vaccine efficacy were criminally exaggerated in the early days and hence siding with the skeptics was/is quite a reasonable course of action?

AW

Aaron Wolf Mon 16 Jan 2023 4:10PM

Doug, I don't know what conflicts we've ever even had, and I'm barely active on Mastodon. I would really love to somehow clarify misunderstandings (or whether even you are mistaking me for someone else). I'm struggling with plain text to guess how much I could even say effectively. I take it you feel exasperated right now, and I just want to express my sympathies.

I deleted a longer reply I first posted. It's for a discussion between people willing to discuss. Would you please clarify whether you are at all open to discussing here on Loomio what we as a co-op can do to improve the experience you are having in this community? I don't want to add to your frustration by presuming your willingness. I appreciate you for sharing your thoughts and feelings here.

To just point out one bit of clarity for anyone reading: I was not in any way asking for explanations about objections to vaccine-skeptics etc. I was asking whether people are doing anything besides muting, blocking, or replying. The things suggested by the conflict resolution guidelines (https://wiki-dev.social.coop/Conflict_resolution_guide) are private communication to posters (not to argue about the topic but to express feedback/requests about the conduct issues), asking for help, and filing a formal CoC report. I think we can expand that with more specific guidance and other options. So, my question was whether people are only muting, blocking, replying, or doing nothing. Are people asking for help and reporting issues, and if so, is that not working well?

MP

Michael Potter Mon 16 Jan 2023 5:05PM

I saw that in Jem Bendell's posts, too, and this is exactly the kind of thing I'm talking about. To be clear, I think the document Emi initially posted is terrific, and it's a great resource for a small group of people who are committed to enlightened interactions. Unfortunately, this group is already too large for that and we already have "challenges," so that document is like the obelisk at the start of Kubrick's 2001.

We need a more complex protocol. The question is, how do we determine if a person is willing and able to build a bridge? Frankly, I looked at Jem Bendell's posts, one of which included a link to Rumble, and I was pretty sure I wouldn't be able to build a bridge with him, or with people like Matthew Slater who are shilling for him. We'll just end up going in circles and wasting time.

I think a sound policy, or the beginning of one, could be to check posts against resources like this:

https://realityteam.org/factflood/

If a post is demonstrably false, already debunked, etc., I think moderators need to be able to issue a warning - one warning only, or we risk a bad actor stringing us along. Then, moderators delete the post (or add a content warning, if that's technically viable and appropriate). If there are repeated offenses, then the person gets kicked out of social.coop, as in excised, out the airlock. This would send a very clear message both to people testing to see what they can get away with, and to regular users who might feel that this site isn't moderating effectively.

AW

Aaron Wolf Mon 16 Jan 2023 5:16PM

Strong agreement. We need to be proactive at blocking harms right away. Then, we can have processes for restoration/resolution etc. once it's clear that contentious, risky things are limited. Otherwise, we lose people and the community degrades. What I want to see is the enlightened-small-group ideals all in place for what happens after the initial stop-harm actions.

So, newcomers should find out right away that they do not get away with almost anything problematic. However, we shouldn't just feel done at that point. We then want to onboard people with human connection and understanding, and support people in becoming constructive co-op members and building skill at discussing controversial things with due care and sensitivity.

So, it's a yes-and approach I'm suggesting, not a dichotomy of passivity vs blunt blocking as the only choices.

G

Graham Mon 16 Jan 2023 5:38PM

I've been vaguely paying attention to this discussion for a while, and haven't wanted to get involved, because it's not my field, but it feels like things are getting a bit crazy in here right now. To suggest that someone like Professor Jem Bendell is in some way a 'bad actor' seems nonsensical to me. He's a respected academic who, as far as I'm aware, is focussed on an objective and rigorous scientific approach. And to use pejorative language to describe Matthew Slater, someone who I know and have a lot of time for, indicates to me that this debate may be losing its grip on common sense.

Yes, we all want social.coop to work well and be successful, and that means having a useful policy framework that can help moderators to do their work fairly and in the spirit of cooperation and tolerance in which I hope this place is founded, encouraging users in turn to be cooperative and tolerant. If the intent is instead to ensure that social.coop stops people like Jem and Matthew from contributing then I fear that cooperation and tolerance are long gone. And if that's the case, then I will most likely go too.

AU

Ana Ulin Mon 16 Jan 2023 6:34PM

It is January 2023. We are no longer "in the early days", and the context and data around Covid vaccines and the associated discourse has vastly changed.

AU

Ana Ulin Mon 16 Jan 2023 7:33PM

Lest anyone think that @Doug Belshaw is an outlier, I'll say that I, too, have recently been forced to start muting and blocking folks in the social.coop timeline, and have started to strongly consider moving instances. social.coop has never been perfect, but it feels like we are not trending in the right direction.

I made an attempt at engaging on this thread early on, wanting to try to give Matthew Slater the benefit of the doubt. It's clear to me now that he is a right-wing troll -- "anti-woke", in his own words (for anyone who might want some context on the term, see e.g: https://www.theguardian.com/us-news/2022/dec/20/anti-woke-race-america-history).

Since this thread, despite being titled "Framework for Moderation", actually seems to be about Matthew's complaints on moderation of Bendell's posts, and since I do not think that trolls can actually be appeased, I shall do my best to stay away from this conversation from now on. (Pray for me, y'all.)

But perhaps we do need a separate, serious discussion on what is our collective stance in the face of such members.

Much love to the folks of the CWG, who have to deal with this shit all the time. 💗

AW

Aaron Wolf Mon 16 Jan 2023 7:42PM

@Matthew Slater said:

Wouldn't you agree that claims for vaccine efficacy were criminally exaggerated in the early days and hence siding with the skeptics was/is quite a reasonable course of action?

That's not an assertion that we are in the early days now. He was describing an opinion about what was the early days (i.e. the very beginning of vaccine availability in late 2020 and early 2021).

I happen to not agree with the ironically exaggerated language of "criminally exaggerated". The public messaging around covid was indeed exaggerated, flip-flopping, problematic, hypocritical… and the best views I found were from people like Zeynep Tufekci. The hyperbole of "criminal" to describe the tragedy of incompetent public messaging seems unhelpful though.

Anyway, the key point IMO is that the co-op should not take a hard-line position on discussions of vaccines but should take some sort of hard-line position on use of triggering and provocative language around identified sore-spots.

We could still make a democratic decision that vaccine-skepticism is too much of a landmine and simply ban it as a topic, while acknowledging that doing so is an unfortunately blunt decision. I would personally prefer that we mark certain topics like vaccine-skepticism as "sensitive" and require extra care, such as requiring that all mentions of identified sensitive topics get specific CWs and hashtags and avoid provocative language.

In short: there are all sorts of policies and tools that fit different levels between unhealthy tolerance (i.e. allowing harms to the community, such as turning away lots of people) and zero-tolerance.

AW

Aaron Wolf Mon 16 Jan 2023 7:52PM

I believe reactions like yours and Doug's are typical and are representative of the harms caused by allowing problem posts to keep going. The top priority IMO is to take immediate action to make sure people like you and Doug (and so many silent others) feel welcome and reassured enough to stick around.

Making sense of how fair it is to judge Matthew as a right-wing troll has to come later (and ideally wouldn't be asserted like this if we can cut off initial tensions sooner, so you don't feel exasperated). The story in my mind is that defensive frustrated people are more quick to judge and that supposed trolls also dig in defensively when threatened, labeled, and targeted as the problem (ironically escalating the conflict in their defensiveness). Incidentally, I have no reason myself (in what little I've seen) to think anyone on any side of this is here in bad-faith. I don't believe that can be effectively discussed at all while people are "hot" with immediate reactions, feeling constricted and defensive. If someone really is persistently toxic, we will still see that when we are calm and reflective.

We need ways to cut off the immediate conflicts in order to then deescalate and figure out later what to do when we're more relaxed. We need ways to temporarily block things while expressing to everyone that it's about addressing the immediate tensions and that we have a due process to help everyone willing to work on being constructive and accommodating co-op members.

MS

Matthew Slater Mon 16 Jan 2023 8:34PM

I called this discussion in the hope of tightening up the moderation guidelines, and I have been dismissed, by name, for the first time in my life I might add, as shilling disinformation, as anti-woke, and as a right-wing troll.

Let me remind you, I haven't abused or insulted anyone here. I've listened and been polite at all times. Yesterday on the site somebody accused me of 'spreading misinformation' and then blocked me. IRL that's called slander, and talking behind someone's back.

There's nothing more I can do here. It feels like the worst parts of pre-Musk twitter, with a veneer of cooperative righteousness, which I find ugly. So this is goodbye. A troll leaving of his own accord. Right.

MN

Matt Noyes Mon 16 Jan 2023 8:42PM

Thanks, Aaron, I think this is helpful. I also hope people will cool down, take time away if needed, and then think about how we can help improve our moderation and, more importantly, the relationship-building that builds trust and patience.

LF

Lynn Foster Mon 16 Jan 2023 8:52PM

I truly hope that social.coop does not turn into a place where someone's ideas of political correctness and scientific correctness beat out actual science and respectful political discussion. We have twitter for that.

We so badly need social spaces where the harms of capitalism don't basically control the discourse. From where I sit, it looks like a lot of the mainstream media in the US is under substantial control by capital, and this is true in many other countries too. So, for example, I don't consider the sources on the list at https://realityteam.org/factflood/ of "the most reliable national news sites" very comforting. In journalism, as well as in public health, we have some crazy mix of non-bought people in all positions trying to do the right thing and provide useful information, vs. people in positions of supposed public trust who will lie to beef up their stock portfolios, politicians who will lie to satisfy their biggest donors, people who have to lie to keep their jobs, etc.

I also think we need to allow people to develop informed opinions and decisions, and raise our levels of critical thinking ability, in safe spaces. I find the level of "I am right and I will protect people from you and protect you from other people" making an appearance above to be somewhat scary, in the way that our society's control of "truth" is scary now. In the way that the McCarthy era of the 1950s in the US was scary.

I'm not an expert in moderation. And it is a thorny issue. But as of now, having read through the relevant conversations of the last days, I would agree with the position that moderation should address speech that is directly aimed at hurting people in some way, or inciting hurt, or that is even just mean and disrespectful. I don't think we want to, or even can, given the resources and level of governance it would take, moderate for what is actually science, especially when it is presented respectfully with links to studies. It's a moving target by its nature (the scientific method), it is co-opted in so many ways, and more information is good.

Also, fortunately, people can follow, mute, block who they want. That seems to me like a perfectly OK way for people to manage their social networks.

To emphasize, this is my 2 cents as a person who does not have moderation experience, and does not often run into disrespect because my networks are small and I spend less time on social media than anyone I know. I could be wrong about some of the above, but I don't think I'm wrong about the danger that "I will control what (respectfully delivered) content you can hear" presents in this stage of capitalism.

WM

Will Murphy Mon 16 Jan 2023 9:25PM

Hi @Matthew Slater, can you confirm whether you are currently a social.coop member? Your mastodon profile page says you have left the coop, and I couldn't find you among the contributors on Open Collective.

AW

Aaron Wolf Mon 16 Jan 2023 9:32PM

I have no other knowledge, but maybe that's because he quit? He posted here 50 minutes before your question.

WM

Will Murphy Mon 16 Jan 2023 9:36PM

Ah thanks, I missed that comment.

MN

Matt Noyes Mon 16 Jan 2023 9:39PM

Hi folks, look, we here on social.coop have been buffeted by a couple of big waves. First, Covid, with all the trauma (lives lost, physical damage), anxiety, fear, confusion, overwork, etc. that comes with it. I would be lying if I said that three years of constant concern hasn't taken a toll on me. Not to mention the mental labor of sorting out my understanding and response to changing information and policy in a highly charged atmosphere. And the toll of physical isolation.

Second, the recent influx of former/continuing Twitter users, who often bring with them a culture of anger, defensiveness, contempt, provocation-outrage, etc. (along with some great content and insights!) The first wave of Twitter users in 2018, and the reactions of existing members, nearly destroyed SC, but ultimately led us to make crucial improvements and build a better culture.

Moreover, we have grown rapidly -- this influx is much bigger -- and are finding our existing protocols and organization practices tested. Changes are needed, but I think our current Code of Conduct and Conflict Resolution Guidelines are both "good enough for now" and "safe enough to use." The immediate challenge we face is what to do when members refuse to follow the guidelines (e.g., by escalating) and reject the efforts of community working group ops team members to encourage good behavior and facilitate conflict resolution.

So far, our experience on the CWG ops team has been that people are usually willing to engage with us and take steps to resolve conflicts, even when they think we are wrong, for example by using content warnings or editing a post. People routinely post about controversial subjects and others decide if and how to respond, including by muting/blocking, if they wish. But in the case of Covid-vaccine-related content, this hasn't happened. That's why we on the ops team are looking for additional approaches and guidance. To that end we opened this discussion, knowing that there was a risk of further conflict and loss of members. I think there is a lot of useful content in this thread.

Soon, in the next couple of months, we need to hold elections for the CWG ops team, so maybe it would be good to have some live sessions to introduce the work involved and help new people come in. We have encouraged people to join our meetings to see what we do and how we do it.

SM

Scott McGerik Tue 17 Jan 2023 1:23AM

I'm looking for suggestions for training/learning materials on moderation, de-escalation, conflict resolution, etc. I admit to being in over my head on these topics, but I want to contribute to and support the community.

T

tanoujin Tue 17 Jan 2023 6:37AM

There would be so much to say on multiple levels - I just wish everyone all the support and care they need beyond this little forum to process the emotional stress such confrontations can individually cause.

@Matt: "The immediate challenge we face is what to do when members refuse to follow the guidelines (e.g., by escalating) and reject the efforts of community working group ops team members to encourage good behavior and facilitate conflict resolution."

From my POV and recent experience you are doing enough to offer discussion about problematic toots. Right now you are going public with some of your routines, and that is an important second step.

Usually there is one moderator in charge of a specific case who handles the communication, consulting their peers if needed. As far as I know, the conflicting user is confronted with an ad-hoc group decision at a certain point of escalation, and more drastic one-sided measures are taken. After that policing operation, the reprimanded user has an option to appeal (I have never encountered this on either side of the show); there will be a review and a final decision. So far so good.

Since you are an elected member of the democratically legitimated moderation team, and there is a code of conduct which was self-imposed by the coop members in its current version, you have the mandate to interpret and enforce the rules to the best of your knowledge.

Mistakes might be made, a learning process takes place, and possibly there could be efforts to make this more transparent, to facilitate public control and the development of merit and trust.

So, what to do if a member does not cooperate? I am a newbie, and I was not aware of the scope of the contract I was entering. This changed very quickly, and we had our handshake. I was corrected, and I changed my course - to something better that I was used to from elsewhere.

If a member does not cooperate, you will explain the consequences, and if this does not help, you are going to boot them. That is the burden of your office. The reactions of the witnesses may be mixed, and you will face consequences yourself no later than the next elections. So you will try to do your best, if not out of principle anyway.

If this discussion here works out as intended, you will not have a handful of factions beating each other up, but a consensus born out of compromise.

Discussing Covid is a quagmire. There is less light than we wish for; hopefully we will know better in a few years. But now we are in the midst of it, although this is not the topic at hand.

I understand you need more appropriate directives for your work.

Can we agree to add something like:

Unacceptable: false claims, abuse of scientific sources, etc.

Discouraged: pretending to belong to established discourse communities

Encouraged: adhering to the standards of academic writing when presenting, interpreting, or questioning scientific topics.

Mind you, I do not attribute any of this to preceding toots; it is just about what we ask of a toot to qualify. I suggest keeping it on that level, even if some review takes place here.

This might alleviate discussions about toots and your work in the future - which seems urgently necessary to me.

Edit: I will refrain from posting for the rest of the week. Thanks for your patience.

SW

Sam Whited Tue 17 Jan 2023 12:03PM

For what it's worth, we have tried to give them every benefit of the doubt (long past what was due, IMO). They don't have to agree with everything in the social.coop code of conduct, or the spirit in which we enforce it, but these two have repeatedly argued with the moderators, refused to make changes, re-posted things we've deleted that other users have reported, etc.

When we say "use a content warning on this type of post please", the answer is to either appeal it to the community or just say "I disagree, but yah, sure, for the sake of the community who didn't like it I'll start doing that". Attacking the moderators, refusing to follow up, or just re-posting it, etc. is a problem.

I'd also like to caution us all in this thread a bit: let's not feed the trolls and turn this into arguing over whether COVID is real or not; the point isn't whether the thing you were moderated for is correct or not, and arguing about that is going to make this discussion less useful. While I personally think we should simply ban posting anti-vax stuff, that is not the problem here. The problem is how they responded to the community and, after escalation, to the moderators asking them to stop.

JNM

J. Nathan Matias Tue 17 Jan 2023 3:02PM

Hi @Scott McGerik, thanks for raising this question. I'm not aware of any trainings, unfortunately. A few folks in our research lab, which specializes in working with communities on moderation questions, have been brainstorming whether to co-design with communities a "summer school" of short weekly sessions that are accessible to folks who want to do, or already do, this work, even as volunteers. I would be interested to hear more about the demand for it (outside of this thread would be best, I think).

C

Christine Wed 18 Jan 2023 12:07PM

Because the goal of the thread is to propose updates to the code of conduct, it would be very helpful if someone from the community working group could clarify which problems they are trying to solve with the update. Apparently one problem is users ignoring content moderation actions, but it's not obvious how the guidance in this piece would mitigate an enforcement problem. It also sounds like there are other challenges that content moderators are grappling with, but normal users do not necessarily know what those are. With more complete information, we could provide more relevant feedback.

AW

Aaron Wolf Fri 20 Jan 2023 4:01AM

[edited from a similar post I made on Matrix] I want to apologize for some of my confusion and overly-long replies here. I think I now better see some of the issues.

The key point: the CoC does not have anything adequate in it to address disinformation. I am wary about how it could possibly be done, but we have to have some capacity to defer to the CoC in removing harmful disinformation.

I have been overly focused on conflict-resolution (and too compulsive in my unsuccessful efforts to help resolve immediate tensions). I still think conflict-resolution and various intermediate forms of moderation are essential and that we clearly have some missing processes and norms for those. That does not in any way reduce the need to update the CoC. We need both.

The link in the initial post here was all about conflict-resolution and healthy communication, so I focused on that, thinking that was the topic. But the covid debate clearly brings up the need to support some sort of CoC provision around disinformation. How can we figure out CoC wording that supports the CWG in moderating such things without being too blunt or too deferential to whatever the dominant mainstream view of a topic happens to be?

BH

Bob Haugen Fri 20 Jan 2023 1:31PM

I think some of the current information/disinformation controversies, including those that MatSlats and Jem Bendell have been involved in, are lose-lose arguments. For example, vax vs anti-vax. The official mainstream views of these topics have lost credibility with many people, because the official sources, including the US government, have lost credibility. And many of the "alternative" sources are unreliable, too.

So if the moderators allow these topics, they could take over the space, and drown out all other topics, because many of the arguers are passionate about their opinions, often on both sides of the issue.

So I think it might help to create an alternative channel for such arguments. I don't know exactly how that would be done; it might need to be a centralized Mastodon site where moderators can redirect such topics.

Alternatively, the moderators could offer an easy way to filter out arguments about such topics. Like, I would immediately filter out vaccination, vax, and anti-vax.

This is an alternative to banning the people. Both MatSlats and Jem Bendell do good work and provide very useful info on other topics; I follow both of them and do not want to see them banned.

SW

Sam Whited Fri 20 Jan 2023 1:44PM

This is also a separate conversation that's worth having, but I'd also like to stress that they're redirecting the conversation this way and we shouldn't allow it to be redirected. The problem with these two users in particular isn't just that they posted content that the moderators think is harmful and should not be allowed on the instance, it's the way they responded to us reaching out to discuss it (by ignoring what we told them and re-posting things we'd removed, hurling abuse at us and other users, etc.) Even if we accepted for the sake of argument that their content was okay, we still need a good framework for moderation and some sort of consequences for creating an unwelcoming space for other members of the co-op.

EDIT: to clarify, because I think I confused myself when writing that, what I'm saying is that they violated the CoC as it stands today and continue to do so with their aggressive behavior, regardless of the specific content.

BH

Bob Haugen Fri 20 Jan 2023 2:02PM

@Sam Whited thanks for the quick response. So your comment is about behavior around moderator decisions, not the topics themselves. But...

they posted content that the moderators think is harmful and should not be allowed on the instance

...was that anti-vax stuff? (Just to see if I am understanding the situation...)

SW

Sam Whited Fri 20 Jan 2023 2:04PM

That's correct; I just want to make sure that we don't get too deep down the rabbit hole of litigating whether vaccines are real or not when the real problem that needs to be addressed in the CoC is how moderators deal with bad behavior. Although, this was not actually the thread I thought it was (oops!) so maybe both are worth discussing.

BH

Bob Haugen Fri 20 Jan 2023 2:18PM

@Sam Whited and all: I think a framework for moderation needs to have a graduated range of methods for dealing with controversies, from polite requests not to do that, to allowing people to easily filter out stuff that they don't want to clog their timelines, to deleting posts, to banning people.

And also methods for people to appeal moderator decisions.

SW

Sam Whited Fri 20 Jan 2023 2:29PM

Indeed. My first step when someone reports something like this (for users on our server; for users on other servers I generally just skip to a limit or a ban depending on how bad the content is, and of course if the content is extremely harmful I might skip ahead a step or two depending on what it is) is to first ask the reporter if they're comfortable discussing it with the person that was reported; if not, I delete the posts in question, then reach out to the user and inform them that this was potentially a CoC violation and ask them not to do it again. Normally, that's enough. However, if a user gets aggressive, or continues posting the same stuff, I think limiting their account is a good idea (existing followers can still interact and you can search for that user and find them, but they don't show up in the timelines of others and their posts don't show up in search results unless you're searching for that particular post, e.g. by typing in its URL).

If the posts are extremely harmful a provisional limit (or a ban if we don't even think the people following them will want to see the stuff they're now posting) that lasts until the moderation team can discuss it is probably the way to go. If it's an outright ban from the co-op that we think is the only solution, I assume we'd have to bring that up with the broader membership for a vote.

At any time the user could of course go on Loomio, post in our group to discuss it with all the moderators, or even bring it to the attention of the full community, who can probably override us if we overstep.
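
For the mechanically curious, here is a minimal sketch of how such a limit could be applied. It assumes Mastodon 4.x, where the UI's "limit" corresponds to the admin API's "silence" action; the instance URL, token, and account ID are placeholders, and in practice moderators would normally use the web moderation interface instead:

    import requests

    INSTANCE = "https://social.coop"     # placeholder instance URL
    ADMIN_TOKEN = "ADMIN_ACCESS_TOKEN"   # placeholder: token with admin scope
    ACCOUNT_ID = "123456"                # placeholder ID of the reported account

    # Apply a "limit" (API action type "silence"): existing followers can
    # still see and interact with the account, but its posts no longer
    # appear in public timelines or general search results.
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/accounts/{ACCOUNT_ID}/action",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        json={
            "type": "silence",
            "text": "Provisional limit pending moderation team discussion.",
        },
    )
    resp.raise_for_status()

Unlike deletion or suspension, a limit is fully reversible, which is what makes it suitable as a provisional measure while the moderation team discusses.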

WM

Will Murphy Fri 20 Jan 2023 5:27PM

Thanks @Sam Whited, this comment is really helpful to me - I understand now that the goal of this thread is frameworks/policies for individuals' behavior in response to Mastodon moderation actions. I was thinking of "moderation" only in the sense of Mastodon post moderation.

Do you think a CoC addition about respecting CWG decisions would help? For example, under "3. Encouraged Behaviour. I will..." add:

  • Honor the decisions of the elected CWG and use the CWG election process as the means to address and resolve disagreements on moderation policies

SB

Steve Bosserman Fri 20 Jan 2023 5:54PM

"So I think it might help to create an alternative channel for such arguments. I don't know exactly how that would be done, might need to be a centralized Mastodon site where moderators can redirect such topics."

Maybe Bob's notion of offering pathways to alternative channels for various topics and participants is what Sam referred to, in part, as a topic for a separate conversation. To me, the concept of the fediverse suggests that everyone can find a place to have meaningful exchanges with others who share similar interests without hijacking the entire community. If this makes sense, then how to get topics and the people who want to talk about them into places where they can do so safely would be an interesting question to pursue.

SW

Sam Whited Fri 20 Jan 2023 6:12PM

That seems sane to me (maybe with a mention that they can always bring it up with the community if we've overreached).

That being said, while I don't think it hurts to add more examples of things that we think are bad behavior, I also think we need to avoid falling into the trap that the trolls set of "well it's not explicitly laid out in the CoC therefore what I did wasn't a problem and you can't do anything about it". Just because the CoC doesn't mention a specific behavior doesn't mean we can't enforce the general spirit of the CoC.

WM

Will Murphy Fri 20 Jan 2023 6:34PM

That being said, while I don't think it hurts to add more examples of things that we think are bad behavior, I also think we need to avoid falling into the trap that the trolls set of "well it's not explicitly laid out in the CoC therefore what I did wasn't a problem and you can't do anything about it". Just because the CoC doesn't mention a specific behavior doesn't mean we can't enforce the general spirit of the CoC.

Very much agree - CWG should be empowered to remove content that is, in their judgment, harmful to the community (whether that be by chasing off members, risking defederation, or other measures). In a way that's covered by 4.5 "[I will not] Make offensive, harmful, or abusive comments or insults, particularly in relation to diverse traits (as referenced in our values above)," but I'd wholeheartedly support the addition of distinct bullet point on the topic.

BH

Bob Haugen Fri 20 Jan 2023 6:43PM

I would also support some moderation guidelines about what is and is not suitable content for social.coop, which was originally created for social conversations about cooperatives.

For example, I would support ruling out some kinds of topics that are just argument-bait, like vax and anti-vax. Or any other topics that have nothing to do with cooperatives but become timeline hogs.

For example, I might enjoy mocking certain politicians but do not need to toot about it in social.coop.

MP

Michael Potter Fri 20 Jan 2023 7:32PM

The weakness of a non-zero sum outlook for interacting with others is that it doesn't work when dealing with a zero-sum person. So, if someone posts obvious disinformation, and we do nothing, we may think we're being enlightened, but we're actually approving of very destructive content spreading through our service. People should be offered more for reporting disinformation than a talk about conflict resolution (especially when it's been tried and failed) and a tall glass of "shut up."

My stance is that there are verifiable, reliable sources of information and only disinformation itself calls this into question. In fact, the purpose of disinformation is to call reality into question to make it easier to spread lies. So I post again a link that I like to use when investigating sources:

https://realityteam.org/resources/credible-sources/

I propose that we alter the CoC to include a simple procedure that does not waste time. If a post includes a known conspiracy theory, it should get an immediate mark of sensitive. Same with something that can be debunked at a place like Snopes or FactCheck. If the person won't delete or modify it, keeps reposting deleted things, etc., then suspend them. There may be cases where a lesser freeze/limit is appropriate, but probably not.

AW

Aaron Wolf Fri 20 Jan 2023 7:45PM

if someone posts obvious disinformation, and we do nothing

Well, doing nothing is not something I support or have advocated for in the slightest. Do note that a conflict-resolution process is not nothing and it doesn't help to attack it as such. Also, I have said all along that conflict-resolution should start after immediate harms are stopped. So, temporary blocking or hiding of some sort is often necessary in order to deal with emerging issues as a prerequisite to doing any conflict-resolution.

I propose that we alter the CoC…[snip]… If a post includes a known conspiracy theory it should have an immediate mark of sensitive. Same with something that can be debunked at a place like Snopes or FactCheck. If the person won't delete or modify, keeps reposting deleted things, etc, then suspend them. There may be cases where a lesser freeze/limit are appropriate, but probably not.

With my tiny edits, I fully support this proposal.

I would maybe more generically say that suspension would be appropriate for any form of not cooperating with the CWG and the process. Though I would hope that there is a healthy appeal process as some check that the CWG is being fair and diligent. Any appeal process must be set up so that it is not merely costly and open to abuse.

AU

Ana Ulin Fri 20 Jan 2023 8:22PM

I do not believe that social.coop was "originally created for social conversations about cooperatives". From https://wiki.social.coop/home.html: "Social.coop was founded in 2017 in the wake of the BuyTwitter campaign. Since then, we have worked toward the goal of placing ever more or our online social lives under good cooperative governance."

Since its inception, social.coop has attracted folks interested in the cooperative space, and during the last couple of community strategy discussions there has been explicit interest in fostering more collaboration and communication about cooperativism, but that still does not make social.coop a space solely for conversations about cooperativism.

If non-coop content became discouraged on social.coop, I would probably shut down my account, as I rarely post about coop stuff (I think cat pictures are probably the bulk of my contributions to the timeline).

SM

Scott McGerik Fri 20 Jan 2023 8:33PM

Agreed. I did not join social.coop because it is about discussing cooperatives. I joined because I am interested in participating in and operating within an environment of cooperative governance.

MP

Michael Potter Fri 20 Jan 2023 9:04PM

I don't mean that conflict resolution is not valid or workable, just that in some cases, it's not going to work. If someone told me that a post of mine offended them, I'd be open to altering it or deleting it if they seemed sincere.

BH

Bob Haugen Fri 20 Jan 2023 9:04PM

That's a good addition or correction to what I wrote above.

BH

Bob Haugen Fri 20 Jan 2023 9:12PM

I wrote

social.coop, which was originally created for social conversations about cooperatives.

which @Ana Ulin and @Scott McGerik and maybe some other people say was wrong. They are correct; I was wrong. What's the best way to erase the false impression? I don't see a way to edit my original post, so the best I can do is add corrective comments.

SW

Sam Whited Fri 20 Jan 2023 9:23PM

No big deal, it's a simple misunderstanding that has been corrected :)

SM

Scott McGerik Fri 20 Jan 2023 9:30PM

Your post is sufficient! Thank you.

SM

Scott McGerik Fri 20 Jan 2023 9:32PM

In a way, it was a good way to clarify what social.coop is about.

SM

Scott McGerik Sat 21 Jan 2023 1:43AM

Stupid fat fingers. Gonna write this offline and then post it.

SM

Scott McGerik Sat 21 Jan 2023 1:48AM

Until your post, I had not heard of consciousness raising. A search on the internet revealed https://en.wikipedia.org/wiki/Consciousness_raising on Wikipedia. 

SJK

Stephanie Jo Kent Sat 21 Jan 2023 3:43PM

Echoing @Ana Ulin and @Scott McGerik that I joined to participate in "the goal of placing ever more of our online social lives under good cooperative governance." Truly appreciate the quality of communication here (in Loomio), and very much value and am grateful to all of you who consistently keep up and re-cultivate the spirit of collaboration.

WM

Poll Created Sat 21 Jan 2023 7:17PM

Proposed addition to CoC prohibitions for harmful content Closed Tue 24 Jan 2023 7:02PM

Outcome
by Will Murphy Tue 24 Jan 2023 8:24PM

Not broadly supported. Withdrawn

As the goal of this thread is to propose amendments to the CoC, and several members expressed agreement with a comment suggesting this, I'd like to check if we're ready to start working on one such proposal. This is not a binding proposal, but a sense check that would lead to a formal proposal after updating with feedback.

Addition to section 4 of the Code of Conduct, Unacceptable Behavior:

  • [I will not] post content that is harmful to Social Coop community or to the community's standing within the broader fediverse

Results (option / % of points / voters):

  Looks good: 37.0% (10) SB LO RH D BV ZS SW JF SM T
  Not sure yet: 11.1% (3) AW DZ GC
  Concerned: 51.9% (14) LF JNM NS MS G MP AR N N AS J C BS BJ
  Undecided: 0% (68) KF ST DM CZ BH JC J L MC TB PA FSF SV MN DB SJK LS AW CB JB

27 of 95 people have participated (28%)

AW

Aaron Wolf
Not sure yet
Sat 21 Jan 2023 7:17PM

I support amending the CoC and specifically to mention disinformation and cooperation with CWG process (maybe there mentioning CWG's responsibility to block other harms when urgent, using their best judgment)

I'm concerned the proposal as is doesn't say anything about how "harmful" is to be interpreted. I imagine worst case outcomes including scaring away members who worry about being subjected to arbitrary opinions of "harmful" and exacerbating long debates about what counts as "harmful".

SM

Scott McGerik
Looks good
Sat 21 Jan 2023 7:17PM

I agree with @Aaron Wolf's concern regarding how "harmful" is to be interpreted, but I believe that can be addressed in a separate discussion; meanwhile, there is a need for something like the proposed addition.

MS

Matthew Slater
Concerned
Sat 21 Jan 2023 7:17PM

'Harmful' needs definition, especially since physical harm is not involved. It's a terrible guideline which adds nothing to existing guidelines. My suggestion of 'hateful' seems much clearer. BTW, I left this group 5 days ago, but Loomio still invited me to vote.

I'm hurt at how, above, and in my absence, the portrayal of my behaviour continues to decline. @Sam Whited I have only ever been polite and curious, and never disrespected any other member, especially not a moderator.

MP

Michael Potter
Concerned
Sat 21 Jan 2023 7:17PM

This is so vague as to be meaningless, and I didn't see several people agree with it. Several people DID agree that disinformation should be specifically dealt with in the CoC, and the comment was added that some provision for loopholes might be a good idea.

SB

Steve Bosserman
Looks good
Sat 21 Jan 2023 7:17PM

I agree there needs to be an amendment to the CoC that clarifies "unacceptable behavior" and the consequences for a member who exhibits it. As others have already stated, how we word the amendment will make all the difference in how others interpret and act upon it. But I have confidence in the collective wisdom of this group to figure it out so it has the desired effect.

SW

Sam Whited
Looks good
Sat 21 Jan 2023 7:17PM

Something along these lines seems like a good addition. There's a thing the trolls do (or those who know their behavior is bad but want to keep doing it) where they pretend that because the CoC or other governing documents don't explicitly state something, it can't be moderated. A well-written CoC gives the community latitude and flexibility to change standards with the times, without having to make new changes every time some new behavior surfaces.

LF

Lynn Foster
Concerned
Sat 21 Jan 2023 7:17PM

Concern: this is so broad that it is impossible for members to know what it means, and it will be very difficult to use for moderation without a lot of friction and possibly personal arbitrariness. I do think we need to agree on what "harm" means. And we should be very careful with "disinformation", which is often used to mean "I disagree because I have an agenda, or I automatically agree with the mainstream media's or a political party's or the government's statements, even if I haven't researched it myself".

AS

Andrew Shead
Concerned
Sat 21 Jan 2023 7:17PM

"Harmful" isn't specific enough; it encompasses what Section-4 of the CoC says already and doesn't add anything. Isn't it the collective responsibility of community individuals to Mute or Block content they find offensive??? With enough Ms&Bs, bad content would be shunted into the void. Can the CWG then banish a user who has high levels of Ms&Bs? In this way the community is dynamically voting against a bad user, which makes the CWG's life easier and less complicated.

N

Nic
Concerned
Sat 21 Jan 2023 7:17PM

We could set up a second instance to discuss indefinitely what harmful means...

But maybe we need to collaboratively develop policies for specific issues? Eg health info, current affairs or war reporting.

For health, posts outside mainstream consensus seem to need some kind of peer-reviewed science they point to. It can't be enough to say 'chocolate cures cancer': point to the study, or prefix 'I believe'.

Re disinfo: @mpotter's proposal above banning known conspiracy theories sounds good.

AS

Andrew Shead Sun 22 Jan 2023 3:52PM

As members of a co-operative, we each have one vote and a duty of care to exercise that vote. Beyond the CoC, regulating individual behaviour is a collective responsibility. We are each our own authority, different from other individuals; what is offensive to one person is mildly irritating to another. When a reader of a post to social.coop finds the post offensive, affronting, or otherwise in violation of the CoC, the first thing for the reader to do is to Mute the author. If the author becomes obnoxious, then the next step for the offended person is to Block the author.

In this way, the membership of the co-operative quietly exercises its voting power against undesirable behaviour, without immediately involving the CWG. Ultimately, the community will sequester obnoxious behaviour by its collective voting power through the sanctions of Mute and Block.

When the number of Mutes and Blocks against an author rises to an appropriate level, the CWG can then intervene to banish the obnoxious author or instance, without getting involved in an argument about what’s good or bad.

So, perhaps the CoC needs language that says community members must actively engage in moderation by first Muting or later Blocking posts they find offensive.
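
As a rough illustration of this threshold idea (the weights and level here are invented, and Mastodon does not currently expose per-author mute/block tallies like this), the counting could look something like:

    from collections import Counter

    # Illustrative only: the weights and threshold are invented numbers.
    MUTE_WEIGHT = 1
    BLOCK_WEIGHT = 3       # assume a Block signals more than a Mute
    REVIEW_THRESHOLD = 25  # level at which the CWG takes a look

    def authors_needing_review(mutes: Counter, blocks: Counter) -> list:
        """Return authors whose combined mute/block score crosses the threshold."""
        scores = Counter()
        for author, n in mutes.items():
            scores[author] += n * MUTE_WEIGHT
        for author, n in blocks.items():
            scores[author] += n * BLOCK_WEIGHT
        return [a for a, score in scores.items() if score >= REVIEW_THRESHOLD]

The CWG would still make the final call; the tally only flags who to look at.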

AW

Aaron Wolf Sun 22 Jan 2023 4:14PM

I agree about collective responsibility. I strongly disagree about muting and blocking as the recommended defaults. They are options people should feel free to use, but they do not help anyone learn, do not resolve any misunderstandings…

What you're suggesting easily looks like a poster not even knowing they were being misunderstood or offending anyone until the CWG comes along and tells them they are on the verge of being kicked out of the co-op. This is dysfunctional and goes against everything that restorative justice and co-op principles stand for.

We need resolution mechanisms that include some place for assumptions of good faith, for people to know that people are reacting negatively, to get feedback, to learn…

Also, we need mechanisms where harmful posts can be muted more quickly for everyone. It is not okay for me to see some hateful post and then just mute/block for myself and leave it to everyone else to have to see the harmful post. The first people who see a harmful post should have a way to reduce the entire co-op's exposure as quickly as possible. Then restorative process focusing on learning could be done afterwards. And people joining the co-op should understand this so they aren't shocked or scared, so they know that if they get moderated in some way it doesn't mean their co-op standing is threatened. Any public debate about problem posts is liable to cause more defensiveness and escalate things. We should be stronger in our pro-active moderation and hiding of problem posts and more gracious and patient with resolution and learning.

https://wiki-dev.social.coop/Conflict_resolution_guide tries to go in a healthy direction.

In short: YES, the enforcement mechanism and the co-op onboarding process needs to make it clear that we are all responsible for responding to problems. Our response options should include ways to more quickly hide problems from everyone and should emphasize restorative resolutions after that. Just adding up blocks and mutes on the way to bans is not restorative at all.

AS

Andrew Shead Sun 22 Jan 2023 4:21PM

Thank you, that's something I hadn't considered.

NS

Nathan Schneider
Concerned
Sat 21 Jan 2023 7:17PM

This is far too vague, and vague prohibitions are ripe for abuse. As a member I would have no idea how to interpret that, and could therefore veer in the direction of excessive self-censorship. I encourage a new proposal, if necessary, that is far more precise, communicative, and enforceable.

ZS

Zee Spencer
Looks good
Sat 21 Jan 2023 7:17PM

To be frank, I believe this is not a "rules" problem, but rather a social immaturity problem. I think specifying that we do not want people to engage in disinformation or other harmful behavior is reasonable to include as a thing to 'point to' if moderators need a rule to fall back on; but I would prefer we proactively removed/blocked people who "toe the line." People who are threatened by vagueness should err on the side of "post, but be ready to edit or delete at a moderator's request."

G

Graham
Concerned
Sat 21 Jan 2023 7:17PM

My concern is that it might be really difficult/impossible to develop an agreed definition of harmful content.

AR

Alex Rodriguez
Concerned
Sat 21 Jan 2023 7:17PM

Changed my vote after reading some of the other concerns, especially re: having something more specific and also something that emphasises collective responsibilities and steps that can be taken

SJK

Stephanie Jo Kent Mon 23 Jan 2023 3:40PM

There is something in @Andrew Shead's idea about metrics regarding who gets Muted or Blocked, and as well with @Aaron Wolf 's point that no one should be suddenly surprised or shocked by a CWG message about posting content or behavior. "Harmful" is tricky since that's very much defined by the "receiver" - what hurts me may not hurt you. But I'm not in support of a blanket "mainstream consensus" as @Nic Wistreich suggested - there's much that is presumably common sense or mainstream that is in desperate need of challenge and change. Patterns of communicative behaviors that seek understanding and perspective are preferable to those that seek to rile folks up, yet sarcasm and dark humor are tools in service of necessary change, too. Is there a way (or need) to better describe the characteristics of patterns of posting that could be described in the CWG? Too much in the weeds?

N

Noah
Concerned
Sat 21 Jan 2023 7:17PM

I agree with the concern that "harmful" is too open to interpretation - especially "within the broader fediverse" - there is a seed of something very legitimate in there, but as written it could be interpreted in very adverse ways. There are huge swathes of the fediverse which we absolutely do not need to be in good standing with!

BV

Brian Vaughan Mon 23 Jan 2023 4:36PM

While it's good to try to be as clear as possible, there's a limit to what can be accomplished with formal rules, especially when dealing with malicious people who go to great lengths to subvert the intent of formal rules. I think we should rely on the moderation team acting on its best judgment of the intent of the membership in general, under the supervision of the membership in general.

DZ

Dmitri Z.
Not sure yet
Sat 21 Jan 2023 7:17PM

I agree with the 'this is too vague' / 'repeats existing policy' concerns. At the very least, let's add a few more words to this clause, explaining the procedures of how 'harmful' will be determined.

DB

Doug Belshaw Mon 23 Jan 2023 6:06PM

This statement is too vague and needs to talk about categories of harm. Also, people who have been kicked off social.coop, or have left before being pushed, should no longer have Loomio access IMHO.

JNM

J. Nathan Matias
Concerned
Sat 21 Jan 2023 7:17PM

A good general principle when creating new policies is to be clear about what those policies mean, and I think "harm" is too vague a formulation. I would be happy to contribute to a conversation about what we might mean by harm. I would be open to approving a change to policies to include harm, so long as we would agree to implement those policies only after defining harms in greater detail.

AW

Poll Created Mon 23 Jan 2023 7:24PM

Proposal: add "cooperation with CWG" to the CoC Closed Mon 30 Jan 2023 7:01PM

Outcome
by Aaron Wolf Wed 1 Feb 2023 2:33AM

Hi all, we have an apparently successful proposal! Zero votes against, one abstention. More than 18% of co-op members with Loomio accounts voted. More turnout would be ideal, but this is pretty clear support.

As this is specifically about and relevant to the Community Working Group, it seems appropriate to me to ask them to confirm and make the official change to the Code of Conduct. By my best understanding, of the 5 CWG members, only 1 voted in the poll. I assume the others also will support the change.

Here is the final version of the proposal, given some amendments after initial feedback:

Amend the Code of Conduct as shown at https://wiki-dev.social.coop/Code_of_conduct:

  • Expand item 6 "Reporting" to be renamed "Reporting and Resolving"

  • Within item 6, add a new line: "If the Community Working Group (CWG) contacts me about my role in any conduct complaint, I will cooperate with their process and guidance, communicating respectfully and working with them to make clear agreements for resolving the situation."

Note that this proposal can and will be edited if there are suggestions that do not change the basic gist.

Proposal for amending the Code of Conduct as shown at https://wiki-dev.social.coop/Code_of_conduct:

  • Expand item 6 "Reporting" to be renamed "Reporting and Resolving"

  • Within item 6, add a new line: "If the Community Working Group (CWG) contacts me about my role in any conduct complaint, I will cooperate with their process and guidance, communicating respectfully and working with them to make clear agreements for resolving the situation."

UPDATE EDITS: removed "fully" from "fully cooperate", added further clause to clarify meaning

Note: We might like to also include reference to appropriate appeals process, but we need to define that itself before we can include mention of it in the CoC.

Note: Further updates about disinformation and updates to the CWG process can be made as well. For this poll, the question is whether the proposal above is good enough and safe enough to be helpful as a positive step on its own.

Results (option / % of points / voters):

  Agree: 98.1% (51) LF NS G AW TB K SB MN MP SJK AR MB LJ EM N RH ES EC D AS
  Abstain: 1.9% (1) J
  Disagree: 0.0% (0)
  Undecided: 0% (275) DS KF ST DM JD CZ BH WO JC JNM F J BM SH KT C ZS DH AM MSC

52 of 327 people have participated (15%)

MP

Michael Potter
Agree
Mon 23 Jan 2023 7:24PM

I hadn't read the specific line carefully before, when I disagreed. I like this, because it gives grounds for escalation in these cases of re-posting content, ignoring or even abusing moderators.

BJ

Blake Johnson
Concerned
Mon 23 Jan 2023 7:39PM

While I'm sympathetic to the basis of bringing this forward, I think the wording is overly broad.

EM

Eduardo Mercovich
Agree
Mon 23 Jan 2023 7:24PM

Seems simple and ok to me.

C

Christine
Concerned
Sat 21 Jan 2023 7:17PM

This is too vague for users to understand what is and isn't permitted

AR

Alex Rodriguez
Agree
Mon 23 Jan 2023 7:24PM

Additional clarity in the update works for me

SM

Scott McGerik
Agree
Mon 23 Jan 2023 7:24PM

The proposed amendment should, hopefully, make it clear that uncooperative behavior is not accepted.

AS

Andrew Shead
Agree
Mon 23 Jan 2023 7:24PM

Seems OK, but the concerns of @arod make me wonder about the wording.

AW

Aaron Wolf Tue 24 Jan 2023 3:54AM

In response to @Alex Rodriguez 's helpful points and questions: I did not want to use "comply", as that gives me a sense of unidirectional orders. My suggestion of "cooperate" means to be in constructive communication: hearing what the CWG is saying, responding, making agreements, and so on. Examples of not cooperating would be ignoring, responding with defensive aggression such as denial or sarcasm, making a public stink, reposting flagged content, and so on.

As I am not on the CWG, I'm not quite on top of the current process. I imagine their process will continue evolving and improving and getting better documented over time. Whatever the status, the co-op has elected them to do the work within the bounds of the agreed policies. So, whatever their process, that's what it refers to. If their process has problems, that's a separate issue to fix outside of the CoC. I did/do have some worries about worst case scenarios where the process has problems, and for that I really want to see a solid appeal process. I just don't think that's a blocker to updating the CoC for now.

As to "guidance", I was imagining that the CWG might make specific suggestions in their communication with people. Cooperating with guidance I imagine as hearing it and telling the CWG whether the person agrees or has concerns, and if they don't agree, they discuss that with the CWG to get to agreement rather than disregard the suggestions. "Guidance" is different than process because process would be the steps they take in communicating (such as keeping things private, being patient in waiting for replies, etc.), removing posts, adjusting privileges or similar.

Note that I carefully wrote "my role" in a conduct complaint because I imagine this cooperate-with-CWG line to apply to everyone, including the people complaining or people who are tangential. For example, even if I was only part of an argument where someone else's post was reported by a third person, I still have responsibility to cooperate if the CWG asks for my involvement or input in resolving the issue.

If having clarified this, anyone disagrees or has questions or suggestions for editing the proposal, please share your thoughts.

J

JohnKuti
Agree
Mon 23 Jan 2023 7:24PM

Moderators have a really tough job as technology creates increasingly toxic online environments. The advantage of social.coop ought to be the involvement of the whole community (via the CWG) to intervene. I think that is the idea here.

J

JohnKuti
Concerned
Sat 21 Jan 2023 7:17PM

I'd like to see something positive rather than a prohibition. An obligation to respect the common culture of the site and promote cooperativist principles.

TR

Tom Resing
Agree
Mon 23 Jan 2023 7:24PM

If it helps with moderation, it's good with me. I disagree with @arod's points on the language. You can cooperate with guidance. And you should! :)

BS

Brecht Savelkoul
Concerned
Tue 24 Jan 2023 8:18AM

While I agree with the general principle, the current phrasing is so vague that it could be used in pretty much any context. So I fear including this in its current state will create more problems than it resolves.

BS

Brecht Savelkoul
Agree
Mon 23 Jan 2023 7:24PM

This looks reasonable and useful to me. To address @Alex Rodriguez 's worries, maybe we just need to clarify what the minimum standard for "cooperating" is. I'd say "acknowledging and responding to complaints voiced by the moderators" would be the lowest bar to consider someone as cooperating with the process.

G

Graham
Agree
Mon 23 Jan 2023 7:24PM

Seems fair, although I would urge NOT to use the word fully, as it implies that there are then degrees of cooperation, rather than a simpler binary - cooperating/not cooperating.

NS

Nathan Schneider
Agree
Mon 23 Jan 2023 7:24PM

AW

Aaron Wolf Tue 24 Jan 2023 4:04PM

@Nathan Schneider The purpose of adding "resolving" to the heading was to make it logical that the items below include more than reporting. Do you have a suggestion for a better heading? I think it's important for the CoC not to be limited to the stages of normal behavior (before there are any concerns or conflicts) and reporting. Member conduct includes what happens after something is reported. The CoC doesn't currently say anything about post-report conduct. I suggested "resolving" in the heading (again, it's just a title describing what the section is about) as a term for the post-report stage of things.

I removed "fully" from "fully cooperate" because I agree it sounded too absolutist. I agree 100% with the concern about what to do if someone disagrees with CWG. I think removing "fully" really does help, though appeals process is still needed. Is removing "fully" enough for now though? Or you really think we need to get an appeals process nailed down as a prerequisite? Do we really want to say that until we have a clear appeals process we do not expect members to necessarily cooperate with CWG?

To be blunt, I'm imagining that if the CWG actually abuses their role or something, it's okay for people to speak up about this in violation of the CoC. Just like in civil disobedience, if the law is wrong or breaks down, people might escalate to breaking the law as part of protesting. There are real-world ways for people to bring the community's attention to unjust behavior by the CWG even if that requires violating the CoC. The community can probably tell the difference between someone whistleblowing on really unfair CWG behavior and someone just being belligerent and uncooperative.

All in all, I don't think we need to block helpful CoC updates by requiring that we finish setting out the appeal process first, though we do need to get such a process. In practice, I expect the CWG will include being open to hearing constructive feedback and concerns as part of the process. As long as the communication is in private channels with the CWG and the members involved, I trust the CWG to hear the members' concerns if they have a problem with CWG decisions and guidance. I also trust the CWG to come to the overall membership (as they did here) when they feel the need for extra support and perspectives to keep a healthy restorative approach to their work.

I look forward to your further thoughts. I greatly appreciate your concerns.

T

tanoujin
Agree
Mon 23 Jan 2023 7:24PM

„fully“ is a bit of a pleonasm. But fine!

WM

Will Murphy Tue 24 Jan 2023 8:22PM

@Andrew Shead I appreciate the intent here, but I think you're missing a very important dynamic that Social Coop is one community in a federation of communities. If we were to all mute & block a toxic user on our instance, but allowed that user to continue their behavior towards users on other instances, we would very quickly, and justly, find our entire community ostracized and defederated

G

Giacomo
Agree
Mon 23 Jan 2023 7:24PM

G

Giacomo Tue 24 Jan 2023 10:51PM

Hi Aaron, thank you for what you're doing. I think it might be useful to phrase "cooperate with their process and guidance" in a more specific way.

AW

Aaron Wolf Wed 25 Jan 2023 12:11AM

Thanks, do you have any suggestions or any other thoughts that might prompt me or others to find a superior phrasing? Are you willing to consider whether the proposal is still a positive, worthwhile step even if we do not come up with ideal phrasing yet?

G

Giacomo Wed 25 Jan 2023 12:32AM

I like the way you explained the concept: "constructive communication, hearing what the CWG is saying, responding, making agreements".

AW

Aaron Wolf Wed 25 Jan 2023 2:06AM

Hmm yeah. So maybe more specifically spelled out like this:

"If the Community Working Group (CWG) contacts me about my role in any conduct complaint, I will cooperate with their process and guidance, communicating respectfully and working with them to make clear agreements for resolving the situation."

What do you think? Any suggested improvements?

I considered adding wording about refraining from other actions around the issue during the resolution process; however, I think that can be part of the CWG process rather than put into the CoC. So the CWG themselves can state, as appropriate, that they want someone to put other actions on hold… I think that concern might vary from case to case.

A

alain
Agree
Mon 23 Jan 2023 7:24PM

Good enough for me

MP

Poll Created Wed 25 Jan 2023 8:10PM

Specific addition to CoC of disinformation rules Closed Wed 1 Feb 2023 3:01AM

Outcome
by Michael Potter Wed 1 Feb 2023 6:00AM

The specific disinformation rule I proposed for the CoC clearly doesn't have popular support in its current form. There was a lot of concern about whether the language would catch the intended disinformation while letting everything else through. Others felt that a rule may not be necessary or could be too difficult to formulate. The discussion thread has given me a lot to think about.

This is a poll to add specific language to the CoC about disinformation. I speak firmly about disinformation because I think it's inherently antisocial. It confounds what truth means in order to divide us and to bring our democracies and civil society down. Also, bigotry is commonly woven into conspiracy theories.

The best language to use is a little less clear. My attempt is below.

I propose that we add the following bullet point under Section 5 of the CoC, where it says "I will not post, threaten to post, nor share from this instance or external sources:"

  • Disinformation, meaning false information which is intended to mislead. This includes demonstrably false claims, debunked conspiracy theories, or references to unreliable or unknown sources.

My goal was to be comprehensive and yet not target things like spirituality, religion, or politics. If someone posts about astrology, for example, it's not clear that we should censor that. I'm open to suggestions on how to improve this.

Results (option / % of points / voters):

  Agree: 28.9% (13) DU DB IS AU ZS NC TR JF JN L EL JF HB
  Abstain: 31.1% (14) DM G SV BS AR N ES SG MA FM M BS D GC
  Disagree: 40.0% (18) LF JNM AW SB I EC AS A BV J DS WM A DT RP SM SS D
  Undecided: 0% (282) DS KF ST JD CZ BH WO JC F J BM NS SH KT C ZS DH AM MSC CCC

45 of 327 people have participated (13%)

MA

Mitra Ardron
Abstain
Wed 25 Jan 2023 8:11PM

This is a tough one. I believe that misinformation (e.g. conspiracy theories) is probably the biggest harm we face as a society, because of its divisiveness but also because it enables other harms; for example, we wouldn't be in the mess with climate change except for the intentional disinformation driven by fossil fuel companies. BUT how do you handle this without a chilling effect on ideas? A moderation team? Maybe an expectation that extraordinary claims require extraordinary evidence.

AW

Aaron Williamson Wed 25 Jan 2023 8:29PM

This is a pretty tricky one to nail down. I'd be interested to see a list of claims that would violate the proposed policy. I'm also curious to know how irony would be treated under this policy.

Item removed

DT

Daniel Thomas
Disagree
Wed 25 Jan 2023 8:11PM

I think there should be something like this in there, but I don't think this text is the right text. Posting references to unknown or unreliable sources to point out that they are unreliable, or to ask others what they think about their reliability, is fine. It is the deliberate use of such sources to mislead that is the problem. Possibly shorten to just "Disinformation, meaning false information which is intended to mislead"?

SJK

Stephanie Jo Kent
Agree
Mon 23 Jan 2023 7:24PM

simple, elegant, strengthens the CoC

Item removed

A

AG
Disagree
Wed 25 Jan 2023 8:11PM

And who’s to say what an “unreliable or unknown” source is? I’m sympathetic, to a degree, but my Nature may well be your Joe Rogan, and I’m not sure we want to get into the infinite regress of defining what constitutes valid authority or criteria for credibility.

WM

Will Murphy
Disagree
Wed 25 Jan 2023 8:11PM

This would place an unreasonable burden on the CWG to have to be the arbiters of what is true and what is false. People who want to spread disinformation could just endlessly debate the merits of their claims, exhausting our CWG and putting our community at risk of losing our invaluable CWG members.

This is why I suggest broad prohibitions that allow the CWG to act quickly and decisively using their judgment, while we have the CWG election process to resolve any issues with said judgment

D

Dynamic
Disagree
Wed 25 Jan 2023 8:11PM

Except in a small number of well-known cases, I don't think it's realistic for moderators to judge whether or not posts meet this definition of disinformation, and individual users are going to have different perspectives on what is disinformation or not based on what they have or haven't seen and read elsewhere.

SS

Sven Shipton
Disagree
Wed 25 Jan 2023 8:11PM

I think we should substitute intention with the potential for harm. Something like "Disinformation, meaning false information that is misleading and has a high risk of causing harm to an individual, group or wider society."

Holocaust denial has a high risk of causing harm to both a group and to wider society. But claiming that the moon landings were faked does not, as far as I'm aware, pose a high risk.

D

Dynamic
Agree
Mon 23 Jan 2023 7:24PM

Cooperation with efforts to moderate seems like a reasonable thing to expect of users.

Item removed

MP

Michael Potter Wed 25 Jan 2023 8:48PM

This is as much a test as a proposal. I think as a group, we really don't want to have even the appearance of authoritarian-style censorship. Disinformation is not usually about whatever topic it pretends to have an opinion about; it's about calling into question every verifiable source of information. I posted on Mastodon about this, but it bears repeating.

To believe the moon landing was faked, for example, can seem cute, but in order to believe that, you have to stop trusting the government, medical and academic authorities, and generally recognized news sources. Specifically, antivax claims have actually been used to incite violence.

At the same time, how do we disallow fraud, quack remedies, and political disinformation but leave things like spirituality alone? If someone posts about Jesus or astral projection or tarot reading, they should probably slip through. It's not clear that a political comment should inherently be against the rules, either.

LF

Lynn Foster
Disagree
Wed 25 Jan 2023 8:11PM

I strongly object to including anything about disinformation in our CoC.

1) Who determines intent to mislead?

2) It requires way too much original research, and even expertise, to discover actual facts these days. This is too high an expectation for moderators, or other working groups.

3) People should be able to honestly and respectfully discuss divisive issues. The big platforms systematically thwart free speech; providing it here is a really important service.

IS

Ian Smith
Agree
Wed 25 Jan 2023 8:11PM

I think this is reasonable language that will make the moderation team's job easier, while still leveraging their discretion and judgement. Agree.

AS

Andrew Shead
Disagree
Wed 25 Jan 2023 8:11PM

I agree with @lynnfoster and strongly object.

D

Dynamic Wed 25 Jan 2023 9:42PM

I think the blog post linked to at the very top of this thread (https://wiki.xxiivv.com/site/discourse.html) is very thought-provoking, but if I understand it correctly, it's about how people engage in discourse, not about the content of that discourse.

I feel generally okay about establishing behavior norms for how to engage, e.g. no harassment, not trying to force people into conversation after they've indicated they want to disengage, no "sea-lioning."

I have a harder time thinking of clear policies with regard to the information content. Certainly there are kinds of ideas that feel inappropriate to push, but I have a much harder time figuring out how you can formulate general rules about what kinds of content are or aren't acceptable, and so I'm generally reluctant to "go there."

MP

Michael Potter Wed 25 Jan 2023 9:58PM

For reference, I'll post this again:

https://realityteam.org/resources/credible-sources/

I'd say if someone argues that the World Health Organization is wrong, and they provide a link to "Bob" on Facebook, it's reasonable to go with the WHO.

T

tanoujin Wed 25 Jan 2023 11:27PM

Not yet sure; I see some issues, but tend to agree in general. I think some things could pass if tagged as fictional or unverified, where the sources are weak. It depends on the framework. If there is an appeal procedure, carried out by a democratically legitimated supervising body (not the ops team) and made transparent through published transcripts or minutes, we can delegate the case-by-case discussion there and more or less call it a day.

DS

dan simpson
Disagree
Wed 25 Jan 2023 8:11PM

I support the intent of the proposition, but I think the language needs to be refined. A helpful exercise may be to think through the type 1 and type 2 errors (false positives and false negatives). Also, do you really mean not to target political disinformation?

AW

Aaron Wolf
Disagree
Wed 25 Jan 2023 8:11PM

While disagreeing with the disinfo proposal as is, I support having "disinformation" as a topic subject to moderation. The process I imagine: CWG moderators are explicitly granted leeway to hide or delete (given the limitations of the tech) posts they judge to be disinformation; then, they invite the poster to show them if they've made a mistake, i.e. that the post has some minimum level of apparent merit and should be allowed; and the CoC somehow tells people that this process is a thing here.

AW

Aaron Wolf Thu 26 Jan 2023 4:10AM

I don't think the CoC can possibly have adequate wording that clarifies what counts as disinformation. I think the essential thing is to transparently say to the community effectively: "we do consider the question of whether something seems to be disinformation when moderating", and we can describe something about how moderators are to make such judgments.

What if we simply say something to the effect of:

  • We don't want disinformation, we see it as harmful

  • It's extremely hard to nail down what counts as disinfo once it gets fuzzy

  • Here's a list of topics we have already voted on and decided to count as disinfo

And we develop such a list by actually encountering specific issues. So, we don't curate a perfect list; we basically say, "these came up and we came to the conclusion that they will be barred here".

Maybe nobody ever says vaccines include microchips and so it doesn't come up. Maybe someone starts pushing that, so it goes through moderation process, and the co-op confirms "yes, we agree that's disinfo" so it goes on the list. And maybe we require a supermajority vote?

I just think that while there are so many pitfalls in this topic, we shouldn't throw up our hands and deny having disinfo as a valid reason to report a post. So, the minimum could acknowledge disinfo as a type of thing that is reportable, even as we also acknowledge that drawing the line is hard.
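
As a sketch of how that list could be kept (the entries, the URL, and the two-thirds bar below are all hypothetical, just to show the shape of the record):

    from dataclasses import dataclass

    # Hypothetical sketch: each entry records a topic the membership voted
    # to bar, with a link to the decision so it can later be revisited.
    SUPERMAJORITY = 2 / 3  # assumed bar for adding an entry

    @dataclass
    class DisinfoEntry:
        topic: str
        decision_url: str  # link to the Loomio vote that barred it
        votes_for: int
        votes_total: int

        def ratified(self) -> bool:
            """True if the vote met the assumed supermajority bar."""
            return self.votes_total > 0 and self.votes_for / self.votes_total >= SUPERMAJORITY

    barred_topics = [
        DisinfoEntry("vaccines contain microchips", "https://example.org/decision/1", 40, 50),
    ]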

A

alain
Disagree
Wed 25 Jan 2023 8:11PM

I also condemn disinformation, but I think that kind of rule is either too detailed to cover specific cases (and must be updated every time a new case appears) or too vague and subject to dangerous interpretations.

Maybe CWG members could explain the difficulties they encounter moderating disinformation, to better assess the need for such a tool in our CoC.

GC

Gabriel Coleman
Abstain
Wed 25 Jan 2023 8:11PM

I agree this is important to articulate but would suggest a couple of language changes.

  1. We could just say "conspiracy theories," as they can be harmful whether or not they've been debunked.

  2. It's clearer to insert the examples in the first clause: "false information, e.g. [list categories], with the intention to deceive." There are reasons to post false info: to debunk, warn, or even make a joke, and this would help clarify that intention is what makes it DISinformation.

D

Dynamic Thu 26 Jan 2023 11:16AM

A number of people have suggested inclusion of language about intention to deceive. If we were to go in that direction, how would we distinguish between someone whose intention is to deceive vs. someone who has themselves been deceived and is posting sincerely?

DB

Doug Belshaw
Agree
Wed 25 Jan 2023 8:11PM

I agree that this is good enough as the whataboutery from other commenters is preventing something that is "good enough for now, and safe enough to try". I'd also point to some work that I've done in this area which references this excellent article with a graphic I found super-helpful.

N

Nic
Abstain
Wed 25 Jan 2023 8:11PM

I support the aim of this, but there's much open to interpretation, especially "references to unreliable or unknown sources".

If we want to avoid disinfo on SC, it seems we need either:

– deep specificity: specific policies on Ukraine, Covid, Gates, homeopathy, 4G etc. as/when issues emerge. Maybe set by moderators but debatable/votable by the community somewhere.

– something general: like this proposal but shorter, giving power to the mods (especially if they can be transparent about the number of blocks and common topics)

G

Graham
Abstain
Wed 25 Jan 2023 8:11PM

Whilst I support the aim of this, I'm concerned that there is a lot of subjectivity here: what is "false information"? What is an "unreliable or unknown source"?

TR

Tom Resing
Agree
Wed 25 Jan 2023 8:11PM

I agree that this should be included as long as you're sure that you can enforce it fairly and without too much cost in time to the moderators.

SB

Steve Bosserman
Disagree
Wed 25 Jan 2023 8:11PM

Like @Andrew Shead, I disagree with this proposal for reasons similar to those raised by @Lynn Foster.

Item removed

AU

Ana Ulin
Agree
Wed 25 Jan 2023 8:11PM

This addition makes the CoC better, and I see no reason not to include it.

I am disappointed to see that, as a community, social.coop is more interested in rules-lawyering "what is disinformation" or "what does cooperating mean" than in trusting the moderation team do their job as moderators. Surely none of you really believes that we can write a set of rules precise and concrete enough that it will cover all possible situations, and remove the need for moderators to make a judgement call.

SV

Sébastien Vigneau
Abstain
Wed 25 Jan 2023 8:11PM

I agree with the principle but have read many good suggestions on this thread that may improve the wording, and which I'd like to see incorporated, e.g., to better emphasize harmful intent.

LF

Lynn Foster Thu 26 Jan 2023 9:42PM

Some further thoughts. The discussion has made it clear that there is disinformation and there is disinformation... we don't all mean the same thing.

Your Ukraine example is a great one @Nic Wistreich . From what I've read, I have understood for a long time what you became aware of 2 weeks ago, as well as the historical context since 2014 and before. My worst fear in all of this is that information supporting that kind of view would be blocked from social.coop only because the predominant mainstream presentation is the opposite. Or that someone saying that vaccines haven't prevented transmission (partially since delta, and for the most part since omicron), with studies to back it up, will be automatically blocked as anti-vax. Not that anti-vax views should be automatically blocked either, if there is science behind the viewpoint and it is presented honestly.

On the other hand, on @Doug Belshaw 's chart above, some of those categories seem like possibly realistic ways to identify actual intentional disinformation (as opposed to the kind that the powers that be have just declared to be disinformation for their purposes). For example, I don't support allowing pictures of Palestinian children presented as Ukrainian children, which has been exposed in mainstream media. I'd be curious to hear from the moderators whether that kind of thing has been encountered.

So, while I have some doubts that this can be done, if there are ways we can challenge clear disinformation techniques (various kinds of manipulation of data), without discouraging views honestly held by a minority, and if it *is* really a problem, then that is worth considering. And encouraging adding /sarc or content warnings or whatever to clarify satire seems like a reasonable idea.

Context... I come from a left anti-capitalist perspective, which does inform my trust or lack thereof of certain sources, for what it's worth.

And... I'm a white older female, for whatever that's worth. The old part might be worth something. :)

AS

Andrew Shead Thu 26 Jan 2023 10:21PM

Instead of trying to formulate an all-encompassing prohibition, we could treat disinformation case by case, as we do for the defederation of instances. There is a wiki page that lists Suspended and Muted instances, when and why: https://wiki-dev.social.coop/Defederation_of_instances. Similarly, we could maintain a wiki page that lists community-agreed disinformation with links to the relevant discussions on Loomio and the final community vote. As a community, we decide what constitutes disinformation, with a traceable record of the decision-making process. Doing it this way allows us to revoke a particular decision when new information changes our collective mind. This process should give us maximum flexibility of engagement with disinformation, and assist the CWG in its work.

A

AG Thu 26 Jan 2023 10:31PM

Is that list of defederated instances updated regularly?

WM

Will Murphy Thu 26 Jan 2023 10:54PM

It looks like no-one from the CWG has weighed in on the disinfo policy vote - could y'all tell us if this would be helpful to you? I'd gladly change my vote if so

@Matt Noyes @emi do @Mica Fisher @Sam Whited @Eduardo Mercovich

AS

Andrew Shead Thu 26 Jan 2023 11:16PM

@adamgreenfield I don't know. If not then it should be kept current. Also, that link is to the proposed new wiki, currently in development.

AU

Ana Ulin Thu 26 Jan 2023 11:55PM

[ETA4: comment edited to remove no-longer-relevant references to deleted content]

Quite a few other Fedi instances have no problem dealing with misinformation without needing to have a novel-length Loomio discussion or a precise policy outlining every possible edge case.

MP

Michael Potter Fri 27 Jan 2023 12:30AM

This is why I like Ana's responses; they usually cut to the heart of the matter.

D

Dynamic Fri 27 Jan 2023 12:53AM

I love this post so much.

Item removed

D

Dynamic Fri 27 Jan 2023 1:01AM

I've had some pretty strong feelings about this conversation. Like others, I think @Lynn Foster has been hitting the nail on the head. I feel like I could throw my support behind any moderation policy she supports.

D

Dynamic Fri 27 Jan 2023 1:03AM

I also have a question. When I mentioned this thread to my spouse, he asked whether something had happened to prompt this conversation on social.coop. And I realized I have no idea. Is there a specific reason why the lack of a disinformation policy is being brought up at this time?

BV

Brian Vaughan Fri 27 Jan 2023 1:08AM

I was a moderator on another Mastodon instance. I resigned, and switched to social.coop, because the site admin and most of the other moderators thought the idea of discussing moderation policies was laughable. I wanted to be on a Mastodon instance that actually took democratic discussion of policy seriously, and that's what social.coop appears to be doing.

AU

Ana Ulin Fri 27 Jan 2023 1:09AM

@Dynamic as I understand it, the originating point for the current discussion was this post: https://www.loomio.com/d/zT7jiGxI/moderation-of-fake-news

MP

Michael Potter Fri 27 Jan 2023 1:28AM

@nicw For what it's worth, Nic, I liked your posts and I was referring to liking Ana's comment that we may not need a gigantic novel to discuss all this, which my proposal has caused us to write.

Item removed

AU

Ana Ulin Fri 27 Jan 2023 1:58AM

@Nic Wistreich I can see how my original comment was more strongly worded than I had intended. I apologize for that. This ongoing neverending discussion about "framework for moderation" has me more than a little bit hurt/angry and disappointed, and I clumsily pointed those feelings at your comment. Sorry.

N

Nic Fri 27 Jan 2023 2:10AM

@Ana Ulin thanks for saying that - and sorry too for complicating things with the delete, and now making this thread even longer!

D

Dynamic Fri 27 Jan 2023 3:00AM

Thanks for the context. I never would have guessed, although I guess (knowing how riled up people get about vaccines) that I probably should have.

Based on the description provided, this is actually exactly the kind of content that I *don't* want to see banned from our forum. I support vaccines and I think they are great, but I also think it's really important for people to be able to have these conversations. I have no idea what the quality of the vax-skeptic posts was, so my opinion is based on how they were described in the thread you link to. My impression from what was presented is that the author was not forcing their views on people nor intentionally trying to convince people of things that they themselves did not believe (i.e. I am seeing no evidence of intention to deceive).

Assuming that's the case, I strongly want there to continue to be mixed spaces where people can hash out these kinds of fraught issues without them turning into an abusive free-for-all. I would feel good about social.coop being (or continuing to be?) a place like that.

D

Dynamic Fri 27 Jan 2023 3:20AM

Hmm... reference to this thread going on forever did prompt me to scroll way up to where @Will Murphy dissected some of the writings from the person in question (the irony is not lost on me), and I'm now less inclined to believe that this Jem Bendell person is acting in good faith.

I could see myself supporting a policy based on the "gotcha" argument structures. I feel much less comfortable with a policy on posts supporting particular beliefs, even wrong ones.

AW

Aaron Wolf Fri 27 Jan 2023 4:23AM

Quite a few other Fedi instances have no problem dealing with misinformation

Thanks Ana, that's a good point. Do you have any specific information about Fedi instances you think handle this well? We could avoid reinventing the wheel by just looking at the best examples already out there.

Please, anyone, share links to examples of mis/disinformation policies we could consider drawing from, especially if they are already shown to be effective and in use in the Fediverse so we can observe them in practice.

AW

Aaron Wolf Fri 27 Jan 2023 4:28AM

as I understand it, the originating point for the current discussion was this post: https://www.loomio.com/d/zT7jiGxI/moderation-of-fake-news

Thank you! I could have asked more bluntly, but I asked several times in different contexts about what I was missing to understand what happened. This is the first time in this long thread that I've seen any link to that topic, and I never saw it before. I wonder how much this whole "framework" topic was confusing because the context wasn't clear at the top of it.

AW

Aaron Wolf Fri 27 Jan 2023 4:37AM

Quick note to everyone about some contention and misunderstandings that have come up here. Thank you to those who worked to find resolution and to recognize in the end the good-faith of one another and to own our reactivity (which is normal and not the same as bad-faith).

Please note that this is why I and others wrote https://wiki-dev.social.coop/Conflict_resolution_guide the way we did. The guide emphasizes using direct-messages and getting facilitation assistance. The more we can avoid public back-and-forth and public accusations of bad faith or other uncharitable interpretations, the more we can keep focused on topics at hand. When we're upset, it's too easy to imagine bad-faith in others, and replying to others while we're upset consistently goes worse than if we own our feelings and follow best-practices for healthy communication.

Please use the conflict-resolution guide, and if you don't like something about it, help us improve it. (And yes, I need to eat my projections and model this myself, I'm still working on the same challenges — cooperation can be hard, and yet it's worth the work in the end).

MN

Matt Noyes Fri 27 Jan 2023 5:27AM

I think the conflict resolution process is the best tool in our toolbox, but it is largely unused. When it has been used, it has been really helpful, both to ops team members and to members in a conflict. We should definitely hold some intro sessions on it for new and existing members.

MP

Michael Potter Fri 27 Jan 2023 7:31AM

It's obvious I have some opinions on disinformation. I've tried not to post too much today so I could get an idea of what people thought. Some people asked questions that I'd like to address.

Why did I mention intent? It's in the definition:

https://en.wikipedia.org/wiki/Disinformation

I don't think this means the poster always knowingly spreads a lie, but the information itself isn't spontaneous; it's created by someone to deceive. It's a weapon.

@adamgreenfield asked: And who’s to say what an “unreliable or unknown” source is?

Do we really have to talk about Fox being an unreliable source?

As for unknown sources, the fact is that anyone, anywhere in the world, can create a website, or a YouTube channel, or a Facebook group, and post pretty much anything there, within certain limits. I could make a page arguing that the Moon is made of green cheese. There are also state actors:

https://www.theguardian.com/world/2015/apr/02/putin-kremlin-inside-russian-troll-house

@lynnfoster said: 2) It requires way too much original research, and even expertise, to discover actual facts these days.

I don't think this is true. We don't have to be astrophysicists to believe the Earth is round, or become doctors to trust in a vaccine. Sites like nasa.gov or cdc.gov are credible and can answer questions quickly. People who deny this are either drinking the Kool Aid, or pouring it.

LF

Lynn Foster Fri 27 Jan 2023 1:33PM

>[from @Michael Potter ] Sites like nasa.gov or cdc.gov are credible and can answer questions quickly. People who deny this are either drinking the Kool Aid, or pouring it.

I wish that were true. I used to trust the CDC, and now no longer do, which I'm sad about. For example, they went back and forth on masks, and weren't clear on mask quality. Another one: they didn't recognize that covid is spread by aerosols until long after the science was clear (a year?), and they still don't emphasize it properly so that people understand the importance of ventilation. We won't ever know how many lives that cost. I've read a lot of original studies to figure out my own covid response, and the CDC was frequently wrong or behind. And it was several times criticised strongly for being wrong by people like University of Minnesota epidemiologist Michael Osterholm (who has been on the president's advisory group); it's not just unqualified people like me. I've also read about some of the financial conflicts of interest involved, which exist, whether or not people acted on them. There's so much more; this is the tip of the iceberg.

P.S. I'm not personally anti-vax. But I think a respectful discussion is good, in fact critical. The above, and other comments recommending trusting mainstream media for our disinformation decisions, is exactly the reason.

A

AG Fri 27 Jan 2023 1:52PM

Somewhat unexpectedly chiming in to support Lynn’s comments here. Even the most trustworthy of state institutions can be compromised or subverted, and their credibility in the past does not guarantee their reliability in inimical hands. This is one reason why I dislike policies that specify some sources as prima facie credible.

SG

Shauna Gordon-McKeon
Abstain
Wed 25 Jan 2023 8:11PM

I support moderators removing disinformation but think we should word the addition to the CoC broadly, to give them latitude. If there are issues with them interpreting this too aggressively or members feeling like their speech has been chilled, we could then revisit. "Disinformation, meaning false information which is intended to mislead" seems good enough. That said, I very much defer to what the moderation team itself thinks would be helpful. Have any moderators weighed in?

NS

Nick Sellen Fri 27 Jan 2023 4:53PM

The discussion has got a bit overwhelming and noisy now: a lot of opinions, not a lot of discussion, and it's hard to get perspective, I think. It feels to me like a crowded room of people shouting over each other.

I do notice by looking in the sidebar on the right that the middle proposal is almost all green, and the other two are majority red, so maybe the middle green proposal is good enough to go forward with.

That emphasised that the main issue wasn't about trying to decide what is or isn't misinformation (a spicy topic), but about co-operating with our lovely moderator team (people seem broadly to approve of this).

To my mind, it would be nice to know whether the moderator team is sufficiently supported to do their work. Maybe they need more people, money, or guidance? Or maybe there are things outside the scope of what they can handle?

I personally would be happy to leave this one to the moderator team (aka the CWG)...

As it's grown, I increasingly have the sense of social.coop not so much as a coherent community, but as a co-operative provider of a mastodon instance. It feels too big for a real discussion (which to me is mostly about listening, and less about talking). I don't know who you all are, and I don't really have a way to find out.

The working groups, on the other hand, I think can achieve that coherence... a much smaller set of people with some longer-term bond and commitment. It's that I put my faith in, more than the public town-square style we have here.

I have the sense of my comment here being written on a piece of paper, which I'll now launch into the wind, to go where it goes :)

MP

Michael Potter Fri 27 Jan 2023 5:27PM

@lynnfoster I got feedback that my post was snarky and, on re-reading, I agree and apologize. To rephrase non-snarkily:

I'm not a fan of blind trust in any authority, and the CDC isn't a monolithic authority, but part of a broader scientific and medical consensus. It's that consensus that I generally trust.

Your paragraph on the CDC seemed like a challenge to "do my own research," the results of which I can guarantee no one on this thread wants to hear. Also, I don't feel the need to refute any of that in order to trust in science.

A

AG Fri 27 Jan 2023 5:47PM

I don’t believe that last characterization (“something we heard on the internet”) is at all fair to any of us in this conversation, Michael. While I am unyieldingly pro-vaccination, and in general an advocate of robust public health measures, the issues at the CDC are a matter of public record.

SM

Scott McGerik Sat 28 Jan 2023 12:16AM

Based upon the following from Ana:

"Quite a few other Fedi instances have no problem dealing with misinformation without needing to have a novel-length Loomio discussion or a precise policy outlining every possible edge case."

I have the feeling that we could handle mis/dis-information by letting the CWG do its work, with appropriate levels of transparency. If necessary, we could have a Loomio discussion.

BS

Billy Smith
Abstain
Wed 25 Jan 2023 8:11PM

While we do need some form of moderation for disinformation, as people pissing in the fountain of truth is never a good thing, this needs re-wording.

Yes, the intent may not be to censor religion, astrology, or spirituality, but the wording doesn't mention them or say that they are okay.

As for conspiracy theories: the Snowden revelations about the NSA surveilling everything on the internet were a conspiracy theory until Snowden proved they were real...

AR

Alex Rodriguez
Abstain
Wed 25 Jan 2023 8:11PM

Abstaining until someone from the moderation team weighs in, as I'd like to defer to whatever perspective they have to offer, since they are closer to the work of actually enforcing a policy like this.

D

Dynamic Sat 28 Jan 2023 3:11PM

My read is that this thread has about burnt itself out (and yes, that we should defer to CWG), but for me it's raised some questions about the process we use for developing Loomio proposals, and I started a thread about my questions here: https://www.loomio.com/d/vzhQxDHA/how-we-use-proposals

(Please also let me know if I've done anything incorrectly in either the technical or social aspects of setting up and making people aware of a Loomio thread for the group.)

MP

Michael Potter Sat 28 Jan 2023 9:33PM

I'm not sure whether I've helped or hindered the CWG with this proposal, but as it stands, my proposal hasn't demonstrated broad approval. We could either refine the language, or I could hit the button to close it early.

I've opposed using the word "harmful" because I'm sure a clearly authoritarian regime, like Iran's leadership, could use such a broad definition to justify its terrible actions against protestors. Also, the other proposal, which suggests "harmful" as the key word, isn't doing well.

Billy's comments touch on what I had in mind with the initial language. Some conspiracy theories turn out to be true, which is why I said "debunked": to set apart theories that have not failed fact checks. Snowden was not debunked because, unlike many conspiracy theorists, he provided copies of the proof to respected journalists.

Religion and spirituality are not the sort of information that can be effectively fact-checked and are outside of the purview of science. I'm not sure of the best way to describe information that we definitely don't want to censor.

WM

Will Murphy Sun 29 Jan 2023 7:22PM

Hi Michael, I'd like to ask you to reconsider your comment drawing a comparison between the five members of our community who serve as the elected CWG and the government of Iran.

MP

Michael Potter Sun 29 Jan 2023 7:57PM

I really didn't intend to compare the CWG to Iran, and I've edited to clarify that.

ES

Ed Summers @edsu
Agree
Mon 23 Jan 2023 7:24PM

D

djm
Abstain
Wed 25 Jan 2023 8:11PM

I abstain because the language of the proposal is admittedly not clear. I am torn between the need to be clear and the legal tendency to say as little as possible. It is a good code of conduct, but perhaps it needs more discussion and wordsmithing.

J

jonny
Agree
Mon 23 Jan 2023 7:24PM

I agree with this, but we definitely need a clear process, including appeals, both to keep the CWG from having to constantly parse murky space and to give people trust in the process ❤️❤️

JO

Jere Odell
Agree
Mon 23 Jan 2023 7:24PM

Agree.

EL

Eliot Lash
Agree
Wed 25 Jan 2023 8:11PM

I support a covenant of cooperating with the CWG.

BV

Brian Vaughan
Disagree
Wed 25 Jan 2023 8:11PM

While I generally think it's appropriate for moderators to try to stop the spread of disinformation, I'm concerned that the language in this proposal implies a bias towards formal authority, in "references to unreliable or unknown sources".

JNM

J. Nathan Matias
Disagree
Wed 25 Jan 2023 8:11PM

Intent is typically a very difficult standard to apply, since you would need corroborating evidence that a person had that intent. In a court system, where it's possible to legally subpoena someone's private communications, it's sometimes possible to establish intent in cases like defamation. But the co-op presumably shouldn't have that power. At the same time, we have all mistakenly published inaccurate information at some point.

J

JohnKuti
Disagree
Wed 25 Jan 2023 8:11PM

As I said about the first proposal about "harmful" stuff:

I'd like to see something positive rather than a prohibition. An obligation to respect the common culture of the site and promote cooperativist principles.

BS

Brecht Savelkoul
Abstain
Wed 25 Jan 2023 8:11PM

Generally okay with this, but I don't like the "unreliable or unknown sources" bit. A random civilian posting an eyewitness account of an ongoing event would be an unknown source. A satirical publication would be an (intentionally) unreliable source.

I suggest dropping this clause altogether, leaving us with something like: "Disinformation, meaning false information which is intended to mislead, such as demonstrably false claims and debunked conspiracy theories." That should suffice, I think.

L

Luke
Agree
Wed 25 Jan 2023 8:11PM

Agree with the proposal, but also agree with some others that potential for harm should be added in, to make it a bit more specific.

DU

Deleted account
Agree
Wed 25 Jan 2023 8:11PM

This kind of disinformation has already begun to divide communities and is a serious threat to democracy; I therefore agree that such safeguards against disinformation should be written into the CoC.

ZS

Zee Spencer
Agree
Wed 25 Jan 2023 8:11PM

I am overwhelmed by the sheer volume of proposals. What might it be like if we allowed our moderation working group to use their expertise to define and enact policy, rather than attempting to coalesce everyone into a cohesive stance? It seems unreasonable to require our Code of Conduct to be universally debated, and I would prefer that we allow the Moderators (who have the experience and expertise) to engage in advice-seeking plus deciding rather than whole-group voting. Rule of two feet and all that.

I

Isabel
Disagree
Wed 25 Jan 2023 8:11PM

This is too broad and vague, especially "references to unreliable or unknown sources".

MP

Michael Potter Tue 31 Jan 2023 9:39PM

A number of people really do not like the mention of specific case types. What about the following edit:

CUT OUT: This includes demonstrably false claims, debunked conspiracy theories, or references to unreliable or unknown sources.

This would leave the final language as follows. I added a blurb on intent.


I propose that we add the following bullet point under Section 5 of the CoC, where it says "I will not post, threaten to post, nor share from this instance or external sources:"

Disinformation, meaning false information which is intended to mislead. (That intent may lie with the original source rather than with the poster.)

ED

emi do Fri 3 Feb 2023 7:54PM

Thank you to everyone who has put so much time, thought, and energy into moving this discussion forward. It has been interesting and challenging to navigate the increase in diversity and volume of moderation issues that the Ops Team has been handling.

Many members of the Ops Team have not been actively participating in this discussion, but we are all incredibly grateful to have been able to take a back seat and watch such respectful and thoughtful debate happen. Though it might be frustrating, I think these types of processes, in which we are all able to speak to our various lived experiences, are rare. We will not always agree with one another, but I love that we are all striving to live with one another.

I've just announced the passed proposal on the instance. @Aaron Wolf, please go ahead and amend the Code of Conduct on the wiki!

AW

Aaron Wolf Fri 3 Feb 2023 9:47PM

Updated at https://wiki-dev.social.coop/Code_of_conduct so it will be merged whenever that dev version of the wiki gets properly linked in.