Loomio
Thu 27 Apr 2023 8:53PM

Proposal for Change to Code of Conduct

juniper cameryn · Public · Seen by 215

Hello all!

I shared a concern here several months ago about specificity of language and the potential for abuse. You can read that thread here: https://www.loomio.com/d/sdUcLBWD/specificity-of-language-regarding-content-warnings

I'm bringing this back up because today I was notified that a post of mine had been reported from another instance and was determined not to have violated the code of conduct. Still, it brought back up for me that this particular clause could easily be abused without clear and specific guidelines around it, and that as our instance grows, that specificity becomes even more important.

The things I write about can by nature be distressing. That is part of what my audience values my writing for: it pushes against popular wellness narratives and attempts to help people grow and be more mature and healthy in the way they relate to themselves and each other. I have been targeted for harassment in the past because of the things I write (and, being honest, I suspect the person who reported me may have been adjacent to those people).

This matters a lot to me because, while I fully agree with any code of conduct that ensures we aren't posting things that are hateful or abusive, it is the nature of life and growth that we may find "distressing" anything that pushes us to think in ways we're not comfortable with. We also have a plethora of tools on the platform, from blocking and muting to hiding certain terms, for people who don't wish to see such content. This rule also raises a lot of questions about news and content that is political or challenging in nature; the climate crisis is one example that comes to mind.

Here is the proposal I am submitting:

I propose that we remove "content that is likely to be distressing or hurtful to others" from the list of content that must be placed behind a CW in the Social.Coop Member Code of Conduct v3.1, section 5: https://wiki.social.coop/docs/Code-of-conduct.html

I am submitting this proposal under the assumption that we can have some productive conversation if there is any disagreement, or if other members think doing so would require us to flesh out boundaries a bit more clearly.

Doug Belshaw Fri 28 Apr 2023 6:03AM

Absolutely! But this is being framed as a vote, not a discussion.

Danyl Strype Wed 3 May 2023 8:07AM

@juniper cameryn

"I'm not looking for people to immediately accept my proposal as is, I'm looking for discussion that can help flesh out and develop these concerns."

This is totally reasonable. Loomio has a number of tools that can be used for this, such as the Sense Check, before proceeding to a formal proposal.

Joshua Neds-Fox [@[email protected]]
Disagree
Thu 27 Apr 2023 8:54PM

I think CWs are the mechanism that allows us to advance challenging discourse without unduly subjecting each other to automatic distress. I'm not sure there's a balanced way to come to a consensus about what constitutes "distress" or "harm," and that's what we'd need before we pulled this language out. I vote to err on the side of protection.

Andrew Shead
Abstain
Thu 27 Apr 2023 8:54PM

I have no preference either way.

Zee Spencer
Disagree
Thu 27 Apr 2023 8:54PM

It sounds like the clause worked as intended: another instance with a different definition reported content, and our moderation team's evaluation was that it didn't violate our policies.

Subjective definitions are scary, but ultimately trust is the key to healthy practices.

I am curious whether a better place to focus, and to build trust, would be shoring up our policies around removing bad moderators.

juniper cameryn Thu 27 Apr 2023 9:55PM

@Zee Spencer I'm curious what you think about my proposed language addition in a comment below (I'll tag you in it). I think you're right that trust around uncertainty is definitely at play here. Maybe the angle can be encouraging relational responsibility as well, which I'm attempting to do with the added language below.

Moon Baron
Disagree
Thu 27 Apr 2023 8:54PM

I feel that broad definitions are most insidious when they have dramatic implications, and I do agree that the current phrasing is broad and open to lots of interpretation.

That said, I feel like compelling someone to use CWs is a pretty low-impact accommodation. We can maybe add language in the future to ensure that members aren't unduly censured for not using CWs, because the situations under which they "must" be applied by the CoC are very fuzzy, but I don't have much problem with the current CW clause.

Aaron Wolf
Disagree
Thu 27 Apr 2023 8:54PM

I think this whole issue is yet another symptom of unhealthy common views of what Codes of Conduct are for. Violations of a CoC should be common enough, and treated with restorative approaches, that we don't see them as some absolute hard-line zero-tolerance sort of thing. It's okay that they are open to human interpretation. We avoid corruption at the enforcement-process layer rather than by making the CoC try to function like a strict program. The CW action is not the same as blocking a message.

Eric Leland
Disagree
Thu 27 Apr 2023 8:54PM

I do agree that "distressing" is very fuzzy; however, a CW is more of a heads-up courtesy than censorship. It empowers people to report content as distressing and have the team here judge it accordingly. I would prefer to enable some more reporting here in case patterns emerge that are causing distress we did not anticipate and find valid.

Tom Resing
Disagree
Thu 27 Apr 2023 8:54PM

There’s no reason stated for making this change.

Sam Whited
Disagree
Thu 27 Apr 2023 8:54PM

Scott Jenson
Disagree
Thu 27 Apr 2023 8:54PM

I agree the current warning should be improved. But that should be in the proposal.

Konrad Lawson
Disagree
Thu 27 Apr 2023 8:54PM

I think my reasons for voting against are in line with comments from Matt Noyes and Zee Spencer here, who emphasize the value of flexible, not overly detailed, provisions. As Zee nicely puts it, our hard-working moderators are humans that we can all work with. I would caution against the temptation toward too much detail or precision in the policy. If we find there are a significant number of problematic cases that impact our community, then I would be happy to support revisions.

Luke
Disagree
Thu 27 Apr 2023 8:54PM

It's a warning, not a ban, and I think we need the former for such content. Of course, the wording of the policy could be discussed, but I think we need CWs.

Daniel Thomas
Disagree
Thu 27 Apr 2023 8:54PM

I can see that there might be grounds for rephrasing this, as there is potential for abuse, but this proposal removes both "distressing" and "hurtful". I am fine with posts that are truthful and might make some readers feel uncomfortable, but at the point that they are "hurtful" it sounds like they are not being phrased in a helpful way.

Billy Smith
Disagree
Thu 27 Apr 2023 8:54PM

There should have been a discussion before the voting proposal was made.

I realise from reading the rest of the thread that this was intended to start the conversation, but as a voting proposal, I can only say no.

As a topic for further discussion, it should take place elsewhere.

Eamon Caddigan
Disagree
Thu 27 Apr 2023 8:54PM

I disagree with this proposal, for two reasons. But both of them echo issues raised above, so feel free to ignore my comments below!

Eamon Caddigan Fri 28 Apr 2023 8:54PM

First, I’m comfortable with community agreements (e.g., bylaws and codes of conduct) that have ambiguous language that’s open to interpretation, because I’m convinced (based on experience) that it’s impossible to craft a bulletproof set of community agreements that will prevent all forms of abuse and toxic behavior (a clever abuser will always be able to figure out how to stay within the letter of the law). The best agreements can help make a community resilient to toxic behavior, but at the end of the day, it’s always up to the community itself to organically maintain a healthy culture, by enforcing its norms using a mix of formal and informal methods. While it’s possible for language to be too vague, I’m not convinced that that’s the case in this specific example.

Second, I am generally a fan of Content Warnings, and think it’s unfair to treat them as a form of censorship. A CW policy tells our members: YES, you can absolutely discuss this topic on social.coop… just please put a CW on it. I miss the days when most people on the Fediverse used CWs for discussions of US politics, for instance. “Distressing or hurtful” content should have a CW, in my opinion.

Leigh Ridgway (dprk_ebooks)
Disagree
Thu 27 Apr 2023 8:54PM

I support change to the CoC around this generally, but I think that, based on the community discussion, there may be better solutions. Perhaps a proposal to form a working group to re-develop this section in consultation with the community might be a starting point; I don't know enough about the co-op and how heavy its change-management processes are, however.

juniper cameryn Sun 30 Apr 2023 7:35PM

@Leigh Ridgway (dprk_ebooks) I like the idea of a WG around this and I'd like to participate in something like that.

Brian Vaughan
Disagree
Thu 27 Apr 2023 8:54PM

I feel CWs are overused on Mastodon in general, but the existing language seems to me to be a perfectly reasonable guideline for when they should be used.

juniper cameryn Thu 27 Apr 2023 8:55PM

I'm withholding my personal vote for the moment even though I submitted the proposal because I want to see where discussion leads us first!

Scott McGerik Thu 27 Apr 2023 9:12PM

Rather than removing "Always use content warnings (CWs) when sharing content that is likely to be distressing or hurtful to others", I'd prefer we adjust the wording.

  • "Likely" is highly subjective and is dependent upon the frame of reference of the person making the judgement call.

  • And I encounter "distressing" content every day in my interactions with others, so it seems to me to be a meaningless restriction.

The above two points said, I don't have a specific suggestion for a wording change.

Matt Noyes Thu 27 Apr 2023 9:30PM

Hi friends, I see CWs as a fluid, evolving mode of discourse. Vague is good here because we are learning what content others in our community feel needs a CW. The purpose of this language is to encourage us all to learn to be more aware of and sensitive to each other as we discuss content of any kind.

Leigh Ridgway (dprk_ebooks) Thu 27 Apr 2023 9:31PM

I'm not sure if this is an ideal model or would help here, but my old instance does three things around CWs that I think are good. First, they differentiate between content where they expect you to CW and content where it's recommended that you CW. Second, they include a carve-out for members of marginalised groups to post about their lived experiences without CWs. Finally, they have a living section listing the kinds of things the community was organically using CWs for before the policy, as a non-exhaustive guide to help new users. It's not perfect, but I think it strikes a good balance. Reference: About - Aus.Social

Scott McGerik Thu 27 Apr 2023 9:33PM

@Leigh Ridgway (dprk_ebooks) That suggests to me we need a new section that recommends content warnings for specified content. The current language says to always use content warnings.

juniper cameryn Thu 27 Apr 2023 9:36PM

I'm curious what people think about this: instead of removing the clause, we add something like the following (the new text is everything after the first sentence):

Always use content warnings (CWs) when sharing content that:

  • is likely to be distressing or hurtful to others. While hurt and distress are subjective experiences, we expect users to be mindful in reporting others' posts, first attempting direct feedback to the poster or using the platform's tools, such as muting, blocking, and hiding terms, to attend to one's own wellness. Escalations can be dealt with via our Conflict Resolution Guidelines.

Scott McGerik Thu 27 Apr 2023 9:45PM

@juniper cameryn I see that as an improvement, based on the idea that we cannot all know what is distressing to others. I suggest moving "or hurtful" to a separate clause; however, I think hurtful to others is probably covered elsewhere.

juniper cameryn Thu 27 Apr 2023 9:55PM

@Zee Spencer this here

Zee Spencer Thu 27 Apr 2023 10:01PM

@juniper cameryn I don't see this as a substantive improvement, and I continue to believe your concerns are better addressed through clarifying the moderation team's policies and community oversight, rather than by adjusting the particular wording of the code of conduct.

juniper cameryn Thu 27 Apr 2023 10:03PM

@Zee Spencer Unless I'm missing something, the moderation team bases their moderation on the Code of Conduct: https://wiki.social.coop/community-working-group/Community-Working-Group.html

Zee Spencer Thu 27 Apr 2023 10:14PM

@juniper cameryn Your concern seems to be about moderators interpreting the code-of-conduct in a way that is abusive or discriminatory. Because moderators are people, even if you have a highly precise definition, a sufficiently motivated moderator could find a way to interpret that definition in an abusive or discriminatory way.

In policy-writing, there is often a sweet spot between specificity and ambiguity.

Consider how overly specific means-testing policies result in a welfare cliff, where people who need public services are denied access because they do not meet an overly specific definition.

With overly ambiguous policies, people may feel there is too much leeway for an abusive authority figure to harm them.

This is why I recommend modifications to the community working group policies to increase community oversight over amendments to the code-of-conduct.

Questions I would ask, if I did not already trust the moderators, are things like:

  1. How are moderators accepted into the group? How are they removed?

  2. How are appeals handled in the event of a member claiming discriminatory or unfair moderator actions?

  3. What happens if there is a disagreement between moderators about a future or past action?

If those are lacking then opportunities for discrimination are present, no matter how well-written the code-of-conduct is.

juniper cameryn Thu 27 Apr 2023 10:22PM

@Zee Spencer Ahhh okay, I appreciate you expanding on this; it makes a lot of sense to me. I do trust our moderators, but I absolutely think exploring those questions is warranted as the instance grows and new moderators are added.

I also wonder if there is any way to anticipate the possibility that the report feature will be abused to target someone. In the "Unacceptable Behavior" section we talk about things like dogpiling and harassment. I think another concern is reporting itself being used as a form of harassment against users, which might warrant an addition to the Unacceptable Behavior section.

Aaron Wolf Sat 29 Apr 2023 3:41AM

@juniper cameryn I disagree with the entire idea that muting/blocking should come before reporting/flagging. I think that whole approach is harmful. Muting and blocking, even on an individual level, are far more of an escalation than flagging. Flagging something as a CoC violation is like interrupting me and saying, "hey, that's not cool!" so that I can respond with "oh, I see, sorry, I'll work on doing better". The idea that anyone would first refuse to hear from me, or even block communication with me, is really misguided IMO. Flagging violations should be seen as a light, no-big-deal thing we do all the time to keep each other at our best. Private messages are just an alternate version if you feel no need for help from someone else to review/facilitate. Muting/blocking, even individually, should be reserved for situations where normal reconciliation and communication aren't working (and really, muting is more like filtering your own preferences for your attention, while blocking is the escalation that should be reserved more).

Aaron Wolf Thu 27 Apr 2023 11:10PM

I think that we thrive if we have an enforcement policy that makes it safe to violate the CoC.

So, if I think something is distressing, I should use a CW. If I forget, I shouldn't feel super threatened by getting flagged; I should just fix it.

If I think something is so important that it is worth distressing people by avoiding a CW, I should not worry that violating the CoC will jeopardize my rights in the co-op. The violation might get flagged, and I would have to go through a process of responding to that; maybe I can justify that it is worthwhile. As in other cases of civil disobedience, I can do it consciously and accept that there might be consequences.

So, I don't think it's healthy for people to think that, in general, CWs aren't needed on distressing content. I think CWs should indeed be the default. And we use human judgment case-by-case.

If someone abuses things by flagging non-CW'ed content as distressing as a way to target someone, that itself needs a process to be addressed and reconciled.

I want people to feel comfortable referring to the CoC to flag my posts if they think I should have used a CW when I didn't. That's okay as long as it's easy to fix and forgive and not some sort of punitive system where I get 3 flags and then lose privileges or something. The decision to reduce someone's privileges should be based on human judgment, not on an algorithm (except in obvious cases for spam and such).

Sam Whited Fri 28 Apr 2023 12:55AM

I tend to disagree with this for the reasons others have said. I'd go so far as to say that the specific wording of a good CoC doesn't matter; it just sets the tone for the instance. To help the moderators and community members filing reports make good decisions, we should focus on removing implicit biases in the moderation team and creating a culture of respect that allows us to easily discuss reports without requiring punitive measures or the moderator equivalent of mandatory-minimum sentencing.

To apply that to this specific CoC wording: I don't think the words "distressing" or "hurtful" are important themselves, it could just as well say "Consider the way your words will impact others and use content warnings if appropriate". It's the spirit of the thing, not the exact words that matter here.

Rich Jensen Fri 28 Apr 2023 4:02PM

How significant is this as a functional issue in current operations? Is this about anticipating issues that may come? Or is this about a gap in guidance that is causing difficulty or distress to current worker-moderators? Is there an archive that documents the frequency with which violations of this policy are reported? Are reports increasing? Does anyone feel they have been unfairly reported?

Rich Jensen Fri 28 Apr 2023 4:22PM

Full disclosure: I hadn't read this policy carefully until now and may be a habitual violator. Oopsie-doopsie!

I'm a fan of promoting critical inquiry to challenge assumptions and behaviors embedded within many significant social structures. This may cause some annoyance to some people who encounter my thread. I will have to think about what role CW stylings may play in relation to the content I share. Fortunately, I don't think I've increased anyone's workload to date. To the best of my knowledge, no one has been irritated to the point of filing a complaint. Hmmm. Is this an indication of my lack of efficacy as an agent of change? So much to ponder…

juniper cameryn Sun 30 Apr 2023 7:50PM

I am going to close this poll now after the feedback. I hear that a few others feel similarly about the vagueness of the language yet want to see an alternative proposal. The point Zee made resonates the most with me: even the most precise language won't avoid loopholes or potential abuse. Other good points were made about this being a question of moderation and trust.

I still think it's very important, as the instance grows, to consider how to avoid becoming like other toxic social media cultures. I saw the proposal for a GM today; maybe that would be a good place to have a facilitated discussion about this.

When I look at our CoC, I see a lot about avoiding making others uncomfortable, yet I don't see anything about self-regulation and self-responsibility. The clause I mentioned here is one of the most obvious places where that feels overlooked, with its emphasis on not making others feel "distress" which I still feel strongly is inappropriate wording! Not just in its imprecision but in the implication that we are able to predict or control that. Particularly given the nature of my writing, but honestly more importantly as a neurodivergent person who already struggles with neurotypical expectations to predict things like that or suffer social consequences. And I think that needs further exploration.

There have been many great suggestions here; thank you all for your feedback, it's given me a lot to think about.

If anyone is interested in forming a group to discuss and flesh this out further with me, please feel free to get in touch. My username is @wetpaper, but I'm not always checking Mastodon, so email might be better: [email protected]

Aaron Wolf Sun 30 Apr 2023 11:12PM

@juniper cameryn Thanks, Juniper. I would like to get your perspective on my feelings about CoCs.

"emphasis on not making others feel "distress" which I still feel strongly is inappropriate wording! Not just in its imprecision but in the implication that we are able to predict or control that. Particularly given the nature of my writing"

What do you think about reframing this story to remove the idea that it's a problem to get flagged? In other words, instead of the idea that you must know what will distress others, you simply use a CW when you are confident that something is generally distressing, and if you do not think to do so, someone can message you or flag it so that you can edit it to add the CW. What's the harm there?

What I'm saying is: how do you feel about the idea that we reasonably predict members will unintentionally violate the CoC here and there, and we just have a process to fix those cases, and that's that? Do you think a CoC should be written so that a diligent person in good faith reliably never violates it? Or is it okay that violations may happen, and we get the experience of fixing things and addressing such issues as they arise while presuming good faith all along?

juniper cameryn Mon 1 May 2023 12:26AM

@Aaron Wolf My take is that it definitely is unreasonable to assume that the code of conduct will never be violated. That said, I think striving to be as clear as possible is a good thing, and a code of conduct does set a tone for and reflect culture in important ways. Personally, the wording of this clause reflects enmeshment/codependency to me. It reflects an expectation that we are responsible for other people's feelings and that we need to anticipate them, which breeds codependency and hypervigilance. Those are very high expectations in general, but particularly for neurodivergent people, people who come from backgrounds of abuse or cult-like communities, and generally those who are trying to heal and grow from toxic, codependent dynamics.

If that's not what we're doing in real practice, it should not be in our code of conduct. What we're actually doing should be reflected.

I think if we're reframing the idea that it's a problem to get flagged, that needs to be a discussion of its own, and it should be abundantly clear how flags are regarded and what behavior truly warrants booting someone from the instance/co-op.

I don't personally think we should rely on reporting to deal with things that upset us. Setting the expectation that reports can be used liberally puts unnecessary stress on the mod team and says that we need mediators to deal with things we should be confronting directly. (I recently attended an excellent conflict workshop by Sociocracy for All, one of our fellow co-op members, where they made this point: communities should be empowered at the lowest level to work out conflicts, and relying on mediators is essentially saying "I don't want to be responsible for dealing with this so I'm gonna make someone else do it.")

If we lack the skills to communicate directly and take responsibility for ourselves in that way, then there should be an expectation that we're attempting to grow in that direction (and certainly a culture of grace; no one is perfect, and I know myself and a lot of people are included among those who have room to grow here!).

In this case, someone "unknowingly violating the CoC" means they've hurt someone or caused someone distress, in a context where hundreds if not thousands of people can see our posts, and thus there are hundreds if not thousands of people whose reactions we would need to anticipate. And the nature of triggers and of "distressing," "hurtful," and any other vague, related terms is that they are varied and generally unpredictable. Someone could be triggered by dogs because they got bit by a dog. Someone could be hurt or distressed that I've posted a position I hold that they disagree with. I welcome and encourage good-faith discussion, debate, and disagreement on my posts! I also take feedback and have changed my position when presented with good arguments. If someone is unwilling to engage with me in that way and expects me to anticipate what's going to distress them, then so long as I am treating them respectfully, I simply will not be responsible for it 🤷‍♀️ I just won't. For my own mental health and boundaries.

I agree that a CoC can't account for everything. I also agree that reports shouldn't be a super serious thing and that approaching them as places where we can work out conflicts is a great thing, AND at the same time I think striving for clarity in a code of conduct matters a lot. I also think approaching reports as not a huge deal doesn't mean we ignore that mutuality is a two-way street: if I'm responsible for never hurting or distressing others and for anticipating when and how I might do that, then others are responsible for tending to their own well-being and engaging in good faith.

David Schnarch says, "Those who can't control themselves will try to control others and their environment," and I think we should be looking at the ways self-responsibility meets communal responsibility, striving to create environments where distressing people is not an infraction and we're expected to handle discomfort with one another in good faith. I simply think this clause betrays that effort.

Aaron Wolf Mon 1 May 2023 4:32AM

@juniper cameryn

Thank you for such a clear and comprehensive answer! I agree with everything you are saying here.

My ideal vision is of a system that works much more robustly to enable just this sort of low-level resolution (not involving any moderator). I realize the software is nothing like my ideal. What I imagine is a situation where flagging goes directly to the poster, with an optional feedback message, a list of which CoC items are involved, and a friendly, supportive prompt for the poster to simply edit. All of that should happen with no extra labor from anyone else. And I'd like the ability to opt in to having my posts hidden immediately upon such flagging, so that I can minimize tensions my post might cause until I have the chance to edit and repost. I would then have moderators involved only in cases where someone disputes a flagging and wants to keep their post as-is; then, the conflict will need to be resolved.
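To make that concrete, here is a rough sketch of the flow in Python. It is purely hypothetical: none of these names correspond to a real Loomio, Mastodon, or Discourse API, it's just the logic I'm describing, under the assumption that a flag carries a checklist of CoC items and an optional message.

    # Hypothetical sketch of the flag-to-edit flow described above.
    # Every name here is an illustrative assumption, not a real API.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Flag:
        coc_items: list                   # CoC clauses the flagger checked off
        feedback: Optional[str] = None    # optional (anonymous) note to the poster

    @dataclass
    class Post:
        author: str
        body: str
        hidden: bool = False
        auto_hide_on_flag: bool = False   # the opt-in described above
        flags: list = field(default_factory=list)

    def handle_flag(post: Post, flag: Flag) -> None:
        """A flag goes straight to the poster; no moderator labor involved."""
        post.flags.append(flag)
        if post.auto_hide_on_flag:
            post.hidden = True            # minimize tension until the author edits
        notify_author(post, flag)         # friendly, supportive prompt to edit

    def handle_edit(post: Post, new_body: str) -> None:
        """Editing resolves the outstanding flags and re-shows the post."""
        post.body = new_body
        post.flags.clear()
        post.hidden = False

    def handle_dispute(post: Post) -> None:
        """Moderators come in only when the author keeps the post as-is."""
        escalate_to_moderators(post)      # normal conflict-resolution process

    def notify_author(post: Post, flag: Flag) -> None:
        items = ", ".join(flag.coc_items)
        print(f"To {post.author}: a reader flagged this post under: {items}. "
              f"Feedback: {flag.feedback or '(none)'}. Please consider editing.")

    def escalate_to_moderators(post: Post) -> None:
        print(f"Moderators: {post.author} disputes a flag; facilitation is needed.")

The key property is that escalate_to_moderators is reached only from handle_dispute; every other path stays entirely between the flagger and the poster.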

In a reality where flagging gives no alert and edit-prompt to the poster but instead calls in moderator attention, my ideal does not work. The workaround would be private messages, though those lack the options for anonymity and for temporary hiding until the edit.

juniper cameryn Mon 1 May 2023 4:36AM

@Aaron Wolf that would be a really cool system!

Aaron Wolf Mon 1 May 2023 6:14PM

@juniper cameryn For perspective: I spent so much time around 2014 trying to build such a system that it took time and energy away from my core work on Snowdrift.coop. But after scrapping all the work we sunk into a custom system, we later did adapt our Discourse instance to be as close as possible. It actually does have a system where flagging sends a message to the poster, hides the flagged post, and re-shows it upon editing. It's almost ideal. However, it doesn't have a custom form to check off which CoC items are involved (only a few basic types like "off-topic" vs "spam" vs "conduct violation"), and there is no place for a custom feedback message. Also, the default text in the message is in a horrible style, saying effectively "several people flagged you, so you must have done something awful, you better improve…", but we were able to edit the text and settings so that the hiding and the message happen with a single flag and the text reads more like "maybe this is just a misunderstanding, it's normal enough, everyone with good faith is welcome here, please just take a chance to review and see how you might edit and improve your post". Even still, it's really hard to shift culture when everyone seems used to the idea that flags are only for serious violations and getting flagged is a big threat.

I have other notes about how the ideal system could work, and I hope to make a good enough public draft proposal sometime to hopefully inspire discussion and development of better interactive online communication. Anything that goes toward being okay to do things and make mistakes and learn and get feedback is an improvement over the broken status quo binary debate between "free speech" vs "safety".

juniper cameryn Tue 2 May 2023 10:23PM

@Aaron Wolf

"Anything that goes toward being okay to do things and make mistakes and learn and get feedback is an improvement over the broken status quo binary debate between "free speech" vs "safety"."

Yeah, definitely. And I see it as more complex than free speech vs. safety; I see it as a tension between autonomy and relationality. It's inevitable and important that there's space for that to be explored.

Danyl Strype Wed 3 May 2023 8:23AM

Let's go right back to basics. What is the purpose of the CoC? Is it a set of guidelines that we expect everybody to fall outside of at times? Or a set of minimum expectations for participation, where violating them can lead to exclusion? When a CoC is defined for an in-person event or an open source project, it's usually presented as the latter, in which case it's important to make sure all the crucial things are in there, and only those.

I recommend dividing the existing social.coop CoC into two documents: a minimal CoC as described above, and a set of server usage guidelines. Since I can't see anyone being kicked off the server for not using CWs, I would put any recommendations about those in the guidelines document.

Aaron Wolf Sun 7 May 2023 6:25PM

@Danyl Strype I agree with your points here. The adjustment I would propose is for the CoC to emphasize fixing such issues as they arise. So, in this case, the minimum-expectations interpretation (violations can lead to exclusion) would apply to the adding of CWs when something is flagged as needing one. The expectation isn't perfect use of CWs but the commitment to use them well and to add them when asked (including some process to appeal if someone is abusively insisting another person use excessive CWs). Does that make sense? I would support excluding people who refuse to work constructively toward conduct ideals, and I want the ideals spelled out adequately along with that…

juniper cameryn Sun 7 May 2023 7:10PM

@Aaron WolfWolfWolfWolfWolfWolWolfolfWlfWofWolWolfolfalfagfagr

Omg sorry for the weird spam that happens when I'm on mobile and idk how to fix it.

Strongly agree with both of you here. I love the idea of a minimal CoC and separating out guidelines/best practices.

My personal caveat is the option for some, like myself, who talk about certain topics at length (in fact, as a field of specialty) to have a pinned post with warnings specific to that page. I know you're not into mutes and blocks, Aaron, but I do prefer people to mute me if they don't want to see content related to my field, or to use word blocks if they like my personal/unrelated/shitposting but not the more serious posts about things like abuse and whatnot. I know not everyone does this, but I have noticed on Mastodon it's common for people to put specific profile engagement guidelines, so that's what I did when I got flagged.

Aaron Wolf Sun 7 May 2023 8:01PM

@juniper cameryn I'm not against muting and blocking as useful tools for people choosing what they want to see. I'm against using them to skip conflict resolution/facilitation, and against using them in a way that puts the onus on individuals to each take our own actions to avoid harmful posts that actually should be blocked for everyone. I think mutes especially (and blocks less so, but sometimes) are appropriate for simply curating your own experience, though I'd wish for people to err toward diverse perspectives vs. building echo chambers.