I want to clarify, for the record, that although I disagree with most members of the EA community on whether we should accelerate or slow down AI development, I still consider myself an effective altruist in the senses that matter. This is because I continue to value and support most EA principles, such as using evidence and reason to improve the world, prioritizing issues based on their scope, not discriminating against foreigners, and antispeciesism.
I think it’s unfortunate that disagreements about AI acceleration often trigger such strong backlash within the community. It appears that advocating for slowing AI development has become a “sacred” value that unites much of the community more strongly than other EA values do. Despite hinging on many uncertain and IMO questionable empirical assumptions, the idea that we should decelerate AI development is now sometimes treated as central to the EA identity in many (albeit not all) EA circles.
As a little bit of evidence for this, I have been publicly labeled a “sellout and traitor” on X by a prominent member of the EA community simply because I cofounded an AI startup. This is hardly an appropriate reaction to what I perceive as a measured, academic disagreement occurring within the context of mainstream cultural debates. Such reactions frankly resemble the behavior of a cult, rather than an evidence-based movement—something I personally did not observe nearly as much in the EA community ten years ago.
Some takes:

I think Holly’s tweet was pretty unreasonable, and I judge her for that, not you. But I also disagree with a lot of other things she says and do not at all consider her to speak for the movement.
To the best of my ability to tell (both from your comments and private conversations with others), you and the other Mechanize founders are not getting undue benefit from Epoch funders apart from less tangible things like skills, reputation, etc. I totally agree with your comment below that this does not seem a betrayal of their trust. To me, it seems more a mutually beneficial trade between parties with different but somewhat overlapping values, and I am pro EA as a community being able to make such trades.
AI is a very complex, uncertain, and important space. This means reasonable people will disagree on the best actions AND that certain actions will look great under some worldviews and pretty harmful under others.
As such, assuming you are sincere about the beliefs you’ve expressed re why to found Mechanize, I have no issue with calling yourself an Effective Altruist—it’s about evidence-based ways to do the most good, not about doing good my way.
Separately:
Under my model of the world, Mechanize seems pretty harmful in a variety of ways, in expectation
I think it’s reasonable for people who object to your work to push back against it and publicly criticise it (though agree that much of the actual criticism has been pretty unreasonable)
The EA community implicitly gives help and resources to other people in it. If most people in the community think that what you’re doing is net harmful even if you’re doing it with good intentions, I think it’s pretty reasonable to not want to give you any of that implicit support?
I was going to write a comment responding but Neel basically did it for me.
The only thing I would object to is Holly being a “prominent member of the EA community”. The PauseAI/StopAI people are often treated as fringe in the EA community, and she frequently violates norms of discourse. EAs, due to their norms of discourse, usually just don’t respond to her in the way she responds to others.
Just off the top of my head: Holly was a community builder at Harvard EA, wrote what is arguably one of the most influential forum posts ever, and took sincere career and personal decisions based on EA principles (first, wild animal welfare, and now, “making AI go well”). Besides that, there are several EAGs and community events and conversations and activities that I don’t know about, but all in all, she has deeply engaged with EA and has been a thought leader of sorts for a while now. I think it is completely fair to call her a prominent member of the EA community.[1]
I am unsure if Holly would like the term “member” because she has stated that she is happy to burn bridges with EA / funders, so maybe “person who has historically been strongly influenced by and has been an active member of EA” would be the most accurate but verbose phrasing.
I think there’s some speaking past each other due to differing word choices. Holly is prominent, evidenced by the fact that we are currently discussing her. She has been part of the EA community for a long time and appears to be trying to do the most good according to her own principles. So it’s reasonable to call her a member of the EA community. And therefore “prominent member” is accurate in some sense.
However, “prominent member” can also imply that she represents the movement, is endorsed by it, or that her actions should influence what EA as a whole is perceived to believe. I believe this is the sense that Marcus and Matthew are using it, and I disagree that she fits this definition. She does not speak for me in any way. While I believe she has good intentions, I’m uncertain about the impact of her work and strongly disagree with many of her online statements and the discourse norms she has chosen to adopt, and think these go against EA norms (and would guess they are also negative for her stated goals, but am less sure on this one).
“Prominence” isn’t static.

My impression is that Holly has intentionally sacrificed a significant amount of influence within EA because she feels that EA is too constraining in terms of what needs to be done to save humanity from AI.
So that term would have been much more accurate in the past.
Right, but most of this is her “pre-AI” stuff, and I am saying that I don’t think “Pause AI” is very mainstream by EA standards; in particular, the very inflammatory nature of the activism and the policy prescriptions are definitely not majority positions. It is in that sense that I object to Matthew calling her prominent, since by the standard you are suggesting, Matthew is also prominent. He’s been in the movement for a decade, has written a lot of extremely influential posts, was a well-known part of Epoch for a long time, and also wrote one of the most prescient posts ever.
I don’t dispute that Holly has been an active and motivated member of the EA community for a while.

Not sure how relevant this is given I think she disapproves of them. (I agree they are so fringe as to be basically outside it).
Can you be a bit more specific about what it means for the EA community to deny Matthew (and Mechanize) implicit support, and which ways of doing this you would find reasonable vs. unreasonable?
Matthew’s comment was on −1 just now. I’d like to encourage people not to vote his post into the negative. Even though I don’t find his defense at all persuasive, I still think it deserves to be heard.
What I perceive as a measured, academic disagreement
This isn’t merely an “academic disagreement” anymore. You aren’t just writing posts, you’ve actually created a startup. You’re doing things in the space.
As an example, it’s neither incoherent nor hypocritical to let philosophers argue “Maybe existence is negative, all things considered” whilst still cracking down on serial killers. The former is necessary for academic freedom, the latter is not.
The point of academic freedom is to ensure that the actions we take in the world are as well-informed as possible. It is not to create a world without any norms at all.
It appears that advocating for slowing AI development has become a “sacred” value… Such reactions frankly resemble the behavior of a cult
Honestly, this is such a lazy critique. Whenever anyone disagrees with a group, they can always dismiss them as a “cult” or “cult-adjacent”, but this doesn’t make it true.
I think Ozzie’s framing of cooperativeness is much more accurate. The unilateralist’s curse very much applies to differential technology development, so if the community wants to have an impact here, it can’t ignore the issue of “cowboys” messing things up by rowing in the opposite direction, especially when their reasoning seems poor. Any viable community, especially one attempting to drive change, needs to have a solution to this problem.
Having norms isn’t equivalent to being a cult. When Fair Trade started taking off, I shared some of my doubts with some people who were very committed to it. This went poorly. They weren’t open-minded at all, but I wouldn’t run around calling Fair Trade a cult or even cult adjacent. They were just… a regular group.
And if I had run around accusing them of essentially being a “cult” that would have reflected poorly on me rather than on them.
As I described in my previous comment, the issue is more subtle than this. It’s about the specific context:

I have been publicly labeled a “sellout and traitor”… simply because I cofounded an AI startup
This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for folks to be able to share information freely with folks at such forecasting orgs without having to worry about them going off and doing something like this.
However, this only works if those less worried about AI risks who join such a collaboration don’t use the knowledge they gain to cash in on the AI boom in an acceleratory way. Doing so undermines the very point of such a project, namely, to try to make AI go well. Doing so is incredibly damaging to trust within the community.
I concede that there wasn’t a previous well-defined norm against this, but norms have to get started somehow. And this is how it happens: someone does something, people are like “wtf”, and then, sometimes, a consensus forms that a norm is required.
Thanks for writing on the forum here—I think it’s brave of you to comment where there will obviously be lots of pushback. I’ve got a question relating to the new company and EA alignment. You may well have answered this somewhere else; if that’s the case, please point me in that direction. I’m a Global Health guy mostly, so am not super deep in AI understanding, so this question may be naive.
Question: If we frame EA along the (great new website) lines of “Find the best ways to help others”, how are you, through your new startup, doing this? Is it for the purpose of earning to give money away? Or do you think the direct work the startup will do has a high EV for doing lots of good? Feel free to define EA along different lines if you like!
I have been publicly labeled a “sellout and traitor” on X by a prominent member of the EA community simply because I cofounded an AI startup.
This accusation was not because you cofounded an AI startup. It was specifically because you took funding to work on AI safety from people who want to use capability trends to better understand how to make AI safer*, and you are now (allegedly) using results developed from that funding to start a company dedicated to accelerating AI capabilities.
I don’t know exactly what results Mechanize is using, but if this is true, then it does indeed constitute a betrayal. Not because you’re accelerating capabilities, but because you took AI safety funding and used the results to do the opposite of what funders wanted.
*Corrected to give a more accurate characterization; the original wording was “from people who want to slow down AI development”. See Chris Leong’s comment.
If this line of reasoning is truly the basis for calling me a “sellout” and a “traitor”, then I think the accusation becomes even more unfounded and misguided. The claim is not only unreasonable: it is also factually incorrect by any straightforward or good-faith interpretation of the facts.
To be absolutely clear: I have never taken funds that were earmarked for slowing down AI development and redirected them toward accelerating AI capabilities. There has been no repurposing or misuse of philanthropic funding that I am aware of. The startup in question is an entirely new and independent entity. It was created from scratch, and it is funded separately—it is not backed by any of the philanthropic donations I received in the past. There is no financial or operational overlap.
Furthermore, we do not plan on meaningfully making use of benchmarks, datasets, or tools that were developed during my previous roles in any substantial capacity at the new startup. We are not relying on that prior work to advance our current mission. And as far as I can tell, we have never claimed or implied otherwise publicly.
It’s also important to address the deeper assumption here: that I am somehow morally or legally obligated to permanently align my actions with the preferences or ideological views of past philanthropic funders who supported an organization that employed me. That notion seems absurd. It has no basis in ordinary social norms, legal standards, or moral expectations. People routinely change roles, perspectives evolve, and institutions have limited scopes and timelines. Holding someone to an indefinite obligation based solely on past philanthropic support would be unreasonable.
Even if, for the sake of argument, such an obligation did exist, it would still not apply in this case—because, unless I am mistaken, the philanthropic grant that supported me as an employee never included any stipulation about slowing down AI in the first place. As far as I know, that goal was never made explicit in the grant terms, which renders the current accusations irrelevant and unfounded.
Ultimately, these criticisms appear unsupported by evidence, logic, or any widely accepted ethical standards. They seem more consistent with a kind of ideological or tribal backlash to the idea of accelerating AI than with genuine, thoughtful, and evidence-based concerns.
It’s also important to address the deeper assumption here: that I am somehow morally or legally obligated to permanently align my actions with the preferences or ideological views of past philanthropic funders who supported an organization that employed me. That notion seems absurd. It has no basis in ordinary social norms, legal standards, or moral expectations. People routinely change roles, perspectives evolve, and institutions have limited scopes and timelines. Holding someone to an indefinite obligation based solely on past philanthropic support would be unreasonable.
I don’t think a lifetime obligation is the steelmanned version of your critics’ narrative, though. A time-limited version will work just as well for them.
In many circumstances, I do think society recognizes a time-limited moral obligation and social norm not to work for the other side from those providing you significant resources,[1] although I am not convinced it would in the specific circumstances involving you and Epoch. So although I would probably acquit you of the alleged norm violation here, I would not want others drawing larger conclusions about the obligation/norm from that acquittal than warranted.[2]
There is something else here, though. At least in the government sector, time-limited post-employment restrictions are not uncommon. They are intended to avoid the appearance of impropriety as much as actual impropriety itself. In those cases, we don’t trust the departing employee not to use their prior public service for private gain in certain ways. Moreover, we recognize that even the appearance that they are doing so creates social costs. The AIS community generally can’t establish and enforce legally binding post-employment restrictions, but is of course free to criticize people whose post-employment conduct it finds inappropriate under community standards. (“Traitor” is rather poorly calibrated to those circumstances, but most of the on-Forum criticism has been somewhat more measured than that.)
Although I’d defer to people with subject-matter expertise on whether there is an appearance of impropriety here,[3] I would note that this is a significantly lower standard for your critics to satisfy than proving actual impropriety. If there’s a close enough fit between your prior employment and new enterprise, that could be enough to establish a rebuttable presumption of an appearance.
For instance, I would consider it shady for a new lawyer to accept a competitive job with Treehuggers (made up organization); gain skill, reputation, and career capital for several years through Treehuggers’ investment of money and mentorship resources; and then use said skill and reputation to jump directly to a position at Big Timber with a big financial upside. I would generally consider anyone who did that as something of . . . well, a traitor and a sellout to Treehuggers and the environmental movement.
This should also not be seen as endorsing your specific defense rationale. For instance, I don’t think an explicit “stipulation about slowing down AI” in grant language would be necessary to create an obligation.
My deference extends to deciding what impropriety means here, but “meaningfully making use of benchmarks, datasets, or tools that were developed during [your] previous roles” in a way that was substantially assisted by your previous roles sounds like a plausible first draft of at least one form of impropriety.
At least in the government sector, time-limited post-employment restrictions are not uncommon. They are intended to avoid the appearance of impropriety as much as actual impropriety itself. In those cases, we don’t trust the departing employee not to use their prior public service for private gain in certain ways.

My argument for this being bad is quite similar to what you’ve written.
This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for folks to be able to share information freely with folks at such forecasting orgs without having to worry about them going off and doing something like this.
However, this only works if those less worried about AI risks who join such a collaboration don’t use the knowledge they gain to cash in on the AI boom in an acceleratory way. Doing so undermines the very point of such a project, namely, to try to make AI go well. Doing so is incredibly damaging to trust within the community.
I agree that Michael’s framing doesn’t quite work. It’s not even clear to me that OpenPhil, for example, is aiming to “slow down AI development” as opposed to “fund research into understanding AI capability trends better without accidentally causing capability externalities”.
I’ve previously written a critique here, but the TLDR is that Mechanise is a major burning of the commons that damages trust within the Effective Altruism community and creates a major challenge for funders who want to support ideological diversity in forecasting organisations without accidentally causing capability externalities.
Furthermore, we do not plan on meaningfully making use of benchmarks, datasets, or tools that were developed during my previous roles in any substantial capacity at the new startup. We are not relying on that prior work to advance our current mission. And as far as I can tell, we have never claimed or implied otherwise publicly.
This is a useful clarification. I had a weak impression that Mechanise might be relying on such prior work.
They seem more consistent with a kind of ideological or tribal backlash to the idea of accelerating AI than with genuine, thoughtful, and evidence-based concerns.
I agree that some of your critics may not have quite been able to hit the nail on the head when they tried to articulate their critiques (it took me substantial effort to figure out what I precisely thought was wrong, as opposed to just ‘this feels bad’), but I believe that the general thrust of their arguments more or less holds up.
In context, this comes across to me as an overly charitable characterization of what actually occurred: someone publicly labeled me a literal traitor and then made a baseless, false accusation against me. What’s even more concerning is that this unfounded claim is now apparently being repeated and upvoted by others.
When communities choose to excuse or downplay this kind of behavior—by interpreting it in the most charitable possible way, or by glossing over it as being “essentially correct”—they end up legitimizing what is, in fact, a low-effort personal attack without a factual basis. Brushing aside or downplaying such attacks as if they are somehow valid or acceptable doesn’t just misrepresent the situation; it actively undermines the conditions necessary for good faith engagement and genuine truth-seeking.
I urge you to recognize that tolerating or rationalizing this type of behavior has real social consequences. It fosters a hostile environment, discourages honest dialogue, and ultimately corrodes the integrity of any community that claims to value fairness and reasoned discussion.
I think Holly just said what a lot of people were feeling and I find that hard to condemn.
“Traitor” is a bit of a strong term, but it’s pretty natural for burning the commons to result in significantly less trust. To be honest, the main reason why I wouldn’t use that term myself is that it reifies individual actions into a permanent personal characteristic, and I don’t have the context to make any such judgments. I’d be quite comfortable with saying that founding Mechanise was a betrayal of sorts, where the “of sorts” clarifies that I’m construing the term broadly.
Glossing over it as being “essentially correct”
This characterisation doesn’t quite match what happened. My comment wasn’t along the lines, “Oh, it’s essentially correct, close enough is good enough, details are unimportant”, but I actually wrote down what I thought a more careful analysis would look like.
They end up legitimizing what is, in fact, a low-effort personal attack without a factual basis
Part of the reason why I’ve been commenting is to encourage folks to make more precise critiques. And indeed, Michael has updated his previous comment in response to what I wrote.
A baseless, false accusation
Is it baseless?
I noticed you wrote: “we do not plan on meaningfully making use”. That provides you with substantial wriggle room. So it’s unclear to me at this stage that your statements being true/defensible would necessitate her statements being false.
Yes, absolutely. With respect, unless you can provide some evidence indicating that I’ve acted improperly, I see no productive reason to continue engaging on this point.
What concerns me most here is that the accusation seems to be treated as credible despite no evidence being presented and a clear denial from me. That pattern—assuming accusations about individuals who criticize or act against core dogmas are true without evidence—is precisely the kind of cult-like behavior I referenced in my original comment.
Suggesting that I’ve left myself “substantial wiggle room” misinterprets what I intended, and given the lack of supporting evidence, it feels unfair and unnecessarily adversarial. Repeatedly implying that I’ve acted improperly without concrete substantiation does not reflect a good-faith approach to discussion.
If you don’t want to engage, that’s perfectly fine. I’ve written a lot of comments and responding to all of them would take substantial time. It wouldn’t be fair to expect that from you.
That said, labelling a request for clarification “cult-like behaviour” is absurd. On the contrary, not naively taking claims at face value is a crucial defence against this. Furthermore, implying that someone asking questions is doing so in bad faith is precisely the technique that cult leaders use[1].
I said that the statement left you substantial wiggle room. This was purely a comment about how the statement could have a broad range of interpretations. I did not state, nor mean to imply, that this vagueness was intentional or in bad faith.
That said, people asking questions in bad faith is actually pretty common and so you can’t assume that something is a cult just because they say that their critics are mostly acting in bad faith.
To be clear, I was not calling your request for clarification “cult-like”. My comment was directed at how the accusation against me was seemingly handled—as though it were credible until I could somehow prove otherwise. No evidence was offered to support the claim. Instead, assertions were made without substantiation. I directly and clearly denied the accusations, but despite that, the line of questioning continued in a way that strongly suggested the accusation might still be valid.
To illustrate the issue more clearly: imagine if I were to accuse you of something completely baseless, and even after your firm denials, I continued to press you with questions that implicitly treated the accusation as credible. You would likely find that approach deeply frustrating and unfair, and understandably so. You’d be entirely justified in pushing back against it.
That said, I acknowledge that describing the behavior as “cult-like” may have generated more heat than light. It likely escalated the tone unnecessarily, and I’ll be more careful to avoid that kind of rhetoric going forward.
I can see why you’d find this personally frustrating.
On the other hand, many people in the community, myself included, took certain claims from OpenAI and SBF at face value when it might have been more prudent to be less trusting. I understand that it must be unpleasant to face some degree of distrust due to the actions of others.
And I can see why you’d see your statements as a firm denial, whilst from my perspective, they were ambiguous. For example, I don’t know how to interpret your use of the word “meaningful”, so I don’t actually know what exactly you’ve denied. It may be clear to you because you know what you mean, but it isn’t clear to me.
(For what it’s worth, I neither upvoted nor downvoted the comment you made before this one, but I did disagree vote it.)

Holly herself believes standards of criticism should be higher than what (judging by the comments here without being familiar with the overall situation) she seems to have employed here; see “Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent”.
“From people who want to slow down AI development”
The framing here could be tighter. It’s more about wanting to be able to understand AI capability trends better without accidentally causing capability externalities.
Yes I think that is better than what I said, both because it’s more accurate, and because it’s more clear that Matthew did in fact use his knowledge of capability trends to decide that he could profit from starting an AI company.
Like, I don’t know what exactly went into his decision, but I would be surprised if that knowledge didn’t play a role.
Arguably that’s less on Matthew and more on the founders of Epoch for either misrepresenting themselves or having a bad hiring filter. Probably the former—if I’m not mistaken, Tamay Besiroglu co-founded Epoch and is now co-founding Mechanize, so I would say Tamay behaved badly here but I’m not sure whether Matthew did.
Quick thoughts:

1. I think I want to see more dialogue here. I don’t personally like the thought of the Mechanize team and EA splitting apart (at least, more than is already the case). I’d naively expect that there might still be a fair bit of wiggle room for the Mechanize team to do better or worse things in the world, and I’d of course hope for the better side of that. (I think the situation is still very early, for instance.)

2. I find it really difficult to adjudicate on the morality and specifics of the Mechanize spinoff. I don’t know as much about the details as others do. It really isn’t clear to me what the previous funders of Epoch believed or what the conditions of the donations were. I think those details matter in trying to judge the situation.

3. The person you mentioned, Holly Elmore, is really the first and one of the loudest to get upset about many things of this sort of shape. I think Holly disagrees with much of the EA scene, but in the opposite way from you/Matthew. I personally think Holly goes a fair bit too far much of the time. That said, I know there were others who were upset about this who I think better represent the main EA crowd.

4. “the idea that we should decelerate AI development is now sometimes treated as central to the EA identity in many (albeit not all) EA circles.” The way I see it is more that it’s somewhat a matter of cooperativeness between EA organizations. There are a bunch of smart people and organizations working hard to slow down generic AI development. Out of all the things one could do, there are many useful things to work on other than [directly speeding up AI development]. This is akin to how it would be pretty awkward if there were a group that calls themselves EA that tries to fight global population growth by making advertisements attacking GiveWell—it might be the case that they feel like they have good reasons for this, but it makes sense to me why some EAs might not be very thrilled. Relatedly, I’ve seen some arguments for longer timelines that make sense to me, but I don’t feel like I’ve seen many arguments in favor of speeding up AI timelines that make sense to me.
In the case at hand, Matthew would have had to at some point represent himself as supporting slowing down or stopping AI progress. For at least the past 2.5 years, he has been arguing against doing that in extreme depth on the public internet. So I don’t really see how you can interpret him starting a company that aims to speed up AI as inconsistent with his publicly stated views, which seems like a necessary condition for him to be a “traitor”. If Matthew had previously claimed to be a pause AI guy, then I think it would be more reasonable for other adherents of that view to call him a “traitor.” I don’t think that’s raising the definitional bar so high that no one will ever meet it—it seems like a very basic standard.
I have no idea how to interpret “sellout” in this context, as I have mostly heard that term used for such situations as rappers making washing machine commercials. Insofar as I am familiar with that word, it seems obviously inapplicable.
I’m obviously not Matthew, but the OED defines them like so:
sell-out: “a betrayal of one’s principles for reasons of expedience”
traitor: “a person who betrays [be gravely disloyal to] someone or something, such as a friend, cause, or principle”
Unless he is lying about what he believes—which seems unlikely—Matthew is not a sell-out, because according to him Mechanize is good or at minimum not bad for the world on his worldview. Hence, he is not betraying his own principles.
As for being a traitor, I guess the first question is, traitor of what? To EA principles? To the AI safety cause? To the EA or AI safety community? In order:
I don’t think Matthew is gravely disloyal to EA principles, as he explicitly says he endorses them and has explained how his decisions make sense on his worldview
I don’t think Matthew is gravely disloyal to the AI safety cause, as he’s been openly critical of many common AI doom arguments for some time, and you can’t be disloyal to a cause you never really bought into in the first place
Whether Matthew is gravely disloyal to the EA or AI safety communities feels less obvious to me. I’m guessing a bunch of people saw Epoch as an AI safety organisation, and by extension its employees as members of the AI safety community, even if the org and its employees did not necessarily see themselves that way, and felt betrayed for that reason. But it still feels off to me to call Matthew a traitor to the EA or AI safety communities, especially given that he’s been critical of common AI doom arguments. This feels more like a difference over empirical beliefs than a difference over fundamental values, and it seems wrong to me to call someone gravely disloyal to a community for drawing unorthodox but reasonable empirical conclusions and acting on those, while broadly having similar values. Like, I think people should be allowed to draw conclusions (or even change their minds) based on evidence—and act on those conclusions—without it being betrayal, assuming they broadly share the core EA values, and assuming they’re being thoughtful about it.
(Of course, it’s still possible that Mechanize is a net-negative for the world, even if Matthew personally is not a sell-out or a traitor or any other such thing.)
Yes, I understand the arguments against it applying here. My question is whether the threshold is being set at a sufficiently high level that it basically never applies to anyone. Hence why I was looking for examples which would qualify.
Sellout (in the context of Epoch) would apply to someone e.g. concealing data or refraining from publishing a report in exchange for a proposed job in an existing AI company.
As for traitor, I think the only group here that can be betrayed is humanity as a whole, so as long as one believes they’re doing something good for humanity I don’t think it’d ever apply.
As for traitor, I think the only group here that can be betrayed is humanity as a whole, so as long as one believes they’re doing something good for humanity I don’t think it’d ever apply.
Hmm, that seems off to me? Unless you mean “severe disloyalty to some group isn’t Ultimately Bad, even though it can be instrumentally bad”. But to me it seems useful to have a concept of group betrayal, and to consider doing so to be generally bad, since I think group loyalty is often a useful norm that’s good for humanity as a whole.
Specifically, I think group-specific trust networks are instrumentally useful for cooperating to increase human welfare. For example, scientific research can’t be carried out effectively without some amount of trust among researchers, and between researchers and the public, etc. And you need some boundary for these groups that’s much smaller than all humanity to enable repeated interaction, mutual monitoring, and norm enforcement. When someone is severely disloyal to one of those groups they belong to, they undermine the mutual trust that enables future cooperation, which I’d guess is ultimately often bad for the world, since humanity as a whole depends for its welfare on countless such specialised (and overlapping) communities cooperating internally.
It’s not that I’m ignoring group loyalty, just that the word “traitor” seems so strong to me that I don’t think there’s any smaller group here that’s owed that much trust. I could imagine a close friend calling me that, but not a colleague. I could imagine a researcher saying I “betrayed” them if I steal and publish their results as my own after they consulted me, but that’s a much weaker word.
[Context: I come from a country where you’re labeled a traitor for having my anti-war political views, and I don’t feel such usage of this word has done much good for society here...]
Edit: I think that Neel’s comment is basically just a better version of the stuff I was trying to say. (On the object level I’m a little more sympathetic than him to ways in which Mechanize might be good, although I don’t really buy the story to that end that I’ve seen you present.)
Wanting to note that on my impressions, and setting aside who is correct on the object-level question of whether Mechanize’s work is good for the world:
My best read of the situation is that Matthew has acted very reasonably (according to his beliefs), and that Holly has let herself down a bit
I believe that Holly honestly feels that Matthew is a sellout and a traitor; however, I don’t think that this is substantiated by reasonable readings of the facts, and I think this is the kind of accusation which it is socially corrosive to make publicly based on feelings
On handling object-level disagreements about what’s crucial to do in the world …
I think that EA-writ-large should be endorsing methodology more than conclusions
Inevitably we will have cases where people have strong earnest beliefs about what’s good to do that point in conflicting directions
I think that we need to support people in assessing the state of evidence and then acting on their own beliefs (hegemony of majority opinion seems kinda terrible)
Of course people should be encouraged to beware unilateralism, but I don’t think that can extend to “never do things other people think are actively destructive”
It’s important to me that EA has space for earnest disagreements
I therefore think that we should have something like “civilized society” norms, which constrain actions
Especially (but not only!) those which would be harmful to the ability for the group to have high-quality discourse
cf. SBF’s actions, which I think were indefensible even if he earnestly believed them to be the best thing
I want to clarify, for the record, that although I disagree with most members of the EA community on whether we should accelerate or slow down AI development, I still consider myself an effective altruist in the senses that matter. This is because I continue to value and support most EA principles, such as using evidence and reason to improve the world, prioritizing issues based on their scope, not discriminating against foreigners, and antispeciesism.
I think it’s unfortunate that disagreements about AI acceleration often trigger such strong backlash within the community. It appears that advocating for slowing AI development has become a “sacred” value that unites much of the community more strongly than other EA values do. Despite hinging on many uncertain and IMO questionable empirical assumptions, the idea that we should decelerate AI development is now sometimes treated as central to the EA identity in many (albeit not all) EA circles.
As a little bit of evidence for this, I have been publicly labeled a “sellout and traitor” on X by a prominent member of the EA community simply because I cofounded an AI startup. This is hardly an appropriate reaction to what I perceive as a measured, academic disagreement occurring within the context of mainstream cultural debates. Such reactions frankly resemble the behavior of a cult, rather than an evidence-based movement—something I personally did not observe nearly as much in the EA community ten years ago.
Some takes:
I think Holly’s tweet was pretty unreasonable and judge her for that not you. But I also disagree with a lot of other things she says and do not at all consider her to speak for the movement
To the best of my ability to tell (both from your comments and private conversations with others), you and the other Mechanize founders are not getting undue benefit from Epoch funders apart from less tangible things like skills, reputation, etc. I totally agree with your comment below that this does not seem a betrayal of their trust. To me, it seems more a mutually beneficial trade between parties with different but somewhat overlapping values, and I am pro EA as a community being able to make such trades.
AI is a very complex uncertain and important space. This means reasonable people will disagree on the best actions AND that certain actions will look great under some worldviews and pretty harmful under others
As such, assuming you are sincere about the beliefs you’ve expressed re why to found Mechanize, I have no issue with calling yourself an Effective Altruist—it’s about evidence based ways to do the most good, not about doing good my way
Separately:
Under my model of the world, Mechanize seems pretty harmful in a variety of ways, in expectation
I think it’s reasonable for people who object to your work to push back against it and publicly criticise it (though agree that much of the actual criticism has been pretty unreasonable)
The EA community implicitly gives help and resources to other people in it. If most people in the community think that what you’re doing is net harmful even if you’re doing it with good intentions, I think it’s pretty reasonable to not want to give you any of that implicit support?
I was going to write a comment responding but Neel basically did it for me.
The only thing I would object to is Holly being a “prominent member of the EA community”. The PauseAI/StopAI people are often treated as fringe in the EA community and the she frequently violates norms of discourse. EAs due to their norms of discourse, usually just don’t respond to her in the way she responds to others..
Just off the top of my head: Holly was a community builder at Harvard EA, wrote what is arguably one of the most influential forum posts ever, and took sincere career and personal decisions based on EA principles (first, wild animal welfare, and now, “making AI go well”). Besides that, there are several EAGs and community events and conversations and activities that I don’t know about, but all in all, she has deeply engaged with EA and has been a thought leader of sorts for a while now. I think it is completely fair to call her a prominent member of the EA community.[1]
I am unsure if Holly would like the term “member” because she has stated that she is happy to burn bridges with EA / funders, so maybe “person who has historically been strongly influenced by and has been an active member of EA” would be the most accurate but verbose phrasing.
I think there’s some speaking past each other due to differing word choices. Holly is prominent, evidenced by the fact that we are currently discussing her. She has been part of the EA community for a long time and appears to be trying to do the most good according to her own principles. So it’s reasonable to call her a member of the EA community. And therefore “prominent member” is accurate in some sense.
However, “prominent member” can also imply that she represents the movement, is endorsed by it, or that her actions should influence what EA as a whole is perceived to believe. I believe this is the sense that Marcus and Matthew are using it, and I disagree that she fits this definition. She does not speak for me in any way. While I believe she has good intentions, I’m uncertain about the impact of her work and strongly disagree with many of her online statements and the discourse norms she has chosen to adopt, and think these go against EA norms (and would guess they are also negative for her stated goals, but am less sure on this one).
“Prominence” isn’t static.
My impression is that Holly has intentionally sacrificed a significant amount of influence within EA because she feels that EA is too constraining in terms of what needs to be done to save humanity from AI.
So that term would have been much more accurate in the past.
Right but most of this is her “pre-AI” stuff and I am saying that I don’t think “Pause AI” is very mainstream by EA standards, particularly the very inflammatory nature of the activism and the policy prescriptions are definitely not in the majority. It is in that sense that I object to Matthew calling her prominent since by the standard you are suggesting, Matthew is also prominent. He’s been in the movement for a decade and written a lot of extremely influential posts and was a well known part of Epoch for a long time and also wrote one of the most prescient posts ever.
I don’t dispute that Holly has been an active and motivated member of the EA community for a while
Not sure how relevant this is given I think she disapproves of them. (I agree they are so fringe as to be basically outside it).
Can you be a bit more specific about what it means for the EA community to deny Matthew (and Mechanize) implicit support, and which ways of doing this you would find reasonable vs. unreasonable?
Matthew’s comment was on −1 just now. I’d like to encourage people not to vote his post into the negative. Even though I don’t find his defense at all persuasive, I still think it deserves to be heard.
This isn’t merely an “academic disagreement” anymore. You aren’t just writing posts, you’ve actually created a startup. You’re doing things in the space.
As an example, it’s neither incoherent nor hypocritical to let philosophers argue “Maybe existence is negative, all things considered” whilst still cracking down on serial killers. The former is necessary for academic freedom, the latter is not.
The point of academic freedom is to ensure that the actions we take in the world are as well-informed as possible. It is not to create a world without any norms at all.
Honestly, this is such a lazy critique. Whenever anyone disagrees with a group, they can always dismiss them as a “cult” or “cult-adjacent”, but this doesn’t make it true.
I think Ozzie’s framing of cooperativeness is much more accurate. The unilateralist’s curse very much applies to differential technology development, so if the community wants to have an impact here, it can’t ignore the issue of “cowboys” messing things up by rowing in the opposite direction, especially when their reasoning seems poor. Any viable community, especially one attempting to drive change, needs to have a solution to this problem.
Having norms isn’t equivalent to being a cult. When Fair Trade started taking off, I shared some of my doubts with some people who were very committed to it. This went poorly. They weren’t open-minded at all, but I wouldn’t run around calling Fair Trade a cult or even cult adjacent. They were just… a regular group.
And if I had run around accusing them of essentially being a “cult” that would have reflected poorly on me rather than on them.
As I described in my previous comment, the issue is more subtle than this. It’s about the specific context:
I concede that there wasn’t a previous well-defined norm against this, but norms have to get started somehow. And this is how it happens, someone does something, people are like wtf and then, sometimes, a consensus forms that a norm is required.
Thanks for writing on the forum here—I think its brave of you to comment where there will obviously be lots of pushback. I’ve got a question relating to the new company and EA assignment. You may well have answered this somewhere else, if that’s the case please point me in that direction. I’m a Global Health guy mostly, so am not super deep in AI understanding, so this question may be Naive.
Question: If we frame EA along the (great new website) lines of “Find the best ways to help others”, how are you, through your new startup doing this? Is the for the purpose of earning to Give money away? Or do you think the direct work the startup will do has a high EV for doing lots of good? Feel free to define EA along different lines if you like!
This accusation was not because you cofounded an AI startup. It was specifically because you took funding to work on AI safety from people who want to
slow down AI developmentuse capability trends to better understand how to make AI safer*, and you are now (allegedly) using results developed from that funding to start a company dedicated to accelerating AI capabilities.I don’t know exactly what results Mechanize is using, but if this is true, then it does indeed constitute a betrayal. Not because you’re accelerating capabilities, but because you took AI safety funding and used the results to do the opposite of what funders wanted.
*Corrected to give a more accurate characterization, see Chris Leong’s comment
If this line of reasoning is truly the basis for calling me a “sellout” and a “traitor”, then I think the accusation becomes even more unfounded and misguided. The claim is not only unreasonable: it is also factually incorrect by any straightforward or good-faith interpretation of the facts.
To be absolutely clear: I have never taken funds that were earmarked for slowing down AI development and redirected them toward accelerating AI capabilities. There has been no repurposing or misuse of philanthropic funding that I am aware of. The startup in question is an entirely new and independent entity. It was created from scratch, and it is funded separately—it is not backed by any of the philanthropic donations I received in the past. There is no financial or operational overlap.
Furthermore, we do not plan on meaningfully making use of benchmarks, datasets, or tools that were developed during my previous roles in any substantial capacity at the new startup. We are not relying on that prior work to advance our current mission. And as far as I can tell, we have never claimed or implied otherwise publicly.
It’s also important to address the deeper assumption here: that I am somehow morally or legally obligated to permanently align my actions with the preferences or ideological views of past philanthropic funders who supported an organization that employed me. That notion seems absurd. It has no basis in ordinary social norms, legal standards, or moral expectations. People routinely change roles, perspectives evolve, and institutions have limited scopes and timelines. Holding someone to an indefinite obligation based solely on past philanthropic support would be unreasonable.
Even if, for the sake of argument, such an obligation did exist, it would still not apply in this case—because, unless I am mistaken, the philanthropic grant that supported me as an employee never included any stipulation about slowing down AI in the first place. As far as I know, that goal was never made explicit in the grant terms, which renders the current accusations irrelevant and unfounded.
Ultimately, these criticisms appear unsupported by evidence, logic, or any widely accepted ethical standards. They seem more consistent with a kind of ideological or tribal backlash to the idea of accelerating AI than with genuine, thoughtful, and evidence-based concerns.
I don’t think a lifetime obligation is the steelmanned version of your critics’ narrative, though. A time-limited version will work just as well for them.
In many circumstances, I do think society does recognize a time-limited moral obligation and social norm not to work for the other side from those providing you significant resources,[1] --although I am not convinced it would in the specific circumstances involving you and Epoch. So although I would probably acquit you of the alleged norm violation here, I would not want others drawing larger conclusions about the obligation / norm from that acquittal than warranted.[2]
There is something else here, though. At least in the government sector, time-limited post-employment restrictions are not uncommon. They are intended to avoid the appearance of impropriety as much as actual impropriety itself. In those cases, we don’t trust the departing employee not to use their prior public service for private gain in certain ways. Moreover, we recognize that even the appearance that they are doing so creates social costs. The AIS community generally can’t establish and enforce legally binding post-employment restrictions, but is of course free to criticize people whose post-employment conduct it finds inappropriate under community standards. (“Traitor” is rather poorly calibrated to those circumstances, but most of the on-Forum criticism has been somewhat more measured than that.)
Although I’d defer to people with subject-matter expertise on whether there is an appearance of impropriety here, [3] I would note that is a significant lower standard for your critics to satisfy than proving actual impropriety. If there’s a close enough fit between your prior employment and new enterprise, that could be enough to establish a rebuttable presumption of an appearance.
For instance, I would consider it shady for a new lawyer to accept a competitive job with Treehuggers (made up organization); gain skill, reputation, and career capital for several years through Treehuggers’ investment of money and mentorship resources; and then use said skill and reputation to jump directly to a position at Big Timber with a big financial upside. I would generally consider anyone who did that as something of . . . well, a traitor and a sellout to Treehuggers and the environmental movement.
This should also not be seen as endorsing your specific defense rationale. For instance, I don’t think an explicit “stipulation about slowing down AI” in grant language would be necessary to create an obligation.
My deference extends to deciding what impropriety means here, but “meaningfully making use of benchmarks, datasets, or tools that were developed during [your] previous roles” in a way that was substantially assisted by your previous roles sounds like a plausible first draft of at least one form of impropriety.
My argument for this being bad is quite similar to what you’ve written.
I agree that Michael’s framing doesn’t quite work. It’s not even clear to me that OpenPhil, for example, is aiming to “slow down AI development” as opposed to “fund research into understanding AI capability trends better without accidentally causing capability externalities”.
I’ve previously written a critique here, but the TLDR is that Mechanise is a major burning of the commons that damages trust within the Effective Altruism community and creates a major challenge for funders who want to support ideological diversity in forecasting organisations without accidentally causing capability externalities.
This is a useful clarification. I had a weak impression that Mechanise might be.
I agree that some of your critics may not have quite been able to hit the nail on the head when they tried to articulate their critiques (it took me substantial effort to figure out what I precisely thought was wrong, as opposed to just ‘this feels bad’), but I believe that the general thrust of their arguments more or less holds up.
In context, this comes across to me as an overly charitable characterization of what actually occurred: someone publicly labeled me a literal traitor and then made a baseless, false accusation against me. What’s even more concerning is that this unfounded claim is now apparently being repeated and upvoted by others.
When communities choose to excuse or downplay this kind of behavior—by interpreting it in the most charitable possible way, or by glossing over it as being “essentially correct”—they end up legitimizing what is, in fact, a low-effort personal attack without a factual basis. Brushing aside or downplaying such attacks as if they are somehow valid or acceptable doesn’t just misrepresent the situation; it actively undermines the conditions necessary for good faith engagement and genuine truth-seeking.
I urge you to recognize that tolerating or rationalizing this type of behavior has real social consequences. It fosters a hostile environment, discourages honest dialogue, and ultimately corrodes the integrity of any community that claims to value fairness and reasoned discussion.
I think Holly just said what a lot of people were feeling and I find that hard to condemn.
”Traitor” is a bit of a strong term, but it’s pretty natural for burning the commons to result in significantly less trust. To be honest, the main reason why I wouldn’t use that term myself is that it reifies individual actions into a permanent personal characteristic and I don’t have the context to make any such judgments. I’d be quite comfortable with saying that founding Mechanise was a betrayal of sorts, where the “of sorts” clarifies that I’m construing the term broadly.
This characterisation doesn’t quite match what happened. My comment wasn’t along the lines, “Oh, it’s essentially correct, close enough is good enough, details are unimportant”, but I actually wrote down what I thought a more careful analysis would look like.
Part of the reason why I’ve been commenting is to encourage folks to make more precise critiques. And indeed, Michael has updated his previous comment in response to what I wrote.
Is it baseless?
I noticed you wrote: “we do not plan on meaningfully making use”. That provides you with substantial wriggle room. So it’s unclear to me at this stage that your statements being true/defensible would necessitate her statements being false.
Yes, absolutely. With respect, unless you can provide some evidence indicating that I’ve acted improperly, I see no productive reason to continue engaging on this point.
What concerns me most here is that the accusation seems to be treated as credible despite no evidence being presented and a clear denial from me. That pattern—assuming accusations about individuals who criticize or act against core dogmas are true without evidence—is precisely the kind of cult-like behavior I referenced in my original comment.
Suggesting that I’ve left myself “substantial wiggle room” misinterprets what I intended, and given the lack of supporting evidence, it feels unfair and unnecessarily adversarial. Repeatedly implying that I’ve acted improperly without concrete substantiation does not reflect a good-faith approach to discussion.
If you don’t want to engage, that’s perfectly fine. I’ve written a lot of comments and responding to all of them would take substantial time. It wouldn’t be fair to expect that from you.
That said, labelling a request for clarification “cult-like behaviour” is absurd. On the contrary, not naively taking claims at face value is a crucial defence against cult dynamics. Furthermore, implying that someone asking questions is acting in bad faith is precisely the technique that cult leaders use[1].
I said that the statement left you substantial wiggle room. This was purely a comment about how the statement could have a broad range of interpretations. I did not state, nor mean to imply, that this vagueness was intentional or in bad faith.
That said, people do ask questions in bad faith fairly often, so you can’t conclude that a group is a cult just because its members say their critics are mostly acting in bad faith.
To be clear, I was not calling your request for clarification “cult-like”. My comment was directed at how the accusation against me was seemingly handled—as though it were credible until I could somehow prove otherwise. No evidence was offered to support the claim. Instead, assertions were made without substantiation. I directly and clearly denied the accusations, but despite that, the line of questioning continued in a way that strongly suggested the accusation might still be valid.
To illustrate the issue more clearly: imagine if I were to accuse you of something completely baseless, and even after your firm denials, I continued to press you with questions that implicitly treated the accusation as credible. You would likely find that approach deeply frustrating and unfair, and understandably so. You’d be entirely justified in pushing back against it.
That said, I acknowledge that describing the behavior as “cult-like” may have generated more heat than light. It likely escalated the tone unnecessarily, and I’ll be more careful to avoid that kind of rhetoric going forward.
I can see why you’d find this personally frustrating.
On the other hand, many people in the community, myself included, took certain claims from OpenAI and SBF at face value when it might have been more prudent to be less trusting. I understand that it must be unpleasant to face some degree of distrust due to the actions of others.
And I can see why you’d see your statements as a firm denial, whilst from my perspective, they were ambiguous. For example, I don’t know how to interpret your use of the word “meaningful”, so I don’t actually know what exactly you’ve denied. It may be clear to you because you know what you mean, but it isn’t clear to me.
(For what it’s worth, I neither upvoted nor downvoted the comment you made before this one, but I did disagree vote it.)
Holly herself believes standards of criticism should be higher than what she seems to have employed here (judging by the comments, without being familiar with the overall situation); see “Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent”.
“From people who want to slow down AI development”
The framing here could be tighter. It’s more about wanting to be able to understand AI capability trends better without accidentally causing capability externalities.
Yes, I think that is better than what I said, both because it’s more accurate and because it makes it clearer that Matthew did in fact use his knowledge of capability trends to decide that he could profit from starting an AI company.
Like, I don’t know what exactly went into his decision, but I would be surprised if that knowledge didn’t play a role.
Arguably that’s less on Matthew and more on the founders of Epoch for either misrepresenting themselves or having a bad hiring filter. Probably the former—if I’m not mistaken, Tamay Besiroglu co-founded Epoch and is now co-founding Mechanize, so I would say Tamay behaved badly here but I’m not sure whether Matthew did.
Quick thoughts:
1. I think I want to see more dialogue here. I don’t personally like the thought of the Mechanize team and EA splitting apart (at least, more than is already the case). I’d naively expect that there’s still a fair bit of wiggle room for the Mechanize team to do better or worse things in the world, and I’d of course hope for the better side of that. (The situation is still very early, for instance.)
2. I find it really difficult to adjudicate on the morality and specifics of the Mechanize spinoff. I don’t know as much about the details as others do. It really isn’t clear to me what the previous funders of Epoch believed or what the conditions of the donations were. I think those details matter in trying to judge the situation.
3. The person you mentioned, Holly Elmore, is really the first and one of the loudest to get upset about many things of this sort of shape. I think Holly disagrees with much of the EA scene, but in the opposite direction from you/Matthew. I personally think Holly goes a fair bit too far much of the time. That said, I know there were others who were upset about this who I think better represent the main EA crowd.
4. “the idea that we should decelerate AI development is now sometimes treated as central to the EA identity in many (albeit not all) EA circles.” The way I see it, it’s more a matter of cooperativeness between EA organizations. There are a bunch of smart people and organizations working hard to slow down generic AI development. Out of all the things one could do, there are many useful things to work on other than [directly speeding up AI development]. This is akin to how it would be pretty awkward if a group that calls itself EA tried to fight global population growth by making advertisements attacking GiveWell; they might feel they have good reasons for this, but it makes sense to me why some EAs would not be very thrilled. Relatedly, I’ve seen some arguments for longer timelines that make sense to me, but I don’t feel like I’ve seen many arguments in favor of speeding up AI timelines that make sense to me.
What do you think would constitute being a “sellout and traitor”?
In the case at hand, Matthew would have had to at some point represent himself as supporting slowing down or stopping AI progress. For at least the past 2.5 years, he has been arguing against doing that, in extreme depth, on the public internet. So I don’t really see how you can interpret him starting a company that aims to speed up AI as inconsistent with his publicly stated views, which seems like a necessary condition for him to be a “traitor”. If Matthew had previously claimed to be a pause AI guy, then I think it would be more reasonable for other adherents of that view to call him a “traitor.” I don’t think that’s raising the definitional bar so high that no one will ever meet it; it seems like a very basic standard.
I have no idea how to interpret “sellout” in this context, as I have mostly heard that term used for such situations as rappers making washing machine commercials. Insofar as I am familiar with that word, it seems obviously inapplicable.
I’m obviously not Matthew, but the OED defines them like so:
sell-out: “a betrayal of one’s principles for reasons of expedience”
traitor: “a person who betrays [be gravely disloyal to] someone or something, such as a friend, cause, or principle”
Unless he is lying about what he believes—which seems unlikely—Matthew is not a sell-out, because according to him Mechanize is good or at minimum not bad for the world on his worldview. Hence, he is not betraying his own principles.
As for being a traitor, I guess the first question is: a traitor to what? To EA principles? To the AI safety cause? To the EA or AI safety community? In order:
I don’t think Matthew is gravely disloyal to EA principles, as he explicitly says he endorses them and has explained how his decisions make sense on his worldview
I don’t think Matthew is gravely disloyal to the AI safety cause, as he’s been openly critical of many common AI doom arguments for some time, and you can’t be disloyal to a cause you never really bought into in the first place
Whether Matthew is gravely disloyal to the EA or AI safety communities feels less obvious to me. I’m guessing a bunch of people saw Epoch as an AI safety organisation, and by extension its employees as members of the AI safety community, even if the org and its employees did not necessarily see themselves that way, and felt betrayed for that reason. But it still feels off to me to call Matthew a traitor to the EA or AI safety communities, especially given that he’s been critical of common AI doom arguments. This feels more like a difference over empirical beliefs than a difference over fundamental values, and it seems wrong to me to call someone gravely disloyal to a community for drawing unorthodox but reasonable empirical conclusions and acting on them, while broadly sharing the community’s values. I think people should be allowed to draw conclusions (or even change their minds) based on evidence, and to act on those conclusions, without it being betrayal, assuming they broadly share the core EA values and are being thoughtful about it.
(Of course, it’s still possible that Mechanize is a net-negative for the world, even if Matthew personally is not a sell-out or a traitor or any other such thing.)
Yes, I understand the arguments against it applying here. My question is whether the threshold is being set at a sufficiently high level that it basically never applies to anyone. Hence why I was looking for examples which would qualify.
Sellout (in the context of Epoch) would apply to someone e.g. concealing data or refraining from publishing a report in exchange for a proposed job in an existing AI company.
As for traitor, I think the only group here that can be betrayed is humanity as a whole, so as long as one believes they’re doing something good for humanity I don’t think it’d ever apply.
Hmm, that seems off to me? Unless you mean “severe disloyalty to some group isn’t Ultimately Bad, even though it can be instrumentally bad”. But to me it seems useful to have a concept of group betrayal, and to consider doing so to be generally bad, since I think group loyalty is often a useful norm that’s good for humanity as a whole.
Specifically, I think group-specific trust networks are instrumentally useful for cooperating to increase human welfare. For example, scientific research can’t be carried out effectively without some amount of trust among researchers, and between researchers and the public, etc. And you need some boundary for these groups that’s much smaller than all humanity to enable repeated interaction, mutual monitoring, and norm enforcement. When someone is severely disloyal to one of those groups they belong to, they undermine the mutual trust that enables future cooperation, which I’d guess is ultimately often bad for the world, since humanity as a whole depends for its welfare on countless such specialised (and overlapping) communities cooperating internally.
It’s not that I’m ignoring group loyalty, just that the word “traitor” seems so strong to me that I don’t think there’s any smaller group here that’s owed that much trust. I could imagine a close friend calling me that, but not a colleague. I could imagine a researcher saying I “betrayed” them if I steal and publish their results as my own after they consulted me, but that’s a much weaker word.
[Context: I come from a country where you’re labeled a traitor for having my anti-war political views, and I don’t feel such usage of this word has done much good for society here...]
Edit: I think that Neel’s comment is basically just a better version of the stuff I was trying to say. (On the object level I’m a little more sympathetic than him to ways in which Mechanize might be good, although I don’t really buy the story to that end that I’ve seen you present.)
Wanting to note that, on my impressions, and setting aside who is correct on the object-level question of whether Mechanize’s work is good for the world:
- My best read of the situation is that Matthew has acted very reasonably (according to his beliefs), and that Holly has let herself down a bit
- I believe that Holly honestly feels that Matthew is a sellout and a traitor; however, I don’t think that this is substantiated by reasonable readings of the facts, and I think this is the kind of accusation which it is socially corrosive to make publicly based on feelings
On handling object-level disagreements about what’s crucial to do in the world…
- I think that EA-writ-large should be endorsing methodology more than conclusions
- Inevitably we will have cases where people have strong, earnest beliefs about what’s good to do that point in conflicting directions
- I think that we need to support people in assessing the state of evidence and then acting on their own beliefs (hegemony of majority opinion seems kinda terrible)
- Of course people should be encouraged to beware unilateralism, but I don’t think that can extend to “never do things other people think are actively destructive”
- It’s important to me that EA has space for earnest disagreements
- I therefore think that we should have something like “civilized society” norms, which constrain actions
  - Especially (but not only!) those which would be harmful to the group’s ability to have high-quality discourse
  - cf. SBF’s actions, which I think were indefensible even if he earnestly believed them to be the best thing
  - (Some discussion on how norms help to contain naive utilitarianism)
- I feel that Holly’s tweet was (somewhat) norm-violating, and I’m kind of angry that Matthew is the main person defending himself here