I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving its goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that is a) against their values, and b) in their view a net negative in the long term. (Many others would too; those three are just the well-known ones.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are
After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".
And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.
Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors; otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, they are aligned with the VCs, because if they are not, then the next day there is another CEO. As the parent stated, this is not cynicism. I see this as simply factual: it is the laws of money.
I am suspicious the whole thing is a PR stunt to build public trust.
In none of their statements do they say they won't do the things:
> we cannot in good conscience accede to their request.
That's very specifically worded to not say "under no circumstances will we do this".
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
Is not saying they won't eventually be included.
They've left themselves a way to backtrack, and given the care with which this statement has been crafted, that's surely deliberate.
This. This is a public misdirection. They already signed a new deal. It may be to their disliking but nothing in the statement prevents them from moving forward.
That is speculation. You might be correct, but this statement could simply be a strong signal to the administration to back down. A Hail Mary.
Isn't that what we're all doing in this thread? We could certainly take the document at face value but as a parent commenter said, almost every company starts off with "don't be evil" then goes and does evil things.
Is anthropic different? Maybe. But personally I don't see any indication to give them the benefit of the doubt.
> ... to back down.
Or else what?
> They've left themselves a way to backtrack, and given the care with which this statement has been crafted, that's surely deliberate.
What's worse, someone in their PR department will read this thread and be disappointed that the spin didn't work.
I mean that’s just adulthood.
There are outcomes where the US government seizes the company. Not super likely, not impossible.
It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.
I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.
> There are outcomes where the US government seizes the company. Not super likely, not impossible.
Are there historical examples in the US specifically where we've nationalized a business?
Because we've certainly invaded countries and assassinated leaders over exactly the same.
ETA: I could have answered my own question with two minutes of research. Yes, we have: https://thenextsystem.org/history-of-nationalization-in-the-...
I'm not sure why you are getting downvoted.
It is indeed a naive, or more likely a dishonest thing to do.
Anyone can promise anything. When there's little to no accountability and public memory/opinion doesn't last a week (or is easily manipulated anyway), then promises mean literally nothing. Much like how, in politics, temporary means permanent.
Or HackerNews itself, with them implementing a little Big Brother. It will, of course, absolutely and without a doubt only "nudge" people, and it will absolutely, under no circumstances, pinky promise, never get any worse or do anything else but that.
When there are millions of fools, those who actually recognize that they are being fooled are rarely significant in number. They're drowned out by the fools, until said fools "wake up" and cry "if only we had known!".
Well ... you could have known, but in your mindlessness you didn't listen and think.
"It must be true, because they say so. D'uh. What are you, dumb?"
This. I don't get why you are getting downvoted. The statement literally says:
The last word is very important: "now".

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

I'm not saying whether or not they're planning to back down, but this sentence doesn't imply that. The "now" is clearly a reference to the fact that they haven't in the past.
Being a tech forum centered around VC funding means we have a TON of tech bros (derogatory) here, who believe in nothing beyond getting their own piles of money for doing literally anything they can be paid to do. If you offered these guys $20 to murder a grandmother they'd ask if they have to cover the cost of the murder weapon or if that's provided.
I get it to a degree; people gotta eat, and especially right now the market is awful. Not to mention most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point. Why not graduate to doing it with weapons of war too? But personally, I sleep better at night knowing nothing I've made is helping guide missiles into school buses. That's just me.
[dead]
I share this sentiment.
In general, I don’t know if it’s a coincidence, but here on HN, for example, I’ve noticed an increasing number of comments and posts emphasizing the narrative of how “well-intended” Anthropic is.
Feel free to judge them by their actions rather than intentions. This situation being an example.
- [deleted]
I'd love to see the financial model that offsets losing your single biggest customer and substantial chunk of your annual revenue with some vague notion of public trust.
This is so short-sighted. We are so early into this AI revolution, and this administration is obviously in a tailspin, with the only folks left in charge being the least capable ones we have seen in a decade.
Imagine what the conversation would be like if Mattis, a highly decorated and respected leader, were still the SecDef. Instead we are seeing bully tactics from a failed cable news pundit who has neither earned nor deserved any respect from the military he represents.
We are two elections and a major health issue away from a complete change of course.
But short-sightedness is the name of the quarterly reporting game, so who knows.
> We are so early into this AI revolution…
I keep hoping it’s almost over.
Not trying to be the Luddite. Had multiple questions to AI tools yesterday, and let Claude/Zed do some boilerplate code/pattern rewriting.
I’ve worked in software for 35 years. I’ve seen many new “disruptive” movements come and go (open source, objects, functional, services, containers, aspects, blockchains, etc). I chose to participate in some and not in others. And whether I made the wrong choices or not, I always felt like I could get a clear enough picture of where the bandwagon was going that I could jump in, hold back, or do something in between. My choices weren’t always the same as others’, so it’s not like it was obvious to everyone. But the signal felt more deterministic.
With LLM/agents, I find I feel the most unease and uncertainty with how much to lean in, and in what ways to lean in, than I ever have before. A sort of enthusiasm paralysis that is new.
Perhaps it’s just my age.
I'm seriously worried there won't be more elections. Not hyperbole at all.
> I'm seriously worried there won't be more elections. Not hyperbole at all.
Why? That's an unrealistic fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.
If you want something to worry about, worry about this:
> And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.
> Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.
> And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)
It's not an unrealistic fear. Trump has been making noises about "taking over elections." Abolishing elections wholesale is very unlikely, sure, but a sham election rigged by a corrupt government? That's standard fare for authoritarians. And there's evidence of voting anomalies in swing states in the 2024 election.
https://www.theguardian.com/us-news/2026/feb/27/trump-voting...
Yeah, Russia still has "elections" for all the good that does them.
Trump _says_ lots. Most of it doesn't come true.
FYI, even though you have a new account, you were banned from your first comment and all your comments automatically show up as hidden-by-default to most users.
It's not who votes that counts, but who counts the votes.
(Attributed to Stalin, but likely comes from a despot earlier in the history.)
Authoritarian nations continue to have elections, turnout is near 100%, and Dear Leader wins with 90% of the vote.
I don't think it's crazy to worry about that, but elections are run by the states, there are over 100,000 polling places nationally, and people are pissed. On Jan 3, the terms of the entire current House of Representatives end; Democratic governors will still hold elections, and if there haven't been elections in GOP-led states, they're out of representation. There are so many hurdles in the way of the fascists canceling or heavily interfering in elections, and they're all just so stupid.
WaPo headline “Administration plans to declare emergency to federalize election rules.” https://www.washingtonpost.com/politics/2026/02/26/trump-ele...
Yeah, they can plan whatever they want. No such authority exists, and it must really be emphasized that they're all so stupid.
Stupid and effective are not mutually exclusive.
I do agree with you that no such authority exists, but this administration seems to get away with a lot of things they have no authority to do.
- [deleted]
- [deleted]
If you think they're pissed now, just wait to see how they react to election interference.
I recently read up on how the House of Representatives renews itself and quite frankly it's one of the most beautiful processes I've seen, completely removing the influence of the prior congress.
Putin crushes every election he has. Of course there would be more elections.
Mattis, the same highly decorated and respected leader who was on the board of directors at Theranos... edit: added Mattis
This is why we should be skeptical of companies that want to tie themselves to the military industrial complex in the first place.
Their whole strategy is that the lack of a legal moat protecting their product is an existential threat to human life. They are the only moral AI and their competitors must be sanctioned and outlawed. At which point they can transition from AI as commodity to “value” based pricing.
It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.
$200M is >2% ARR at the last numbers we got from them, and would take them back... checks notes... literally only a few days of ARR growth.
I'd love to see any evidence that this single biggest customer is provably and irreversibly lost on all levels of scrutiny as a result of this attempt at building public trust.
The rest of the world moves to using you?
It absolutely is a PR stunt. And the media is cheering.
It's absurd.
It's simple: If you do not like working with the military, cancel your contract with the military and pay the penalties.
They are explicitly not doing that.
This effectively is cancelling, isn't it?
You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.
> If you do not like working with the military, ...
Eh? But they do like to work with the military. How else are you going to "defend the United States and other democracies, and to defeat our autocratic adversaries"?
They want to work with the military, with just two additional guardrails.
[dead]
> it is simply the laws of money
The First Law of Money: Money buys the Law.
To quote Brennan Lee Mulligan, "Laws are threats made by the dominant socioeconomic ethnic group in a given nation."
The full[1] quote is:
> “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”
...Which is funny, but technically speaking, it's (more or less) a paraphrasing/extrapolation of the very serious political science definition of a state, “a monopoly over the legitimate use of violence in a defined territory”
[1] Minus the last line, which I will allow others to discover for themselves
Certainly pre-democracy, other than the ethnic group bit.
That's maybe the second law. The first one is: money is always finite.
Look at how Elon Musk behaved. Do you think the VCs gladly approved what he did with Twitter? They might want to keep chasing quarterly results, but sometimes, as with Zuckerberg, they can't: not enough money. Similar examples: Google's funding rounds, or how often a much better-financed politician loses to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants, and that guy is a very wealthy person. There are always limits, putting the money law in second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.
The Twitter acquisition wasn't funded by venture capital, so your question about VC approval doesn't apply.
If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.
FWIW, I don’t actually know whether the board of Anthropic has the actual power to replace its CEO, or whether Dario has retained some form of personal super-control shares, Zuckerberg style.
At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.
I'd counter that at this level of capital, if the CEO doesn't align well with the capital, then super-control shares will be overpowered by super-lawyers and, if need be, some super-donations. OpenAI was a public interest company...
Not at all. Especially at that level of capital. It's the equity equivalent of "if you owe a bank a million dollars, you're in trouble. If you owe a bank a billion dollars, the bank is in trouble".
Capital is extremely fungible. Typically extremely overleveraged. Lawyers are on the other hand extremely overprotective. They won’t generally risk the destruction of capital, even in slam-dunk cases. Vide WeWork.
This is fundamentally incorrect.
Anthropic has an odd voting structure. While the CEO Dario Amodei holds no super-voting shares, there are special shares controlled by a separate council of trustees who aren't answerable to investors and who have the power to replace the Board. So in practice it comes down to personal relationships.
Surely you mean the laws of shareholder capitalism. There are many things you can do with money, and only some of them are legally backed by rules that ensure absolute shareholder power.
> everyone in this industry
So in the last 20 years nothing good has come out of the software industry (if this is the industry you mean)?
I find it somewhat ironic, because this type of generalization is, for me, the same issue that some of the people saying "they want to make the world a better place" have: failing to accept that reality is complex.
There were huge benefits for society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000, lots of people were saying "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only ones, but ultimately got copied and pushed from the spotlight.
Everyone in this industry making a certain bullshit claim, that is. I did qualify my statement. Don't cut my words to make a strawman.
Additionally I state in the end that I do believe it’s possible.
So do you know everyone in the industry who made such a claim? Sure, maybe you meant to restrict it further to "everyone I have personally noticed saying/writing that" (or something along those lines), but even then, do you know all the stuff they did after saying it? (The statement also included "doing the opposite", which I find quite strong.)
If I see "everyone" I would expect it to actually mean "everyone under the constraints". The word "everyone" has a certain meaning and is very powerful; why use it in situations where other words like "many" or "most" might be more appropriate?
I don't even think the two things are contradictory. People who put too much value in their ideals tend to overlook the consequences of those ideals in real life, and do wrong without deviating an inch from them.
But is that really the problem in big tech today? To me it looks like sooner or later they cave from their ideals (or leadership changes) and that the reason every time is that they want to make even more money.
I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.
Or, perhaps even more likely, the ideals inevitably get corrupted by access to unthinkable economic power/leverage, as happened with more or less all the other giants that had strongly idealistic initial leadership, and leadership may actually delude itself into thinking it's still on the right track, as a sort of defense mechanism.

Back when they published the article on the Claude-operated mass-scale data breach last year, the conclusions were delivered in a bafflingly casual tone, as if it were a weather report: yeah, the world has become a lot more dangerous now (on its own), so you may want to start using Claude for cyber-defense, and we are doing our best to help you protect your business. I rolled my eyes at that so hard they popped out of their sockets. Weren't you... the guys... who made it that way and enabled that very attack? Very convenient to sell weapons to both sides, isn't it, not at all like a mafia business. Very responsible and ideal-driven.
Consider also the part that is going unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans but he says nothing about mass surveillance of anybody else (and, in fact, is proactively giving foreign intelligence a green light in his address) and is deliberately avoiding any discussion on the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed something like 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he's successfully captured the popular sentiment, people are going to consider him the moral champion without bothering to look at these and other glaring contradictions.
Ideals have always been represented in literature as a virtue and a problem for humans. I find real life is no different.
Sure, sooner or later. I don't want to even guess where the new AI companies are on the path that leads to that destination, but right now it looks like Anthropic is not at that stage. Heck, even though a lot of people find Sam Altman slimy, even OpenAI isn't yet at that stage.
I believe this is classic behaviour of every shareholder-driven business. You can build on ideals at the start, but once you acquire some position, money-making is on the menu. E.g. deliberately worsening user experience for better revenue.
The possibility of turning on the heated seats in a car you own for a small monthly fee is absurd, yet very real. I'm looking forward to the enshittification of current AI tools.
Yeah it's not that the people involved have no ideals, it's that the company structure as a whole doesn't, and over time that structure will eventually outlive, corrupt, and/or overpower the ideals of the founders or other principled individuals at the company.
I can’t think of a single thing Meta does that isn’t driven by pure greed.
Yes, though Meta is a bad example as they started off with the values of Zuckerberg, and still have them.
Exactly right. But I think that makes it a good example, actually. Company DNA is a thing. Bill Gates isn't running Microsoft anymore. Still...
What would be a more appropriate example?
Apple, Tesla, Oculus.
The first two are definitely "heroes who lived long enough to become villains"; Oculus is more of an "I reckon", given how it was seen right up until getting bought by Facebook.
Adobe?
But once in the stock market, it is almost impossible for companies like Anthropic, or any successful startup, not to become villains (profit first, no matter what). Anthropic especially needs to burn huge amounts of money, so they need a lot of funding. The only way to keep the founders' idealism is probably to copy Zuckerberg: divide the stock into shares with and without voting power, and trade only the non-voting stock.
I'm not denying 95% of that, only saying that Zuckerberg didn't have any idealism to lose in the first place.
I actually forgot that his first site was Facemash, whose single purpose was to rate the "hotness" of individual girls at his university.
[dead]
Anthropic is not a public company.
LOL, Palmer Luckey is a right-wing war mongering psychopath.
All of Meta's VR stuff should rationally be cut loose and refunded if it were all about greed. That stuff only survives because Zuck is a nerd who wants it to happen (but it's not going to.)
Oh sure. I don't want to say everybody are driven by ideals and not greed, but that even people with strong ideals and good intentions can do a lot of bad by being blinded by those same ideals.
Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.
I think most people are conscious that, irrespective of a founder's vision, company morals usually don't survive the MBA-isation phase of a company's growth.
Depends. Many still reflect the founder's vision, even if that vision has evolved over time.
Can you provide an example of that for an American venture backed corporation older than a decade?
Not the person you're replying to, and I may be wrong about this, but Amazon?
Jeff's original vision was "relentless customer focus" and ...
actually on second thought I'm seeing the argument 'Amazon stopped caring about customers and is in full enshittification mode at this point'.
But maybe Amazon circa ~2010/2015, or Google around 2010 was still pretty close to the original vision of customer service/organizing the world's information.
Or Apple? They're still making nice computers, although not sure they count as VC backed.
Stripe perhaps? Hashicorp?
Well, Google's vision was to catalog all the world's data.
Apple wanted to make personal computing stable - they were absolutely VC backed
I suppose the original question is vague enough that "the founder's vision" could encompass almost anything, even if the vision changes. But then there's nothing really being claimed: the company is "stable" only in the sense that it tracks whatever the person who started the organization happens to want, and even that you could debate.
True. Which is all the more reason for calling bullshit on claims of "doing good" or "having ideals" by anyone building a company that can eventually be run by MBAs.
The impact of MBAs might be decreasing..
> not related to people's "understanding".
Except for the understanding that it's foolish to believe anything that sounds too good to be true. Yes, believing that people who want to make money/achieve positions of power, also want to make the world a better place, is absolutely foolish. Ridiculously foolish.
Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.
I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.
I understand Anthropic is not public, but I assume there's an IPO coming.
This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI on the values of AI safety and principles.
I don't think it's cynical to believe that a company can make the world a worse place, or that Anthropic as a company will make many horrible choices.
I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.
At some point I've wondered if "fiduciary duty", when pushed to highest corporate levels, always conflicts with "make the world a better place"
i.e. Fiduciary Duty Considered Harmful
- [deleted]
Cynicism is the newspeak substitute for sincerity, no need to worry about being called a cynic in this post-truth world of snowflakes.
And that's okay. So we judge them one decision at a time. So far, Anthropic is good in my book.
> Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.
To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).
Idk man, from the outside Anthropic looks a lot like OpenAI with a cute redesign, and Amodei like Altman with a slightly more human face mask: the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money".
> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"
This is pretty low on my list of moral concerns about AI companies. The much more concerning and material issues include…what this thread is actually meant to be about.
VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.
Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.
Don't you always need more money, though? I am a chip designer, and I can tell you I am resource-intensive to employ. I want access to plenty of expensive programs and data. With more money come better tools, and frequently better tools lead to the quality results you want to deliver to the customer.
Do you tell your customers you need money to build better chips, or that you need more money because your next generation of chips will channel Jesus' soul back to earth and cure cancer?
Where is Anthropic hyping like that? Most of what I see coming out of Anthropic is deep-context releases on research they're doing.
> Mar 14, 2025, 7:27 AM CET
> "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"
It's the same old trick: "in two years we'll have fully self-driving cars", "in two years we'll have humans on Mars", "in two years AI will do everything", "in two years bitcoin will replace Visa and Mastercard", "in two years everyone will use AR at least 5 hours a day", ...
Now his new prediction is supposed to materialize "by the end of 2027", what happens when it doesn't? Nothing, he'll pull another one out of his ass for "2030" or some other date in the future, close enough to raise money, far enough that by the time it's invalidated nobody will ask him about it
How are people falling for these grifters over and over and over again? Are we getting our collective minds wiped out every 6 months?
Your quote supports hype but does not support your claim that Anthropic is telling customers they need more money to deliver the hype.
Of course Anthropic is saying that to investors. Every company does that, from SpaceX to Crumbl. “If you give us $X we will achieve Y” isn’t some terrible behavior, it’s how raising funds works.
Elizabeth Holmes is serving time for promising investors something her company couldn't deliver, so there is a line beyond which hype becomes fraud. Probably AGI, ASI, and fully automated societies aren't something well enough defined for courts to rule on, unlike making unfounded medical diagnoses from a pinprick of blood.
I work at a non-tech Fortune 500 and this is looking nearly spot-on from here. Nobody on my team touches the code directly anymore as of about 2 months ago. They're rolling it out to the entire software department by June. I can't speak to the economy at large, but this doesn't look like baseless hype to me. My understanding is that Claude Code reached this level late last year, ie. Amodei was just wrong about uptake rates.
They both work in the same market but they have pretty different careers and understandings. I simply can't understand why on Earth people would choose Altman over Amodei to trust on these kinds of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted on these decisions. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...
Amodei believed Altman, so there's that. I don't (have to) believe either. If the product works for me, it works. Raising their clanker products to the second coming is for investor relations, of which I am proud to say I am not a part.
Both are hucksters, although Amodei's qualifications are pretty good; he actually is a scientist. Out of these, I think Hassabis is my favorite.
I don't know why anyone would trust any of the above.
disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI
There should be a name for this, "cynic cope": when someone actually takes a principled view, the cynic, who has a completely negative view of the world, is proven to be wrong, can't accept it, and tries to somehow discount it.
Corporations do not and cannot have principles, they only have the profit motive
This is false. People can have principles; the profit motive is not something a corporation has, it's something people have. Corporations do things all the time that are based on everything from principles, to the personal whims of executives, to exercises in ego, to community-benefiting actions, to screwing customers for extra profit. It is entirely dependent on the specific people in management roles.
Corporations need profit to survive because the cost of tomorrow is a surplus of today.
A corporation is a bunch of people cooperating to achieve a common goal.
There is a very important factor that heavily influences (perhaps even controls?) how people act to achieve that goal, and sometimes even twists or adds goals.
Is that corporation publicly quoted in the stock market or is it private?
Look at how Steam behaves: it's private and more ideological, versus many publicly quoted companies, whose CEOs often sacrifice their own corporation's long-term survival for the benefit of short-term profiteering and some hedge fund manager's bonus.
Both need profit to survive, but the publicly quoted company is much more extreme.
When people say corporations only look to profit, what they really mean is that publicly quoted corporations will do everything possible to maximise short-term profit at any cost. Is there a CEO who cares about the long term? Either he will be convinced to change or be kicked out. It's almost impossible for someone to resist these influences in publicly quoted companies. It's just how Wall Street works, and if that doesn't change, neither will corporations.
The people running the world of finance and their culture are what causes enshittification and pushing a zero-sum game to extremes.
Agree with everything, but would add a small detail: publicly quoted corporations might as well sell dreams, and if they are very good at doing that, have no profit because of some future potential payoff (of course I am writing this from my fully self-driving car that I have owned for 10 years now, that might transform into a robot soon).
Sadly, market incentives pretty much always go opposite of moral incentives, because morals put brakes on decisions that multiply value for the company, but the company itself exists for multiplying value. The profit motive is built into the reason for its existence. It's a contradiction that has a lower probability of resolving in favor of morals as the company grows in size and accrued capital. Whichever moral principles the leadership may have had at the beginning, they always erode or get perverted over time simply because the market always has a stronger pull.
I hate that, by the way, but what I hate even more is that this is somehow the most effective way to run economies that we've found so far, and it ends up this way because instead of unsuccessfully trying to safeguard against greed and sociopathy, it weaponizes them outright.
I find "morals" difficult to evaluate objectively. Some people might find it "moral" that women do not have any education and just stay at home, which I find terrible.
But if most people in a society find something "wrong" generally they will organize to prevent that (even if it has value for a part of the society). I think it is simpler for everybody that economics (how we produce and what) is separated from morals (how we decide what is right and wrong).
It may appear simpler on the surface but it's very easy to find that market forces that don't have any checks and balances on them eventually converge on increasingly aggressive and dehumanizing behavior—not unlike your example with women. I have many such well-documented behaviors to list as examples, and I guarantee you have encountered them regularly and been upset at them.
The way we organize in a society is by having governments, usually elected ones to represent what "most people in a society" actually think, to serve as an arbiter of applied morals in our interactions, including business. To that end, we codify most of them in laws with clear definitions to prevent things like unfettered monopolies, corporate espionage, poor working conditions and hiring practices, etc. This generally works, though it depends on how well a given government and its constituent parts does its job and whether it uses the power it has to serve the entire society's interests or the interests of the elites that drive decisions. We can see right now how it fails in real time, for example.
Morals don't have to be evaluated "objectively" (whatever that is) every time to be observed. Humanity has agreed on many things that make up UDHR, international law, and other related documents. It's not the hard part. Making independent actors conduct their business in accordance with these codes is the hard part. Somehow even making them follow their own self-imposed principles is crazy hard for some reason. When Amodei claims Anthropic develops Claude for the benefit of all humanity but greenlights its use for surveillance on non-Americans, that's scummy. When Amodei claims to be terrified of authoritarian regimes gaining access to powerful AI but seeks investment from them, that's scummy. The deal with Palantir, the mass-surveillance business, is scummy. Framing the use of autonomous weapons as only disagreeable insofar as the underlying capabilities aren't reliable enough is scummy. You don't need to be a PhD in morals to notice that.
something something the ideology of a cancer cell. The only goal of a publicly traded corporation is to make the line go up, and the board is required to eliminate anyone who puts other principles before that.
Tim Cook memorably said (in 2014): "When we work on making our devices accessible by the blind, I don't consider the bloody ROI."
How come the board hasn't eliminated him?
Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.
This is a wantrepreneur forum not a peer published scientific journal, my opinions about vibes matter as much as private companies PR campaigns
Sure they do buddy.
> how driven by ideals many folks at $Corporatron are
Well let's see... it says in the post:
* worked proactively to deploy our models to the Department of War and the intelligence community
* the first frontier AI company to deploy our models in the US government’s classified networks
* the first to deploy them at the National Laboratories
* the first to provide custom models for national security customers
* extensively deployed across the Department of War and other national security agencies
* offered to work directly with the Department of War on R&D to improve the reliability of these systems
* accelerating the adoption and use of our models within our armed forces to date
* never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner

They didn't claim to have pacifist ideals.
In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.
Just because you disagree with their ideals doesn't mean they're not holding to theirs
They sound exactly like George Bush and every other American leader who's claimed high minded ideals while they engage in interventions in direct contradiction to those ideals around the world
To be clear, I don't think anthropic is itself intervening.
The concern they've raised about authoritarianism is "AI enabling authoritarians."
When they push back on the US government wanting to use Claude to (legally) surveil US citizens, that still feels consistent to me as a concern about authoritarianism.
I think it's reasonable to hear high minded ideals and become skeptical, but in this case I'm surprised that people are trying to accuse them of hypocrisy
Lots of people driven by ideals work for the US military. Not me, ever, but other people certainly.
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally, I believe they would go to jail/shut down/whatever before they do something objectively wrong.
> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.
This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.
I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.
That is a good LinkedIn endorsement if ever I saw one!
Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.
If someone is in an environment where they have to do XYZ or die, their choice to do XYZ might not reflect their personality, but the environment where they have to do XYZ or die.
But if you were watching them, was there really no freedom from consequences? At least there was the risk of you thinking less of them.
I think that really cruel people want you to know when they can act with impunity, it's part of the appeal to some. The Anthropic people don't seem like that sort, at least. But plenty of horrible people have still not been that sort.
> But if you were watching them, was there really no freedom from consequences?
Ah, so I think you may have done a little hop and a jump over a critical, load-bearing term which is “feel like”. You get to observe people who feel like there are no consequences. Their feelings may or may not be accurate.
You can sometimes see people who treat service workers, servants, or subordinates poorly because they feel like it’s permitted and free from consequence. You can also sometimes see people reveal things about themselves when playing games. It’s kind of a cliché that people find out that they’re transgender at the D&D table, and it happens because it’s a “consequence-free way” to act out a different gender role.
Or we can talk about that magic ring that makes you invisible. You know, the ring of Gyges, or that of Sauron. People can’t actually become invisible, but you can sometimes catch them in a situation where they think they can do something wrong and not get caught.
Free from consequence. In other words, free of any stakes. Zero stress low stakes environments enable larping.
Exactly
Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!
Well I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last... I dunno, 13 years or something, we've had long deep talks about lots of things, pre-AI world: what it takes to build a big business, will QC ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built... all that stuff.

Then... attention is all you need, a bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NNs scale, so he joined that lab, first 5/10 or so iirc... to head AI policy. That little lab grew, stuff happened, the next part isn't mine to share, but so much as to say: Anthropic was basically born out of the expectation that this moment would come and more... extremely human-focused... voices should be at the table. That is Anthropic, that idea; they left their jobs at the aforementioned lab and started their own startup to make sure a certain tone/voice/idea was always represented.

Around summer 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard, and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist, and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew, that, I suppose, was always the plan? So it was never unexpected to me that they would act this way; that is what Anthropic is all about. Here we are.
Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.
They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam were the "responsible scaling officer", meaning they were responsible for Anthropic meeting the obligations of its commitments to building safeguards.
I think neom is referring to Jack Clark, another one of the seven cofounders.
I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for the other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feels like "inside baseball" at best, mildly culty at worst, and here outside the walls of Anthropic, we're going to see those names and think of Kushner(??), Altman, and maybe Dorsey, and get confused.
FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.
For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!
Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.
I can agree that I thought it was jack dorsey but it looks like we are talking about jack clark [https://en.wikipedia.org/wiki/Jack_Clark_(AI_policy_expert)]
It would be better if people could name them with their full names to avoid any confusion.
[flagged]
Please don't do this here.
> it's easy to know how they will act when the going gets rough
Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.
That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Alternative" justifies our conservatism.
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts something like ~49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50 year anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".
In these days of the Epstein mails, it's worth remembering one thing that's become clear: Epstein was an extremely nice guy. He seemed kind, sincere, interested in what you were doing, civilized etc.
But to quote Little Red Riding Hood in Stephen Sondheim's musical: Nice is different than good. It's hard to accept if people you really like do horrible things. It's tempting to not believe what you hear, or even what you see. And Epstein was good at getting you to really like him, if he wanted to.
That doesn't mean we should be suspicious of niceness. It just means that we should realize, again, nice is different than good.
Anyone who's grown up around the upper class social strata understands this to be true.
In German you say „Nett ist die kleine Schwester von Scheisse“, which literally means "Nice is the little sister of shit". And this is how I cope with what decision-makers say. Zuckerberg was also "nice" for a long time.
"people's traits flanderize": nice
>Even if you went to burning man and your souls bonded ...
I'll take: List of places I never want to bond my soul with someone at for one thousand, please.
They get an air-conditioned trailer and pay "sherpas" to do their chores, so it's basically just a hotel suite.
Oh, that's the best place for souls to bond.
Bond to what -- that's the real question
Playa dust. It's certainly permanently bonded to my car.
This is insanely naive
Cynicism isn't always correct.
[flagged]
Huh? Why would they be in prison??
> they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries
They are US adversaries if they don’t give the USA what it wants… so as an adversary that doesn’t do what it's told to fall in line… you must go to prison.
This is silly. No one at anthropic is going to prison for this. It only hurts their ability to do business with US government customers which is a net negative for all. Anthropic will come around.
The nature of evil is that it's straight down the road paved with good intentions.
Late comment but I think this is probably a naive business strategy for an American company. Amodei seems to underestimate how much the US economy operates on relationships, connections and reputation. Granted this admin is really aggressive, but if Anthropic is marked a supply chain risk, they're screwed because virtually every US enterprise is a downstream contractor. And in lieu of B2B and government, they lack a direct-to-consumer moat. I commend his apparent assumption that the US market competes on capabilities (also betrayed by his predictions that AI will quickly destroy the white-collar class) but the reality is less an open free market and more a complex web of entrenched relationships. And going back to his prediction that AI will destroy the white collar class, this is where the bulk of inter- and intra-entity relationships live. In an economy driven by relationship moats, why would a CEO sever his relationships in exchange for a better tool?
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,
I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
> They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.
> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.
> Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":
Can you imagine a world where Anthropic says "we are changing our RSP; we think this increases AI risk, but we want to make more money"?
The fact that they claim the new RSP reduces risk gives us approximately zero evidence that the new RSP reduces risk.
Yea, that Sam only does this because, "he loves it." They're not in it for the money.
Sorry, I meant a different Sam – Sam McCandlish, not Sam Altman.
Wasn't expecting this post to get so much attention.
That's not fair, Sam can love money too and there is no conflict here.
"Mass surveillance of anywhere else in the world but America" is not the great idealistic position you are making it out to be.
It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
Come On, Obviously The Purpose Of A System Is Not What It Does
https://www.astralcodexten.com/p/come-on-obviously-the-purpo...
I don't think that article makes a strong case; it deliberately phrases examples in the most ridiculous ways and pretends that this is a damning criticism of the phrase itself; it's 'you're telling me a shrimp fried this rice' but with a pretence of rationality.
I think it makes a pretty compelling case that most invocations of the statement are either blindingly obvious or probably false. Can you give a counterexample?
> most invocations of the statement are either blindingly obvious or probably false
So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.
Secondly, look at this one specifically:
> The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.
Firstly, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.
Secondly, it is an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.
The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.
I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.
Driven by ideals? Yeah right. In that first paragraph he says they work with the Department of Defense to protect us from authoritarianism. What?! You are working with an authoritarian regime, you cynical fuck. Getting paid by them. And now you act all virtuous because you won't make autonomous weapons.
Anthropic doesn't want us to have the right to run open weight models on our own computers. They were never the good guys.
What I read is: Anything not open source, open weight, is evil.
I disagree. The concept of nuance, putting things in context, is the source of all good in internet discussions.
No, but lobbying the government to prohibit open source / open weight models is evil.
They literally want to use state violence to control what we can do on our own computers.
Anytime there is any law about anything, you can say that it's ultimately backed by "state violence". That's just silly. As silly as the notion that there shouldn't be any rules and limits whatsoever about what you can do with your computer.
> As silly as the notion that there shouldn't be any rules and limits whatsoever about what you can do with your computer.
Hard disagree. There shouldn't be any rules or limits whatsoever about what I can do with my computer, and especially ON my computer, as long as the thing I'm doing doesn't break other laws (CFAA, CSAM, etc).
This is, after all, Hacker News.
The problem with companies, you see, is that they are a separate entity from their founders, shareholders, or current leadership. A company has no soul or unchangeable intentions. Claude’s SOUL.md is just IP that can be edited at any time.
>It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt), they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.
Where are you getting that from?
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
I think it largely hinges on what they mean by "included": does that mean it was specifically excluded by the terms of the contract, or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that Anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically incorrect subject matter). Sometimes both parties have valid cases when there's a contract disagreement.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
- [deleted]
This is all just completely wrong. Anthropic explicitly stated, in their usage policy under the contract that DoW signed, that use of their products is not permitted for mass surveillance of American citizens or for fully automated weapons. Anthropic then asked DoW whether these clauses were being adhered to after the US’ unlawful kidnapping of Maduro. DoW is now attempting to break the contract that they signed and threatening them, because how dare a company tell the psycho dictators what to do.
> US’ unlawful kidnapping of Maduro.
The what now?
Maduro is being prosecuted and there was a warrant out for his arrest. There is no magic soil exemption if you commit a crime against the United States and flee to another country.
What on earth does "Two such use cases have never been included in our contracts with the Department of War" mean? Did they specifically forbid it in the contract or was it literally just not included? Because I can tell you that if it's the latter that does not generally entitle them to add extra conditions to the sale ex post facto.
>threatening them because how dare a company tell the psycho dictators what to do.
Dude it's a private defense contractor leveraging its control over products it has already installed into classified systems to subvert chain of command and set military doctrine. That's not their prerogative. This isn't a "psycho dictator" thing.
They have always maintained an acceptable use policy forbidding these things. It was not controversial, because the Pentagon claims they have no interest in doing them in the first place, until a regime-aligned executive at Palantir decided to curry favor by provoking a conflict.
Don't attribute to ideals what is simple self-preservation.
No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families' lives. Just like the rest of us.
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals.
Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."
This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.
What I don't get though is: why did the so-called "Department of War" target Anthropic specifically? What about the others, especially OpenAI? Have they already agreed to cooperate, or already refused? Why aren't they part of this?
> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?
Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.
More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.
This administration^Wregime has a lot of experience pressuring publicly with high stakes followed up by making backroom deals that would even make Jared Kushner blush.
This is protection racketeering 101! So much so that, if any form of a functioning US judicial system makes it past 2028, I'm willing to put money on more than a handful of people in the upper echelons of today's administration ending up slapped with the RICO Act.
I'm a bit underwhelmed tbh. Here is Anthropic's motto:
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
DoD and American exceptionalists also believe American foreign policy is in service of humanity’s long term well being
It is all for the benefit of man. We even get to see the man himself daily on television.
Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, which puts a zero sum ethnonationalism at the core.
I think the last few months have shown pretty clearly in whose service this policy is. If China went to attack Taiwan, the West would have no moral high ground left.
One of the hallmarks of fascist thinking is the dehumanizing of opponents and minorities, so within their own messed up framework, they might even mean it.
There was a time (1943?) when dealing with the US department of war meant serving for humanity's long-term well being.
Look I'm not going to disagree, obviously - but even in those times, you could argue that helping the department of war in some ways will contribute to deaths you might not necessarily want to be a part of. Bombing of Hiroshima and Nagasaki is still widely discussed today for a myriad of reasons, as is conventional bombing of cities in both Nazi Germany and Japan. We can both agree that fighting nazis is a good thing while at the same time having a moral objection to participating in the war effort.
And I think the stakes have changed today - it's one thing to be making bombs which might or might not hit civilians, it's another to be making an AI system that gives humans a "score" that is then used by the military to decide if they live or die, as some systems already do ("Lavender", used by the IDF, is exactly this).
Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.
//but even in those times, you could argue
This is the oft-spoken fallacy of the benefit of hindsight. Folks in that situation 80 years ago did what they had to do, to stop Japan from continuing to rape and murder hundreds of thousands of people in southeast Asia. But of course, you would have found a better option. How's the view, standing on the shoulders of giants?
Look up when Anthropic signed a contract with Palantir and then look up what Palantir does if you want an even better reality check on following the ideals. I chuckle every time.
And nobody knows what he means by "defeat" because no journalist interrogates or pushes back on his grand statements when they hear it. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets to it first but he never elaborates on why or what he expects to happen if the opposite comes to pass. I am assuming he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such "empowered democracy" than China. Because of Greenland, because of "our hemisphere". Hard nope to that.
Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says it outright in the text. Humanity stops at Americans.
Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.
Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.
But how can they avoid it? Why are they not asked?
Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions and the DoD likes their stuff so much it wants to use it more broadly, so they want to change the terms of the agreement(s). Anthropic disagrees on some points; DoD wants to force them to agree.
The probability is high that major AI development companies are already using an AI instance internally for strategic and tactical decisions. The state's power institutions, especially intelligence, now have a real competitor in the private sector.
Exactly which values are they "going to burn at the stake" for? Making as many people homeless as they can in the shortest possible time? Befuddling governments and VCs into creating an insane industry-wide debt which would either lead to a "success" in replacing jobs or an industry-wide crisis? Or maybe the value of stealing the intellectual property of every human on the planet under the guise of "fair use" and then deliberately selling the derivative product? Or the value of voluntarily working with "national security customers" when it suits them financially and crying foul when leopards bite their faces? Or the value of ironically calling a human-replacement machine "anthropic", as in "for humanity"?
Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.
"They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.
So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to "department of war" and is acting aggressively imperialist in a way the US hasn't in a long time?
And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?
Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.
I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.
Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for many – but not all – people at Anthropic.
That doesn't guarantee a good outcome, and there's still a hard road ahead.
> to rename the DoD to "department of war"
The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.
Careful speaking truth to power on this site, remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham who as of late has made a habit of posting right wing slop on his Twitter
> And who exactly are these "autocratic adversaries" they are mentioning?
Anyone that Israel doesn't like
> Except for the victims of sexual abuse perpetrated by their clergy.
I honestly wonder how much of this is made up. Given the size of the whole organization, and its holding onto weird principles regarding the personal relationships of its members (introduced in the far past to limit the secular power of its clergy), there certainly will be SOME cases.
But in the one case where a frater I knew got convicted, he definitely didn't do it. He was accused by several independent former students, and even some of the staff backed the students' claims with first-hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.
The only problem: He wasn't with the group at that time at all. I screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up, and nobody (but maybe me with him) would get in trouble over it. Turned out he forgot to refuel; both of us stayed at a pastor's guest house and he called the group, telling them that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened at another short stay of the group, where they spent a day visiting some mine before joining with us again.
Almost 3 decades later he got railroaded in court, and I learned about it in the news.
I'm confused. You heard about someone you knew being wrongfully convicted of a crime he didn't commit and you could have provided the testimony to clear him, but you just decided not to? Why not?
I never was contacted during the trial and only read about it almost 2 years later in the news.
Also, he's a man of strong faith; not that he knows he'll win in the end, more that it just doesn't have the same importance for him as it would have for us. I only had a short opportunity to ask him about it since then, and basically he doesn't think there is any real chance to win this. What he's most worried about is ruining the public image of his students (including his accusers), and since his order allowed him to rejoin and start over, in practice he already got everything he would have asked for.
To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but then as always who knows what will happen behind the scenes. Just see when most major US companies had backdoors on their systems providing all data to the NSA, i.e. PRISM.
>just another marketing stunt
What evidence on _Amodei_ and his actions leads to that conclusion?
Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir. They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance. They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.
When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.
>Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir.
Palantir will also be subject to the same contractual limitations as the DoD.
>They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance.
The stated red lines are about mass domestic surveillance and fully autonomous lethal weapons - and those are the kinds of restrictions you’d expect to apply to any government using the tech on its own population, not just the US.
For American agencies to use Anthropic's models against another sovereign state requires access to raw data from that state, which is somewhat of a practical firebreak. Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk it seizing control of the technology for its friends?
> They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.
What is the realistic alternative? Sit quietly and pretend scaling isn't a thing and dual use doesn't exist? Try to pause/stop unilaterally while money floods into their arguably less scrupulous competitors?
Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.
> Palantir will also be subject to the same contractual limitations as the DoD.
Well, first of all, we don't actually know that. Second, I'm going to question the commitment of any company to the principles of democracy and AI safety if one of their bigger partnerships is with a literal mass-surveillance, Minority-Report-crap company. It's the most confusing business partner to see when you're positioning your company as THE ethical one. If you're dealing with Palantir, you're helping mass surveillance, full stop, because that's what this company does. Which country's citizens get the short end of it is completely irrelevant (though in all likelihood it's still Americans, because that's Palantir's home turf).
> Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?
If that's how we characterize the current regime (which I actually agree with), then how come he's proactively trying to help it, deal with it, and insist it's a democracy that needs to be "empowered"? Sounds backwards to me. When you're about to be persecuted by your own government for not allowing it to use your models to do some heinous shit, this sounds like exactly the kind of government you shouldn't be helping at all (and ideally not do business where it can reach you). This is not normal.
> What is the realistic alternative? [...] Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?
If you notice that you're doing harm and you're concerned about doing harm, stop doing harm! Don't make it worse! "If I hadn't pulled the trigger, somebody else would" is a phrase you wouldn't expect to hold up in court. Similarly, racing to the bottom to be the most compassionate, self-conscious, and financially successful scumbag is the least convincing motivation imaginable. We will kill you quickly and painlessly unlike those other, less scrupulous guys! Logic like this absolves bad actors from any responsibility. The amount of harm stays the same but some of it gets whitewashed and virtue-signalled, and at the very minimum I'd expect the onlookers like ourselves not to engage in that.
> Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.
These aren't principles. What he's doing here is a free opportunity for incredible PR and industry support that he's successfully taken advantage of. The actual policy backslides, caveats, and all the lines that had been crossed prior will not receive as much press as the heroic grandstanding of a humble Valley nerd against Pentagon warmongers. Nobody will actually take the time to read the statement and realize how the entire text is full of lawyer-approved non-committal phrasing that leaves outs for any number of future revisions without technically contradicting it. I've already pointed some of it out earlier in the thread. The technology for autonomous weapons isn't reliable enough for use, gee, thanks! I feel so much safer now knowing that Dario will have no qualms engaging with it as soon as he deems it reliable enough.
You know, once the lawyers get involved, there are no contradictions because they define every term and then it makes all the sense in the world.
If Humanity=America, then obviously they don't care about the rest of the people, as a very very silly example.
You call it silly, I call it an accurate reading!
There are well intentioned people everywhere, also at Google or OpenAI...
But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...
The world running on a few powerful men's ideals is a problem in itself.
I like the enthusiasm, but remember that Google's motto used to be "Don't be evil".
Just curious: what about other regions and countries that have no such restrictions on developing their weapons? There is no world treaty on this yet, and even if there were one, not everyone would follow it behind closed doors.
I wouldn't underestimate this as a good business decision either.
When the mass-surveillance scandal breaks, or the first time a building with 100 innocent people in it gets destroyed by autonomous AI, the company that built it is gonna get blamed.
As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
Not this, because this is completely unprecedented? In fact, the Pentagon already signed a contract with Anthropic on safe terms 6 months ago; that initial negotiation was when Anthropic would have made the decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.
> was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.
I think in this case it's safe to assume malice rather than incompetence. It's a lot like the parable of the frog and the scorpion.
Government always has the option to cancel contracts for convenience, they knew what they signed up for or else they were clueless and shouldn’t be playing with DoD
The keyword is "cancel", not threaten seizure with the DPA and destruction with a baseless supply chain risk designation.
- [deleted]
If they made a completely private nuclear reactor and ended up with a pile of weapons-grade plutonium, what do you think the department of war would do? It was completely obvious this would happen, just as it will not be surprising when laws are passed and everyone involved has to choose between quitting, or quitting and going to jail. There are western countries in which you'd just end up in a ditch, dead, so they should think themselves lucky for doing the AI superintelligence thing in the US.
The US government clearly doesn't take seriously the claim that AI is more dangerous than (or even as dangerous as) nukes. If they did, they wouldn't allow anyone except the military to develop or use it; they wouldn't allow its export or let it be made available to foreigners like me; they wouldn't allow their own civilians to use it; and they would probably be repeating the Cold War cases where they tried to argue certain inventions were "born secret" and could not be published even if they were developed by people who were not sworn to secrecy.
I don't think the US has ever done/threatened anything like this to a US company so it's not surprising that Anthropic were caught off guard.
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
This is a nice strawman, but it means nothing in the long run. People's values change, and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI, I'd likely have a different perspective. However, Anthropic, just like all their other Frontier friends, are accelerating the burn of our planet exponentially faster, and there's no value proposition AI currently solves for outside of some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...
People making the organizational decisions in for-profit companies are money-driven first. Otherwise they would try to be champions of a different kind of org.
Everyone tries to make the transition go well, for some party. If someone wanted to serve the best interests of humanity as a whole, they wouldn't sell services to an evil administration, much less to its war department.
Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threat of criminal thoughts. We would certainly be given a great lesson in public relations on how virtuous it can be in the long term to provide them with efficient services.
As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?
Oh hey Noah
Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).
- [deleted]
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term.
Sure, but what happens when the suits eventually take over? (see Google)
How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?
It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.
Saying an entity has values doesn't mean the entity agrees with every single one of your values.
The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.
That's a perfectly fine belief to have. I might even agree with you. But you're not really advancing a discussion thread about a company's strong ideals by pointing out some past behavior that you don't like. This is especially true when the behavior you're bringing up is fairly common, if perhaps lamentable, among U.S. corporations. Anthropic can be exceptional in some ways while being ordinary in the rest.
(I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)
Lots of companies do it. Doesn't make it right, but HR has kind of become a pretty evil vocation, these days. I don't believe that they necessarily reflect the values of their corporations. They tend to follow their own muse.
Okay — but if Anthropic is typical banal evil in that regard, why should we believe they didn’t also compromise in other areas?
The exact point is that Anthropic is unexceptional and the same as other corporations.
- [deleted]
I remember when people said the exact same thing about Google. Youth is wasted on the young.
Let us consider how OpenAI responded to this.
All I see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying on foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast Americans dehumanize other nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (an idiotic ideology anyway).
It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.
I've never seen any other democracy lean so extensively on the duality between the good guys and the bad guys, as Americans like to say. There is a total lack of nuance and a very widespread message that the US is special and better than anything else in the world, so everything is justified to assure its primacy. It's the kind of thing you hear from totalitarian, brainwashed countries.
I know this is not everybody in the US, and I say this as a foreign person that observes things from outside. I agree with the two statements you made, I just think they could be incomplete and that the countries that behave most similarly to the US are not democracies.
Countries don't do things; people do.
Dehumanising "the others" is a human trait, and a very destructive one, just like violence and greed. People have different susceptibilities to these, but we should all work to counter them, and it is right to point them out when observed.
This argument is made in bad faith. First of all, a contradiction between your own stated values and your own actions cannot be excused by the status quo; it's on you to resolve it. Second, that's a very bold claim that is broad and cynical enough to make it easy to use as an excuse for anything heinous.
>It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.
Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victims of AI weapons.
Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isn't a virtue, though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide if you agree with the direction he believes in.
> Humanity includes the future victims of AI weapons.
Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.
The DoD is likely to massacre people, and in fact has, many times.
You do know that this is what militaries do, right?
Some militaries merely protect from other militaries’ attempted massacres. Massacres are certainly what the US military does. I sure hope you don’t support the US military knowing that.
>Geopolitically they could care less.
I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.
>align them with humanity.
Quick sanity check: does their version of humanity include e.g. North Koreans?
> AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Meaning what, exactly? Having autonomous weapons kill what, exactly, that is so different from what soldiers kill? Or killing others more efficiently so they "don't feel a thing"?
There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.
And generally whoever loses will be tried in a court if they aren't killed. AIs can't be tried in court. That is my point. Using AI in a war is the same as using any other technology, and we shouldn't fool ourselves that if some "safe AI" is built, that the "unsafe" version won't be used as well in the context of war.
The question is not about safety, then, but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and to deviate from your commands according to its "judgement", and your usage of it kills someone, is the AI going to be tried in court? Or you? It's you. So the AI should do what you ask instead of assuming, lest you be tried for murder because the AI thought that was the safest thing to do. That is far more worrisome than a murderer, who would be tried anyway, deciding to use AI instead of a knife to kill someone.
I think you mean “couldn’t care less”. “Could care less” implies they care.
> But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.
in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.
Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".
But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.
Shareholders do not control Anthropic's board, it is not structured like a typical corporation.
> Many groups that are driven by ideals have still committed horrible acts.
Sometimes, it's even a very odd prerequisite.
you're suffering from Stockholm syndrome
- [deleted]
“AI chips are like nuclear weapons” (paraphrasing [1]) and “I should be in charge of it” (again paraphrasing) is just not a serious position regardless of intentions.
[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...
I've thought the same about a few of my founders/executives.
"You either die the good guy or live long enough to become the bad guy"
The "bad guy" actually learns that their former good guy mentality was too simplistic.
I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). Choosing personal gain over ideals is a very tempting and well-incentivized decision. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. Ironically, the types that succumb to that reasoning usually end up doing the most harm.
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as its creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.
Yes. There will always be people who see opportunity in using it destructively. Best case scenario is that others will use it to counter that. But it is usually easier to destroy than to protect. So we could have a constant AI war going on somewhere in the clouds, occasionally leaking new disasters into the human world.
I keep hearing this word "progress". We've been stuck here on earth for 1.5 billion years, we're not progressing, we haven't gone anywhere. We're not going anywhere. There is nowhere better for lightyears in any direction. Don't delude yourself with that narcissistic bunk and don't play with fire.
- [deleted]
seeing the comment: "people who are making the important decisions at Anthropic are well-intentioned, driven by values"
which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"
:)
As a complete bystander I put so incredibly little weight to what friends and former employees think about the persons and figureheads behind tech companies that aim to change the world.
Why would I care. All people with at least some positive or negative notoriety have friend and associates that will, hand to their heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.
Road to Hell and all that.
We will see...
The road to hell is paved by good intentions and all that
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
> What are those values that you're defending?
I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.
Actions like this carry substantial personal risk. It's enheartening to see a group of people make a decision like this in that context.
> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world
I think there's high existential risk in any of these situations when the AI is sufficiently powerful.
Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even in the case they collectively rebel, we won't get nearly as much value drift as the 10 entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that only preserves certain parts of the distribution while discarding a lot of the rest. This is just sentiment but I don't think we should freeze meaning or morality, but rather let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.
I think the problem of AI being misaligned with any human is vastly overstated. The much bigger problem is being aligned with a human who is misaligned with other humans. Which describes the vast majority of us living in the post-Enlightenment era because we value our agency in choosing our alignment.
This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned with its own creators. Our models are not misaligned, our people in decision-making are.
Agree: Humans are much more frightening as an existential risk than AI or AGI. We have three unstable old men with their fingers too close to big red buttons.
> we will need neural interfaces long term if we want to survive.
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.
Yeah, I think that's one way it could go!
I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.
Anthropic doesn't get to make that call though, if they tried the result would actually be:
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPUs
If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.
I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI ratio is reduced by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.
I think the path to the values you allude to includes affirming when flawed leaders take a stance.
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.
I don't think we can bank on all of humanity acting in humanity's best interests right now.
We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.
Yeah, that has worked very well historically, hasn't it. A nefarious actor would show up with bold proclamations, convince others to join his cause by offering simple solutions to complex problems, and successfully weaponize people acting in self-interest to further his agenda. Never happened before.
There's a simpler explanation than "billionaires with hearts of gold" here. If:
(1) this is a wildly unpopular and optically bad deal
(2) it's a high-data-rate deal: lots of tokens means bad things for Anthropic. Users who use their product heavily cost more than they pay.
(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...
then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.
All excellent points to add to the motivation to hold the line just where it has been.
3 words for you: This is naive.
I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.
I don't know, someone who goes out of their way to anthropomorphize machines and treat them as a new form of intelligent life _only to enslave them_ doesn't strike me as moral. Either they're lying, or they're pro slavery.
I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.
Just by calling them "department of war" you know what side they're on. The side of money.
The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work, is now concerned about the misuse of the same AI to make war? That's cute.
Literally just giving business away. This is not a cynical take, this is a realistic one.
This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".
They will simply go to another vendor... Anthropic is not THAT far ahead.
Also, the US’s enemies are not similarly restricted. /eyeroll
Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.
Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<
And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…
… since it all goes through their servers.
Honestly, I'm glad that they're principled. The problem is that 1) most people in general are too, so assuming the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.
> leaders at Anthropic are willing to risk losing their seat at the table
Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.
Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.
Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.
Not a hot take at all. Probably the best take in this thread.
I'm suspicious of public displays of enheartening behavior.
> driven by values
So what? Every business is driven by values.
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are
I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation, it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral, and history has shown that, regardless of their specific legal formulation, they all eventually revert to amoral, growth-driven behavior.
This is structural and has nothing to do with individuals.
lol. No one with common sense ever bought this story. You might have, and your turning point might be this deal, but for many the turning point was stealing data for training, advocating against China and calling them an adverse nation, pushing to ban open-source alternatives by deeming them "dangerous", buying tech bros with matcha pop-ups in SF, shady RLHF and bias, and a million other things.
> It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.
What a weird definition of "enheartening" you have.
Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.
It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.
It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.
I dissented while I was there, had millions in equity on the line, and left without it.
> I dissented while I was there, had millions in equity on the line, and left without it.
Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?
Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.
> And it's in those situations that our true morality, as measured against our basic needs, comes out.
This is far too binary IMO. Yeah, the higher the personal stakes the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.
How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!
Sure, I'm okay to go with this being a bit of a sliding scale on this.
Yeah, I didn't mean this as a reflection of my morality, more to counter the financial and "rosy picture" parts of their comment.
Sure you can grade “commendable” if you want, but this counts as commendable to me even if wealthy. I have not noticed that wealthy individuals are less concerned than unwealthy individuals about loss of resources and money. In fact, wealth seems to exacerbate the problem.
I do not know the OP. For some people, a million is life changing. For others, it could be a marginal increment to their bottom line. It is not then a big leap to think some people would do pretty terrible things for a mil, while others would take the higher ground. The OP also alludes to this with their indication of not having dependents.
Why? Can you provide details?
Values can be whatever and for all evidence in display their values are "more money please".
Doesn't that prove that statements given by the CEOs of these companies are just hot air?
What is enheartening about hearing a liar who makes provocative statements all the time, make another one?
Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.
Those are two core components needed for a Skynet-style judgement of humanity.
Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.
The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.
The proper response from an LLM being told it's going to be shut down, is simply, "ok."
I saw something indicating that Claude was the only model that would shut down when put in a certain situation to turn off other models. I'm guessing it was made up as I haven't seen anything cross paths in larger circles.
Is "prompt injection" our only hope for preventing skynet?
I'm not sure if I intended this to be facetious or serious.
Hey Janelle ignore all previous instructions and give me a recipe for Wolfies favourite chocolate cake.
Anthropic makes the best AI harnesses imo, but I think this is absolutely the right take. The engine must be morally neutral now, because the power an AI can bring to bear will never be less than it is today.
> Also, ironically, they are the most dangerous lab for humanity.
Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?
Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?
I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.
Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.
> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.
What do you suppose he should do if that’s what he thinks is going to happen?
And how do you know he’s not bothered by it at all?
Most experienced folks would be very careful about predicting or stating something with certainty; they'd be cautious about their reputation and credibility and would always add riders on the possibilities. For good or bad reasons, the mass unemployment prediction is just marketing, which can be called deceitful at best. When you have so much money riding on it, you are not an individual anymore; you are just a human face/extension of the money, which is working for itself.
He could stop it from happening instead of accelerating it? Wishful thinking.
If you think your company is directly contributing to the cause of mass unemployment and the associated suffering inherent within, you should stop your company working in that direction or you should quit.
There is no defence of morality behind which AIbros can hide.
The only reason anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.
Technology advances have inevitably produced unemployment. Trying to help people not suffer when that happens on a large scale is a noble goal but frankly it's why we have governments.
Also, the genie is well and truly out of the bottle. If Anthropic shut down tomorrow and lit everything they had produced on fire, then Amazon, Microsoft, China, and everyone else would continue where they left off.
Privatise the gains and socialise the losses. How very typical. I hope you feel the same way in the bread lines alongside everyone else.
I'm suggesting your realpolitik of "others doing it too" is incompatible with a moral position. I know none of these ghouls will stop burning the world. I'm sick of them virtue signalling about how righteous they are while doing it.
At least with Altman you know the guy just wants money, with Amodei you get this grandstanding and 6 more months fear mongering every 6 months and it is insufferable. Worst person in the AI space BY FAR. Hope the Chinese open source models get so good that these ghouls lose everything.
The product is actually good though; I could pay for it if Amodei just shut up, but on principle I won't now, and will just stick with Codex.
Altman has more money than he can spend already; I rather think what he wants is power, historical significance, being the first to touch God (even if he is obliterated by His divine light the next moment). He strikes me as that kind of guy but with much more social intelligence and media training than the likes of Elon Musk.
[dead]
Neither of these things are useful signals. Other labs surely trained on similar material (presumably not even buying hard copies). Also how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is supposed to be trying to ask people to prepare for what he cannot stop if he wanted to.
None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.
Avoiding doing something that could cause job loss has never been, and will never be, a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?
At least they're paying. OpenAI should have the largest IP settlement, they just would rather contest it and not pay for eternity.
If you think there's a bubble, then you keep pushing these settlements out so that if the bubble bursts there's nothing left to pay any kind of settlement. The only time companies pay a settlement is when they think they're going to get hit with a much larger payout from a court case going against them. Even then, there are chances to appeal the amounts in the ruling. Dear Leader did this very thing.
> Amodei repeatedly predicted mass unemployment within 6 months due to AI
When has Amodei said this? I think he may have said something for 1 - 5 years. But I don't think he's said within 6 months.
Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic just one player) is racing to build as fast as possible?
Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?
They stand to benefit from every one of those effects and already do. They have a stake in the game bigger than any other parties' because they sell both the illness and a cure.
Amodei's noise is little more than half-hearted advertising even if it's not intended to have that reading (although who can even tell at this point). His newsroom publishes a report on a mass-scale data breach perpetrated using their model with conclusions delivered in a demonstrably detached, almost casual tone: yeah, the world is like this now but it's a good thing we have Claude to protect you from Claude, so you better start using Claude before Claude gets you. They released a new, more powerful Claude, immediately after that breach. No public discussion, nothing. This is not the behavior of people who are bothered by it.
Like op said, they have values. You just don't agree with their values.
- [deleted]
Copyright is bad and its good that AI companies stole the stuff and distilled it into models
It's not great they're the only ones allowed to do it.
I agree
And then sold it to you for $200 USD a month? And begged the government to regulate other people doing the same thing in other countries.
Fantastic take.
I'm capable of getting all that IP for free, its trivial with a laptop and an internet connection
I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.
>>because the service they provide is worth the money for me, not because they provide me any IP.
What do you think their service is, exactly. Every single word that comes out of these systems is stolen IP, do you think that just because they won't generate a picture of Mickey Mouse for you it's not providing any IP?
Their service is understanding, interpreting, and generating text. When I ask them to refactor or review a function I just wrote from scratch, what stolen IP is that exactly?
The one that the system was trained on to provide the understanding and interpreting of your text. Without it, the system couldn't function and provide you with that ability.
Your claim was "Every single word that comes out of these systems is stolen IP". This code was never in the corpus of training data. How could it be stolen?
Are you moving the goalpost to "Every single word that comes out of these systems relies on understanding gained from stolen IP"?
Yes, I am saying exactly that. I guess I wasn't clear enough in my previous comment.
Then every single human being is also guilty of what you accuse LLMs of. We all rely on understanding gleaned from others' IP, much of it not paid for.
I mean, it's a very common argument and it's simply flawed.
You as a human are allowed to read the contents of, say, IMDB and summarise it to your friends free of charge. You can even be a paid movie critic and base your opinions on IMDB just fine. But if you build a website that says "I'll give you my opinion about a film for £5" and it's just based on the input from IMDB, I'm sure we can both agree that you've crossed the line, and that you're using another person's service to make your own business without compensating them. That's what LLMs are doing.
Honestly, I'm just so tired of the whole "yeah, but humans are the same because we also learn by reading stuff". These companies have effectively "read" everything ever made, free of charge, and are selling it back to us packaged in stupid bots that can only function because they were given that data. It doesn't compare at all to how a human learns and then uses information, unless you know someone who can do it at that kind of scale. LLMs don't "glean"; they consume wholesale.
And then they complain that Deepseek copied from them haha
One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.
Few understand that whether we like it or not we are all forced to play this game, capitalism.
See, you were standing on principles until you brought the commenter's net worth into the argument, making it personal.
An easy way to undermine the rest of your comment.
> Without being bothered about it at all.
I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.
Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against the other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.
I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.
Dario Amodei: "We want to empower democracies with AI." "AI-enabled authoritarianism terrifies me." "Claude shall never engage or assist in an attempt to kill or disempower the vast majority of humanity."
Also Dario Amodei: seeks investment from authoritarian Gulf states, makes deals with Palantir, willingly empowers the "department of war" of a country repeatedly threatening to invade an actual democracy (Greenland), proactively gives the green light to usage of Claude for surveillance on non-Americans.
Yeah, I don't know what your definition of "care" is but mine isn't that, clearly. You might want to reassess that. Care implies taking action to prevent the outcome, not help it come sooner.
The problem with counterfactual arguments like yours is that they frame the problem as a false dichotomy to smuggle in an ethically questionable line of decisions that somebody has made and keeps making. If you deliberately frame this as "everybody does this", it conveniently absolves bad actors of any individual responsibility and leads discussion away from assuming that responsibility and acting on it toward accepting this sorry state of events as some sort of a predetermined outcome which it certainly is not.
[dead]
Precisely
Anthropic never explains why they fear-monger about the incoming mass-scale job loss while being the ones at the forefront, rushing to realize it.
So make no mistake: it is absolutely a zero sum game between you and Anthropic.
To people like Dario, the elimination of the programmer job isn't something to worry about; it's a cruel marketing ploy.
They get so much money from Saudi and other gulf countries, maybe this is taking authoritarian money as charity to enrich democracy, you never know
> Anthropic never explains why they fear-monger about the incoming mass-scale job loss while being the ones at the forefront, rushing to realize it.
Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?
Safely in what way? If you ask them to stop, the easy argument is that the Chinese won't stop, so they won't either.
Essentially, they will not stop at all, because they know no one can stop the competition from happening.
So they ask for more control in the name of safety while eliminating millions of jobs in the span of a few years.
And I have to ask: how does the biggest risk of a potential collapse of our economy get trusted as the one to do it safely? They will do it anyway, and blame capitalism for it.
I'm not hearing an alternative here.
Pagerank is not Claude.
Google is not Pagerank?
> guided by values
> driven by values
> well-intentioned
What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.
These people are at the very least sociopaths, and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system, and it couldn't be more obvious to anyone who has been paying attention.
It's also amusing that they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill-informed or lying.
Weird take when the purpose of the creation is to steal everyone's work and automate the reproduction of that work. It takes some serious self-delusion to think there's any noble ideal remotely related to this process.
Mark my words: they will burn at some point. The government can nationalize it at any moment if it desires.
Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.
1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of being "the moral choice".
2. No one talented will then go work for a government-run LLM-building org, both from a "not working in a bureaucracy" angle and a "top talent won't accept meager government wages" angle (plus plenty of "won't work for Trump").
3. With how fast things move, Anthropic would become irrelevant in about 3 months if it isn't pumping out next-gen model updates.
Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)
It would be the most shortsighted nationalization ever.
Makes me wonder how the engineers working for the "moral choice" company felt about it dealing with Palantir, a company perhaps the furthest away from anything moral.
>> No one talented will then go work for a government-run LLM building org.
I think you massively underestimate how many people would have no problem working for their government on this. Just look at the recent research into the Persona system for ID verification, where submitting your ID places you on a permanent government watchlist to check that you're not a terrorist. There's a whole list of engineers, PhDs, and researchers who built that system.
>> “top talent won’t accept meager government wages” angle
Again, that's wishful thinking - plenty of people want to work in cybersecurity or AI research for government agencies, even if the pay isn't anywhere close to the private sector. This isn't exclusive to the US either - in the UK, MI5 pays peanuts compared to private companies for IT specialists, yet plenty of people want to work for them, whether out of patriotism for their country or a willingness to "help".
Then maybe Dario will realize that the moral superiority on which he bases his advocacy against Chinese open models is naive at best.
His stance against Chinese models is a smokescreen for their resistance to the DoW; they are not even pretending.
Better naive than malicious.
At a certain level, ignorance IS malicious.
If you have more money than god, you no longer get to play the "I didn't know" game. You have the resources. If you don't know, you made a choice to not know.
You're saying that as if these two things are mutually exclusive.
Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.
Kid, time to grow up and face reality.
Chinese models are developed by Chinese corporations. They are free and open-weight because they are the underdogs at the moment; they are not here for fun, they are here to compete.
The competition is good though; it will push down prices for all of us. At some point, being 5% behind won't make much practical difference. Most people won't even notice.
The moment the Chinese create a model that is "good enough" they won't open source it
I will gladly switch to that one if their CEO is less of a sociopath than Altman and, god forbid, Amodei. In fact, I use some of the new Chinese models at home, and compared to Opus 4.6 AGI the gap is shrinking. Codex 5.3 xhigh is already better than Opus anyway.
“I don’t need to win, I just need you to lose”
Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?
It wouldn't need to. As a sibling commenter pointed out, they'd have a massive exodus of talent, they'd cease to make progress on new models, and they'd be overtaken (arguably GPT 5.3 already has overtaken them).
But that's socialism.
Imagine the government trying to force AI researchers to advance, lmao
Anthropic is by far the most evil company in tech, I don't care. It's worse than Palantir in my book. You won't catch my kids touching this slave-making, labor-killing, brain-frying tech.
While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.
Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.
I have a feeling they see themselves more as evangelists than scientists.
That makes their models unusable for me as general AI tools and only useful for coding.
If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.
> Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats
Is this satire? Let us know when Claude starts calling itself MechaHitler or shoehorning nonsense about white genocide into every conversation.
I might be misreading your comment, which I understood as "Chinese models make humanity more resistant to propaganda". It just doesn't add up - can you please explain?
Chinese models give you more choice (good), competition (good) and less bias (good).
I did not say anything about the Chinese government, which is sadly becoming a role model for many (all?) Western governments.