This seems to be the source report: https://openai.com/index/disrupting-malicious-ai-uses/ (since it would of course kill CNN, like almost all media outlets, to link to a non-affiliated primary source...)
Does this level of detail seem strange to anybody else? Shining such a strong light on OpenAI's moderation/manual review efforts seems like it would draw unwanted attention to the fact that ChatGPT conversations are anything but private, and seems somewhat at odds with their recent outrage about the subpoena for user chats in the NYT case.
Manual reviews of sensitive data are ok as long as their own employees are the reviewers, I suppose?
From Anthropic's recent blog post: https://www.anthropic.com/news/detecting-and-preventing-dist...
> By examining request metadata, we were able to trace these accounts to specific researchers at the lab.
> The volume, structure, and focus of the prompts were distinct from normal usage patterns
Clearly some employees of Anthropic personally looked at individual inputs and outputs of their API.
Yes, it is either a lie or an admission that OpenAI is a global surveillance mechanism.
Alas! My vision of One Fed Per Child hath come to pass!
This feels very planted. Wouldn't be surprised if this is some attempt to look patriotic with the DoW turning up the heat against Anthropic.
That creepy feeling of "being watched" has mostly kept me from taking advantage of any SOTA models; I only dabble in a few local ones.
The level of detail does not seem surprising. They're both charged with maintaining a facade of privacy while eliminating any and all misuse. Certainly they heavily analyze basically everything given to them.
And generally as a society we've been ok with basically zero privacy as long as the data we send stays inside the company we sent it to. Google reads all your emails? Sure thing, read away, just don't send them to the popo. Apple knows when you're ovulating? No problem, just don't tell Amazon. Etc.