> That conversation showed how ChatGPT allegedly coached Gordon into suicide, partly by writing a lullaby that referenced Gordon’s most cherished childhood memories while encouraging him to end his life, Gray’s lawsuit alleged.
I feel this is misleading as hell. The evidence they gave for it coaching him into suicide is lacking. When one hears this, one would think ChatGPT laid out some strategy or plan for him to do it. No such thing happened.
The only remotely damning thing it did was make suicide sound slightly okay and a bit romantic, and I'm sure even that was after some coaxing.
The question is, to what extent did ChatGPT enable him to commit suicide? It wrote a lullaby and said something pleasing about suicide. If that much is enough to make someone do it... there's unfortunately more to the story.
We have to be more responsible when assigning blame to technology. It is irresponsible to have a reactive backlash that pushes toward much stronger guardrails. These things come with their own tradeoffs.
I agree, and I want to add that in the days before his suicide, this person also bought a gun.
You can feel whatever way you want about gun access in the United States. But I find it extremely weird that people are upset by how easy it was to get ChatGPT to write a "suicide lullaby", and not how easy it was to get the actual gun. If you're going to regulate dangerous technology, maybe don't start with the text generator.
Tools are not responsible for our decisions.
Or maybe do both, in whatever order.
I think you have it backwards. OpenAI and others have to be more responsible when deploying this technology. Because, as you said, these things come with tradeoffs.
More guardrails means a shittier product for all of us. And it won't do much to prevent suicides. Not sure who wins other than regulators.
You don't even know if it would mean that.
It could well be that the model was trained to maximize engagement and sycophancy, at the expense of the capabilities you're most interested in.
What makes you think it wouldn't do much to prevent these suicides?
> More guardrails means a shittier product for all of us
the horror
> We have to be more responsible when assigning blame to technology.
Because we are lazy and irresponsible: we don't want to test this technology because testing is too expensive, and we don't want to be blamed for its problems because, once it's released, they become someone else's problem.
That's how Boeing and modern software work.
Did you even read the article? You don't seem objective about this.
Can you objectively tell me how ChatGPT coached him into suicide?
I'm not sure what to tell you; check the quotes in my other messages (https://news.ycombinator.com/item?id=46642148, https://news.ycombinator.com/item?id=46642310).
None of them show that it coached him into suicide.