

Ok, hear me out: the output is all made up. In that context, everything is acceptable, because it's just a reflection of the whole of the inputs.
Again, I think this stems from a misunderstanding of these systems. They're not like a search engine (though, again, the companies would like you to believe they are).
We can find the output offensive, off-putting, gross, etc., but there is no real right or wrong with LLMs the way they are now. There is only statistical probability that a) we'll understand the output and b) it approximates some currently held truth.
Put another way: LLMs convincingly imitate language, and therefore also convincingly imitate facts. But it's all facsimile.
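To make that concrete, here's a toy sketch of next-token sampling (pure illustration; the prompt and the probability table are made up by me and are nothing like a real model's internals). The thing to notice is that nothing in it checks whether a continuation is true, only how likely the words are to follow each other:

```python
import random

# Toy "model": just a table of continuation probabilities for one prompt.
# (Hypothetical numbers for illustration only.)
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": [
        ("Canberra", 0.55),   # happens to be correct
        ("Sydney", 0.40),     # plausible-sounding, wrong
        ("Melbourne", 0.05),  # also plausible-sounding, wrong
    ],
}

def sample_next_token(prompt: str) -> str:
    """Pick a continuation weighted by probability alone -- no fact check."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    for _ in range(5):
        print(prompt, sample_next_token(prompt))
    # Sometimes "Canberra", sometimes "Sydney" -- same mechanism either way.
```

Sometimes it lands on the right answer, sometimes it doesn't, and the mechanism is identical in both cases. That's the sense in which correctness is incidental.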
So maybe we’re kinda staring at two sides of the same coin. Because yeah, you’re not misrepresenting my point.
But wait, there’s a deeper point I’ve been trying to make.
You’re right that I am also saying it’s all bullshit, even when it’s “right”. And the fact that we’d consider artificially generated, completely made-up text libellous indicates to me that we (as a larger society) have failed to understand how these tools work. If anyone takes what they say to be factual, they are mistaken.
If our feelings are hurt because a “make shit up machine” makes shit up… well, we’re holding the phone wrong.
My point is that we’ve been led to believe they are something more concrete, more exact, more stable, and much more factual than they are, and that is worth challenging and holding these companies to account for. I hope cases like these are a forcing function for that.
That’s it. Hopefully my PoV is clearer (not saying it’s right).