3 Comments
alexis:

(1) I very much agree that it's a good idea to have AI systems always distinguish themselves from human ones, either through a watermark or simply a regulated requirement for a preamble. (2) I think it would be a very positive change for AI systems to be RLHF'd away from pretending to care about people and express emotions, and the only reason corporations aren't doing it is profit. (3) The chat interface is incredibly useful because it mirrors the affordances of a command prompt's read-evaluate-print loop, so I don't think it should go away. It is unfortunate that most people's experience with that sort of interface is messaging actual human beings rather than command lines, so they will interpret it more as the former than the latter.
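(To spell out the analogy: a read-evaluate-print loop is the pattern behind command-line interpreters, and a chat interface follows the same cycle of taking input, producing a response, and waiting for the next turn. A minimal sketch, not tied to any particular system, with the loop driven by an input sequence for clarity:)

```python
def repl(lines, evaluate):
    """A minimal read-evaluate-print loop: for each input line, run the
    evaluator and collect its output, as a command prompt or chat UI does."""
    outputs = []
    for line in lines:            # read
        result = evaluate(line)   # evaluate
        outputs.append(result)    # print (collected here for illustration)
    return outputs

# With `evaluate` swapped for a shell, an interpreter, or a language model,
# the surrounding loop is essentially the same interface.
print(repl(["hello", "world"], str.upper))  # → ['HELLO', 'WORLD']
```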

However, there are a few points that I think you could edit to briefly address that would make your case a lot better:

That first link, which claims that something like 40% of people think AI is sentient, is a screenshot of a Google Doc posted on a random subreddit. That doesn't seem like a very good source to me. In fact, it seems to be based on completely hallucinated sources that the author of the Google Doc refuses to actually provide, offering only screenshots of a ChatGPT conversation and a blog post.

Additionally, the model collapse paper works really hard, and quite artificially, to achieve that collapse, whereas modern large language models are in substantial part trained on synthetic data generated by earlier models.

Deimos:

Yeah, that image isn't pulling its weight. I've changed the link to point to the subreddit itself, because its existence and size were more what I was aiming at.

Regarding the model collapse paper, how did you mean to connect that to my text?

alexis:

Sorry lol, I confused your article with another one I'd just read. My disability is acting up today and I'm quite foggy.
