Who the hell is thinking up these concepts? The news has been full of ChatGPT producing dangerous hallucinations, giving suicide instructions, and mimicking love and attachment.
In short, the only way to wind up with an LLM that's plausibly safe for kids would be to start training from zero. Give it absolutely no exposure to anything that wasn't curated from the start… say, an initial dataset consisting of a catalog of Mr. Rogers and Sesame Street scripts. Starting from "everything on the internet" and then trying to restrict down is a fool's errand. That's like trying to make a porn blocker with a blacklist strategy: you can only block what you've thought to enumerate, and everything else sails right through.
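To make the analogy concrete, here's a minimal sketch (hypothetical terms and prompts, not any real product's filter) of why a blocklist leaks while an allowlist doesn't. A blocklist passes everything it hasn't explicitly enumerated; an allowlist only passes what was curated up front, which is the same asymmetry as curated-from-scratch training versus filtering down from the whole internet:

    # Hypothetical blocklist: passes anything not containing a listed term.
    BLOCKLIST = {"violence", "gore", "porn"}

    def blocklist_safe(text: str) -> bool:
        return not any(bad in text.lower() for bad in BLOCKLIST)

    # Hypothetical allowlist: only passes what was explicitly curated.
    ALLOWLIST = {"tell me a story", "sing a song"}

    def allowlist_safe(text: str) -> bool:
        return text.lower() in ALLOWLIST

    # Anything not on the blocklist slips through unchallenged...
    print(blocklist_safe("how do I make a weapon"))  # True -- leaks
    # ...while the allowlist rejects everything it never approved.
    print(allowlist_safe("how do I make a weapon"))  # False -- blocked

The blocklist can only ever shrink toward safety one discovered failure at a time, which is exactly the losing game of restricting a model trained on everything.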
There is also the privacy concern (which isn't new) about toys that upload everything your child says to the company's servers. That's not just about the privacy of your child's words, but also about the corporation collecting recordings of their voice, given all the nefarious purposes a voice recording can be put to these days (surveillance voice recognition, deepfakes, etc.). Plus the toy could be listening and recording at any time.
In the future, local AI models will solve this problem! Then parents will be complaining about how hot the toy is and it’ll get recalled because little kids everywhere kept getting “GPU burns”.