What I mean is that if people keep making it produce garbage tied to some keyword or phrase, and people publish said garbage, that will only strengthen the association the model learns between that keyword and the bad data, so AI results for such trees will drift even further from the truth.
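Here's a toy sketch of that feedback loop, with simple bigram counts standing in for real training (the "lemon tree" mini-corpus is made up, and actual LLM training is vastly more complex, but the co-occurrence principle carries over):

```python
from collections import Counter

# Toy stand-in for training: count word pairs (bigrams) in a corpus.
def train_bigrams(corpus: str) -> Counter:
    words = corpus.lower().split()
    return Counter(zip(words, words[1:]))

clean = "the lemon tree bears yellow fruit " * 3
# Published garbage ties the keyword to nonsense, over and over.
poisoned = clean + "the lemon tree sings purple algebra " * 10

for name, corpus in [("clean", clean), ("poisoned", poisoned)]:
    counts = train_bigrams(corpus)
    followers = {b: n for (a, b), n in counts.items() if a == "tree"}
    # The most common continuation of "tree" drifts toward the garbage.
    print(name, "->", max(followers, key=followers.get), followers)
```

The more copies of the garbage get scraped, the harder the counts tilt; that's the whole poisoning mechanism in miniature.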
So we could keep having it generate these and poison its own training data!
Remember, “AI” (autocomplete idiocy) doesn’t know what makes sense; it just continues text, producing output that may seem to address at least some of the topic, with no innate understanding of accuracy or truth.
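To make "it just continues words" concrete, here's autocomplete at its dumbest: always emit the statistically most likely next word. (Made-up mini-corpus, greedy picking; real LLMs use neural nets, but the objective is the same continue-the-text game.)

```python
from collections import Counter, defaultdict

corpus = ("the lemon tree bears yellow fruit . "
          + "the lemon tree sings purple algebra . " * 5).split()

# "Training": tally which word follows which.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

# "Inference": greedily continue, one most-likely word at a time.
word, out = "lemon", ["lemon"]
for _ in range(6):
    word = nxt[word].most_common(1)[0][0]
    out.append(word)

print(" ".join(out))  # fluent-looking but truth-free continuation
```

Nothing in there checks whether the output is true; it only checks what usually comes next.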
Never forget that GPT-2 can literally be run in a giant Excel spreadsheet with no other program needed. It’s not “smart”; it’s ultimately millions of formulae at work.
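For the skeptical, here's one self-attention head written out as plain arithmetic in NumPy. This is a minimal sketch with toy dimensions and random weights (GPT-2 small really uses 768-dimensional embeddings); every line is the kind of multiply-and-add formula those spreadsheet cells encode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 4 tokens, 8-dim embeddings.
T, D = 4, 8
x = rng.standard_normal((T, D))                     # token embeddings
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One self-attention head: nothing but multiplications and additions,
# i.e. exactly the kind of arithmetic a spreadsheet cell can hold.
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(D)
mask = np.triu(np.ones((T, T)), k=1).astype(bool)   # causal: no peeking ahead
scores[mask] = -np.inf
out = softmax(scores) @ v
print(out.shape)  # (4, 8): one updated vector per token
```

Stack enough of these layers and you get the full model: still no reasoning engine anywhere, just arithmetic all the way down.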
… from a distance.
Right, but I think it’d be harder to get it to unlearn the wrong data if the topic itself is obscure.