• Lost_My_Mind@lemmy.world

      > Given the success of language models, it should be moderately trivial to train one to recognize when a factual statement is made and apply the above warning.

      Is it??? Because I feel like context is a real weak point for bots and AI to figure out.

      Hell, it feels like half the HUMANS don’t know what’s factually true. Is the COVID vaccine a society-saving development that saved millions of lives? Or is it full of Bill Gates mind-control computer chips to rule over the portion of society dumb enough to get the vaccine willingly?

      Who’s to say?
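
      And even setting truth aside, “recognize when a factual statement is made” is a whole NLP task of its own (claim detection). In practice it means fine-tuning a binary classifier on sentences labelled claim vs. not-claim. Here’s a minimal sketch, assuming the transformers and datasets libraries; the model choice and toy examples are hypothetical stand-ins for a real claim-detection dataset such as ClaimBuster:

      ```python
      # Hedged sketch: fine-tune a binary "is this a factual claim?" classifier.
      # The toy data and labels below are hypothetical stand-ins for a real
      # claim-detection dataset.
      from datasets import Dataset
      from transformers import (
          AutoModelForSequenceClassification,
          AutoTokenizer,
          Trainer,
          TrainingArguments,
      )

      # 1 = claim-shaped statement, 0 = opinion/question/other.
      train = Dataset.from_dict({
          "text": [
              "The COVID vaccine saved millions of lives.",
              "Water boils at 100 degrees Celsius at sea level.",
              "I feel like context is a real weak point for bots.",
              "Who's to say?",
          ],
          "label": [1, 1, 0, 0],
      })

      tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
      model = AutoModelForSequenceClassification.from_pretrained(
          "distilbert-base-uncased", num_labels=2
      )

      # Tokenize the sentences so the model can consume them.
      train = train.map(
          lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
          batched=True,
      )

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="claim-detector", num_train_epochs=3),
          train_dataset=train,
      )
      trainer.train()

      # The model now scores whether text *looks like* a factual claim.
      # It says nothing about whether the claim is actually true.
      ```

      Even if that works, the classifier only learns whether a sentence is shaped like a factual claim, not whether the claim is true. Those are very different problems.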

    • my_hat_stinks@programming.dev

      I’m not sure where you’re getting the idea that language models are effective lie detectors; it’s widely known that LLMs have no concept of truth and hallucinate constantly.

      And that’s before we even get into the inherent biases and moral judgements required for any form of truth detection.