I’m not sure where you’re getting the idea that language models are effective lie detectors; it’s widely known that LLMs have no concept of truth and hallucinate constantly.
And that’s before we even get into the inherent biases and moral judgements required for any form of truth detection.