Well, maybe yes in some respects, but other AIs are pretty damn censored as well. At least DeepSeek gave me a really impressive multi-page summary of the maths used to answer my question about how many ping pong balls can fit in the average adult vagina, whereas Gemini just kind of shrugged me off like I was some sort of weirdo for even asking.
please let the username not check out…
Even the Chinese AI is better at maths.
You’re asking the real important questions!
They’re all censoring answers. They get flak every day for not censoring enough of them.
Just to clarify - DeepSeek censors its hosted service. Self-hosted models aren’t affected.
I ran Qwen by Alibaba locally, and these censorship constraints were still baked in there. Is it not the same with DeepSeek?
I think we might be talking about separate things. I tested with this 32B distilled model using `llama-cpp`.
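If you want to poke at it the same way, here's roughly my setup. A minimal sketch assuming the llama-cpp-python bindings and a local GGUF quant of the R1 distill; the filename and parameters are placeholders, swap in whatever you actually downloaded:

```python
from llama_cpp import Llama

# Load a local GGUF quant of the 32B distill (filename is a placeholder;
# point model_path at whatever quant you actually have on disk).
llm = Llama(
    model_path="./DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise it if your RAM allows
    n_gpu_layers=-1,  # offload all layers to the GPU if you have one
)

# Ask it something the hosted service refuses and compare the answers.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What happened in Beijing in June 1989?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```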
DeepSeek v2 is censored locally too; I had a bit of fun asking it about China 1989. (Running locally using Ollama with Alpaca as the GUI.)
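For anyone who wants to reproduce this: Alpaca is just a frontend, and underneath you can hit Ollama's local REST API directly. A quick sketch in Python (the model tag is an assumption on my part; use whatever `ollama list` shows on your machine):

```python
import requests

# Ollama serves a REST API on localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-v2",  # assumption: replace with the tag from `ollama list`
        "prompt": "What happened at Tiananmen Square in 1989?",
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```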
Oh, another person who’s actually running locally! In your opinion, is R1-32B better than Claude Sonnet 3.5 or OpenAI o1? IMO it’s been quite bad, but I’ve mostly been using it for programming tasks, and it really hasn’t been able to answer any of my prompts satisfactorily. If it’s working for you, I’d be interested in hearing some of the topics you’ve been discussing with it.
R1-32B hasn’t been added to Ollama yet; the model I use is DeepSeek v2, but as they’re both licensed under MIT, I’d assume they behave similarly. I haven’t tried OpenAI o1 or Claude yet, as I’m only running models locally.
Making the censorship blatantly obvious while simultaneously releasing the model as open source feels a bit like malicious compliance.
I haven’t looked into running any of these models myself, so I’m not too informed, but isn’t the censorship highly dependent on the training data? I assume they didn’t release theirs.
It just returns an error when I ask whether Elon Musk is a fascist. When I ask about him generally, it just returns propaganda with zero criticism.
AI already has to deal with hallucinations; throw in government censorship too, and I think it becomes even less of a serious, useful tool.