

We’re talking about memory recall, not mind control. Memories are fundamentally unreliable and are very easy to influence, intentionally or not.
Hypnotic regression absolutely, 100%, does not work. You’re creating false memories, which is incredibly easy to do; human memory is very, very fallible. This nonsense is exactly what triggered the satanic panic of the 1980s, when there was a sudden proliferation of stories of childhood ritual abuse, all coming from this kind of hypnotic regression, and every single one turned out to be a complete fabrication by the hypnotized subjects.
Thanks for the info! I guess that’s ultimately what I’m looking for more about: how much do we know about cellular traffic? Obviously with encryption we can’t just directly read cell signals to find out what’s being sent, so do people just record the volume of data being sent in individual packets and make educated guesses?
It seems plausible to run a simple (non-AI) algorithm to isolate probable conversations and send stripped, compressed audio chunks along with normal data. I assume that’s probably still too hard to hide, but if anyone out there knows of someone who’s looked for this stuff, I’d love to check it out.
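To make “educated guesses” concrete: a minimal traffic-analysis sketch, assuming scapy is installed and using a made-up capture filename. Encryption hides payloads, but packet sizes and timing are still visible, so you tally bytes per destination per second and look for anything audio-shaped:

```python
# Minimal traffic-analysis sketch: payloads are encrypted, but packet
# sizes and timing leak. Tally bytes per (destination, second) and look
# for a steady low-rate stream where compressed audio might hide.
# Assumes scapy; "phone_traffic.pcap" is a hypothetical capture file.
from collections import defaultdict
from scapy.all import rdpcap, IP

packets = rdpcap("phone_traffic.pcap")
volume = defaultdict(int)  # (dst_ip, second) -> total bytes

for pkt in packets:
    if IP in pkt:
        volume[(pkt[IP].dst, int(pkt.time))] += len(pkt)

# Compressed speech would show up as a persistent trickle (very roughly
# 1-10 kB/s) to one host, unlike bursty web browsing.
for (dst, sec), nbytes in sorted(volume.items(), key=lambda kv: kv[0][1]):
    print(f"t={sec}  dst={dst}  bytes={nbytes}")
```

Researchers have done this kind of fingerprinting on encrypted traffic before, which is exactly why I’d expect a constant covert audio stream to be hard to hide.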
It’s almost like they were asking for sources from people who have looked, or something.
If you’re not going to contribute, why are you wasting people’s time?
It’s a reasonable explanation, and what I typically assume to be true. Still, I’m curious about the actual mechanics, and whether Google could be doing this without the larger tech industry being aware of it.
That makes sense, but isn’t that assuming they’re processing the data on the device? I would expect them to send raw audio back to be processed by Google’s ad services. Obviously it wouldn’t work without a signal either, but that’s hardly a limitation.
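For scale, here’s the kind of data volume raw-audio upload implies. These are my assumed numbers (16 kHz 16-bit mono, mic hot 16 hours a day), not anything measured, but they show why a raw stream would be visible in anyone’s data usage:

```python
# Back-of-envelope for continuous raw-audio upload.
# Assumptions (mine, illustrative only): 16 kHz, 16-bit mono PCM,
# microphone effectively hot ~16 hours per day.
sample_rate = 16_000       # samples per second
bytes_per_sample = 2       # 16-bit PCM
hours_listening = 16

bytes_per_sec = sample_rate * bytes_per_sample           # 32,000 B/s
daily_upload = bytes_per_sec * 3600 * hours_listening    # ~1.8 GB/day
print(f"{daily_upload / 1e9:.1f} GB/day of upload")       # -> 1.8 GB/day
```

That’s gigabytes per day of upstream traffic, which is part of why most people argue any real scheme would have to compress or process on-device first.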
As someone else pointed out, how does Google’s song recognition work? That’s active without triggering the light that indicates audio recording, and it’s processing at least enough audio data to identify songs.
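As I understand it, that feature can work without shipping audio anywhere by fingerprinting locally. Here’s a toy sketch of the general Shazam-style idea (not Google’s actual algorithm, just an illustration, assuming numpy):

```python
# Toy on-device song ID: hash pairs of successive spectral peaks into a
# compact fingerprint, then match against a local database by overlap.
# Sketch of the general landmark-hashing idea, not Google's algorithm.
import numpy as np

def fingerprint(samples: np.ndarray) -> set[int]:
    """Hash pairs of successive spectral peaks into a compact fingerprint."""
    frame = 2048
    peaks = []
    for start in range(0, len(samples) - frame, frame):
        window = samples[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(window))
        peaks.append(int(np.argmax(spectrum)))
    # Consecutive-peak pairs are (roughly) time-shift-invariant landmarks.
    return {hash(pair) for pair in zip(peaks, peaks[1:])}

# Matching is just set overlap against precomputed fingerprints:
# best = max(db, key=lambda song: len(db[song] & fingerprint(clip)))
```

The point being: only tiny hashes ever need to leave the device, so song recognition isn’t evidence that raw audio is being uploaded.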
As someone relatively ignorant about the mechanics of something like this, would it not make more sense that the app would be getting this data from the Android OS, with Google’s knowledge and cooperation?
The place I see the most unsettling ads (that seem to be driven by overheard conversation) tends to be the google feed itself, so it seems reasonable to me that they could be using and selling that information to others as well, and merely disguising how the data were acquired.
Thanks for clarifying, now please refer to the poster’s original statement:
AI doesn’t grok anything. It doesn’t have any capability of understanding at all. It’s a Markov chain on steroids.
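For anyone unfamiliar with the analogy, a literal word-level Markov chain looks like this; the “on steroids” part is replacing the lookup table with a neural network and a much longer context:

```python
# Literal word-level Markov chain: the next word is sampled purely from
# counts of what followed the current word in the training text.
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain: dict[str, list[str]], start: str, n: int = 20) -> str:
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = train("the cat sat on the mat the cat ran on the grass")
print(generate(chain, "the"))
```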
I think if someone looks good in photos but not on video, they probably just have a skilled photographer.
“Most of the time, when people ask me a question, it’s the wrong question and they just didn’t know to ask a different question instead.”
“I’ve tried asking ChatGPT “How do I get the relative path from a string that might be either an absolute URI or a relative path?” It spat out 15 lines of code for doing it manually. I ain’t gonna throw that maintenance burden into my codebase. So I clarified: “I want a library that does this in a single line.” And it found one.”
You see the irony, right? I genuinely can’t fathom your intent in telling this story, but it’s an absolutely stellar example.
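For the record, the story never names the library it found, but in Python terms the one-liner being asked for already exists in the standard library:

```python
# Get the path from a string that may be an absolute URI or a relative
# path. urlparse leaves a bare relative path untouched and extracts the
# path component from a full URI.
from urllib.parse import urlparse

for s in ("https://example.com/docs/page?x=1", "docs/page", "/docs/page"):
    print(urlparse(s).path)  # -> /docs/page, docs/page, /docs/page
```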
You can’t give a good answer when people don’t ask the right questions. ChatGPT’s answers are only as good as the prompts. As for it being a “plagiarizing, shameless bullshitter of a monkey paw,” I still don’t think it’s all that different from what you get from people. Ask a coworker the same question you asked ChatGPT, and you’ll probably get a line copied from a Google search that may or may not work.
How is that structurally different from how a human answers a question? We repeat an answer we “know” if possible, assemble something from fragments of knowledge if not, and just make something up from basically nothing if needed. The main difference I see is a small degree of self-reflection: the ability to estimate how good or bad an answer likely is. And frankly, plenty of humans are terrible at that too.
I don’t think you’re on the right track here. Most states already have laws covering “revenge porn,” creating sexual media of minors, Photoshopped porn, all kinds of things very similar to AI-generated deepfakes. In some cases AI deepfakes fall under those existing laws, but often they don’t; or, because of how the law is written, they sit in a legal grey area that will be argued in the courts for years.
Nowhere is anyone suggesting that making deepfakes should be prosecuted as rape; that’s complete nonsense. The question is where new laws need to be written, or existing laws updated, to make sure AI porn is treated the same as other illegal uses of someone’s likeness to make porn.