It’s insane that this is even legal.
There’s a separate command called visudo for this purpose.
You CAN use any ol’ text editor, but visudo has built-in validation specific to the sudoers file. This is helpful because sudoers syntax is unique and arcane, and errors are potentially quite harmful.
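For instance, a typical session looks something like this (the file path is just the default; the sudoers line is purely illustrative, not a recommendation):

```shell
# Open /etc/sudoers safely; visudo locks the file and refuses to save
# if the edit doesn't parse as valid sudoers syntax.
sudo visudo

# Pick a specific editor for this one session (Debian's visudo honors
# EDITOR by default):
sudo EDITOR=nano visudo

# Syntax-check an existing sudoers file without editing it at all:
sudo visudo -c -f /etc/sudoers

# Example of the kind of line you'd add inside the editor:
#   alice ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx
```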
There are a handful of non-default apps I’ve used across my last 3-4 distros at least:
mpv - the best video player, period. Minimalist UI, maximalist configuration options. I’ve been using it for many years across many OSes and at this point everything else feels wrong.
Geany - My favorite GUI text editor on Linux.
Foliate - the simplest eBook reader I’ve found.
Strawberry - It’s “fine”. Honestly, I’ve never found a music player on Linux that I really liked. I keep falling back to Strawberry because it’s familiar and generally works as expected.
If the guesser wins routinely, this suggests that the thinker can access about 2^20 ≈ 1 million possible items in the few seconds allotted.
I’m not sure this premise is sound. Are there not infinitely more than 2^20 permutations of the game?
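For what it’s worth, the 2^20 figure is just the number of outcomes distinguishable by 20 binary answers; a quick sanity check of the arithmetic:

```python
import math

# 20 yes/no answers can distinguish at most 2**20 distinct outcomes.
candidates = 2 ** 20
print(candidates)             # 1048576 -- "about 1 million"

# Equivalently, pinning down one item out of ~1M conveys about 20 bits.
print(math.log2(candidates))  # 20.0
```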
This would be true if the questions were preset, but in reality the game requires the guesser to make choices as it progresses. These choices can be quite complex, relying on a well-developed theory of mind and shared cultural context. Not all the information is internal to the mechanics of the game.
The unspoken rules of the game also require the thinker to pick something that can plausibly be solved. Picking something outlandishly obscure would be frowned upon. The game is partly cooperative in that sense.
If you were to reduce the game to “guess the number I’m thinking of between 0 and infinity”, then it wouldn’t be very fun, it would not persist across time and cultures, and you wouldn’t be studying it. But you might get close to a 0% win rate (or…maybe not?).
I’d guess that most of the “few seconds” the thinker spends is actually to reduce the number of candidates to something reasonable within the context of the game. If that’s true, it says nothing whatsoever about the upper bound of possibilities they are capable of considering.
Idea for further research: establish a “30 questions” game and compare win rates over time. Hypothesis: the win rate in 30 questions would fall to similar levels as with “20 questions” as players gained experience with the new mechanics and optimized their internal selection process.
our brain will never extract more than 10 bits/s
Aren’t there real recorded cases of eidetic memory? E.g. The Mind of a Mnemonist. I have not re-read that book with a mind toward information theory, so perhaps I am overestimating/misremembering the true information content of his memories.
Related feature on my wish list: I’d love a way to basically fork a feed based on regex pattern matching. This would be useful for some premium feeds that lump multiple podcasts together. For example, one of my Patreon feeds includes three shows: the ad-free main feed, the first-tier weekly premium feed, and the second-tier monthly premium feed.
I don’t want to filter them out because I DO want to listen to all of them, but for organizational purposes I don’t want them lumped together. I’d prefer to display these as two or three separate podcasts in my display.
Another example is the Maximum Fun premium BoCo feed. They include the bonus content for ALL their shows (which is…a lot) in a single feed. I only listen to about half a dozen, and even that is a bit of a mess in one feed!
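Something like this is what I have in mind; a rough Python sketch, where the episode titles and tier patterns are all made-up examples (a real implementation would pull titles from the feed itself):

```python
import re

# Hypothetical episode titles from one combined premium feed.
episodes = [
    "Main Show 412 (ad-free)",
    "Weekly Bonus: mailbag",
    "Main Show 413 (ad-free)",
    "Monthly Deep Dive: archives",
]

# Each "virtual feed" is a name plus a regex; first match wins.
forks = [
    ("Main Show", re.compile(r"^Main Show \d+")),
    ("Weekly Bonus", re.compile(r"^Weekly Bonus")),
    ("Monthly Deep Dive", re.compile(r"^Monthly Deep Dive")),
]

def fork_feed(items):
    """Split one feed's items into per-show buckets by title pattern."""
    buckets = {name: [] for name, _ in forks}
    buckets["(unmatched)"] = []
    for title in items:
        for name, pattern in forks:
            if pattern.search(title):
                buckets[name].append(title)
                break
        else:
            buckets["(unmatched)"].append(title)
    return buckets

print(fork_feed(episodes))
```

The app would then render each bucket as its own subscription in the UI.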
Great points, thanks.
Can you clarify what you mean by “local decryption”? I thought Proton and Tuta work pretty much the same way, but perhaps there’s a distinction I’m missing.
One thing I like about Tuta is that it has the option to cache your messages in localstorage in your browser so you can do full-text search. FWIW, I think Proton added a similar feature recently, though I have not tried it. I imagine neither would work very well with large mailboxes; probably better to configure a real email client.
Do they offer cloud storage now? From what I can see on their web site, it’s 500GB…just for email. I mean sure, that’s cool, but it would take me several lifetimes to accumulate 500GB of email so it’s not much of a selling point to me.
It’s a good email service, anyway. I’ve been using the free tier for a few years. Similar to Proton, and in theory Tuta is more private because they encrypt the headers as well as the message body.
Whisper is open source. GPT-2 was, too.
Absolutely this. Phones are the primary device for Gen Z. Phone use doesn’t develop tech skills because there’s barely anything you can do with the phones. This is particularly true with iOS, but still applies to Android.
Even as an IT administrator, there’s hardly anything I can do when troubleshooting phone problems. Oh, push notifications aren’t going through? Well, there are no useful logs or anything for me to look at, so…cool. It makes me crazy how little visibility I have into anything on iPhones or iPads. And nobody manages “Android” in general; at best they manage like two specific models of one specific brand (usually Samsung or Google). It’s impossible to manage arbitrary Android phones because there’s so little standardization and so little control over the software in the general case.
Is this legit? This is the first time I’ve heard of human neurons used for such a purpose. Kind of surprised that’s legal. Instinctively, I feel like a “human brain organoid” is close enough to a human that you cannot wave away the potential for consciousness so easily. At what point does something like this deserve human rights?
I notice that the paper is published in Frontiers, the same journal that let the notorious AI-generated giant-rat-testicles image get published. They are not highly regarded in general.
DuckDuckGo is an easy first step. It’s free, publicly available, and familiar to anyone who is used to Google. Results are sourced largely from Bing, so there is second-hand rot, but IMHO there was a tipping point in 2023 where DDG’s results became generally more useful than Google’s or Bing’s. (That’s my personal experience; YMMV.) And they’re not putting half-assed AI implementations front and center (though they have some experimental features you can play with if you want).
If you want something AI-driven, Perplexity.ai is pretty good. Bing Chat is worth looking at, but last I checked it was still too hallucinatory to use for general search, and the UI is awful.
I’ve been using Kagi for a while now and I find its quick summaries (which are not displayed by default for web searches) much, much better than this. For example, here’s what Kagi’s “quick answer” feature gives me with this search term:
Room for improvement, sure, but it’s not hallucinating anything, and it cites its sources. That’s the bare minimum anyone should tolerate, and yet most of the stuff out there falls wayyyyy short.
I recently upgraded to a 7900 XTX on Debian stable, as well. I’m running the newest kernel from Debian’s backports repo (6.6, I think), and I didn’t have that same problem.
I did have other problems with OpenCL, though. I made a thread about this and solved it with some trouble. Check my post history if you’re interested. I hope it helps. I can take a closer look at my now-working system for comparison if you have further issues.
IT WORKS NOW! I will need time to run additional tests, but the gist of my solution was:
1. Backported llvm-18 from sid following the guide you linked at https://wiki.debian.org/SimpleBackportCreation
2. After compiling and installing all those deb files, installed the “jammy” version of amdgpu-install_6.0.60002-1.deb from https://www.amd.com/en/support/linux-drivers
3. Downloaded the latest linux-firmware sources from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git and simply copied all the files from its lib/firmware/amdgpu folder into my system’s /lib/firmware/amdgpu. Got that idea from https://discussion.fedoraproject.org/t/amdgpu-doesnt-seem-to-function-with-navi-31-rx-7900-xtx/72647
4. sudo update-initramfs -u && sudo reboot
I’m not totally sure that step 3 was sane or necessary. Perhaps the missing piece before that was simply that I needed to manually update my initramfs? I’ve tried like a million things at this point and my system is dirty, so I will probably roll back to my snapshot from before all of this and re-do it with the minimal steps when I have time.
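If anyone wants to check the initramfs theory on their own machine, this is one way to see which firmware actually got packed in; lsinitramfs comes with Debian’s initramfs-tools, and the image path may differ on your setup:

```shell
# List the amdgpu firmware files actually inside the current initramfs.
# If the new gfx11* blobs aren't in here, copying them into
# /lib/firmware does nothing until update-initramfs is re-run.
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep amdgpu/gfx11
```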
Anyway, I was able to run a real-world OpenCL benchmark, and it’s crazy-fast compared to my old GTX 1080. Actually a bigger difference than I expected. Like 6x.
THANKS FOR THE HELP!
Thanks for the links! I’ve never attempted making my own backport before. I’ll give it a shot. I might also try re-upgrading to sid to see if I can wrangle it a little differently. Maybe I don’t actually need mesa-opencl-icd if I’m installing AMD’s installer afterwards anyway. At least, I found something to that effect in a different but similar discussion.
Update: I upgraded to Sid. Unfortunately, mesa-opencl-icd depends on libclc-17, which uninstalls libclc-18. So I can’t get OpenCL working while the correct libclc is installed.
No idea where to go from here. I’ll probably restore my Bookworm snapshot, since I don’t want to be on Sid if it doesn’t solve this problem.
Update: Running amdgpu-install did not provide those files. There were a few errors regarding vulkan packages when I attempted it, I guess because it assumes Ubuntu repos. Trying with just opencl and not vulkan succeeded, but clinfo still reported the missing files.
I don’t think I can get this working without a whole newer llvm.
Ah, somehow I didn’t see 18 there and only looked at 17. Thanks!
I tried pulling just the one package from the sid repo, but that created a cascade of dependencies, including all of llvm. I was able to get those files installed but not able to get clinfo to succeed. I also tried installing llvm-19 from the repo at https://apt.llvm.org/, with similar results. clinfo didn’t throw the fatal errors anymore, but it didn’t work, either. It still reported “Number of devices 0”, and OpenCL-based tools crashed anyway. Not with the same error, but with something generic about not finding a device or possibly having corrupt drivers.
Should I bite the bullet and do a full upgrade to sid, or is there some way to do this more precisely that won’t muck up Bookworm?
Can you explain more about your workflow? Do the Nix packages have their own isolated dependency resolution? How does it work when Debian packages depend on a library you get from Nix, or vice-versa?
Thanks, that’s good advice. There are lower-numbered gfx* files in there. 900, 902, 904, 906. No 1030 or 1100. Same after reinstalling.
Looks like these files are actually provided by the libclc-15 package. libclc-16 has the same set of files. Even libclc-17 from sid has the same files. So I guess upgrading to testing/unstable wouldn’t help.
apt-file search gfx1100-amdgcn-mesa-mesa3d.bc yields no results, so I guess I need to go outside of the Debian repos. I’ll try the AMD package tonight.
It used to say “container-native”. They recently changed the wording, but there was no technical change.
It’s a Linux distro that runs locally, like any other. It has no particular tie-in with any cloud services. If Flatpak, Docker/Podman, Distrobox, Homebrew, etc. are “cloud” just because they involve downloading packages hosted on the internet, then I don’t know why you wouldn’t call “traditional” package managers like apt, dnf, zypper, etc. “cloud” as well. 🤷 So yeah, I feel your confusion.
The big difference compared to something like Debian or vanilla Fedora is that Bazzite is an “immutable” distro. What this means is that the OS image is monolithic and you don’t make changes directly to the system. Instead, you install apps and utilities via containers, or as a last resort you can apply a layer on top of the OS using rpm-ostree.
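To make that concrete, the day-to-day workflow looks roughly like this (rpm-ostree is the real tool on these distros; the package name here is just an example):

```shell
# Inspect the current and pending OS images (deployments).
rpm-ostree status

# Layer a package on top of the base image; it becomes part of a new
# deployment that takes effect after a reboot.
sudo rpm-ostree install htop

# Remove the layered package and go back toward the pristine base image.
sudo rpm-ostree uninstall htop
```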
The only thing cloud-related about any of this is that atomic OS images and containers are more common in the server space than the desktop space.