What is it you’re struggling to understand? Like is there a concept or command or something that just isn’t clicking?
I know it’s not helpful or what you’re asking for but honestly, just learn docker and all of these kinds of problems just go away.
Once you learn to spin up one docker container, you can spin up nearly any of them (there’s a couple of extra steps if you need GPU acceleration, things like that).
You’ll be kicking yourself that you didn’t make the jump earlier. Sounds like you’re already using Linux too, so it’s really not that big a leap.
I’m more upset that RSS is dying off.
TOP is a measurement of performance (Trillions of Operations Per second), a CUDA core is not, it’s a physical processing unit on the chip.
TOP is the AI version of a FLOP, the thing we typically use to measure graphics performance.
Yup I also use ntfy and it’s brilliant, easy to send notification events to it from almost anything and the android app is very responsive.
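For anyone curious, pushing a notification is basically a one-liner - a minimal sketch, assuming the public ntfy.sh server and a made-up topic name:

```sh
# Publish a message to an ntfy topic; anything subscribed to that topic (e.g. the Android app) gets the push
curl -d "Backup finished" https://ntfy.sh/my-backup-alerts
```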
Another vote for restic, best backup software I’ve ever used.
I don’t know why LTT are somehow the bad guys in this, they weren’t the only ones to realise that the extension messed with their affiliate links and it’s not like it’s a thing to publicly shout about every dropped sponsor.
I bet LTT has dropped plenty of sponsors without making a big public deal about it.
There’s the NanoKVM, similar idea but cheaper. I have one and it’s okay but a bit slow, I’m hoping the JetKVM is faster.
For that Chromebook-like experience, I genuinely think ChromeOS Flex is probably a better option for most people (privacy concerns notwithstanding).
People blame Google for the death of Jabber because of one blog post from a disgruntled contributor, but the truth is Jabber was never popular and Google’s chat died as well.
Jabber was a mess; most of the clients were barely compatible with each other and it was a wild west of feature support. Some clients were well featured, with the ability to send richer messages, but those features typically only worked with a specific server and the same client on the other end. Jabber did a crap job of making sure clients and servers interacted properly with each other and didn’t push the standards quickly enough, forcing clients to do their own thing.
Which is all Google did: they went their own way because nobody used Jabber and the interoperability was causing more harm than good. It didn’t work, Google Talk died and many years later clients like WhatsApp took over instead.
One that always stood out to me was the ending of the Tom Cruise War of the Worlds movie.
Now to be clear, this is not a good film and I don’t recommend that anyone bothers to go watch it, but a criticism I regularly saw was that the ending was bad - the aliens all just die suddenly.
That was literally the only thing that film got right from the source material. They changed literally everything else in an attempt to modernise it, it didn’t work but they at least kept the ending and that’s the bit people didn’t like.
I think if you’re comparing open world games to open world games then yeah, BOTW doesn’t do anything too terribly differently, but when you compare BOTW to other Zelda games then it’s very different and that’s where the criticism comes from.
Personally I feel BOTW is a very competent open world game, probably one of the better ones I’ve played, but I still didn’t gel with it because I was already feeling strongly fatigued from too many games becoming open world and not making that leap particularly well (Mass Effect Andromeda and FFXV come to mind for me personally). What I wanted was a more traditional Zelda game and that’s simply not what BOTW was.
I’m on the side of “automate it all and stop whining”, but I do think it’s important not to so readily dismiss the thoughts and opinions of those this directly affects in favour of the opinions of the security researchers pushing the change.
There are some legitimate issues with certain systems that aren’t easily automated today. The issue is with those systems needing to be modernised, but there isn’t a big push for that.
Made with Layers (Thomas Sanladerer)
Nah this isn’t usual Nintendo bullshit, this guy was installing pirated games as part of his mods - he’s brought this on himself.
Yeah, really want Musk to buy it…
Definitely do! It’s entirely command line driven, but don’t let that put you off, it’s quite easy to use and well thought out.
If that’s still a concern, there’s also backrest, a project that puts a web UI in front of restic:
I have a NAS running Nextcloud for general ease of automatically backing up anything important from my phone or PC.
Nextcloud and important things from the server are backed up using a tool called “restic” which honestly does not get enough mention here.
Restic is amazing, it supports just about every cloud storage provider out there - could be Amazon S3 or Backblaze, but it could also be OneDrive or Google Drive. If you’ve got some cloud storage somewhere, restic will probably support it.
Restic is super clever, it takes snapshots and only backs up any data that has changed - so it’s very space efficient and fast. I back up hourly, it only takes a few mins and if nothing has changed, the cost is also basically nothing. But you can pull back files from any snapshots you keep, and when you delete a snapshot, it only deletes data that’s not used by any other snapshot.
This means you can have backups going back months or years at very little data cost. You can restore a full backup, or just a specific file if you need.
Seriously, restic is amazing and more people need to know about it.
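If you want a feel for the workflow, here’s a rough sketch (the repository path and backup paths are just placeholders - the repo could equally be an S3/B2/SFTP location):

```sh
# Create a repository (you'll be asked to set a password)
restic -r /mnt/backup/restic-repo init

# Take a snapshot of the folders you care about
restic -r /mnt/backup/restic-repo backup /srv/nextcloud /home/me/documents

# List snapshots and restore the most recent one somewhere safe
restic -r /mnt/backup/restic-repo snapshots
restic -r /mnt/backup/restic-repo restore latest --target /tmp/restore

# Drop old snapshots and prune any data no longer referenced by one
restic -r /mnt/backup/restic-repo forget --keep-hourly 24 --keep-daily 7 --keep-monthly 12 --prune
```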
Multicast is a thing, though it doesn’t seem to be widespread. That would make a lot more sense than this weird DRM broadcast system.
Okay, so I think I can help with this a little.
The “secret sauce” of Docker / containers is that they’re very good at essentially lying to the contents of the container and making it think it has a whole machine to itself. By that I mean the processes running in a container will write to, say, `/config` and be quite content to write to that directory, but docker is secretly redirecting that write to somewhere else. Where that “somewhere else” is, is known as a “volume” in docker terminology and you can tell it exactly where you want that volume to be. When you see a command with `-v` in it, that’s a volume map - so if you see something like `-v /mnt/some/directory:/config` in there, that’s telling docker “when this container tries to write to `/config`, redirect it to `/mnt/some/directory` instead”.

That way you can have 10 containers all thinking they’re each writing to their own special `/config` folder but actually they can all be writing to somewhere unique that you specify. That’s how you get the container to read and write to files in specific locations you care about, that you can back up and access. That’s how you get persistence.

There’s other ways of specifying “volumes”, like named volumes and such, but don’t worry too much about those - the good ol’ host path mapping is all you need in 99% of cases.

If you don’t specify a volume, docker will create one for you so the data can be written somewhere, but do not rely on this - that’s how you lose data, because you’ll invariably run some docker clean command to recover space and delete an unused unnamed volume that had some important data in it.
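To make that concrete, here’s a minimal sketch of a run command (the image name and host path are just placeholders - check the docs for whatever container you’re actually running):

```sh
# Map the container's internal /config directory to a real folder on the host,
# so the app's data lives somewhere you can see, back up and keep across updates.
docker run -d \
  --name my-app \
  -v /mnt/some/directory:/config \
  some-app:latest
```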
It’s exactly the same way docker does networking, around port mapping - you can map any port on your host to the port the container cares about. So a container can be listening on port 80 but actually it’s being silently redirected by the docker engine to port 8123 on your host using the `-p 8123:80` argument.
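As a quick concrete example (using the stock nginx image purely for illustration):

```sh
# nginx listens on port 80 inside the container, but we expose it on port 8123 on the host
docker run -d --name web -p 8123:80 nginx
# The site is now reachable at http://localhost:8123
```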
Now, as for updates - once you’ve got your volumes mapped (the number and location of them will depend on the container itself, but they’re usually very well documented), the application running in the container will be writing whatever persistence data it needs to those folders. To update the application, you just need to pull a newer version of the docker image, then stop the old container and start it again - it’ll start up using the “new” container. How well updates work really depends on the application itself at this point; it’s not really something docker has any control over. The same would be true if you were running via LXC or apt-get or whatever - the application will start up, read the files and hopefully handle whatever migrations and updates it needs to do.
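In practice the update cycle looks roughly like this (container and image names carried over from the placeholder example above):

```sh
# Grab the newer image
docker pull some-app:latest

# Stop and remove the old container - your data is safe in the mapped volume
docker stop my-app
docker rm my-app

# Start a fresh container from the newer image with the same mappings
docker run -d \
  --name my-app \
  -v /mnt/some/directory:/config \
  -p 8123:80 \
  some-app:latest
```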
It’s worth knowing that docker containers usually have labels and tags that let you specify a specific version if you don’t want it updating. The default is an implied `:latest` tag, but for something like postgres, which has a slightly more involved update process, you will want to use a specific tag like `postgres:14.3` or whatever.
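For example, a minimal sketch of pinning Postgres to a specific version (the host path is a placeholder; `POSTGRES_PASSWORD` and the `/var/lib/postgresql/data` data directory come from the official image’s docs):

```sh
# Pin to a specific version instead of the implied :latest
docker pull postgres:14.3
docker run -d \
  --name db \
  -e POSTGRES_PASSWORD=changeme \
  -v /mnt/appdata/postgres:/var/lib/postgresql/data \
  postgres:14.3
```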
Hope that helps!