• 2 Posts
  • 20 Comments
Cake day: August 2nd, 2023
  • I’ve no problem with paying for good services

    Exactly. It used to be that Netflix was all you needed to get most quality content, and it was a fair deal for customers: you pay a reasonable monthly amount, and you and your family get convenient access to most streamable movies and TV series.

    Now that quality content is spread out over and locked behind half a dozen streaming services, subscribing to them all is not just a hassle but also incredibly bad value compared to the original offer.

    In a healthy competitive environment, you would expect companies to counter the reduced value by improving their offering in other ways or by lowering prices. Instead we got price hikes, lots of low-quality filler content, crackdowns on password sharing, advertising, various unpopular UI changes, and other service reductions that decrease value even further.

    To solve this, I think the content producers and streaming services should be split up, because right now they're not true competitors but small monopolies that each clutch the keys to their own little franchises. It should be noted, for example, that music streaming works a lot better: there are various competitors that each hold a viable content library on their own, so you don't need more than one music streaming service. IMO that's because Spotify, Tidal, YT Music, etc. are merely distributors and not the actual producers.







  • You can use the wildcard domain

    Yeah the problem was more that this machine is running on a network where I don’t really control the DNS. That is to say, there’s a shitty ISP router with DHCP and automatic dynamic DNS baked in, but no way to add additional manual entries for vhosts.

    I thought about screwing with the /etc/hosts file to get around it, but what I ended up doing instead was installing a Pi-hole Docker container for DNS (something I had been contemplating anyway), pointing it to the router's DNS so every local DNS name still resolves, and then adding manual entries for the vhosts.
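The setup described can be sketched in a compose file. Everything here is hypothetical (the router IP, timezone, and ports), and `PIHOLE_DNS_` is the pre-v6 Pi-hole image's upstream-DNS variable, pointed at the router so its dynamic DNS names keep resolving:

```yaml
# Sketch: Pi-hole in Docker, forwarding to the ISP router's DNS.
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"        # admin web UI
    environment:
      TZ: Europe/Brussels    # hypothetical
      PIHOLE_DNS_: 192.168.1.1   # the router, so existing local names still resolve
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```

The manual vhost entries then go in under Local DNS → DNS Records in the Pi-hole web UI, all pointing at the reverse proxy's IP.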

    Another issue I didn’t really want to deal with was regenerating the TLS certificate for the nginx server to make it valid for every vhost, but I just bit the bullet.
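For a LAN setup like this, one way to do that is a single self-signed certificate with a subjectAltName entry per vhost. A sketch with hypothetical hostnames (the `-addext` flag needs OpenSSL 1.1.1 or newer):

```shell
# One key + cert covering the base host and each vhost via SANs.
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=myserver.lan" \
  -addext "subjectAltName=DNS:myserver.lan,DNS:app1.myserver.lan,DNS:app2.myserver.lan"
```

nginx then uses the same `ssl_certificate`/`ssl_certificate_key` pair in every vhost's server block, and adding a vhost later means regenerating with one more SAN.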



  • Hmm no, that’s not really it… that’s more about not passing URLs starting with /app1/ on to the application, which wouldn’t be aware of that subpath.

    I think I need something that intercepts the content being served to the client, and inserts /app1/ into all hardcoded absolute paths.

    For example, let’s say on app1’s root I have an index.html that contains:

    ...
    src="/static/image.jpg"
    ...
    

    It should be dynamically served as:

    ...
    src="/app1/static/image.jpg"
    ...
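nginx can do exactly this kind of on-the-fly rewriting with its ngx_http_sub_module. A sketch, where the upstream name app1-backend and the path layout are hypothetical:

```nginx
location /app1/ {
    # Trailing slash on proxy_pass strips the /app1/ prefix for the app.
    proxy_pass http://app1-backend/;

    # Ask the upstream for uncompressed responses so sub_filter can edit them.
    proxy_set_header Accept-Encoding "";

    # Rewrite hardcoded absolute paths in the served HTML.
    sub_filter 'src="/static/'  'src="/app1/static/';
    sub_filter 'href="/static/' 'href="/app1/static/';
    sub_filter_once off;
}
```

Caveats: sub_filter only touches text/html by default (see sub_filter_types), and it can't catch URLs assembled at runtime in JavaScript, which is why apps that support a configurable base URL are much easier to host under a subpath.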
    




  • SpaceCadet@sopuli.xyz to World News@lemmy.ml · *Permanently Deleted* · edited 1 year ago
    So how many sockpuppet/bot accounts do you have? Every comment you post immediately gets a +4. There’s absolutely no way that a comment on a Lemmy post that’s already downvoted to shit picks up 4 genuine upvotes within a minute of posting, unless you’re manipulating it.

    Edit: and now the fake insta-upvotes on his comments disappeared, someone’s getting rid of the evidence lol



  • SpaceCadet@sopuli.xyz to World News@lemmy.ml · *Permanently Deleted* · edited 1 year ago

    Learn how to disagree.

    I’m not going to lend idiots like Clayon Morris any credibility by arguing their position in good faith when they didn’t arrive at their position in good faith in the first place.

    Knowing the source is enough to discredit and discard this video. They’re vatniks. They produce garbage. Garbage belongs in the garbage bin. The end.








  • As a general rule, you should always keep in mind that you’re not really looking for a backup solution but rather a restore solution. So think about what you would like to be able to restore, and how you would accomplish that.

    For my own use, for example, I see very little value in backing up the Docker containers themselves. They’re supposed to be ephemeral and easily recreated with build scripts, so I don’t use docker save or anything; I just make sure the build code is safely tucked away in a git repository, which is itself backed up, of course. In fact I have a weekly job that tears down and rebuilds all my containers, so my build code is tested and my containers are always up to date.
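That weekly tear-down-and-rebuild job can be as simple as a cron entry. A sketch, where the schedule, paths, and log file are hypothetical:

```shell
# Hypothetical crontab entry: Sundays at 04:00, pull fresh base images,
# rebuild every service from the compose file, and restart the stack.
0 4 * * 0  cd /srv/stack && docker compose down && docker compose build --pull && docker compose up -d >> /var/log/stack-rebuild.log 2>&1
```

Because the rebuild runs unattended every week, a broken build script shows up within days instead of during a disaster recovery.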

    The actual data is in the volumes, so it just lives on a filesystem somewhere. I make sure to have a filesystem backup of that. For data that’s in use and which may give inconsistency issues, there are several solutions:

    • docker stop your containers, create a simple filesystem backup, docker start your containers.
    • Do an LVM-level snapshot of the filesystem where your volumes live, and back up the snapshot.
    • The same but with a btrfs snapshot (I have no experience with this; all my servers just use ext4).
    • If it’s something like a database, you can often export with database-specific tools that ensure consistency (e.g. pg_dump, mongodump, mysqldump, …), and then back up the resulting dump file.
    • Most virtualization software has functionality that lets you take snapshots of whole virtual disk images.
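The first option above can be sketched as a small script. The paths are hypothetical, and it assumes the volumes are plain bind mounts under one directory:

```shell
#!/bin/sh
# Quiesce, archive, resume: stop the stack so files on disk are consistent,
# tar up the volume directory, then bring everything back.
set -e
COMPOSE="docker compose -f /srv/app/compose.yml"
$COMPOSE stop
tar -czf "/backup/app-volumes-$(date +%F).tar.gz" -C /srv/app volumes
$COMPOSE start
```

The `set -e` matters: if the tar fails you want the script to stop loudly rather than silently restart the containers and report success.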

    As for the OS itself, I guess it depends on how much configuration and tweaking you have done to it and how easy it would be to recreate the whole thing. In case of a complete disaster, I intend to just spin up a new VM, reinstall Docker, restore my volumes, and then build and spin up my containers. Nevertheless, I do a full filesystem backup of / and /home as well. I don’t intend to use it to recover from a complete disaster, but it can be useful for recovering specific files after accidental deletions.
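For that kind of file-level restore, a plain tar of the root and home filesystems is enough. A sketch with hypothetical paths, run as root:

```shell
# --one-file-system keeps /proc, /sys, /dev and other mounts out of the
# archive; -p preserves permissions; exclude the backup target itself in
# case it lives on the same filesystem.
tar --one-file-system --exclude=/backup \
    -czpf /backup/rootfs-$(date +%F).tar.gz / /home
```

Restoring a single deleted file is then just `tar -xzf` with the file's path, without touching the rest of the system.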