• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • lloram239@feddit.de to Linux@lemmy.ml · *Permanently Deleted*
    1 year ago

    ls’s reaction to this is unexpected:

    $ mkdir foo
    $ echo Foo > foo/file
    $ chmod a-x  foo
    $ ls -l foo
    ls: cannot access 'foo/file': Permission denied
    total 0
    -????????? ? ? ? ?            ? file
    

    I expected to just get a “Permission denied”, but it can still list the contents. So x is for following a name to the inode, and r is for listing the directory contents (i.e. just the names)?
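    That is exactly the split. A quick sketch (assuming GNU coreutils and a non-root user, since root bypasses these checks): r without x lets you list names but not reach the files behind them, and x without r is the reverse.

```shell
mkdir demo && echo hi > demo/file

chmod u=r demo            # r only: names are listable, inodes are not reachable
ls demo                   # prints the name "file"
cat demo/file || true     # "Permission denied": the name cannot be followed

chmod u=x demo            # x only: lookup works, listing does not
cat demo/file             # prints "hi"
ls demo || true           # "Permission denied": the name list cannot be read

chmod u=rwx demo && rm -r demo
```

    This is also why ls -l on an r-only directory shows the question marks from above: reading the names works, but the stat() on each entry needs traversal and fails.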




  • I am not terribly impressed. The ability to build and run apps in a well-defined and portable sandbox environment is nice, but everything else is kind of terrible. Seemingly simple things like a package that contains multiple binaries aren’t properly supported. There are no LTS runtimes, so you’ll have to update your packages every couple of months anyway, or users will get scary errors due to obsolete runtimes. There is no way to run a flatpak without installing it. The DNS-based naming scheme is terrible. Dependency resolution requires too much manual intervention. There is too much magic behind the scenes that makes it hard to tell what is going on (e.g. ostree). And there is no support for dependencies other than the three available runtimes, and thus terrible granularity (e.g. you can’t have a Qt app without pulling in all the KDE stuff).

    Basically it feels like one step forward (portable packages) and three steps back (losing everything else you learned to love about package managers). It feels like it was built to solve the problems of packaging proprietary apps while contributing little to the Free Software world.

    I am sticking with Nix, which feels way closer to what I expect from a Free Software package manager (e.g. it can do nix run github:user/project?ref=v0.1.0).



  • NixOS uses a naming convention that keeps all packages separate from each other; that’s how you get paths like /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-118.0/. /usr isn’t used for packages at all and only contains /usr/bin/env for compatibility, nothing else.

    The whole system is held together by nothing more than shell scripts, symlinks and environment variables, standard Unix stuff, which makes it fairly easy to understand if you are already familiar with Linux.

    “Declarative” means that your whole configuration happens in one Nix config file. You don’t edit files in /etc/ directly; you write your settings in /etc/nixos/configuration.nix and all the other files are generated from there. The same is true for package installation: you add your packages to a text file and rebuild.
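    For illustration, a minimal /etc/nixos/configuration.nix might look something like this (the hostname and package choices are just examples):

```nix
{ pkgs, ... }:
{
  # Replaces hand-editing /etc/hostname
  networking.hostName = "mymachine";

  # Generates the sshd config file and systemd unit
  services.openssh.enable = true;

  # "Installing" packages means listing them here and rebuilding
  environment.systemPackages = with pkgs; [ git emacs ];
}
```

    After editing, sudo nixos-rebuild switch regenerates the system from this one file.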

    If that sounds a little cumbersome, that’s because it is, but Nix has some very nice ways around it. Since everything is nicely isolated from everything else, you do not have to install packages to use them; you can just run them directly, e.g.:

    nix run nixpkgs#emacs

    You can even run them directly from a Git repository if that repository contains a flake.nix file:

    nix run github:ggerganov/llama.cpp

    All the dependencies will be downloaded and built in the background, and garbage collected when they haven’t been used in a while. This makes it very easy to switch between versions or run older versions for testing, and you don’t have to worry about leaving garbage behind or accidentally breaking your distribution.

    The downside of all this is that some proprietary third-party software can be a problem, as it might expect files in /usr that aren’t there. NixOS has ways around that (buildFHSEnv), but it is quite a bit more involved than just running a setup.sh and hoping for the best.

    The upside is that you can install the Nix package manager on your current distribution and play around with it. You don’t need to use the full NixOS to get started.


  • Quite hard. We have had Open Source’ish LLMs for only around six months; whether they are even up to the task of verifying a translation is one issue, and whether they meet Debian’s Open Source guidelines is yet another. This is obviously going to be the long-term solution, but the tech for it simply has not been around for very long.

    And of course, once you have translation tools good enough for the task, you might skip the human translator altogether and use machine translations directly.





  • C has no memory protection. If you access the 10th element of a 5-element array, you get whatever happens to be in memory there, even if it has nothing to do with that array. Furthermore, this doesn’t just allow access to data you shouldn’t be able to access, but also the execution of arbitrary code, as memory makes no (big) distinction between data and code.

    C++ provides a few classes to make it easier to avoid those issues, but still allows all of them.

    Ruby/Python/Java/… provide memory safety and will throw an exception, but they check every access at runtime, which makes them slow.

    Rust, on the other hand, tries to prove as much as it can at compile time. This makes it fast, but also requires some relearning, as it doesn’t allow pointers without clearly defined ownership (e.g. the classic case of keeping a pointer to the parent element in a tree structure isn’t allowed in safe Rust).

    Adding the safeties of Rust to C would be impossible, as C allows far too much freedom to reliably figure out whether a given piece of code is safe (halting problem and all that). Rust purposefully throws that freedom away to make safe code possible.
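    To make the contrast concrete, here is a small sketch in safe Rust (the values are illustrative): an out-of-bounds access is either rejected at compile time or caught at runtime, never a silent read of foreign memory.

```rust
fn main() {
    let a = [1, 2, 3, 4, 5];

    // let x = a[9]; // a constant out-of-bounds index fails the build outright

    let i = 9; // pretend this index came from user input
    match a.get(i) {
        Some(v) => println!("a[{}] = {}", i, v),
        None => println!("index {} is out of bounds", i),
    }
    // Writing a[i] here would panic at runtime instead of reading
    // whatever bytes happen to sit after the array.
}
```

    The same lookup in C would compile, run, and quietly hand back garbage.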


  • That’s the idea, and while at it, we could also make .zip files a proper Web technology with browser support. At the moment ePub exists in this weird twilight where it is built out of mostly Web technology, yet isn’t actually part of the Web. Everything being packed into .zip files also means that you can’t link directly to individual pages within an ePub, as HTTP doesn’t know how to unpack them. It’s all weird and messy, and surprising that nobody has cleaned it up and integrated it into the Web properly.

    So far the original Microsoft Edge is the only browser I am aware of with native ePub support, but even that didn’t survive the switch to Chromium and its Blink engine.



  • I’d set up a working group to invent something new. Many of our current formats are stuck in the past; e.g. PDF and ODF are still emulating paper, even though everybody reads them on a screen. What I want to see is a standard document format that is built for the modern-day Internet, with editing and publishing in mind. HTML ain’t it, as it doesn’t handle editing or long-form documents well; EPUB isn’t supported by browsers; Markdown lacks a lot of features; etc. And then you have things like Google Docs, which are Internet-aware, editable and shareable, but also completely proprietary and lock you into the Google ecosystem.


  • .tar is pretty bad, as it lacks an index, making it impossible to quickly seek around in the file. The compression on top adds another layer of complication. It might still work great as a tape archiver, but for sending files around the Internet it is quite horrible. It’s really just getting dragged along for cargo-cult reasons, not because it’s good at the job it is doing.

    In general I find the archive situation a little annoying, as archives are largely unnecessary; that’s what we have directories for. But directories don’t exist as far as HTML is concerned, and only single files can be downloaded easily. So everything has to get packed and unpacked again, for no good reason. It’s a job computers should handle transparently in the background, not an explicit user action.

    Many file managers try to add support for .zip and let you enter an archive as if it were a folder, but that abstraction is always quite leaky and never as smooth as it should be.
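    The index difference is easy to see with Python’s standard library (the file names here are just examples): a zip ends with a central directory that maps every member to its byte offset, while a tar has to be walked from the front.

```python
import io
import tarfile
import zipfile

files = {f"f{i}.txt": f"file {i}\n".encode() for i in range(1, 4)}

# Build the same three files into an in-memory tar and an in-memory zip.
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    for name, data in files.items():
        zf.writestr(name, data)

# Zip: the central directory at the end records each member's offset,
# so any file is one seek away.
with zipfile.ZipFile(zip_buf) as zf:
    print("f3.txt starts at byte", zf.getinfo("f3.txt").header_offset)
    print(zf.read("f3.txt"))

# Tar: no index, so finding a member means scanning header by header
# from the start of the archive.
tar_buf.seek(0)
with tarfile.open(fileobj=tar_buf, mode="r") as tar:
    print(tar.extractfile("f3.txt").read())
```

    Over HTTP this matters even more: with range requests a client can fetch just the central directory and then just the one member it wants, which is impossible with a compressed tar.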



  • Ubuntu has been on a downward spiral for the last decade or more; at this point they have spent more years being bad than being good. It started when they tried to push their own Wayland alternative and their own Gnome alternative, and now they are trying to force their proprietary app store on everybody.

    Ubuntu was really good when it was just Debian with some much-needed updates and polish, but those days are long gone.

    And it’s not like I wouldn’t love to get rid of .deb; it’s a terrible packaging format that had its best days 25 years ago, when it was up against raw tarballs and packages were shipped on CD-ROM. It’s in dire need of a fundamental upgrade, but Snap really is not the way forward, and the way they underhandedly force it on users is just disgusting. Either build the packaging format of the future and use it for everything, or don’t.