It’ll probably release in March, since there’s a release every 2 months and 6.13 was on Jan 20th.
But those are small fries, not “the provider of games”
They have less to lose, then. That’s just as dangerous, if not more.
I’m a small fry too, would you run a binary I send you without any form of sandboxing?
we don’t run games as root
No, we typically run them as the same user that stores all our useful private data and that we type our passwords with.
Also, why are you OK with that level of sandboxing? Don’t you want more “control”? You say containers are bad, but using user permissions to protect parts of the system is OK? Why aren’t you running everything as root if you want “control”?
we are speaking about Wine, so what they see is limited to WINEPREFIX
Not really, by default you have access to other drives (Z:\ being /, the fs root). Wine is not a perfect sandbox, it’s not designed for that… and if you actually did want it to become one (which ultimately would also require memory separation to defend against attacks that leak memory contents), then it would not be that different from what’s being pursued. You’d essentially be building the container into a custom version of Wine shipped by Valve on Steam, which makes no difference in terms of “control”.
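For illustration, here’s a minimal sketch (assuming a default ~/.wine prefix) that just prints the drive mappings Wine exposes; on a stock prefix you’ll normally see z: pointing at /, i.e. the whole filesystem:

```python
import os

# Wine keeps its DOS drive mappings as symlinks inside the prefix's
# "dosdevices" directory (assuming the default ~/.wine prefix here).
dosdevices = os.path.expanduser("~/.wine/dosdevices")

for entry in sorted(os.listdir(dosdevices)):
    path = os.path.join(dosdevices, entry)
    if os.path.islink(path):
        # Typically prints things like "c: -> ../drive_c" and "z: -> /"
        print(f"{entry} -> {os.readlink(path)}")
```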
Currently, in order for an Android app to appear in the official Store, the developer has to allow Google to repackage the app and sign it with a Google key. So while we can inspect the app’s code in git, we don’t really know what actually lands on our phones when it’s installed via Google Play.
You can still open an APK and decompile it… its being signed with a specific key is no different from the digital signatures some people attach to their emails: it’s a way to prove authenticity, not a way to encrypt the message… you can open the email without even having to care about the signature.
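To illustrate: an APK is just a ZIP archive, so you can list and extract its contents whether or not you care about the signature. A quick sketch, with app.apk as a hypothetical file name:

```python
import zipfile

# The signature (META-INF/* or the APK signing block) only proves who
# packaged the APK; it doesn't encrypt anything inside it.
with zipfile.ZipFile("app.apk") as apk:  # hypothetical file name
    for name in apk.namelist():
        print(name)  # e.g. AndroidManifest.xml, classes.dex, resources.arsc, ...
```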
We have no control over what they put in those containers
Most games on Steam are proprietary software you don’t control to begin with. It seems reasonable to keep them encapsulated in containers (+1 if you run Steam in Flatpak or similar) rather than granting them the capacity to run amok in the entire system, which we would have even less control over.
It seems contradictory to want to remove the barriers that prevent the software from taking more control while at the same time complaining about it having too much control.
OK, then it’s not the condition of “being a black hole” that makes it “suck in”, but its mass, which can vary (according to Stephen Hawking, the theoretical minimum mass to form a black hole would be around 0.01 mg).
Saying that a black hole “sucks in” in that sense is as valid as saying that any object with mass (like a tennis ball) “sucks in”. But I don’t think that’s what the article was referring to as a “myth”; the myth the article targets is the idea that this sucking power is a particular characteristic of black holes.
Yes, but that’s very localized, and it’s not the same as the image some people have of black holes as things that instantly suck in everything in their vicinity.
If the teachings don’t reach outside the classroom, you wouldn’t say that people outside can learn more standing there than they would standing outside any other similar-looking room. For a black hole, the gravitational pull on everything you can see around it is exactly the same as it would be for a lower-density object of equivalent mass you might be orbiting.
And we know there are stars heavier than some black holes, which would actually have a stronger pull on things in their proximity than a black hole of smaller mass would. Also, Stephen Hawking introduced the concept of micro/mini black holes; he theorized that the minimum mass for a black hole is on the order of 0.00000001 kg. What makes a black hole have a singularity has more to do with its density than its mass, so if you could compress a mass hard enough you could cause it to collapse.
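To put the “compress it hard enough” idea into numbers: the Schwarzschild radius r_s = 2GM/c² tells you how small a given mass has to be squeezed for it to collapse into a black hole. A rough sketch:

```python
# Rough sketch: Schwarzschild radius r_s = 2*G*M/c^2, i.e. how small a
# given mass must be compressed before it becomes a black hole.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / C**2

for label, mass in [("Sun", 1.989e30), ("Earth", 5.972e24), ("1 kg", 1.0)]:
    print(f"{label}: {schwarzschild_radius(mass):.2e} m")
# Sun   -> ~2.95e+03 m (about 3 km)
# Earth -> ~8.87e-03 m (about 9 mm)
# 1 kg  -> ~1.48e-27 m
```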
Then, under that interpretation, whether a black hole “sucks in” depends entirely on the trajectory you have. I’d argue that, considering all possible trajectories, you are more likely not to be sucked in by the black hole.
The path the Earth traces isn’t circular; it’s more like it’s spiraling, forming ellipses around the Sun and progressively getting farther and farther away from it (so we are actually slowly drifting away rather than being sucked in). If instead of the Sun we had a black hole with the same mass, nothing would change in that respect, since the gravitational pull only depends on the mass and the distance to its center.
The difference (other than the temperature and light) is that a black hole is very, very dense, so it would be much, much smaller. This means you can get a lot closer to it, and this is what makes the gravity skyrocket (since gravity scales with the inverse of the distance squared). With a star, you can’t get close enough to its center without first reaching the INSIDE of the star… and once you are below the surface, the mass between you and the center gets progressively smaller the closer you get (while the mass behind you gets higher and higher), so this dampens the gravitational pull.
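A quick numerical sketch of that point (Newtonian approximation): at the same distance from the same mass, the pull is identical whether the mass is the Sun or a solar-mass black hole; what changes is how close you can get before hitting a “surface”:

```python
# Newtonian pull g = G*M / r^2: only the mass and the distance matter.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # Earth-Sun distance, m
R_SUN = 6.957e8    # radius of the Sun, m

def pull(mass_kg: float, distance_m: float) -> float:
    return G * mass_kg / distance_m**2

print(pull(M_SUN, AU))     # ~5.9e-3 m/s^2, identical for the Sun or a solar-mass black hole
print(pull(M_SUN, R_SUN))  # ~274 m/s^2, the closest you can get to the Sun's center from outside it
print(pull(M_SUN, 1e5))    # ~1.3e10 m/s^2, at 100 km from a solar-mass black hole
```

Same formula in all three cases; the huge number only appears because the black hole lets you shrink the distance so much.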
Would you say our planet is currently being sucked in by the Sun? Or would you rather say that we are just orbiting the Sun?
Because odds are that if you approach a black hole without aiming directly for it, you might just end up in an orbit around it, not unlike the orbit we are currently in around the Sun. Or you might even be catapulted away, instead of being “sucked in” in the popular sense.
I think the point the article was trying to make is that “sucking in with lots of force” does not really happen any differently outside the event horizon of a black hole than it would in the proximity of any other star (or object) with the same mass.
So it’s addressing the “myth” that being in the proximity of a black hole would inevitably suck you in… however, odds are that if you are not directly aiming for the black hole, even if you did not resist, you would just end up entering an orbit around it, the same way we are currently orbiting the Sun. Or maybe even be catapulted out of it, instead of sucked in.
The difference would be that past the event horizon you would be torn apart by the space distortion (instead of being cooked alive if it were a star). But theoretically if you can avoid crashing into a star, then you can avoid entering a black hole.
Personally, I would be happy even if it didn’t translate at all but could give a half-decent transcription of, at least, English speech into English text. I prefer having subtitles, even when I speak the language, because they help in noisy environments and/or when the characters mumble or have weird accents.
However, even that would likely be difficult with a lightweight model. Even big companies like Google often struggle with their autogenerated subtitles: when there’s very context-specific terminology or uncommon names, they fumble. And adding translation on top of an already incorrect transcript multiplies the nonsense, even if the translation itself were technically correct.
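For what it’s worth, local English-to-English transcription is already fairly accessible; here’s a minimal sketch, assuming the open-source openai-whisper package is installed (my example, not something any player ships) and using a hypothetical episode.mp3:

```python
# Minimal sketch, assuming the openai-whisper package is installed
# (pip install openai-whisper); "episode.mp3" is a hypothetical file.
import whisper

model = whisper.load_model("base")  # lightweight model; bigger ones fumble less
result = model.transcribe("episode.mp3", language="en")

print(result["text"])
# result["segments"] also carries start/end timestamps, which is what
# you'd use to generate actual subtitle cues.
```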
I feel UI trends have gone in the direction of making things worse, not better.
I remember when it was pretty much unanimous that “mplayer” was beautiful in all its square-cornered glory, while “Windows Live Media Player” was seen as a horrible abomination.
Now it feels like everyone is on board with inefficient UI designs like the latter for some reason.
I don’t understand the posh stylistic decisions around padding, rounded borders, etc. How do those things make the UI better exactly?
As someone who used low resolutions for most of my university years (I wrote my thesis on a tiny ultraportable laptop), I relied heavily on a custom gtk2 theme I had to write to remove most of that padding, which felt so unnecessary and made my screen so cramped.
Gnome now pushing to remove theming completely and rely on just color-scheme customization feels totally backwards to me. I don’t have an answer for OP, sadly… other than just using terminal/TUI apps more whenever possible.
Is “intent” what makes all the difference? I think doing something bad unintentionally does not make it good, right?
Otherwise, all I need to do something bad is have no bad intentions. I’m sure you can find good intentions for almost any action, but generally, the end does not justify the means.
I’m not saying that those who act unintentionally should be given the same kind of punishment as those who act with premeditation… what I’m saying is that if something is bad, we should try to prevent it to the same degree, as opposed to simply allowing it or sometimes even encouraging it. And this can be done in the same way regardless of what tools are used. I think we just need to define more clearly what separates “bad” from “good” based specifically on the action taken (as opposed to the tools the actor used).
I think that’s the difference right there.
One is up for debate; the other is already heavily regulated. Libraries are generally required to have consent if they are making straight copies of copyrighted works, whether we like it or not.
What AI does is not really a straight-up copy, which is why it’s fuzzy, and much harder to regulate without stepping on our own toes, especially as the tech advances and the difference between a human reading something and a machine doing it becomes harder and harder to detect.
Which is why you should only care about the personal opinion of those people when it actually relates to that reliability.
I don’t care whether Linus Torvalds likes disrespecting whichever company or people he might want to give the middle finger to, or throwing rants on the mailing list or Mastodon to attack any particular individual, so long as he continues doing a good job maintaining the kernel and accepting contributions from those same people when they provide quality code, regardless of whatever feelings he might have about whatever opinions they might hold.
You rely on the performance of the software, the clarity of the docs, the efficiency of their bug tracking… but the opinions of the people running those things don’t matter so long as they keep being reliable.
I have contributed to other projects without really needing to get involved in their community on any personal/parasocial level, though.
I just made a pull request, and when the code was good it was accepted; when not, it got rejected. Sometimes I’ve had to make changes before it got merged, but I had no need to engage in discussions on Discord or anything like that. I’ve been on some mailing lists to keep track of some projects, but never really engaged deeply, especially if things went off-topic.
If I find that a good code contribution is rejected for whatever toxic reason, then the consequence is that the code stops being as good as it could have been (because contributions are being rejected/slowed down), so it’s then that forking might be in order. Of course the code matters.
To his point: if not “discuss”, what is the correct approach against fascism? War and murder? Dismissing it, trying to “cancel” it without giving any arguments, so it can continue to fester on its own and keep growing in opposition?
To me, fascism is a stupid position that doesn’t make much sense, to the point that it falls on itself the moment you “discuss” it.
I would have expected the fascists to be the ones unable/unwilling to discuss their position, since it’s the least rational one. So it’s certainly very jarring whenever I hear people jumping to defend against fascism while at the same time stopping in their tracks when it comes to discussing it. Even if those unable to reason might not be convinced by our arguments, anyone with reason would be. Rejecting discussion does a disservice, because it puts off those willing to listen and strengthens those who didn’t really want an argument anyway.
Like flat-earthers, they should be challenged with reason, with discussion. Not dismissed as if it were true that there’s a huge conspiracy against them. Whether they listen or not to that reason, dehumanizing them and rejecting civil and rational discourse would play in favor of their movement.
Stating “genocide is bad” should NOT be a statement of faith. Faith is the shakiest of grounds: if we are unable to articulate the specific reasons that make genocide bad, then we are condemned to see it repeat itself. So I’d argue it’s for the sake of the victims of Auschwitz that antifascism should not be turned into a religion, but into a solid and rational position that’s not distorted nor used willy-nilly.
Bash. By default it might seem less featureful than zsh… but bash is a lot more powerful and extensible than some give it credit for. It might take more work to set it up the way you like it, but once you do, that configuration can be ported over wherever bash exists (i.e. almost everywhere).
It’s changing by having a library like wlroots do most of the work.
When you consider the overall picture, “wlroots + compositor” is actually less complex than “X11 + window manager”, because you no longer have to factor in the insanely high cost of having a team maintain the spaghetti mess of X11 code.
Wayland-based dwl has roughly the same line count as X11-based dwm (about 2.2k), without having to depend on a whole separate service as big as X11.
But of course, since it’s a completely different approach, it’s likely that for most smaller projects (i.e. not Gnome or KDE) it’s easier to start a new project than to create a layer maintaining two different parallel implementations.
If you want something that’s more or less compatible with openbox, there seems to be this project, labwc, which claims to be inspired by openbox and compatible with its config/themes… though I haven’t personally tried it.
Also keep in mind that openbox (and I expect labwc too) doesn’t include any “panels” / “taskbars” or anything like that… and it’s likely your X11 panels might not work well if they do not explicitly support Wayland (but I believe that, for example, xfce-panel now supports both).
I think there are situations that fsync does not cover very efficiently, to the point that it can cause timing issues that lead to some bugs / incompatibilities. The timing issues might be rare, but that doesn’t mean the overall efficiency is the same. It would be interesting to see benchmarks of fsync vs ntsync.