• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I hate the fact that none of the big names support CalDAV natively. DAVx5 is cool and all, but app developers really need to step up their shit and support CalDAV already. Not just Microsoft Exchange and Google Calendar but CalDAV as well. It’s not like they need to rebuild their apps from scratch.

    At this point you might just be better served using a web app instead of a native mobile app. Maybe K-9 Mail’s transformation into Thunderbird Mobile will bring some good news, but I’m not holding out much hope.

    Maybe we should, under the EU’s DMA, force anyone that bundles a calendar/note app with their phone OS to support CalDAV as well as any proprietary protocol of their choice.


  • Excellent analysis. Especially this part:

    It will be much more productive to try to solve this with the handful of Browser vendors than trying to regulate each and every consent banner.

    Early cookie banners were a bad experience, but they were manageable. Now things have transitioned into content-blocking modals, dark patterns, forced individual consent/rejection for each and every one of the 943 partners they’re selling your data to, sites that refuse to serve content if you reject tracking, and other ways to frustrate the end user.

    I’m done with every piece of shit predatory actor inventing their own way of malicious compliance with the GDPR. You either implement the user-friendly consent API or you get no more tracking at all. Paywall your shit for all I care, at least then you’ll have a sustainable business model.


  • I work in IT, and different definitions of what SaaS means are starting to wreak real havoc on the architecture as a whole.

    We’d be better served dropping the acronyms altogether and taking the time to describe in detail what value the service actually adds.

    Amazon Prime is a subscription for shipping, video streaming, gaming benefits and more. Since software is not the primary goal, but a means of delivery for these other services, I will not consider Amazon Prime SaaS.




  • Yeah, most western European languages actually.

    Dutch, French, Spanish, Italian… Though most of these languages alternate between “taking a decision” and using a form of “to decide”.

    German seems to be the exception. They just had to be different. Guess that’s that German precision for ya, they have to “hit their decisions” otherwise they won’t count.


  • I have no idea why they’re even remotely interested in Windows as a product anymore. Surely they can’t expect that much revenue from integrated AI services when most of the general public’s needs can be covered by web services that will severely outmatch Microsoft’s development speed (y’know because of juggling legacy code and all).

    Considering the fact that they gain most of their revenue by far from their Azure cloud services and enterprise customers, it just seems like a stupid business decision to invest this much into all kinds of random features for their desktop OS aimed at consumers.

    In proper systems architecture theory, we generally try to avoid mixing up functionality this much, because a modular design allows your system to evolve without too much pain. Why build all this crap into Windows when you can just opt in by installing an application for it?

    I really don’t get it…


  • Apple’s whole modern “it’s reliable and just works” cult following exists because they found a fix for situations where the problem was between keyboard and chair.

    Both Windows and Linux-based operating systems are plenty reliable if you actually know what you’re doing and how things work. Apple started a culture where you don’t need to know how things work, because you have no influence over your own devices anyway. That lets people do the simple tasks, but without addressing the problem that your userbase will never amass any computing knowledge whatsoever.

    And when Apple devices do fail (and trust me, they do), they fail catastrophically without a way to fix the problem yourself (which is by design).

    The distinction is larger for computers than it is for mobile devices, but yeah in general Apple devices are for simpletons. But the biggest issue is that Apple’s design philosophy actively creates these simpletons.


  • It’s strange to me that the differences are so vast between different continents.

    I know literally no one who actually uses iMessage. Never once (in recent years) have I seen someone communicate through a channel that isn’t WhatsApp, Signal or something similar. The whole “ew, green bubbles” drama just isn’t a thing here. (Though the existence of iPhone users still harms society in different ways)

    Though I do agree with many commenters that the EU caving to the lobbyists is a bad thing. Having the law only apply to “problems that are big enough to care about” is still a loss for the consumer in the end. I’m all for standardisation and free choice, which means any commercial messaging service should comply. Exceptions only for open source projects funded by non-profit organisations.




  • My goal is not really to turn this into a discussion, but I feel like your concerns might be based on common misconceptions about nuclear energy.

    Chornobyl (Ukrainian spelling) was such a big disaster because it was the first major nuclear disaster. The reactor was built without hands-on experience with the consequences of a nuclear disaster driving the design of the facility itself. We have since learnt a lot about proper design of nuclear reactors and about how to respond to any incidents.

    The Fukushima reactor was designed with that knowledge in mind, but the event was a perfect shitstorm consisting of both an earthquake and a tsunami hitting the facility at the same time. And even though the local population might disagree, the disaster was arguably less serious than Chornobyl was. Due in large part to a better design and proper disaster response.

    We’re more capable than ever of modeling and simulating natural disasters, so I’d argue we actually CAN plan for most of those. Any disaster we can’t plan for nowadays is likely to also fuck up an area even worse than the resulting nuclear disaster would.

    But probably the most important thing to mention is that nuclear power is a lot more diverse in the modern world. Gone are the days when uranium fission reactors were the norm. They were only popular because, in addition to generating power, they serve the secondary purpose of creating resources for nuclear weapons. With molten salt reactors, thorium-based reactors and SMRs (small modular reactors), there’s really no good reason to build any more “classic” nuclear reactors other than continuing the production of nuclear weapons, which I hope we can just stop doing.

    The best way to prevent large scale incidents is to prevent large scale reactors, which is why there’s so much interest in SMR lately.

    All in all, we likely can’t fully transition to renewables fast enough without the use of nuclear power as an intermediary. But the actual dangers with modern designs are far fewer than they used to be and we should take care not to give in to irrational fears too much.

    To put things into perspective: we currently have no way of stopping a major solar storm that would thoroughly disrupt all modern life, nor can we stop large asteroids heading our way. Both are potentially civilisation-ending disasters, but the possibility that they might occur doesn’t stop us from trying to build a better earth for the future, right?


  • Seconded, depending on what your goals are with transcoding, you might want to reconsider your strategy.

    Hardware encoding (with a GPU) is mostly useful for realtime transcoding applications like streaming video. There are definitely some caveats that come with the realtime performance, and you’ll find that NVENC-encoded video is almost always inferior to the slower, equivalent software-encoded variants.

    So let’s talk codecs: while h.265 might seem like the holy grail, it is way more computationally intensive than h.264. In some cases the difference in encoding time can be as high as 3-5x. Not really worth it if all you’re gaining is a slightly smaller file size.

    Your results will vary by the media you’re encoding, by your encoder quality settings, tuning and encoding speed. As a rule of thumb: slower encoding speeds equal more efficiently compressed video (a.k.a. relatively higher quality for lower file size).
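    As a rough illustration of that rule of thumb, here’s a small Python sketch that assembles an ffmpeg x264 command line. The file names, the CRF value and the helper name are placeholders, not a recommendation:

```python
# Sketch: build an ffmpeg command for a CRF-based x264 software encode.
# Input/output names and the CRF value below are placeholders.

def x264_command(src, dst, preset="slow", crf=20, tune=None):
    """Assemble an ffmpeg argument list for an x264 encode."""
    cmd = ["ffmpeg", "-i", src, "-c:v", "libx264",
           "-preset", preset,      # slower preset = better compression
           "-crf", str(crf)]       # lower CRF = higher quality, bigger file
    if tune:
        cmd += ["-tune", tune]     # e.g. "film" or "animation"
    cmd += ["-c:a", "copy", dst]   # pass the audio through untouched
    return cmd

print(" ".join(x264_command("in.mkv", "out.mkv", tune="film")))
```

    Same idea applies in HandBrake’s UI: the preset slider trades encoding speed for compression efficiency, and CRF sets the quality target.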

    Handbrake is my choice of software for encoding video. It includes pretty much everything you could ever want if you’re not looking for niche codecs and exotic video formats.

    I find myself mostly using x264 because it is relatively fast and still provides awesome results. My encoding speed is always set to “slow” or “superslow” (not much difference for my setup). I usually dial in the quality using the preview function in HandBrake, which transcodes just a short section of the video that I use for pixel peeping and checking for any major artifacts that would ruin the content. The resulting file also provides an estimate of how large the final transcoded file will be. Once you’re happy with the quality setting, you can opt to mess with the encoder tuning. There are different presets for film, animated content and such; I usually use film tuning when transcoding live-action media.
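    That preview-based size estimate is just a linear extrapolation, by the way. A quick sketch with made-up numbers:

```python
# Sketch: extrapolate the final file size from a transcoded preview.
# Assumes bitrate is roughly uniform across the video (made-up numbers).

def estimate_full_size(preview_bytes, preview_seconds, total_seconds):
    """Linearly scale the preview's size up to the full duration."""
    return preview_bytes * total_seconds / preview_seconds

# A 30-second preview that came out at 6 MiB suggests a 2-hour film
# would land around 1.4 GiB at the same quality setting.
est = estimate_full_size(6 * 1024**2, 30, 2 * 3600)
print(f"{est / 1024**3:.2f} GiB")  # prints "1.41 GiB"
```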

    All this generally leaves me with pretty compact file sizes for 1080p media. And transcoding usually happens at a rate of 60-75 fps depending on the resolution. Going up from “slow” to “medium” improves fps by about 25% and increases file size by about 10%. The ideal balance is up to you.

    Advanced tips: try using VMAF (an objective video quality analysis algorithm developed by Netflix) to score and compare your different encoding settings. VMAF is neatly integrated into FFMetrics, which is a GUI for FFmpeg and a couple of video analysis algorithms. I also use mpv (an open source media player) with FFmpeg command line arguments to play videos synchronized in a 2x1 or 2x2 matrix, which helps compare the results for quality.


  • You might want to consider setting up a VPN tunnel to your own network. Main benefit is that you can access your home network as if you were connected to it locally. Which makes switching between mobile data and WiFi a non-issue.

    This requires some sort of VPN server and usually a single port-forwarding rule for the protocol your VPN software of choice uses. For the simplest default configuration of OpenVPN, this means forwarding UDP port 1194 to your OpenVPN server.

    Generally, keeping things simple, there are two types of VPN you can set up:

    • split tunnel VPN, which gives you access to your home network but accesses the internet directly.
    • full tunnel VPN, which sends all of your traffic through your home router.
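    To make the two modes concrete, here’s a minimal sketch of the relevant server-side OpenVPN directives. The 192.168.1.0/24 subnet is an assumption about your LAN, and the rest of the config (certs, keys, etc.) is omitted:

```
port 1194
proto udp
dev tun

# Split tunnel: only push a route to the home LAN;
# everything else leaves via the client's own connection.
push "route 192.168.1.0 255.255.255.0"

# Full tunnel: uncomment to send ALL client traffic through home instead.
# push "redirect-gateway def1"
```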

    It is a little more complicated than that, and there’s more nuance to it, such as whether to use your own DNS server or not, but all that is best left to some further reading.

    I’ve set up an OpenVPN server myself, which is open source and completely free to mess around with. (Save for maybe some costs for registering your own domain or DDNS services. Those are all optional though, and mainly provide convenience and continuity benefits. You can definitely just set up a VPN server and connect with your external IP address.)


  • You can do both though. Lots of high-profile software is both open source and available as SaaS.

    The beauty of that strategy is that you can ensure the software will survive your service provider going bankrupt or otherwise suddenly disappearing, leaving you without a solution.

    By not being locked into a specific vendor, competition will be centered around providing the best service, which is in my opinion exactly as it should be.


  • Wow that’s a cool setup, I’ll definitely steal some ideas.

    I’m used to slinging lots of data around and one of the more helpful tools for general purpose automation has been n8n. Though it might have limited use if you’re not trying to glue all kinds of services together. I also host actualbudget to keep track of finances. Both are running comfortably in their own little docker containers.
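    For reference, a docker-compose sketch of that kind of setup might look like the following. The image names, ports and volume paths are from memory and may be out of date, so check each project’s docs before using this:

```yaml
services:
  n8n:
    image: n8nio/n8n            # image name/port may differ; verify
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

  actualbudget:
    image: actualbudget/actual-server   # verify against the project docs
    ports:
      - "5006:5006"
    volumes:
      - actual_data:/data

volumes:
  n8n_data:
  actual_data:
```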

    I’m currently looking into setting up Nextcloud and experimenting some more with presence detection for Home Assistant. I’m considering CO2 sensors, which will either tell me my home is ventilated properly, or which rooms are occupied.
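    The CO2-as-presence idea could be sketched as a Home Assistant threshold binary sensor. The entity id and the 800 ppm cutoff below are assumptions; tune them to your sensor and rooms:

```yaml
# configuration.yaml sketch: flag a room as occupied when CO2 rises.
# Entity id and threshold are hypothetical.
binary_sensor:
  - platform: threshold
    name: living_room_occupied
    entity_id: sensor.living_room_co2
    upper: 800   # ppm; above this, someone is probably in the room
```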