cross-posted from: https://programming.dev/post/9319044
Hey,
I am planning to implement authenticated boot, inspired by Pid Eins’ blog. I’ll be using pam_mount for /home/user. I need to check the integrity of all partitions.
I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck up. A while back I accidentally purged ‘/’ trying out timeshift, which was my fault. Should I use zfs/btrfs for /home/user? As for root, I’m considering luks+(zfs/btrfs) so that it can be restored to a blank state.
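The blank-state idea for root I have in mind is roughly this, assuming a btrfs layout with a dedicated root subvolume (subvolume names and mountpoints here are just placeholders, not something out of Pid Eins’ post):

```sh
# Right after a clean install, take a read-only snapshot of the pristine root subvolume:
btrfs subvolume snapshot -r /mnt/@root /mnt/@root-blank

# Later, to wipe root back to that blank state (from a live USB, or with the
# top-level volume mounted at /mnt), drop the current root and re-create it
# from the read-only snapshot:
btrfs subvolume delete /mnt/@root
btrfs subvolume snapshot /mnt/@root-blank /mnt/@root
```

The zfs equivalent would be a `zfs snapshot rpool/root@blank` right after install and a `zfs rollback -r rpool/root@blank` to reset.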
I think zfs is a pretty cool guy. Eh copy on write and doesn’t afraid of anything
Btrfs is the default on OpenSUSE and has worked great for me for 7 years. No issues.
Same here, but for only 1 year on my main machine and 6 years on my laptop. I looove snapper. It saved my ass so many times
Yes, it is great. For me, snapper rollback was an awesome onboarding experience to Linux. I was eager to try things I read online for tweaks and general exploration, and it brought me back to a working system after some custom kernel compiling gone awry, deleting the wrong file, etc.
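For reference, the rollback flow that saved me is roughly this (the snapshot numbers and file path are made up, and it assumes a snapper/btrfs layout where rollback is supported, like openSUSE’s default):

```sh
snapper list          # find the last known-good snapshot
snapper rollback 42   # create a new writable default subvolume based on snapshot 42
reboot                # boot into the rolled-back system

# For a single deleted or broken file there's also:
snapper undochange 41..42 /etc/some.conf
```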
I’ve been on btrfs for so many years, with nightly backups with restic, so I’ve been dragging my feet on snapper. Finally installed it a couple of weeks ago, and while I opened the config, I don’t think I changed anything. It’s worked so well, and the Arch package was so well done, that I’d forgotten I had it installed until, a few days later, I noticed it was taking a snapshot every time before I installed something. It’s shockingly good, and I don’t understand why btrfs+snapper(+grub-btrfs) isn’t the default on installs now.
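For the curious, the whole setup amounts to something like this on Arch (package and unit names from memory, so double-check them):

```sh
pacman -S snapper snap-pac grub-btrfs   # snap-pac hooks pacman to snapshot before/after transactions
snapper -c root create-config /         # create the default "root" config for /
systemctl enable --now grub-btrfsd      # keep GRUB's snapshot boot entries up to date
snapper list                            # the pre/post pairs from installs show up here
```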
Been using BTRFS on a couple NAS servers for 4+ years. Also did raid1 BTRFS over two USB hard drives connected to a Pi4 (yes this should be absolutely illegal).
The USB raid1 had a couple of checksum errors last year that were easily fixed via scrub, and the other two servers have been running without any issues. I assume it’s all been fine because they’re connected to a UPS and I run weekly scrubs.
I enjoyed CoW and snapshots so much that I’ve been using it on my main Arch install’s (I use Arch btw) root drive and storage drives (in BTRFS raid1) for the last 4 months without issue.
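The raid1-over-USB part plus the weekly scrubs boils down to something like this (device paths and mountpoint are placeholders):

```sh
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb   # mirror both metadata and data across the two drives
mount /dev/sda /mnt/pool                         # mounting either member brings up the whole array

btrfs scrub start /mnt/pool    # re-read everything, verify checksums, repair from the good copy
btrfs scrub status /mnt/pool   # progress and error counts (this is where the fixed errors showed up)
```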
deleted by creator
[…] there were rumours some French guy got arrested and had his LUKS encryption fail on him, so you never know.
deleted by creator
Or it’s possible that he reused passwords.
deleted by creator
Been using Btrfs for a year. I once had an issue where my filesystem went read-only; I went to the Btrfs subreddit, and after some troubleshooting it turned out that my SSD was dying. I couldn’t believe it at first because my SMART report was perfectly clean and the SSD was only 2 years old, but a few hours later SMART began reporting thousands of dead sectors.
The bloody thing was better than SMART at detecting a dying SSD lol.
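If anyone wants to compare the two views themselves, roughly (mountpoint and device here are placeholders):

```sh
btrfs device stats /      # per-device read/write/flush/corruption/generation error counters
smartctl -a /dev/nvme0    # what SMART thinks about the same drive
```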
My only complaint with btrfs, from when I used to run it, is that KVM disk performance was abysmal on it. Otherwise I had no issues with the fs.
Really? Were the virtual disks running ext4?
Yes.
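For anyone hitting the same thing: as far as I know, the usual workaround is to disable copy-on-write for the directory holding the VM images on the host, since a qcow2 (or a filesystem inside a raw image) on top of btrfs means CoW stacked on CoW. Paths here are just examples:

```sh
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images    # new files created in here get the No_COW attribute
lsattr -d /var/lib/libvirt/images    # verify the 'C' flag is set

# Note: the flag only applies to files created afterwards; existing images
# need to be copied into the directory again to lose their CoW data.
```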