So at a pool level you might not be able to turn it off once it's turned on, but you can also turn off deduplication per file system, including in properties you set when receiving a stream.

When I nuked the pool and recreated it, it was all fine though. Deleting files, scrubbing, and putting new files on resulted in them having the exact same failure. It even did this after I deleted everything, because the prune couldn't remove the bad underlying entries while it was having a media failure. So dedup, unfortunately, made it REALLY suck to fix, because I couldn't even copy a new version of the file to the same pool! It kept deduplicating the copy against the old bad data, and then I couldn't read the copy either.

One issue I had is that due to what I eventually tracked down as power issues, I had some corrupted data written to disk under my ZFS pool (at the media write layer), and I had dedup on. zfs send/recv sends the blocks as written to the original filesystem (which is why it can be so fast: it doesn't have to 'understand' what is happening or defragment things the way reading a file does), but that also means undoing or applying dedup won't work correctly unless it's screwing with things you probably don't want it to.

That was Really Hard for ZFS to implement, and I'm not sure how meaningful such a combination is in practice.

I haven't used a combination of encryption and deduplication.

You can come up with a mental model of what you're working with, dealing with successive collections of files. Even though files are just another abstraction over blocks, it's an abstraction that leaks less without the deduplication.

I don't know, I don't have that experience.

For most of us, file-based deduplication might work out better, and is almost certainly easier to understand. You might see that in database applications, depending on log structure. You see this in collections of disk images for virtual machines.
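The per-dataset override mentioned above can be sketched like this (a hedged sketch: the pool and dataset names are made up, and `zfs receive -o` needs a reasonably recent OpenZFS):

```shell
# Dedup is a per-dataset property, so it can be disabled on one
# filesystem even if the rest of the pool still uses it
# ("tank/data" is a hypothetical dataset name):
zfs set dedup=off tank/data

# It can also be overridden in the properties of a received stream:
zfs send tank/data@snap | zfs receive -o dedup=off tank/restore
```

Note that turning dedup off only affects new writes; blocks that were already deduplicated stay referenced in the dedup table until they are freed.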
Small changes to single, large files see some advantage with block-based deduplication.

These days, the pool could fit on a portable SSD that would fit in my pocket.

Careful, file-based dedup on top of ZFS might be more effective. It's a backup server, and gets about 2x with deduplication. At the time, it was the difference between slow and impossible: I couldn't afford another 2x of disks. My first ever large (> 4 TB) ZFS pool is still stuck with dedup.

There are many more features, and it is probably overkill for a desktop.

I'll probably have to play with some commands to do this in Linux (no beadm command). When updating Solaris we create a BE (Boot Environment) that the update is applied to, then when ready you reboot. Or, as in my desktop setup, I have a Solaris boot that shares a 1 TB zpool with whatever other OS I decide to work in that day.

It is endian-independent: a SPARC zpool can be imported and mounted on an x86 system. Snapshots can be "sent" to another system, used to dig out a missing file, or just rolled back. It is easily extendable: add a LUN and tell it to use it. Datasets can be backed up with snapshots (point in time), which then just record changes. When datasets are created for FS use it is near instantaneous, as the space is allocated but not touched (see metaslabs). If it detects an error (bit rot) while scanning (scrubbing) it will repair it. All disks are placed in a pool (raid 1, 0, 3, 5, etc.) and imported as a whole. It is self-healing: no write is committed until checksums are performed (COW). It is best used with mirroring (of which it has many options and configurations).

I want to know about its properties and advantages; perhaps I can learn something new? I do not doubt that and I also do not want to discuss it.

I have used it since 2005, so am very comfortable with it and all its functions.

Meanwhile, stepping through the manual process and playing with the zfs_installer ( ) was interesting.
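The snapshot workflow described above can be sketched roughly like this (all pool, dataset, and host names here are hypothetical):

```shell
# Point-in-time snapshot; creation is near-instant thanks to COW.
zfs snapshot tank/home@monday

# "Send" it to another system over ssh ("backuphost" is made up):
zfs send tank/home@monday | ssh backuphost zfs receive backup/home

# Later, send only the blocks changed since the first snapshot:
zfs snapshot tank/home@friday
zfs send -i tank/home@monday tank/home@friday | \
    ssh backuphost zfs receive backup/home

# Dig a single missing file out of the hidden snapshot directory:
cp /tank/home/.zfs/snapshot/monday/important.txt /tank/home/

# Or roll the whole dataset back to the latest snapshot:
zfs rollback tank/home@friday
```

Incremental sends (`-i`) are what makes this workflow cheap: only the changed blocks cross the wire, without ZFS having to walk the file tree.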
The only difference is that libzfs2linux no longer exists, and needs dropping from the install line.

I'd really prefer to use LM over vanilla Ubuntu. And the correct solution appears to be the simplest.

Is it just the sources.list to include vanessa, or do I need to invoke ubiquity with some switch to accept the partition layout of my SSD? My question is: what do I need to change in the main Ubuntu 22.04 ZFS install guide to make it a LM 21 install?

It has been running for over half an hour now. However, while trying to install (after adding the ZFS tools and selecting the Advanced option for ZFS) it has only formatted the target disk into 2 partitions, not 4, and is going very, very slowly (it is an SSD). I don't want or need to use an encrypted root pool.

I've had a look and found various references like.

I've been playing with ZFS root on Ubuntu, following.
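The package change described above might look something like this (a hedged sketch, not the guide's full package list; it assumes the Ubuntu 22.04 / Mint 21 archives, where the library ships as libzfs4linux and is pulled in automatically as a dependency):

```shell
# Adapting the Ubuntu 22.04 root-on-ZFS install step: drop
# libzfs2linux, which no longer exists as a package, and let the
# remaining tools pull in the current library for you.
apt install --yes zfsutils-linux zfs-initramfs
```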