Monday, April 28, 2008
Snap Upgrade rocks!
I've been a Live Upgrade convert for about a year and a half. I have four boot environments on my workstation at home, two for Solaris Express and two for testing OpenSolaris builds. (Yes, two would be sufficient, and that's what I used to do until the second time I put an experimental build on top of my "stable" boot environment. It's impossible to make that same mistake if I have two stable boot environments.) But as great as Live Upgrade is, it has its shortcomings. For one, you have to reserve disk partitions for boot environments. You can't retrofit Live Upgrade onto an existing system. For another, initializing a boot environment takes time, as it's making a second copy of the OS. (And I'm sure other people have other complaints, but those are my main two.)
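For anyone who hasn't lived with Live Upgrade, the workflow behind those complaints looks roughly like this (the slice name is just a stand-in for whatever partition you reserved):

# lucreate -c stable -n testing -m /:/dev/dsk/c0t0d0s4:ufs
# luactivate testing
# init 6

The -m argument is where the pre-reserved partition comes in, and the copy lucreate makes of the root filesystem is the part that takes forever.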
With the upcoming Indiana release, Live Upgrade has been replaced with Snap Upgrade. (As I understand it, Live Upgrade can't be open-sourced for some reason.) Snap Upgrade is ZFS-based, which has a direct impact on my two main complaints. There's no longer a need to reserve partitions for additional boot environments, as boot environments are merely ZFS clones of the existing root filesystem. This saves disk space, as the copy-on-write nature of ZFS means that multiple boot environments will likely point at most of the same disk blocks. It also means that initializing a boot environment is effectively instantaneous:
# date && beadm create foo && date
Mon Apr 28 10:39:03 EDT 2008
Mon Apr 28 10:39:10 EDT 2008
#
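As far as I can tell, beadm create is doing ordinary ZFS snapshot-and-clone work under the hood. Done by hand it would look something like this, assuming the rpool/ROOT layout Indiana uses (the dataset names here are illustrative):

# zfs snapshot rpool/ROOT/opensolaris@foo
# zfs clone rpool/ROOT/opensolaris@foo rpool/ROOT/foo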
Boot environments are managed with the new beadm(1M) command. Its subcommands are a bit simpler than the Live Upgrade commands, and their ancestry shows: they read like a mix of ZFS and Live Upgrade operations:
NAME
     beadm - utility for managing zfs boot environments

SYNOPSIS
     /usr/sbin/beadm
     beadm create [-a] [-e non-activeBeName | beName@snapshot]
          [-o property=value] ... [-p zpool] beName
     beadm create beName@snapshot
     beadm destroy [-f] beName | beName@snapshot
     beadm list [-a | -ds] [-H] [beName]
     beadm mount beName mountpoint
     beadm unmount beName
     beadm rename beName newBeName
     beadm activate beName
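Putting the subcommands together, the upgrade workflow I expect to settle into is roughly the following; the BE name is made up, and I've elided the actual upgrade of the mounted image:

# beadm create snv_88
# beadm mount snv_88 /mnt
  (... upgrade the image mounted at /mnt ...)
# beadm unmount snv_88
# beadm activate snv_88
# init 6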
And a simple demonstration:
# beadm list
BE          Active Active on Mountpoint Space
Name               reboot               Used
----        ------ --------- ---------- -----
opensolaris yes    yes       legacy     2.74G
initial     no     no        -          60.5K
# for i in 1 2 3 4 5 ; do
> beadm create foo${i}
> done
# beadm list
BE          Active Active on Mountpoint Space
Name               reboot               Used
----        ------ --------- ---------- -----
foo1        no     no        -          81.5K
opensolaris yes    yes       legacy     2.74G
initial     no     no        -          60.5K
foo5        no     no        -          81.5K
foo2        no     no        -          81.5K
foo3        no     no        -          80.5K
foo4        no     no        -          81.5K
# for i in 1 2 3 4 5 ; do
> beadm destroy -f foo${i}
> done
# beadm list
BE          Active Active on Mountpoint Space
Name               reboot               Used
----        ------ --------- ---------- -----
opensolaris yes    yes       legacy     2.74G
initial     no     no        -          60.5K
#
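The Space Used column is the payoff: five extra boot environments cost a few hundred kilobytes total, not five copies of the OS. If you want to see where those kilobytes live, the ZFS side of the story is one command away (again assuming the rpool/ROOT layout):

# zfs list -r rpool/ROOT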