With the disk partitions all created, now it's time to create the ZFS storage pool, called a zpool. This is the virtual device, or container, that the ZFS data sets will exist within. If you are moving over from a hardware RAID setup, think of it as the RAID container. In this example we are using a mirror. ZFS on FreeBSD has support for mirror (2 or more disks), raidz (3 or more, compare to RAID 5), raidz2 (4 or more, compare to RAID 6), and raidz3 (5 or more, think RAID 6 plus another parity disk). If you're wondering how a mirror can use more than 2 disks, when a mirror normally means 2 drives identical to each other, just think of it as an X-way mirror: each disk you add holds another full copy of the data, in case you need more redundancy. No matter which option you choose, ZFS is more than just a mirror; it includes additional features like continuous integrity checking and automatic repair. It's really quite flexible, and has many features beyond the scope of this document.
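For instance, an X-way mirror is built just by listing more devices in the mirror vdev. The command below is only a sketch of a 3-way mirror, assuming a hypothetical third partition labeled zroot2 in addition to the two this guide actually creates:

zpool create zroot mirror /dev/gpt/zroot0 /dev/gpt/zroot1 /dev/gpt/zroot2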
The commands below first create the zpool; you will receive an error, which can be ignored. Creating a zpool also creates a ZFS data set at the root of the pool, which ZFS tries to mount at /+name, /zroot in this case. Since root is a read-only file system on the live CD, that mount fails and produces the error. With this setup we don't want the root data set mounted anyway; it's just a container for other things, so we set its mountpoint to none in the second line.
zpool create -f zroot mirror /dev/gpt/zroot*
zfs set mountpoint=none zroot
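If you want to confirm the pool came out the way you expect, these standard ZFS commands work as a quick sanity check, showing the pool layout and the mountpoint we just set:

zpool status zroot
zfs get mountpoint zroot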
Note: -f, forces use of vdevs even if they appear to be in use; this just tells it that if the name zroot, or any of the disk partitions, appears to be in use, to use them anyway. It shouldn't be needed, but if you are reusing old disks, this makes sure that any old ZFS data that may still be there gets overwritten.
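If you would rather wipe stale ZFS metadata explicitly instead of relying on -f, the zpool labelclear command can do that; just a sketch, run once per old partition:

zpool labelclear -f /dev/gpt/zroot0
zpool labelclear -f /dev/gpt/zroot1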
Note: mirror, is of course telling it the type of zpool. If you have more disks, you may consider using raidz, raidz2, or raidz3 instead; I won't get into the advantages, disadvantages, and trade-offs of the different options here, but the syntax is the same, as sketched below.
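As an illustration only, assuming three partitions labeled zroot0 through zroot2 (this guide only creates two), a raidz pool would be created like this:

zpool create -f zroot raidz /dev/gpt/zroot0 /dev/gpt/zroot1 /dev/gpt/zroot2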
Note: /dev/gpt/zroot*, if you remember the disk partitioning, we labeled the ZFS partitions zroot0 and zroot1. Here we tell it to use zroot* under the /dev/gpt/ directory, which means to use every GPT label starting with zroot.
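In other words, the shell expands the glob before zpool sees it, so with the two labels this guide created, the create command is equivalent to writing out both partitions explicitly:

zpool create -f zroot mirror /dev/gpt/zroot0 /dev/gpt/zroot1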