Ezjail host
Basic install with mfsbsd
After receiving the server from Hetzner I boot it using the rescue system, which puts me at an mfsbsd prompt. I then edit the zfsinstall script /root/bin/zfsinstall and add "usr" to FS_LIST near the top of the script. I do this because I like to have /usr as a separate ZFS dataset.
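The change itself is a one-liner; the relevant variable near the top of /root/bin/zfsinstall ends up looking roughly like this (the exact stock value of FS_LIST may differ between mfsbsd versions, so treat this as a sketch):

# near the top of /root/bin/zfsinstall
# stock value (may vary between versions):
# FS_LIST="var tmp"
# after adding usr, so /usr becomes its own dataset:
FS_LIST="var tmp usr"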
I then run the zfsinstall script as shown below. I am going to export the majority of the available disk space as a ZVOL, which will be used for a GELI device with another ZFS pool on top. This pool will house the actual jails and data.
Note that the disks are new-ish (Power_On_Hours is 73 on both drives according to smartctl, which the mfsbsd author has been clever enough to include in mfsbsd), but I still found an MBR partition table that needed to be deleted first. This can be done with the destroygeom command as shown below:
[root@rescue ~]# zfsinstall -d ad4 -d ad6 -r mirror -s 5G -t /nfs/mfsbsd/9.0-amd64-zfs.tar.xz
Error: /dev/ad4 already contains a partition table.
=> 63 5860533105 ad4 MBR (2.7T)
63 5860533105 - free - (2.7T)
You may erase the partition table manually with the destroygeom command
[root@rescue ~]# destroygeom
Usage: /root/bin/destroygeom [-h] -d geom [-d geom ...] [-p zpool ...]
[root@rescue ~]# destroygeom -d ad4 -d ad6
Destroying geom ad4:
Destroying geom ad6:
[root@rescue ~]# zfsinstall -d ad4 -d ad6 -r mirror -s 5G -t /nfs/mfsbsd/9.0-amd64-zfs.tar.xz
Creating GUID partitions on ad4 ... done
Configuring ZFS bootcode on ad4 ... done
=> 34 5860533101 ad4 GPT (2.7T)
34 2014 - free - (1.0M)
2048 128 1 freebsd-boot (64K)
2176 10485760 2 freebsd-swap (5.0G)
10487936 62914560 3 freebsd-zfs (30G)
73402496 5787130639 - free - (2.7T)
Creating GUID partitions on ad6 ... done
Configuring ZFS bootcode on ad6 ... done
=> 34 5860533101 ad6 GPT (2.7T)
34 2014 - free - (1.0M)
2048 128 1 freebsd-boot (64K)
2176 10485760 2 freebsd-swap (5.0G)
10487936 62914560 3 freebsd-zfs (30G)
73402496 5787130639 - free - (2.7T)
Creating ZFS pool tank on ad4p3 ad6p3 ... done
Creating tank root partition: ... done
Creating tank partitions: var tmp usr ... done
Setting bootfs for tank to tank/root ... done
NAME USED AVAIL REFER MOUNTPOINT
tank 208K 29.3G 21K none
tank/root 88K 29.3G 25K /mnt
tank/root/tmp 21K 29.3G 21K /mnt/tmp
tank/root/usr 21K 29.3G 21K /mnt/usr
tank/root/var 21K 29.3G 21K /mnt/var
Extracting FreeBSD distribution ...
done
Writing /boot/loader.conf... done
Writing /etc/fstab...Writing /etc/rc.conf... done
Copying /boot/zfs/zpool.cache ... done
Installation complete.
The system will boot from ZFS with clean install on next reboot
You may type "chroot /mnt" and make any adjustments you need.
For example, change the root password or edit/create /etc/rc.conf for
for system services.
WARNING - Don't export ZFS pool "tank"!
[root@rescue ~]#
Post-install configuration (before reboot)
Before rebooting into the installed FreeBSD I need to make certain I can reach the server over SSH after the reboot. This means adding network settings to /etc/rc.conf along with sshd_enable="YES". I also change PermitRootLogin to yes in /etc/ssh/sshd_config. Finally I set the root password. All of these steps are essential if I am going to have any chance of logging in after the reboot. Most of these changes can be done from the mfsbsd shell, but the password change requires a chroot into the newly installed environment.
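For reference, the additions to /etc/rc.conf look something like the following. The interface name and addresses below are placeholders; substitute the interface and the static IP, netmask and gateway Hetzner assigned to this server:

hostname="latency.example.com"                             # placeholder domain
ifconfig_re0="inet 192.0.2.10 netmask 255.255.255.224"     # placeholder interface and address
defaultrouter="192.0.2.1"                                  # placeholder gateway
sshd_enable="YES"

And the one change in /etc/ssh/sshd_config:

PermitRootLogin yes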
I use the chroot command but specify csh as the shell, since bash is not installed in /mnt:
[root@rescue ~]# chroot /mnt/ csh
rescue# ee /etc/rc.conf
rescue# ee /etc/ssh/sshd_config
rescue# passwd
New Password:
Retype New Password:
rescue#
So, the network settings are sorted, the root password is set, and root is permitted to SSH in. Time to reboot (this is the exciting part).
Encrypted ZVOL
[tykling@latency ~]$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfstank 1.41G 72.9G 21K none
zfstank/root 1.41G 72.9G 1.32G /
zfstank/root/tmp 35K 72.9G 35K /tmp
zfstank/root/var 94.4M 72.9G 94.4M /var
[tykling@latency ~]$ sudo zfs create -V 65G zfstank/encrypted
[tykling@latency ~]$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfstank 66.4G 7.89G 21K none
zfstank/encrypted 65G 72.9G 16K -
zfstank/root 1.43G 7.89G 1.34G /
zfstank/root/tmp 35K 7.89G 35K /tmp
zfstank/root/var 95.2M 7.89G 95.2M /var
[tykling@latency ~]$ ls -l /dev/zvol/zfstank/encrypted
crw-r----- 1 root operator 0, 81 Dec 8 19:42 /dev/zvol/zfstank/encrypted
[tykling@latency ~]$ sudo geli init -s 4096 -K /root/encrypted.key /dev/zvol/zfstank/encrypted
Enter new passphrase:
Reenter new passphrase:
[tykling@latency ~]$ sudo geli attach -k /root/encrypted.key /dev/zvol/zfstank/encrypted
Enter passphrase:
[tykling@latency ~]$ sudo zpool create cryptopool /dev/zvol/zfstank/encrypted.eli
[tykling@latency ~]$ sudo zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
cryptopool 64.5G 572K 64.5G 0% ONLINE -
zfstank 75.5G 1.73G 73.8G 2% ONLINE -
[tykling@latency ~]$ zpool status cryptopool
pool: cryptopool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
cryptopool ONLINE 0 0 0
zvol/zfstank/encrypted.eli ONLINE 0 0 0
errors: No known data errors
[tykling@latency ~]$
[tykling@latency ~]$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
cryptopool 352K 63.5G 112K /cryptopool
zfstank 66.9G 7.45G 21K none
zfstank/encrypted 65G 72.5G 32K -
zfstank/root 1.87G 7.45G 1.78G /
zfstank/root/tmp 35K 7.45G 35K /tmp
zfstank/root/var 95.3M 7.45G 95.3M /var
[tykling@latency ~]$ sudo zfs set mountpoint=none cryptopool
[tykling@latency ~]$ sudo zfs create -o compression=gzip -o mountpoint=/usr/jails cryptopool/jails
[tykling@latency ~]$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
cryptopool 536K 63.5G 112K none
cryptopool/jails 112K 63.5G 112K /usr/jails
zfstank 66.9G 7.44G 21K none
zfstank/encrypted 65G 72.4G 2.17M -
zfstank/root 1.88G 7.44G 1.79G /
zfstank/root/tmp 35K 7.44G 35K /tmp
zfstank/root/var 95.3M 7.44G 95.3M /var
[tykling@latency ~]$