In this guide we’ll go over the basics of bringing up a fresh 45Drives chassis with the following:
- Ubuntu 16.04
- ZFS on Linux
- iSCSI targeting with targetcli
- 45Drives LSI driver installs
- 45Drives drive naming and /dev settings
- RAID1 boot drives for Supermicro
While portions of this guide are specific to 45Drives hardware, the ZFS, Ubuntu, and iSCSI portions apply to any device.
Getting started: installing Ubuntu
First, let’s connect up iKVM on the Supermicro. Connect a DHCP-enabled Cat6 cable like so:
Next, head to the DHCP IP address picked up by the iKVM; the default login is:
- Username: ADMIN
- Password: ADMIN
Hop on over to ‘Remote Control’ -> ‘Console Redirection’ -> ‘Launch Console’ and hit allow on any Java prompts that appear.
After the console appears, select ‘virtual media’ -> ‘virtual storage’…
And then select ‘ISO’ from the dropdown, click ‘Open Image’ and browse to the Ubuntu ISO you downloaded earlier; finally, click ‘Plug in’ to mount the ISO as a virtual disk on the 45Drives box.
On reboot, mash the ‘DEL’ key to enter BIOS setup, then head over to the ‘Advanced’ tab, sSATA configuration:
Make sure your sSATA settings match the below _EXACTLY_:
Then set your boot mode to UEFI. From here on out we’ll have to use UEFI, as we’re setting up a hardware RAID1 for the boot drives.
Reboot and, when prompted, enter the Intel RAID setup (‘CTRL + I’); from there you should see at least two disks. Select ‘Create RAID Volume’.
At the RAID creation screen, give the volume a name (any name is fine), make sure it’s set to RAID1, then create the array.
Once complete, mash the ‘F11’ key to enter the boot menu, and from there select ‘UEFI: virtual cdrom’. **IMPORTANT** Be sure to select the UEFI version; the non-UEFI version will fail to install properly on the RAID1 setup.
I’m going to assume you know how to install Ubuntu Server; just make sure to pick the ‘raid1’ volume during install, and enable SSH to make your life easier later on. By default Ubuntu will ‘piggyback’ off of the same network connection as iKVM, which is fine for SSH management access for our purposes. Let’s wrap up the Ubuntu install portion of this guide with some software updates:
sudo apt-get update && sudo apt-get upgrade -y
Installing 45Drives drivers and getting management online
Let’s start by pulling down the LSI RAID card drivers; if you are using RocketRAID cards, a similar set of steps applies:
We’ll need unzip so we can unpack the install files:
sudo apt-get install unzip && unzip Linux_Driver_RHEL6-7_SLES11-12_P14.zip
Then let’s hop into the unzipped folder and install the Ubuntu dpkg driver:
cd Linux_Driver_RHEL6-7_SLES11-12_P14/mpt3sas_rhel5_rel/ubuntu/rpms-1 && sudo dpkg -i mpt3sas-15.00.00.00-1_Ubuntu16.04.amd64.deb
That’s it, the driver is installed; reboot to verify it loads properly and picks up all the drives. A good additional step is to install the management software for the RAID card (for email alerting and such):
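If you want to sanity-check the driver after the reboot, a quick way (using the stock `lsmod`/`dmesg` tooling) is:

```shell
# Confirm the mpt3sas kernel module is loaded
lsmod | grep mpt3sas

# Check the kernel log for the driver version and attached drives
dmesg | grep -i mpt3sas | head
```

If `lsmod` comes back empty, the module never loaded and the dpkg install is worth revisiting before you go any further.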
wget https://docs.broadcom.com/docs-and-downloads/host-bus-adapters/host-bus-adapters-common-files/sas_sata_12g_p14/SAS3IRCU_P14.zip && unzip SAS3IRCU_P14.zip && cd SAS3IRCU_P14/sas3ircu_linux_x64_rel && sudo chmod +x sas3ircu
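With the binary marked executable, you can poke at the controller. `LIST` and `DISPLAY` are the two sas3ircu commands you’ll reach for most; the controller number below is an assumption, so run `LIST` first to confirm yours:

```shell
# List all LSI SAS3 controllers sas3ircu can see
sudo ./sas3ircu LIST

# Show detailed controller, volume, and drive info for controller 0
sudo ./sas3ircu 0 DISPLAY
```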
Let’s get ZFS installed on Ubuntu
sudo apt-get install zfsutils-linux
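A quick smoke test that the module and userland installed cleanly; before any pools exist, `zpool list` should simply report that there are none:

```shell
# Load the ZFS kernel module and confirm the userland can talk to it
sudo modprobe zfs
sudo zpool list
```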
OK, now let’s get those drives numbered for easy drive replacement/swapping. First, download the 45Drives tools from GitHub onto the box, unzip them, and move them into the correct folder:
mkdir 45drives && cd 45drives
wget https://github.com/bkelly16/45Drives/archive/master.zip && unzip master.zip
sudo mkdir -p /opt/gtools/bin
cd 45Drives-master/gtools_v2.1 && sudo cp * /opt/gtools/bin && cd /opt/gtools/bin && sudo chmod +x *
Now, from /opt/gtools/bin, run dmap to label the drives:
sudo ./dmap -c lsi -s 30 9305
You can verify everything worked as expected by running lsdev, which gives a pretty output of the drives. I also usually give it an alias, so we don’t have to hunt for it every time:
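For example, a bash alias like the following (the path matches where we copied the tools above; adjust it if you installed elsewhere, and drop the sudo if your user can read the devices directly):

```shell
# Make lsdev available from anywhere for the current user
echo "alias lsdev='sudo /opt/gtools/bin/lsdev'" >> ~/.bashrc
source ~/.bashrc
```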
Nice! Now let’s create our first “RAID” ZFS pool. We’ll be creating a ‘RAID 6’-style pool (raidz2) with a single spare drive, and we’ll call it ‘brick1’:
sudo zpool create -f brick1 raidz2 1-1 1-2 1-3 1-4 1-5 1-6 1-7 1-8 1-9 1-10 1-11 1-12 spare 1-13
You can verify it worked by running the following command to check raid/drive status:
sudo zpool status
We also want to tune our new ZFS volume for best performance:
- atime=off: don’t update the access time on every single read/write, since this is a backend box.
- xattr=sa: store Linux file attributes the faster way, as system attributes.
- exec=off: don’t allow files on this volume to execute locally.
- sync=standard: disabling sync can help performance. ‘Standard’ means ZFS honors the application’s sync calls when deciding when to write. If you’re using ZFS directly with VMware you’d want to disable it, as VMware doesn’t play nice with sync calls; if you’re using it with a Windows iSCSI implementation (like StarWind), standard is fine.
- compression=lz4: use the lz4 compression algorithm.
- redundant_metadata=all: ‘most’ keeps fewer redundant copies of some metadata, greatly improving random write performance at the cost of a small chance of metadata corruption. In practice the risk is very low, but for first-timers with ZFS I’m going to recommend leaving it at ‘all’.
sudo zfs set atime=off brick1
sudo zfs set xattr=sa brick1
sudo zfs set exec=off brick1
sudo zfs set sync=standard brick1
sudo zfs set compression=lz4 brick1
sudo zfs set redundant_metadata=all brick1
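You can confirm the settings took with `zfs get`, which prints each property’s current value and where it came from:

```shell
# Show the tuned properties and the source of each value
sudo zfs get atime,xattr,exec,sync,compression,redundant_metadata brick1
```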
Adding in iSCSI via targetcli
The final step is to set up iSCSI targeting via targetcli. If you’ve never used targetcli before it may seem confusing with its unique commands and structure, but given a bit of time it’ll become old hat.
First let’s get targetcli installed:
sudo apt-get install targetcli -y
Now let’s create a new zvol to point the target at:
sudo zfs create -o compression=lz4 -b 32K -V 8T brick1/iscsi
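The zvol shows up as a block device under /dev/zvol, which is the path we’ll hand to targetcli in a moment; you can check that it exists:

```shell
# The zvol appears as a device node under /dev/zvol/<pool>/<name>
ls -l /dev/zvol/brick1/iscsi

# List all volumes (as opposed to filesystems) on the box
sudo zfs list -t volume
```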
Then enter targetcli and set up the backstore:
sudo targetcli
cd backstores
iblock/ create name=brick1_backend dev=/dev/zvol/brick1/iscsi
saveconfig
(answer ‘Y’ when prompted to save)
After the backstore is created (we pointed targetcli at the zvol we created earlier), we can start on the iSCSI target portion:
cd iblock/brick1_backend
/iscsi create
You’ll want to ‘cd’ to the newly created /iscsi/iqn, then ‘cd’ again into tpg1, and then create the LUN and portal:
luns/ create /backstores/iblock/brick1_backend
portals/ create 192.168.1.139
Finally, let’s blow away some security settings (you should set up CHAP etc. properly later on) and save changes:
set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
saveconfig
(answer ‘Y’ when prompted to save)
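Before moving on, it’s worth eyeballing the whole configuration tree to confirm the backstore, LUN, and portal all line up. Depending on your targetcli build you can do this non-interactively; on older builds, run `ls` from the root of the targetcli shell instead:

```shell
# Print the full targetcli object tree (backstores, targets, LUNs, portals)
sudo targetcli ls
```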
And that’s it! Just point the Windows iSCSI initiator at the portal IP we created moments ago, and it should find the zvol we made. StarWind portion coming soon…