Building a 45drives backend for Starwind

Intro

In this guide we’ll be going over the basics of bringing up a fresh 45drives chassis with the following:

  • Ubuntu 16.04
  • ZFS on Linux
  • iSCSI targeting with targetcli
  • 45drives LSI driver installation
  • 45drives drive naming and /dev settings
  • RAID1 boot drives on the Supermicro board

While portions of this guide will be specific to 45drives hardware, the ZFS, Ubuntu, and iSCSI portions are applicable to any device.

 

Getting started, installing Ubuntu:

First, let’s connect up the iKVM on the Supermicro. Connect a DHCP-enabled Cat6 cable like so:

 

Next, head to the DHCP IP address picked up by the iKVM. The default login is:

  • Username: ADMIN
  • Password: ADMIN

 

 

Hop on over to ‘remote control’ -> ‘console redirection’ -> ‘launch console’ and hit allow on any Java prompts that appear.

 

 

After the console appears, select ‘virtual media’ -> ‘virtual storage’…

 

Then select ‘ISO’ from the dropdown, click ‘Open Image’ and browse to the Ubuntu ISO you downloaded earlier, and finally click ‘Plug in’ to mount the ISO as a virtual disk on the 45drives box.

 

On reboot, mash the ‘DEL’ key to enter BIOS setup, then head over to the ‘Advanced’ tab, sSATA Configuration:

 

 

Make sure your sSATA settings match the below _EXACTLY_:

 

 

Then set your boot mode to UEFI. From here on out we’ll have to use UEFI, as we are going to set up a hardware RAID1 for the boot drives.

 

 

Reboot, and when prompted enter the Intel RAID setup with ‘CTRL + I’. From there you should see at least two disks. Select ‘Create RAID Volume’.

 

 

Once at the RAID creation screen, give it a name (any name is fine) and make sure it’s set to RAID1, then create the array.

 

 

 

Once complete, mash the ‘F11’ key to enter the boot menu, and from there select ‘UEFI: virtual cdrom’. **IMPORTANT** Be sure to select the UEFI version; the non-UEFI version will fail to install properly on the RAID1 setup.

 

I’m going to assume you know how to install Ubuntu Server; just make sure to pick the ‘raid1’ volume during the install and also enable SSH to make your life easier later on. By default Ubuntu will ‘piggyback’ off the same network connection as the iKVM, which is fine for SSH management access for our purposes. Let’s wrap up the Ubuntu install portion of this guide with some software updates:

sudo apt-get update && sudo apt-get upgrade -y

 

Install 45drives drivers and get management online

Let’s start by pulling down the LSI RAID card drivers. If you are using RocketRAID cards, a similar set of steps applies:

wget https://docs.broadcom.com/docs-and-downloads/host-bus-adapters/host-bus-adapters-common-files/sas_sata_12g_p14/Linux_Driver_RHEL6-7_SLES11-12_P14.zip

 

We’ll need unzip so we can unpack the install files:

sudo apt-get install unzip && unzip Linux_Driver_RHEL6-7_SLES11-12_P14.zip

 

Then let’s hop into the unzipped folder and actually install the Ubuntu dpkg driver:

cd Linux_Driver_RHEL6-7_SLES11-12_P14/mpt3sas_rhel5_rel/ubuntu/rpms-1 && sudo dpkg -i mpt3sas-15.00.00.00-1_Ubuntu16.04.amd64.deb

 

That’s it, the driver is installed. Reboot to verify it loads properly and picks up all the drives. A good additional step is to install the management software for the RAID card (for health monitoring, alerting, and such).
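
Before moving on to that utility, a couple of quick generic checks (plain Linux commands, nothing 45drives specific) will confirm the driver is active after the reboot and that it sees the drives:

lsmod | grep mpt3sas
lsblk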

wget https://docs.broadcom.com/docs-and-downloads/host-bus-adapters/host-bus-adapters-common-files/sas_sata_12g_p14/SAS3IRCU_P14.zip
unzip SAS3IRCU_P14.zip && cd SAS3IRCU_P14 && cd sas3ircu_linux_x64_rel
sudo chmod +x sas3ircu
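
With the binary in place you can poke at the HBA directly; for example (subcommands per Broadcom’s sas3ircu documentation, and the controller index may differ on your box):

sudo ./sas3ircu LIST
sudo ./sas3ircu 0 DISPLAY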

 

Install ZFS

Let’s get ZFS installed on Ubuntu (on 16.04 the userland package is zfsutils-linux):

sudo apt-get install zfsutils-linux
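
Before building a pool it’s worth confirming the ZFS kernel module actually loads (again, generic commands, nothing guide specific):

sudo modprobe zfs
lsmod | grep zfs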

 

OK, now let’s get those drives numbered for easy drive replacement/swapping. First, download the 45drives tools from GitHub onto the box, unzip them, and move them into the correct folder:

mkdir 45drives && cd 45drives
wget https://github.com/bkelly16/45Drives/archive/master.zip && unzip master.zip
sudo mkdir /opt/gtools && sudo mkdir /opt/gtools/bin
cd 45Drives-master/gtools_v2.1 && sudo cp * /opt/gtools/bin && cd /opt/gtools/bin && sudo chmod +x *

 

Now, from the gtools directory (/opt/gtools/bin, where the last step left us), run dmap to label the drives. The ‘-c lsi -s 30’ flags match an LSI controller in a 30-bay chassis, and the ‘9305’ on the second line is our answer to the prompt dmap gives (the LSI 9305 card in this box):

sudo ./dmap -c lsi -s 30
9305
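
Assuming dmap finished cleanly, you should now have slot-based aliases in /dev (1-1, 1-2, and so on; these are the names the zpool command below relies on). A quick look to confirm:

ls -l /dev/1-*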

 

You can verify everything worked as expected by running lsdev, which gives a nicely formatted readout of the drives. I also usually give it an alias so we don’t have to hunt for the full path every time:

alias 45drive='/opt/gtools/bin/lsdev'
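
To make the alias stick across sessions, append it to your shell profile as well:

echo "alias 45drive='/opt/gtools/bin/lsdev'" >> ~/.bashrc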

 

Nice! Now let’s create our first “raid” ZFS pool. We’ll be creating a RAID 6 style pool (raidz2) with a single spare drive, and we’ll be calling it ‘brick1’:

sudo zpool create -f brick1 raidz2 1-1 1-2 1-3 1-4 1-5 1-6 1-7 1-8 1-9 1-10 1-11 1-12 spare 1-13

 

You can verify it worked by running the following command to check raid/drive status:

sudo zpool status
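
Down the road, those slot-based names make drive swaps painless too. A purely hypothetical example, assuming the disk in slot 1-4 dies and you put a fresh drive into the same slot:

sudo zpool offline brick1 1-4
# physically swap the disk in bay 1-4, then:
sudo zpool replace brick1 1-4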

 

We also want to tune our new ZFS pool for best performance.

atime=off, don’t update the access time on every single read/write since this is a storage backend.

xattr=sa, store Linux file attributes the faster way, as system attributes.

exec=off, don’t allow files on this volume to be executed locally.

sync=standard, disabling this can help performance. Standard means ZFS honors the application’s sync calls about when to flush writes. If you’re using ZFS directly with VMware you would want to disable it, as VMware doesn’t play nice with sync calls. If you’re using it with a Windows iSCSI implementation (like Starwind), standard is fine.

compression=lz4, use the lz4 compression algorithm.

redundant_metadata=all, setting this to ‘most’ would keep fewer redundant copies of some metadata, greatly improving random write performance at the cost of a small chance of metadata corruption. In practice the corruption risk is very low, but for first timers with ZFS I’m going to recommend leaving it at ‘all’.

sudo zfs set atime=off brick1
sudo zfs set xattr=sa brick1
sudo zfs set exec=off brick1
sudo zfs set sync=standard brick1
sudo zfs set compression=lz4 brick1
sudo zfs set redundant_metadata=all brick1
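
You can double check that everything took with a single zfs get:

sudo zfs get atime,xattr,exec,sync,compression,redundant_metadata brick1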

 

Adding in iSCSI via targetcli

The final step is to set up iSCSI targeting via targetcli. If you’ve never used targetcli before it may seem confusing with its unique commands and structure, but give it a bit of time and it’ll feel old hat.

First lets get targetcli installed:

sudo apt-get install targetcli -y

 

Now let’s create a new zvol for the target to point at:

sudo zfs create -o compression=lz4 -b 32K -V 8T brick1/iscsi
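
The zvol should now show up as a block device under /dev/zvol, which is the exact path we’ll hand to targetcli in the next step:

ls -l /dev/zvol/brick1/iscsi
sudo zfs list -t volume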

 

Then enter targetcli and set up the backstore:

sudo targetcli
cd backstores
iblock/ create name=brick1_backend dev=/dev/zvol/brick1/iscsi
saveconfig
Y

 

After the backstore is created (we pointed targetcli at the zvol we created earlier), we can start on the iSCSI target portion:

cd iblock/brick1_backend
/iscsi create

 

You’ll want to ‘cd’ into the newly created IQN under /iscsi, then ‘cd’ again into tpg1, followed by creating the LUN and portal:

luns/ create /backstores/iblock/brick1_backend
portals/ create 192.168.1.139

 

Finally, let’s blow away some security settings (you should set up CHAP etc. properly later on) and save the changes:

set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
saveconfig
Y
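
If you want a final sanity check before leaving targetcli, ‘ls /’ from inside its shell prints the whole tree (backstore, target IQN, LUN, portal, and the attributes we just set):

ls /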

 

And that’s it! Just point the Windows iSCSI initiator at the portal IP we created moments ago and it should find the zvol we made. Starwind portion coming soon….

 

 

 


Comments

5 responses to “Building a 45drives backend for Starwind”

  1. Jack mills

    Please explain what the commands do, some are obvious while others are not, especially the tuning commands. If I’m understanding correctly, I wouldn’t run that set of commands at all; it looks like it’ll put data at risk.

    1. Michael Rickert

      I’ve updated the ‘tuning’ section with more details, hope that helps!

    2. Michael Rickert

      Edit2: I’ve changed the tuning in this guide to redundant_metadata=all and sync=standard after thinking on this for a while. The tuning set before made sense for my particular environment, but in the interest of keeping this guide ‘copy/paste’ friendly for the majority of visitors, I’m going to err on the side of caution. Thanks again for bringing this up!

  2. travisdh1

    Are you planning to use at least 3 of these for a back end? Seems very wasteful of hardware cost and iops to create a backend for something designed to eliminate backends in the first place otherwise.

    1. Michael Rickert

      So this choice came about as a result of testing several Linux based solutions (Ceph, Gluster) and having them perform very poorly for large Windows shares. They would either have issues with Windows perms or with large VM volume replication. Being Windows based, Starwind tends to handle Windows specific shares better. We also currently use it in a mirror config to avoid hardware outages: an entire 45drives SAN chassis could die and nothing would be affected on the frontend. Sure, you’re paying more initially than for a traditional SAN with dual controllers, but you get two independent backplanes and no vendor lock-in, which means adding drives etc. is only the cost of the drives themselves (less than 1/4 the price of, say, a ‘Dell EMC SSD’) and also no maintenance costs. So long term you get two chassis for full, true redundancy, cheap drives/parts for scaling up, and no maintenance costs to speak of. Also, I’m in no way affiliated with Starwind or 45drives, I’ve just found this to be by far the best solution for mirrored redundancy at scale.
