Migrate homelab NFS storage from Nexenta CE to Nutanix CE

I have been running Nexenta CE as my primary homelab storage for the past year or so, with mixed results. I use a combination of NFS and iSCSI, presenting a single NFS mount and a single iSCSI datastore to two ESXi hosts. I have been fairly happy with it, but every few months the NFS services fail due to locking issues, taking my NFS mounts down with them, which is not ideal. The only way I have found to fix it is to clear the NFS locks with the following commands in Nexenta expert mode:

  • svcadm clear nlockmgr
  • svcadm enable nlockmgr
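
A quick way to see whether the lock manager really is the culprit before clearing it is to ask SMF why the service is unhappy (these are standard illumos/Solaris commands, run from the same expert-mode shell):

#show why nlockmgr is degraded or in maintenance
svcs -xv nlockmgr

#confirm it is back online after the clear/enable above
svcs nlockmgr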

I was getting a little tired of troubleshooting this, so I decided to rebuild my homelab storage as a single-node Nutanix CE cluster (can you really call one node a cluster?) and export the storage as an NFS mount to my VMware hosts. This way I get to play with Nutanix CE and also (hopefully) end up with a more reliable and performant storage environment.

The hardware I am currently running Nexenta CE on is as follows:

  • ASRock C2750D4I with an 8-core Intel Atom Avoton processor
  • 16GB DDR3-1600 (PC3-12800) ECC RAM (2x 8GB DIMMs)
  • 2x 3TB WD Reds
  • 1x 240GB Crucial M500 SSD for ZFS L2ARC
  • 1x 32GB HP SSD for ZFS ZIL
  • 1x Dual Port Intel NIC Pro/1000 PT

The dual-port NIC is LACP bonded and used for my iSCSI datastore, while the onboard NICs on the ASRock C2750D4I are used for NFS and Nexenta management.

For the Nutanix CE rebuild I decided to make the following changes:

  • Remove the 32GB ZIL device
  • Remove the dual-port NIC, since I won't be using iSCSI
  • Use the two onboard NICs for Nutanix CE and export an NFS mount to my ESXi hosts

Since I run NFS storage on a separate VLAN from the rest of my lab environment, I decided to keep this design and install Nutanix CE on my NFS VLAN. Because my switch doesn't support L3 routing, I added an additional NIC to my virtualised router (pfSense) and configured an OPT1 interface on my storage network.

[Screenshot: pfSense gateways]

Now my storage network is routable, and I can manage Nutanix CE from any device on my network.

To prevent a chicken-and-egg situation if my pfSense VM goes down, I also added a separate management NIC to my desktop on the same NFS VLAN for out-of-band management.

One issue I hit is that Nutanix CE requires internet access before you can manage it, as it needs to register with your Nutanix Community account and connect to Nutanix Pulse (phone home). Since my upgrade was disruptive and my pfSense router VM was offline during the migration, I had no internet connection while installing Nutanix CE. I fixed this by temporarily reconfiguring my cable modem in NAT mode, moving it onto the same NFS VLAN, and configuring it as the default gateway.

Perhaps a way Nutanix could fix this in the future is to generate some kind of unique serial number after install, using your hardware as a thumbprint. You could then register the cluster from a separate internet-connected device, which would give you a code to unlock access, and phone home could begin working after, say, 24 hours. Just a suggestion to any Nutanix people reading this 🙂

To migrate from Nexenta to Nutanix CE I used the following steps:

  • Power down all VMs
  • Export the VMware VMs on the Nexenta datastores to OVF templates (a scripted alternative is sketched after this list)
  • Write the Nutanix CE image (ce-2015.06.08-beta.img) to a USB3 device using Win32DiskImager
  • Remove the old Nexenta iSCSI datastores and NFS mounts
  • Boot Nutanix CE from the USB disk; the Nexenta CE disks are wiped automatically
  • Configure a Nutanix storage pool and container
  • Create an NFS whitelist and present the container to my ESXi hosts
  • Re-import the OVF templates
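
A couple of the steps above can be scripted if you prefer. If you don't have a Windows machine handy for Win32DiskImager, dd does the same job from a Linux box, and VMware's ovftool handles the OVF exports; the device name, host name, and paths below are just examples, so substitute your own:

#write the CE image to a USB stick (triple-check the device name first, this is destructive)
sudo dd if=ce-2015.06.08-beta.img of=/dev/sdX bs=4M && sync

#export a VM from an ESXi host to an OVF template with ovftool
ovftool vi://root@esxi-host-01/MyVM /exports/MyVM/MyVM.ovf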

The Nutanix installation was extremely simple: just boot from the USB3 device created earlier and begin the install.

Log in with the “install” username. One cool thing about the mainboard I am using (the ASRock C2750D4I) is that it has onboard IPMI.

[Screenshot: pre-install]

Select Keyboard layout

[Screenshot: keyboard layout selection]

Configure the IP addresses of both the host and the CVM. I selected the “Create single-node cluster” option.

Read the EULA. This is by far the longest part of the install, as you need to scroll all the way through and can't skip it.

[Screenshot: EULA]

The install will then run; let it be and go grab a coffee. The disks are formatted automatically.

[Screenshot: install]

After Prism is installed, the cluster should be created automatically. If not, SSH onto the CVM and run the following commands to create a single-node cluster.

Username: nutanix, PW: nutanix/4u

#create cluster

cluster -s CVM-IP -f create

#add DNS servers (required for first logon for internet out, as stated earlier)

ncli cluster add-to-name-servers servers="your dns server"
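
Once the cluster is created and DNS is set, it is worth a quick sanity check from the same CVM session before moving on. cluster status is the standard check; the ncli verb for listing name servers is from memory, so confirm it against the built-in ncli help on your build:

#confirm all CVM services are up
cluster status

#confirm the DNS servers were added
ncli cluster get-name-servers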

You can then open a web browser to Prism, where you will be prompted for a cluster admin username and password.

[Screenshot: Prism]

Nutanix CE will then check for Pulse connectivity and a registered NEXT account.

[Screenshot: Pulse]

After this, create both a Nutanix storage pool and a container using Prism.

[Screenshot: first logon]
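
If you prefer the command line over Prism for this step, ncli can create the container as well. I am writing this syntax from memory, so treat it as a rough sketch and check the ncli help for the exact parameter names on your version; the storage pool and container names here are just examples:

#create a container on an existing storage pool
ncli container create name=container sp-name=sp01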

Since I wanted the single-node cluster to present storage to my ESXi hosts, I configured two filesystem whitelists. My ESXi hosts access NFS storage from 192.168.55.10 and 192.168.55.2.

[Screenshot: NFS whitelist]
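
There is an ncli equivalent for the whitelist too if you want to script it rather than click through Prism. Again this is from memory, so verify the exact verb and entry format against your ncli help; the subnet is just an example that covers both of my hosts:

#whitelist the ESXi storage subnet for NFS access
ncli cluster add-to-nfs-whitelist entry="192.168.55.0/255.255.255.0"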

Then mount the NFS export on each ESXi host, using the container name as the share path; in my case it is simply “container”.
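
If you would rather script the mounts than use the vSphere client, esxcli on each host works fine. CVM-IP and the datastore name below are placeholders for my environment, so adjust both:

#mount the Nutanix container as an NFS datastore
esxcli storage nfs add -H CVM-IP -s /container -v NutanixCE

#confirm the datastore is mounted
esxcli storage nfs list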

[Screenshot: ESXi NFS mount]

NFS mount successfully created.

[Screenshot: NFS mount created]

Finally, redeploy all the OVF templates you exported from Nexenta earlier. Luckily for me, all the OVFs imported successfully.
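
As with the export, ovftool can push the templates back in if you would rather skip the import wizard; the host, datastore, and network names here are examples for my setup:

#deploy an exported OVF onto the new Nutanix-backed datastore
ovftool --datastore=NutanixCE --network="VM Network" /exports/MyVM/MyVM.ovf vi://root@esxi-host-01/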

[Screenshot: import VM]

So far I have been happy with the performance and stability of Nutanix CE. I don't have any data to back this up, but I have noticed an increase in read performance over Nexenta, with a slight decrease in write performance. The read improvement is probably due to the extent cache (RAM cache) design in Nutanix, and writes are slower because I removed the 32GB ZIL device that Nexenta was using.

I also noticed the performance is more consistent. With ZFS and Nexenta, reads and writes would be good until the cache filled up, and then performance would drop off.
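
If you want real numbers rather than my gut feel, a quick fio run inside a Linux test VM sitting on the datastore is an easy way to compare before and after. These parameters are just a reasonable starting point, not what I actually ran:

#4K random read, direct I/O, 60 seconds
fio --name=randread --rw=randread --bs=4k --direct=1 --ioengine=libaio --size=4g --runtime=60 --time_based

#4K random write, direct I/O, 60 seconds
fio --name=randwrite --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --size=4g --runtime=60 --time_based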

However, this setup is not a performance powerhouse. The Atom CPU I am using is pretty limited, and it does not support VT-d, so I have to use the rather slow onboard SATA controllers instead of a dedicated LSI controller I have sitting around.

In the future I hope to upgrade my lab equipment to a three-node cluster and migrate most of my VMware installation to Nutanix Acropolis, possibly with 10GbE if switch prices come down. Since I have a thing for Mini-ITX, one board I have my eye on is the Supermicro X10SDV-TLN4, which has an integrated 8-core Xeon D-1540, 2x 10GbE, and supports up to 128GB of DDR4 RAM.

If you want to give Nutanix CE a try yourself, you can download the beta version here: http://www.nutanix.com/products/community-edition/

The install guide I used is available here:

http://www.technologyug.co.uk/How-To-Guides/Nutanix-Community-Edition-Primer-TechUG-Labs-Q2-20.pdf

You can also run Nutanix CE nested on other hypervisors now if you don't have spare lab hardware; a good guide from fellow Nutanix Technical Champion Joep Piscaer is here:

https://www.virtuallifestyle.nl/2015/06/nextconf-running-nutanix-community-edition-nested-on-fusion/

Comments

  • Manoj

    Thank you so much for this blog post. I have also been using the Atom C2750 low-power Mini-ITX mobo for my home lab running vSphere. I tried to install Nutanix CE on this mobo with a 1TB HDD and a 250GB SSD. For some reason I keep getting a message that my disk system does not meet the minimum requirements. I even tried to change the settings in the minimum_reqs.py file and reduce the SSD size requirement to 100GB, but to no avail. I then deleted the partitions on the two disks using Linux fdisk, but even that didn't help. I keep getting the same message. Any tips?