Nutanix has just released the long-awaited Life Cycle Manager (LCM) 2.5 version.
What is Nutanix LCM?
Nutanix LCM is the infrastructure framework that upgrades Nutanix software and component firmware on Nutanix cloud platforms.
Rather than manually installing one component at a time, LCM efficiently inventories and programmatically handles the deployment dependencies and order of operations for upgrades.
LCM can handle upgrades such as:
On Prism Element (Per Cluster) – LCM supports AOS, AHV, Prism, Files, Foundation, and component firmware.
On Prism Central/Nutanix Cloud Manager (Centralized Management) – LCM supports the native Nutanix offerings for PC/NCM, such as Files, Objects, Karbon, and Calm.
Hardware vendor-specific firmware upgrades for appliances from Nutanix (NX), Lenovo, HPE, Fujitsu, Dell, Inspur and Intel.
For a good read on “How Upgrades Work at Nutanix”, please refer to this link to understand possible implications for downtime (spoiler alert: None! 😯), performance impacts, order, and compatibility – https://bit.ly/HowNutanixUpgradesWork
In addition to dozens of fixes and improvements, Nutanix introduced a new feature that should make the VDI/EUC community jump for joy: 😎
LCM 2.5 adds inventory and update support for NVIDIA GRID drivers for GPU cards. For details on using LCM with NVIDIA GRID drivers, see the Life Cycle Manager Guide.
Upgrading NVIDIA GRID drivers traditionally required a very delicate upgrade process, ensuring that the guest and host driver were compatible. Then, complex CLI commands were required to upgrade the AHV host driver, one host at a time.
Not any longer! Now, LCM 2.5 will – in a single reboot – auto-uninstall the old driver and install the new one, all while vGPU VMs are transparently live-migrated to other nodes during the upgrade process.
No user downtime ✅
Automation and operational efficiency ✅
Virtualizing servers and applications is nothing new, and is often considered common practice these days. More and more Tier 1 applications – Domain Controllers, Microsoft Exchange Server, Microsoft SQL Server, and VDI desktops – are provisioned as virtual machines, thanks to the agility and high availability they gain. With continued advancements at the hypervisor and storage levels – including VM high availability, distributed/dynamic scheduling, and scale-out, shared-nothing architectures – infrastructure is more resilient than ever.

However, human error is still a VERY common factor in unplanned outages. Out of the box, Microsoft Windows desktop and server OSes allow any user to eject various HotPlug devices, which can leave a server or desktop immediately disconnected and unavailable.
The Windows feature to ‘Safely Remove Hardware and Eject Media’ sounds good in theory – say, to eject a USB drive from the operating system. Thanks to the Windows Search Index keeping the volume in use, the OS will often prevent you from doing something really bad, such as ejecting a virtualized SCSI-attached disk. Ejecting a virtualized network adapter of a production VM, however, seems like a great way to ruin your day.
This STILL does not prevent an administrator (you know, the ones often working on servers) from accidentally clicking the wrong icon.
Well-respected consultant and developer Helge Klein released a great (and frequently referenced!) post in 2012, detailing how to disable HotPlug functionality in VMware ESX – both at the hypervisor level and within the Windows guest OS.
Since the publication of Helge’s article, additional hypervisors have been released and adopted in the marketplace. Nutanix exploded onto the scene as a storage disruptor, making traditional three-tier architecture obsolete. In doing so, Nutanix still allowed a choice of industry-standard hypervisors – VMware ESX or Microsoft Hyper-V – to run on top of their commodity hardware. But Nutanix did not stop at disrupting the storage market; they set out to deliver what they call ‘Invisible Infrastructure for Enterprise Computing.’ In theory, the storage shouldn’t matter and the hypervisor shouldn’t matter – IT should be a transparent, turnkey solution that focuses on applications and on the business. At .NEXT, Nutanix’s first user conference in June 2015, another disruptive announcement dropped: Nutanix would release their own hypervisor, Acropolis (AHV), based on Linux KVM. Only one year later, at .NEXT 2016, Nutanix reported that 15% of all deployed clusters were running AHV for client workloads.
Key infrastructure workloads such as Microsoft Exchange, SQL Server, Citrix XenDesktop, and VMware View are vendor-supported (and recommended, if I may say so myself) on Acropolis, AND yield an incredible ROI to the business. However, proper precautions must be taken to ensure users or administrators cannot accidentally eject virtualized hardware.
Windows VMs running on AHV will have three HotPlug devices within the guest OS:
Nutanix VirtIO Ethernet Adapter
Nutanix VirtIO Balloon Driver
Nutanix VirtIO SCSI pass-through controller
Currently on Nutanix Acropolis, there is no supported way to disable HotPlug functionality at the hypervisor level. I have confirmed with Nutanix support and engineering that this is not something that is currently publicly exposed.
Currently, the best approach to disabling HotPlug devices within the guest OS is to disable the functionality in the registry, by changing the flags of the ‘Capabilities’ value for each of these devices under HKLM\SYSTEM\CurrentControlSet\Enum\PCI.
A few notes regarding these changes:
Security permissions for all keys under ‘HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI’ are protected so that only the SYSTEM account has Full Control.
You can work around this by launching Regedit.exe as the SYSTEM account, leveraging a tool like Sysinternals PsExec (owned by Microsoft).
From an elevated Command Prompt (cmd.exe – Run as Administrator):
> psexec -i -d -s c:\windows\regedit.exe
However, because these values revert upon machine restart, the best way to achieve this is either a computer startup script or Group Policy Preference registry items – both of which run under the SYSTEM account and are applied at computer boot.
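As a rough sketch of the startup-script approach, the script could silently import a .reg file like the one below. Be aware that the device instance path shown is a hypothetical example – the VEN/DEV/SUBSYS and instance portions vary per VM – so enumerate the actual subkeys for the three Nutanix VirtIO devices under HKLM\SYSTEM\CurrentControlSet\Enum\PCI before deploying anything.

```
Windows Registry Editor Version 5.00

; WARNING: hypothetical example path -- locate the real subkeys for the
; Nutanix VirtIO Ethernet, Balloon, and SCSI devices on your own VMs.

; Nutanix VirtIO Ethernet Adapter (example instance path)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_1AF4&DEV_1000&SUBSYS_00011AF4&REV_00\3&267a616a&0&18]
"Capabilities"=dword:00000002

; Repeat the "Capabilities"=dword:00000002 entry for the Balloon driver
; and SCSI pass-through controller subkeys.
```

A computer startup script (which runs as SYSTEM) can then apply the file silently on every boot – for example with `regedit /s disable-hotplug.reg` – re-applying the values that Windows reverts at restart.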
Deployment – GPP
The simplest and most repeatable way to deploy these registry changes is with Group Policy Preference registry items.
Update the REG_DWORD ‘Capabilities’ value to a decimal value of 2 for the following keys: