Configuring Tesla M60 cards for NVIDIA GRID vGPU
A couple of steps need to be taken to configure Tesla M60 cards with NVIDIA GRID vGPU in a vSphere / Horizon environment. I have listed them here quick and dirty; they are an extract of the NVIDIA Virtual GPU Software User Guide.
On the host(s):
- Install the vib
- esxcli software vib install -v directory/NVIDIA-vGPUVMware_ESXi_6.0_Host_Driver_390.72-1OEM.600.0.0.2159203.vib
- Reboot the host(s)
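The install step above can be sketched as one shell session on the host. This is only a sketch: it assumes the vib has been copied to /tmp (adjust the path to wherever you placed the file) and that the host can be placed in maintenance mode, i.e. no VMs are left running on it.

```shell
# Assumption: running VMs have been migrated off; enter maintenance mode first
esxcli system maintenanceMode set --enable true

# Install the NVIDIA vGPU host driver vib (path is an example)
esxcli software vib install -v /tmp/NVIDIA-vGPUVMware_ESXi_6.0_Host_Driver_390.72-1OEM.600.0.0.2159203.vib

# Reboot to load the new driver
reboot
```

Remember to take the host out of maintenance mode again after the reboot and the checks below.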
- Check if the module is loaded
- vmkload_mod -l | grep nvidia
- Run the nvidia-smi command to verify correct communication with the device
- Configure suspend and resume for VMware vSphere
- esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RMEnableVgpuMigration=1"
- Reboot the host
- Confirm that suspend and resume are configured
- dmesg | grep NVRM
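Besides the dmesg output, you can also double-check that the module parameter actually stuck after the reboot by listing the parameters on the nvidia module; a quick sketch:

```shell
# Confirm the registry dword set earlier is still in place on the module
esxcli system module parameters list -m nvidia | grep NVreg_RegistryDwords

# And check the kernel log for the NVRM driver messages
dmesg | grep NVRM
```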
- Check that the default graphics type is set to Shared Direct
- If the graphics type was not set to Shared Direct, execute the following commands to stop and restart the Xorg and nv-hostengine services
- /etc/init.d/xorg stop
- nv-hostengine -t
- nv-hostengine -d
- /etc/init.d/xorg start
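Checking (and, if needed, setting) the default graphics type can also be done from the ESXi shell instead of the vSphere Client. A sketch, assuming an ESXi version where the esxcli graphics namespace is available (6.5+); note that the Shared Direct setting in the client corresponds to SharedPassthru here:

```shell
# Show the current default graphics type and the graphics devices
esxcli graphics host get

# Set the default graphics type to Shared Direct (SharedPassthru)
esxcli graphics host set --default-type SharedPassthru

# After restarting xorg / nv-hostengine as listed above, verify the change
esxcli graphics host get
```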
On the VM / Parent VM:
- Configure the VM. Be aware that once the vGPU is configured, the console of the VM will no longer be visible/accessible through the vSphere Client, so an alternative access method should be in place beforehand
- Edit the VM configuration to add a Shared PCI Device and verify that NVIDIA GRID vGPU is selected
- Choose the vGPU profile
More info on the profiles can be found under section '1.4.1 Virtual GPU Types': https://docs.nvidia.com/grid/6.0/grid-vgpu-user-guide/index.html
- Reserve all guest memory
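For reference, the Shared PCI Device and the chosen profile end up as entries in the VM's .vmx file. A sketch of what that looks like, with the grid_m60-2q profile picked purely as an example (use whichever profile fits your workload):

```
pciPassthru0.present = "TRUE"
pciPassthru0.vgpu = "grid_m60-2q"
```

The full guest memory reservation mentioned above is required for vGPU-enabled VMs; the VM will not power on without it.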
On the Horizon pool:
- Configure the pool to use the NVIDIA GRID vGPU as 3D Renderer