NVIDIA GRID GPUs perfect for keeping up with the Raspberry Pi and the next generation of end points


Citrix have been making a fair bit of noise about their end client (Receiver) being available and supported, in conjunction with partner ThinLinx, on the Raspberry Pi, which with peripherals is proving to be a sub-$100 thin client capable of handling demanding graphics at frame rates of 30fps or more (YouTube is usually 30fps).

The Raspberry Pi and other low-cost end points such as the Intel NUC are capable because they support hardware decode of codecs such as H.264 and JPEG used by HDX/ICA; their SoC (system-on-a-chip) hardware is designed to handle graphics very well. Continue reading “NVIDIA GRID GPUs perfect for keeping up with the Raspberry Pi and the next generation of end points”

NVIDIA GRID – May 2016 upcoming webinars

No boring recordings or scripted sales waffle on an NVIDIA webinar – just lots of real engineers and specialists willing to answer questions on the fly!

Golly gosh! There’s a lot of interest in GRID and GPU-sharing/virtualized graphics at the moment. So much so that the GRID team together with a number of partners are laying on a series of webinars to satisfy the demand for information stemming from recent VMware announcements and pre-Synergy interest around Citrix. So here’s a list for your diaries (even if you can’t make it – sign up and you will usually get sent the recording). Continue reading “NVIDIA GRID – May 2016 upcoming webinars”

Monitoring NVIDIA vGPU for Citrix XenServer including with XenCenter

Real customers setting up GRID in the GTC 2016 hands-on; the following week the SA team tried it out on their colleagues including novices to GRID!

I had some fun at NVIDIA GTC 2016 taking part in a hands-on lab run by the SA (Solution Architecture) organisation of which I am a part. These labs are proving really useful for walking users new to GRID through key operations on both VMware and Citrix stacks. The guys running it mooted adding more on monitoring once you are set up, and I kind of volunteered to have a crack at a bonus chapter for the hands-on around monitoring on Citrix.

Continue reading “Monitoring NVIDIA vGPU for Citrix XenServer including with XenCenter”

Free German Citrix User Event with NVIDIA GRID and HDX 3DPro – 18th April 2016

I was so pleased to receive the invitation below to this event coming up in April from Roy Textor, who runs DCUG (Deutschsprachige Citrix User Group / German-speaking Citrix User Group). It’s nearly two years since I blogged about the birth of this group; you can read that here. In the time since then Roy has run dozens of user group meetings all over Germany (I’d love you to comment on this blog if you have attended) and the community is going from strength to strength. Continue reading “Free German Citrix User Event with NVIDIA GRID and HDX 3DPro – 18th April 2016”

A response to APIs, GPUs, and drivers: CAD graphical conspiracy?

I’ve been working at NVIDIA for 7 weeks now. I’ve never worked for a GPU or hardware vendor before. I started off as an astrophysicist in academia, became a CAD kernel engineer (Parasolid kernel at Siemens PLM) working on applications such as Solidworks, Siemens NX, Ansys Workbench etc. Then I moved on to hypervisor and VDI engineering, including virtualized GPUs, at Citrix working on XenDesktop/XenApp and XenServer. All my background and experience is in enterprise software development and I still mostly follow CAD and 3D blogs because that’s my passion and experience.

So how different is working at a hardware (GPU) vendor from working at Citrix or Siemens PLM?

Ummm… to be honest half the time I’m not sure I’ve changed jobs. My days are still filled with a lot of very familiar questions and problems; “Is Autodesk certified for use with vSphere when using NVIDIA vGPU?”, “How many Catia users can I put on a Dell R730 server?”, “What bandwidth should I expect when using hidden-line mode?”, “What is the SLA on reported bugs?”, “Is my GRID K2 card supported with Citrix XenServer?”…. Continue reading “A response to APIs, GPUs, and drivers: CAD graphical conspiracy?”

Is my NVIDIA M60, K1, K2, K5000 or my AMD Firepro W7000 supported by vSphere/XenServer/Other Hypervisor?

This is a question I continually see from customers and even those involved in deploying the technologies. The reasons behind the answer are quite complicated, but the answer itself is a simple yes or no, and the steps to find it out are very easy, once you know them. Continue reading “Is my NVIDIA M60, K1, K2, K5000 or my AMD Firepro W7000 supported by vSphere/XenServer/Other Hypervisor?”

D3DLive! My favourite CAD event and how to get a ticket for FREE! Now with added NVIDIA magic!

D3DLive!, at Warwick University, UK on March 31st 2016, yet again looks to be pure AWESOMENESS! Every year I blog about why nobody can afford to miss this event and just how amazing it is to attend! Don’t just take it from me – you can read reviews from a number of last year’s attendees – here.

Continue reading “D3DLive! My favourite CAD event and how to get a ticket for FREE! Now with added NVIDIA magic!”

Comparing Apples to Pears! Benchmarking Thinwire Compatibility and Other HDX Graphics Modes

One of our really experienced experts has just blogged about some evaluation he did of Citrix’s hot new graphics mode: http://trentent.blogspot.ca/2015/09/performance-differences-in-citrix-hdx.html. Now this guy knows what he is doing, but I’m a little worried others might not fully understand what has been done or the implications of the results. Trent did some tests flipping the encoder via a registry key. We do not encourage users to play with registry keys; occasionally they can prove useful to tweak a system, but really, if that is being done, we should be exposing the control via properly documented policies.

Continue reading “Comparing Apples to Pears! Benchmarking Thinwire Compatibility and Other HDX Graphics Modes”

Great real user feedback on thinwire compatibility mode (thinwire plus)!

My colleague, Muhammad, blogged a few weeks ago about a new optimised graphics mode that seems to be delighting users with significant ICA protocol innovations, particularly those users with constrained bandwidth (read the details – here). During its development and various private and public tech previews this feature has been known as Project Snowball/Thinwire Plus/Thinwire+/Enhanced Compatibility mode but in the documentation it is now “Thinwire Compatibility Mode” (read the documentation – here).

I was delighted to read a detailed review by a Dutch consultant (Patrick Kaak) who has been using this at a real customer deployment. In particular it’s a good read because it contains really specific, detailed information on the configuration and the bandwidth levels achieved per session (<30kbps). Unfortunately (if you aren’t Dutch) it is written in Dutch, so I had to pop it through Google Translate (which did an amazing job).

You can read the original article by Patrick here (if you know Dutch!): http://bitsofthoughts.com/2015/10/20/citrix-xenapp-thinwire-plus/

What I read, and was delighted by, is the Google-translated version below:

Since Windows 2012 R2, Microsoft makes more use of DirectX for rendering the desktop, where previously GDI/GDI+ API calls were used. This showed in the ICA protocol, which was heavily optimized for GDI, and resulted in higher bandwidth on Windows 2012 R2.

1. Without tuning: halfway through this year we were engaged at one of our customers on a deployment of XenApp 7.6 on Windows 2012 R2. Unfortunately, this customer had a number of low-bandwidth locations. The narrowest lines were 256 kbit/s with about seven sessions running over them, which equates to approximately 35 kbit/s per session. We had already disabled H.264 (Super Codec) compression because it caused very high bandwidth, and we had applied a lot of optimization in the policies, but we could not get a session under 150 kbit/s. On average we came out at around 170 kbit/s. The 35 kbit/s never seemed achievable.
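Patrick’s per-session figure is simple division; as a quick sanity check (a sketch, using only the line speed and session count from his post):

```python
def per_session_kbit(line_kbit: float, sessions: int) -> float:
    """Evenly divide a shared line's bandwidth across concurrent sessions."""
    return line_kbit / sessions

# A 256 kbit/s line carrying about seven sessions:
print(f"{per_session_kbit(256, 7):.1f} kbit/s per session")  # ~36.6, i.e. the ~35 kbit/s target
```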

After some phone calls with Citrix, we decided to embrace Project Snowball, a project focused on optimizing Thinwire within the ICA protocol, known since Feature Pack 3 as Thinwire Plus. This would reduce the bandwidth back to a level that was previously feasible on Windows 2008 R2.

After installing the beta on the test servers, it turned out we had to force the server to choose compatibility mode. A moment of choice, because to do so we had to turn off the Super Codec entirely for the server, for all users on it. This forces every session onto Thinwire, even where the lines have enough bandwidth for the Super Codec to be used. This is done with the following registry key:

HKLM\Software\Citrix\Graphics
Name: Encoder
Type: REG_DWORD
Value: 0
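If you need to roll this key out to several servers, building the equivalent `reg.exe` command in a script is one option. The sketch below only constructs the command (the helper name is hypothetical); remember Patrick’s caveat that Encoder = 0 turns off the Super Codec for every session on the server.

```python
def encoder_reg_command(value: int = 0) -> list[str]:
    """Build the reg.exe command that sets the Citrix Graphics Encoder DWORD.

    value=0 forces the Thinwire encoder for all sessions on the server.
    """
    return [
        "reg", "add", r"HKLM\Software\Citrix\Graphics",
        "/v", "Encoder", "/t", "REG_DWORD",
        "/d", str(value), "/f",
    ]

# Print the command rather than running it (it only applies on Windows):
print(" ".join(encoder_reg_command()))
```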

Furthermore, the Progressive Compression Level policy was set to Medium, as indicated in the guidelines for Thinwire Plus.

2. Snowball active – Thinwire Plus without optimizations: the first results were superb. Immediately after installing Thinwire Plus, the average bandwidth dropped by 50% to 83 kbit/s.

After further tuning of all the components it was possible to go down even further, though some of the measures for people on low bandwidth were fairly extreme. Notable are the target frame rate, which was set to 15 fps, and the use of 16-bit colors. Finally, a per-session bandwidth limit of 150 kbit/s was imposed.

Maximum allowed color depth: 16 bits per pixel (10–15% bandwidth reduction; can only be switched for the entire server)
Allow visually lossless compression: Disabled
Audio over UDP: Disabled
Client audio redirection: Disabled
Client microphone redirection: Disabled
Desktop Composition Redirection: Disabled (prevents DCR being preferred over enhanced Thinwire)
Desktop wallpaper: Disabled (ensures a uniform background color)
Extra color compression: Enabled (reduces bandwidth, increases server CPU)
Extra color compression threshold: 8192 Kbps (default)
Heavyweight compression: Enabled
Lossy compression level: High
Lossy compression threshold: 2147483647 Kbps (default)
Menu animation: Prohibited (reduces bandwidth by not using menu animations)
Minimum image quality: Low (allows extra compression before the final sharp image is built up)
Moving image compression: Enabled
Optimization for Windows Media redirection over WAN: Disabled (prevents WMV being sent over the WAN to the client)
Overall session bandwidth limit: 150 Kbps (for non-GMP, maximum bandwidth per session)
Progressive compression level: Medium (required for enhanced Thinwire)
Progressive compression threshold: 2147483647 Kbps (default)
Target frame rate: 15 fps
Target minimum frame rate: 10 fps (default)

3. Snowball heavily tuned: with this policy in place, the average in the test situation came in at 16 kbit/s. A value we absolutely did not think we could reach at the beginning. User tests revealed that the environment still worked well, despite all the limitations we had set in the policy.

After all changes were made in the production environment, we see that an average session now uses around 30 kbit/s. Slightly more than in the test environment, but certainly not values to complain about. Users can work well and are happy.
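Tallying the stages Patrick reports (the kbit/s figures are his; the percentage reductions are just arithmetic on them):

```python
BASELINE = 170  # kbit/s per session, without tuning

stages = [
    ("Thinwire Plus, no extra tuning", 83),
    ("heavily tuned (test)", 16),
    ("heavily tuned (production)", 30),
]

for name, kbit in stages:
    reduction = 100 * (1 - kbit / BASELINE)
    print(f"{name}: {kbit} kbit/s ({reduction:.0f}% below the 170 kbit/s baseline)")
```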

Incidentally, during later testing we discovered that with a pass-through application (where users first connect to a published desktop and then launch a published application on another server), the Thinwire Plus configuration must be running on both servers. If it was not, we saw the bandwidth usage to the client increase significantly again.

(all my colleagues, thank you for providing the performance measurements!)
