How is CAD different to gaming apps? Apples and Pears!

I’ve recently written a couple of blogs on why professional graphics cards offer enterprise users value. Evan Yares (yes – the famous one) commented on my post and asked me this question. I suspect Evan already knows the answer, as he was the first CAD analyst I saw write about the GPU virtualization projects associated with the GRID products I work on, back in 2013: “No More CAD Workstations”. The fact that Evan jumped on it on the day of release makes me think he sees value in professional graphics development 😀

  • Gaming is in many ways very similar to high-end video: it’s usually very transient, i.e. there is a lot of movement. Games these days are often like movies, even photo-realistic. Motion is something the human eye and brain have evolved to perceive really well – look out, there’s a lion running to eat me! Our brains fill in information when it’s missing, which makes us less sensitive to visual quality when things are moving. I’ve written a few times about how enterprise VDI and CAD demand image quality on static data such as text and CAD hidden-line views, and how, whilst raw H.264 4:2:0 encoding is fine for movies and usually for gaming, it can cause problems in enterprise scenarios. CAD users are extremely sensitive to line quality.
  • Video, and to some extent many games, can exploit the continuity of movement to buffer or re-use data. This is generally less applicable to CAD or VDI, which means the demands on driver optimisation are higher if a good frame rate is to be maintained.
  • Gaming is usually mostly about visualization, whereas CAD involves a lot of numerical calculation in the modelling. If you have a numerical error in a gaming driver, I doubt you would ever be aware of it. If, however, you are an aerospace company and end up with a change in a CAD model when you change your driver, you break the regression compliance required by that industry. You simply can’t manufacture parts for planes that people sit in that differ from the ones all the simulation and testing was done on, so a driver regression can shut down an aircraft manufacturing line – expensive! NVIDIA’s professional graphics drivers have to be tested with all those CAD and CAE products to the levels required for model fidelity in CAD; that testing takes time, people and serious amounts of hardware. You only have to look at the wealth of fidelity-checking products out there, e.g. Faro’s, to appreciate CAD’s fear of regression.
  • If there is a serious fault in a consumer card you might just see a momentary blip amongst the transients. If you are a CAD user at work, visual tearing etc. becomes unacceptable, and your part might pick up a numerical change (maybe causing a borderline self-intersection in the geometry) that breaks the model integrity, so that the part fails to rebuild and pass its model check! Gaming only needs to be guaranteed to the pixel, whereas a CAD kernel works to a fidelity of around 10^-11 (which means the internal calculations have to be pretty much at machine precision).
  • In gaming it is often the facets that matter – a projected mesh of triangles – and that isn’t a terribly difficult theoretical problem. A much harder problem is performant generation of hidden-line data, where every movement of the part results in a recalculation of the data rather than a simple re-projection.
  • Gaming data is clean and designed afresh – real CAD data is usually dirty. CAD parts hang around for a long time, get passed through translators like HarmonyWare, and can involve tolerant, heavy-weight NURBS – all geometry that forces heavy numerical crunching. Professional driver development involves lots of specialist staff looking at optimisations to handle such geometry, the likes of which would never be found in a game. Many in CAD will be aware of the NURBS in CATIA V4 (n=19, really??? Why, Dassault??? Why???); I’m thankful I don’t work on the team that optimized the NVIDIA drivers for that one.
  • The OS support matrix for CAD is usually a lot wider than for gaming. The variety of operating systems used in enterprise is much greater, weirder and more legacy-driven than in the consumer market. Manufacturing companies are tied to older versions of software (think how most of enterprise clung to Windows XP and hesitantly adopted Windows 7) and also to legacy platforms or OSs used by specialist apps (strange varieties of Linux, AIX, Solaris). Many companies actually prefer older, proven OSs to the latest and greatest. Again, more testing and QA! Consumer cards are typically targeted at recent versions of a few consumer OSs, e.g. Windows 8.x, as that is what most users run.
  • The hardware and end-point support matrix used in enterprise is again vast: blades, workstations, laptops, tablets, smartphones, iPads, Macs, IoT devices. Citrix alone produces receivers for 13 different platforms, all potential end-points for servers powered by NVIDIA GRID. This means yet more collaboration and testing with OEMs such as Dell, HP, Lenovo and Cisco, and more still with the virtualization vendors: Huawei, NICE, VMware, Citrix, Parallels etc.
  • Cloud CAD and CAD-as-a-service are growing, and all those clouds looking to deliver professional graphics need support from NVIDIA to build robust, standardized platforms capable of graphics. Again, yet more staff and experts are needed beyond gaming – and yet more certification of endpoints using HTML5 and similar.
  • Professional graphics invests in projects such as the NVIDIA Iray plugins, which are helping designers integrate interactive photorealism, physically based rendering and predictive design into mainstream design applications. Rhino 3D is $1000 and the Iray plugin is an additional $300; this relatively small investment delivers photo-real rendering to the mainstream CAD user, whereas previously only large enterprise companies could afford the dedicated render farm needed to achieve the same visual quality. As one of my colleagues says, “working with interactive photoreal visualization is about as similar to 1990s OpenGL as Mars is to Venus”.
  • Professional graphics users often have more than one screen – I know of some finance companies with high graphical needs where users have 8 or more monitors as standard, while most gamers use their television or a single monitor – and the work to support and synchronise the demands of the enterprise user goes way beyond what most gamers need. Professional users also often demand integration with haptics and devices such as Wacom tablets and the 3Dconnexion SpaceMouse, and getting support for these on VDI has involved a lot of work with virtualization partners such as Citrix and VMware.
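To put rough numbers on the fidelity point above, here is a small Python sketch. The 10^-11 kernel tolerance is the figure quoted in the post; the part coordinates of around 1000 (say, millimetres) are my own illustrative assumption:

```python
import math
import sys

# Double precision carries roughly 15-16 significant decimal digits.
print(sys.float_info.epsilon)       # machine epsilon, ~2.22e-16

tol = 1e-11                         # the kernel fidelity figure above

# On a part whose coordinates sit around 1000 (e.g. millimetres),
# the smallest representable step at that magnitude is one ULP:
ulp = math.ulp(1000.0)              # 2**-43, ~1.14e-13
print(ulp)

# Fewer than two orders of magnitude separate the modelling tolerance
# from the floating-point grid itself, so a chain of rounding errors
# can push a borderline self-intersection either way.
print(tol / ulp)                    # headroom factor, roughly 88x
```

This is why a driver that perturbs geometry calculations even slightly can break model regression, while the same perturbation would be invisible in a game.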

On the specifics of Autodesk AutoCAD on Quadro vs. consumer cards, I have now found a video from one of my new NVIDIA colleagues, Sean Kilbride (I haven’t actually met him yet), that shows some of the workflows where you’d expect to see differences – available here.

3 thoughts on “How is CAD different to gaming apps? Apples and Pears!”


  1. I pretty much agree with all you’ve written, Rachel. My background is game/engine programming, although in recent years I’ve dabbled with some CAD visualisation. Here are a few random thoughts:

    – Games feature many hacks – lowering of precision, whatever it takes to get performance up. Frame rate is king these days, especially with the advent of VR (90 Hz at 2 × 1080 × 1200+ is becoming the norm). In prioritising performance, precision suffers. Game data often needs to be compact, so vertices, normals, textures etc. get compressed, limiting range, lighting and material accuracy. CAD is all about precision.
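    The compression trade-off above can be illustrated with a deliberately naive quantizer – a sketch of the general idea, not any specific engine’s vertex format:

    ```python
    import math

    def quantize_normal(n, bits=8):
        """Naively quantize a unit normal to signed fixed point per
        component, mimicking compact game vertex formats. (Real
        engines use smarter schemes, e.g. octahedral encoding.)"""
        scale = (1 << (bits - 1)) - 1     # 127 for 8 bits, 32767 for 16
        decoded = [round(c * scale) / scale for c in n]
        length = math.sqrt(sum(c * c for c in decoded))
        return [c / length for c in decoded]  # renormalise after decode

    # Unit normal in the direction (1, 2, 4).
    n = [c / math.sqrt(21) for c in (1.0, 2.0, 4.0)]
    for bits in (8, 16):
        dq = quantize_normal(n, bits)
        err = max(abs(a - b) for a, b in zip(n, dq))
        print(bits, err)    # error ~1e-3 at 8 bits, far smaller at 16
    ```

    A few thousandths of error in a normal is invisible under game lighting, which is exactly why the trade is worth making there and not in CAD.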

    – The CAD data I’ve personally dealt with was high-poly and not very efficiently constructed. Games heavily use various texturing techniques to approximate detail; CAD seems to rely more on pure polygons.

    – Consumer GPU drivers are often optimised for specific (popular) games – I’ve heard of performance dropping for certain titles if executables are renamed, etc.

    – Games often implement cutting-edge techniques that may even need GPU vendors to expose new functionality. Rapid and experimental driver iteration can lead to instability – I’m currently unable to update to the latest AMD drivers on one of my machines as they leave Windows unable to boot.

    – I’ve been through many consumer GPUs, which often arrive overclocked and over time start to run hotter and hotter. Some I’ve revived by cleaning fans and refreshing thermal paste; some have just died after hitting temperatures in excess of 100°C.

    – I would argue many gamers have more than one screen – dual/triple monitor setups are quite common.

    – It’ll be interesting to see whether VR narrows the gap between consumer/professional GPUs in any way, as VR places far greater demands on performance (minimum GPU specs are high) and precision concessions may become more noticeable. I.e. for VR, games may need greater precision and CAD greater performance.

    Apologies if I’ve just reiterated some of your thoughts, but hopefully there’s something useful in there 🙂

    – Richard (@MishterTea)


  2. Hey, interesting perspective from the other side 🙂

    For game development the main difference, in my mind, is the native support for 64-bit precision floating point – granted, it is not needed that often, but for the specific cases where it is, it is really needed, and great care has to be taken to emulate it in the correct way.

    Examples: accurate timing – for instance, if a procedural model or texture requires the current time (say, the number of milliseconds since the game started), then the animation will become less and less smooth over time. The time can be broken down into separate milliseconds, seconds, minutes, hours or some other division, but then you no longer have a simple number to use in calculations (it becomes really awkward even to do a sine wave unless the frequency happens to be exactly a second, a minute, etc.). Another example is what is sometimes referred to as the “64-bit problem” in open-world games – any massively multiplayer online world will have to deal with this in some way: the space can be so vast that, far away from the centre of the game universe, player and object positions become noticeably granular. Solutions include emulation of double precision for certain attributes (such as coordinates) and breaking the game universe into zones which each have their own coordinate system.

    What starts to become really challenging is that on more limited platforms (for instance mobile) you may be stuck with 16-bit floating point, or (heaven forbid) 8-bit!

    Luckily, most of the game logic which could accrue numerical errors over time still happens on the CPU in full precision, and for the most part, games can get away with all sorts of approximations and no-one notices 🙂

    This may change, though, as computation-heavy rendering techniques like realtime raytracing start to become practical and make their way into games…


  3. Thank you both so much for commenting – this is fascinating! The gaming and professional graphics dev communities have such different needs, yet they sit within a bigger picture, and I genuinely feel that NVIDIA and what we are doing place us well to do it right for both and take the best technologies from each genre!
    Keep the info coming!

