I’ve been working at NVIDIA for 7 weeks now, and I’d never worked for a GPU or hardware vendor before. I started off as an astrophysicist in academia, then became a CAD kernel engineer (on the Parasolid kernel at Siemens PLM) working on applications such as SolidWorks, Siemens NX and Ansys Workbench. From there I moved into hypervisor and VDI engineering, including virtualized GPUs, at Citrix, working on XenDesktop/XenApp and XenServer. All my background and experience is in enterprise software development, and I still mostly follow CAD and 3D blogs because that’s my passion and experience.
So how different is working at a hardware (GPU) vendor compared to Citrix or Siemens PLM?
Ummm… to be honest, half the time I’m not sure I’ve changed jobs. My days are still filled with very familiar questions and problems: “Is Autodesk certified for use with vSphere when using NVIDIA vGPU?”, “How many Catia users can I put on a Dell R730 server?”, “What bandwidth should I expect when using hidden-line mode?”, “What is the SLA on reported bugs?”, “Is my GRID K2 card supported with Citrix XenServer?”…
So what is… the great “APIs, GPUs, and drivers: CAD graphical conspiracy”?
As I said, I mostly still read blogs on CAD rather than GPUs or gaming. A few weeks ago I saw a new post from Ed Lopategui at GrabCAD (I’ve blogged about them before – awesome company!), a 3D-printing/CAD company, entitled “APIs, GPUs, and drivers: CAD graphical conspiracy?”. Ed’s customers are often the same customers that the GRID GPUs and virtualization technologies I work on are designed to serve: professional graphics fit for the enterprise, run from the cloud/datacenter on mobile devices powered by GPUs in the server. Ed is also someone I once was, or would have been if career paths had been different, and we share a lot of the same “up-bringing” in CAD (and the same previous employers).
I thought Ed would think like me…
I think Ed and I probably share the same insight into how stringent the requirements, and how high the expectations, are on tier 1 CAD suppliers from their customers in high-end automotive and aerospace. I’d like to think I know exactly what Ed’s customers would demand in terms of support, reliability and traceable process, as well as product quality and testing, before risking putting a supplier’s product within their environment.
So it was very interesting to read Ed’s take on GPU pricing
Basically Ed echoed a view I’ve heard before: because professional cards (at NVIDIA, that means the GRID and Quadro product lines) cost a lot more than consumer gaming cards (such as the GeForce line), there must be some sort of cartel/conspiracy to charge professional users more than the product is worth, at a price inflated well above the manufacturing cost relative to consumer cards.
Graphics drivers are not a light-weight bit of glue
Ed actually blogged this on the GrabCAD blog, and GrabCAD is a company heavily involved in 3D printing. Given that context, I was even more surprised by his views on the value of professional drivers. My own experience of drivers in 2D printing has been that the quality control, certification and testing are woefully below the standards enterprise manufacturing demands from its existing software, and I suspect that, long-term, this will be one of the biggest challenges to adoption of the technology. At the moment, mainstream printer firmware and driver updates regularly cause memory leaks and material changes in behavior that would not be tolerated in most software, let alone manufacturing. Microsoft even had to introduce an entire driver isolation model to protect servers from rogue drivers, and drivers have a nasty habit of interacting with one another. Is 3D-printing regression control even halfway near the standards it needs to be to ensure a 3D-printed part can truly be trusted to be manufactured the same, version after version? If there is a strong chance it has changed since the simulations and physical stress tests, should that replacement part be used?
Printer drivers seem stuck in a dark age as an added bit of software needed to get the main product working; they are usually fairly lightweight and perform a very limited range of operations. Yet still they are, on average… flaky as hell…
GPU drivers, on the other hand, are used and developed in the same way as enterprise software. The nearest product I would compare them to is probably a hypervisor, where there is both hardware and software interaction. For professional graphics this involves optimizing and designing functionality for specific applications and even OSs – e.g. Catia, Autodesk Revit, Petrel, Linux OSs, and both Windows workstation and server OSs.
My new job – I am the GPU “conspiracy” (or a teeny bit of it)!!!
I work for the NVIDIA GRID product group, and although I’ve only been here a few weeks I’ve met hundreds of people working on software for professional graphics and precisely _nobody_ involved in raw silicon GPU design or manufacture, and nobody working on consumer gaming cards. Computer games tend to use similar architectures, run on similar devices, and gamers expect to replace their cards frequently to satisfy their habits. The division I work in is staffed by people doing jobs very similar to those I did back at Siemens PLM on CAD or at Citrix on XenServer.
If NVIDIA just wanted to make graphics cards, I’d have been a pretty pointless and useless hire (I don’t play computer games – they are a bit boring – never got it)! However NVIDIA do professional and enterprise graphics, and so there are numerous folk like me who hopefully have a clue about the impact of NX lightweight faceting, Catia v4 vs. v5 NURBS, or H.264 artefacts on hidden-line CAD. Essentially I am Ed’s GPU conspiracy… I and my colleagues and the work we do are the reason a professional graphics card costs a lot more than a gaming card. We have vast armies of people who spend their time working on:
- Regression testing 1000s of 3D Graphical and CAD applications
- Teams of people on joint development, test and documentation projects with CAD ISVs; optimizing APIs for geometries even weirder than Catia v4 😀
- Support staff who can actually use CAD applications and reproduce issues
- Certification programs with CAD vendors
- Development with hypervisors, virtualization and remote protocol vendors such as Microsoft, VMware, Citrix, NICE
I’ve got a huge amount of respect for Ed as I know he’s got commitment to enterprise quality burned into his soul, and I’m a huge, long-time fan of his CAD blogs. I just want to persuade him that, hey! CAD bunnies like me need to be in professional graphics!!! My job is worth paying for!!! Now that I’ve become terribly over-sensitive on this issue I keep seeing tweets popping up both for and against the conspiracy. It’s worth reading the various views and experiences in the comments by readers of Ed’s conspiracy blog.
GPU Drivers are serious software
I actually kind of get where Ed is coming from: the way GPUs have historically been sold, and how they are still very much a hardware purchase in consumer gaming land, helps fuel the conspiracy theories. However, CAD is probably one of the industries with the most precedent for paying for software functionality, rigorous testing and certification.
Consider the Parasolid kernel: the same modelling kernel is licensed within low-cost viewers, mid-range CAD packages such as Solid Edge and SolidWorks, as well as high-end Siemens NX. It’s available in a variety of editions at different price points, with lower-cost versions allowing use of limited subsets of the APIs. This is win-win:
- A single kernel is tested and developed so all QA is focused on a single product
- Those products that consume the costly-to-support-and-develop APIs – class-A surfacing, non-manifold Booleans, tolerant geometry – essentially fund really high-quality support. Ed’s argument that the professional drivers should be given away with gaming cards is a bit like saying Dassault should give away Catia to every SolidWorks user.
- Lower-cost products are available on the market, e.g. CAD viewers, which would not be economically feasible if forced to pay the average support and development costs for the full feature set and support organization. Gaming cards are essentially that lower-cost product. It’s not that professional graphics users are being overcharged, simply that gamers are getting something a lot cheaper to make, support and develop.
Would you run your CAD software unsupported? Or Microsoft Windows?
In my own opinion, the model of loading the cost of software development into the price of the GPU card is flawed. Enterprise software is about testing and support, as well as interoperability with other products and hardware. If that cost is loaded into a GPU, it can in turn be marked up by server OEMs, leaving the user paying more to an OEM for margin that is never allocated to development or support of the GPU software – the main product a professional user requires. A GPU without drivers and software development is just raw silicon, a rather expensive paperweight or brick!
The sophistication of the software and support needed for GPUs today makes them comparable to an OS or hypervisor within the graphics stack. I simply can’t imagine any serious enterprise being willing to run their Microsoft OS, VMware stack or CAD software unsupported. The idea of just buying a GPU as a piece of hardware and having no guaranteed way forward if there is an issue with the driver seems a complete anomaly.
The Next Blog
I was going to include details on why I think some of the data Ed referenced showed consumer gaming cards as performant relative to professional graphics cards, and address some of the questions he raises on why new features seem to be available sooner in gaming cards; I don’t actually think this is true, and the data and benchmark referenced were misleading and flawed. However, the blog just got far too long even by my standards… so I’m planning a sequel!
This is just a personal viewpoint from an ex-CAD bunny finding her way in a new job. My views do not reflect any official statement from NVIDIA per se, just my own unique view of the world 😀