Review of Additive Manufacture and Generative Design for PLM/Design at Develop 3D Live 2018

A couple of months ago, back at D3DLive!, I had the pleasure of chairing the Additive Manufacturing (AM) track. This event, in my opinion, sits alongside a few others (e.g. SIGGRAPH and COFES) as one of the key technology and futures events for the CAD/graphics ecosystem. It is also free to attend, thanks in part to major sponsors HP, Intel, AMD and Dell.

A few years ago at such events, the 3D-printing offerings were interesting and quirky, but not really mainstream manufacturing or CAD. There were 3D-printing vendors and a few niche consultancies, but it certainly wasn't technology making keynotes or being mentioned by the CAD/design software giants. This year saw the second session of the day on the keynote stage (video here) featuring a generative design demo from Bradley Rothenberg of nTopology.

With a full track dedicated to AM this year, including the large mainstream CAD software vendors such as Dassault, Siemens PLM and Autodesk, this technology really has hit the mainstream. The track was well attended; when polled, approximately half of the attendees were actually involved in implementing additive manufacture, and a significant proportion were using it in production.

There was in general significant overlap between many of the sessions. This technology has become so mainstream that, rather than new concepts, we are seeing, as with mainstream CAD, more of an emphasis on specific product implementations and GUIs.

The morning session was kicked off by Sophie Jones, General Manager of Added Scientific, a specialist consultancy with strong academic research links that investigates future technologies. This really was futures stuff rather than the mainstream, covering 3D printing of tailored pharmaceuticals and healthcare electronics.

Kieron Salter from KWSP then talked through some of their customer case studies; as a specialist consultancy, they are often needed to bridge gaps in customers' understanding. Their work in the motorsport sector stood out as cutting-edge, novel automotive design.

Jesse Blankenship from Frustum gave a nice overview of their products and their integration into Solid Edge, Siemens NX and Onshape, but he also showed the developer tools and GUIs that other CAD vendors and third parties can use to integrate generative design technologies. In the world of CAD components, Frustum look well placed to become a key component vendor.

Andy Roberts from Desktop Metal gave a rather beautiful demonstration walking through the generative design of a part, literally watching it iterate from a few constraints to an optimised part. This highlighted how different many of these parts can be from those produced by traditional techniques.

The afternoon's schedule started with a bonus session, one that hadn't made the printed schedule, from Johannes Mann of Volume Graphics. It was a very insightful overview of the challenges of fidelity-checking additively manufactured parts and of running simulations on them (including some examples from Airbus).

Bradley Rothenberg of nTopology reappeared to elaborate on his keynote demo and covered some of the quality control and simulation issues for generative design that CAM/CAE have already solved for conventional manufacturing techniques.

Autodesk’s Andy Harris’ talk focused on how AM was enabling new genres of parts that simply aren’t feasible via other techniques. The complexity and quality of some of the resulting parts were impressive and often incredibly beautiful.

Dassault's session was given by a last-minute substitute speaker, David Reid; I hadn't seen David talk before and he's a great speaker. It was great to see a session led from the Simulia side of Dassault, showing how their AM technology integrates with their wider products. A case study on Airbus' choice and usage of Simulia was particularly interesting, as it covered how even the most safety-critical, traditional big manufacturers are taking AM seriously and successfully integrating it into their complex PLM and regulatory frameworks.

The final session of the day was probably my personal favourite: Louise Geekie from Croft AM gave a brilliant talk on metal AM, but what made it for me was her theme of understanding when you shouldn't use AM and its limitations – basically, just because you can… should you? This covered long-term considerations on production volumes, compromises on material yield for surface quality, failure rates and the costs of post-production finishing. Just because a part has been designed by engineering optimisation doesn't mean an end user finds it aesthetically appealing – the case of a motorcycle manufacturer whose customers want the front fork to "look" solid.

Overall my key takeaways were:

·       Just because you can doesn't mean you should: choosing AM requires an understanding of its limitations and compromises, and an overall plan if volume manufacture is a consideration.

·       The big CAD players are involved, but there's still work to be done to harden the surrounding frameworks, in particular reliable simulation, search and fidelity testing.

·       How well the surrounding products and technologies handle the types of topologies and geometries generative design throws out will be worth watching. In particular it'll be interesting to see how Siemens Synchronous Technology and other direct modellers cope, and how part search engines such as Siemens Geolus fare.

·       Generative design is computationally heavy, so the quality of your CPU and GPU is worth thinking about.

Hardware OEMs and CPU/GPU Vendors taking CAD/PLM seriously

These new technologies are all hardware and computationally demanding compared to the modelling kernels of 20 years ago. AMD were showcasing and talking about all the pro-viz, rendering and cloud graphics technologies you'd expect, but it was pleasing to see their product and solution teams, and those from Dell, Intel, HP etc., talking about computationally intensive technologies that benefit from GPU and CPU horsepower, such as CAE/FEA and of course generative design. It's been noticeable in recent years how the hardware OEMs and GPU vendors have increased their involvement in and support for end-user and ISV CAD/design events and forums such as COFES, the Siemens PLM Community and Dassault's Community of Experts, which should bode well for future hardware platform developments for CAD/design.

Afterthoughts

A few weeks ago Al Dean from Develop3D wrote an article (bordering on a rant) about how poorly positioned a lot of the information around generative design (topology optimisation) and its link to additive manufacture is. I think many readers simply thought: yes!

After reading it, I came to the conclusion that many people think generative design and additive manufacture are inextricably linked. Whilst they can be used in conjunction, there are vast numbers of use cases where only one of the two technologies is appropriate.

Generative design, in my mind, is computationally optimising a design against physical constraints – the mass of material, or physical forces (stress/strain) – possibly with additional constraints: it must have a connector like this in this area, it must be this long, or it must even be constructed so that it can be moulded (with appropriate tapers so it falls out of the mould).
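
To make that concrete, here is a toy sizing sketch in Python using SciPy: minimise the material in a cantilever's cross-section subject to a bending stress limit. The load, length and allowable stress are made-up illustrative numbers, and real generative design explores full 3D topologies rather than two parameters, but the ingredients – an objective plus physical constraints – are the same.

```python
# Toy "generative" sizing: minimise material subject to a stress constraint.
# The load, length and allowable stress are assumed illustrative values.
from scipy.optimize import minimize

F = 1_000.0        # tip load in newtons (assumed)
L = 0.5            # beam length in metres (assumed)
SIGMA_MAX = 250e6  # allowable bending stress in Pa (assumed, ~mild steel)

def material(x):
    b, h = x                         # cross-section width and height (m)
    return b * h                     # area is a proxy for mass per unit length

def stress_margin(x):
    b, h = x
    sigma = 6 * F * L / (b * h**2)   # peak bending stress of a tip-loaded cantilever
    return SIGMA_MAX - sigma         # the optimiser must keep this >= 0

result = minimize(material, x0=[0.05, 0.05],
                  bounds=[(0.005, 0.2), (0.005, 0.2)],
                  constraints=[{"type": "ineq", "fun": stress_margin}])

b, h = result.x
print(f"optimised section: {b*1e3:.1f} mm x {h*1e3:.1f} mm")
```

Swap the stress margin for a manufacturability rule (a draft angle, a connector keep-out zone) and you have the "additional constraints" described above.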

Additive manufacture is essentially 3D printing, often of metals: adding material, rather than the traditional machining mentality of CAD (Booleans are often described in terms of a target and a tool), where you remove material from a block of metal.
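
A minimal voxel sketch (NumPy, with assumed grid sizes) contrasting the two mentalities: subtractive machining as the classic target-minus-tool Boolean, additive as material built up layer by layer from nothing.

```python
# Subtractive vs additive, on a toy voxel grid.
import numpy as np

stock = np.ones((20, 20, 20), dtype=bool)    # target: a solid block of metal
tool = np.zeros_like(stock)
tool[5:15, 5:15, 10:] = True                 # tool: the volume swept by the cutter

machined = stock & ~tool                     # Boolean subtract: target minus tool

printed = np.zeros_like(stock)               # additive starts from nothing...
for z in range(12):
    printed[4:16, 4:16, z] = True            # ...and deposits one layer at a time

print(machined.sum(), printed.sum())         # voxels left behind vs voxels added
```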

My feeling is that generative design has far greater potential for reducing costs and optimising parts for traditional manufacturing techniques – 3/5-axis machining with its G-code-like considerations, injection moulding – than has been highlighted. Meanwhile, AM as a prototyping workflow for those techniques is less mature than it could be, because the focus has been on the weird and wonderful organic parts you couldn't make at all before AM/3D printing.

AWS and NICE DCV – a happy marriage! … resulting in a free protocol on AWS

It's now two years since Amazon bought NICE and their DCV and EnginFrame products. NICE were very good at what they did. For a long time they were one of the few vendors who could offer a decent VDI solution that supported Linux VMs; with a history in HPC and Linux, they truly understood virtualisation and compute as well as graphics. They'd also developed their own remoting protocol akin to Citrix's ICA/HDX, and it was one of the first to leverage GPUs for tasks like H.264 encode.

Because they did Linux VMs and neither Citrix nor VMware did, NICE were often a complementary partner rather than a competitor, although with both Citrix and VMware adding Linux support that has shifted a little. AWS promised to leave the NICE DCV products alone and have been true to that. However, the fact that Amazon now owns one of the best and most experienced protocol teams around has always raised the possibility that they could do something a bit more interesting than most other clouds.

Just before Christmas 2017, without much fuss or publicity, Amazon announced that they'd throw NICE DCV in for free on AWS instances.

NICE DCV is a well-proven product with standalone customers, and for many users it offers an alternative to the Citrix/VMware offerings; which raises the question: why run VMware/Citrix on AWS if NICE will do?

There are also an awful lot of ISVs looking to offer cloud-based services and products, including many with high graphical demands. To run these applications well in the cloud you need a decent protocol; some have developed their own, which tend to be fairly basic H.264, while others have bought in technology from the likes of Colorado Code Craft or Teradici's standalone Cloud Access Software, based around the PCoIP protocol. Throwing in a free protocol removes the need to license a third party such as Teradici, which means the overall solution cost is cut with no impact on the price AWS get for an instance. This could be a significant driver for ISVs and end users to choose AWS over its competitors.

Owning and controlling a protocol was a smart move on Amazon's part; the protocol is a key element of remoting and of the performance of a cloud solution, so it makes perfect sense to own one. Microsoft, and hence Azure, already have RDS/RDP under their control. Will we see moves from Google or Huawei in this area?

One niggle is that many users need not just a protocol but a broker. At the moment Teradici and many others do not offer one themselves, and users need to go to a third party such as Leostream for the functionality to spin up and manage the VMs; Leostream have made a nice little niche supporting a wide range of protocols. It turns out that AWS also offer a broker via the NICE EnginFrame technologies. This is an additional paid-for component, but a single-vendor offering may well appeal. This was, however, really hard to find out: I couldn't work out what was available from the AWS documentation and product overviews, and in the end I had to contact the AWS product managers for NICE to be certain.
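
For a flavour of what a broker automates, here is the "spin up a VM" half sketched with boto3. The AMI ID and tag are placeholders rather than a real NICE DCV deployment; EnginFrame layers session entitlement, pooling and hand-off on top of calls like these.

```python
# The "spin up a VM" half of a broker's job, sketched with boto3.
# The AMI ID is a placeholder; a real deployment would use an image
# with NICE DCV (or another protocol stack) pre-installed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder image with the remoting stack baked in
    InstanceType="g3.4xlarge",     # a GPU instance suited to remote 3D work
    MinCount=1, MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "remote-session"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"launched {instance_id}; a broker would now track it and assign it to a user")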

Teradici do have a broker in development, the details of which they discussed with Jack on brianmadden.com.

So, today there is the option of a free protocol and a paid-for broker (NICE DCV plus EnginFrame, albeit tied to AWS), and soon there will be a paid protocol from Teradici with a broker thrown in; the Teradici protocol is already available on the AWS Marketplace.

This is just one example of many where cloud providers can take functionality in-house and boost their appeal by cutting out VDI, broker or protocol vendors. Those niche protocol and broker vendors will need to offer value through platform independence and any-ness (the ability to choose AWS, Azure or Google Cloud) against the out-of-the-box, one-stop offerings of the cloud giants. Some will probably succeed, but a few may well be squeezed. It may indeed push some to widen their offerings, e.g. protocol vendors adding basic broker capabilities (as we are seeing with Teradici) or widening Linux support to match the strong NICE offering.

In particular, broker vendor Leostream may be pushed if other protocol vendors follow Teradici's lead. However, analysts such as Gabe Knuth have reported for many years on Leostream's ability to evolve and add value.

We've seen so many acquisitions in VDI/cloud where a good small company gets consumed by a giant and eventually fails, the successful product dropped and the technologies never adopted by the mainstream business. AWS seem to have achieved the opposite with NICE, continuing to invest in a successful team and product whilst leveraging exactly what they do best. What a nice change! It's also good to see a bit more innovation and competition in the protocol and broker space.

Open-sourced Virtualized GPU-sharing for KVM

About a month ago Jack Madden's Friday EUC news-blast (worth signing up for) highlighted a recent announcement from AMD around open-sourcing their GPU drivers for hardware shared GPU (MxGPU) on the open-source KVM hypervisor.

The actual announcement was made by Michael De Neffe on the AMD site, here.

KVM is an open-source hypervisor favoured by many in the Linux ecosystem and in segments such as education. Some commercial hypervisors are built upon KVM, adding certain features and commercial support, such as Red Hat's RHEL offering. Many large users, including cloud giants such as Google, take the open-source KVM and roll their own version.
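
As a taste of that ecosystem, here is a minimal sketch using libvirt's Python bindings, the management layer most KVM stacks build on, to connect to a local hypervisor and list its VMs. It assumes the libvirt-python package and a local qemu:///system daemon.

```python
# Enumerate VMs on a local KVM host via libvirt, the management API
# that most "roll your own" KVM stacks are built upon.
# Assumes the libvirt-python package and a running local libvirtd.
import libvirt

conn = libvirt.open("qemu:///system")       # connect to the local KVM/QEMU hypervisor
for dom in conn.listAllDomains():           # every defined domain (VM)
    state, _reason = dom.state()
    running = (state == libvirt.VIR_DOMAIN_RUNNING)
    print(dom.name(), "running" if running else "not running")
conn.close()
```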


Significant announcements for AR/VR for the CAD/AEC Industries

Why should CAD care about AR/VR?

Is VR (Virtual Reality) all niche headsets and gaming? Or putting bunny ears on selfies… VR basically has a marketing problem: it looks cool, but to many in enterprise it seems a niche technology for previewing architectural buildings and the like. In fact, the use cases are far wider if you get past those big boxy headsets. AR (Augmented Reality) is essentially bits of VR on top of something see-through. There's a nice overview video of the Microsoft HoloLens from Leila Martine at Microsoft, including some good industrial case studies (towards the end of the video), here. Sublime have some really insightful examples too, such as a Crossrail project using AR for digital-twin maintenance.

This week there have been some _really_ very significant announcements from two "gaming" engines, Unity and the Unreal Engine (UE) from Epic. The gaming engines take data about models (which could be CAD/AEC models) together with lighting and material information and put it all together in a "game" you can explore – or, thought of another way, they make a VR experience. Traditionally these technologies have been focused on the gaming and film/media (VFX) industries. Whilst these games can be run with a VR headset, like true games they can also be used on a big screen for collaborative viewing.

IoT Lifecycle attacks – lessons learned from Flash in VDI/Cloud

One of the pain points in VDI for many years has been Flash redirection. Flash is a product that its maker Adobe seems to have been effectively de-investing in for years. With redirection there is both server and client software; Adobe dropped development of the Linux client many years ago, then surprisingly resurrected it late last year (presumably after customer pressure). Adobe have since said they will kill the Flash player on all platforms in 2020.

Android Rooting and iOS Jailbreaking – lessons learned for IoT Security

Many security experts regard Android as the wild west of IT: an OS based on Linux, developed by Google primarily for mobile devices, but now becoming key to many endpoints associated with IoT, automotive, televisions etc. With over 80% of smartphones running Android and most of the rest using Apple's iOS, Android is well established, and security is a big concern.

Imagine you are a big bank and you want 20,000 employees to be able to access your secure network from their own phones (BYOD, Bring Your Own Device), or you want to offer your millions of customers your bank's branded payment application on their own phones. How do you do it?

Effective Digital Content: Identifying your content top 10!

Make your top content work even harder!

This is a quick and dirty trick common in enterprise marketing, and often used by proactive product managers themselves. Most enterprise product marketing and product managers can get access to the Google/WordPress analytics for their products.

It is typical that a small percentage of the content on any website attracts most of the reads. I've recently done some analysis on my own blog site, and in this article I'll use it as an example to explain:

1)      How to analyse your view metrics to deduce your top content (a quick sketch of this step follows below)
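
As a sketch of that first step, assuming you've exported per-page view counts from your analytics to a CSV; the filename and column names here are placeholders, not a real analytics export format.

```python
# Find the top 10 pages and what share of total traffic they carry.
# Assumes a CSV export of analytics data with 'page' and 'views' columns.
import pandas as pd

views = pd.read_csv("page_views.csv")                     # placeholder filename
top10 = views.sort_values("views", ascending=False).head(10)

share = top10["views"].sum() / views["views"].sum()
print(top10[["page", "views"]].to_string(index=False))
print(f"the top 10 pages account for {share:.0%} of all views")
```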

Super analysis of the draft US Senate IoT security bill

Just a quick note to highlight a super analysis by PenTestPartners of the recent draft US Senate bill, the "Internet of Things Cybersecurity Improvement Act of 2017".

Lots of interesting links and implications for OEMs. Worth a read >>> HERE. Very interesting discussions about "Right to Repair" too.

There have been several pieces of draft legislation in the past few weeks, see:

 
Key points in the UK Statement of Intent on Data Privacy

This blog post on The Register caught my eye with the headline "Re-identifying folks from anonymised data will be a crime in the UK". It's a very good overview of the UK government's recent (7th August 2017) Statement of Intent on Data Privacy (basically how the UK will align with the EU-wide GDPR legislation given Brexit).

It's worth reading the Statement of Intent in full; the points that caught my attention were:
