Back in the 1980s and '90s, General Electric (GE) ran a very successful ad campaign with the tagline "We Bring Good Things to Life." Fast forward to today, and a whole new set of companies, many with roots in semiconductors, is using its technologies to bring several good tech ideas, including AI, to life.

Chief among them is Nvidia, a company that began life creating graphics chips for PCs but has evolved into a "systems" company that offers technology solutions across an increasingly broad range of industries. At the company's GPU Technology Conference (GTC) last week, they demonstrated how GPUs are now powering efforts in autonomous cars, medical imaging, robotics, and most importantly, a subsegment of Artificial Intelligence called Deep Learning.

Of course, it seems like everybody in the tech industry is now talking about AI, but to Nvidia's credit, they're starting to make some of these applications real. Part of the reason is that the company has been at it for a long time. As luck would have it, some of the early, and obvious, applications for AI and deep learning centered on computer vision and other graphically intensive workloads, which happened to be a good fit for Nvidia's GPUs.

But it's taken a lot more than luck to extend the company's efforts into the data center, cloud computing, big data analytics, edge computing, and the other applications they're enabling today. A focused long-term vision from CEO Jensen Huang, solid execution of that strategy, extensive R&D investments, and a big focus on software have all allowed Nvidia to reach a point where they are now driving the agenda for real-world AI applications in many different fields.

Those advancements were on full display at GTC, including some that, ironically, have applications in the company's heritage of computer graphics. In fact, some of these developments finally brought to life a concept for which computer graphics geeks have been pining for decades: real-time ray tracing. The computationally intensive technology behind ray tracing essentially traces rays of light as they bounce off objects in a scene, enabling hyper-realistic computer-generated graphics, complete with detailed reflections and other visual cues that make an image look "real". The company's new RTX technology leverages a combination of their most advanced Volta GPUs, a new high-speed NVLink interconnect between GPUs, and an AI-powered software technology called OptiX that "denoises" images and allows very detailed ray-traced graphics to be created in real time on high-powered workstations.
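For readers who want a feel for what "tracing rays" actually means, here is a minimal, purely illustrative sketch in Python (not Nvidia's RTX or OptiX code): it fires a single ray at a sphere, finds where they intersect, and computes a simple diffuse shade at that point. Real-time ray tracing does this for millions of rays per frame, with multiple bounces, which is why it needs GPU horsepower plus AI denoising.

```python
# Minimal ray-tracing sketch: fire a ray, find what it hits, shade the hit point.
# All scene values below are illustrative stand-ins.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c              # direction is assumed to be unit length
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(hit_point, normal, light_pos):
    """Simple diffuse shading: brightness depends on the angle between the
    surface normal and the direction toward the light."""
    to_light = tuple(l - p for l, p in zip(light_pos, hit_point))
    length = math.sqrt(sum(v * v for v in to_light))
    to_light = tuple(v / length for v in to_light)
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# Trace a single ray from the camera toward a sphere sitting in front of it.
camera = (0.0, 0.0, 0.0)
ray_dir = (0.0, 0.0, 1.0)                               # unit-length direction
sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
light = (5.0, 5.0, 0.0)

t = intersect_sphere(camera, ray_dir, sphere_center, sphere_radius)
if t is not None:
    hit = tuple(c + t * d for c, d in zip(camera, ray_dir))
    normal = tuple((h - s) / sphere_radius for h, s in zip(hit, sphere_center))
    print("pixel brightness:", shade(hit, normal, light))
```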

On top of this, Nvidia announced a number of partnerships and integrations with companies, applications, and open standards that have a strong presence in the data center for AI inferencing, including Google's TensorFlow, Docker, Kubernetes, and others. For several years, Nvidia has offered tools and capabilities that were well suited to the initial training portion of building neural networks and other tools used in AI applications. At this year's GTC, however, the company focused on the inferencing half of the equation, with announcements that ranged from a new version (4.0) of a software tool called TensorRT, to optimizations for the Kaldi speech recognition framework, to a new partnership with Microsoft around WindowsML, a machine learning platform for running pre-trained models designed to do inferencing in the latest version of Windows 10.

The TensorRT advancements are particularly important because that tool is intended to optimize how data centers run inferencing workloads, such as speech recognition for smart speakers and object recognition in real-time video streams, on GPU-equipped servers. These are the kinds of capabilities that real-world AI-powered devices have begun to offer, so improving their efficiency should have a big influence on their effectiveness for everyday consumers. Data center-driven inferencing is a very competitive market right now, however, because Intel and others have had some success here (such as Intel's recent efforts with Microsoft to use FPGA chips to enable more contextual and intelligent Bing searches). Nevertheless, it's a big enough market that there are likely to be strong opportunities for Nvidia, Intel, and other up-and-coming competitors.
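To make the training-versus-inferencing distinction concrete, here is a hedged, purely conceptual sketch in plain NumPy (not the actual TensorRT 4.0 API): inferencing just means running new inputs through a network whose weights were already learned during training, which is why it can be aggressively optimized for throughput and latency on servers.

```python
# Conceptual illustration of inferencing: push new inputs through a network
# whose weights are already fixed ("pre-trained"). The random weights below
# are stand-ins; a real deployment would load weights produced by training.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained two-layer network.
W1, b1 = rng.standard_normal((784, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)

def infer(batch):
    """Forward pass only: no gradients, no weight updates."""
    hidden = np.maximum(0.0, batch @ W1 + b1)      # ReLU activation
    logits = hidden @ W2 + b2
    return logits.argmax(axis=1)                   # predicted class per input

# "Requests" arriving at a server: a batch of 32 flattened 28x28 images.
incoming = rng.standard_normal((32, 784))
print(infer(incoming))                             # 32 predicted class labels
```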

For automotive, Nvidia launched its Drive Constellation virtual reality-based driving simulation package, which uses AI to create realistic driving scenarios and feeds them to a separate machine running the company's autonomous driving software, which then reacts to them. This "hardware-in-the-loop" methodology is an important step for testing purposes. It allows these systems both to log significantly more miles in a safe, simulated fashion and to test more corner-case or dangerous situations, which would be significantly more challenging, or even impossible, to test with real-world cars. Given the recent Uber and Tesla autonomous vehicle-related accidents, this kind of simulated testing is likely to take on even more importance (and urgency).
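As a rough, hypothetical sketch of how a hardware-in-the-loop setup is structured (not Nvidia's actual Drive Constellation software): one process simulates the world and generates sensor data, a separate component (standing in here for the machine running the driving stack) turns that data into control commands, and the simulator applies those commands and advances to the next step.

```python
# Hypothetical skeleton of a hardware-in-the-loop test loop: the simulator
# produces sensor frames, the driving stack (which in a real rig runs on
# separate hardware) returns control commands, and the simulator applies them.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    obstacle_distance_m: float   # simulated distance to the nearest obstacle

@dataclass
class Controls:
    throttle: float              # 0.0 to 1.0
    brake: float                 # 0.0 to 1.0

def simulate_step(step: int) -> SensorFrame:
    """Stand-in simulator: an obstacle steadily gets closer (a 'corner case')."""
    return SensorFrame(obstacle_distance_m=100.0 - 12.0 * step)

def driving_stack(frame: SensorFrame) -> Controls:
    """Stand-in for the autonomous driving software under test."""
    if frame.obstacle_distance_m < 30.0:
        return Controls(throttle=0.0, brake=1.0)   # emergency braking
    return Controls(throttle=0.5, brake=0.0)

# Simulate -> decide -> actuate -> repeat, logging every decision so dangerous
# scenarios can be replayed and audited without putting a real car on the road.
for step in range(8):
    frame = simulate_step(step)
    controls = driving_stack(frame)
    print(f"step {step}: distance={frame.obstacle_distance_m:5.1f} m, "
          f"throttle={controls.throttle}, brake={controls.brake}")
```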

Nvidia also announced an arrangement with Arm to license its Nvidia Deep Learning Accelerator (NVDLA) architecture for integration into Arm's AI-focused Trillium platform for machine learning. This allows Nvidia's inferencing capabilities to be built into what are expected to be billions of Arm-based IoT (Internet of Things) devices living at the edge of computing networks. In effect, it extends AI inferencing to even more devices.

Finally, one of the more impressive new applications of AI that Nvidia showed at GTC actually ties back to GE. Several months back, the healthcare division of GE announced a partnership with Nvidia to expand the use of AI in its medical devices business. While some of the details of that relationship remain unknown, at GTC Nvidia did demonstrate how its Project Clara medical imaging supercomputer could apply AI not only to images from newer, more capable medical imaging devices, but even to images made by older devices, improving the legibility, and therefore the medical value, of things like MRIs, ultrasounds, and much more. Though no specifics were announced between the two companies, it's not hard to imagine that Nvidia will soon be helping GE to, once again, bring good things to life.

The promise of artificial intelligence, machine learning, and deep learning goes back decades, but it's only in the last few years, and really only in the last few months, that we're starting to see it come to life. There's still a tremendous amount of work to be done by companies like Nvidia and many others, but events like GTC help to demonstrate that the promise of AI is finally starting to become real.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.

Masthead image credit: Daniel Hjalmarsson on Unsplash