

Issue 14: Nvidia’s Computer-generated CEO, Snap’s Use of GPUs for Model Inference

Nvidia shows off its AI+computer animation capabilities, Snap invests in GPU-accelerated inference, and estimating the weight of an object with visual regression

Welcome to issue #14 of The Comet Newsletter! 

Before jumping into this week’s newsletter, I wanted to quickly share a reminder about our upcoming Industry Q&A. We’d love to see you all there!


Back to our regularly-scheduled programming…

This week, we cover Nvidia’s computer-generated CEO and a roundup of the most recent generative art implementations from around the internet. 

Additionally, we highlight Snap’s approach to utilizing GPUs for model inference, as well as an interesting visual regression project from Edge Impulse.

Like what you’re reading? Subscribe here.

And be sure to follow us on Twitter and LinkedIn — drop us a note if you have something we should cover in an upcoming issue!

Happy Reading,

Austin

Head of Community, Comet

——————————–

INDUSTRY | WHAT WE’RE READING | PROJECTS

Nvidia Reveals Its CEO Was Computer Generated in Keynote Speech


Over the past couple of years, AI-powered deepfakes have repeatedly taken center stage in industry chatter, both because the technology is incredibly powerful and because it is fraught with social risk.

Nvidia jumped headfirst into the intersection of AI and computer animation at their recent GTC conference in April—not by announcing a new product or capability, but by actually creating a digital version of their CEO Jensen Huang (and his kitchen). 

The feat was accomplished using a fleet of DSLR cameras and AI systems that mimicked Huang’s gestures and expressions, and the digital double appeared for just 14 seconds of the hour-and-48-minute keynote speech. Huang detailed how the digital version of himself was created as a way to showcase Omniverse, a platform that incorporates various tools for engineers to create animations, and which the company calls a “metaverse” for engineers.

Read the full report in Motherboard, by Vice

——————————–

INDUSTRY | WHAT WE’RE READING | PROJECTS

Applying GPU to Snap-Scale Machine Learning Inference

Through vast improvements in (and increased availability of) large-scale datasets, greater computational power, and advanced neural network architectures, deep learning has soared in both popularity and practicality in recent years.

But as researchers train bigger models and demands on real-world model performance increase, there remain plenty of hurdles to overcome and challenges to confront. In this excellent technical blog post from the AI Platform Team over at Snap—a clear success story in the implementation of deep learning models at scale—the team explores how and why they’ve adopted model inference-oriented GPU accelerators to build a more effective and efficient inference engine.

It’s a really interesting article that examines, with a high degree of technical specificity, what precipitated the shift from CPUs to GPU accelerators, how the team engineered a system that can leverage these processors, and more.
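The post is worth reading in full for Snap’s actual architecture; purely as a flavor of the core idea, here’s a minimal PyTorch sketch of batched inference on a GPU accelerator. The model, batch size, and input shape are illustrative stand-ins, not Snap’s production setup:

```python
import torch
import torchvision.models as models

# Untrained stand-in model; Snap's production models are not public.
model = models.resnet18().eval()

# Move the model to the GPU once at startup; fall back to CPU if unavailable.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

@torch.no_grad()
def predict(batch: torch.Tensor) -> torch.Tensor:
    """Run one batched forward pass on the accelerator.

    Batching requests amortizes per-call overhead, which is part of why
    GPU inference can beat per-request CPU inference at scale.
    """
    return model(batch.to(device)).cpu()

# Simulate a batch of 32 pre-processed 224x224 RGB images.
requests = torch.randn(32, 3, 224, 224)
print(predict(requests).shape)  # torch.Size([32, 1000])
```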

Read the full blog post from Snap’s AI Platform Team here.

——————————–

INDUSTRY | WHAT WE’RE READING | PROJECTS

List of VQGAN+CLIP Implementations

If you’ve been following us recently, you might know we’ve been quite taken by the burst of UIs facilitating experimentation with generative Transformer models (like our recent CLIPDraw implementation). 

ML researcher and builder LJ Miranda was kind enough to compile the recent rush of VQGAN+CLIP implementations: a particular combo of generative models that allows users to create AI art pieces echoing styles like MS Paint, Unreal Engine, and more.
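If you’d like a taste of the CLIP half of that combo without running a full VQGAN pipeline, here’s a minimal sketch using OpenAI’s CLIP package that scores how well an image matches a few style prompts (the image path and prompts are placeholders; this is not any of the listed implementations):

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder path; swap in any generated image you want to score.
image = preprocess(Image.open("art.png")).unsqueeze(0).to(device)

# Style modifiers like these are what steer VQGAN+CLIP toward a look.
prompts = ["a landscape in the style of MS Paint",
           "a landscape rendered in Unreal Engine"]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```

In a full VQGAN+CLIP loop, a similarity score like this becomes the loss that iteratively nudges the generated image toward the text prompt.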

Explore the collection of implementations here.

Estimate Weight From a Photo Using Visual Regression

Edge Impulse, a company focused on applying ML to edge devices (mobile phones, smart appliances, and other end-user hardware), recently shared an interesting project centered on predicting the weight of a given item using a “visual regression” model that runs directly on an edge device such as a Raspberry Pi or Jetson Nano. The model takes an image as input and outputs a predicted weight for the object in the frame. It’s a great example of an achievable, functional ML project that can run on lightweight devices; a minimal sketch of the idea follows.
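Edge Impulse doesn’t publish its exact architecture in the post, but conceptually a visual regression model is just a small CNN with a single linear output trained with a regression loss like MSE. Here’s a minimal Keras sketch of that idea (layer sizes and input shape are illustrative, not Edge Impulse’s actual model):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative input size; edge deployments often use small inputs like 96x96.
inputs = tf.keras.Input(shape=(96, 96, 3))
x = layers.Conv2D(16, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
# A single linear unit: the network regresses a continuous weight value
# instead of predicting a class, which is the "visual regression" trick.
outputs = layers.Dense(1)(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```

Train it on (image, measured weight) pairs, then quantize and export for the target device, which is the kind of pipeline Edge Impulse’s tooling automates.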

Read more from the Edge Impulse team here.

Austin Kodra

Austin Kodra is the Head of Community at Comet, where he works with Comet's talented community of Data Scientists and Machine Learners.