
Researchers at NVIDIA Reveal Major Advances in Deep Learning


Artificial intelligence has gained enormous significance in recent years, and for a glimpse of the brains behind it, one need only look at NIPS, one of the world's most prestigious neural network and machine learning conferences. In research led by Sifei Liu, a former NVIDIA intern and now a full-time research scientist there, the team presented the workings of Learning Affinity via Spatial Propagation Networks.

For more than three decades, researchers and data scientists have shared ground-breaking work at the Conference and Workshop on Neural Information Processing Systems. However, NIPS has drawn wide attention only in recent years, driven by the surge of interest in deep learning.

NVIDIA was involved in a total of four papers: two were accepted to the conference, and the team contributed to the other two. The researchers come from the NVIDIA Research team, which focuses on pushing the limits of technology in computer vision, self-driving cars, machine learning, robotics and related areas. Different as these fields are, they share the common aim of advancing the science of AI by developing new tools and technologies with the potential for breakthroughs, advances that also apply to modern-day challenges such as healthcare and autonomous vehicles.

A computer vision application understands an image by identifying and labeling what its individual pixels represent. This task is called image segmentation, and the spatial propagation network specializes in doing it very accurately. The deep learning network captures the relationships between neighboring pixels using the well-established physics principle of diffusion, and it can be trained efficiently to recognize objects, spaces, colors, textures, tones and more.

Crucially, the spatial propagation network uses data rather than a hand-designed model to define these affinities. The learning model applies to any task that requires pixel-level labels, including image matting, image colorization and face parsing, and it can also capture functional and semantic relationships in an image. The paper includes theoretical interpretations of the network's operations along with a mathematical proof that it can be implemented at high speed. Running on GPUs with the CUDA parallel programming model, the network is about 100x faster than previously possible.

The spatial propagation network does not need to solve linear equations or perform iterative inference. It is also highly flexible, which allows it to be inserted into other networks and used in numerous situations.
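The diffusion-style propagation described above can be illustrated with a one-dimensional scan, in which each output blends the current input with the previous output according to an affinity weight (learned by the network in the paper, hand-picked here). This is a minimal sketch for illustration, not the paper's implementation; the function name and weights are assumptions:

```python
import numpy as np

def linear_propagate(x, w):
    """One-dimensional linear propagation (left-to-right scan).

    x : 1-D array of input values (e.g. one row of a feature map)
    w : 1-D array of affinity weights in [0, 1), one per position
    Each output blends the current input with the previous output,
    a discrete form of diffusion: h[t] = (1 - w[t]) * x[t] + w[t] * h[t-1].
    High affinity spreads information between neighbors; zero affinity
    leaves the input untouched.
    """
    h = np.empty(len(x), dtype=float)
    h[0] = x[0]
    for t in range(1, len(x)):
        h[t] = (1.0 - w[t]) * x[t] + w[t] * h[t - 1]
    return h

# A sharp edge in x is smoothed where affinities are high:
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
smoothed = linear_propagate(x, np.full(5, 0.8))  # edge diffuses rightward
kept = linear_propagate(x, np.zeros(5))          # zero affinity: output == input
```

The full network runs such scans in four directions over the image and learns a weight per pixel, which is what makes the propagation content-aware.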

The rise of unsupervised learning and generative modeling has been a trend at NIPS this year. The best example of that trend is a recent paper with lead author Ming-Yu Liu, a researcher at NVIDIA, titled "Unsupervised Image-to-Image Translation Networks".

Until now, most deep learning has used supervised learning to give machines a human-like ability to recognize objects. A supervised learner can easily distinguish between breeds of dogs because labeled images of every breed are available for training. To give machines a somewhat imaginative ability, Liu and his team turned to unsupervised learning and generative modeling. In their results, the left side of each image pair shows the input, a winter or sunny scene, and the right side shows the summer or rainy scene the AI imagined.

These striking results were obtained using generative adversarial networks (GANs) together with a shared latent space assumption. In the top pair, the first GAN was trained on winter scenes: overcast skies, bare trees, snow covering everything except the cars going down the frozen road. In the second pair, the GAN was trained to understand the summer season as a whole rather than its specifics. Unsupervised learning eliminates the need to capture and label data, which takes a great deal of time and energy.
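The shared latent space assumption says that corresponding images in the two domains map to the same latent code, so translation amounts to encoding in one domain and decoding in the other. Here is a minimal numerical sketch of that idea, using hypothetical linear maps in place of the paper's encoder and generator networks:

```python
import numpy as np

# Toy shared-latent-space model: samples in both domains are generated
# from the same latent code z. The matrices A1 and A2 stand in for the
# paper's learned generator networks (assumptions for illustration).
rng = np.random.default_rng(0)
A1 = rng.normal(size=(4, 2))   # "generator" for domain 1: x1 = A1 @ z
A2 = rng.normal(size=(4, 2))   # "generator" for domain 2: x2 = A2 @ z

def encode1(x1):
    # Recover the latent code from a domain-1 sample; the pseudo-inverse
    # plays the role of the learned encoder.
    return np.linalg.pinv(A1) @ x1

def translate_1_to_2(x1):
    # Translation = encode in domain 1, then decode in domain 2.
    return A2 @ encode1(x1)

z = rng.normal(size=2)
x1 = A1 @ z                      # e.g. a "winter" sample
x2_true = A2 @ z                 # its "summer" counterpart
x2_pred = translate_1_to_2(x1)   # matches x2_true up to numerical error
```

In the actual paper the encoders and generators are deep networks trained adversarially, with weight sharing enforcing the common latent space, but the encode-then-decode structure of translation is the same.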

Using GANs for unsupervised learning is not novel in itself. However, NVIDIA's research has produced results that push unsupervised AI further than before, and the benefits of the technique have a wide range of applications: less labeled data is required, and so is less time. This AI could shape the future of self-driving cars, which could be trained on simulations of varied seasonal conditions, including rain, snow and cloud.