The NVIDIA Blog


Square Enix, Microsoft, NVIDIA Point Way Through Uncanny Valley at Microsoft BUILD

Posted: 30 Apr 2015 11:00 AM PDT

Many tech demos end in tears. Square Enix's moving demo during the keynote at Microsoft's BUILD 2015 developer conference this week might be the first where that's exactly the point.

In collaboration with NVIDIA and Microsoft, Square Enix – maker of the Final Fantasy series of video games – stunned the BUILD audience with a research project called WITCH CHAPTER 0 [cry].

The project portrays human crying — considered one of the most difficult emotions to recreate digitally — at a level of quality never seen before with a real-time, computer-generated character.

It's a demo that may come to represent a milestone for an industry that has long struggled to cross the "uncanny valley."

Coined by pioneering roboticist Masahiro Mori nearly 50 years ago, the term describes the boundary where emotional realism in the digital world begins to blur with reality — and creates a sense of the strange, or even revulsion, in human observers.

Overcoming this sensation has been a stretch goal for computer-generated graphics for decades. It's a task that's even tougher when performed in real time.

The WITCH CHAPTER 0 [cry] project, powered by Microsoft's new DirectX 12 application programming interface (API) and NVIDIA GeForce graphics, points to a way through the uncanny valley.

Bringing more reality and depth to character expressions will better immerse players in stories during gameplay, and deepen their connection to characters.

NVIDIA GameWorks Effects Studio and GeForce GTX graphics were among the next-generation technologies Square Enix put to use. Square Enix also conducted extensive research on real-time CG technology using DirectX 12. The results will be incorporated into Square Enix's Luminous Studio engine.


During the BUILD conference, Microsoft unveiled the research project running on four of our flagship NVIDIA GeForce TITAN X GPUs. It's a stunning example of what they can do with the DirectX 12 API, due to arrive with Windows 10, Microsoft's next-gen operating system.

Much of the know-how achieved by this project will surely trickle down to tomorrow's games. That's reason enough for tears of joy.

The post Square Enix, Microsoft, NVIDIA Point Way Through Uncanny Valley at Microsoft BUILD appeared first on The Official NVIDIA Blog.

NYU to Advance Deep Learning Research with Multi-GPU Cluster

Posted: 30 Apr 2015 09:00 AM PDT

Self-driving cars. Computers that detect tumors. Real-time speech translation.

Just a few years ago, deep learning — training computers to identify patterns and objects, much like the way humans do — was the domain of a few artificial intelligence and data science researchers. No longer.

Today, top experts use it to do amazing things. And they continue to push the bounds of what's possible.

That's why New York University's Center for Data Science and NVIDIA are teaming up to develop next-gen deep learning applications and algorithms for large-scale GPU-accelerated systems.

Founded by deep learning pioneer Yann LeCun, who's also director of AI Research at Facebook, NYU's Center for Data Science (CDS) is one of several top institutions NVIDIA works with to push GPU-based deep learning forward.

Pushing the Deep Learning Technology Envelope

Tomorrow's advances in deep learning rely on new, more sophisticated algorithms. They're designed to help computers achieve — even surpass — human capabilities.

They also require the latest, most advanced computing technologies.

This is where GPU technology comes in. GPUs are the go-to technology for deep learning, cutting the time it takes to train neural networks by days, even months.

But until now, many researchers have worked on systems with only one GPU, which limits the number of training parameters and the size of the models they can develop.

By distributing the deep learning training process among many GPUs, researchers can increase the size of the models that can be trained and the number of models that can be tested. The result: more accurate models and new classes of applications.
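The synchronous data-parallel approach described above can be sketched in a few lines of Python. This is a toy illustration using NumPy and a linear model, not Square Enix's or NYU's actual code: each simulated worker stands in for a GPU, computes a gradient on its shard of the batch, and the averaged gradient drives a single shared update.

```python
import numpy as np

def grad_mse(w, X, y):
    # Gradient of mean-squared error for a linear model y ≈ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, n_workers, lr=0.1):
    # Split the batch across workers (stand-ins for GPUs), compute a
    # local gradient on each shard, then average the shard gradients --
    # the core idea of synchronous data-parallel training.
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [grad_mse(w, Xs, ys) for Xs, ys in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, X, y, n_workers=4)
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, so the model converges exactly as single-device training would; in a real cluster, the averaging step is what the inter-GPU communication (and larger effective batch size) buys you.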

Recognizing this, NYU recently installed a new deep learning computing system — called "ScaLeNet." It's an eight-node Cirrascale cluster with 64 top-of-the-line NVIDIA Tesla K80 dual-GPU accelerators.

The new high-performance system will let NYU researchers take on bigger challenges, and create deep learning models that let computers do human-like perceptual tasks.

"Multi-GPU machines are a necessary tool for future progress in AI and deep learning. Potential applications include self-driving cars, medical image analysis systems, real-time speech-to-speech translation, and systems that can truly understand natural language and hold dialogs with people," says LeCun.

ScaLeNet will be used for research projects and educational programs at CDS by a large community of faculty members, research scientists, postdoctoral fellows, and graduate students.

So, expect big things.

"CDS has research projects that apply machine and deep learning to the physical, life and social sciences," LeCun says. "This includes Bayesian models of cosmology and high-energy physics, computational models of the visual and motor cortex, deep learning systems for medical and biological image analysis, as well as machine-learning models of social behavior and economics."

LeCun hopes the work at NYU can serve as a model used to advance the field of deep learning and train the next generation of AI experts.

If you'd like to learn more, LeCun and some of his Facebook and NYU colleagues will present a paper in May at the International Conference on Learning Representations in San Diego. The paper discusses a fast, multi-GPU implementation of convolutional networks — a type of deep learning model used for image and video understanding.
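The convolutional layer at the heart of such models can be illustrated with a minimal NumPy sketch. This is an illustrative example, not the paper's multi-GPU implementation: a small kernel slides over an image and computes a weighted sum at every position, which is how convolutional networks detect local patterns such as edges.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation: slide the kernel over the image and
    # take a weighted sum at each position -- the basic operation a
    # convolutional layer applies to an input channel.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image:
img = np.zeros((5, 5))
img[:, 2:] = 1.0                 # left half dark, right half bright
edge = np.array([[-1.0, 1.0]])   # responds to left-to-right increases
response = conv2d(img, edge)     # peaks exactly at the dark/bright boundary
```

Real convolutional networks stack many such layers (with learned kernels and nonlinearities), which is why training them is so GPU-hungry and why multi-GPU implementations like the one in the paper matter.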

To learn more or to register, visit the conference website.


The post NYU to Advance Deep Learning Research with Multi-GPU Cluster appeared first on The Official NVIDIA Blog.