The NVIDIA Blog
- What’s Cooking in Pro Graphics: How Real-Time Ray Tracing Can Avert a Real-Life “Death Ray”
- NVIDIA Brings Interactive, Physically Based Rendering to the Mainstream
- How Pixar’s Animators Used GPUs to Create a Singing Volcano
- Artomatix – “a Painkiller and Vitamin for Artists” – Wins $100,000 at Emerging Companies Summit
- Cold Storage: How GPUs Figure Into Delivering Payloads to the Moon
- How Google Uses GPUs to Revolutionize Speech, Video, Image Recognition
Posted: 19 Mar 2015 07:00 AM PDT
You know something's awry when your building starts melting nearby cars.
London's year-old 20 Fenchurch Street tower is a stunner. But the same curved glass that gives the 37-story tower its nickname, "The Walkie Talkie," also has a knack for concentrating sunlight. The result: a hot spot that melted part of a nearby black Jaguar XJ and cooked shampoo in a local barber shop. It's even been used to fry eggs.

Such "death rays" are a growing problem, thanks to a new generation of glass-sheathed buildings with radical computer-designed curves. Those curves reflect – and concentrate – light in ways that have been hard for designers and engineers to predict. Until now. Our demo at NVIDIA's annual GPU Technology Conference, in Silicon Valley, taps into the power of GPUs to show how London's fifth-tallest building came to be called the "Fryscraper."

And Iray We Go

Rendering – the process of turning a digital model into an image on a screen – isn't new, of course. Nor is ray tracing, which tracks the way beams of light interact with objects in their environment. What's new is how our Iray ray tracing technology takes advantage of GPUs to render detailed models in real time (see "NVIDIA Brings Interactive, Physically Based Rendering to the Mainstream").

The result is revolutionary: rather than relying on technology that takes hours to create a single, static image – a snapshot – designers using Iray can view rich digital images as they work. And they can see how light interacts with their design over long stretches of time – as the sun moves across the sky at different times of the day and year – rather than just a moment or two. NVIDIA is putting these tools within reach of every designer with plugins that will build this capability into the most popular design tools. It's a move that's sure to save time. And, potentially, trouble.

Avoiding a Deadlier Death Ray

In fact, we found the Walkie Talkie building's solar glare could have been worse.
Alter the building's curves, just a nudge or two, and it could create a beam hot enough to melt lead.

Such powerful simulations build on technology we first demonstrated at last year's GTC. We showed, together with Honda, the first interactive visualization of an entire car. Our demo didn't just spin around a digital prototype. We showed how you could section the vehicle and peel off layers to view the innards of the car, right down to the silver Accord's electrical wires and seat springs. Technology like this promises to solve a huge number of common design problems. And some that aren't so common.

Challenges of Modeling Light

Take 20 Fenchurch – its glass curves create a spot where the temperature can rise to almost 200 degrees Fahrenheit. Or the Vdara Hotel, just off the Las Vegas Strip – its concave glass facade creates temperatures by the pool hot enough to melt plastic. Or L.A.'s extravagant Walt Disney Concert Hall – it heated up nearby condos, driving residents to draw their shades and run air conditioners.

None of this is the work of mad scientists or Bond villains. The structures were created by architects and engineers who lack the tools to predict how their designs will interact with the world around them. In the past, modeling reflected light has been a time-consuming procedure. It's usually reserved for presentations of near-final designs. And designers build those presentations around specific lighting conditions. They're snapshots, not simulations.

Introducing Quadro M6000 Graphics Cards

Our new Iray 2015 rendering technology changes that. When paired with our new Quadro M6000 graphics card – the world's most powerful GPU – Iray 2015 models the way light bounces around a scene as design teams tweak their models. And rather than having to wait hours to create photorealistic images that are ready to put in front of a customer, designers can just add more GPUs to create higher-resolution models in an instant.
With eight Quadro M6000 GPUs in our just-upgraded Quadro Visual Computing Appliance (VCA), the level of interactive photorealism is stunning. Put our VCA in a data center, and design teams can call on its rendering power when and where it's needed. Every NVIDIA Iray product will include the ability to stream rendering from machines running our Iray Server software.

Same Tools, New Rules

All this technology works with the tools designers already use. We're making Iray accessible to millions of users with add-ins for popular 3D creation applications, including Autodesk's 3ds Max, Maya and Revit, McNeel Rhinoceros and Maxon Cinema 4D.

With this new generation of prototyping tools, designers and engineers no longer have to build detailed physical models. Or create movies of rendered objects. Instead, designers can see their work in real time. That can save months. Or years. And even save a few Jaguars from the next "fryscraper."

The post What's Cooking in Pro Graphics: How Real-Time Ray Tracing Can Avert a Real-Life "Death Ray" appeared first on The Official NVIDIA Blog.
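The focusing effect behind the "Fryscraper" is easy to see in miniature. Below is a toy 2-D sketch in plain Python (illustrative only – it has nothing to do with Iray itself) that reflects parallel "sun" rays off a concave circular mirror and shows them converging near a single hot spot, the same geometry that cooked the Jaguar at building scale.

```python
import math

# Toy 2-D illustration of how a concave reflector concentrates parallel rays.
# A circular arc of radius R focuses incoming vertical rays near R/2 below
# its center -- the geometry behind curved-facade "death rays."

R = 100.0                      # mirror radius of curvature
hits = []
for i in range(-20, 21):
    x = i * 1.0                # horizontal offset of the incoming ray
    # Point where a vertical ray meets the arc x^2 + y^2 = R^2 (lower half)
    y = -math.sqrt(R * R - x * x)
    # Surface normal of the circle points toward the center (the origin)
    nx, ny = -x / R, -y / R
    # Reflect the downward ray d = (0, -1):  r = d - 2 (d . n) n
    dot = -1.0 * ny
    rx, ry = -2 * dot * nx, -1 - 2 * dot * ny
    # March the reflected ray to the optical axis x = 0 and record the height
    if abs(rx) > 1e-9:
        t = -x / rx
        hits.append(y + t * ry)

focus = sum(hits) / len(hits)
print(f"reflected rays converge near y = {focus:.1f} (paraxial focus is -R/2 = {-R / 2})")
```

The spread of individual crossing heights around the mean is the spherical aberration a designer would want a renderer to expose – and it is exactly this kind of reflected-light concentration, over many sun positions, that interactive ray tracing lets designers check before a building is built.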
Posted: 19 Mar 2015 07:00 AM PDT
Is it real or is it rendered? We've been teasing our social media followers for months now by posting stunning images and asking them if they can tell the difference between our computer-generated images and real ones.
Thousands have weighed in. And it's fiendishly difficult. But for designers who build the products we use every day – from the cars we drive to the buildings we live in – it's more than just pretty pictures. It's critical that what they see digitally accurately shows what their design is like in reality. Light, materials and form, all coming together in the intended way. But to visualize designs properly requires significant technology to calculate exactly how materials interact with light. For instance, whether glare occurs on a car's windshield if the dashboard is made of a certain material and not a slightly different one. To render those designs properly requires physically based rendering, and to make it interactive requires very fast GPUs. Now, we're announcing a multi-product roadmap to bring this capability to millions of designers. It has three main pieces:
Bringing Interactive, Scalable, Physically Based Rendering to Millions

Throughout 2015, NVIDIA is bringing Iray to several more 3D creation applications, including Autodesk's 3ds Max, Maya and Revit, and McNeel Rhinoceros. DAZ 3D has also made Iray available to its customers. This means millions of designers will now have access to Iray's capabilities, including the Iray Material Definition Language (MDL), which allows physically based materials to be interchangeable across apps, so designers can switch from one tool to another and get consistent results. Iray 2015 supports the latest measurement format from X-Rite, while MDL is being supported by a growing number of companies that let designers create physically based materials, including Allegorithmic and Old Castle. To learn more, please visit us here.

The post NVIDIA Brings Interactive, Physically Based Rendering to the Mainstream appeared first on The Official NVIDIA Blog.
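To make "physically based" concrete: materials are described by measured physical quantities that every renderer evaluates the same way, which is what makes them portable across tools. The sketch below (plain Python, illustrative only – this is not MDL syntax) evaluates one standard ingredient of such models, the Schlick approximation to Fresnel reflectance.

```python
import math

def fresnel_schlick(cos_theta, f0):
    """Fraction of light reflected at a surface, given the cosine of the
    viewing angle and the material's reflectance at normal incidence (f0)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Reflectance of glass (f0 around 0.04) rises sharply at grazing angles --
# one reason curved glass facades throw such strong specular reflections.
for deg in (0, 45, 80, 89):
    cos_t = math.cos(math.radians(deg))
    print(f"{deg:2d} deg -> reflectance {fresnel_schlick(cos_t, 0.04):.3f}")
```

Because the formula depends only on physical inputs, any two applications that share the same material description will agree on the result – the consistency across apps that MDL is designed to deliver.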
Posted: 18 Mar 2015 10:18 PM PDT
Creating singing volcanoes isn't, perhaps, a common use case for NVIDIA products.
But in a packed session at the GPU Technology Conference, attendees learned that GPUs helped Pixar get the details right in animating Uku, the singing volcano in the soon-to-be-released short film Lava.

Animating a quarter-mile-tall rock is different from animating people and animals. For instance, Pixar wanted to be sure Uku moved like trembling rock. And when animators depicted the volcano's mouth moving, Presto – Pixar's proprietary GPU-powered animation system – helped the production team determine that Uku's "cheeks" were moving too much. "We got comments that it looked less like a rock and more like a guy in a rock suit," said Byron Bashforth, the film's technical director.

The discovery prevented an unnecessary delay while the segment went to rendering, where it might have been discovered in the past. And quite often, Pixar's animators are able to unearth more obvious flaws on their own rather than relying on the shading team to alert them.

Presto – which Pixar's engineering lead, Dirk Van Gelder, demoed in a keynote at GTC 2014 – is a powerful application that helps animators see their work with and without shading, using a simple drop-down menu. While the cheek movement was a subtle and unexpected discovery, Presto users are able to view other cause-and-effect scenarios in real time. For instance, the production team wanted Uku's eyes to close when clouds shadowed his face, to emphasize sadness. To achieve this before the addition of realistic cloud shadowing, Van Gelder said his team developed a light blocker that enabled them to simulate shadows closing over Uku in real time. That, in turn, allowed animators to ensure that the character's eyes shut in sync with shadows hitting his face.
It's All in the Eyebrows

The animators also were able to use Presto's real-time capabilities to avoid an issue with Uku's rock-hewn "eyebrows." When animators viewed the raw animation of Uku's moving face without lighting, texture or shading, the eyebrows seemed to move too much. But when shading was added using a drop-down selection, the eyebrows seemed to move too little. In the past, that change would have been made after the rendering process. And that would have caused more delays. "Rendering is getting more and more expensive," said Van Gelder. "The more we can show them in Presto, the more we can hold off rendering until later in the process."

The post How Pixar's Animators Used GPUs to Create a Singing Volcano appeared first on The Official NVIDIA Blog.
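The light-blocker trick – driving one rig parameter directly off another so animators see the cause-and-effect in real time – can be sketched in a few lines. Everything below is a hypothetical toy (Presto's internals are not public): a flat occluder sweeps across a face, and eyelid closure is tied to how much of the face it covers, so the eyes shut in sync with the shadow.

```python
# Hypothetical toy: an occluder slides across a face and drives the eyelids.
FACE_LEFT, FACE_RIGHT = 2.0, 6.0   # face extent along x (arbitrary units)

def shadow_coverage(blocker_left, blocker_right):
    """Fraction of the face currently covered by the light blocker."""
    overlap = min(FACE_RIGHT, blocker_right) - max(FACE_LEFT, blocker_left)
    return max(0.0, overlap) / (FACE_RIGHT - FACE_LEFT)

def eyelid_closure(coverage):
    """Eyelid rig parameter (0 = open, 1 = closed), driven by the shadow."""
    return min(1.0, max(0.0, coverage))

# A 4-unit-wide blocker sweeping left to right across the face:
for t in range(11):
    left = -4.0 + t * 1.2
    c = shadow_coverage(left, left + 4.0)
    print(f"t={t:2d} coverage={c:.2f} eyelids={eyelid_closure(c):.2f}")
```

The point of evaluating this live rather than waiting for a render is exactly the one Van Gelder makes: the animator sees the eyes and the shadow move together, frame by frame, while the shot is still cheap to change.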
Posted: 18 Mar 2015 09:41 PM PDT
A year-old startup that promises to cut production costs for video games and movies won the second annual $100,000 Early Stage Challenge at NVIDIA's Emerging Companies Summit Wednesday.
Artomatix, based in Dublin, Ireland, automates content creation by generating images that would usually require trained artists. Run from a former Guinness brewery, the company uses machine learning and big data analytics to reproduce artwork such as characters and objects, progressively altering them in a realistic fashion. "It's an emerging area that we call machine creativity," Artomatix CEO Eric Risser said in an interview with GamesBeat. He described his company as both "a painkiller and a vitamin" for artists.

Artomatix was one of a dozen promising startups competing for a $100,000 check at the Early Stage Challenge. Other companies, hailing from six countries, focused on medical imaging, deep learning, rendering, pharmaceutical research and automotive technology. Now in its eighth year, the Emerging Companies Summit featured 17 companies presenting onstage before a room full of investors and technology executives in Silicon Valley, competing for a total of $650,000 in prizes. The morning session featured five promising startups involved in robotics, machine learning and advertising.

The post Artomatix – "a Painkiller and Vitamin for Artists" – Wins $100,000 at Emerging Companies Summit appeared first on The Official NVIDIA Blog.
Posted: 18 Mar 2015 03:20 PM PDT
Talk about cold storage.
By next year, it should be possible to send a payload to the moon for a cool $1.2 million a kilogram, thanks to work being done to help finance the race to drive a vehicle on the moon.

Astrobotic, a Pittsburgh startup that spun out of Carnegie Mellon University, plans to conduct its first lunar mission with its Griffin Lander in late 2016. It's a plan that's geared to make the company a contender for Google's Lunar XPRIZE competition, which will award a total of $30 million to private teams that can land a robot on the moon, move the robot 500 meters, and send back HDTV mooncasts. The goal of the competition is to help make space travel more affordable. In fact, it's doing exactly that – not for people yet, but for payloads.

Astrobotic is selling space on its lander to companies, universities and governments that want equipment delivered to the moon. GPUs are helping to make that happen by enabling the company to more effectively model the journey and ensure its lander arrives safely. Astrobotic's sponsored payloads are a handy way to raise a healthy chunk of the $100 million mission cost, Kevin Peterson, the company's chief technology officer, told a full conference room at this year's GPU Technology Conference. "As far as we know, we're the only company that has a configure-your-lunar-mission website," Peterson said.

Astrobotic's team isn't in business to be a moon courier, though. The purpose of its mission is to land and watch its rover do its best to claim the XPRIZE grand prize of $20 million.

As for the role of GPUs, they're used to simulate movements and pressure on the Griffin Lander during launch, helping Peterson and his colleagues determine whether the lander will shake apart or experience excessive high-frequency acceleration. Peterson said GPUs are also helping to simulate landings – a lot of them – by supporting ray tracing of the moon's surface so that the Astrobotic team can ensure it can target a landing area the size of a football field.
For comparison, NASA's Apollo missions targeted clear landing areas three miles wide. "We would like to land and leave a million times before we do an actual mission," Peterson said of Astrobotic's desire to land with precision.

By developing that precision landing capability, Astrobotic hopes to break free from the need to land on the moon in places that are flat and safe. Peterson said he and his team want to be able to land the Griffin Lander in lunar pits (formed by ancient lava flows) and on the lips of craters. GPUs will make that possible. "We want to go to more interesting places in the solar system than have been accessed in the past," he told a couple hundred GTC attendees. "Computation is the key to unlocking those locations."

The last communication between the company and the lander will occur during lunar orbit. Once the descent begins, Peterson said, the mission becomes autonomous, with both the lander – which GTC attendees can see firsthand in the exhibit hall – and the rover working similarly to the autonomous car technology being developed today.

The post Cold Storage: How GPUs Figure Into Delivering Payloads to the Moon appeared first on The Official NVIDIA Blog.
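The "land a million times before the real mission" idea can be sketched as a Monte Carlo dispersion study. The snippet below is a back-of-the-envelope illustration, not Astrobotic's actual simulator: it scatters simulated touchdown points around the target with Gaussian navigation error and checks how many fall inside a football-field-sized zone (roughly a 50-meter radius).

```python
import math
import random

# Back-of-the-envelope landing-dispersion sketch (not Astrobotic's simulator).
random.seed(42)

def simulate_landings(n, sigma_m):
    """Sample n touchdown points with Gaussian per-axis navigation error
    sigma_m (meters) and return the fraction inside a 50 m radius target."""
    inside = 0
    for _ in range(n):
        x, y = random.gauss(0.0, sigma_m), random.gauss(0.0, sigma_m)
        if math.hypot(x, y) <= 50.0:
            inside += 1
    return inside / n

# Tighter navigation error -> far more landings inside the target zone.
for sigma in (15.0, 30.0, 60.0):
    print(f"sigma={sigma:5.1f} m -> {simulate_landings(100_000, sigma):.1%} of landings inside")
```

Each trial is cheap, so millions can be run, and the sensitivity of the success rate to navigation error is exactly what tells engineers how precise the terrain-relative navigation (the GPU ray-tracing work the article mentions) must be.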
Posted: 18 Mar 2015 01:50 PM PDT
That’s it, gamers. You’ve been replaced.
Google has used a new technology called deep learning to build a machine that has mastered 50 classic Atari video games. And you've never seen Space Invaders played like this. Talk about the way it's meant to be played.

Of course, no one is coming for your GeForce GTX 980. But the same GPU technologies that power your video games are being used by Google to do things that few thought possible until now, Google Senior Research Fellow Jeff Dean explained Wednesday in a keynote speech at our annual GPU Technology Conference.

Dean is among a core group of engineers at Google who have built a new generation of technologies that have redefined the infrastructure that underpins the Web. Now, Dean and his colleagues are pushing into new domains – speech, vision, language modeling, user prediction and translation – that once seemed possible only in the realm of science fiction. Google's researchers are even using machines to master classic computer games, like Breakout.

Building Digital 'Brains'

That work is built on creating neural networks modeled on the human brain. But only roughly. Today's digital brains resemble human ones no more than airplane wings resemble the wings of the birds that inspired them. "We're not trying to simulate the brain at a very deep chemical transmitter level; we're taking very high-level abstractions," Dean said.

Like biological brains, these new digital brains rely on sophisticated algorithms to teach machines to perform complex tasks from scratch, just as a child learns to identify different kinds of balls by being shown many examples. It may sound simple, but training a computer to learn how to do these tasks saves vast amounts of time. "One of the things we care about is reducing human engineering efforts," Dean said. "We prefer a deep learning algorithm where the algorithms themselves build up higher levels of abstraction automatically."

Once trained, these models can be embedded into real-world applications.
Since 2012, for example, Google's Android smartphone software has used deep learning-based predictive speech recognition. The system relies on software built into Android Jelly Bean as well as Google's powerful servers. Google is now using deep learning in more than 50 production applications, Dean said.

Google is ideally positioned to push deep learning forward. Its search business gives it access to a vast sea of data, in the form of text and images. And the vast distributed computing infrastructure it has built around this business gives it the ability to crunch data in a hurry. Now, it's adding GPUs to this infrastructure, giving it the ability to train neural networks to tackle a vast variety of tasks. The parallel computing capabilities built into GPUs – which are designed to perform vast numbers of calculations at once – allow Google's engineers to train systems fast. That lets Google use these systems to do work that wasn't possible for computers just a few years ago – like identifying house addresses, classifying photos and transcribing speech.

"One of the functions of these models that's incredibly powerful is they can take input in one modality and transform it to another," Dean said. "Like take pixels and transform them into text."

Playing Games

The killer demo, of course, involves video games. Dean described the work of a group of colleagues in London who built a deep learning system, set it loose on 50 classic Atari video games and told it to maximize its score. While the machine struggled at first, after hundreds of games it showed superhuman capabilities. It tore through alien hordes in Space Invaders and slalomed expertly through the curves of Enduro. "I think it's time to call the ref," Dean said as he showed a video of Google's deep learning system pummeling a hapless opponent in video boxing.

The post How Google Uses GPUs to Revolutionize Speech, Video, Image Recognition appeared first on The Official NVIDIA Blog.
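The "set it loose and tell it to maximize its score" idea can be shown in miniature. The sketch below is tabular Q-learning on a toy one-dimensional "game" – nothing like the scale of the deep networks described above, but the learning signal is the same: the agent is told only its score, never how to play, and like the Atari system it flails at first and then converges on the winning strategy.

```python
import random

# Tabular Q-learning on a toy game: states 0..5, reach state 5 to score.
random.seed(0)
N = 6                          # states 0..5; state 5 is the goal
ACTIONS = (-1, +1)             # move left or move right
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def greedy(s):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore
        a = random.randrange(2) if random.random() < eps else greedy(s)
        s2 = min(N - 1, max(0, s + ACTIONS[a]))
        reward = 1.0 if s2 == N - 1 else 0.0      # score only at the goal
        # Q-learning update toward reward + discounted future value
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy marches straight to the goal (+1 moves).
policy = [ACTIONS[greedy(s)] for s in range(N - 1)]
print("learned policy:", policy)
```

The Atari work replaces this small table with a deep neural network that maps raw screen pixels to action values – which is where the GPU training horsepower Dean describes comes in.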