Add the ability to manipulate time to the list of advances made possible by GPUs. Cinnafilm, a small engineering-driven firm in Albuquerque, N.M., sells a package of hardware and software powered by NVIDIA Tesla GPUs that allows video to be shortened or lengthened by as much as 10 percent, in real time, without the need for any editing. "It completely re-samples video to a precise length without changing anything," Lance Maurer, Cinnafilm's CEO and founder, told a roomful of attendees during a session at the GPU Technology Conference.
GPU-powered technology helps broadcasters fit quality content, like "Dexter," into a limited amount of time.

This is no mere fast-forward button. Play video faster, even just a little, and actors' voices rise in pitch. Motion gets blurred. And closed captioning flashes by too fast to read. Without a way to subtly tweak video files to compensate, viewers will complain.

The solution, dubbed Tachyon Wormhole — in a nod to Maurer's background as an astrophysicist — is helping TV broadcasters eke out additional revenue by shortening shows, making room for more advertisements. Similarly, if a broadcaster has to air a 23-minute video in a 22-minute slot, the discrepancy can be addressed lickety-split. Hit shows such as Showtime's "Dexter" and Comedy Central's "South Park" have been re-timed with Tachyon Wormhole, which was awarded "Best of Show" at last year's National Association of Broadcasters conference. It's a joint project between Cinnafilm and Wohler Technologies, which manages the audio aspect on CPUs and is Tachyon Wormhole's exclusive reseller.

Maurer told the audience that the explosive growth of video has spurred the development of new tools. "The quantity of video being uploaded today is mind-boggling," he said. "To catch up and guarantee quality, you have to create a system that can catch, create and deliver automatically."

Users of Tachyon Wormhole set parameters for how they want to adjust the length of a video, drop it into a "watch folder," and from there the system runs the needed computations and spits out the re-timed version.

Maurer demonstrated Tachyon Wormhole's capabilities by showing original and re-timed versions of a video segment side by side. The changes were essentially imperceptible — that is, aside from the fact that one finished first.

Not that there aren't flaws. Maurer said most people start to notice the alterations once a video has been shortened by 8 or 9 percent. "The science does break down at some point," he said. Such flaws typically rear their heads at specific moments. For instance, if a video depicts a rapid-fire gun battle, the sound of the gunshots may start blending together. Or a kissing scene may seem obviously rushed. To combat that, Tachyon Wormhole lets production teams specify, when defining the parameters, that such segments shouldn't be shortened.

Maurer credits GPUs with making Tachyon Wormhole possible by greatly reducing the cost, time and power required to make it work. "If you can leverage GPUs, your life gets 10 times easier," he said. "I've been using GPUs for 10 years now, and I bet on the right horse."
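The article doesn't detail Tachyon Wormhole's internals, but the watch-folder workflow it describes is a common automation pattern. Here's a minimal sketch in Python, assuming hypothetical folder names and a placeholder retime() function standing in for the actual GPU re-sampling:

```python
import shutil
import time
from pathlib import Path

WATCH_DIR = Path("watch_folder")   # hypothetical intake folder
OUTPUT_DIR = Path("retimed")       # hypothetical output folder
TARGET_SCALE = 22 / 23             # e.g., fit a 23-minute show into a 22-minute slot

def retime(src: Path, dst: Path, scale: float) -> None:
    """Stand-in for the GPU re-timing step: a real implementation would
    resample audio and video to scale * original duration. Here we copy."""
    print(f"re-timing {src.name} to {scale:.3f}x of its original length")
    shutil.copy(src, dst)

def watch() -> None:
    WATCH_DIR.mkdir(exist_ok=True)
    OUTPUT_DIR.mkdir(exist_ok=True)
    seen = set()
    while True:
        for clip in WATCH_DIR.glob("*.mov"):   # pick up newly dropped clips
            if clip.name in seen:
                continue
            retime(clip, OUTPUT_DIR / clip.name, TARGET_SCALE)
            seen.add(clip.name)
        time.sleep(5)   # poll the folder every few seconds

if __name__ == "__main__":
    watch()
```

In a real deployment, the parameters Maurer describes (target length, protected segments) would ride alongside each clip, and retime() would invoke the GPU pipeline rather than copying the file.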
Researchers from Harvard University snagged the fourth annual Achievement Award for NVIDIA Centers of Excellence, recognizing their work using GPUs to study extended excitonic systems and vibrational-excitonic effects.
Harvard's Nicolas Sawaya snagged our fourth annual Achievement Award for NVIDIA Centers of Excellence. Separately, Esteban Walter Gonzalez Clua, a researcher from Brazil's Universidade Federal Fluminense, has been named a CUDA Fellow.

The Harvard team, led by Nicolas Sawaya, received the top award for work on how light is captured during photosynthesis, and how this knowledge might be used to design better photovoltaics and light-emitting diodes. Additionally, three other finalists from top universities were selected by a panel of experts from among our 22 CUDA Centers of Excellence. The winning team will receive our new NVIDIA DIGITS DevBox, a plug-in appliance for deep learning that's equipped with our NVIDIA DIGITS deep-learning software and four TITAN X GPUs. Other finalists will receive a TITAN X, the world's fastest GPU.
Finalists took home the NVIDIA GeForce GTX Titan X. In addition to the Harvard team, the finalists include:
Tokyo Tech, team led by Hitoshi Sato, for work on big data processing on GPU-based supercomputers.
Technische Universität Dresden, team led by Axel Huebl, for work on the OpenACC profiling interface.
Universidade Federal Fluminense, team led by Esteban Clua, for their CUDA education and evangelism.
New CUDA Fellow Announced
As a CUDA Fellow, Clua will help lead the use and adoption of CUDA, and continue to spread the word about GPU computing. He is an associate professor and vice-director of the Computer Science Institute of Universidade Federal Fluminense in Rio de Janeiro. He has served as a visiting professor at 10 universities worldwide. Having worked with GPUs since they were still simple raster devices, and with CUDA since its launch, he is among the most published Brazilian researchers in GPU computing, with more than 100 full papers in journals and conferences. His work focuses on complex data structures for GPUs, GPU clusters and grid architectures. He is co-founder of SBGames – the Brazilian Symposium of Digital Entertainment and Video Games – which is the largest such conference in Latin America. He's also president of the Brazilian Computing Society's Game Committee.
Rivals face off against one another, bouncing back and forth to stay loose. "3 … 2 … 1 … fight!" Punches get thrown. Fast kicks delivered. Combos are lethal, leaving you standing over your downed foe. It's another day at work for NVIDIA's Eduardo Perez-Frangie, a Silicon Valley-based engineer and pro gamer.

Gaming's come a long way from Pong, Pac-Man and Tetris played in the local arcade with the loose change in your pocket. It's gotten big. Stadium-size big—where gaming teams attack and counter-attack, watched by thousands of roaring fans. And millions of dollars are at stake.

"Street Fighter" has also evolved since its release almost 30 years ago. The latest iteration, published last year, is one of hundreds of games played by pro gamers, often in teams with attention-grabbing names. And this is where Eduardo's story picks up. He works on software quality assurance for our mobile business during the week. But he spends weekends as PR Balrog, a championship-level pro gamer for the Evil Geniuses team.

Eduardo's already had a big year, notching up serial wins. He kicked off 2015 by ranking fourth in the Canada Cup Masters Series, in Calgary. "Gaming is like education, it expands the brain," he said. "When you compete, endurance—both mental and physical—is the most important thing. A tournament can run for 16 hours and you need to keep your focus."
Eduardo Perez-Frangie playing as PR Balrog for the Evil Geniuses team.

He hit Atlanta this month for Final Round 2015. Later this year, he'll fly to South Korea for another tournament. He expects to play in up to 10 championships in 2015. Last year, Eduardo came in first in the Northern California Regionals and ranked fifth in the Capcom Cup, competing against players worldwide. He travels as far as London and Japan to compete in tournaments, which draw live audiences in the thousands. Tens of thousands more watch pro gamers smite each other on the video streaming service Twitch.tv.

First Gaming Rivals to Beat—Your Brothers

Eduardo's love of competition began early. Coming from a technology-loving family, he was playing against his brothers in Puerto Rico by the time he was four. He played in his first gaming tournament at 13. He keeps his edge by working out at the gym and playing a lot of basketball. "You have to stay fit to game," he said. "People see a gamer sitting down for hours, so I like to go to the gym every day."

While Eduardo enjoys strategy, his strength lies in action-packed games such as Street Fighter that rely on physical agility and mental dexterity. In the games he plays, he's part of a six-person team, where "you have to strategize to win," he said. While traveling to games, he's an ambassador for the Evil Geniuses, and says he marvels at how gamers are treated like celebrities in Japan and South Korea.

He's a natural risk taker, having moved three years ago to California in the hope that he could turn his gaming skills into something more. Like a developer at a Silicon Valley startup, he lived in a so-called gamers' house with five others. He joined Evil Geniuses after an introduction by pro gamer pal Justin Wong. The team pays a stipend and transportation costs. Gamers get to keep their winnings, with Eduardo taking home $7,000 after one successful tournament.

One perk of traveling to tournaments: meeting game developers. After friendly introductions, Eduardo knows he's looking for the flaws and weaknesses in the game so he can win. With a father who was "a computer geek," Eduardo had a computer very young and discovered early that he "loves technology and loves to break things." This love morphed quickly into figuring out how software works, and how it doesn't, which paved his path to a role at NVIDIA. "What I do now at work is testing software to find bugs, stress a device and find the faults," he said. "And then this translates into everything that I do in gaming."
Among the greatest concerns for soldiers in conflict areas: hidden improvised explosive devices (IEDs), and the harrowing risks they pose. It's the mission of CertaSIM, a northern California startup, to help protect them, said its founder Wayne Mindle, who spoke at our recent GPU Technology Conference. CertaSIM uses GPU technology to develop simulation programs that help design blast-proof vehicles, such as Humvees and Joint Light Tactical Vehicles deployed by the U.S. Army. GPUs are well suited to computing the physics of discrete particle interaction – the way dirt, rocks and shrapnel move when a blast occurs. The solver technology is called the discrete particle method (DPM), a strong predictive tool for simulating blasts.
CertaSIM uses a Tesla K40 to speed up its sophisticated simulations.

One particular area that CertaSIM models is blast shields, the vital girding that protects the underside of military vehicles from IEDs, which are often hidden and thus hard to predict. When IEDs are buried, most of their damage comes from the soil and rocks that erupt around a vehicle in an explosion, Mindle said. "It's the soil around that causes impact from a buried mine and if it's wet, it's a bigger weight in a blast," he said. Mindle's team has developed different DPM equations for wet and dry soil. "With GPUs this is easy," Mindle said. "Combine DPM with parallel processing of the GPU and you've got an efficient and cost-effective tool."

Simulations are run repeatedly, for as long as 96 hours, on vehicle hulls based on the structures of military vehicles and holding Hybrid-III dummies, the kind used in auto crash tests. But using a Tesla K40 GPU, Mindle says, these same simulations can be run in a fraction of the time, speeding up the development of better and safer armored vehicles.
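To make the discrete-particle idea concrete, here's a toy sketch under drastically simplified assumptions: independent soil particles launched by a blast impulse and stepped forward under gravity. CertaSIM's actual DPM solver also models particle-particle contact, soil moisture and interaction with the vehicle structure, none of which appears here:

```python
import numpy as np

# Toy illustration of the discrete-particle idea: soil is modeled as many
# independent particles thrown upward by a blast, then integrated under
# gravity. A production DPM solver also handles contact between particles
# and with the vehicle hull; this sketch omits all of that.

rng = np.random.default_rng(0)
n = 100_000                                  # number of soil particles
pos = rng.uniform(-1.0, 1.0, size=(n, 3))    # initial positions (m)
pos[:, 2] = 0.0                              # start at ground level
speed = rng.uniform(50.0, 150.0, size=n)     # blast-driven launch speeds (m/s)
theta = rng.uniform(0.0, np.pi / 3, size=n)  # launch angle from vertical
phi = rng.uniform(0.0, 2 * np.pi, size=n)    # launch direction around vertical
vel = np.stack([speed * np.sin(theta) * np.cos(phi),
                speed * np.sin(theta) * np.sin(phi),
                speed * np.cos(theta)], axis=1)
g = np.array([0.0, 0.0, -9.81])              # gravity (m/s^2)

dt = 1e-4
for _ in range(1000):                        # simulate 0.1 s of flight
    vel += g * dt                            # explicit Euler integration
    pos += vel * dt

print("max particle height so far:", pos[:, 2].max(), "m")
```

The appeal for GPUs is plain from the loop: every particle's update is independent, so the arithmetic spreads naturally across thousands of GPU cores.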
The race is on to understand how cell mutation causes cancer, which kills millions worldwide each year and is the second leading cause of death in the U.S. Setting the pace is Rommie Amaro, associate professor at the University of California, San Diego, whose research on how to interrupt the mutation process is aided by high-performance computers.

Amaro is focused on the anti-cancer drug discovery pipeline, using advanced molecular dynamics simulations powered by GPUs. The work recently earned her a $200,000 grant as part of the NVIDIA Foundation's Compute the Cure initiative. "Enormous gains in computing power are enabling a new framework for drug discovery," Amaro said in a presentation she gave at our GPU Technology Conference this week.

Recent work involves using computer simulations to capture various shapes of a tumor suppressor, the protein called p53. It's known as "the guardian of the genome" because it's a key regulator of cell growth and development in normal cells. In its healthy form, p53 helps with cell repair or triggers cell death if the damage is too great. When p53 doesn't function properly, or mutates, it allows cancer cells that it would normally attack to escape. "P53 is a protective cell, usually," Amaro said. "But in more than 50 percent of human cancers, if this cell is damaged, tumors grow."

Computer simulations capture not just how proteins are built, but how they function inside the body. The simulations run on GPU-powered computers revealed a new "binding site" that may help cancer researchers create new drugs to "reactivate" p53 when it mutates and doesn't do its job. The computational approach led to the discovery of "p53 reactivation compounds in six months, compared to all the research efforts of the previous 20 years combined," she said. "This is a great example of the power of the model."

By building shareable computer-aided drug discovery workflows, her team can create recipes that other researchers can follow to find new drug targets or reproduce the work done by their peers. The goal: to accelerate the discovery of new and safer medicines to fight cancer. "Cancer is a big complex disease and it'll take as many people as possible to come up with cures," Amaro said.
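At its core, molecular dynamics means stepping Newton's equations of motion forward in tiny time increments. Here's a minimal sketch of the standard velocity Verlet integrator, assuming a toy system of two atoms joined by a harmonic bond; Amaro's production simulations use full protein force fields, solvent and GPU acceleration, none of which is shown:

```python
import numpy as np

# A minimal velocity Verlet integrator for two atoms joined by a harmonic
# bond -- the core time-stepping loop of any molecular dynamics code,
# though real protein simulations use far richer force fields.

k, r0 = 100.0, 1.0            # bond stiffness and rest length (toy units)
mass = 1.0
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])   # start slightly stretched
vel = np.zeros_like(pos)

def forces(p):
    d = p[1] - p[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r   # Hooke's law restoring force along the bond
    return np.array([-f, f])    # equal and opposite forces on the two atoms

dt = 0.001
f = forces(pos)
for _ in range(10_000):
    vel += 0.5 * f / mass * dt          # half kick
    pos += vel * dt                     # drift
    f = forces(pos)                     # recompute forces at new positions
    vel += 0.5 * f / mass * dt          # half kick

print("bond length:", np.linalg.norm(pos[1] - pos[0]))  # oscillates around 1.0
```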
Search and rescue ain't what it used to be. Gone are the days of rescue teams and their dogs heading into dangerous situations not knowing what they're going to face. Technology has transformed the art of rescue into a science. One key advance has been the use of robotic devices, which do everything from evaluating surroundings to assessing the situation of those who need help. But as Pawel Musialik, a programmer and researcher at Poland's Institute of Mathematical Machines (IMM), told attendees during a session at the GPU Technology Conference, getting the most out of these robots takes planning. "We want to provide tools for rescue teams to get the best use of unmanned platforms," Musialik said. "They're not experts in software development."
Pawel Musialik talks about how GPUs help rescue robots operate more effectively.

IMM is one of a handful of entities that comprise the Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS) project. Formed after the 2011 earthquake and tsunami in Japan, ICARUS is a joint research effort spearheaded by the European Commission to make the use of robots more practical during search-and-rescue efforts.

Musialik and IMM have been working on systems that will help search-and-rescue teams direct ground and aerial robots with less pre-mission preparation. That means enabling robots to categorize classes of objects (buildings or vegetation, say), understand the relationships between those objects (overlapping or adjacent) and then operate based on rules to make determinations such as whether a situation is unsafe.

The hardware IMM is using takes advantage of NVIDIA GPUs and the CUDA parallel processing architecture. Rugged computers equipped with two NVIDIA GRID K2 cards are combined with GeForce GTX-powered laptops. Pulling data from sources such as geographical information systems and ground and aerial point clouds, IMM has established models that help instruct robots in real time. That information, combined with detailed graphical visualizations, is creating more informed rescue robots, Musialik said. "We couldn't do point classifications with CPUs," he said. For instance, Musialik showed an example of a CPU-generated image in which the software couldn't distinguish between a monument and surrounding vegetation. Once a GPU was added to the equation, the monument was clearly identified. With GPUs, they can feed the robots increasingly granular data.

The moral: if you ever find yourself trapped in a crumbled building or deep ravine, worry not. GPU-powered robots may be on their way.
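The article doesn't describe IMM's classifier, so here's a deliberately crude sketch of what point classification can mean, with made-up thresholds: bin a 3D point cloud into grid cells and label each cell by the spread of point heights. The real ICARUS pipeline is far more sophisticated, and it's exactly this kind of per-point work that parallelizes well on GPUs:

```python
import numpy as np

# A crude sketch of point-cloud classification: bin points into a 2D grid
# and label each cell by the spread of point heights. Flat cells read as
# ground, tall scattered cells as vegetation, tall uniform cells as
# structure. The thresholds here are invented for illustration.

def classify_cells(points, cell=1.0):
    ij = np.floor(points[:, :2] / cell).astype(int)   # grid index per point
    labels = {}
    for key in {tuple(k) for k in ij}:
        mask = (ij[:, 0] == key[0]) & (ij[:, 1] == key[1])
        z = points[mask, 2]
        if z.max() - z.min() < 0.3:
            labels[key] = "ground"
        elif z.std() > 1.0:
            labels[key] = "vegetation"   # foliage scatters returns in height
        else:
            labels[key] = "building"     # tall but vertically consistent
    return labels

# Synthetic cloud: a flat ground patch plus one tall, noisy column
pts = np.vstack([np.column_stack([np.random.rand(500) * 5,
                                  np.random.rand(500) * 5,
                                  np.random.rand(500) * 0.1]),
                 np.column_stack([np.random.rand(200) * 0.5,
                                  np.random.rand(200) * 0.5 + 6,
                                  np.random.rand(200) * 8])])
print(classify_cells(pts))
```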
Six promising startups walked away with cash and prizes worth more than $650,000 at NVIDIA's eighth annual Emerging Companies Summit. More than 50 startups participated in the competition, selected from a field of more than 150 applicants from 30 countries. The event is a regular highlight of NVIDIA's GPU Technology Conference, in Silicon Valley, which drew more than 4,000 attendees. This year's award winners included:
Ersatz Labs – Makes deep learning accessible through a web-based user interface and API. (U.S. based.)
QM Scientific – Makes shopping easier by showing the best places to shop based on price, quality or location. (U.S. based.)
Viontech – Creates embedded systems for transport, video surveillance and business intelligence. (China based.)
Herta Security – Facial recognition software for surveillance and security. (Spain based.)
Clarifai – Image-recognition software for deep learning. (U.S. based.)
Each received $15,000 in legal services from Cooley, $60,000 worth of Microsoft's Windows Azure for BizSpark Plus, an NVIDIA Tesla K40 GPU worth $5,000, and a trophy. But perhaps more valuable is the prestige and visibility that comes from being recognized as one of the world's most innovative startups. A sixth startup, Artomatix, won the $100,000 Early Stage Challenge at the event for the most promising young startup. The Dublin, Ireland-based company automates artwork creation for video games.
Andrew Ng doesn't think robots will kill us. But they might take our jobs.

"Maybe in hundreds of years, technology will advance to a point where there could be a chance of evil killer robots," said Ng, a leading machine learning researcher and chief scientist at Baidu, in his keynote speech at the GPU Technology Conference Thursday. "But I don't work on preventing artificial intelligence from going evil for the same reason I don't work on solving the problem of overpopulation on the planet Mars," he said.

Recent breakthroughs have given machines uncanny new abilities, ones that will have a huge impact in the near future. But they've also raised concerns. Thinkers ranging from physicist Stephen Hawking to Microsoft co-founder Bill Gates and Tesla Motors CEO Elon Musk have warned that the emergence of machine intelligence poses a threat to humanity. Ng, who was named by Time magazine as one of the world's 100 most influential people, doesn't see his work posing a threat anytime soon. "Rather than being distracted by evil killer robots, the challenge to labor caused by these machines is a conversation that academia and industry and government should have," he said.

Ng compared the explosion in machine learning—driven by vast pools of data and powerful GPUs—to a rocket. Faster computers—powered by GPUs—provide the rocket engine. Vast sums of data provide the fuel. Pumping that fuel through ever more powerful engines results in a rocket that can take researchers farther, faster.
Rocket Ride: GPUs are one of the factors propelling machine learning forward, Baidu Chief Scientist Andrew Ng said.

The result is machines that are already able to perform some tasks better than humans, like identifying scenes in photographs. "We think that Baidu and other organizations are well beyond what humans are able to achieve on tasks that humans are really good at," Ng said.

Ng is leading Silicon Valley research efforts at Baidu and continues to teach computer science at Stanford University. Prior to Baidu, he led the Google Brain project. He's among a handful of researchers who have used GPUs to spark a renaissance in deep learning that has given computers the ability to do things—like recognize images and translate speech—that seemed impossible just a few years ago.

While deep learning is complicated stuff, Ng provided a simple explanation for how it works—and why so many breakthroughs have happened in the past few years. One factor: ubiquitous electronic devices are generating vast pools of user-generated data—images, speech, video—that can be crunched by researchers. The other factor: more sophisticated deep learning networks, supported by ever more powerful processors. These networks are very loosely modeled on the human brain—which, Ng pointed out, we know very little about—and are arranged into layers that categorize the information streamed through them by researchers. GPUs play a key role here by helping researchers feed this data to their models more quickly, giving them the ability to try out new network architectures faster and build systems that can sort everything from speech to video quickly.

Researchers are still trying to understand how this technology can be applied. But Ng sees deep learning sparking breakthroughs in a wide range of industries, from medical imaging to transportation. "I hope you will have grandchildren who will come to you and ask, 'Is it really true that when you were young and you came home and said something to your microwave that it just would sit there?'"

Meanwhile, smarter machines can help us with challenges people face in the present. Ng compared mastery of machine learning to a superpower, one that's already helping people like Li Chongyang, a blind man who uses speech recognition to listen to music and place phone calls on his smartphone. "You have superpowers," Ng told his GTC audience. "I hope you all go home and use your superpowers to create the greatest possible good for humanity."
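Ng's layered-network explanation can be made concrete in a few lines of code. Here's a minimal sketch, with made-up layer sizes, of the forward pass through a two-layer network; training, which is where GPUs matter most, would adjust W1 and W2 from labeled examples and is omitted:

```python
import numpy as np

# A minimal two-layer "deep" network: each layer is a matrix multiply
# followed by a nonlinearity, and stacking layers builds up progressively
# more abstract features. This sketch shows only the forward pass; the
# weights below are random, standing in for trained values.

rng = np.random.default_rng(42)
W1 = rng.normal(size=(784, 128)) * 0.01   # e.g., a 28x28 image -> 128 features
W2 = rng.normal(size=(128, 10)) * 0.01    # 128 features -> 10 class scores

def relu(x):
    return np.maximum(x, 0.0)             # simple nonlinearity between layers

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()                     # turn scores into probabilities

def forward(image):
    hidden = relu(image @ W1)      # layer 1: low-level features
    scores = hidden @ W2           # layer 2: evidence for each class
    return softmax(scores)

print(forward(rng.random(784)))    # a random "image" -> 10 class probabilities
```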
With diversity in high-tech one of the day's hottest issues, this week's GPU Technology Conference highlights how some women are beating the odds. Despite being highly under-represented in the field, they're showcasing breakthrough work in GPU computing during sessions on topics including cancer research, video technologies and image recognition.

Nearly 100 female researchers, professors and engineers came together at the Women@GTC event Wednesday at our GPU Technology Conference to discuss women who innovate and how to help inspire female students to pursue careers in technology. Among the attendees was Rommie Amaro, associate professor at the University of California, San Diego, who is leading research on how to interrupt the mutation process in cells that causes cancer, aided by high-performance computers. The work, presented by Amaro at GTC, is focused on the anti-cancer drug discovery pipeline, using advanced molecular dynamics simulations powered by GPUs. It recently earned her a $200,000 grant as part of NVIDIA's Compute the Cure award.

GPUs are also helping keep us traveling safely. Fanny Nina-Paravecino, a Ph.D. candidate and research assistant in computer engineering at Northeastern University, is using Hyper-Q for real-time image segmentation for luggage scanning at airports. Other GTC presenters include Chen Sagiv, CEO of SagivTech Ltd., which deploys multiple video sources to create high-quality 3D video scenes that can be shared via social networks. Deborah Bard from SLAC National Accelerator Laboratory uses desktop GPUs to study the structure and evolution of the universe. And Ying Liu, an associate professor at the University of Chinese Academy of Sciences, uses GPUs to accelerate collaborative filtering algorithms.

NVIDIA CEO Jen-Hsun Huang participated in the event, and shared his science-driven way of thinking about how to add value to the company with more inclusive practices. "Believing in something isn't enough, you have to have a system in place to make it happen," he said. One effort beyond recruitment is a drive to retain women in the technology sector. Showing the world what inspires those working in technology about their jobs is perhaps the best way to find the best candidates, women or men, Huang said. In the end, it's the product that counts, because when you see a line of code, you don't know who wrote it, he added.

"Science is a ticket to the world. It's a common language, like music," said Women@GTC panelist Pinar Muyan-Ozcelik, assistant professor of computer science at Sacramento State University. Panelist Fernanda Foertter, an HPC user support specialist at Oak Ridge National Laboratory, noted that "one way to increase different points of view is to be inclusive of different points of view." An inclusive work environment in technology means creating and supporting a network that encourages hiring individuals from broad backgrounds.

Panelists suggested several key factors for creating an inclusive environment: ensuring work environments that are family-friendly, not just women-friendly; projecting female role models for others to see; and using appropriate language when recruiting — avoiding "hacker," for example, which few women identify with, and replacing it with "problem-solver." Panelist Lorena Barba, associate professor of mechanical and aerospace engineering at George Washington University, echoed these sentiments, noting that women in science must think of themselves as leaders, because "a leader can imagine the future."
You know something's awry when your building starts melting nearby cars. London's year-old 20 Fenchurch Street tower is a stunner. But the same curved glass that gives the 37-story tower its nickname, "The Walkie Talkie," also has a knack for concentrating sunlight. The result: a hot spot that melted part of a nearby black Jaguar XJ and cooked shampoo in a local barber shop. It's even been used to fry eggs. Such "death rays" are a growing problem, thanks to a new generation of glass-sheathed buildings with radical computer-designed curves. Those curves reflect – and concentrate – light in ways that have been hard for designers and engineers to predict. Until now. Our demo at NVIDIA's annual GPU Technology Conference, in Silicon Valley, taps into the power of GPUs to show how London's fifth-tallest building came to be called the "Fryscraper."
A new generation of glass-sheathed buildings with radical computer-designed curves has created some unexpected challenges.
And Iray We Go
Rendering – the process of turning a digital model into an image on a screen – isn't new, of course. Nor is ray tracing, which tracks the way beams of light interact with objects in their environment. What's new is how our Iray ray tracing technology takes advantage of GPUs to render detailed models in real time (see "NVIDIA Brings Interactive, Physically Based Rendering to the Mainstream"). The result is revolutionary: rather than relying on technology that takes hours to create a single, static image – a snapshot – designers using Iray can view rich digital images as they work. And they can see how light interacts with their design over long stretches of time – as the sun moves across the sky at different times of the day and year – rather than just a moment or two. NVIDIA is putting these tools within reach of every designer with plugins that will build this capability into the most popular design tools. It's a move that's sure to save time. And, potentially, trouble.
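The building block beneath all of this is the intersection test: given a beam of light, find where (and whether) it hits an object. Here's a minimal sketch of the classic ray-sphere test; a production renderer like Iray traces millions of such rays per frame, with bounces and physically based materials, which is the work farmed out to GPUs:

```python
import numpy as np

# The kernel of any ray tracer: does this ray hit this object, and where?
# Shown here for a sphere, the textbook case.

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the first hit, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None                 # the ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / 2.0  # nearer of the two intersection points
    return t if t > 0 else None     # hit must be in front of the origin

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])          # unit vector: looking down +z
hit = ray_sphere(origin, direction, np.array([0.0, 0.0, 5.0]), 1.0)
print("hit distance:", hit)                    # 4.0: sphere surface at z = 4
```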
Avoiding a Deadlier Death Ray
In fact, we found the Walkie Talkie building's solar glare could have been worse. Alter the building's curves, just a nudge or two, and it could create a beam hot enough to melt lead. Such powerful simulations build on technology we first demonstrated at last year's GTC. We showed, together with Honda, the first interactive visualization of an entire car. Our demo didn't just spin around a digital prototype. We showed how you could section the vehicle and peel off layers to view the innards of the car, right down to the silver Accord's electrical wires and seat springs. Technology like this promises to solve a huge number of common design problems. And some that aren't so common.
Challenges of Modeling Light
Our Iray technology can model light in ways that just weren't practical before.

Take 20 Fenchurch – its glass curves create a spot where the temperature can rise to almost 200 degrees Fahrenheit. Or the Vdara Hotel, just off the Las Vegas Strip – its concave glass facade creates temperatures by the pool hot enough to melt plastic. Or L.A.'s extravagant Walt Disney Concert Hall – it heated up nearby condos, driving residents to draw their shades and run air conditioners. None of this is the work of mad scientists or Bond villains. The structures were created by architects and engineers who lack the tools to predict how their designs will interact with the world around them.

Until now, modeling reflected light has been a time-consuming procedure, usually reserved for presentations of near-final designs. And designers build those presentations around specific lighting conditions. They're snapshots, not simulations.
Introducing Quadro M6000 Graphics Cards
Our new Iray 2015 rendering technology changes that. When paired with our new Quadro M6000 graphics card – the world's most powerful GPU – Iray 2015 models the way light bounces around a scene as design teams tweak their models. And rather than having to wait hours to create photorealistic images that are ready to put in front of a customer, designers can just add more GPUs to create higher-resolution models in an instant. With eight Quadro M6000 GPUs in our just upgraded Quadro Visual Computing Appliance (VCA), the level of interactive photorealism is stunning. Put our VCA in a data center, and design teams can call on its rendering power when and where it's needed. Every NVIDIA Iray product will include the ability to stream rendering from machines running our Iray Server software.
Is it real or is it rendered? We've been teasing our social media followers for months now by posting stunning images and asking them if they can tell the difference between our computer-generated images and real ones.
Real or rendered? If you've been following NVIDIA on social media you know just how tough it can be to tell work created with our technology from the real thing.

Thousands have weighed in. And it's fiendishly difficult. But for designers who build the products we use every day – from the cars we drive to the buildings we live in – it's more than just pretty pictures. It's critical that what they see digitally accurately shows what their design is like in reality. Light, materials and form, all coming together in the intended way.

But to visualize designs properly requires significant technology to calculate exactly how materials interact with light. For instance, whether glare occurs on a car's windshield if the dashboard is made of a certain material and not a slightly different one. To render those designs properly requires physically based rendering, and to make it interactive requires very fast GPUs. Now, we're announcing a multi-product roadmap to bring this capability to millions of designers. It has three main pieces:
Iray 2015 – the latest version of our GPU-accelerated rendering software, with new features to support exchanging materials across design applications, scalability outside of a workstation, and much faster rendering speed.
Quadro M6000 – our most powerful professional GPU, featuring our Maxwell architecture and 12GB of graphics memory to support complex designs.
Quadro Visual Computing Appliance – upgraded with eight M6000 GPUs, this scalable appliance achieves unprecedented speed and visual fidelity, and is specifically tuned to accelerate our Iray software.
All these products will work together to give designers in a vast array of industries power that was – until now – available to just a handful.
A conference room rendered in Iray.

Bringing Interactive, Scalable, Physically Based Rendering to Millions

Throughout 2015, NVIDIA is bringing Iray to several more 3D creation applications, including Autodesk's 3ds Max, Maya and Revit, and McNeel's Rhinoceros. DAZ 3D has also made Iray available to its customers. This means millions of designers will now have access to Iray's capabilities, including Iray Material Definition Language (MDL), which allows physically based materials to be interchangeable across apps, so designers can switch from one tool to another and get consistent results. Iray 2015 supports the latest measurement format from X-Rite, while MDL is being supported by a growing number of companies that let designers create physically based materials, including Allegorithmic and Old Castle. To learn more, please visit us here.
Creating singing volcanoes isn't, perhaps, a common use case for NVIDIA products. But in a packed session at the GPU Technology Conference, attendees learned that GPUs helped Pixar get the details right in animating Uku, the singing volcano in the soon-to-be-released short film Lava. Animating a quarter-mile-tall rock is different than animating people and animals. For instance, Pixar wanted to be sure Uku moved like trembling rock. And when animators depicted the volcano's mouth moving, Presto – Pixar's proprietary GPU-powered animation system – helped the production team determine that Uku’s “cheeks” were moving too much. “We got comments that it looked less like a rock and more like a guy in a rock suit,” said Byron Bashforth, the film’s technical director.
The discovery prevented an unnecessary delay: in the past, the problem might not have been caught until the segment went to rendering. And quite often, Pixar's animators are able to unearth more obvious flaws on their own rather than relying on the shading team to alert them. Presto – which Pixar's engineering lead, Dirk Van Gelder, demoed in a keynote at GTC 2014 – is a powerful application that helps animators see their work with and without shading, by simply using a drop-down menu. While the cheek movement was a subtle and unexpected discovery, Presto users are able to view other cause-and-effect scenarios in real time. For instance, the production team wanted Uku's eyes to close when clouds shadowed his face to emphasize sadness. To achieve this before the addition of realistic cloud shadowing, Van Gelder said his team developed a light blocker that enabled them to simulate shadows closing over Uku in real time. That, in turn, allowed animators to ensure that the character's eyes shut in sync with shadows hitting his face.
It's All in the Eyebrows
The animators also were able to use Presto's real-time capabilities to avoid an issue with Uku's rock-hewn "eyebrows." When animators viewed the raw animation of Uku's moving face without lighting, texture or shading, the eyebrows seemed to move too much. But when shading was added using a drop-down selection, the eyebrows seemed to move too little. In the past, that change would have been made after the rendering process. And that would have caused more delays. "Rendering is getting more and more expensive," said Van Gelder. "The more we can show them in Presto, the more we can hold off rendering until later in the process."
A year-old startup that promises to cut production costs for video games and movies won the second annual $100,000 Early Stage Challenge at NVIDIA's Emerging Companies Summit Wednesday. Artomatix, based in Dublin, Ireland, automates content creation by generating images that would usually need trained artists. Run from a former Guinness brewery, the company uses machine learning and big data analytics to reproduce artwork such as characters and objects, progressively altering them in a realistic fashion. "It's an emerging area that we call machine creativity," Artomatix CEO Eric Risser said in an interview with GamesBeat. He described his company as both "a painkiller and a vitamin" for artists.

Artomatix was one of a dozen promising startups competing for a $100,000 check at the Early Stage Challenge. Other companies, hailing from six countries, focused on medical imaging, deep learning, rendering, pharmaceutical research and automotive technology. Now in its eighth year, the Emerging Companies Summit featured 17 companies presenting onstage before a room full of investors and technology executives in Silicon Valley, competing for a total of $650,000 in prizes. The morning session featured five promising startups involved in robotics, machine learning and advertising.
Talk about cold storage. By next year, it should be possible to send a payload to the moon for a cool $1.2 million a kilogram, thanks to work being done to help finance the race to drive a vehicle on the moon. Astrobotic, a Pittsburgh startup that spun out of Carnegie Mellon University, plans to conduct its first lunar mission with its Griffin Lander in late 2016. It's a plan that's geared to make the company a contender for Google's Lunar XPRIZE competition, which will award a total of $30 million to private teams that can land a robot on the moon; move the robot 500 meters; and send back HDTV mooncasts. The goal of the competition is to help make space travel more affordable. In fact, it's doing exactly that—not for people yet, but for payloads. Astrobotic is selling space on its lander to companies, universities and governments that want equipment delivered to the moon. GPUs are helping to make that happen by enabling the company to more effectively model the journey and ensure its lander arrives safely.
Astrobotic is using GPUs to help it put its lander on the moon.

Astrobotic's sponsored payloads are a handy way to raise a healthy chunk of the $100 million mission cost, Kevin Peterson, the company's chief technology officer, told a full conference room at this year's GPU Technology Conference. "As far as we know, we're the only company that has a configure-your-lunar-mission website," Peterson said. Still, Astrobotic's team isn't in business to be a moon courier. The purpose of its mission is to land and watch its rover do its best to claim the XPRIZE grand prize of $20 million.

As for the role of GPUs, they're used to simulate movements and pressure on the Griffin Lander during launch, helping Peterson and his colleagues determine whether the lander will shake apart or experience excessive high-frequency acceleration. Peterson said GPUs are also helping to simulate landings—a lot of them—by supporting ray tracing of the moon's surface so that the Astrobotic team can ensure it can target a landing area the size of a football field. For comparison, NASA's Apollo missions targeted clear landing areas three miles wide. "We would like to land and leave a million times before we do an actual mission," said Peterson of Astrobotic's desire to land with precision.

By developing that precision landing capability, Astrobotic hopes to break free from the need to land on the moon in places that are flat and safe. Peterson said he and his team want to be able to land the Griffin Lander in lunar pits (formed by ancient lava flows) and on the lips of craters. GPUs will make that possible. "We want to go to more interesting places in the solar system than have been accessed in the past," he told a couple hundred GTC attendees. "Computation is the key to unlocking those locations."

The last communication between the company and the lander will occur during lunar orbit. Once the descent begins, Peterson said, the mission becomes autonomous, with both the lander—which GTC attendees can see firsthand in the exhibit hall—and rover working similarly to the autonomous car technology being developed today.
That’s it, gamers. You’ve been replaced. Google has used a new technology called deep learning to build a machine that has mastered 50 classic Atari video games. And you’ve never seen Space Invaders played like this. Talk about the way it’s meant to be played. Of course, no one is coming for your GeForce GTX 980. But the same GPU technologies that power your video games are being used by Google to do things few thought would now be possible, Google Senior Research Fellow Jeff Dean explained Wednesday in a keynote speech at our annual GPU Technology Conference.
Deep learning gives computers the ability to do things that just a few years ago few thought would soon be possible. Dean is among a core group of engineers at Google who have built a new generation of technologies that have redefined the infrastructure that underpins the Web. Now, Dean and his colleagues are pushing into new domains — speech, vision, language modeling, user prediction, and translation — that once seemed only possible in the realm of science fiction. Google's researchers are even using machines to master classic computer games, like Breakout.
Building Digital ‘Brains’
That work is built on creating neural networks modeled on the human brain. But only roughly. Today's digital brains resemble human ones no more than airplane wings resemble the wings of the birds that inspired them. "We're not trying to simulate the brain at a very deep chemical transmitter level, we're taking very high-level abstractions," Dean said. Like biological brains, these new digital brains rely on sophisticated algorithms to teach machines to perform complex tasks from scratch, just as a child learns to identify different kinds of balls by being shown many examples. It may sound simple, but training a computer to learn these tasks saves vast amounts of time. "One of the things we care about is reducing human engineering efforts," Dean said. "We prefer a deep learning algorithm where the algorithms themselves build up higher levels of abstraction automatically."
Google is using algorithms to take on tasks that would just take too long for human programmers.

Once trained, these models can be embedded into real-world applications. Since 2012, for example, Google's Android smartphone software has used deep learning-based predictive speech recognition. The system relies on software built into Android Jelly Bean, as well as Google's powerful servers. Google is now using deep learning in more than 50 production applications, Dean said.

Google is ideally positioned to push deep learning forward. Its search business gives it access to a vast sea of data, in the form of text and images. And the vast distributed computing infrastructure it has built around this business gives it the ability to crunch data in a hurry. Now, it's adding GPUs to this infrastructure, giving it the ability to train neural networks to tackle a vast variety of tasks in a hurry. The parallel computing capabilities built into GPUs – which are designed to perform vast numbers of tasks at once – allow Google's engineers to train systems fast. That lets Google use these systems to do work that wasn't possible for computers just a few years ago – like identifying house addresses, classifying photos and transcribing speech.
Thousands listened to Google’s Jeff Dean explain how the search giant is using GPUs to accelerate deep learning. "One of the functions of these models that's incredibly powerful is they can take input in one modality and transform it to another," Dean said. "Like take pixels and transform them into text."
Playing Games
The killer demo, of course, involves video games. Dean described the work of a group of colleagues in London who built a deep learning system, set it loose on 50 classic Atari video games and told it to maximize its score. While the machine struggled at first, after hundreds of games it showed superhuman capabilities. It tore through alien hordes in Space Invaders and slalomed expertly through the curves in Enduro. "I think it's time to call the ref," Dean said as he showed a video of Google's deep learning system pummeling a hapless opponent in video boxing.
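The article doesn't name the method, but the London group's published approach was deep Q-learning: learn, from score feedback alone, how valuable each action is in each situation. Here's a minimal tabular sketch of that underlying update rule on a toy five-state corridor; for Atari, a deep convolutional network replaces the table and raw pixels replace the state index:

```python
import random

# Tabular Q-learning on a trivial five-state corridor: the same
# reward-maximizing update that, with a deep network standing in for the
# Q-table, let the Atari agent learn to play from score feedback alone.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                    # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # "score" arrives only at the goal
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy action per state: every state before the goal learns +1 (move right)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```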