The NVIDIA Blog
- Fault Finding: SoCal Researchers Use GPUs to Detect Earthquake Hazards Coming Our Way
- At VMworld, GRID 2.0 Powered “Tower of Power” Drives Billions of Pixels
- Make the Most of Your Mac: New Driver Delivers Big Performance Gains
Fault Finding: SoCal Researchers Use GPUs to Detect Earthquake Hazards Coming Our Way
Posted: 31 Aug 2015 10:06 PM PDT

GPU technology toppled letters of the iconic "Hollywood" sign and lashed the Golden Gate Bridge with a tsunami in this summer's blockbuster San Andreas. But that's the movies.

In real life, researchers at the Southern California Earthquake Center (SCEC) are using GPU-powered high performance computing to develop CyberShake, a complex model that calculates how earthquake waves move through a 3D model of the Earth. This helps produce earthquake forecasts and more accurate hazard assessments.

SCEC's initial target is the real Los Angeles region, where the Pacific and North American tectonic plates grind against each other to create the famed San Andreas Fault, which runs most of the length of California. Their groundbreaking work earlier this year helped SCEC and their collaborators win NVIDIA's inaugural Global Impact Award and its $150,000 prize.

This spring, the team used National Science Foundation and Department of Energy supercomputers — Blue Waters and Titan — to produce the most sophisticated seismic hazard forecast yet for the Southern California region.

Seismic Waves

They performed simulations for 336 separate locations in the region, and doubled the maximum simulated frequency from 0.5 hertz to 1 hertz. As that frequency increases, so does the potential for damage, and the complexity of the simulation. Structures such as buildings and bridges are most vulnerable to damage from seismic waves between 1 and 10 hertz. But the required scientific calculation poses a huge computational challenge. At 1 hertz, the CyberShake calculation for each location required 33X as much computational work as at 0.5 hertz. Thanks to the parallel processing efficiency of GPUs, however, the team needed only 7X as many node hours.

SCEC, located at the University of Southern California, is led by Director Thomas H. Jordan.
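The scaling figures above can be sanity-checked with a quick back-of-the-envelope sketch. Note that the 2^4 "naive" multiplier below is our own rough reasoning, not a figure from SCEC: doubling the maximum frequency roughly halves the required grid spacing in three spatial dimensions and halves the time step, implying about a 16X cost increase before the larger simulation volume and other overheads push the reported figure to 33X.

```python
# Back-of-the-envelope check of the scaling figures quoted above.
# Doubling the maximum simulated frequency from 0.5 Hz to 1 Hz halves the
# grid spacing in all three spatial dimensions and halves the time step,
# so the naive cost multiplier is 2**4 = 16X. SCEC's reported 33X also
# reflects a larger simulation volume and other overheads (our assumption).
naive_multiplier = 2 ** 4          # 3 spatial dimensions + time

cpu_work_multiplier = 33           # reported increase in computational work
gpu_node_hour_multiplier = 7       # reported increase in node hours on GPUs

# Effective per-node gain the GPUs delivered on the harder problem:
gpu_efficiency_gain = cpu_work_multiplier / gpu_node_hour_multiplier
print(f"naive frequency-doubling cost: {naive_multiplier}X")
print(f"GPU efficiency gain: {gpu_efficiency_gain:.1f}X")
```

In other words, by the article's own numbers, the GPU runs absorbed a 33X workload increase for only a 7X increase in node hours — roughly a 4.7X efficiency gain on the harder problem.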
Working with him is Yifeng Cui, director of the High Performance Geocomputing Laboratory at the San Diego Supercomputer Center at the University of California, San Diego. "With more people moving to cities in seismically active regions, economic risks from a devastating earthquake are high and getting higher," said Cui. "GPU capabilities, combined with the high-level GPU programming language CUDA, provide the computing power required for acceleration of numerically intensive 3D simulations."

Hazard Information Maps

The goal is to build more accurate hazard maps from earthquake simulations — the kind supplied by the U.S. Geological Survey, which supports SCEC's work. The maps would aid seismologists and utility companies, in addition to the engineers responsible for building codes.

"The general public wants immediate (short-term) forecasts, but there's no good scientific technique to make predictions yet — it's not like a weather forecaster saying it's going to rain, so you know to take a coat," said Philip Maechling, an associate director at SCEC who collaborated with Cui on the study.

With GPU-powered supercomputing architecture, more complex quake simulations can be run efficiently and quickly. Structures respond in different ways to seismic waves of different frequencies. Skyscrapers and highway overpasses are most at risk during long-period shaking, while smaller buildings are more vulnerable to high-frequency shaking. "We want our information to be applicable to a wider range of buildings," Maechling said.

Engineers will be able to apply these models to other parts of California, and the globe, so one day no one will have to face the devastation in San Andreas outside of the movie theater. NVIDIA invites submissions for the 2016 Global Impact Award through the end of October.

The post Fault Finding: SoCal Researchers Use GPUs to Detect Earthquake Hazards Coming Our Way appeared first on The Official NVIDIA Blog.
At VMworld, GRID 2.0 Powered "Tower of Power" Drives Billions of Pixels
Posted: 31 Aug 2015 05:42 PM PDT

For VMworld this week in San Francisco, we designed and built an enormous "Tower of Power" that serves as the centerpiece of our presence at the show. Supported by our DesignWorks developer suite and state-of-the-art demo engine technology, our 16-foot-tall, four-sided creation showcases how our GRID 2.0 technology can put any application on any device.

The "Tower of Power" — 336 micro-tiles grouped into 56 desk-sized displays — does more than just display a host of advanced, remotely hosted apps. It makes them dance.

The tower is an imposing presence. Each of its four walls is 14 tiles tall and six wide. Lined up end to end, the tower's 336 16-inch by 10-inch rear-projection tiles would stretch 450 feet. That's the length of one and a half football fields.

But despite its size, the images on the tower's screens seem to flit effortlessly across it. Our engineers have figured out ways to send a wave of ripples across these virtual desktops, or twist these apps into a whirling storm of pixels for a virtual 3D tornado — all across the surface of a display generating more than 7.4 billion pixels a second.

Step One: Putting GRID 2.0 to Work

Our goal: to show how NVIDIA GRID 2.0 can accelerate powerful visual computing apps that can be served up to any display. NVIDIA GRID accelerates virtual desktops and applications, giving enterprises the power to deliver powerful graphics to any user, on any device. Even one the size of a building.

As a result, our "Tower of Power" is a machine that connects to real apps, with real capabilities. Even as it was being assembled during a quick two-week sprint, our "Tower of Power" was getting work done.
In fact, Michael Thompson, part of the small team of engineers who helped assemble and test the tower in a loading dock at our Silicon Valley campus, used the huge display to beam into his desktop PC on the other side of campus to update the demo software running the display. "No way we could get this kind of resolution in our cubes," the tall, T-shirt-clad engineer said last week as he updated the code powering it all from a folding table just in front of the half-finished display.

The story behind the story: NVIDIA GRID 2.0. All the apps on the display run on VMware Horizon virtual machines powered by four HP blade servers. Each blade server runs four of our new Tesla M6 GPUs, and each can host 16 virtual machines. That gives us the power to support a total of 64 different virtual desktops.

Step Two: Using Quadro to Put GRID 2.0 on Display

While GRID 2.0 is the engine that makes these apps scream, our Quadro GPUs pour all this content into a remarkable custom display. The four walls of displays are driven by four NVIDIA Quadro GPUs. Another four take the virtual desktops generated by NVIDIA GRID 2.0 and, using our new DesignWorks developer suite, turn them into pixels we can pick up and play with.

The message behind the monolith: if our technology can support 7.4 billion pixels' worth of virtualized desktops, just imagine what it can do for your enterprise.
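The tower's published numbers hang together, as a few lines of arithmetic confirm (a sketch using only the figures quoted above):

```python
# Sanity-check the tower's published numbers.
tiles_per_wall = 14 * 6            # each wall is 14 tiles tall, 6 wide
total_tiles = 4 * tiles_per_wall   # four walls
assert total_tiles == 336          # matches the 336 micro-tiles cited

# Each rear-projection tile is 16 inches wide; laid end to end:
length_feet = total_tiles * 16 / 12
print(f"{length_feet:.0f} feet")   # 448 feet, which the article rounds to 450

# Virtual desktops: 4 HP blade servers, 16 VMs each
total_vms = 4 * 16
assert total_vms == 64             # matches the 64 virtual desktops cited
```

At 16 GPUs across four blades (four Tesla M6 GPUs per blade), that works out to four virtual desktops per GPU in this configuration.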
The post At VMworld, GRID 2.0 Powered "Tower of Power" Drives Billions of Pixels appeared first on The Official NVIDIA Blog.
Make the Most of Your Mac: New Driver Delivers Big Performance Gains
Posted: 31 Aug 2015 12:53 PM PDT

Welcome to the fast track. Our new driver for the Mac Pro offers up to 80 percent improved performance for Macs with Kepler GPUs. And, for the first time, our driver includes beta support for MacBook Pros and iMacs with Kepler GPUs, as well as beta support for those using Maxwell GPUs in older Mac Pro systems.

We lead the industry with our driver support. Just as for Windows and Linux users, our goal for those with Macs is to provide drivers that elicit the best performance our NVIDIA GPUs have to offer. With our new driver, you can enjoy a major performance boost in a host of key apps, like Apple's Final Cut Pro, as well as games, like Tomb Raider, Formula 1 2013 and Batman: Arkham City.

If you're running an older Mac Pro that lets you swap in the latest GPU, you can make the most of our Maxwell architecture. Just update your driver, then add a new Maxwell GPU. Find more info, including a list of supported Macs, at our driver download page.

[1] All tests were run on a MacBook Pro with a 2.8GHz i7 CPU, 16GB RAM, 1TB PCIe SSD, and a GTX 750M 2GB. Baseline graphics driver version 10.10.3. New NVIDIA graphics driver version 346.02.02f03. Tomb Raider tests used the built-in benchmark at 1440×900 with 0x anti-aliasing and quality set to low. Formula 1 2013 tests used the built-in benchmark at 1440×900 with 2x anti-aliasing and default graphics settings. Final Cut Pro X Title Render was tested using a ProRes 422 1440x1080p30 project.

The post Make the Most of Your Mac: New Driver Delivers Big Performance Gains appeared first on The Official NVIDIA Blog.