Art Silicon Valley/San Francisco (Art SV/SF) is Art Miami’s International Contemporary and Modern Art Fair on the West Coast. Cascade’s turbulent flow visualizations will be part of the installation of Bay Area artist Mel Day and featured in the exhibition CODE AND NOISE, curated by Christine Duval. The exhibition presents eleven artists from Chicago, New York, the Bay Area, China, and Japan who use, create, or build on software to produce works that are engaging and stimulating, and that invite reflection on current issues such as the environment, memory, art history, data collection, and surveillance.
Cascade Technologies, Inc. and GE (NYSE:GE) have announced a multi-year joint development agreement focused on gas turbine combustion.
“The global energy industry looks to GE as a leader in high efficiency, with current HA gas turbines designed to deliver more than 61 percent combined cycle efficiency. The enhanced simulation and visualization capabilities enabled by our collaboration with Cascade can help us deliver even higher efficiency and lower emissions in the next generation of gas turbines,” said John Lammas, vice president, power generation engineering at GE Power and Water. “Together, we’re working to deliver better products, faster.”
Read the full story here.
A new milestone in high performance computing was reached late Tuesday evening (1/22/13) when Stanford researcher and Cascade consultant Dr. Joe Nichols ran the CharLES solver on more than 1 million processor cores. This breakthrough happened during “Early Science” testing of the newly installed Sequoia supercomputer at the Lawrence Livermore National Laboratory (LLNL). The Sequoia IBM Bluegene/Q system is currently ranked No. 2 on the list of the world’s most powerful supercomputers, boasting 1,572,864 compute cores and 1.6 petabytes of memory connected by a high-speed five-dimensional torus interconnect. A CFD simulation taxes all parts of a supercomputer because waves propagating throughout the tightly coupled simulation require a well-orchestrated balance between computation, memory, communication, and I/O.
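For a sense of why a tightly coupled simulation exercises computation and communication in lockstep, here is a minimal sketch (not Cascade’s CharLES code) of a 1-D wave update in which each MPI rank owns a slab of the domain and must exchange a halo cell with its neighbors every time step. The mesh size, wave speed, and CFL number are illustrative placeholders.

```python
# Minimal halo-exchange sketch with mpi4py; illustrative only, not CharLES.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                       # cells owned by this rank (illustrative)
u = np.zeros(n_local + 2)            # +2 ghost cells holding neighbor data
if rank == 0:
    u[1] = 1.0                       # initial disturbance on the first rank

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

c, cfl = 1.0, 0.5                    # wave speed and CFL number (illustrative)
for step in range(100):
    # Communication: every step, each rank trades boundary cells with neighbors.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Computation: simple first-order upwind advection of the local slab.
    u[1:-1] -= c * cfl * (u[1:-1] - u[0:-2])
```

Because the ghost-cell exchange sits inside every time step, neither the network nor the floating-point units can be slow without stalling the whole run, which is why balance across the machine matters.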
At the one-million-core level, previously innocuous parts of simulation codes may suddenly become bottlenecks, and massive parallelism through all aspects of the software architecture is critical. Joe and other researchers from Stanford’s PSAAP program and LLNL computing staff have been working closely together for a few weeks to prepare for this unprecedented opportunity. So, together, they were glued to their terminals (more than usual) Tuesday afternoon and into the evening during the first “full-system scaling” window of the early science testing period to see whether the initial extreme-scale science runs would achieve stable run-time performance after startup. As the first CFD simulation passed successfully through its initialization phase, all were thrilled to see the code performance continue to scale all the way to and beyond one million cores. This means the time-to-solution continued to decrease, enabling more science to be done with ever faster turnaround times. These early science runs represent at least an order-of-magnitude increase in computational power over the largest runs performed using CharLES to date, enabling unprecedented fidelity and dramatically reducing time to solution.
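As a rough illustration of why previously innocuous serial or poorly scaling pieces of a code come to dominate at a million cores, here is a short Amdahl’s-law sketch; the serial fractions below are illustrative assumptions, not measured properties of CharLES.

```python
# Amdahl's law: ideal speedup when a fraction of the work cannot be parallelized.
def amdahl_speedup(serial_fraction, cores):
    """Speedup on 'cores' processors if 'serial_fraction' of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for f in (1e-3, 1e-5, 1e-7):          # illustrative serial fractions
    for p in (10_000, 100_000, 1_000_000):
        print(f"serial fraction {f:.0e}, {p:>9,} cores: "
              f"speedup ~ {amdahl_speedup(f, p):,.0f}x")
```

Even a 0.1 percent serial fraction caps the speedup near 1,000x regardless of core count, so code paths that were negligible at thousands of cores must also be parallelized before a million-core run can keep scaling.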
Atomization of a liquid jet in a turbulent flow using Cascade’s unstructured Volume-of-Fluid solver. Saied could not resist adding the music. Thanks for that. This simulation and others involving multiphase flows are described in detail in this ILASS 2014 conference paper.