News Release

New Cornell collaboration to explore GPU computing using MATLAB

Contact: Paul Redfern
Cell: (607) 227-1865

FOR RELEASE: July 19, 2011

ITHACA, N.Y. – The Cornell University Center for Advanced Computing (CAC) today announced that it is testing the performance of general-purpose GPUs with MATLAB applications in a new research collaboration with NVIDIA, Dell, and MathWorks.

This research will explore GPU computing capabilities for data manipulation on NVIDIA GPUs using MATLAB applications. In particular, Cornell will focus on the use of multiple GPUs on the desktop via the MathWorks Parallel Computing Toolbox, and a GPU cluster via MATLAB Distributed Computing Server.
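As a rough illustration (ours, not code from the Cornell project), driving multiple desktop GPUs with the Parallel Computing Toolbox might look like the sketch below, assuming one MATLAB worker per available GPU and using the toolbox's `spmd`/`gpuArray` constructs; the matrix sizes are made up.

```matlab
% Hypothetical sketch: one Parallel Computing Toolbox worker per GPU.
matlabpool('open', gpuDeviceCount());  % open one worker per available GPU
spmd
    gpuDevice(labindex);               % bind each worker to its own GPU
    A = gpuArray(rand(4096));          % push data onto this worker's GPU
    B = A * A';                        % matrix multiply runs on the GPU
    C = gather(B);                     % copy the result back to the host
end
matlabpool('close');
```

Because each worker owns a distinct device, the workers' computations proceed on all GPUs in parallel; `gather` is only needed when results must return to host memory.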

Cornell is conducting this research on Dell C6100 servers with the C410x PCIe expansion chassis, which supports server connections to NVIDIA Tesla M2070 GPUs.

“The launch of this GPU capability with eight nodes (each with eight CPU cores) and eight NVIDIA Tesla M2070 GPUs (each with 448 CUDA cores) is extremely valuable, particularly for researchers who need to process large blocks of data in parallel,” said David Lifka, Cornell CAC director.

For example, researchers from Weill Cornell Medical Center, the University of Michigan Health System, and the Rutgers Laboratory for Computational Imaging and Bioinformatics are currently using the NVIDIA GPUs and MATLAB to accelerate and improve the diagnosis of cancer cells using template matching. Using MATLAB’s built-in GPU functions, the researchers achieved a 14.7-times speedup in processing time (from 86.9 seconds to 5.9 seconds). That is a significant improvement for pathologists who would like to process many large-scale images each day. By comparison, MATLAB code running on GPUs performed 4.8 times faster than code implemented in C++ without GPUs. And because MATLAB is optimized for use with GPUs, users can take advantage of the GPUs’ compute power without learning another programming language or leaving the MATLAB environment.
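Template matching maps naturally onto built-in GPU functions. A minimal sketch (our illustration, not the Weill Cornell code) of FFT-based cross-correlation on `gpuArray` data, with made-up image and template sizes:

```matlab
% Hypothetical sketch: GPU template matching via frequency-domain
% cross-correlation, using only built-in GPU-enabled MATLAB functions.
img  = gpuArray(rand(2048));           % stand-in for a pathology image tile
tmpl = gpuArray(rand(64));             % stand-in for a cell template
% Correlate in the frequency domain; the template is zero-padded by fft2.
F     = fft2(img) .* conj(fft2(tmpl, 2048, 2048));
score = real(ifft2(F));                % correlation surface, on the GPU
[~, idx]   = max(score(:));            % index of the best match
[row, col] = ind2sub(size(score), gather(idx));  % match location on host
```

Every operation here (`fft2`, element-wise multiply, `ifft2`, `max`) runs on the GPU without leaving the MATLAB environment, which is the workflow the speedup figures above refer to.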

In another project, Theo Damoulas, a research associate with the NSF-established Institute for Computational Sustainability (ICS) directed by Prof. Carla Gomes, achieved a 12-times speedup in Dynamic Time Warping (DTW) computation by combining built-in MATLAB GPU functions with custom CUDA code. DTW is the computationally expensive part of the code, which uses machine learning and signal analysis techniques to automatically identify bird species from their flight calls. Automatic flight call classification is much faster and arguably more accurate than manual classification, and it is the first step in creating large-scale networks of recording stations that can provide a detailed understanding of the migration patterns of individual species.
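To show why DTW splits naturally between built-in GPU functions and custom CUDA code, here is a hypothetical sketch (not the ICS implementation; series lengths and features are made up). The embarrassingly parallel local-cost matrix suits built-in GPU operations, while the sequential accumulation recursion is the kind of dependent loop that motivates a hand-written CUDA kernel:

```matlab
% Hypothetical sketch of the two DTW stages (illustration only).
% Stage 1: local-cost matrix via built-in GPU operations.
x = gpuArray(rand(1, 500));    % stand-in feature series for one flight call
y = gpuArray(rand(1, 480));    % stand-in reference series for a species
D = bsxfun(@minus, x', y).^2;  % all pairwise squared costs, on the GPU
% Stage 2: the accumulation recursion, shown on the CPU for clarity;
% its data dependencies are what custom CUDA code targets.
D = gather(D);
for i = 2:size(D, 1), D(i, 1) = D(i, 1) + D(i-1, 1); end
for j = 2:size(D, 2), D(1, j) = D(1, j) + D(1, j-1); end
for i = 2:size(D, 1)
    for j = 2:size(D, 2)
        D(i, j) = D(i, j) + min([D(i-1, j), D(i, j-1), D(i-1, j-1)]);
    end
end
dtwDistance = D(end, end);     % cost of the best warped alignment
```

Stage 1 computes all pairwise costs in one GPU operation; Stage 2's value at (i, j) depends on three previously computed neighbors, so it cannot be expressed as a simple element-wise GPU function.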

This project is representative of the research of the ICS, whose aim is to provide solutions for balancing environmental, economic, and societal needs for a sustainable future by bringing computational thinking to sustainability research. The ICS is a joint venture involving scientists from Cornell University, Bowdoin College, the Conservation Fund, Howard University, Oregon State University, and the Pacific Northwest National Laboratory.

“As GPU performance testing and production runs continue at Cornell, CAC will be developing lessons learned and best practices for porting MATLAB code in order to improve the overall experience with MATLAB GPU computing for scientific researchers,” noted Lifka.

Cornell previously deployed a National Science Foundation-sponsored 512-core experimental MATLAB resource for the research community in partnership with Purdue University, providing a bridge to high-end national resources. Over 550,000 jobs ran on the experimental resource, which facilitated research, student learning, and Science Gateway applications.

Press Contacts:

Cornell CAC
Paul Redfern
(607) 227-1865

Kari Sherrodd
(512) 728-2835

Sarah Coyle
(508) 647-4615

Andrew Humber
(408) 416-7943