Binghamton University, State University of New York - ES2 - NSF


Binghamton University

Binghamton University has amassed a vast infrastructure for energy-efficient systems research conducted by academic and industrial partners. Housed in the new LEED (Leadership in Energy and Environmental Design) Platinum Center of Excellence Building, a fully instrumented 4,500-square-foot ES2 Data Center is available to ES2 researchers and IAB member companies. This data center laboratory has the scale of a mid-range data center but, unlike a production data center, permits disruptive experiments to be carried out. It has three cold aisles, two of which are fully contained. The laboratory incorporates several types of cooling facilities (traditional chilled-air cooling, rear-door heat exchangers using chilled water, and warm-water cooling) to permit experiments with cooling technologies found in legacy as well as state-of-the-art data centers. The facility currently holds over 30 racks of storage servers and 2U and blade compute servers, and another 30 racks are now being populated. The networking infrastructure within this data center laboratory consists of 10 Gbit/s Ethernet plus extensive switched optical-fiber connections for the large storage arrays and load-balancing switches. A separate 1,200-square-foot staging laboratory and a separate laboratory space for a full-sized container-based data center are also available to ES2 researchers.
The laboratory has extensive instrumentation and controls, in hardware and software, to permit a wide variety of experiments. On the IT side, these include the ability to measure the power consumption of individual servers, server load statistics, and network traffic in real time, and to generate realistic loads on individual servers and on the data center as a whole. On the cooling and thermal side, the instrumentation and controls include the ability to measure temperatures in various zones of the servers, air velocities, and water flow rates in real time, and to control CRAC fan speeds and inlet temperatures and to adjust the mix of chilled and warm water.
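As a concrete illustration, the cooling-side controls described above could be driven by a simple feedback law. The sketch below shows a hypothetical proportional controller that commands a CRAC fan speed from a measured cold-aisle inlet temperature; the function name, setpoint, and gain are illustrative assumptions, not the laboratory's actual control interface.

```python
# Hypothetical sketch: proportional control of CRAC fan speed from a
# cold-aisle inlet temperature reading. Setpoint and gain are made-up
# illustrative values, not the lab's actual configuration.

def crac_fan_speed(inlet_temp_c, setpoint_c=25.0, gain_pct_per_c=8.0,
                   min_pct=30.0, max_pct=100.0):
    """Return a fan-speed command in percent.

    Speed rises `gain_pct_per_c` percent for every degree C the inlet
    temperature exceeds the setpoint, clamped to the fan's range.
    """
    error_c = inlet_temp_c - setpoint_c
    speed = min_pct + gain_pct_per_c * max(error_c, 0.0)
    return max(min_pct, min(speed, max_pct))
```

A real controller would add integral action and rate limits, but the clamped proportional law above captures the basic sensor-to-actuator loop the instrumentation supports.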

University of Texas at Arlington

The University of Texas at Arlington is home to an electronics cooling lab with air-cooling equipment, including an air flow bench, and reliability equipment such as an environmental chamber, an Instron tester, and failure-analysis capability. UTA has two 625-square-foot data center lab facilities equipped with two 20-ton CRAC units. The facility is used to run experiments on the data center cooling technologies we are investigating. One of the data center laboratories houses Open Compute hardware exclusively. UTA recently received 480 servers from Yahoo! and a 42U direct-to-chip liquid-cooled rack from Cisco. The measurement equipment in the data center includes wired as well as wireless temperature, humidity, and pressure instrumentation. Students who are part of the I/UCRC also have access to Professor Agonafer's thermal and reliability labs. A Stereo Particle Image Velocimetry (SPIV) system housed at the Aerodynamics Research Center (ARC) is available for experimental flow visualization. In addition, UTA is home to a $145 million Engineering Research Lab. The facility provides approximately 234,000 square feet of space for state-of-the-art, multi-disciplinary research and teaching labs and classrooms, faculty and graduate student offices, administrative offices, conference rooms, and support areas.

UTA has access to a research modular data center with a direct/indirect evaporative cooling module at the Dallas, TX facility of industrial partner Mestex. The research facility, which houses four racks populated with 1U servers, is operated and monitored year-round in the hot and humid Dallas climate using only evaporative cooling or outside-air free cooling. Facebook also donated a $200K scaled direct evaporative cooling unit for research purposes, complete with a 30 kW heating element to represent data center heat load. The unit can draw 5,000 CFM and comes with sensors and control programs built in.

The University of Texas at Arlington is home to the preeminent university-based nanotechnology research, development, and teaching facility in north Texas. The Nanotechnology Research & Teaching Facility is an interdisciplinary resource open to scientists within and outside the University. Research activities are conducted through mutually beneficial associations of chemistry, electrical engineering, mechanical and aerospace engineering, materials science, and physics faculty, graduate students, and research assistants at UTA, as well as through collaborations with investigators at other universities and in the private sector. We plan to utilize the facility for fabrication related to 3D packaging.

Villanova University

The Villanova University Laboratory for Advanced Thermal and Fluid Systems is a modern, comprehensive laboratory for fundamental investigations in thermal transport and for characterization of thermal management in electronics, energy, and propulsion systems. The laboratory houses several major facilities and many customized rigs. The Low Speed Boundary Layer Wind Tunnel is a versatile open-return wind tunnel with flow velocities up to 60 m/s and freestream turbulence of less than 1%. The Closed Return Aerodynamics Wind Tunnel is a commercial closed-return wind tunnel with a 2 ft x 2 ft cross section and flow velocities as high as 55 m/s (approximately 120 mph) in the test section. The Jet Impingement Facility provides clean, low-turbulence flow to a variety of nozzle configurations used to study the fluid mechanics and heat transfer of impinging jets for electronics cooling. A companion Spray Cooling Rig is designed to investigate the fluid mechanics and heat transfer of spray and droplet cooling. The Mini/Micro Channel Flow Loop is a specialized liquid flow loop that delivers metered, constant-temperature flow for investigation of single- and multi-phase heat transfer in small-scale heat exchangers for electronics cooling. The laboratory has many other custom rigs for measuring thermal properties such as thermal conductivity and thermal diffusivity, and for measuring the thermal impedance of interface materials. Diagnostic and measurement tools include thermal and particle image velocimetry, infrared imaging, ultra-high-speed video, and liquid crystal thermal visualization.

Georgia Institute of Technology

The Microelectronics and Emerging Technologies Thermal Laboratory (METTL) houses fabrication and characterization facilities, as well as experimental rigs for the study of heat transfer and fluid flow phenomena at length scales from tens of nanometers to approximately one meter. Characterization equipment includes infrared microscopy, particle image velocimetry, high-speed imaging, and temperature, pressure, and flow rate measurement capabilities over a broad range. Fabrication capabilities include wafer dicing, wire bonding, and nano-fabrication. Experiments at the rack level will be performed at the Consortium for Energy Efficient Thermal Management (CEETHERM) Data Center Laboratory, which accommodates 28 computing racks arranged in a typical hot-aisle/cold-aisle configuration. The data center infrastructure is designed to handle power densities of 500 W/sq. ft. Six computer room air conditioning (CRAC) units supply a total of 79,000 CFM of air to cool the data center. A variety of cooling arrangements (under-floor and overhead distribution) can be achieved with the four down-flow and two up-flow CRAC units. In addition, the facility has two air economizers that draw in cold ambient air during winter to reduce the load on the chilled water system. In economizer mode, the facility's digital control system performs a series of psychrometric calculations to check whether the outside conditions are conducive and accordingly commands the exhaust and economizer fans to draw in outside air. Power for the data center comes from a 480 V, 1200 A grid supply. The grid power branches into two panels that independently distribute power to the computing and cooling infrastructure. The power for the computing equipment is fed through three power distribution units that step the voltage down from 480 V to 210 V to supply the rack power strips.
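The economizer decision described above amounts to checking the outside-air state against an allowable envelope. The sketch below is a minimal, hypothetical version of that check using dry-bulb temperature and dew point; the threshold values are illustrative assumptions, not the facility's actual setpoints.

```python
# Hypothetical sketch of an economizer decision: admit outside air only
# when its dry-bulb temperature and dew point fall inside an
# ASHRAE-style allowable envelope. Thresholds are illustrative.

def economizer_ok(dry_bulb_c, dew_point_c,
                  db_range=(5.0, 25.0), dp_range=(-9.0, 15.0)):
    """Return True if outside air may be used for free cooling."""
    db_lo, db_hi = db_range
    dp_lo, dp_hi = dp_range
    return db_lo <= dry_bulb_c <= db_hi and dp_lo <= dew_point_c <= dp_hi
```

A production control system would use full psychrometric relations (humidity ratio, enthalpy) rather than two box constraints, but the envelope check captures the gating logic that decides when the exhaust and economizer fans are commanded on.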
The data center has power monitoring installed at various levels, from the incoming grid supply down to the rack power strips. The data from the power meters and other data acquisition systems in the facility is logged into the PI software to generate PUE charts. Another major feature of the facility is the ability to conduct controlled studies (which usually cannot be performed in a large production data center) in a simulated data center environment and to compare the results with an existing data center facility. This is achieved by splitting the facility into two equal sections using a collapsible insulated partition. The control system is designed so that each half has its own independent power monitoring, air flow distribution, and economizer controls. This arrangement offers the flexibility to validate computational models and test control algorithms at a smaller scale.
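The PUE charts mentioned above reduce to a simple ratio: total facility power divided by IT power. A minimal sketch of that calculation, with made-up sample readings:

```python
# Illustrative sketch of the PUE calculation behind the charts:
# PUE = total facility power / IT equipment power.
# The breakdown into cooling and "other" loads is an assumption for
# illustration; the meter readings below are made up.

def pue(it_kw, cooling_kw, other_kw=0.0):
    """Power Usage Effectiveness from kW readings at the meters."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return (it_kw + cooling_kw + other_kw) / it_kw

# e.g. 100 kW IT load, 30 kW cooling, 10 kW lighting/PDU losses
# gives pue(100.0, 30.0, 10.0) == 1.4
```

An ideal facility approaches PUE = 1.0; the facility's split design lets the same calculation be run independently on each half of the data center.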

Last Updated: 1/27/17