Computing

Mission

The Watson Computing Group provides an advanced computing infrastructure in support of the college's academic and research initiatives. We collaborate with the Center for Learning and Teaching, the Division of Research, and Information Technology Services to provide and administer the college's computing and data storage cyberinfrastructure. We facilitate the computing environment's design, acquisition, deployment, and support.

Watson Data Center

The Data Center consists of three rooms of 1,000, 484, and 468 square feet. The rooms use conventional raised-floor air cooling. The largest room has been outfitted with sixteen water-cooled Knürr Electronics GmbH Cooltherm 25 kW equipment racks, which provide the primary cooling for the installed equipment. Heat energy recovered from the data center cooling system is reused to pre-heat outside air as it enters the building ventilation system.

The data center leverages both physical and virtualization technologies to offer a flexible, multi-tier computing environment that supports the research and computing needs of Watson College.

Our Three Tiers

Initial Tier

Intended for general desktop computing, this tier provides 3D business-graphics capabilities; both physical and virtual desktops are deployed in this environment.

Second Tier

Provides access to a robust computational environment for modeling and 3D simulation, as well as network file storage.

The second tier is composed of high-end servers with high-performance graphics capabilities. The environment is housed in the data center and accessed remotely via Horizon View. It also includes a 200 TB network file server that supports research and infrastructure needs. Faculty have access to network storage that is shared within their research groups.

Third Tier

Provides access to a high-performance computing cluster. The cluster is composed of a head node, 160 compute nodes, and a 292 TB direct-attached storage node, with common storage accessible to all nodes. 40 and 56 Gb/s InfiniBand provides networking between nodes, with 1 Gb/s Ethernet for management.

The cluster provides 3,720 CPU cores with approximately 16 TB of system memory. It currently supports MATLAB R2019b jobs of up to 600 cores, as well as VASP, COMSOL, R, and almost any *nix-based application.
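
As an illustration of the kind of multi-node work the cluster supports, the sketch below is a minimal MPI "hello" written with mpi4py that simply reports which node each process runs on. The use of mpi4py and the launch command are assumptions for illustration only; this page does not specify the cluster's installed software modules or job scheduler.

    # hello_mpi.py -- minimal sketch; assumes mpi4py and an MPI runtime are available
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of MPI processes

    print(f"rank {rank} of {size} on {socket.gethostname()}")

    # Gather hostnames on rank 0 to see how many nodes the job spanned.
    hosts = comm.gather(socket.gethostname(), root=0)
    if rank == 0:
        print(f"job ran on {len(set(hosts))} distinct node(s)")

Launched with a command such as mpirun -n 64 python hello_mpi.py (the exact launcher depends on the site's scheduler), each rank prints its host, which makes it easy to confirm that a job is spread across the InfiniBand-connected compute nodes.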