RSS Description of Facilities, Equipment and Other Resources

Research Support Services (RSS)

Research Support Team:

Leadership: Matthew Keeler (Director - IT, University of Missouri), Buddy Scharfenberg (Director - IT, Missouri S&T), Jenn Nixon (Manager - IT, Missouri S&T), John Harrison (Manager - IT, University of Missouri), Jason Lockwood (Information Security Officer - Research)

Non-HPC Research System Support: 14 Research Support Technologists, 1 Research Technology Analyst

Embedded Systems and Software Development: 1 System Administrator, 1 Software Engineer

HPC End User Support: 3 Cyberinfrastructure Engineers, 1 Research Technology Analyst

HPC System Administration: 5 System Administrators

Storage Administration: 1 Storage Architect

Security: 2 Security Analysts

Quantum Computing: 1 Cyberinfrastructure Engineer, 1 Research Technology Analyst


HPC Facilities, Equipment and Other Resources

University of Missouri Cluster (Hellbender)

  • Operating System: Alma Linux 8
  • Scheduling Software: Slurm
  • Nodes: 112
  • Cores: 14,336
  • GPU: 68 Nvidia A100 GPU cards across 17 Nodes
  • Storage: Connected to the Research Data Ecosystem (RDE)
    • 8.5 PB of connected high-performance storage available:
      • 4 PB VAST
      • 4.5 PB GPFS/Pixstor
    • NFS/GPFS mounts to client servers or the Hellbender cluster.
    • Storage can also be provided via SMB to allow access separate from the Hellbender cluster environment (for researchers who need research storage but do not use our HPC environment).
    • Snapshots are enabled to provide individual file recovery for 1 week on the Pixstor array and for 1 month on the VAST array.
    • Storage lab allocations are protected by security groups applied to each share; group membership can be administered by the assigned PI or an appointed representative.
    • Archive: an LTO tape library has been installed to provide long-term archive storage in conjunction with Records Management.
    • File transfer and file sharing are carried out through our RDE environment via our Globus subscription (see the transfer sketch following this list).
  • Network:
    • The NDR InfiniBand backbone provides up to 400 gigabits per second of data throughput from point to point on the network, with an anticipated theoretical latency of less than 600 nanoseconds.
    • Each node is attached to the backbone with an HDR InfiniBand connection capable of providing 200 gigabits per second of data throughput per node, with an anticipated theoretical latency of less than 600 nanoseconds.
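
As an illustration of Globus-based transfer into the RDE environment, the following Python sketch uses the Globus SDK to submit a transfer task between two Globus collections. This is a minimal sketch, not RSS's prescribed workflow: the client ID, collection UUIDs, and paths are hypothetical placeholders that depend on the RDE collection configured for a given lab allocation.

    # Minimal sketch; all IDs and paths below are placeholders.
    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"    # placeholder app registration
    SRC_COLLECTION = "SOURCE-COLLECTION-UUID"  # placeholder, e.g. a lab workstation
    DST_COLLECTION = "RDE-COLLECTION-UUID"     # placeholder RDE research storage share

    # Interactive login: print a URL, then paste back the resulting code.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    tokens = auth_client.oauth2_exchange_code_for_tokens(input("Authorization code: ").strip())
    access_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Submit an asynchronous, checksummed transfer task.
    tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(access_token))
    task = globus_sdk.TransferData(tc, SRC_COLLECTION, DST_COLLECTION,
                                   label="Copy results to RDE", verify_checksum=True)
    task.add_item("/local/results/", "/lab_share/results/", recursive=True)
    print("Task ID:", tc.submit_transfer(task)["task_id"])

Transfers submitted this way run asynchronously on the Globus service, so they continue after the researcher logs out, and progress can be monitored in the Globus web application.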

Missouri University of Science and Technology Cluster (Mill)

  • Operating System: Alma Linux 8
  • Scheduling Software: Slurm (see the example batch script following this list)
  • Nodes: 229
  • Cores: 15,200
  • GPU: 34 GPUs across 8 Nodes
    • 24 Nvidia V100 cards
    • 8 Nvidia H100 cards
    • 2 Nvidia V100S cards
  • Storage:
    • 250 TB of connected high-performance VAST flash storage
    • 800 TB of connected HPC utility Ceph storage
    • Storage lab allocations are protected by security groups applied to each share; group membership can be administered by the assigned PI or an appointed representative.
  • Network:
    • The network is based on an HDR InfiniBand backbone that provides up to 200 gigabits per second of point-to-point throughput on the network. Each node is attached to the backbone with an HDR-100 InfiniBand connection capable of providing 100 gigabits per second of data throughput to each node.
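
Both Hellbender and the Mill schedule work with Slurm. The sketch below is a minimal Python batch script of the kind submitted with sbatch; Slurm reads the #SBATCH comment directives before handing the script to the Python interpreter. The partition, account, and GPU request shown are placeholders, since the actual names and limits differ between the two clusters and between allocations.

    #!/usr/bin/env python3
    #SBATCH --job-name=example-job
    #SBATCH --partition=general          # placeholder partition name
    #SBATCH --account=lab-allocation     # placeholder account/allocation
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=02:00:00
    #SBATCH --gres=gpu:1                 # request one GPU; omit on CPU-only partitions

    # Slurm exposes the granted resources through environment variables;
    # a real job would launch its computation here instead of printing them.
    import os

    print("Job ID:        ", os.environ.get("SLURM_JOB_ID"))
    print("Node list:     ", os.environ.get("SLURM_JOB_NODELIST"))
    print("CPUs per task: ", os.environ.get("SLURM_CPUS_PER_TASK"))

Such a script would be submitted from a login node with "sbatch job.py", and its queue position checked with "squeue -u $USER".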

Science DMZ 

The University of Missouri has a 100 Gbps Science DMZ to support high-speed data transfers between the campus and HPC centers within the Great Plains Network, Internet2, ESnet, Pacific Wave, and other R&E networks. The Science DMZ was supported in part by NSF CC-NIE Award 1245795.

Specialized System Support Resources

RSS provides expert support for specialized devices and services, including sensors, instrumentation interfaces, atypical workstations and servers, software development, storage, security, and other IT-related needs in the research context.


Quantum Computing Facilities, Equipment and Other Resources

University of Missouri System’s Quantum Innovation Center in partnership with IBM Quantum

  • Through the Quantum Innovation Center, UM System has access to exclusive IBM Quantum systems that are not publicly available, as well as the earliest beta releases of the Qiskit quantum SDK. Users can evaluate, explore, and execute quantum computing workloads through an API that provides runtime-level access to IBM's quantum computers via the Qiskit Runtime environment (a minimal access sketch follows this list).
  • UM System researchers can run priority jobs through a fair-share queue on IBM's quantum processing units (QPUs).
  • UM System researchers also have access to IBM's learning and training materials, working groups, and networking opportunities, which will aid UM System in mastering and improving upon state-of-the-art quantum computing algorithms and techniques.
  • Available time is allocated among researchers:
    • 1,600 minutes in a rolling 28-day period
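
As a rough illustration of this access path, the Python sketch below submits a small Bell-state circuit to an IBM QPU through the Qiskit Runtime Sampler primitive. It is a minimal sketch, not a prescribed workflow: it assumes qiskit and qiskit-ibm-runtime are installed and that the researcher's Quantum Innovation Center credentials were previously saved with QiskitRuntimeService.save_account(); instance and backend selection details are placeholders.

    # Minimal sketch, assuming IBM Quantum credentials have already been saved locally.
    from qiskit import QuantumCircuit, transpile
    from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

    service = QiskitRuntimeService()    # loads the previously saved account
    backend = service.least_busy(operational=True, simulator=False)

    # Two-qubit Bell-state circuit with measurement.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure_all()

    # Transpile to the backend's native gate set, then run through the Sampler primitive.
    isa_circuit = transpile(qc, backend=backend)
    job = Sampler(backend).run([isa_circuit], shots=4096)
    counts = job.result()[0].data.meas.get_counts()
    print(backend.name, counts)

Jobs submitted this way are queued against the fair-share access described above, and their QPU time counts toward the rolling 28-day budget.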

Quantum compute resources:

4 QPUs with 156 qubits:

  • 195K-250K Circuit Layer Operations per Second (CLOPS) each
  • 2Q error (layered): < 5.00e-3

1 QPU with 133 qubits:

  • 210K CLOPS
  • 2Q error (layered): < 5.77e-3

4 QPUs with 127 qubits:

  • 150K-220K CLOPS each
  • 2Q error (layered): < 4.09e-2

All QPUs support circuits of up to 5,000 two-qubit gates, a scale that surpasses classically simulatable experiments.