Advanced Research Computing Initiative

The Advanced Research Computing (ARC) initiative establishes a high-performance computing (HPC) infrastructure that supports and accelerates computational and data-enabled research, scholarship, and education at the University of Cincinnati.


The University of Cincinnati’s Advanced Research Computing (ARC) initiative offers a readily accessible hybrid CPU/GPU computing cluster, supporting the next generation of computational and data science researchers while developing a highly competitive workforce.

We will partner with you to use the core suite of HPC services and resources. With the ARC resources, researchers can advance theoretical knowledge and expand the realm of discovery, generating leading-edge research and applications suitable for innovation and commercialization, in line with UC’s Next Lives Here strategic direction.

This sustainable high-performance computing (HPC) infrastructure, together with technical support services, accelerates time to discovery, enables sophisticated and increasingly realistic modeling, simulation, and data analysis, and helps bridge users to the local, regional, and national HPC ecosystem.


ARC resources support all disciplines, including healthcare, sciences, engineering and social sciences/humanities, in their quest to harness big data via analytics, modeling and simulation, visualization, artificial intelligence and machine learning.


The ARC initiative is a collaboration between the Office of Research, University of Cincinnati faculty, the Office of Information Technologies (UCIT) technical and research services teams, the College of Engineering and Applied Sciences (CEAS) technical staff, Indiana University Information Technology Service’s Chief HPC Systems Architect, and XSEDE Capabilities and Resource Integration (XCRI) HPC Systems Administration staff. This partnership is made possible as part of a long-term commitment by UC to create an environment to advance the University of Cincinnati’s leadership position in innovative research and impact.

HPC Cluster Available Hardware/Software

ARC is equipped with 50 teraFLOPS of peak CPU performance and two NVIDIA Tesla V100 GPU nodes (224 teraFLOPS deep-learning peak performance), connected by a high-performance 100 Gbps Intel Omni-Path (OPA) interconnect, a significant step forward in both bandwidth and latency.


  • 50 teraFLOPS of peak CPU performance
    • Intel Xeon Gold 6148 (2.4 GHz, 20 cores/40 threads), 192 GB RAM per node
    • Planned increase to 140 teraFLOPS peak CPU performance in the next year
  • 224 teraFLOPS deep-learning peak performance
    • NVIDIA Tesla V100 32 GB passive GPUs
    • Planned increase to 896 teraFLOPS deep-learning peak performance in the next year
  • ZFS storage node with 96 TB raw storage
  • Intel Omni-Path (OPA) HPC networking infrastructure
    • Maximum Omni-Path bandwidth between nodes: 100 Gbps
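
As a rough sanity check, the 50 teraFLOPS figure is consistent with dual-socket Xeon Gold 6148 nodes: each 6148 has 20 cores at 2.4 GHz with two AVX-512 FMA units, i.e. 32 double-precision FLOPs per cycle per core. The socket count per node and the resulting node count below are assumptions for illustration, not published ARC specifications:

```shell
# Theoretical peak double-precision FLOPS for an assumed dual-socket
# Xeon Gold 6148 node. Assumptions (not from the ARC specs): 2 sockets
# per node; AVX-512 with 2 FMA units = 32 DP FLOPs/cycle/core. Real
# AVX-512 clock speeds run below the 2.4 GHz base, so this is an upper bound.
awk 'BEGIN {
  ghz = 2.4; cores = 20; flops_per_cycle = 32; sockets = 2
  node_tflops = ghz * cores * flops_per_cycle * sockets / 1000
  printf "per-node peak: %.3f TFLOPS\n", node_tflops   # 3.072 TFLOPS
  printf "nodes for 50 TFLOPS: %.1f\n", 50 / node_tflops
}'
```

Under these assumptions, roughly 16 such nodes account for the quoted 50 teraFLOPS aggregate.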


  • OpenHPC environment
  • Warewulf cluster provisioning system, with jobs managed by the SLURM workload manager
  • Singularity container support (installation in progress)
  • Development tools, including compilers, debuggers, OpenMP and MPI libraries (including Open MPI) for parallel code development, and open-source AI tools
  • FLEXlm license server support (being installed) so that individual researchers can easily maintain and use their licensed software
  • User login based on UC Active Directory (UC/AD) credentials, simplifying user-group management and access
  • ARC Cluster Report
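
For orientation, a batch job on a SLURM-managed OpenHPC cluster such as this one is typically submitted as a shell script. The partition name, module names, and executable below are illustrative placeholders, not actual ARC settings; consult the ARC documentation for the real values:

```shell
#!/bin/bash
# Minimal SLURM batch script sketch. Partition, module, and program
# names are placeholders assumed for illustration.
#SBATCH --job-name=example-job
#SBATCH --nodes=1
#SBATCH --ntasks=40          # one task per hardware thread is an assumption
#SBATCH --time=02:00:00
#SBATCH --partition=general  # placeholder partition name

module load gnu openmpi      # module names depend on the OpenHPC install
srun ./my_mpi_program        # hypothetical MPI executable
```

The script would be submitted with `sbatch job.sh`, and queued jobs can be checked with `squeue -u $USER`.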

Getting Access

To get access to ARC, the Principal Investigator should send an email with the following information:

  • The project name and a brief description
  • Whether GPUs, CPUs, or both are needed
  • The minimum number of cores needed for the job
  • When you can start using the system
  • The number of CPU/GPU hours needed (number of jobs x hours for each job)
  • Software needed:
    • Whether you can install the software yourself or need assistance
    • For proprietary software, the license file or FLEXlm server link (if stored on another server at UC)
  • UC usernames of students/postdocs who need access
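
As a worked example of the hours estimate requested above (number of jobs x hours per job), with a made-up workload; the extension to core-hours, which some allocation requests use instead, is included for comparison:

```shell
# Hypothetical workload for illustration: 100 jobs, 4 wall-clock hours
# each, 16 cores per job. None of these numbers come from ARC.
jobs=100; hours_per_job=4; cores_per_job=16
cpu_hours=$((jobs * hours_per_job))        # the quantity the request asks for
core_hours=$((cpu_hours * cores_per_job))  # sometimes requested instead
echo "CPU hours: $cpu_hours"               # CPU hours: 400
echo "Core hours: $core_hours"             # Core hours: 6400
```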