Brief introduction to the cluster
To provide a good experience for a wide range of tasks and users, the cluster consists of many components. Below are some highlights:
- Variety in compute nodes
- Parallel file system storage
- Fast InfiniBand and Ethernet networks
- SSH server cluster
- Web portal
- CLI client
- Software via modules and containers
This user guide does not cover detailed hardware specifications but focuses on the user experience instead. If you are interested in those technical details, please feel free to contact us.
Compute nodes
We want to provide a heterogeneous cluster with a wide variety of hardware and software, so users can experience different combinations. Compute nodes may differ greatly in model, architecture, and performance. We carefully build and fine-tune the software on the cluster to ensure it fully leverages the computing power, and we provide tools on our web portal to help you choose what suits you.
Besides the OneAsia resources, you are also welcome to bring in your own hardware. Our billing system is smart enough to charge jobs by individual nodes, which means a single large job can allocate computing power owned by multiple providers. To align the terminology, we group the hardware into three pools (a job submission sketch follows the list):
- OneAsia
- Hardware owned by OneAsia Network Limited.
- Bring-in shared
- Hardware brought in by external parties who are willing to share it with others.
- Quota, priority, preemption, and fair share can be enforced to control who has priority.
- Bring-in dedicated
- Hardware brought in by external parties and reserved for their exclusive use.
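As a rough sketch, a SLURM batch script targeting one of these pools might look like the following. The partition name used here is only a placeholder, not the cluster's actual configuration; check the web portal for the real partition or account names.

```bash
#!/bin/bash
# Minimal SLURM sketch. The partition name "oneasia" is a placeholder
# standing in for whichever pool you are billed against.
#SBATCH --job-name=pool-demo
#SBATCH --partition=oneasia
#SBATCH --nodes=2
#SBATCH --time=00:10:00

# Print the hostnames of the allocated nodes.
srun hostname
```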
Storage
The cluster has a parallel file system which provides fast and reliable access to your data. We charge monthly based on the maximum allowed quota. You can easily check your quota in our web portal or through our CLI client, and you may request a larger quota at any time by submitting a ticket to us.
You should see at least three file sets (an example job script using them follows this list):
- User home directory
- This is where you may store your persistent data. It is mounted at /pfss/home/$USER and has a default quota of 10GB.
- User scratch directory
- This is where you perform I/O during your jobs. It is mounted at /pfss/scratch01/$USER and the default quota is 100GB. Inactive files may be purged every 30 days.
- Group scratch directory
- This is where you share files with your group members. It is mounted at /pfss/scratch02/$GROUP and the default quota is 1TB.
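As an example, a job script might stage data between these file sets. The mount points follow the list above; the file names and the computation step are purely illustrative.

```bash
#!/bin/bash
#SBATCH --job-name=stage-demo
#SBATCH --time=01:00:00

# Work in the user scratch directory (fast, but inactive files may be purged).
cd /pfss/scratch01/$USER

# Stage input from the persistent home directory (file names are illustrative).
cp /pfss/home/$USER/input.dat .

# ... run the actual computation here ...

# Copy results back to persistent storage and share them with the group.
# $GROUP stands for your group name, as in the mount point above.
cp result.dat /pfss/home/$USER/
cp result.dat /pfss/scratch02/$GROUP/
```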
There are many ways to access your files (an SSH/SFTP example follows this list):
- From the web portal file browser
- SSH / SFTP
- Mount to your local computer using our CLI client
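For instance, the SSH/SFTP route could look like the following. The login hostname shown is a placeholder; replace it and the username with the details given to you.

```bash
# Copy a file into your home directory over SCP
# (login.cluster.example is a placeholder hostname).
scp ./input.dat username@login.cluster.example:/pfss/home/username/

# Or start an interactive SFTP session.
sftp username@login.cluster.example
```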
When running jobs, whether you are using our modules or containers, all file sets you have access to will be available.
Networking
Traffic between compute nodes, and between compute nodes and the parallel file system, goes through our InfiniBand network. Both our modules and containers are compiled with the latest MPI toolchain to fully utilize the bandwidth.
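As an illustration, an MPI job submitted through SLURM typically looks like the sketch below. The module name and program are placeholders; browse the web portal for the exact toolchains we provide.

```bash
#!/bin/bash
#SBATCH --job-name=mpi-demo
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:30:00

# Hypothetical module name; check the software browser for the
# MPI toolchain actually provided on the cluster.
module load openmpi

# srun launches the MPI ranks; inter-node traffic runs over InfiniBand.
srun ./my_mpi_app
```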
Login nodes farm
Whether you access the cluster through our web portal or your own SSH client, you will be connecting to our SSH server cluster. Our load balancer will connect you to the server with the fewest connections. Connections are only granted when authenticated by a private key; no password authentication is allowed. You may connect through the web portal or the CLI client if you prefer not to keep a private key.
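A typical key-based connection looks like the following. The hostname is a placeholder for the actual login address you receive.

```bash
# Connect with your private key (password authentication is disabled).
ssh -i ~/.ssh/id_ed25519 username@login.cluster.example

# Optionally, add an entry to ~/.ssh/config so a plain "ssh hpc" works:
# Host hpc
#     HostName login.cluster.example
#     User username
#     IdentityFile ~/.ssh/id_ed25519
```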
Each connection gets one dedicated CPU core, 1GB of memory, and limited bandwidth, which is enough to prepare your software, job scripts, and data. It is free of charge, and you will have access to all your file sets, all modules, containers, SLURM commands, and our CLI client.
Please leverage compute nodes for heavy workloads. If you really need more resources on a login node, submit a ticket and let us help.
Web portal
Our web portal provides many features to make the journey easier. Our goal is to enable users from different backgrounds to consume HPC resources easily and efficiently. We also leverage the web portal internally for research and management. We will cover the details in later chapters. Below are some highlights:
- Web Terminal
- File browser
- Software browser
- Quick jobs launcher
- Job efficiency viewer and alert
- Quota control
- Team management
- Ticket system
- Cost allocation
CLI client
To further accelerate your workflow, we created our own command line client. We will cover the details later, but below are some example use cases:
- Connect to the login nodes farm without the private key
- Mount a file set to your local computer
- Allocate ports from a compute node for GUI workloads
- Check quota and usage
- Check cluster health
Software
The cluster currently provides free software in two ways: Lmod modules and containers. Our team is working hard to provide state-of-the-art software fine-tuned for the cluster's compute nodes. You may log in to our web portal to browse the available software.
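With Lmod, browsing and loading software from the command line looks like this. The package name is only an example; the web portal shows what is actually installed.

```bash
# List the modules available on the current node.
module avail

# Search across all modules, including those hidden behind toolchains.
module spider gcc

# Load a module into the current environment (example package name).
module load gcc

# Show what is currently loaded.
module list
```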
Besides software, we also provide pre-trained models and popular data sets. We will cover the details later.