MLIS member Prof. Roy Friedman shares an update on newly available HPC infrastructure:
The Technion has recently purchased a cluster of 4 DGX-A100 servers.
Each server sports eight A100 cards connected by NVIDIA's NVLink and NVSwitch technology (600 GB/s per GPU).
The servers are connected by Mellanox 200 Gbps InfiniBand switches; each of the eight cards in each server is independently connected to one of the switches.
Each A100 card has 40 GB of memory.
This is already the strongest AI cluster in Israeli academia, and the good news is that four additional servers are in the process of being ordered!
The cluster is managed with a Slurm job queue, and jobs are encapsulated in containers, giving maximum flexibility in the runtime environment.
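A minimal sketch of what submitting a containerized job to such a cluster might look like, assuming a standard Slurm setup with a container runtime such as Pyxis/enroot. The job name, resource requests, container image, and the `--container-image` flag are all assumptions for illustration; the actual partition names, flags, and policies should be taken from the cluster documentation.

```shell
#!/bin/bash
#SBATCH --job-name=train-example   # hypothetical job name
#SBATCH --gres=gpu:1               # request one A100 GPU
#SBATCH --cpus-per-task=8          # CPU cores for the task
#SBATCH --time=02:00:00            # wall-clock limit

# Run the task inside a container image; --container-image is the
# Pyxis/enroot convention and is an assumption about this cluster's setup.
srun --container-image=nvcr.io/nvidia/pytorch:21.06-py3 \
     python train.py
```

Submitted with `sbatch script.sh`; the container encapsulation means the runtime environment (framework versions, system libraries) travels with the job rather than depending on what is installed on the nodes.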
Usage is currently free for Technion researchers and at the moment the cluster is underutilized.
Sometime later this summer we will start charging a symbolic usage fee, and in the autumn this may be raised slightly, but it will remain much cheaper than any alternative I am aware of.
More information can be found here
To open an account, fill in this form, and remember to check the "GPU Users" box in the affiliation field.