I have recently been setting up the environment for my graduation project on a ParaTera supercomputing cloud GPU server (CNGrid Zone 12), so here are some notes.
First, you can log in to the supercomputing cloud either through the web interface or through a client (ParaTera cloud address: https://cloud.paratera.com/). Note that ParaTera only provides Windows and macOS clients, so Linux users may have to use the web interface (or connect over SSH with pappcloud and write code in vim :( — for how to use pappcloud, see the "papp_cloud User Manual" downloadable from the official site).
Package management on the supercomputing cloud is most commonly done with module (documentation: https://modules.readthedocs.io/en/latest/module.html). We can list the available packages with the module avail command:
[[email protected] ~]$ module avail
------------------------- /usr/share/Modules/modulefiles -------------------------
dot          module-git   module-info  modules      null         use.own
-------------------------------- /etc/modulefiles --------------------------------
mpi/compat-openmpi16-x86_64  mpi/mpich-x86_64
mpi/mpich-3.0-x86_64         mpi/openmpi-x86_64
mpi/mpich-3.2-x86_64
----------------------------- /software/modulefiles ------------------------------
alphafold/2.0          anaconda/2.7           anaconda/3.7(default)
anaconda/3.7.4         bcftools/1.10.1
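Besides module avail, a few other module subcommands cover most day-to-day use; for example:

```shell
module load anaconda/3.7.4     # make the packaged software visible in this shell
module list                    # show which modules are currently loaded
module unload anaconda/3.7.4   # unload a single module again
module purge                   # unload everything
```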
For the details of creating new environments, see the module documentation; I will not repeat them here. One thing to note: because PyTorch and TensorFlow require different CUDA versions, I recommend installing them in two separate environments. The two frameworks are then loaded with the different commands below.
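As a sketch, creating the two separate environments recommended above might look like the following; the environment names torch and tensflow match those used later in this post, while the Python and package versions are assumptions inferred from the pip listings shown below:

```shell
# Run on the login node (dependency installation happens there, not on compute nodes).
module load anaconda/3.7.4

# One environment per framework, since their CUDA requirements differ.
conda create -n torch python=3.7 -y
conda create -n tensflow python=3.7 -y

# Example: put CUDA 11.1 builds of PyTorch into the torch environment.
source activate torch
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
```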
1. Loading and testing the PyTorch environment
Loading the PyTorch 1.9.0 environment:
[[email protected] project]$ module load anaconda/3.7.4
[[email protected] project]$ source activate torch
(torch) [[email protected] project]$
We can now check the torch version:
(torch) [[email protected] ~]$ pip list | grep torch
torch          1.9.0+cu111
torchvision    0.10.0+cu111
Next, we write a test file, test_torch.py:
# test_torch.py
import torch

print(torch.cuda.is_available())
We submit it to a GPU compute node with the sub_torch.sh script below. Note that the submission script itself must contain the environment-loading steps (so they run on the compute node); loading the environment on the login node has no effect there (the login node is only for installing dependencies).
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 5
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --no-requeue

module load anaconda/3.7.4
source activate torch
export PYTHONUNBUFFERED=1
python test_torch.py
The submission command is

sbatch sub_torch.sh

(Note: not bash sub_torch.sh — bash just runs the script locally and cannot submit it to a compute node.)
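When wrapping the submission in a script, it is convenient to capture the job id; a small sketch using sbatch's standard --parsable flag (on multi-cluster setups the output may also include the cluster name after a semicolon):

```shell
jobid=$(sbatch --parsable sub_torch.sh)
echo "submitted job ${jobid}"
# Follow the job's output file as it is written:
tail -f "slurm-${jobid}.out"
```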
Use squeue to check the queue:
[[email protected] project]$ squeue
CLUSTER: priv
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
CLUSTER: swarm
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          16601003       gpu sub_tens   macong  R    INVALID      1 g0002
After a while, checking the output file slurm-16601003.out in the same directory, we see
True
which shows that the PyTorch environment was configured successfully.
2. Loading and testing the TensorFlow environment
Loading the TensorFlow environment:
[[email protected] project]$ module load anaconda/3.7.4
[[email protected] project]$ export LD_LIBRARY_PATH=/home/macong/project/cuda/lib64:$LD_LIBRARY_PATH
[[email protected] project]$ source activate tensflow
(tensflow) [[email protected] project]$
Once loaded, we can check the TensorFlow version:
(tensflow) [[email protected] project]$ pip list | grep tensorflow
tensorflow-estimator    2.4.0
tensorflow-gpu          2.4.1
Next, we write the following test_tensorflow.py file:
# test_tensorflow.py
import tensorflow as tf

print(tf.test.is_gpu_available())
We submit it to a GPU compute node with the sub_tensorflow.sh script below. (As before, the submission script must load the environment itself. Also note that because TensorFlow needs cuDNN, we additionally add the CUDA libraries to the dynamic linker search path.)
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 5
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --no-requeue

module load anaconda/3.7.4
export LD_LIBRARY_PATH=/home/macong/project/cuda/lib64:$LD_LIBRARY_PATH
source activate tensflow
export PYTHONUNBUFFERED=1
python test_tensorflow.py
The submission command is
sbatch sub_tensorflow.sh
The command prints
Submitted batch job 16601097 on cluster swarm
Again, we can check the queue with squeue:
[[email protected] project]$ squeue
CLUSTER: priv
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
CLUSTER: swarm
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          16601097       gpu sub_tens   macong  R    INVALID      1 g0039
After a while, checking the output file slurm-16601097.out in the same directory, we see a long printout:
2021-11-28 15:29:22.848812: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:From test_tensorflow.py:2: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2021-11-28 15:30:04.558903: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-28 15:30:04.592168: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-11-28 15:30:04.596694: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-11-28 15:30:04.736951: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:84:00.0 name: Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-11-28 15:30:04.737540: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-11-28 15:30:05.810351: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-11-28 15:30:05.810525: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-11-28 15:30:06.033285: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-11-28 15:30:06.193055: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-11-28 15:30:06.630374: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-11-28 15:30:06.820341: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-11-28 15:30:06.847036: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-11-28 15:30:06.850769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-11-28 15:30:06.850852: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-11-28 15:30:09.592923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-11-28 15:30:09.593017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0
2021-11-28 15:30:09.593043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N
2021-11-28 15:30:09.628099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/device:GPU:0 with 14761 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:84:00.0, compute capability: 7.0)
True
Of course, we only need to look at the final line, "True", which shows that the TensorFlow environment was configured successfully.
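As the deprecation warning in the log points out, tf.test.is_gpu_available() is scheduled for removal; a sketch of the same check with the recommended API (this, of course, can only run inside the tensflow environment on a compute node):

```python
# test_tensorflow.py, updated to the non-deprecated API.
import tensorflow as tf

# list_physical_devices returns one entry per visible GPU; an empty list means no GPU.
gpus = tf.config.list_physical_devices('GPU')
print(len(gpus) > 0)
```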
3. Frequently used commands:
(1) squeue
squeue shows the current job queue, as we saw earlier:
[[email protected] project]$ squeue
CLUSTER: priv
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
CLUSTER: swarm
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          16601097       gpu sub_tens   macong  R    INVALID      1 g0039
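For scripting on top of squeue, the multi-cluster output above can be parsed by keeping only the rows whose first field is a numeric job id; a minimal self-contained sketch (the sample text reproduces the output shown above):

```python
# parse_squeue.py - sketch: pull job rows out of multi-cluster `squeue` output.
# Field names follow the squeue header:
# JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)

SAMPLE = """\
CLUSTER: priv
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
CLUSTER: swarm
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          16601097       gpu sub_tens   macong  R    INVALID      1 g0039
"""

def parse_jobs(text):
    """Return a list of dicts, one per job row (rows whose first field is numeric)."""
    jobs = []
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            jobs.append({
                "jobid": fields[0],
                "partition": fields[1],
                "name": fields[2],
                "user": fields[3],
                "state": fields[4],
                "nodelist": fields[-1],
            })
    return jobs

if __name__ == "__main__":
    for job in parse_jobs(SAMPLE):
        print(job["jobid"], job["state"], job["nodelist"])
```

In practice you would feed it the live command output, e.g. subprocess.run(["squeue"], capture_output=True, text=True).stdout, instead of SAMPLE.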
(2) scancel
scancel followed by a job id kills a running job. For example, to kill running job 16601167:
scancel 16601167
The corresponding slurm-16601167.out file will then show:
slurmstepd: error: *** JOB 16601167 ON g0011 CANCELLED AT 2021-11-28T10:10:00 ***
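Beyond cancelling a single job id, scancel also accepts filters, which is handy when several test jobs are queued; for example:

```shell
scancel -u "$USER"                  # cancel all of my jobs
scancel --state=PENDING -u "$USER"  # cancel only my jobs still waiting in the queue
```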
For more commands, see the "CNGrid Zone 12 User Manual v2.4" on the official site.