cntk

The Microsoft Cognitive Toolkit, formerly known as CNTK, is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph.

In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs.
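The graph idea can be illustrated with a minimal toy sketch in Python (this is for exposition only and is not CNTK's actual API): leaf nodes hold input values or parameters, and interior nodes apply operations to the results of their children.

```python
# Toy computational graph: leaves hold values (inputs/parameters),
# interior nodes apply an operation to their child nodes.
class Node:
    def __init__(self, op=None, children=(), value=None):
        self.op = op            # None for leaf nodes
        self.children = children
        self.value = value      # set only for leaf nodes

    def eval(self):
        if self.op is None:     # leaf: input value or network parameter
            return self.value
        # interior node: evaluate children, then apply the operation
        return self.op(*(c.eval() for c in self.children))

# y = W*x + b, with scalars standing in for matrices
x = Node(value=3.0)             # input value
W = Node(value=2.0)             # parameter
b = Node(value=1.0)             # parameter
y = Node(op=lambda a, c: a + c,
         children=(Node(op=lambda a, c: a * c, children=(W, x)), b))
print(y.eval())  # 2*3 + 1 = 7.0
```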

Running CNTK:
Use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
Procedure:

  1. In the Tags section, locate the container image release that you want to run.
  2. In the Pull column, click the icon to copy the docker pull command.
  3. Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.
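As a sketch, using the 17.06 tag that appears as an example later in this document, the pasted pull command would look like:

```shell
# Pull the CNTK container image from the NGC registry.
# 17.06 is only an example tag; substitute the tag you selected above.
docker pull nvcr.io/nvidia/cntk:17.06
```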

Run the container image. A typical command to launch the container is:

nvidia-docker run -it --rm -v local_dir:container_dir nvcr.io/nvidia/cntk:<xx.xx>

Where:

  • -it runs the container in interactive mode.
  • --rm deletes the container when it exits.
  • -v mounts a host directory into the container.
  • local_dir is the directory or file from your host system (absolute path) that you want to access from inside your container. For example, the local_dir in the following path is /home/jsmith/data/mnist.

    -v /home/jsmith/data/mnist:/data/mnist

    From inside the container, issuing ls /data/mnist lists the same files as issuing ls /home/jsmith/data/mnist from outside the container.

  • container_dir is the target directory when you are inside your container. For example, /data/mnist is the target directory in the example:

    -v /home/jsmith/data/mnist:/data/mnist

  • <xx.xx> is the tag. For example, 17.06.
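Putting the pieces above together, using the example path and tag from this section, a complete launch command would look like:

```shell
# Launch the 17.06 CNTK container interactively, mounting the example
# host directory /home/jsmith/data/mnist at /data/mnist inside it.
nvidia-docker run -it --rm -v /home/jsmith/data/mnist:/data/mnist \
    nvcr.io/nvidia/cntk:17.06
```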

a. When running on a single GPU, the Microsoft Cognitive Toolkit can be invoked using a command similar to the following:

cntk configFile=myscript.cntk ...

b. When running on multiple GPUs, run the Microsoft Cognitive Toolkit through MPI. The following example uses four GPUs, numbered 0..3, for training:

export OMP_NUM_THREADS=10
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=0,1,2,3
mpirun --allow-run-as-root --oversubscribe --npernode 4 \
-x OMP_NUM_THREADS -x CUDA_DEVICE_ORDER -x CUDA_VISIBLE_DEVICES \
cntk configFile=myscript.cntk ...

c. Running all eight GPUs of a DGX-1 together is even simpler:

export OMP_NUM_THREADS=10
mpirun --allow-run-as-root --oversubscribe --npernode 8 \
-x OMP_NUM_THREADS cntk configFile=myscript.cntk ...

When running the Microsoft Cognitive Toolkit containers, it is important to include at least the following options:

nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ... nvcr.io/nvidia/cntk:17.02 …
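Combining these required options with the interactive flags and example mount used earlier in this document, a full launch might look like the following (the path and tag are the examples from above, not fixed values):

```shell
# Recommended shared-memory and memory-lock settings for CNTK containers,
# combined with the interactive flags and example mount from this guide.
nvidia-docker run -it --rm --shm-size=1g --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    -v /home/jsmith/data/mnist:/data/mnist \
    nvcr.io/nvidia/cntk:17.02
```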

For any queries, raise a ticket in the helpdesk or contact the System Administrator, #103, SERC.