TensorRT

NVIDIA TensorRT is a deep learning inference optimizer and runtime that minimizes latency and delivers high throughput for inference applications. In production environments, TensorRT-based applications on GPUs perform up to 100x faster than on CPU for workloads such as video streaming, speech recognition, recommendation, and natural language processing.

Running TensorRT:

Use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
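For example, using the 18.01 release tag mentioned below (substitute whichever tag you copied from the Tags section), the pull command would look like:

```shell
# Pull the TensorRT container image from the NGC registry.
# 18.01 is the example release tag; replace it with the tag you want.
docker pull nvcr.io/nvidia/tensorrt:18.01
```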

Procedure:

  1. In the Tags section, locate the container image release that you want to run.
  2. In the Pull column, click the icon to copy the docker pull command.
  3. Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.
  4. Run the container image. Open a command prompt and issue:

       nvidia-docker run -it --rm nvcr.io/nvidia/tensorrt:<xx.xx>

Where:

  • -it runs the container in interactive mode
  • --rm deletes the container when it exits
  • <xx.xx> is the tag. For example, 18.01.
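On newer Docker installations (19.03 and later, with the NVIDIA Container Toolkit installed), the standalone nvidia-docker wrapper is typically replaced by Docker's native --gpus flag; an equivalent invocation would be:

```shell
# Run the TensorRT container with native Docker GPU support.
# Assumes Docker 19.03+ and the NVIDIA Container Toolkit;
# 18.01 is the example tag from above.
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:18.01
```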


For any queries, raise a ticket with the helpdesk or contact the System Administrator, #103, SERC.