{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Deploy the Pretrained Model on Jetson Nano\n**Author**: [BBuf](https://github.com/BBuf)\n\nThis is an example of using Relay to compile a ResNet model and deploy\nit on Jetson Nano.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import tvm\nfrom tvm import te\nimport tvm.relay as relay\nfrom tvm import rpc\nfrom tvm.contrib import utils, graph_executor as runtime\nfrom tvm.contrib.download import download_testdata" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n## Build TVM Runtime on Jetson Nano\n\nThe first step is to build the TVM runtime on the remote device.\n\n
<div class="alert alert-info"><h4>Note</h4><p>All instructions in both this section and the next should be\n  executed on the target device, e.g. a Jetson Nano, which we assume\n  is running Linux.</p></div>
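Since compilation happens on the local machine, the remote device is only used to run the generated code, so only the TVM runtime needs to be built on it. Below is a minimal sketch of that build, assuming a fresh checkout of the apache/tvm repository; tune the `-j` job count to the board.

```bash
# Build only the TVM runtime on the Jetson Nano (a sketch, not the full build).
git clone --recursive https://github.com/apache/tvm tvm
cd tvm
mkdir build
cp cmake/config.cmake build
cd build
# Edit config.cmake here if you need CUDA (see the note below).
cmake ..
make runtime -j4
```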
<div class="alert alert-info"><h4>Note</h4><p>If we want to use the Jetson Nano's GPU for inference,\n  we need to enable the CUDA option in `config.cmake` before building,\n  that is, `set(USE_CUDA ON)`.</p></div>
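As a concrete way to flip that option, the one-liner below rewrites the copied `config.cmake` in place. It is a sketch that assumes the stock file still contains `set(USE_CUDA OFF)`; if you edited the file by hand, just change that line directly instead.

```bash
# Enable CUDA in the runtime build; assumes the default config.cmake
# copied into build/ still contains "set(USE_CUDA OFF)".
cd tvm/build
sed -i 's/set(USE_CUDA OFF)/set(USE_CUDA ON)/' config.cmake
cmake ..           # re-run cmake so the new option is picked up
make runtime -j4   # rebuild the runtime with CUDA support
```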