{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "(sphx_glr_tutorial_tvmc_command_line_driver.py)=\n", "# 用 TVMC 编译和优化模型\n", "\n", "原作者:[Leandro Nunes](https://github.com/leandron), [Matthew Barrett](https://github.com/mbaret), [Chris Hoge](https://github.com/hogepodge)\n", "\n", "在本节中,将使用 TVMC,即 TVM 命令行驱动程序。TVMC 工具,它暴露了 TVM 的功能,如 auto-tuning、编译、profiling 和通过命令行界面执行模型。\n", "\n", "在完成本节内容后,将使用 TVMC 来完成以下任务:\n", "\n", "* 为 TVM 运行时编译预训练 ResNet-50 v2 模型。\n", "* 通过编译后的模型运行真实图像,并解释输出和模型的性能。\n", "* 使用 TVM 在 CPU 上调优模型。\n", "* 使用 TVM 收集的调优数据重新编译优化模型。\n", "* 通过优化后的模型运行图像,并比较输出和模型的性能。\n", "\n", "本节的目的是让你了解 TVM 和 TVMC 的能力,并为理解 TVM 的工作原理奠定基础。\n", "\n", "## 使用 TVMC\n", "\n", "TVMC 是 Python 应用程序,是 TVM Python 软件包的一部分。当你使用 Python 包安装 TVM 时,你将得到 TVMC 作为命令行应用程序,名为 ``tvmc``。这个命令的位置将取决于你的平台和安装方法。\n", "\n", "另外,如果你在 ``$PYTHONPATH`` 上将 TVM 作为 Python 模块,你可以通过可执行的 python 模块 ``python -m tvm.driver.tvmc`` 访问命令行驱动功能。\n", "\n", "为简单起见,本教程将提到 TVMC 命令行使用 ``tvmc ``,但同样的结果可以用 ``python -m tvm.driver.tvmc ``。\n", "\n", "你可以使用帮助页面查看:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "usage: tvmc [--config CONFIG] [-v] [--version] [-h]\n", " {micro,run,tune,compile} ...\n", "\n", "TVM compiler driver\n", "\n", "options:\n", " --config CONFIG configuration json file\n", " -v, --verbose increase verbosity\n", " --version print the version and exit\n", " -h, --help show this help message and exit.\n", "\n", "commands:\n", " {micro,run,tune,compile}\n", " micro select micro context.\n", " run run a compiled module\n", " tune auto-tune a model\n", " compile compile a model.\n", "\n", "TVMC - TVM driver command-line interface\n" ] } ], "source": [ "!python -m tvm.driver.tvmc --help" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "``tvmc`` 可用的 TVM 的主要功能来自子命令 ``compile`` 和 ``run``,以及 ``tune``。要了解某个子命令下的具体选项,请使用 ``tvmc --help``。将在本教程中逐一介绍这些命令,但首先需要下载预训练模型来使用。\n", "\n", "## 获得模型\n", "\n", "在本教程中,将使用 ResNet-50 v2。ResNet-50 是卷积神经网络,有 50 层深度,设计用于图像分类。将使用的模型已经在超过一百万张图片上进行了预训练,有 1000 种不同的分类。该网络输入图像大小为 224x224。如果你有兴趣探究更多关于 ResNet-50 模型的结构,建议下载 `[Netron](https://netron.app),它免费提供的 ML 模型查看器。\n", "\n", "在本教程中,将使用 ONNX 格式的模型。" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2022-04-26 13:07:52-- https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v2-7.onnx\n", "Resolving github.com (github.com)... 20.205.243.166\n", "Connecting to github.com (github.com)|20.205.243.166|:443... connected.\n", "HTTP request sent, awaiting response... 302 Found\n", "Location: https://media.githubusercontent.com/media/onnx/models/main/vision/classification/resnet/model/resnet50-v2-7.onnx [following]\n", "--2022-04-26 13:07:53-- https://media.githubusercontent.com/media/onnx/models/main/vision/classification/resnet/model/resnet50-v2-7.onnx\n", "Resolving media.githubusercontent.com (media.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.110.133, ...\n", "Connecting to media.githubusercontent.com (media.githubusercontent.com)|185.199.111.133|:443... connected.\n", "HTTP request sent, awaiting response... 
200 OK\n", "Length: 102442450 (98M) [application/octet-stream]\n", "Saving to: ‘resnet50-v2-7.onnx’\n", "\n", "resnet50-v2-7.onnx 100%[===================>] 97.70M 4.51MB/s in 25s \n", "\n", "2022-04-26 13:08:27 (3.89 MB/s) - ‘resnet50-v2-7.onnx’ saved [102442450/102442450]\n", "\n" ] } ], "source": [ "!wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v2-7.onnx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "为了让该模型可以被其他教程使用,需要:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "!mv resnet50-v2-7.onnx ../../_models/resnet50-v2-7.onnx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} 支持的模型格式\n", "TVMC 支持用 Keras、ONNX、TensorFlow、TFLite 和 Torch 创建的模型。如果你需要明确地提供你所使用的模型格式,请使用选项 ``--model-format``。\n", "```\n", "\n", "更多信息见:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "usage: tvmc compile [-h] [--cross-compiler CROSS_COMPILER]\n", " [--cross-compiler-options CROSS_COMPILER_OPTIONS]\n", " [--desired-layout {NCHW,NHWC}] [--dump-code FORMAT]\n", " [--model-format {keras,onnx,pb,tflite,pytorch,paddle}]\n", " [-o OUTPUT] [-f {so,mlf}] [--pass-config name=value]\n", " [--target TARGET]\n", " [--target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE]\n", " [--target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS]\n", " [--target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL]\n", " [--target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG]\n", " [--target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE]\n", " [--target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS]\n", " [--target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE]\n", " [--target-ext_dev-libs TARGET_EXT_DEV_LIBS]\n", " [--target-ext_dev-model TARGET_EXT_DEV_MODEL]\n", " [--target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB]\n", " [--target-ext_dev-tag TARGET_EXT_DEV_TAG]\n", " [--target-ext_dev-device TARGET_EXT_DEV_DEVICE]\n", " [--target-ext_dev-keys TARGET_EXT_DEV_KEYS]\n", " [--target-llvm-fast-math TARGET_LLVM_FAST_MATH]\n", " [--target-llvm-opt-level TARGET_LLVM_OPT_LEVEL]\n", " [--target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API]\n", " [--target-llvm-from_device TARGET_LLVM_FROM_DEVICE]\n", " [--target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF]\n", " [--target-llvm-mattr TARGET_LLVM_MATTR]\n", " [--target-llvm-num-cores TARGET_LLVM_NUM_CORES]\n", " [--target-llvm-libs TARGET_LLVM_LIBS]\n", " [--target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ]\n", " [--target-llvm-link-params TARGET_LLVM_LINK_PARAMS]\n", " [--target-llvm-interface-api TARGET_LLVM_INTERFACE_API]\n", " [--target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT]\n", " [--target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB]\n", " [--target-llvm-tag TARGET_LLVM_TAG]\n", " [--target-llvm-mtriple TARGET_LLVM_MTRIPLE]\n", " [--target-llvm-model TARGET_LLVM_MODEL]\n", " [--target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI]\n", " [--target-llvm-mcpu TARGET_LLVM_MCPU]\n", " [--target-llvm-device TARGET_LLVM_DEVICE]\n", " [--target-llvm-runtime TARGET_LLVM_RUNTIME]\n", " [--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP]\n", " [--target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC]\n", " [--target-llvm-mabi TARGET_LLVM_MABI]\n", " [--target-llvm-keys TARGET_LLVM_KEYS]\n", " [--target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN]\n", " [--target-hybrid-from_device 
TARGET_HYBRID_FROM_DEVICE]\n", " [--target-hybrid-libs TARGET_HYBRID_LIBS]\n", " [--target-hybrid-model TARGET_HYBRID_MODEL]\n", " [--target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB]\n", " [--target-hybrid-tag TARGET_HYBRID_TAG]\n", " [--target-hybrid-device TARGET_HYBRID_DEVICE]\n", " [--target-hybrid-keys TARGET_HYBRID_KEYS]\n", " [--target-aocl-from_device TARGET_AOCL_FROM_DEVICE]\n", " [--target-aocl-libs TARGET_AOCL_LIBS]\n", " [--target-aocl-model TARGET_AOCL_MODEL]\n", " [--target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB]\n", " [--target-aocl-tag TARGET_AOCL_TAG]\n", " [--target-aocl-device TARGET_AOCL_DEVICE]\n", " [--target-aocl-keys TARGET_AOCL_KEYS]\n", " [--target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS]\n", " [--target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE]\n", " [--target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE]\n", " [--target-nvptx-libs TARGET_NVPTX_LIBS]\n", " [--target-nvptx-model TARGET_NVPTX_MODEL]\n", " [--target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB]\n", " [--target-nvptx-mtriple TARGET_NVPTX_MTRIPLE]\n", " [--target-nvptx-tag TARGET_NVPTX_TAG]\n", " [--target-nvptx-mcpu TARGET_NVPTX_MCPU]\n", " [--target-nvptx-device TARGET_NVPTX_DEVICE]\n", " [--target-nvptx-keys TARGET_NVPTX_KEYS]\n", " [--target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS]\n", " [--target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE]\n", " [--target-opencl-from_device TARGET_OPENCL_FROM_DEVICE]\n", " [--target-opencl-libs TARGET_OPENCL_LIBS]\n", " [--target-opencl-model TARGET_OPENCL_MODEL]\n", " [--target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB]\n", " [--target-opencl-tag TARGET_OPENCL_TAG]\n", " [--target-opencl-device TARGET_OPENCL_DEVICE]\n", " [--target-opencl-keys TARGET_OPENCL_KEYS]\n", " [--target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS]\n", " [--target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE]\n", " [--target-metal-from_device TARGET_METAL_FROM_DEVICE]\n", " [--target-metal-libs TARGET_METAL_LIBS]\n", " [--target-metal-keys TARGET_METAL_KEYS]\n", " [--target-metal-model TARGET_METAL_MODEL]\n", " [--target-metal-system-lib TARGET_METAL_SYSTEM_LIB]\n", " [--target-metal-tag TARGET_METAL_TAG]\n", " [--target-metal-device TARGET_METAL_DEVICE]\n", " [--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS]\n", " [--target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS]\n", " [--target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE]\n", " [--target-webgpu-libs TARGET_WEBGPU_LIBS]\n", " [--target-webgpu-model TARGET_WEBGPU_MODEL]\n", " [--target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB]\n", " [--target-webgpu-tag TARGET_WEBGPU_TAG]\n", " [--target-webgpu-device TARGET_WEBGPU_DEVICE]\n", " [--target-webgpu-keys TARGET_WEBGPU_KEYS]\n", " [--target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS]\n", " [--target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE]\n", " [--target-rocm-from_device TARGET_ROCM_FROM_DEVICE]\n", " [--target-rocm-libs TARGET_ROCM_LIBS]\n", " [--target-rocm-mattr TARGET_ROCM_MATTR]\n", " [--target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-rocm-model TARGET_ROCM_MODEL]\n", " [--target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB]\n", " [--target-rocm-mtriple TARGET_ROCM_MTRIPLE]\n", " [--target-rocm-tag TARGET_ROCM_TAG]\n", " [--target-rocm-device TARGET_ROCM_DEVICE]\n", " [--target-rocm-mcpu TARGET_ROCM_MCPU]\n", " [--target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK]\n", " [--target-rocm-keys TARGET_ROCM_KEYS]\n", " 
[--target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS]\n", " [--target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE]\n", " [--target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE]\n", " [--target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER]\n", " [--target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION]\n", " [--target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER]\n", " [--target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z]\n", " [--target-vulkan-libs TARGET_VULKAN_LIBS]\n", " [--target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION]\n", " [--target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS]\n", " [--target-vulkan-mattr TARGET_VULKAN_MATTR]\n", " [--target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE]\n", " [--target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE]\n", " [--target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR]\n", " [--target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64]\n", " [--target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32]\n", " [--target-vulkan-model TARGET_VULKAN_MODEL]\n", " [--target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X]\n", " [--target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB]\n", " [--target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y]\n", " [--target-vulkan-tag TARGET_VULKAN_TAG]\n", " [--target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8]\n", " [--target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION]\n", " [--target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION]\n", " [--target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER]\n", " [--target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE]\n", " [--target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32]\n", " [--target-vulkan-device TARGET_VULKAN_DEVICE]\n", " [--target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK]\n", " [--target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE]\n", " [--target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME]\n", " [--target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT]\n", " [--target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS]\n", " [--target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16]\n", " [--target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME]\n", " [--target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64]\n", " [--target-vulkan-keys TARGET_VULKAN_KEYS]\n", " [--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16]\n", " [--target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS]\n", " [--target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE]\n", " [--target-cuda-from_device TARGET_CUDA_FROM_DEVICE]\n", " [--target-cuda-arch TARGET_CUDA_ARCH]\n", " [--target-cuda-libs TARGET_CUDA_LIBS]\n", " [--target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-cuda-model TARGET_CUDA_MODEL]\n", " [--target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB]\n", " [--target-cuda-tag TARGET_CUDA_TAG]\n", " [--target-cuda-device TARGET_CUDA_DEVICE]\n", " [--target-cuda-mcpu TARGET_CUDA_MCPU]\n", " [--target-cuda-max_threads_per_block 
TARGET_CUDA_MAX_THREADS_PER_BLOCK]\n", " [--target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK]\n", " [--target-cuda-keys TARGET_CUDA_KEYS]\n", " [--target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE]\n", " [--target-sdaccel-libs TARGET_SDACCEL_LIBS]\n", " [--target-sdaccel-model TARGET_SDACCEL_MODEL]\n", " [--target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB]\n", " [--target-sdaccel-tag TARGET_SDACCEL_TAG]\n", " [--target-sdaccel-device TARGET_SDACCEL_DEVICE]\n", " [--target-sdaccel-keys TARGET_SDACCEL_KEYS]\n", " [--target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE]\n", " [--target-composite-libs TARGET_COMPOSITE_LIBS]\n", " [--target-composite-devices TARGET_COMPOSITE_DEVICES]\n", " [--target-composite-model TARGET_COMPOSITE_MODEL]\n", " [--target-composite-tag TARGET_COMPOSITE_TAG]\n", " [--target-composite-device TARGET_COMPOSITE_DEVICE]\n", " [--target-composite-keys TARGET_COMPOSITE_KEYS]\n", " [--target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE]\n", " [--target-stackvm-libs TARGET_STACKVM_LIBS]\n", " [--target-stackvm-model TARGET_STACKVM_MODEL]\n", " [--target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB]\n", " [--target-stackvm-tag TARGET_STACKVM_TAG]\n", " [--target-stackvm-device TARGET_STACKVM_DEVICE]\n", " [--target-stackvm-keys TARGET_STACKVM_KEYS]\n", " [--target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE]\n", " [--target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS]\n", " [--target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL]\n", " [--target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB]\n", " [--target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG]\n", " [--target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE]\n", " [--target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS]\n", " [--target-c-unpacked-api TARGET_C_UNPACKED_API]\n", " [--target-c-from_device TARGET_C_FROM_DEVICE]\n", " [--target-c-libs TARGET_C_LIBS]\n", " [--target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT]\n", " [--target-c-executor TARGET_C_EXECUTOR]\n", " [--target-c-link-params TARGET_C_LINK_PARAMS]\n", " [--target-c-model TARGET_C_MODEL]\n", " [--target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT]\n", " [--target-c-system-lib TARGET_C_SYSTEM_LIB]\n", " [--target-c-tag TARGET_C_TAG]\n", " [--target-c-interface-api TARGET_C_INTERFACE_API]\n", " [--target-c-mcpu TARGET_C_MCPU]\n", " [--target-c-device TARGET_C_DEVICE]\n", " [--target-c-runtime TARGET_C_RUNTIME]\n", " [--target-c-keys TARGET_C_KEYS]\n", " [--target-c-march TARGET_C_MARCH]\n", " [--target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE]\n", " [--target-hexagon-libs TARGET_HEXAGON_LIBS]\n", " [--target-hexagon-mattr TARGET_HEXAGON_MATTR]\n", " [--target-hexagon-model TARGET_HEXAGON_MODEL]\n", " [--target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS]\n", " [--target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE]\n", " [--target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB]\n", " [--target-hexagon-mcpu TARGET_HEXAGON_MCPU]\n", " [--target-hexagon-device TARGET_HEXAGON_DEVICE]\n", " [--target-hexagon-tag TARGET_HEXAGON_TAG]\n", " [--target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS]\n", " [--target-hexagon-keys TARGET_HEXAGON_KEYS]\n", " [--tuning-records PATH] [--executor EXECUTOR]\n", " [--executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS]\n", " [--executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT]\n", " [--executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API]\n", " [--executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API]\n", " 
[--executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS]\n", " [--runtime RUNTIME]\n", " [--runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB]\n", " [--runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB] [-v]\n", " [-O [0-3]] [--input-shapes INPUT_SHAPES]\n", " [--disabled-pass DISABLED_PASS]\n", " [--module-name MODULE_NAME]\n", " FILE\n", "\n", "positional arguments:\n", " FILE path to the input model file.\n", "\n", "options:\n", " -h, --help show this help message and exit\n", " --cross-compiler CROSS_COMPILER\n", " the cross compiler to generate target libraries, e.g.\n", " 'aarch64-linux-gnu-gcc'.\n", " --cross-compiler-options CROSS_COMPILER_OPTIONS\n", " the cross compiler options to generate target\n", " libraries, e.g. '-mfpu=neon-vfpv4'.\n", " --desired-layout {NCHW,NHWC}\n", " change the data layout of the whole graph.\n", " --dump-code FORMAT comma separated list of formats to export the input\n", " model, e.g. 'asm,ll,relay'.\n", " --model-format {keras,onnx,pb,tflite,pytorch,paddle}\n", " specify input model format.\n", " -o OUTPUT, --output OUTPUT\n", " output the compiled module to a specified archive.\n", " Defaults to 'module.tar'.\n", " -f {so,mlf}, --output-format {so,mlf}\n", " output format. Use 'so' for shared object or 'mlf' for\n", " Model Library Format (only for microTVM targets).\n", " Defaults to 'so'.\n", " --pass-config name=value\n", " configurations to be used at compile time. This option\n", " can be provided multiple times, each one to set one\n", " configuration value, e.g. '--pass-config\n", " relay.backend.use_auto_scheduler=0', e.g. '--pass-\n", " config\n", " tir.add_lower_pass=opt_level1,pass1,opt_level2,pass2'.\n", " --target TARGET compilation target as plain string, inline JSON or\n", " path to a JSON file\n", " --tuning-records PATH\n", " path to an auto-tuning log file by AutoTVM. If not\n", " presented, the fallback/tophub configs will be used.\n", " --executor EXECUTOR Executor to compile the model with\n", " --runtime RUNTIME Runtime to compile the model with\n", " -v, --verbose increase verbosity.\n", " -O [0-3], --opt-level [0-3]\n", " specify which optimization level to use. Defaults to\n", " '3'.\n", " --input-shapes INPUT_SHAPES\n", " specify non-generic shapes for model to run, format is\n", " \"input_name:[dim1,dim2,...,dimn]\n", " input_name2:[dim1,dim2]\".\n", " --disabled-pass DISABLED_PASS\n", " disable specific passes, comma-separated list of pass\n", " names.\n", " --module-name MODULE_NAME\n", " The output module name. 
Defaults to 'default'.\n", "\n", "target example_target_hook:\n", " --target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE\n", " target example_target_hook from_device\n", " --target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS\n", " target example_target_hook libs options\n", " --target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL\n", " target example_target_hook model string\n", " --target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG\n", " target example_target_hook tag string\n", " --target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE\n", " target example_target_hook device string\n", " --target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS\n", " target example_target_hook keys options\n", "\n", "target ext_dev:\n", " --target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE\n", " target ext_dev from_device\n", " --target-ext_dev-libs TARGET_EXT_DEV_LIBS\n", " target ext_dev libs options\n", " --target-ext_dev-model TARGET_EXT_DEV_MODEL\n", " target ext_dev model string\n", " --target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB\n", " target ext_dev system-lib\n", " --target-ext_dev-tag TARGET_EXT_DEV_TAG\n", " target ext_dev tag string\n", " --target-ext_dev-device TARGET_EXT_DEV_DEVICE\n", " target ext_dev device string\n", " --target-ext_dev-keys TARGET_EXT_DEV_KEYS\n", " target ext_dev keys options\n", "\n", "target llvm:\n", " --target-llvm-fast-math TARGET_LLVM_FAST_MATH\n", " target llvm fast-math\n", " --target-llvm-opt-level TARGET_LLVM_OPT_LEVEL\n", " target llvm opt-level\n", " --target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API\n", " target llvm unpacked-api\n", " --target-llvm-from_device TARGET_LLVM_FROM_DEVICE\n", " target llvm from_device\n", " --target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF\n", " target llvm fast-math-ninf\n", " --target-llvm-mattr TARGET_LLVM_MATTR\n", " target llvm mattr options\n", " --target-llvm-num-cores TARGET_LLVM_NUM_CORES\n", " target llvm num-cores\n", " --target-llvm-libs TARGET_LLVM_LIBS\n", " target llvm libs options\n", " --target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ\n", " target llvm fast-math-nsz\n", " --target-llvm-link-params TARGET_LLVM_LINK_PARAMS\n", " target llvm link-params\n", " --target-llvm-interface-api TARGET_LLVM_INTERFACE_API\n", " target llvm interface-api string\n", " --target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT\n", " target llvm fast-math-contract\n", " --target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB\n", " target llvm system-lib\n", " --target-llvm-tag TARGET_LLVM_TAG\n", " target llvm tag string\n", " --target-llvm-mtriple TARGET_LLVM_MTRIPLE\n", " target llvm mtriple string\n", " --target-llvm-model TARGET_LLVM_MODEL\n", " target llvm model string\n", " --target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI\n", " target llvm mfloat-abi string\n", " --target-llvm-mcpu TARGET_LLVM_MCPU\n", " target llvm mcpu string\n", " --target-llvm-device TARGET_LLVM_DEVICE\n", " target llvm device string\n", " --target-llvm-runtime TARGET_LLVM_RUNTIME\n", " target llvm runtime string\n", " --target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP\n", " target llvm fast-math-arcp\n", " --target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC\n", " target llvm fast-math-reassoc\n", " --target-llvm-mabi TARGET_LLVM_MABI\n", " target llvm mabi string\n", " --target-llvm-keys TARGET_LLVM_KEYS\n", " target llvm keys options\n", " --target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN\n", " target llvm fast-math-nnan\n", 
"\n", "target hybrid:\n", " --target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE\n", " target hybrid from_device\n", " --target-hybrid-libs TARGET_HYBRID_LIBS\n", " target hybrid libs options\n", " --target-hybrid-model TARGET_HYBRID_MODEL\n", " target hybrid model string\n", " --target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB\n", " target hybrid system-lib\n", " --target-hybrid-tag TARGET_HYBRID_TAG\n", " target hybrid tag string\n", " --target-hybrid-device TARGET_HYBRID_DEVICE\n", " target hybrid device string\n", " --target-hybrid-keys TARGET_HYBRID_KEYS\n", " target hybrid keys options\n", "\n", "target aocl:\n", " --target-aocl-from_device TARGET_AOCL_FROM_DEVICE\n", " target aocl from_device\n", " --target-aocl-libs TARGET_AOCL_LIBS\n", " target aocl libs options\n", " --target-aocl-model TARGET_AOCL_MODEL\n", " target aocl model string\n", " --target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB\n", " target aocl system-lib\n", " --target-aocl-tag TARGET_AOCL_TAG\n", " target aocl tag string\n", " --target-aocl-device TARGET_AOCL_DEVICE\n", " target aocl device string\n", " --target-aocl-keys TARGET_AOCL_KEYS\n", " target aocl keys options\n", "\n", "target nvptx:\n", " --target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS\n", " target nvptx max_num_threads\n", " --target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE\n", " target nvptx thread_warp_size\n", " --target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE\n", " target nvptx from_device\n", " --target-nvptx-libs TARGET_NVPTX_LIBS\n", " target nvptx libs options\n", " --target-nvptx-model TARGET_NVPTX_MODEL\n", " target nvptx model string\n", " --target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB\n", " target nvptx system-lib\n", " --target-nvptx-mtriple TARGET_NVPTX_MTRIPLE\n", " target nvptx mtriple string\n", " --target-nvptx-tag TARGET_NVPTX_TAG\n", " target nvptx tag string\n", " --target-nvptx-mcpu TARGET_NVPTX_MCPU\n", " target nvptx mcpu string\n", " --target-nvptx-device TARGET_NVPTX_DEVICE\n", " target nvptx device string\n", " --target-nvptx-keys TARGET_NVPTX_KEYS\n", " target nvptx keys options\n", "\n", "target opencl:\n", " --target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS\n", " target opencl max_num_threads\n", " --target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE\n", " target opencl thread_warp_size\n", " --target-opencl-from_device TARGET_OPENCL_FROM_DEVICE\n", " target opencl from_device\n", " --target-opencl-libs TARGET_OPENCL_LIBS\n", " target opencl libs options\n", " --target-opencl-model TARGET_OPENCL_MODEL\n", " target opencl model string\n", " --target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB\n", " target opencl system-lib\n", " --target-opencl-tag TARGET_OPENCL_TAG\n", " target opencl tag string\n", " --target-opencl-device TARGET_OPENCL_DEVICE\n", " target opencl device string\n", " --target-opencl-keys TARGET_OPENCL_KEYS\n", " target opencl keys options\n", "\n", "target metal:\n", " --target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS\n", " target metal max_num_threads\n", " --target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE\n", " target metal thread_warp_size\n", " --target-metal-from_device TARGET_METAL_FROM_DEVICE\n", " target metal from_device\n", " --target-metal-libs TARGET_METAL_LIBS\n", " target metal libs options\n", " --target-metal-keys TARGET_METAL_KEYS\n", " target metal keys options\n", " --target-metal-model TARGET_METAL_MODEL\n", " target metal model string\n", " --target-metal-system-lib TARGET_METAL_SYSTEM_LIB\n", " target metal 
system-lib\n", " --target-metal-tag TARGET_METAL_TAG\n", " target metal tag string\n", " --target-metal-device TARGET_METAL_DEVICE\n", " target metal device string\n", " --target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS\n", " target metal max_function_args\n", "\n", "target webgpu:\n", " --target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS\n", " target webgpu max_num_threads\n", " --target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE\n", " target webgpu from_device\n", " --target-webgpu-libs TARGET_WEBGPU_LIBS\n", " target webgpu libs options\n", " --target-webgpu-model TARGET_WEBGPU_MODEL\n", " target webgpu model string\n", " --target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB\n", " target webgpu system-lib\n", " --target-webgpu-tag TARGET_WEBGPU_TAG\n", " target webgpu tag string\n", " --target-webgpu-device TARGET_WEBGPU_DEVICE\n", " target webgpu device string\n", " --target-webgpu-keys TARGET_WEBGPU_KEYS\n", " target webgpu keys options\n", "\n", "target rocm:\n", " --target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS\n", " target rocm max_num_threads\n", " --target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE\n", " target rocm thread_warp_size\n", " --target-rocm-from_device TARGET_ROCM_FROM_DEVICE\n", " target rocm from_device\n", " --target-rocm-libs TARGET_ROCM_LIBS\n", " target rocm libs options\n", " --target-rocm-mattr TARGET_ROCM_MATTR\n", " target rocm mattr options\n", " --target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK\n", " target rocm max_shared_memory_per_block\n", " --target-rocm-model TARGET_ROCM_MODEL\n", " target rocm model string\n", " --target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB\n", " target rocm system-lib\n", " --target-rocm-mtriple TARGET_ROCM_MTRIPLE\n", " target rocm mtriple string\n", " --target-rocm-tag TARGET_ROCM_TAG\n", " target rocm tag string\n", " --target-rocm-device TARGET_ROCM_DEVICE\n", " target rocm device string\n", " --target-rocm-mcpu TARGET_ROCM_MCPU\n", " target rocm mcpu string\n", " --target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK\n", " target rocm max_threads_per_block\n", " --target-rocm-keys TARGET_ROCM_KEYS\n", " target rocm keys options\n", "\n", "target vulkan:\n", " --target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS\n", " target vulkan max_num_threads\n", " --target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE\n", " target vulkan thread_warp_size\n", " --target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE\n", " target vulkan from_device\n", " --target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER\n", " target vulkan max_per_stage_descriptor_storage_buffer\n", " --target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION\n", " target vulkan driver_version\n", " --target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER\n", " target vulkan supports_16bit_buffer\n", " --target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z\n", " target vulkan max_block_size_z\n", " --target-vulkan-libs TARGET_VULKAN_LIBS\n", " target vulkan libs options\n", " --target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION\n", " target vulkan supports_dedicated_allocation\n", " --target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS\n", " target vulkan supported_subgroup_operations\n", " --target-vulkan-mattr TARGET_VULKAN_MATTR\n", " target vulkan mattr options\n", " --target-vulkan-max_storage_buffer_range 
TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE\n", " target vulkan max_storage_buffer_range\n", " --target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE\n", " target vulkan max_push_constants_size\n", " --target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR\n", " target vulkan supports_push_descriptor\n", " --target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64\n", " target vulkan supports_int64\n", " --target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32\n", " target vulkan supports_float32\n", " --target-vulkan-model TARGET_VULKAN_MODEL\n", " target vulkan model string\n", " --target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X\n", " target vulkan max_block_size_x\n", " --target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB\n", " target vulkan system-lib\n", " --target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y\n", " target vulkan max_block_size_y\n", " --target-vulkan-tag TARGET_VULKAN_TAG\n", " target vulkan tag string\n", " --target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8\n", " target vulkan supports_int8\n", " --target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION\n", " target vulkan max_spirv_version\n", " --target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION\n", " target vulkan vulkan_api_version\n", " --target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER\n", " target vulkan supports_8bit_buffer\n", " --target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE\n", " target vulkan device_type string\n", " --target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32\n", " target vulkan supports_int32\n", " --target-vulkan-device TARGET_VULKAN_DEVICE\n", " target vulkan device string\n", " --target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK\n", " target vulkan max_threads_per_block\n", " --target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE\n", " target vulkan max_uniform_buffer_range\n", " --target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME\n", " target vulkan driver_name string\n", " --target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT\n", " target vulkan supports_integer_dot_product\n", " --target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS\n", " target vulkan supports_storage_buffer_storage_class\n", " --target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16\n", " target vulkan supports_float16\n", " --target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME\n", " target vulkan device_name string\n", " --target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64\n", " target vulkan supports_float64\n", " --target-vulkan-keys TARGET_VULKAN_KEYS\n", " target vulkan keys options\n", " --target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK\n", " target vulkan max_shared_memory_per_block\n", " --target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16\n", " target vulkan supports_int16\n", "\n", "target cuda:\n", " --target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS\n", " target cuda max_num_threads\n", " --target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE\n", " target cuda thread_warp_size\n", " --target-cuda-from_device TARGET_CUDA_FROM_DEVICE\n", " target cuda from_device\n", " --target-cuda-arch TARGET_CUDA_ARCH\n", " target cuda arch string\n", " --target-cuda-libs TARGET_CUDA_LIBS\n", " target cuda libs options\n", " --target-cuda-max_shared_memory_per_block 
TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK\n", " target cuda max_shared_memory_per_block\n", " --target-cuda-model TARGET_CUDA_MODEL\n", " target cuda model string\n", " --target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB\n", " target cuda system-lib\n", " --target-cuda-tag TARGET_CUDA_TAG\n", " target cuda tag string\n", " --target-cuda-device TARGET_CUDA_DEVICE\n", " target cuda device string\n", " --target-cuda-mcpu TARGET_CUDA_MCPU\n", " target cuda mcpu string\n", " --target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK\n", " target cuda max_threads_per_block\n", " --target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK\n", " target cuda registers_per_block\n", " --target-cuda-keys TARGET_CUDA_KEYS\n", " target cuda keys options\n", "\n", "target sdaccel:\n", " --target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE\n", " target sdaccel from_device\n", " --target-sdaccel-libs TARGET_SDACCEL_LIBS\n", " target sdaccel libs options\n", " --target-sdaccel-model TARGET_SDACCEL_MODEL\n", " target sdaccel model string\n", " --target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB\n", " target sdaccel system-lib\n", " --target-sdaccel-tag TARGET_SDACCEL_TAG\n", " target sdaccel tag string\n", " --target-sdaccel-device TARGET_SDACCEL_DEVICE\n", " target sdaccel device string\n", " --target-sdaccel-keys TARGET_SDACCEL_KEYS\n", " target sdaccel keys options\n", "\n", "target composite:\n", " --target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE\n", " target composite from_device\n", " --target-composite-libs TARGET_COMPOSITE_LIBS\n", " target composite libs options\n", " --target-composite-devices TARGET_COMPOSITE_DEVICES\n", " target composite devices options\n", " --target-composite-model TARGET_COMPOSITE_MODEL\n", " target composite model string\n", " --target-composite-tag TARGET_COMPOSITE_TAG\n", " target composite tag string\n", " --target-composite-device TARGET_COMPOSITE_DEVICE\n", " target composite device string\n", " --target-composite-keys TARGET_COMPOSITE_KEYS\n", " target composite keys options\n", "\n", "target stackvm:\n", " --target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE\n", " target stackvm from_device\n", " --target-stackvm-libs TARGET_STACKVM_LIBS\n", " target stackvm libs options\n", " --target-stackvm-model TARGET_STACKVM_MODEL\n", " target stackvm model string\n", " --target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB\n", " target stackvm system-lib\n", " --target-stackvm-tag TARGET_STACKVM_TAG\n", " target stackvm tag string\n", " --target-stackvm-device TARGET_STACKVM_DEVICE\n", " target stackvm device string\n", " --target-stackvm-keys TARGET_STACKVM_KEYS\n", " target stackvm keys options\n", "\n", "target aocl_sw_emu:\n", " --target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE\n", " target aocl_sw_emu from_device\n", " --target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS\n", " target aocl_sw_emu libs options\n", " --target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL\n", " target aocl_sw_emu model string\n", " --target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB\n", " target aocl_sw_emu system-lib\n", " --target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG\n", " target aocl_sw_emu tag string\n", " --target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE\n", " target aocl_sw_emu device string\n", " --target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS\n", " target aocl_sw_emu keys options\n", "\n", "target c:\n", " --target-c-unpacked-api TARGET_C_UNPACKED_API\n", " target c unpacked-api\n", " --target-c-from_device 
TARGET_C_FROM_DEVICE\n", " target c from_device\n", " --target-c-libs TARGET_C_LIBS\n", " target c libs options\n", " --target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT\n", " target c constants-byte-alignment\n", " --target-c-executor TARGET_C_EXECUTOR\n", " target c executor string\n", " --target-c-link-params TARGET_C_LINK_PARAMS\n", " target c link-params\n", " --target-c-model TARGET_C_MODEL\n", " target c model string\n", " --target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT\n", " target c workspace-byte-alignment\n", " --target-c-system-lib TARGET_C_SYSTEM_LIB\n", " target c system-lib\n", " --target-c-tag TARGET_C_TAG\n", " target c tag string\n", " --target-c-interface-api TARGET_C_INTERFACE_API\n", " target c interface-api string\n", " --target-c-mcpu TARGET_C_MCPU\n", " target c mcpu string\n", " --target-c-device TARGET_C_DEVICE\n", " target c device string\n", " --target-c-runtime TARGET_C_RUNTIME\n", " target c runtime string\n", " --target-c-keys TARGET_C_KEYS\n", " target c keys options\n", " --target-c-march TARGET_C_MARCH\n", " target c march string\n", "\n", "target hexagon:\n", " --target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE\n", " target hexagon from_device\n", " --target-hexagon-libs TARGET_HEXAGON_LIBS\n", " target hexagon libs options\n", " --target-hexagon-mattr TARGET_HEXAGON_MATTR\n", " target hexagon mattr options\n", " --target-hexagon-model TARGET_HEXAGON_MODEL\n", " target hexagon model string\n", " --target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS\n", " target hexagon llvm-options options\n", " --target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE\n", " target hexagon mtriple string\n", " --target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB\n", " target hexagon system-lib\n", " --target-hexagon-mcpu TARGET_HEXAGON_MCPU\n", " target hexagon mcpu string\n", " --target-hexagon-device TARGET_HEXAGON_DEVICE\n", " target hexagon device string\n", " --target-hexagon-tag TARGET_HEXAGON_TAG\n", " target hexagon tag string\n", " --target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS\n", " target hexagon link-params\n", " --target-hexagon-keys TARGET_HEXAGON_KEYS\n", " target hexagon keys options\n", "\n", "executor graph:\n", " --executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS\n", " Executor graph link-params\n", "\n", "executor aot:\n", " --executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT\n", " Executor aot workspace-byte-alignment\n", " --executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API\n", " Executor aot unpacked-api\n", " --executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API\n", " Executor aot interface-api string\n", " --executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS\n", " Executor aot link-params\n", "\n", "runtime cpp:\n", " --runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB\n", " Runtime cpp system-lib\n", "\n", "runtime crt:\n", " --runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB\n", " Runtime crt system-lib\n" ] } ], "source": [ "!python -m tvm.driver.tvmc compile --help" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} 为 TVM 添加 ONNX 支持\n", "TVM 依赖于你系统中的 ONNX python 库。你可以使用 ``pip3 install --user onnx onnxoptimizer`` 命令来安装 ONNX。如果你有 root 权限并且想全局安装 ONNX,你可以去掉 ``--user`` 选项。对 ``onnxoptimizer`` 的依赖是可选的,仅用于 ``onnx>=1.9``。\n", "```\n", "\n", "## 将 ONNX 模型编译到 TVM 运行时中\n", "\n", "一旦下载了 ResNet-50 模型,下一步就是对其进行编译。为了达到这个目的,将使用 ``tvmc compile``。从编译过程中得到的输出是模型的 TAR 包,它被编译成目标平台的动态库。可以使用 TVM 运行时在目标设备上运行该模型。" ] }, { "cell_type": "code", "execution_count": 5, 
"metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.\n" ] } ], "source": [ "# 这可能需要几分钟的时间,取决于你的机器\n", "!python -m tvm.driver.tvmc compile --target \"llvm\" \\\n", " --output resnet50-v2-7-tvm.tar \\\n", " ../../_models/resnet50-v2-7.onnx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "查看 ``tvmc compile`` 在 module 中创建的文件:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "mod.so\n", "mod.json\n", "mod.params\n" ] } ], "source": [ "%%bash\n", "mkdir model\n", "tar -xvf resnet50-v2-7-tvm.tar -C model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "列出了三个文件:\n", "\n", "* ``mod.so`` 是模型,表示为 C++ 库,可以被 TVM 运行时加载。\n", "* ``mod.json`` 是 TVM Relay 计算图的文本表示。\n", "* ``mod.params`` 是包含预训练模型参数的文件。\n", "\n", "该 module 可以被你的应用程序直接加载,而 model 可以通过 TVM 运行时 API 运行。\n", "\n", "```{admonition} 定义正确的 target\n", "指定正确的目标(选项 ``--target``)可以对编译后的模块的性能产生巨大的影响,因为它可以利用目标上可用的硬件特性。\n", " \n", "欲了解更多信息,请参考 [为 x86 CPU 自动调优卷积网络](tune_relay_x86)。建议确定你运行的是哪种 CPU,以及可选的功能,并适当地设置目标。\n", "```\n", "\n", "## 用 TVMC 从编译的模块中运行模型\n", "\n", "已经将模型编译到模块,可以使用 TVM 运行时来进行预测。\n", "\n", "\n", "TVMC 内置了 TVM 运行时,允许你运行编译的 TVM 模型。为了使用 TVMC 来运行模型并进行预测,需要两样东西:\n", "\n", "- 编译后的模块,我们刚刚生成出来。\n", "- 对模型的有效输入,以进行预测。\n", "\n", "当涉及到预期的张量形状、格式和数据类型时,每个模型都很特别。出于这个原因,大多数模型需要一些预处理和后处理,以确保输入是有效的,并解释输出结果。TVMC 对输入和输出数据都采用了 NumPy 的 ``.npz`` 格式。这是得到良好支持的 NumPy 格式,可以将多个数组序列化为文件。\n", "\n", "作为本教程的输入,将使用一只猫的图像,但你可以自由地用你选择的任何图像来代替这个图像。\n", "\n", "### 输入预处理\n", "\n", "对于 ResNet-50 v2 模型,预期输入是 ImageNet 格式的。下面是为 ResNet-50 v2 预处理图像的脚本例子。\n", "\n", "你将需要安装支持的 Python 图像库的版本。你可以使用 ``pip3 install --user pillow`` 来满足脚本的这个要求。" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "#!python ./preprocess.py\n", "from tvm.contrib.download import download_testdata\n", "from PIL import Image\n", "import numpy as np\n", "\n", "img_url = \"https://s3.amazonaws.com/model-server/inputs/kitten.jpg\"\n", "img_path = download_testdata(img_url, \"imagenet_cat.png\", module=\"data\")\n", "\n", "# Resize it to 224x224\n", "resized_image = Image.open(img_path).resize((224, 224))\n", "img_data = np.asarray(resized_image).astype(\"float32\")\n", "\n", "# ONNX expects NCHW input, so convert the array\n", "img_data = np.transpose(img_data, (2, 0, 1))\n", "\n", "# Normalize according to ImageNet\n", "imagenet_mean = np.array([0.485, 0.456, 0.406])\n", "imagenet_stddev = np.array([0.229, 0.224, 0.225])\n", "norm_img_data = np.zeros(img_data.shape).astype(\"float32\")\n", "for i in range(img_data.shape[0]):\n", " norm_img_data[i, :, :] = (img_data[i, :, :] / 255 - imagenet_mean[i]) / imagenet_stddev[i]\n", "\n", "# Add batch dimension\n", "img_data = np.expand_dims(norm_img_data, axis=0)\n", "\n", "# Save to .npz (outputs imagenet_cat.npz)\n", "np.savez(\"imagenet_cat\", data=img_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 运行已编译的模块\n", "\n", "有了模型和输入数据,现在可以运行 TVMC 来做预测:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "!python -m tvm.driver.tvmc run \\\n", " --inputs imagenet_cat.npz \\\n", " --output predictions.npz \\\n", " resnet50-v2-7-tvm.tar" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "回顾一下, ``.tar`` 模型文件包括 C++ 库,对 Relay 模型的描述,以及模型的参数。TVMC 包括 TVM 运行时,它可以加载模型并根据输入进行预测。当运行上述命令时,TVMC 
会输出新文件,``predictions.npz``,其中包含 NumPy 格式的模型输出张量。\n", "\n", "在这个例子中,在用于编译的同一台机器上运行该模型。在某些情况下,可能想通过 RPC Tracker 远程运行它。要阅读更多关于这些选项的信息,请查看:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "usage: tvmc run [-h] [--device {cpu,cuda,cl,metal,vulkan,rocm,micro}]\n", " [--fill-mode {zeros,ones,random}] [-i INPUTS] [-o OUTPUTS]\n", " [--print-time] [--print-top N] [--profile] [--end-to-end]\n", " [--repeat N] [--number N] [--rpc-key RPC_KEY]\n", " [--rpc-tracker RPC_TRACKER] [--list-options]\n", " PATH\n", "\n", "positional arguments:\n", " PATH path to the compiled module file or to the project\n", " directory if '--device micro' is selected.\n", "\n", "optional arguments:\n", " -h, --help show this help message and exit\n", " --device {cpu,cuda,cl,metal,vulkan,rocm,micro}\n", " target device to run the compiled module. Defaults to\n", " 'cpu'\n", " --fill-mode {zeros,ones,random}\n", " fill all input tensors with values. In case\n", " --inputs/-i is provided, they will take precedence\n", " over --fill-mode. Any remaining inputs will be filled\n", " using the chosen fill mode. Defaults to 'random'\n", " -i INPUTS, --inputs INPUTS\n", " path to the .npz input file\n", " -o OUTPUTS, --outputs OUTPUTS\n", " path to the .npz output file\n", " --print-time record and print the execution time(s). (non-micro\n", " devices only)\n", " --print-top N print the top n values and indices of the output\n", " tensor\n", " --profile generate profiling data from the runtime execution.\n", " Using --profile requires the Graph Executor Debug\n", " enabled on TVM. Profiling may also have an impact on\n", " inference time, making it take longer to be generated.\n", " (non-micro devices only)\n", " --end-to-end Measure data transfers as well as model execution.\n", " This can provide a more realistic performance\n", " measurement in many cases.\n", " --repeat N run the model n times. Defaults to '1'\n", " --number N repeat the run n times. Defaults to '1'\n", " --rpc-key RPC_KEY the RPC tracker key of the target device. (non-micro\n", " devices only)\n", " --rpc-tracker RPC_TRACKER\n", " hostname (required) and port (optional, defaults to\n", " 9090) of the RPC tracker, e.g. '192.168.0.100:9999'.\n", " (non-micro devices only)\n", " --list-options show all run options and option choices when '--device\n", " micro' is selected. 
(micro devices only)\n" ] } ], "source": [ "!python -m tvm.driver.tvmc run --help" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 输出后处理\n", "\n", "如前所述,每个模型都会有自己的特定方式来提供输出张量。\n", "\n", "需要运行一些后处理,利用为模型提供的查找表,将 ResNet-50 v2 的输出渲染成人类可读的形式。\n", "\n", "下面的脚本显示了后处理的例子,从编译的模块的输出中提取标签。\n", "\n", "运行这个脚本应该产生以下输出:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "class='n02123045 tabby, tabby cat' with probability=0.621104\n", "class='n02123159 tiger cat' with probability=0.356378\n", "class='n02124075 Egyptian cat' with probability=0.019712\n", "class='n02129604 tiger, Panthera tigris' with probability=0.001215\n", "class='n04040759 radiator' with probability=0.000262\n" ] } ], "source": [ "#!python ./postprocess.py\n", "import os.path\n", "import numpy as np\n", "\n", "from scipy.special import softmax\n", "\n", "from tvm.contrib.download import download_testdata\n", "\n", "# Download a list of labels\n", "labels_url = \"https://s3.amazonaws.com/onnx-model-zoo/synset.txt\"\n", "labels_path = download_testdata(labels_url, \"synset.txt\", module=\"data\")\n", "\n", "with open(labels_path, \"r\") as f:\n", " labels = [l.rstrip() for l in f]\n", "\n", "output_file = \"predictions.npz\"\n", "\n", "# Open the output and read the output tensor\n", "if os.path.exists(output_file):\n", " with np.load(output_file) as data:\n", " scores = softmax(data[\"output_0\"])\n", " scores = np.squeeze(scores)\n", " ranks = np.argsort(scores)[::-1]\n", "\n", " for rank in ranks[0:5]:\n", " print(\"class='%s' with probability=%f\" % (labels[rank], scores[rank]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "试着用其他图像替换猫的图像,看看 ResNet 模型会做出什么样的预测。\n", "\n", "## 自动调优 ResNet 模型\n", "\n", "之前的模型是为了在 TVM 运行时工作而编译的,但不包括任何特定平台的优化。在本节中,将展示如何使用 TVMC 建立针对你工作平台的优化模型。\n", "\n", "在某些情况下,当使用编译模块运行推理时,可能无法获得预期的性能。在这种情况下,可以利用自动调优器,为模型找到更好的配置,获得性能的提升。TVM 中的调优是指对模型进行优化以在给定目标上更快地运行的过程。这与训练或微调不同,因为它不影响模型的准确性,而只影响运行时的性能。作为调优过程的一部分,TVM 将尝试运行许多不同的运算器实现变体,以观察哪些算子表现最佳。这些运行的结果被存储在调优记录文件中,这最终是 ``tune`` 子命令的输出。\n", "\n", "在最简单的形式下,调优要求你提供三样东西:\n", "\n", "- 你打算在这个模型上运行的设备的目标规格\n", "- 输出文件的路径,调优记录将被保存在该文件中\n", "- 最后是要调优的模型的路径。\n", "\n", "默认搜索算法需要 `xgboost`,请参阅下面关于优化搜索算法的详细信息:\n", "\n", "```bash\n", "pip install xgboost cloudpickle\n", "```\n", "\n", "下面的例子展示了这一做法的实际效果:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/media/pc/data/4tb/lxw/anaconda3/envs/mx39/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. 
Use pandas.Index with the appropriate dtype instead.\n", " from pandas import MultiIndex, Int64Index\n", "[Task 1/25] Current/Best: 139.87/ 252.51 GFLOPS | Progress: (40/40) | 20.88 s Done.\n", "[Task 2/25] Current/Best: 42.44/ 183.76 GFLOPS | Progress: (40/40) | 11.12 s Done.\n", "[Task 3/25] Current/Best: 176.21/ 215.65 GFLOPS | Progress: (40/40) | 11.55 s Done.\n", "[Task 4/25] Current/Best: 113.94/ 160.83 GFLOPS | Progress: (40/40) | 13.36 s Done.\n", "[Task 5/25] Current/Best: 120.38/ 164.05 GFLOPS | Progress: (40/40) | 12.15 s Done.\n", "[Task 6/25] Current/Best: 103.44/ 188.69 GFLOPS | Progress: (40/40) | 12.60 s Done.\n", "[Task 7/25] Current/Best: 137.09/ 204.00 GFLOPS | Progress: (40/40) | 11.36 s Done.\n", "[Task 8/25] Current/Best: 99.24/ 195.34 GFLOPS | Progress: (40/40) | 18.87 s Done.\n", "[Task 9/25] Current/Best: 70.21/ 189.30 GFLOPS | Progress: (40/40) | 19.84 s Done.\n", "[Task 10/25] Current/Best: 139.57/ 150.27 GFLOPS | Progress: (40/40) | 11.81 s Done.\n", "[Task 11/25] Current/Best: 136.51/ 192.55 GFLOPS | Progress: (40/40) | 11.38 s Done.\n", "[Task 12/25] Current/Best: 127.62/ 216.62 GFLOPS | Progress: (40/40) | 15.05 s Done.\n", "[Task 13/25] Current/Best: 76.30/ 237.37 GFLOPS | Progress: (40/40) | 12.29 s Done.\n", "[Task 14/25] Current/Best: 67.69/ 197.50 GFLOPS | Progress: (40/40) | 17.04 s Done.\n", "[Task 16/25] Current/Best: 57.91/ 200.78 GFLOPS | Progress: (40/40) | 12.76 s Done.\n", "[Task 17/25] Current/Best: 172.88/ 267.60 GFLOPS | Progress: (40/40) | 12.21 s Done.\n", "[Task 18/25] Current/Best: 164.30/ 195.15 GFLOPS | Progress: (40/40) | 18.82 s Done.\n", "[Task 19/25] Current/Best: 122.30/ 209.99 GFLOPS | Progress: (40/40) | 14.50 s Done.\n", "[Task 22/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/40) | 0.00 s s Done.\n", " Done.\n", " Done.\n", "[Task 22/25] Current/Best: 69.31/ 177.25 GFLOPS | Progress: (40/40) | 12.39 s Done.\n", "[Task 23/25] Current/Best: 92.92/ 185.29 GFLOPS | Progress: (40/40) | 13.99 s Done.\n", "[Task 25/25] Current/Best: 18.40/ 84.62 GFLOPS | Progress: (40/40) | 20.26 s Done.\n", " Done.\n" ] } ], "source": [ "!python -m tvm.driver.tvmc tune --target \"llvm\" \\\n", " --output resnet50-v2-7-autotuner_records.json \\\n", " ../../_models/resnet50-v2-7.onnx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "在这个例子中,如果你为 ``--target`` 标志指出更具体的目标,你会看到更好的结果。\n", "\n", "TVMC 将对模型的参数空间进行搜索,尝试不同的运算符配置,并选择在你的平台上运行最快的一个。尽管这是基于 CPU 和模型操作的指导性搜索,但仍可能需要几个小时来完成搜索。这个搜索的输出将被保存到 ``resnet50-v2-7-autotuner_records.json`` 文件中,以后将被用来编译优化的模型。\n", "\n", "```{admonition} 定义调优搜索算法\n", "默认情况下,这种搜索是使用 ``XGBoost Grid`` 算法引导的。根据你的模型的复杂性和可利用的时间,你可能想选择不同的算法。完整的列表可以通过查阅:\n", "```" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "usage: tvmc tune [-h] [--early-stopping EARLY_STOPPING]\n", " [--min-repeat-ms MIN_REPEAT_MS]\n", " [--model-format {keras,onnx,pb,tflite,pytorch,paddle}]\n", " [--number NUMBER] -o OUTPUT [--parallel PARALLEL]\n", " [--repeat REPEAT] [--rpc-key RPC_KEY]\n", " [--rpc-tracker RPC_TRACKER] [--target TARGET]\n", " [--target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE]\n", " [--target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS]\n", " [--target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL]\n", " [--target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG]\n", " [--target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE]\n", " [--target-example_target_hook-keys 
TARGET_EXAMPLE_TARGET_HOOK_KEYS]\n", " [--target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE]\n", " [--target-ext_dev-libs TARGET_EXT_DEV_LIBS]\n", " [--target-ext_dev-model TARGET_EXT_DEV_MODEL]\n", " [--target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB]\n", " [--target-ext_dev-tag TARGET_EXT_DEV_TAG]\n", " [--target-ext_dev-device TARGET_EXT_DEV_DEVICE]\n", " [--target-ext_dev-keys TARGET_EXT_DEV_KEYS]\n", " [--target-llvm-fast-math TARGET_LLVM_FAST_MATH]\n", " [--target-llvm-opt-level TARGET_LLVM_OPT_LEVEL]\n", " [--target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API]\n", " [--target-llvm-from_device TARGET_LLVM_FROM_DEVICE]\n", " [--target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF]\n", " [--target-llvm-mattr TARGET_LLVM_MATTR]\n", " [--target-llvm-num-cores TARGET_LLVM_NUM_CORES]\n", " [--target-llvm-libs TARGET_LLVM_LIBS]\n", " [--target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ]\n", " [--target-llvm-link-params TARGET_LLVM_LINK_PARAMS]\n", " [--target-llvm-interface-api TARGET_LLVM_INTERFACE_API]\n", " [--target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT]\n", " [--target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB]\n", " [--target-llvm-tag TARGET_LLVM_TAG]\n", " [--target-llvm-mtriple TARGET_LLVM_MTRIPLE]\n", " [--target-llvm-model TARGET_LLVM_MODEL]\n", " [--target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI]\n", " [--target-llvm-mcpu TARGET_LLVM_MCPU]\n", " [--target-llvm-device TARGET_LLVM_DEVICE]\n", " [--target-llvm-runtime TARGET_LLVM_RUNTIME]\n", " [--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP]\n", " [--target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC]\n", " [--target-llvm-mabi TARGET_LLVM_MABI]\n", " [--target-llvm-keys TARGET_LLVM_KEYS]\n", " [--target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN]\n", " [--target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE]\n", " [--target-hybrid-libs TARGET_HYBRID_LIBS]\n", " [--target-hybrid-model TARGET_HYBRID_MODEL]\n", " [--target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB]\n", " [--target-hybrid-tag TARGET_HYBRID_TAG]\n", " [--target-hybrid-device TARGET_HYBRID_DEVICE]\n", " [--target-hybrid-keys TARGET_HYBRID_KEYS]\n", " [--target-aocl-from_device TARGET_AOCL_FROM_DEVICE]\n", " [--target-aocl-libs TARGET_AOCL_LIBS]\n", " [--target-aocl-model TARGET_AOCL_MODEL]\n", " [--target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB]\n", " [--target-aocl-tag TARGET_AOCL_TAG]\n", " [--target-aocl-device TARGET_AOCL_DEVICE]\n", " [--target-aocl-keys TARGET_AOCL_KEYS]\n", " [--target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS]\n", " [--target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE]\n", " [--target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE]\n", " [--target-nvptx-libs TARGET_NVPTX_LIBS]\n", " [--target-nvptx-model TARGET_NVPTX_MODEL]\n", " [--target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB]\n", " [--target-nvptx-mtriple TARGET_NVPTX_MTRIPLE]\n", " [--target-nvptx-tag TARGET_NVPTX_TAG]\n", " [--target-nvptx-mcpu TARGET_NVPTX_MCPU]\n", " [--target-nvptx-device TARGET_NVPTX_DEVICE]\n", " [--target-nvptx-keys TARGET_NVPTX_KEYS]\n", " [--target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS]\n", " [--target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE]\n", " [--target-opencl-from_device TARGET_OPENCL_FROM_DEVICE]\n", " [--target-opencl-libs TARGET_OPENCL_LIBS]\n", " [--target-opencl-model TARGET_OPENCL_MODEL]\n", " [--target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB]\n", " [--target-opencl-tag TARGET_OPENCL_TAG]\n", " [--target-opencl-device TARGET_OPENCL_DEVICE]\n", " 
[--target-opencl-keys TARGET_OPENCL_KEYS]\n", " [--target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS]\n", " [--target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE]\n", " [--target-metal-from_device TARGET_METAL_FROM_DEVICE]\n", " [--target-metal-libs TARGET_METAL_LIBS]\n", " [--target-metal-keys TARGET_METAL_KEYS]\n", " [--target-metal-model TARGET_METAL_MODEL]\n", " [--target-metal-system-lib TARGET_METAL_SYSTEM_LIB]\n", " [--target-metal-tag TARGET_METAL_TAG]\n", " [--target-metal-device TARGET_METAL_DEVICE]\n", " [--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS]\n", " [--target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS]\n", " [--target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE]\n", " [--target-webgpu-libs TARGET_WEBGPU_LIBS]\n", " [--target-webgpu-model TARGET_WEBGPU_MODEL]\n", " [--target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB]\n", " [--target-webgpu-tag TARGET_WEBGPU_TAG]\n", " [--target-webgpu-device TARGET_WEBGPU_DEVICE]\n", " [--target-webgpu-keys TARGET_WEBGPU_KEYS]\n", " [--target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS]\n", " [--target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE]\n", " [--target-rocm-from_device TARGET_ROCM_FROM_DEVICE]\n", " [--target-rocm-libs TARGET_ROCM_LIBS]\n", " [--target-rocm-mattr TARGET_ROCM_MATTR]\n", " [--target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-rocm-model TARGET_ROCM_MODEL]\n", " [--target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB]\n", " [--target-rocm-mtriple TARGET_ROCM_MTRIPLE]\n", " [--target-rocm-tag TARGET_ROCM_TAG]\n", " [--target-rocm-device TARGET_ROCM_DEVICE]\n", " [--target-rocm-mcpu TARGET_ROCM_MCPU]\n", " [--target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK]\n", " [--target-rocm-keys TARGET_ROCM_KEYS]\n", " [--target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS]\n", " [--target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE]\n", " [--target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE]\n", " [--target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER]\n", " [--target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION]\n", " [--target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER]\n", " [--target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z]\n", " [--target-vulkan-libs TARGET_VULKAN_LIBS]\n", " [--target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION]\n", " [--target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS]\n", " [--target-vulkan-mattr TARGET_VULKAN_MATTR]\n", " [--target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE]\n", " [--target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE]\n", " [--target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR]\n", " [--target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64]\n", " [--target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32]\n", " [--target-vulkan-model TARGET_VULKAN_MODEL]\n", " [--target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X]\n", " [--target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB]\n", " [--target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y]\n", " [--target-vulkan-tag TARGET_VULKAN_TAG]\n", " [--target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8]\n", " [--target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION]\n", " [--target-vulkan-vulkan_api_version 
TARGET_VULKAN_VULKAN_API_VERSION]\n", " [--target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER]\n", " [--target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE]\n", " [--target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32]\n", " [--target-vulkan-device TARGET_VULKAN_DEVICE]\n", " [--target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK]\n", " [--target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE]\n", " [--target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME]\n", " [--target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT]\n", " [--target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS]\n", " [--target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16]\n", " [--target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME]\n", " [--target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64]\n", " [--target-vulkan-keys TARGET_VULKAN_KEYS]\n", " [--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16]\n", " [--target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS]\n", " [--target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE]\n", " [--target-cuda-from_device TARGET_CUDA_FROM_DEVICE]\n", " [--target-cuda-arch TARGET_CUDA_ARCH]\n", " [--target-cuda-libs TARGET_CUDA_LIBS]\n", " [--target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-cuda-model TARGET_CUDA_MODEL]\n", " [--target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB]\n", " [--target-cuda-tag TARGET_CUDA_TAG]\n", " [--target-cuda-device TARGET_CUDA_DEVICE]\n", " [--target-cuda-mcpu TARGET_CUDA_MCPU]\n", " [--target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK]\n", " [--target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK]\n", " [--target-cuda-keys TARGET_CUDA_KEYS]\n", " [--target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE]\n", " [--target-sdaccel-libs TARGET_SDACCEL_LIBS]\n", " [--target-sdaccel-model TARGET_SDACCEL_MODEL]\n", " [--target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB]\n", " [--target-sdaccel-tag TARGET_SDACCEL_TAG]\n", " [--target-sdaccel-device TARGET_SDACCEL_DEVICE]\n", " [--target-sdaccel-keys TARGET_SDACCEL_KEYS]\n", " [--target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE]\n", " [--target-composite-libs TARGET_COMPOSITE_LIBS]\n", " [--target-composite-devices TARGET_COMPOSITE_DEVICES]\n", " [--target-composite-model TARGET_COMPOSITE_MODEL]\n", " [--target-composite-tag TARGET_COMPOSITE_TAG]\n", " [--target-composite-device TARGET_COMPOSITE_DEVICE]\n", " [--target-composite-keys TARGET_COMPOSITE_KEYS]\n", " [--target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE]\n", " [--target-stackvm-libs TARGET_STACKVM_LIBS]\n", " [--target-stackvm-model TARGET_STACKVM_MODEL]\n", " [--target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB]\n", " [--target-stackvm-tag TARGET_STACKVM_TAG]\n", " [--target-stackvm-device TARGET_STACKVM_DEVICE]\n", " [--target-stackvm-keys TARGET_STACKVM_KEYS]\n", " [--target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE]\n", " [--target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS]\n", " [--target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL]\n", " [--target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB]\n", " [--target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG]\n", " [--target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE]\n", " [--target-aocl_sw_emu-keys 
TARGET_AOCL_SW_EMU_KEYS]\n", " [--target-c-unpacked-api TARGET_C_UNPACKED_API]\n", " [--target-c-from_device TARGET_C_FROM_DEVICE]\n", " [--target-c-libs TARGET_C_LIBS]\n", " [--target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT]\n", " [--target-c-executor TARGET_C_EXECUTOR]\n", " [--target-c-link-params TARGET_C_LINK_PARAMS]\n", " [--target-c-model TARGET_C_MODEL]\n", " [--target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT]\n", " [--target-c-system-lib TARGET_C_SYSTEM_LIB]\n", " [--target-c-tag TARGET_C_TAG]\n", " [--target-c-interface-api TARGET_C_INTERFACE_API]\n", " [--target-c-mcpu TARGET_C_MCPU]\n", " [--target-c-device TARGET_C_DEVICE]\n", " [--target-c-runtime TARGET_C_RUNTIME]\n", " [--target-c-keys TARGET_C_KEYS]\n", " [--target-c-march TARGET_C_MARCH]\n", " [--target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE]\n", " [--target-hexagon-libs TARGET_HEXAGON_LIBS]\n", " [--target-hexagon-mattr TARGET_HEXAGON_MATTR]\n", " [--target-hexagon-model TARGET_HEXAGON_MODEL]\n", " [--target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS]\n", " [--target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE]\n", " [--target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB]\n", " [--target-hexagon-mcpu TARGET_HEXAGON_MCPU]\n", " [--target-hexagon-device TARGET_HEXAGON_DEVICE]\n", " [--target-hexagon-tag TARGET_HEXAGON_TAG]\n", " [--target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS]\n", " [--target-hexagon-keys TARGET_HEXAGON_KEYS]\n", " [--target-host TARGET_HOST] [--timeout TIMEOUT]\n", " [--trials TRIALS] [--tuning-records PATH]\n", " [--desired-layout {NCHW,NHWC}] [--enable-autoscheduler]\n", " [--cache-line-bytes CACHE_LINE_BYTES] [--num-cores NUM_CORES]\n", " [--vector-unit-bytes VECTOR_UNIT_BYTES]\n", " [--max-shared-memory-per-block MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--max-local-memory-per-block MAX_LOCAL_MEMORY_PER_BLOCK]\n", " [--max-threads-per-block MAX_THREADS_PER_BLOCK]\n", " [--max-vthread-extent MAX_VTHREAD_EXTENT]\n", " [--warp-size WARP_SIZE] [--include-simple-tasks]\n", " [--log-estimated-latency]\n", " [--tuner {ga,gridsearch,random,xgb,xgb_knob,xgb-rank}]\n", " [--input-shapes INPUT_SHAPES]\n", " FILE\n", "\n", "positional arguments:\n", " FILE path to the input model file\n", "\n", "optional arguments:\n", " -h, --help show this help message and exit\n", " --early-stopping EARLY_STOPPING\n", " minimum number of trials before early stopping\n", " --min-repeat-ms MIN_REPEAT_MS\n", " minimum time to run each trial, in milliseconds.\n", " Defaults to 0 on x86 and 1000 on all other targets\n", " --model-format {keras,onnx,pb,tflite,pytorch,paddle}\n", " specify input model format\n", " --number NUMBER number of runs a single repeat is made of. The final\n", " number of tuning executions is: (1 + number * repeat)\n", " -o OUTPUT, --output OUTPUT\n", " output file to store the tuning records for the tuning\n", " process\n", " --parallel PARALLEL the maximum number of parallel devices to use when\n", " tuning\n", " --repeat REPEAT how many times to repeat each measurement\n", " --rpc-key RPC_KEY the RPC tracker key of the target device. Required\n", " when --rpc-tracker is provided.\n", " --rpc-tracker RPC_TRACKER\n", " hostname (required) and port (optional, defaults to\n", " 9090) of the RPC tracker, e.g. 
'192.168.0.100:9999'\n", " --target TARGET compilation target as plain string, inline JSON or\n", " path to a JSON file\n", " --target-host TARGET_HOST\n", " the host compilation target, defaults to None\n", " --timeout TIMEOUT compilation timeout, in seconds\n", " --trials TRIALS the maximum number of tuning trials to perform\n", " --tuning-records PATH\n", " path to an auto-tuning log file by AutoTVM.\n", " --desired-layout {NCHW,NHWC}\n", " change the data layout of the whole graph\n", " --enable-autoscheduler\n", " enable tuning the graph through the AutoScheduler\n", " tuner\n", " --input-shapes INPUT_SHAPES\n", " specify non-generic shapes for model to run, format is\n", " \"input_name:[dim1,dim2,...,dimn]\n", " input_name2:[dim1,dim2]\"\n", "\n", "target example_target_hook:\n", " --target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE\n", " target example_target_hook from_device\n", " --target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS\n", " target example_target_hook libs options\n", " --target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL\n", " target example_target_hook model string\n", " --target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG\n", " target example_target_hook tag string\n", " --target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE\n", " target example_target_hook device string\n", " --target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS\n", " target example_target_hook keys options\n", "\n", "target ext_dev:\n", " --target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE\n", " target ext_dev from_device\n", " --target-ext_dev-libs TARGET_EXT_DEV_LIBS\n", " target ext_dev libs options\n", " --target-ext_dev-model TARGET_EXT_DEV_MODEL\n", " target ext_dev model string\n", " --target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB\n", " target ext_dev system-lib\n", " --target-ext_dev-tag TARGET_EXT_DEV_TAG\n", " target ext_dev tag string\n", " --target-ext_dev-device TARGET_EXT_DEV_DEVICE\n", " target ext_dev device string\n", " --target-ext_dev-keys TARGET_EXT_DEV_KEYS\n", " target ext_dev keys options\n", "\n", "target llvm:\n", " --target-llvm-fast-math TARGET_LLVM_FAST_MATH\n", " target llvm fast-math\n", " --target-llvm-opt-level TARGET_LLVM_OPT_LEVEL\n", " target llvm opt-level\n", " --target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API\n", " target llvm unpacked-api\n", " --target-llvm-from_device TARGET_LLVM_FROM_DEVICE\n", " target llvm from_device\n", " --target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF\n", " target llvm fast-math-ninf\n", " --target-llvm-mattr TARGET_LLVM_MATTR\n", " target llvm mattr options\n", " --target-llvm-num-cores TARGET_LLVM_NUM_CORES\n", " target llvm num-cores\n", " --target-llvm-libs TARGET_LLVM_LIBS\n", " target llvm libs options\n", " --target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ\n", " target llvm fast-math-nsz\n", " --target-llvm-link-params TARGET_LLVM_LINK_PARAMS\n", " target llvm link-params\n", " --target-llvm-interface-api TARGET_LLVM_INTERFACE_API\n", " target llvm interface-api string\n", " --target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT\n", " target llvm fast-math-contract\n", " --target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB\n", " target llvm system-lib\n", " --target-llvm-tag TARGET_LLVM_TAG\n", " target llvm tag string\n", " --target-llvm-mtriple TARGET_LLVM_MTRIPLE\n", " target llvm mtriple string\n", " --target-llvm-model TARGET_LLVM_MODEL\n", " target llvm model string\n", " --target-llvm-mfloat-abi 
TARGET_LLVM_MFLOAT_ABI\n", " target llvm mfloat-abi string\n", " --target-llvm-mcpu TARGET_LLVM_MCPU\n", " target llvm mcpu string\n", " --target-llvm-device TARGET_LLVM_DEVICE\n", " target llvm device string\n", " --target-llvm-runtime TARGET_LLVM_RUNTIME\n", " target llvm runtime string\n", " --target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP\n", " target llvm fast-math-arcp\n", " --target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC\n", " target llvm fast-math-reassoc\n", " --target-llvm-mabi TARGET_LLVM_MABI\n", " target llvm mabi string\n", " --target-llvm-keys TARGET_LLVM_KEYS\n", " target llvm keys options\n", " --target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN\n", " target llvm fast-math-nnan\n", "\n", "target hybrid:\n", " --target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE\n", " target hybrid from_device\n", " --target-hybrid-libs TARGET_HYBRID_LIBS\n", " target hybrid libs options\n", " --target-hybrid-model TARGET_HYBRID_MODEL\n", " target hybrid model string\n", " --target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB\n", " target hybrid system-lib\n", " --target-hybrid-tag TARGET_HYBRID_TAG\n", " target hybrid tag string\n", " --target-hybrid-device TARGET_HYBRID_DEVICE\n", " target hybrid device string\n", " --target-hybrid-keys TARGET_HYBRID_KEYS\n", " target hybrid keys options\n", "\n", "target aocl:\n", " --target-aocl-from_device TARGET_AOCL_FROM_DEVICE\n", " target aocl from_device\n", " --target-aocl-libs TARGET_AOCL_LIBS\n", " target aocl libs options\n", " --target-aocl-model TARGET_AOCL_MODEL\n", " target aocl model string\n", " --target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB\n", " target aocl system-lib\n", " --target-aocl-tag TARGET_AOCL_TAG\n", " target aocl tag string\n", " --target-aocl-device TARGET_AOCL_DEVICE\n", " target aocl device string\n", " --target-aocl-keys TARGET_AOCL_KEYS\n", " target aocl keys options\n", "\n", "target nvptx:\n", " --target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS\n", " target nvptx max_num_threads\n", " --target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE\n", " target nvptx thread_warp_size\n", " --target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE\n", " target nvptx from_device\n", " --target-nvptx-libs TARGET_NVPTX_LIBS\n", " target nvptx libs options\n", " --target-nvptx-model TARGET_NVPTX_MODEL\n", " target nvptx model string\n", " --target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB\n", " target nvptx system-lib\n", " --target-nvptx-mtriple TARGET_NVPTX_MTRIPLE\n", " target nvptx mtriple string\n", " --target-nvptx-tag TARGET_NVPTX_TAG\n", " target nvptx tag string\n", " --target-nvptx-mcpu TARGET_NVPTX_MCPU\n", " target nvptx mcpu string\n", " --target-nvptx-device TARGET_NVPTX_DEVICE\n", " target nvptx device string\n", " --target-nvptx-keys TARGET_NVPTX_KEYS\n", " target nvptx keys options\n", "\n", "target opencl:\n", " --target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS\n", " target opencl max_num_threads\n", " --target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE\n", " target opencl thread_warp_size\n", " --target-opencl-from_device TARGET_OPENCL_FROM_DEVICE\n", " target opencl from_device\n", " --target-opencl-libs TARGET_OPENCL_LIBS\n", " target opencl libs options\n", " --target-opencl-model TARGET_OPENCL_MODEL\n", " target opencl model string\n", " --target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB\n", " target opencl system-lib\n", " --target-opencl-tag TARGET_OPENCL_TAG\n", " target opencl tag string\n", " --target-opencl-device TARGET_OPENCL_DEVICE\n", " 
target opencl device string\n", " --target-opencl-keys TARGET_OPENCL_KEYS\n", " target opencl keys options\n", "\n", "target metal:\n", " --target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS\n", " target metal max_num_threads\n", " --target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE\n", " target metal thread_warp_size\n", " --target-metal-from_device TARGET_METAL_FROM_DEVICE\n", " target metal from_device\n", " --target-metal-libs TARGET_METAL_LIBS\n", " target metal libs options\n", " --target-metal-keys TARGET_METAL_KEYS\n", " target metal keys options\n", " --target-metal-model TARGET_METAL_MODEL\n", " target metal model string\n", " --target-metal-system-lib TARGET_METAL_SYSTEM_LIB\n", " target metal system-lib\n", " --target-metal-tag TARGET_METAL_TAG\n", " target metal tag string\n", " --target-metal-device TARGET_METAL_DEVICE\n", " target metal device string\n", " --target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS\n", " target metal max_function_args\n", "\n", "target webgpu:\n", " --target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS\n", " target webgpu max_num_threads\n", " --target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE\n", " target webgpu from_device\n", " --target-webgpu-libs TARGET_WEBGPU_LIBS\n", " target webgpu libs options\n", " --target-webgpu-model TARGET_WEBGPU_MODEL\n", " target webgpu model string\n", " --target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB\n", " target webgpu system-lib\n", " --target-webgpu-tag TARGET_WEBGPU_TAG\n", " target webgpu tag string\n", " --target-webgpu-device TARGET_WEBGPU_DEVICE\n", " target webgpu device string\n", " --target-webgpu-keys TARGET_WEBGPU_KEYS\n", " target webgpu keys options\n", "\n", "target rocm:\n", " --target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS\n", " target rocm max_num_threads\n", " --target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE\n", " target rocm thread_warp_size\n", " --target-rocm-from_device TARGET_ROCM_FROM_DEVICE\n", " target rocm from_device\n", " --target-rocm-libs TARGET_ROCM_LIBS\n", " target rocm libs options\n", " --target-rocm-mattr TARGET_ROCM_MATTR\n", " target rocm mattr options\n", " --target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK\n", " target rocm max_shared_memory_per_block\n", " --target-rocm-model TARGET_ROCM_MODEL\n", " target rocm model string\n", " --target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB\n", " target rocm system-lib\n", " --target-rocm-mtriple TARGET_ROCM_MTRIPLE\n", " target rocm mtriple string\n", " --target-rocm-tag TARGET_ROCM_TAG\n", " target rocm tag string\n", " --target-rocm-device TARGET_ROCM_DEVICE\n", " target rocm device string\n", " --target-rocm-mcpu TARGET_ROCM_MCPU\n", " target rocm mcpu string\n", " --target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK\n", " target rocm max_threads_per_block\n", " --target-rocm-keys TARGET_ROCM_KEYS\n", " target rocm keys options\n", "\n", "target vulkan:\n", " --target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS\n", " target vulkan max_num_threads\n", " --target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE\n", " target vulkan thread_warp_size\n", " --target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE\n", " target vulkan from_device\n", " --target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER\n", " target vulkan max_per_stage_descriptor_storage_buffer\n", " --target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION\n", " target vulkan 
driver_version\n", " --target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER\n", " target vulkan supports_16bit_buffer\n", " --target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z\n", " target vulkan max_block_size_z\n", " --target-vulkan-libs TARGET_VULKAN_LIBS\n", " target vulkan libs options\n", " --target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION\n", " target vulkan supports_dedicated_allocation\n", " --target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS\n", " target vulkan supported_subgroup_operations\n", " --target-vulkan-mattr TARGET_VULKAN_MATTR\n", " target vulkan mattr options\n", " --target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE\n", " target vulkan max_storage_buffer_range\n", " --target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE\n", " target vulkan max_push_constants_size\n", " --target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR\n", " target vulkan supports_push_descriptor\n", " --target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64\n", " target vulkan supports_int64\n", " --target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32\n", " target vulkan supports_float32\n", " --target-vulkan-model TARGET_VULKAN_MODEL\n", " target vulkan model string\n", " --target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X\n", " target vulkan max_block_size_x\n", " --target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB\n", " target vulkan system-lib\n", " --target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y\n", " target vulkan max_block_size_y\n", " --target-vulkan-tag TARGET_VULKAN_TAG\n", " target vulkan tag string\n", " --target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8\n", " target vulkan supports_int8\n", " --target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION\n", " target vulkan max_spirv_version\n", " --target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION\n", " target vulkan vulkan_api_version\n", " --target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER\n", " target vulkan supports_8bit_buffer\n", " --target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE\n", " target vulkan device_type string\n", " --target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32\n", " target vulkan supports_int32\n", " --target-vulkan-device TARGET_VULKAN_DEVICE\n", " target vulkan device string\n", " --target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK\n", " target vulkan max_threads_per_block\n", " --target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE\n", " target vulkan max_uniform_buffer_range\n", " --target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME\n", " target vulkan driver_name string\n", " --target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT\n", " target vulkan supports_integer_dot_product\n", " --target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS\n", " target vulkan supports_storage_buffer_storage_class\n", " --target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16\n", " target vulkan supports_float16\n", " --target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME\n", " target vulkan device_name string\n", " --target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64\n", " target vulkan supports_float64\n", " --target-vulkan-keys TARGET_VULKAN_KEYS\n", " target vulkan keys options\n", " 
--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK\n", " target vulkan max_shared_memory_per_block\n", " --target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16\n", " target vulkan supports_int16\n", "\n", "target cuda:\n", " --target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS\n", " target cuda max_num_threads\n", " --target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE\n", " target cuda thread_warp_size\n", " --target-cuda-from_device TARGET_CUDA_FROM_DEVICE\n", " target cuda from_device\n", " --target-cuda-arch TARGET_CUDA_ARCH\n", " target cuda arch string\n", " --target-cuda-libs TARGET_CUDA_LIBS\n", " target cuda libs options\n", " --target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK\n", " target cuda max_shared_memory_per_block\n", " --target-cuda-model TARGET_CUDA_MODEL\n", " target cuda model string\n", " --target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB\n", " target cuda system-lib\n", " --target-cuda-tag TARGET_CUDA_TAG\n", " target cuda tag string\n", " --target-cuda-device TARGET_CUDA_DEVICE\n", " target cuda device string\n", " --target-cuda-mcpu TARGET_CUDA_MCPU\n", " target cuda mcpu string\n", " --target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK\n", " target cuda max_threads_per_block\n", " --target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK\n", " target cuda registers_per_block\n", " --target-cuda-keys TARGET_CUDA_KEYS\n", " target cuda keys options\n", "\n", "target sdaccel:\n", " --target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE\n", " target sdaccel from_device\n", " --target-sdaccel-libs TARGET_SDACCEL_LIBS\n", " target sdaccel libs options\n", " --target-sdaccel-model TARGET_SDACCEL_MODEL\n", " target sdaccel model string\n", " --target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB\n", " target sdaccel system-lib\n", " --target-sdaccel-tag TARGET_SDACCEL_TAG\n", " target sdaccel tag string\n", " --target-sdaccel-device TARGET_SDACCEL_DEVICE\n", " target sdaccel device string\n", " --target-sdaccel-keys TARGET_SDACCEL_KEYS\n", " target sdaccel keys options\n", "\n", "target composite:\n", " --target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE\n", " target composite from_device\n", " --target-composite-libs TARGET_COMPOSITE_LIBS\n", " target composite libs options\n", " --target-composite-devices TARGET_COMPOSITE_DEVICES\n", " target composite devices options\n", " --target-composite-model TARGET_COMPOSITE_MODEL\n", " target composite model string\n", " --target-composite-tag TARGET_COMPOSITE_TAG\n", " target composite tag string\n", " --target-composite-device TARGET_COMPOSITE_DEVICE\n", " target composite device string\n", " --target-composite-keys TARGET_COMPOSITE_KEYS\n", " target composite keys options\n", "\n", "target stackvm:\n", " --target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE\n", " target stackvm from_device\n", " --target-stackvm-libs TARGET_STACKVM_LIBS\n", " target stackvm libs options\n", " --target-stackvm-model TARGET_STACKVM_MODEL\n", " target stackvm model string\n", " --target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB\n", " target stackvm system-lib\n", " --target-stackvm-tag TARGET_STACKVM_TAG\n", " target stackvm tag string\n", " --target-stackvm-device TARGET_STACKVM_DEVICE\n", " target stackvm device string\n", " --target-stackvm-keys TARGET_STACKVM_KEYS\n", " target stackvm keys options\n", "\n", "target aocl_sw_emu:\n", " --target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE\n", " target aocl_sw_emu 
from_device\n", " --target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS\n", " target aocl_sw_emu libs options\n", " --target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL\n", " target aocl_sw_emu model string\n", " --target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB\n", " target aocl_sw_emu system-lib\n", " --target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG\n", " target aocl_sw_emu tag string\n", " --target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE\n", " target aocl_sw_emu device string\n", " --target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS\n", " target aocl_sw_emu keys options\n", "\n", "target c:\n", " --target-c-unpacked-api TARGET_C_UNPACKED_API\n", " target c unpacked-api\n", " --target-c-from_device TARGET_C_FROM_DEVICE\n", " target c from_device\n", " --target-c-libs TARGET_C_LIBS\n", " target c libs options\n", " --target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT\n", " target c constants-byte-alignment\n", " --target-c-executor TARGET_C_EXECUTOR\n", " target c executor string\n", " --target-c-link-params TARGET_C_LINK_PARAMS\n", " target c link-params\n", " --target-c-model TARGET_C_MODEL\n", " target c model string\n", " --target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT\n", " target c workspace-byte-alignment\n", " --target-c-system-lib TARGET_C_SYSTEM_LIB\n", " target c system-lib\n", " --target-c-tag TARGET_C_TAG\n", " target c tag string\n", " --target-c-interface-api TARGET_C_INTERFACE_API\n", " target c interface-api string\n", " --target-c-mcpu TARGET_C_MCPU\n", " target c mcpu string\n", " --target-c-device TARGET_C_DEVICE\n", " target c device string\n", " --target-c-runtime TARGET_C_RUNTIME\n", " target c runtime string\n", " --target-c-keys TARGET_C_KEYS\n", " target c keys options\n", " --target-c-march TARGET_C_MARCH\n", " target c march string\n", "\n", "target hexagon:\n", " --target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE\n", " target hexagon from_device\n", " --target-hexagon-libs TARGET_HEXAGON_LIBS\n", " target hexagon libs options\n", " --target-hexagon-mattr TARGET_HEXAGON_MATTR\n", " target hexagon mattr options\n", " --target-hexagon-model TARGET_HEXAGON_MODEL\n", " target hexagon model string\n", " --target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS\n", " target hexagon llvm-options options\n", " --target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE\n", " target hexagon mtriple string\n", " --target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB\n", " target hexagon system-lib\n", " --target-hexagon-mcpu TARGET_HEXAGON_MCPU\n", " target hexagon mcpu string\n", " --target-hexagon-device TARGET_HEXAGON_DEVICE\n", " target hexagon device string\n", " --target-hexagon-tag TARGET_HEXAGON_TAG\n", " target hexagon tag string\n", " --target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS\n", " target hexagon link-params\n", " --target-hexagon-keys TARGET_HEXAGON_KEYS\n", " target hexagon keys options\n", "\n", "AutoScheduler options:\n", " AutoScheduler options, used when --enable-autoscheduler is provided\n", "\n", " --cache-line-bytes CACHE_LINE_BYTES\n", " the size of cache line in bytes. If not specified, it\n", " will be autoset for the current machine.\n", " --num-cores NUM_CORES\n", " the number of device cores. If not specified, it will\n", " be autoset for the current machine.\n", " --vector-unit-bytes VECTOR_UNIT_BYTES\n", " the width of vector units in bytes. 
If not specified,\n", " it will be autoset for the current machine.\n", " --max-shared-memory-per-block MAX_SHARED_MEMORY_PER_BLOCK\n", " the max shared memory per block in bytes. If not\n", " specified, it will be autoset for the current machine.\n", " --max-local-memory-per-block MAX_LOCAL_MEMORY_PER_BLOCK\n", " the max local memory per block in bytes. If not\n", " specified, it will be autoset for the current machine.\n", " --max-threads-per-block MAX_THREADS_PER_BLOCK\n", " the max number of threads per block. If not specified,\n", " it will be autoset for the current machine.\n", " --max-vthread-extent MAX_VTHREAD_EXTENT\n", " the max vthread extent. If not specified, it will be\n", " autoset for the current machine.\n", " --warp-size WARP_SIZE\n", " the thread numbers of a warp. If not specified, it\n", " will be autoset for the current machine.\n", " --include-simple-tasks\n", " whether to extract simple tasks that do not include\n", " complicated ops\n", " --log-estimated-latency\n", " whether to log the estimated latency to the file after\n", " tuning a task\n", "\n", "AutoTVM options:\n", " AutoTVM options, used when the AutoScheduler is not enabled\n", "\n", " --tuner {ga,gridsearch,random,xgb,xgb_knob,xgb-rank}\n", " type of tuner to use when tuning with autotvm.\n" ] } ], "source": [ "!python -m tvm.driver.tvmc tune --help" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "对于消费级 Skylake CPU 来说,输出结果将是这样的:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/media/pc/data/4tb/lxw/anaconda3/envs/mx39/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. 
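上面 `tvmc tune --help` 的输出显示,`--tuner` 可以在 `ga`、`gridsearch`、`random`、`xgb`、`xgb_knob`、`xgb-rank` 之间选择,还可以用 `--trials` 和 `--early-stopping` 控制搜索预算。下面给出一个示意性的命令(并非教程的必要步骤,其中的数值只是演示用的假设取值),展示在时间有限时如何改用随机搜索并限制试验次数:

```bash
# 示意:改用随机搜索,并限制调优试验次数(数值仅作演示)
python -m tvm.driver.tvmc tune \
    --target "llvm" \
    --tuner random \
    --trials 1000 \
    --early-stopping 250 \
    --output resnet50-v2-7-autotuner_records.json \
    ../../_models/resnet50-v2-7.onnx
```

搜索预算越小,调优结束得越快,但找到的调度通常也越不充分,可以根据自己的硬件和可用时间在两者之间权衡。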
Use pandas.Index with the appropriate dtype instead.\n", " from pandas import MultiIndex, Int64Index\n", "[Task 1/25] Current/Best: 135.54/ 444.49 GFLOPS | Progress: (40/40) | 16.09 s Done.\n", "[Task 2/25] Current/Best: 91.39/ 426.70 GFLOPS | Progress: (40/40) | 10.33 s Done.\n", "[Task 3/25] Current/Best: 147.25/ 516.21 GFLOPS | Progress: (40/40) | 11.55 s Done.\n", "[Task 4/25] Current/Best: 561.81/ 561.81 GFLOPS | Progress: (40/40) | 12.99 s Done.\n", "[Task 5/25] Current/Best: 182.70/ 570.25 GFLOPS | Progress: (40/40) | 11.12 s Done.\n", "[Task 6/25] Current/Best: 79.82/ 459.29 GFLOPS | Progress: (40/40) | 12.03 s Done.\n", "[Task 7/25] Current/Best: 152.79/ 300.64 GFLOPS | Progress: (40/40) | 11.16 s Done.\n", "[Task 8/25] Current/Best: 155.29/ 310.77 GFLOPS | Progress: (40/40) | 14.68 s Done.\n", "[Task 9/25] Current/Best: 126.56/ 561.24 GFLOPS | Progress: (40/40) | 13.93 s Done.\n", "[Task 10/25] Current/Best: 41.68/ 517.18 GFLOPS | Progress: (40/40) | 10.91 s Done.\n", "[Task 11/25] Current/Best: 311.13/ 528.67 GFLOPS | Progress: (40/40) | 10.89 s Done.\n", "[Task 12/25] Current/Best: 265.13/ 525.74 GFLOPS | Progress: (40/40) | 11.19 s Done.\n", "[Task 13/25] Current/Best: 107.09/ 426.10 GFLOPS | Progress: (40/40) | 11.29 s Done.\n", "[Task 14/25] Current/Best: 119.32/ 373.60 GFLOPS | Progress: (40/40) | 12.38 s Done.\n", "[Task 15/25] Current/Best: 101.58/ 439.72 GFLOPS | Progress: (40/40) | 14.41 s Done.\n", "[Task 16/25] Current/Best: 177.78/ 427.98 GFLOPS | Progress: (40/40) | 10.23 s Done.\n", "[Task 17/25] Current/Best: 72.04/ 349.15 GFLOPS | Progress: (40/40) | 11.50 s Done.\n", "[Task 18/25] Current/Best: 124.41/ 500.93 GFLOPS | Progress: (40/40) | 12.07 s Done.\n", "[Task 19/25] Current/Best: 243.37/ 371.27 GFLOPS | Progress: (40/40) | 12.88 s Done.\n", "[Task 20/25] Current/Best: 137.63/ 343.57 GFLOPS | Progress: (40/40) | 21.29 s Done.\n", "[Task 21/25] Current/Best: 59.02/ 330.98 GFLOPS | Progress: (40/40) | 12.88 s Done.\n", "[Task 22/25] Current/Best: 273.71/ 457.41 GFLOPS | Progress: (40/40) | 11.04 s Done.\n", "[Task 23/25] Current/Best: 166.89/ 430.39 GFLOPS | Progress: (40/40) | 13.46 s Done.\n", "[Task 25/25] Current/Best: 28.01/ 59.42 GFLOPS | Progress: (40/40) | 20.24 s Done.\n", " Done.\n" ] } ], "source": [ "!python -m tvm.driver.tvmc tune \\\n", " --target \"llvm -mcpu=broadwell\" \\\n", " --output resnet50-v2-7-autotuner_records.json \\\n", " ../../_models/resnet50-v2-7.onnx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "调谐会话可能需要很长的时间,所以 ``tvmc tune`` 提供了许多选项来定制你的调谐过程,在重复次数方面(例如 ``--repeat`` 和 ``--number``),要使用的调谐算法等等。\n", "\n", "## 用调优数据编译优化后的模型\n", "\n", "作为上述调谐过程的输出,获得了存储在 ``resnet50-v2-7-autotuner_records.json`` 的调谐记录。这个文件可以有两种使用方式:\n", "\n", "- 作为进一步调谐的输入(通过 ``tvmc tune --tuning-records``)。\n", "- 作为对编译器的输入\n", "\n", "编译器将使用这些结果来为你指定的目标上的模型生成高性能代码。要做到这一点,可以使用 ``tvmc compile --tuning-records``。\n", "\n", "获得更多信息:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "usage: tvmc compile [-h] [--cross-compiler CROSS_COMPILER]\n", " [--cross-compiler-options CROSS_COMPILER_OPTIONS]\n", " [--desired-layout {NCHW,NHWC}] [--dump-code FORMAT]\n", " [--model-format {keras,onnx,pb,tflite,pytorch,paddle}]\n", " [-o OUTPUT] [-f {so,mlf}] [--pass-config name=value]\n", " [--target TARGET]\n", " [--target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE]\n", " [--target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS]\n", " 
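前面的调优命令用 `-mcpu=broadwell` 作为更具体的 `--target`;在你自己的平台上,应把它换成与你的 CPU 对应的微架构(例如消费级 Skylake 机器上用 `-mcpu=skylake`)。另外,正文提到调优记录可以作为进一步调优的输入。下面是一个示意性的命令(输出文件名 `resnet50-v2-7-autotuner_records-round2.json` 只是假设的示例),演示如何以已有记录为起点继续搜索,并用 `--repeat` 和 `--number` 调整每个候选配置的测量次数:

```bash
# 示意:以先前的调优记录为起点继续调优(输出文件名为假设的示例)
python -m tvm.driver.tvmc tune \
    --target "llvm -mcpu=broadwell" \
    --tuning-records resnet50-v2-7-autotuner_records.json \
    --output resnet50-v2-7-autotuner_records-round2.json \
    --repeat 3 \
    --number 5 \
    ../../_models/resnet50-v2-7.onnx
```

按照帮助信息的说明,最终的调优执行次数为 `(1 + number * repeat)`:增大这两个值会让每个候选配置的测量更稳定,但也会相应拉长整个调优过程。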
[--target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL]\n", " [--target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG]\n", " [--target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE]\n", " [--target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS]\n", " [--target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE]\n", " [--target-ext_dev-libs TARGET_EXT_DEV_LIBS]\n", " [--target-ext_dev-model TARGET_EXT_DEV_MODEL]\n", " [--target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB]\n", " [--target-ext_dev-tag TARGET_EXT_DEV_TAG]\n", " [--target-ext_dev-device TARGET_EXT_DEV_DEVICE]\n", " [--target-ext_dev-keys TARGET_EXT_DEV_KEYS]\n", " [--target-llvm-fast-math TARGET_LLVM_FAST_MATH]\n", " [--target-llvm-opt-level TARGET_LLVM_OPT_LEVEL]\n", " [--target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API]\n", " [--target-llvm-from_device TARGET_LLVM_FROM_DEVICE]\n", " [--target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF]\n", " [--target-llvm-mattr TARGET_LLVM_MATTR]\n", " [--target-llvm-num-cores TARGET_LLVM_NUM_CORES]\n", " [--target-llvm-libs TARGET_LLVM_LIBS]\n", " [--target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ]\n", " [--target-llvm-link-params TARGET_LLVM_LINK_PARAMS]\n", " [--target-llvm-interface-api TARGET_LLVM_INTERFACE_API]\n", " [--target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT]\n", " [--target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB]\n", " [--target-llvm-tag TARGET_LLVM_TAG]\n", " [--target-llvm-mtriple TARGET_LLVM_MTRIPLE]\n", " [--target-llvm-model TARGET_LLVM_MODEL]\n", " [--target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI]\n", " [--target-llvm-mcpu TARGET_LLVM_MCPU]\n", " [--target-llvm-device TARGET_LLVM_DEVICE]\n", " [--target-llvm-runtime TARGET_LLVM_RUNTIME]\n", " [--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP]\n", " [--target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC]\n", " [--target-llvm-mabi TARGET_LLVM_MABI]\n", " [--target-llvm-keys TARGET_LLVM_KEYS]\n", " [--target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN]\n", " [--target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE]\n", " [--target-hybrid-libs TARGET_HYBRID_LIBS]\n", " [--target-hybrid-model TARGET_HYBRID_MODEL]\n", " [--target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB]\n", " [--target-hybrid-tag TARGET_HYBRID_TAG]\n", " [--target-hybrid-device TARGET_HYBRID_DEVICE]\n", " [--target-hybrid-keys TARGET_HYBRID_KEYS]\n", " [--target-aocl-from_device TARGET_AOCL_FROM_DEVICE]\n", " [--target-aocl-libs TARGET_AOCL_LIBS]\n", " [--target-aocl-model TARGET_AOCL_MODEL]\n", " [--target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB]\n", " [--target-aocl-tag TARGET_AOCL_TAG]\n", " [--target-aocl-device TARGET_AOCL_DEVICE]\n", " [--target-aocl-keys TARGET_AOCL_KEYS]\n", " [--target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS]\n", " [--target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE]\n", " [--target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE]\n", " [--target-nvptx-libs TARGET_NVPTX_LIBS]\n", " [--target-nvptx-model TARGET_NVPTX_MODEL]\n", " [--target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB]\n", " [--target-nvptx-mtriple TARGET_NVPTX_MTRIPLE]\n", " [--target-nvptx-tag TARGET_NVPTX_TAG]\n", " [--target-nvptx-mcpu TARGET_NVPTX_MCPU]\n", " [--target-nvptx-device TARGET_NVPTX_DEVICE]\n", " [--target-nvptx-keys TARGET_NVPTX_KEYS]\n", " [--target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS]\n", " [--target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE]\n", " [--target-opencl-from_device TARGET_OPENCL_FROM_DEVICE]\n", " 
[--target-opencl-libs TARGET_OPENCL_LIBS]\n", " [--target-opencl-model TARGET_OPENCL_MODEL]\n", " [--target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB]\n", " [--target-opencl-tag TARGET_OPENCL_TAG]\n", " [--target-opencl-device TARGET_OPENCL_DEVICE]\n", " [--target-opencl-keys TARGET_OPENCL_KEYS]\n", " [--target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS]\n", " [--target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE]\n", " [--target-metal-from_device TARGET_METAL_FROM_DEVICE]\n", " [--target-metal-libs TARGET_METAL_LIBS]\n", " [--target-metal-keys TARGET_METAL_KEYS]\n", " [--target-metal-model TARGET_METAL_MODEL]\n", " [--target-metal-system-lib TARGET_METAL_SYSTEM_LIB]\n", " [--target-metal-tag TARGET_METAL_TAG]\n", " [--target-metal-device TARGET_METAL_DEVICE]\n", " [--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS]\n", " [--target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS]\n", " [--target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE]\n", " [--target-webgpu-libs TARGET_WEBGPU_LIBS]\n", " [--target-webgpu-model TARGET_WEBGPU_MODEL]\n", " [--target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB]\n", " [--target-webgpu-tag TARGET_WEBGPU_TAG]\n", " [--target-webgpu-device TARGET_WEBGPU_DEVICE]\n", " [--target-webgpu-keys TARGET_WEBGPU_KEYS]\n", " [--target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS]\n", " [--target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE]\n", " [--target-rocm-from_device TARGET_ROCM_FROM_DEVICE]\n", " [--target-rocm-libs TARGET_ROCM_LIBS]\n", " [--target-rocm-mattr TARGET_ROCM_MATTR]\n", " [--target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-rocm-model TARGET_ROCM_MODEL]\n", " [--target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB]\n", " [--target-rocm-mtriple TARGET_ROCM_MTRIPLE]\n", " [--target-rocm-tag TARGET_ROCM_TAG]\n", " [--target-rocm-device TARGET_ROCM_DEVICE]\n", " [--target-rocm-mcpu TARGET_ROCM_MCPU]\n", " [--target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK]\n", " [--target-rocm-keys TARGET_ROCM_KEYS]\n", " [--target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS]\n", " [--target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE]\n", " [--target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE]\n", " [--target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER]\n", " [--target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION]\n", " [--target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER]\n", " [--target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z]\n", " [--target-vulkan-libs TARGET_VULKAN_LIBS]\n", " [--target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION]\n", " [--target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS]\n", " [--target-vulkan-mattr TARGET_VULKAN_MATTR]\n", " [--target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE]\n", " [--target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE]\n", " [--target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR]\n", " [--target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64]\n", " [--target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32]\n", " [--target-vulkan-model TARGET_VULKAN_MODEL]\n", " [--target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X]\n", " [--target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB]\n", " [--target-vulkan-max_block_size_y 
TARGET_VULKAN_MAX_BLOCK_SIZE_Y]\n", " [--target-vulkan-tag TARGET_VULKAN_TAG]\n", " [--target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8]\n", " [--target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION]\n", " [--target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION]\n", " [--target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER]\n", " [--target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE]\n", " [--target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32]\n", " [--target-vulkan-device TARGET_VULKAN_DEVICE]\n", " [--target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK]\n", " [--target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE]\n", " [--target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME]\n", " [--target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT]\n", " [--target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS]\n", " [--target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16]\n", " [--target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME]\n", " [--target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64]\n", " [--target-vulkan-keys TARGET_VULKAN_KEYS]\n", " [--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16]\n", " [--target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS]\n", " [--target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE]\n", " [--target-cuda-from_device TARGET_CUDA_FROM_DEVICE]\n", " [--target-cuda-arch TARGET_CUDA_ARCH]\n", " [--target-cuda-libs TARGET_CUDA_LIBS]\n", " [--target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK]\n", " [--target-cuda-model TARGET_CUDA_MODEL]\n", " [--target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB]\n", " [--target-cuda-tag TARGET_CUDA_TAG]\n", " [--target-cuda-device TARGET_CUDA_DEVICE]\n", " [--target-cuda-mcpu TARGET_CUDA_MCPU]\n", " [--target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK]\n", " [--target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK]\n", " [--target-cuda-keys TARGET_CUDA_KEYS]\n", " [--target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE]\n", " [--target-sdaccel-libs TARGET_SDACCEL_LIBS]\n", " [--target-sdaccel-model TARGET_SDACCEL_MODEL]\n", " [--target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB]\n", " [--target-sdaccel-tag TARGET_SDACCEL_TAG]\n", " [--target-sdaccel-device TARGET_SDACCEL_DEVICE]\n", " [--target-sdaccel-keys TARGET_SDACCEL_KEYS]\n", " [--target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE]\n", " [--target-composite-libs TARGET_COMPOSITE_LIBS]\n", " [--target-composite-devices TARGET_COMPOSITE_DEVICES]\n", " [--target-composite-model TARGET_COMPOSITE_MODEL]\n", " [--target-composite-tag TARGET_COMPOSITE_TAG]\n", " [--target-composite-device TARGET_COMPOSITE_DEVICE]\n", " [--target-composite-keys TARGET_COMPOSITE_KEYS]\n", " [--target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE]\n", " [--target-stackvm-libs TARGET_STACKVM_LIBS]\n", " [--target-stackvm-model TARGET_STACKVM_MODEL]\n", " [--target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB]\n", " [--target-stackvm-tag TARGET_STACKVM_TAG]\n", " [--target-stackvm-device TARGET_STACKVM_DEVICE]\n", " [--target-stackvm-keys TARGET_STACKVM_KEYS]\n", " [--target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE]\n", " [--target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS]\n", " [--target-aocl_sw_emu-model 
TARGET_AOCL_SW_EMU_MODEL]\n", " [--target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB]\n", " [--target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG]\n", " [--target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE]\n", " [--target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS]\n", " [--target-c-unpacked-api TARGET_C_UNPACKED_API]\n", " [--target-c-from_device TARGET_C_FROM_DEVICE]\n", " [--target-c-libs TARGET_C_LIBS]\n", " [--target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT]\n", " [--target-c-executor TARGET_C_EXECUTOR]\n", " [--target-c-link-params TARGET_C_LINK_PARAMS]\n", " [--target-c-model TARGET_C_MODEL]\n", " [--target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT]\n", " [--target-c-system-lib TARGET_C_SYSTEM_LIB]\n", " [--target-c-tag TARGET_C_TAG]\n", " [--target-c-interface-api TARGET_C_INTERFACE_API]\n", " [--target-c-mcpu TARGET_C_MCPU]\n", " [--target-c-device TARGET_C_DEVICE]\n", " [--target-c-runtime TARGET_C_RUNTIME]\n", " [--target-c-keys TARGET_C_KEYS]\n", " [--target-c-march TARGET_C_MARCH]\n", " [--target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE]\n", " [--target-hexagon-libs TARGET_HEXAGON_LIBS]\n", " [--target-hexagon-mattr TARGET_HEXAGON_MATTR]\n", " [--target-hexagon-model TARGET_HEXAGON_MODEL]\n", " [--target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS]\n", " [--target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE]\n", " [--target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB]\n", " [--target-hexagon-mcpu TARGET_HEXAGON_MCPU]\n", " [--target-hexagon-device TARGET_HEXAGON_DEVICE]\n", " [--target-hexagon-tag TARGET_HEXAGON_TAG]\n", " [--target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS]\n", " [--target-hexagon-keys TARGET_HEXAGON_KEYS]\n", " [--tuning-records PATH] [--executor EXECUTOR]\n", " [--executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS]\n", " [--executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT]\n", " [--executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API]\n", " [--executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API]\n", " [--executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS]\n", " [--runtime RUNTIME]\n", " [--runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB]\n", " [--runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB] [-v]\n", " [-O [0-3]] [--input-shapes INPUT_SHAPES]\n", " [--disabled-pass DISABLED_PASS]\n", " [--module-name MODULE_NAME]\n", " FILE\n", "\n", "positional arguments:\n", " FILE path to the input model file.\n", "\n", "optional arguments:\n", " -h, --help show this help message and exit\n", " --cross-compiler CROSS_COMPILER\n", " the cross compiler to generate target libraries, e.g.\n", " 'aarch64-linux-gnu-gcc'.\n", " --cross-compiler-options CROSS_COMPILER_OPTIONS\n", " the cross compiler options to generate target\n", " libraries, e.g. '-mfpu=neon-vfpv4'.\n", " --desired-layout {NCHW,NHWC}\n", " change the data layout of the whole graph.\n", " --dump-code FORMAT comma separated list of formats to export the input\n", " model, e.g. 'asm,ll,relay'.\n", " --model-format {keras,onnx,pb,tflite,pytorch,paddle}\n", " specify input model format.\n", " -o OUTPUT, --output OUTPUT\n", " output the compiled module to a specified archive.\n", " Defaults to 'module.tar'.\n", " -f {so,mlf}, --output-format {so,mlf}\n", " output format. Use 'so' for shared object or 'mlf' for\n", " Model Library Format (only for microTVM targets).\n", " Defaults to 'so'.\n", " --pass-config name=value\n", " configurations to be used at compile time. 
This option\n", " can be provided multiple times, each one to set one\n", " configuration value, e.g. '--pass-config\n", " relay.backend.use_auto_scheduler=0', e.g. '--pass-\n", " config\n", " tir.add_lower_pass=opt_level1,pass1,opt_level2,pass2'.\n", " --target TARGET compilation target as plain string, inline JSON or\n", " path to a JSON file\n", " --tuning-records PATH\n", " path to an auto-tuning log file by AutoTVM. If not\n", " presented, the fallback/tophub configs will be used.\n", " --executor EXECUTOR Executor to compile the model with\n", " --runtime RUNTIME Runtime to compile the model with\n", " -v, --verbose increase verbosity.\n", " -O [0-3], --opt-level [0-3]\n", " specify which optimization level to use. Defaults to\n", " '3'.\n", " --input-shapes INPUT_SHAPES\n", " specify non-generic shapes for model to run, format is\n", " \"input_name:[dim1,dim2,...,dimn]\n", " input_name2:[dim1,dim2]\".\n", " --disabled-pass DISABLED_PASS\n", " disable specific passes, comma-separated list of pass\n", " names.\n", " --module-name MODULE_NAME\n", " The output module name. Defaults to 'default'.\n", "\n", "target example_target_hook:\n", " --target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE\n", " target example_target_hook from_device\n", " --target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS\n", " target example_target_hook libs options\n", " --target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL\n", " target example_target_hook model string\n", " --target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG\n", " target example_target_hook tag string\n", " --target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE\n", " target example_target_hook device string\n", " --target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS\n", " target example_target_hook keys options\n", "\n", "target ext_dev:\n", " --target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE\n", " target ext_dev from_device\n", " --target-ext_dev-libs TARGET_EXT_DEV_LIBS\n", " target ext_dev libs options\n", " --target-ext_dev-model TARGET_EXT_DEV_MODEL\n", " target ext_dev model string\n", " --target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB\n", " target ext_dev system-lib\n", " --target-ext_dev-tag TARGET_EXT_DEV_TAG\n", " target ext_dev tag string\n", " --target-ext_dev-device TARGET_EXT_DEV_DEVICE\n", " target ext_dev device string\n", " --target-ext_dev-keys TARGET_EXT_DEV_KEYS\n", " target ext_dev keys options\n", "\n", "target llvm:\n", " --target-llvm-fast-math TARGET_LLVM_FAST_MATH\n", " target llvm fast-math\n", " --target-llvm-opt-level TARGET_LLVM_OPT_LEVEL\n", " target llvm opt-level\n", " --target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API\n", " target llvm unpacked-api\n", " --target-llvm-from_device TARGET_LLVM_FROM_DEVICE\n", " target llvm from_device\n", " --target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF\n", " target llvm fast-math-ninf\n", " --target-llvm-mattr TARGET_LLVM_MATTR\n", " target llvm mattr options\n", " --target-llvm-num-cores TARGET_LLVM_NUM_CORES\n", " target llvm num-cores\n", " --target-llvm-libs TARGET_LLVM_LIBS\n", " target llvm libs options\n", " --target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ\n", " target llvm fast-math-nsz\n", " --target-llvm-link-params TARGET_LLVM_LINK_PARAMS\n", " target llvm link-params\n", " --target-llvm-interface-api TARGET_LLVM_INTERFACE_API\n", " target llvm interface-api string\n", " --target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT\n", " target llvm 
fast-math-contract\n", " --target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB\n", " target llvm system-lib\n", " --target-llvm-tag TARGET_LLVM_TAG\n", " target llvm tag string\n", " --target-llvm-mtriple TARGET_LLVM_MTRIPLE\n", " target llvm mtriple string\n", " --target-llvm-model TARGET_LLVM_MODEL\n", " target llvm model string\n", " --target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI\n", " target llvm mfloat-abi string\n", " --target-llvm-mcpu TARGET_LLVM_MCPU\n", " target llvm mcpu string\n", " --target-llvm-device TARGET_LLVM_DEVICE\n", " target llvm device string\n", " --target-llvm-runtime TARGET_LLVM_RUNTIME\n", " target llvm runtime string\n", " --target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP\n", " target llvm fast-math-arcp\n", " --target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC\n", " target llvm fast-math-reassoc\n", " --target-llvm-mabi TARGET_LLVM_MABI\n", " target llvm mabi string\n", " --target-llvm-keys TARGET_LLVM_KEYS\n", " target llvm keys options\n", " --target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN\n", " target llvm fast-math-nnan\n", "\n", "target hybrid:\n", " --target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE\n", " target hybrid from_device\n", " --target-hybrid-libs TARGET_HYBRID_LIBS\n", " target hybrid libs options\n", " --target-hybrid-model TARGET_HYBRID_MODEL\n", " target hybrid model string\n", " --target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB\n", " target hybrid system-lib\n", " --target-hybrid-tag TARGET_HYBRID_TAG\n", " target hybrid tag string\n", " --target-hybrid-device TARGET_HYBRID_DEVICE\n", " target hybrid device string\n", " --target-hybrid-keys TARGET_HYBRID_KEYS\n", " target hybrid keys options\n", "\n", "target aocl:\n", " --target-aocl-from_device TARGET_AOCL_FROM_DEVICE\n", " target aocl from_device\n", " --target-aocl-libs TARGET_AOCL_LIBS\n", " target aocl libs options\n", " --target-aocl-model TARGET_AOCL_MODEL\n", " target aocl model string\n", " --target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB\n", " target aocl system-lib\n", " --target-aocl-tag TARGET_AOCL_TAG\n", " target aocl tag string\n", " --target-aocl-device TARGET_AOCL_DEVICE\n", " target aocl device string\n", " --target-aocl-keys TARGET_AOCL_KEYS\n", " target aocl keys options\n", "\n", "target nvptx:\n", " --target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS\n", " target nvptx max_num_threads\n", " --target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE\n", " target nvptx thread_warp_size\n", " --target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE\n", " target nvptx from_device\n", " --target-nvptx-libs TARGET_NVPTX_LIBS\n", " target nvptx libs options\n", " --target-nvptx-model TARGET_NVPTX_MODEL\n", " target nvptx model string\n", " --target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB\n", " target nvptx system-lib\n", " --target-nvptx-mtriple TARGET_NVPTX_MTRIPLE\n", " target nvptx mtriple string\n", " --target-nvptx-tag TARGET_NVPTX_TAG\n", " target nvptx tag string\n", " --target-nvptx-mcpu TARGET_NVPTX_MCPU\n", " target nvptx mcpu string\n", " --target-nvptx-device TARGET_NVPTX_DEVICE\n", " target nvptx device string\n", " --target-nvptx-keys TARGET_NVPTX_KEYS\n", " target nvptx keys options\n", "\n", "target opencl:\n", " --target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS\n", " target opencl max_num_threads\n", " --target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE\n", " target opencl thread_warp_size\n", " --target-opencl-from_device TARGET_OPENCL_FROM_DEVICE\n", " target opencl from_device\n", " 
--target-opencl-libs TARGET_OPENCL_LIBS\n", " target opencl libs options\n", " --target-opencl-model TARGET_OPENCL_MODEL\n", " target opencl model string\n", " --target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB\n", " target opencl system-lib\n", " --target-opencl-tag TARGET_OPENCL_TAG\n", " target opencl tag string\n", " --target-opencl-device TARGET_OPENCL_DEVICE\n", " target opencl device string\n", " --target-opencl-keys TARGET_OPENCL_KEYS\n", " target opencl keys options\n", "\n", "target metal:\n", " --target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS\n", " target metal max_num_threads\n", " --target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE\n", " target metal thread_warp_size\n", " --target-metal-from_device TARGET_METAL_FROM_DEVICE\n", " target metal from_device\n", " --target-metal-libs TARGET_METAL_LIBS\n", " target metal libs options\n", " --target-metal-keys TARGET_METAL_KEYS\n", " target metal keys options\n", " --target-metal-model TARGET_METAL_MODEL\n", " target metal model string\n", " --target-metal-system-lib TARGET_METAL_SYSTEM_LIB\n", " target metal system-lib\n", " --target-metal-tag TARGET_METAL_TAG\n", " target metal tag string\n", " --target-metal-device TARGET_METAL_DEVICE\n", " target metal device string\n", " --target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS\n", " target metal max_function_args\n", "\n", "target webgpu:\n", " --target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS\n", " target webgpu max_num_threads\n", " --target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE\n", " target webgpu from_device\n", " --target-webgpu-libs TARGET_WEBGPU_LIBS\n", " target webgpu libs options\n", " --target-webgpu-model TARGET_WEBGPU_MODEL\n", " target webgpu model string\n", " --target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB\n", " target webgpu system-lib\n", " --target-webgpu-tag TARGET_WEBGPU_TAG\n", " target webgpu tag string\n", " --target-webgpu-device TARGET_WEBGPU_DEVICE\n", " target webgpu device string\n", " --target-webgpu-keys TARGET_WEBGPU_KEYS\n", " target webgpu keys options\n", "\n", "target rocm:\n", " --target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS\n", " target rocm max_num_threads\n", " --target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE\n", " target rocm thread_warp_size\n", " --target-rocm-from_device TARGET_ROCM_FROM_DEVICE\n", " target rocm from_device\n", " --target-rocm-libs TARGET_ROCM_LIBS\n", " target rocm libs options\n", " --target-rocm-mattr TARGET_ROCM_MATTR\n", " target rocm mattr options\n", " --target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK\n", " target rocm max_shared_memory_per_block\n", " --target-rocm-model TARGET_ROCM_MODEL\n", " target rocm model string\n", " --target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB\n", " target rocm system-lib\n", " --target-rocm-mtriple TARGET_ROCM_MTRIPLE\n", " target rocm mtriple string\n", " --target-rocm-tag TARGET_ROCM_TAG\n", " target rocm tag string\n", " --target-rocm-device TARGET_ROCM_DEVICE\n", " target rocm device string\n", " --target-rocm-mcpu TARGET_ROCM_MCPU\n", " target rocm mcpu string\n", " --target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK\n", " target rocm max_threads_per_block\n", " --target-rocm-keys TARGET_ROCM_KEYS\n", " target rocm keys options\n", "\n", "target vulkan:\n", " --target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS\n", " target vulkan max_num_threads\n", " --target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE\n", " target vulkan 
thread_warp_size\n", " --target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE\n", " target vulkan from_device\n", " --target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER\n", " target vulkan max_per_stage_descriptor_storage_buffer\n", " --target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION\n", " target vulkan driver_version\n", " --target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER\n", " target vulkan supports_16bit_buffer\n", " --target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z\n", " target vulkan max_block_size_z\n", " --target-vulkan-libs TARGET_VULKAN_LIBS\n", " target vulkan libs options\n", " --target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION\n", " target vulkan supports_dedicated_allocation\n", " --target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS\n", " target vulkan supported_subgroup_operations\n", " --target-vulkan-mattr TARGET_VULKAN_MATTR\n", " target vulkan mattr options\n", " --target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE\n", " target vulkan max_storage_buffer_range\n", " --target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE\n", " target vulkan max_push_constants_size\n", " --target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR\n", " target vulkan supports_push_descriptor\n", " --target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64\n", " target vulkan supports_int64\n", " --target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32\n", " target vulkan supports_float32\n", " --target-vulkan-model TARGET_VULKAN_MODEL\n", " target vulkan model string\n", " --target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X\n", " target vulkan max_block_size_x\n", " --target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB\n", " target vulkan system-lib\n", " --target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y\n", " target vulkan max_block_size_y\n", " --target-vulkan-tag TARGET_VULKAN_TAG\n", " target vulkan tag string\n", " --target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8\n", " target vulkan supports_int8\n", " --target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION\n", " target vulkan max_spirv_version\n", " --target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION\n", " target vulkan vulkan_api_version\n", " --target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER\n", " target vulkan supports_8bit_buffer\n", " --target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE\n", " target vulkan device_type string\n", " --target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32\n", " target vulkan supports_int32\n", " --target-vulkan-device TARGET_VULKAN_DEVICE\n", " target vulkan device string\n", " --target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK\n", " target vulkan max_threads_per_block\n", " --target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE\n", " target vulkan max_uniform_buffer_range\n", " --target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME\n", " target vulkan driver_name string\n", " --target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT\n", " target vulkan supports_integer_dot_product\n", " --target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS\n", " target vulkan supports_storage_buffer_storage_class\n", " --target-vulkan-supports_float16 
TARGET_VULKAN_SUPPORTS_FLOAT16\n", " target vulkan supports_float16\n", " --target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME\n", " target vulkan device_name string\n", " --target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64\n", " target vulkan supports_float64\n", " --target-vulkan-keys TARGET_VULKAN_KEYS\n", " target vulkan keys options\n", " --target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK\n", " target vulkan max_shared_memory_per_block\n", " --target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16\n", " target vulkan supports_int16\n", "\n", "target cuda:\n", " --target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS\n", " target cuda max_num_threads\n", " --target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE\n", " target cuda thread_warp_size\n", " --target-cuda-from_device TARGET_CUDA_FROM_DEVICE\n", " target cuda from_device\n", " --target-cuda-arch TARGET_CUDA_ARCH\n", " target cuda arch string\n", " --target-cuda-libs TARGET_CUDA_LIBS\n", " target cuda libs options\n", " --target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK\n", " target cuda max_shared_memory_per_block\n", " --target-cuda-model TARGET_CUDA_MODEL\n", " target cuda model string\n", " --target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB\n", " target cuda system-lib\n", " --target-cuda-tag TARGET_CUDA_TAG\n", " target cuda tag string\n", " --target-cuda-device TARGET_CUDA_DEVICE\n", " target cuda device string\n", " --target-cuda-mcpu TARGET_CUDA_MCPU\n", " target cuda mcpu string\n", " --target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK\n", " target cuda max_threads_per_block\n", " --target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK\n", " target cuda registers_per_block\n", " --target-cuda-keys TARGET_CUDA_KEYS\n", " target cuda keys options\n", "\n", "target sdaccel:\n", " --target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE\n", " target sdaccel from_device\n", " --target-sdaccel-libs TARGET_SDACCEL_LIBS\n", " target sdaccel libs options\n", " --target-sdaccel-model TARGET_SDACCEL_MODEL\n", " target sdaccel model string\n", " --target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB\n", " target sdaccel system-lib\n", " --target-sdaccel-tag TARGET_SDACCEL_TAG\n", " target sdaccel tag string\n", " --target-sdaccel-device TARGET_SDACCEL_DEVICE\n", " target sdaccel device string\n", " --target-sdaccel-keys TARGET_SDACCEL_KEYS\n", " target sdaccel keys options\n", "\n", "target composite:\n", " --target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE\n", " target composite from_device\n", " --target-composite-libs TARGET_COMPOSITE_LIBS\n", " target composite libs options\n", " --target-composite-devices TARGET_COMPOSITE_DEVICES\n", " target composite devices options\n", " --target-composite-model TARGET_COMPOSITE_MODEL\n", " target composite model string\n", " --target-composite-tag TARGET_COMPOSITE_TAG\n", " target composite tag string\n", " --target-composite-device TARGET_COMPOSITE_DEVICE\n", " target composite device string\n", " --target-composite-keys TARGET_COMPOSITE_KEYS\n", " target composite keys options\n", "\n", "target stackvm:\n", " --target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE\n", " target stackvm from_device\n", " --target-stackvm-libs TARGET_STACKVM_LIBS\n", " target stackvm libs options\n", " --target-stackvm-model TARGET_STACKVM_MODEL\n", " target stackvm model string\n", " --target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB\n", " target stackvm system-lib\n", " 
--target-stackvm-tag TARGET_STACKVM_TAG\n", " target stackvm tag string\n", " --target-stackvm-device TARGET_STACKVM_DEVICE\n", " target stackvm device string\n", " --target-stackvm-keys TARGET_STACKVM_KEYS\n", " target stackvm keys options\n", "\n", "target aocl_sw_emu:\n", " --target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE\n", " target aocl_sw_emu from_device\n", " --target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS\n", " target aocl_sw_emu libs options\n", " --target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL\n", " target aocl_sw_emu model string\n", " --target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB\n", " target aocl_sw_emu system-lib\n", " --target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG\n", " target aocl_sw_emu tag string\n", " --target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE\n", " target aocl_sw_emu device string\n", " --target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS\n", " target aocl_sw_emu keys options\n", "\n", "target c:\n", " --target-c-unpacked-api TARGET_C_UNPACKED_API\n", " target c unpacked-api\n", " --target-c-from_device TARGET_C_FROM_DEVICE\n", " target c from_device\n", " --target-c-libs TARGET_C_LIBS\n", " target c libs options\n", " --target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT\n", " target c constants-byte-alignment\n", " --target-c-executor TARGET_C_EXECUTOR\n", " target c executor string\n", " --target-c-link-params TARGET_C_LINK_PARAMS\n", " target c link-params\n", " --target-c-model TARGET_C_MODEL\n", " target c model string\n", " --target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT\n", " target c workspace-byte-alignment\n", " --target-c-system-lib TARGET_C_SYSTEM_LIB\n", " target c system-lib\n", " --target-c-tag TARGET_C_TAG\n", " target c tag string\n", " --target-c-interface-api TARGET_C_INTERFACE_API\n", " target c interface-api string\n", " --target-c-mcpu TARGET_C_MCPU\n", " target c mcpu string\n", " --target-c-device TARGET_C_DEVICE\n", " target c device string\n", " --target-c-runtime TARGET_C_RUNTIME\n", " target c runtime string\n", " --target-c-keys TARGET_C_KEYS\n", " target c keys options\n", " --target-c-march TARGET_C_MARCH\n", " target c march string\n", "\n", "target hexagon:\n", " --target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE\n", " target hexagon from_device\n", " --target-hexagon-libs TARGET_HEXAGON_LIBS\n", " target hexagon libs options\n", " --target-hexagon-mattr TARGET_HEXAGON_MATTR\n", " target hexagon mattr options\n", " --target-hexagon-model TARGET_HEXAGON_MODEL\n", " target hexagon model string\n", " --target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS\n", " target hexagon llvm-options options\n", " --target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE\n", " target hexagon mtriple string\n", " --target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB\n", " target hexagon system-lib\n", " --target-hexagon-mcpu TARGET_HEXAGON_MCPU\n", " target hexagon mcpu string\n", " --target-hexagon-device TARGET_HEXAGON_DEVICE\n", " target hexagon device string\n", " --target-hexagon-tag TARGET_HEXAGON_TAG\n", " target hexagon tag string\n", " --target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS\n", " target hexagon link-params\n", " --target-hexagon-keys TARGET_HEXAGON_KEYS\n", " target hexagon keys options\n", "\n", "executor graph:\n", " --executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS\n", " Executor graph link-params\n", "\n", "executor aot:\n", " --executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT\n", " Executor aot 
workspace-byte-alignment\n", " --executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API\n", " Executor aot unpacked-api\n", " --executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API\n", " Executor aot interface-api string\n", " --executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS\n", " Executor aot link-params\n", "\n", "runtime cpp:\n", " --runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB\n", " Runtime cpp system-lib\n", "\n", "runtime crt:\n", " --runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB\n", " Runtime crt system-lib\n" ] } ], "source": [ "!python -m tvm.driver.tvmc compile --help" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that tuning data for the model has been collected, we can recompile it with optimized operators to speed up computation. The ``--tuning-records`` option below points the compiler at the auto-tuning log (``resnet50-v2-7-autotuner_records.json``) gathered during the tuning step." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "!python -m tvm.driver.tvmc compile \\\n", " --target \"llvm\" \\\n", " --tuning-records resnet50-v2-7-autotuner_records.json \\\n", " --output resnet50-v2-7-tvm_autotuned.tar \\\n", " ../../_models/resnet50-v2-7.onnx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Verify that the optimized model runs and produces the same predictions:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "class='n02123045 tabby, tabby cat' with probability=0.621104\n", "class='n02123159 tiger cat' with probability=0.356378\n", "class='n02124075 Egyptian cat' with probability=0.019712\n", "class='n02129604 tiger, Panthera tigris' with probability=0.001215\n", "class='n04040759 radiator' with probability=0.000262\n" ] } ], "source": [ "!python -m tvm.driver.tvmc run \\\n", " --inputs imagenet_cat.npz \\\n", " --output predictions.npz \\\n", " resnet50-v2-7-tvm_autotuned.tar\n", "\n", "!python postprocess.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparing the Tuned and Untuned Models\n", "\n", "TVMC provides basic performance benchmarking between models. You can specify the number of repetitions, and TVMC reports the model's run time (independent of runtime start-up). This gives a rough idea of how much tuning improves the model's performance. For example, on a test Intel i7 system the tuned model was observed to run about 47% faster than the untuned one; in the runs recorded below, the mean run time drops from roughly 52 ms (untuned) to about 41 ms (tuned)." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Execution time summary:\n", " mean (ms) median (ms) max (ms) min (ms) std (ms) \n", " 41.2506 40.8879 54.4469 36.7249 2.4430 \n", " \n" ] } ], "source": [ "!python -m tvm.driver.tvmc run \\\n", " --inputs imagenet_cat.npz \\\n", " --output predictions.npz \\\n", " --print-time \\\n", " --repeat 100 \\\n", " resnet50-v2-7-tvm_autotuned.tar" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Execution time summary:\n", " mean (ms) median (ms) max (ms) min (ms) std (ms) \n", " 51.8327 52.5906 67.5374 42.9440 4.4040 \n", " \n" ] } ], "source": [ "!python -m tvm.driver.tvmc run \\\n", " --inputs imagenet_cat.npz \\\n", " --output predictions.npz \\\n", " --print-time \\\n", " --repeat 100 \\\n", " resnet50-v2-7-tvm.tar" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "In this tutorial we introduced TVMC, the command-line driver for TVM. We demonstrated how to compile, run, and tune a model, and discussed the need to pre-process inputs and post-process outputs. After tuning, we showed how to compare the performance of the untuned and optimized models.\n", "\n", "Here we presented a simple example using ResNet-50 v2 locally. However, TVMC supports many more features, including cross-compilation, remote execution, and profiling/benchmarking.\n", "\n", "To see what other options are available, run ``tvmc --help``.\n", "\n", "The [Compiling and Optimizing a Model with the Python Interface](auto_tuning_with_pyton) tutorial covers the same compilation and optimization steps using the Python interface." ] } ], "metadata": { "interpreter": { "hash": "f0a0fcc4cb7375f8ee907b3c51d5b9d65107fda1aab037a85df7b0c09b870b98" }, "kernelspec": { "display_name": "Python 3.10.4 ('tvm-mxnet': conda)", "language": "python", "name": "python3" },
"language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }