tvm.relay.backend#

Backend codegen modules for relay.

The Python interface to the Relay reference interpreter.

class tvm.relay.backend.interpreter.ConstructorValue(tag, fields, constructor)[source]#
class tvm.relay.backend.interpreter.Executor[source]#

An abstract interface for executing Relay programs.

_convert_args(expr, args, kwargs)[source]#

Convert the combination of arguments and keyword arguments into a sequence of arguments that may be passed to a Relay evaluator.

We first provide all positional arguments, and then attempt to fill in the remaining arguments using the keyword arguments. We map the keyword arguments to the corresponding parameters; if there is an ambiguity between positional and keyword arguments, this procedure raises an error.

expr: relay.Expr

The expression to evaluate.

args: List[tvm.nd.NDArray]

The arguments to pass to the evaluator.

kwargs: Dict[str, tvm.nd.NDArray]

The keyword arguments to pass to the evaluator.

Returns

args: List[tvm.nd.NDArray]

The new arguments with all keyword arguments placed in the correct slots.

_make_executor(expr=None)[source]#

Construct a Python function that implements the evaluation of the given expression.

expr: Optional[relay.Expr]

The Relay expression to execute.

executor: function

A Python function which implements the behavior of expr.

evaluate(expr=None, binds=None)[source]#

Evaluate a Relay expression on the executor.

expr: Optional[tvm.relay.Expr]

The expression to evaluate.

binds: Optional[Map[tvm.relay.Var, tvm.relay.Expr]]

Additional bindings of free variables.

val: Union[function, Object]

The evaluation result.
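For illustration, a minimal sketch of the evaluate flow, assuming a small module built just for this example (argument handling goes through _convert_args):

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import relay

    x = relay.var("x", shape=(2,), dtype="float32")
    y = relay.var("y", shape=(2,), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x, y], x + y))

    # evaluate() on the module's main function returns a Python callable
    # (built by _make_executor).
    f = relay.create_executor(kind="debug", mod=mod, device=tvm.cpu(), target="llvm").evaluate()

    a = np.ones(2, dtype="float32")
    b = np.full(2, 2.0, dtype="float32")
    # Positional and keyword arguments are merged by _convert_args;
    # binding y both positionally and by keyword would raise an error.
    print(f(a, y=b))  # [3. 3.]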

class tvm.relay.backend.interpreter.Interpreter(mod, device, target)[source]#

Simple interpreter interface.

mod: tvm.IRModule

The module to support the execution.

device: Device

The runtime device to run the code on.

target: tvm.Target

The target option to build the function. Only homogeneous execution is supported.

CAUTION: Despite the API, the module is prepared upon each call to evaluate rather than once in create_executor. That is:

.. code-block:: python

    executor = relay.create_executor(kind="debug", mod=module)
    a = executor.evaluate(expr)(args1)
    b = executor.evaluate(expr)(args2)

will prepare all the bindings in module twice. For efficiency, try to hoist calls to evaluate as high as possible, preferably immediately after create_executor:

.. code-block:: python

    func = relay.create_executor(kind="debug", mod=module).evaluate(expr)
    a = func(args1)
    b = func(args2)

class tvm.relay.backend.interpreter.RefValue(value)[source]#

TE compiler engine (replacing legacy compile_engine).

class tvm.relay.backend.te_compiler.CCacheKey(source_func, target)[source]#

Key in the TE Compiler.

source_func: tvm.relay.Function

The source function.

target: tvm.Target

The target we want to run the function on.

class tvm.relay.backend.te_compiler.CCacheValue[source]#

Value in the TE Compiler, including usage statistics.

class tvm.relay.backend.te_compiler.LoweredOutput(outputs, implement)[source]#

Lowered output.

class tvm.relay.backend.te_compiler.TECompiler[source]#

TECompiler to get lowered code.

clear()[source]#

Clear the existing cached functions.

items()[source]#

List items in the cache.

Returns

item_list: List[Tuple[CCacheKey, CCacheValue]]

The list of items.

jit(source_func, target=None)[source]#

JIT a source_func to a tvm.runtime.PackedFunc.

source_func: Union[tvm.relay.Function, CCacheKey]

The source Relay function.

target: tvm.Target

The target platform.

jited_func: tvm.runtime.PackedFunc

The JIT-compiled function.

lower(source_func, target=None, mod_name='default')[source]#

Lower a source_func to a CachedFunc.

source_func: Union[tvm.relay.Function, CCacheKey]

The source Relay function.

target: tvm.Target

The target platform.

cached_func: CachedFunc

The result of lowering.

tvm.relay.backend.te_compiler.get()[source]#

Get the global TE Compiler.

engine: tvm.relay.backend.TECompiler

The TE Compiler.
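As a minimal sketch of driving the compiler directly, assuming (as the fusion pass normally guarantees) that the function is marked as a primitive before lowering:

.. code-block:: python

    import tvm
    from tvm import relay
    from tvm.relay.backend import te_compiler

    x = relay.var("x", shape=(4,), dtype="float32")
    f = relay.Function([x], relay.exp(x))
    # Lowering expects a fused primitive function; mark it manually here.
    f = f.with_attr("Primitive", tvm.tir.IntImm("int32", 1))

    engine = te_compiler.get()
    cached = engine.lower(f, target="llvm")  # CachedFunc
    packed = engine.jit(f, target="llvm")    # tvm.runtime.PackedFunc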

tvm.relay.backend.te_compiler.get_shape(shape)[source]#

Convert the shape to the correct dtype and vars.

tvm.relay.backend.te_compiler.get_valid_implementations(op, attrs, inputs, out_type, target)[source]#

Get all valid implementations from the op strategy.

Note that this function doesn't support ops with symbolic input shapes.

op: tvm.ir.Op

Relay operator.

attrs: object

The op attributes.

inputs: List[tvm.te.Tensor]

Input tensors to the op.

out_type: relay.Type

The output type.

target: tvm.target.Target

The target to compile the op for.

ret: List[relay.op.OpImplementation]

The list of all valid op implementations.

tvm.relay.backend.te_compiler.select_implementation(op, attrs, inputs, out_type, target, use_autotvm=True)[source]#

Select the best implementation from the op strategy.

If use_autotvm is True, it’ll first try to find the best implementation based on AutoTVM profile results. If no AutoTVM profile result is found, it’ll choose the implementation with highest plevel.

If use_autotvm is False, it’ll directly choose the implementation with highest plevel.

Note that this function doesn't support ops with symbolic input shapes.

op: tvm.ir.Op

Relay operator.

attrs: object

The op attributes.

inputs: List[tvm.te.Tensor]

Input tensors to the op.

out_type: relay.Type

The output type.

target: tvm.target.Target

The target to compile the op for.

use_autotvm: bool

Whether to query AutoTVM to pick the best implementation.

ret: tuple(relay.op.OpImplementation, List[tvm.te.Tensor])

The best op implementation and the corresponding output tensors.
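A hedged sketch of the call, using the add operator (whose calls carry no attributes, so attrs is None here) and skipping AutoTVM:

.. code-block:: python

    import tvm
    from tvm import relay, te
    from tvm.relay.backend.te_compiler import select_implementation

    op = relay.op.get("add")
    lhs = te.placeholder((4, 4), dtype="float32", name="lhs")
    rhs = te.placeholder((4, 4), dtype="float32", name="rhs")
    out_type = relay.TensorType((4, 4), "float32")
    target = tvm.target.Target("llvm")

    # With use_autotvm=False, the implementation with the highest plevel wins.
    impl, outputs = select_implementation(
        op, None, [lhs, rhs], out_type, target, use_autotvm=False
    )
    print(impl.name)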

A compiler from a Relay expression to TVM’s graph executor.

The compiler is built from a few pieces.

First we define a compiler from a single Relay expression to the graph language. We require the expression to be a function. The function’s parameters correspond to the placeholder/inputs and model parameters found in the computation graph representation. The body of the function represents the computation graph.

The compiler’s output is a program in the graph language, which is composed of Node, NodeRef, InputNode, OpNode. This “little language” represents programs in TVM’s graph format.

To connect to the graph executor, we use a printer that converts our graph format into TVM’s JSON format. The resulting string can be loaded by contrib.graph_executor or any other TVM runtime compatible systems.

class tvm.relay.backend.graph_executor_codegen.GraphExecutorCodegen(mod, target)[source]#

The compiler from Relay to the TVM runtime system.

codegen(ir_module, func)[source]#

Compile a single function into a graph.

ir_module: tvm.ir.Module

The module to compile.

func: tvm.relay.Expr

The function to compile.

graph_json: str

The graph JSON that can be consumed by the runtime.

mod: IRModule or Dict[Target, IRModule]

The lowered functions.

params: Dict[str, tvm.nd.NDArray]

Additional constant parameters.
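In practice this codegen is driven through relay.build; a minimal sketch of that public path:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    x = relay.var("x", shape=(1, 4), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.softmax(x)))

    # relay.build runs GraphExecutorCodegen internally and bundles the graph
    # JSON, the compiled library, and the constant params into one module.
    lib = relay.build(mod, target="llvm")
    dev = tvm.cpu()
    m = graph_executor.GraphModule(lib["default"](dev))
    m.set_input("x", tvm.nd.array(np.ones((1, 4), dtype="float32")))
    m.run()
    print(m.get_output(0))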

The Relay Virtual Machine.

Implements a Python interface to compiling and executing on the Relay VM.

class tvm.relay.backend.vm.VMCompiler[source]#

Compiler that compiles Relay module to VM executable.

_tophub_context(raw_targets)[source]#

Get the autotvm context.

codegen()[source]#

Generate the kernel library.

get_exec()[source]#

Get the VM executable.

exec: tvm.runtime.vm.Executable

The VM executable that contains both library code and bytecode.

get_params()[source]#

Return the updated weights.

lower(mod, target=None, target_host=None)[source]#

Lower the module to VM bytecode.

mod: tvm.IRModule

The Relay module to build.

target: any multi-target-like object; see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

target_host: any target-like object; see Target.canon_target

Host compilation target, if target is a device target.

optimize(mod, target=None, target_host=None, params=None)[source]#

Helper method that optimizes a Relay module via VM.

mod: tvm.IRModule

target: any multi-target-like object; see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

target_host: any target-like object; see Target.canon_target

Host compilation target, if target is a device target.

params: dict of str to NDArray

Input parameters to the graph that do not change during inference time. Used for constant folding.

mod: tvm.IRModule

The optimized Relay module.

params: dict

The parameters of the final module.

set_params(params)[source]#

Set constant parameters for the model.

params: dict of str to NDArray

Input parameters to the graph that do not change during inference time. Used for constant folding.
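A sketch of the full compiler flow, mirroring what the vm.compile helper below does internally:

.. code-block:: python

    import tvm
    from tvm import relay
    from tvm.relay.backend.vm import VMCompiler

    x = relay.var("x", shape=(2, 2), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], x * x))

    comp = VMCompiler()
    comp.lower(mod, target="llvm")  # Relay -> VM bytecode
    comp.codegen()                  # generate the kernel library
    exe = comp.get_exec()           # executable with code and bytecode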

class tvm.relay.backend.vm.VMExecutor(mod, device, target)[source]#

An implementation of the executor interface for the Relay VM.

A useful interface for experimentation and debugging; the VM can also be used directly through the API supported by tvm.runtime.vm.

mod: IRModule

The module to support the execution.

device: Device

The runtime device to run the code on.

target: any multi-target-like object; see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

tvm.relay.backend.vm.compile(mod, target=None, target_host=None, params=None)[source]#

Compile the module to VM executable. A helper function for VMCompiler.

mod: tvm.IRModule

The Relay module to build.

target: any multi-target-like object; see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

target_host: None, or any target-like object; see Target.canon_target

Host compilation target, if target is a device target. When TVM compiles device-specific programs such as CUDA, we also need host (CPU) side code to interact with the driver to set up the dimensions and parameters correctly. target_host is used to specify the host-side codegen target. By default, llvm is used if it is enabled; otherwise a stackvm interpreter is used.

params: dict of str to NDArray

Input parameters to the graph that do not change during inference time. Used for constant folding.

exec: tvm.runtime.vm.Executable

The VM executable that contains both library code and bytecode.
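A minimal end-to-end sketch: compile a module, then run it on the VM runtime:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.relay.backend import vm as relay_vm
    from tvm.runtime.vm import VirtualMachine

    x = relay.var("x", shape=(2, 2), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], x + x))

    exe = relay_vm.compile(mod, target="llvm")
    vm = VirtualMachine(exe, tvm.cpu())
    out = vm.invoke("main", tvm.nd.array(np.ones((2, 2), dtype="float32")))
    print(out)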