Relay Core Tensor Operators
This page lists the core tensor operator primitives predefined in tvm.relay. The core tensor operator primitives cover typical deep-learning workloads. They can represent workloads coming from frontend frameworks and provide the basic building blocks for optimization. Since deep learning is a fast-evolving field, some operators may not yet be supported.
Note
This document lists the function signatures of these operators as exposed in the Python frontend.
Overview of Operators
Level 1: Basic Operators
This level enables fully connected multi-layer perceptrons. A short usage sketch follows the operator list below.
tvm.relay.log : Compute elementwise log of data.
tvm.relay.sqrt : Compute elementwise sqrt of data.
tvm.relay.rsqrt : Compute elementwise rsqrt of data.
tvm.relay.exp : Compute elementwise exp of data.
tvm.relay.sigmoid : Compute elementwise sigmoid of data.
tvm.relay.add : Addition with numpy-style broadcasting.
tvm.relay.subtract : Subtraction with numpy-style broadcasting.
tvm.relay.multiply : Multiplication with numpy-style broadcasting.
tvm.relay.divide : Division with numpy-style broadcasting.
tvm.relay.mod : Mod with numpy-style broadcasting.
tvm.relay.tanh : Compute element-wise tanh of data.
tvm.relay.concatenate : Concatenate the input tensors along the given axis.
tvm.relay.expand_dims : Insert num_newaxis axes at the position given by axis.
tvm.relay.nn.softmax : Computes softmax.
tvm.relay.nn.log_softmax : Computes log softmax.
tvm.relay.nn.relu : Rectified linear unit.
tvm.relay.nn.dropout : Applies the dropout operation to the input array.
tvm.relay.nn.batch_norm : Batch normalization layer (Ioffe and Szegedy, 2014).
tvm.relay.nn.bias_add : add_bias operator.
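As a rough illustration of how these Level 1 primitives compose in the Python frontend, the sketch below builds a tiny expression from tvm.relay.add, tvm.relay.nn.relu, and tvm.relay.nn.softmax. The relay.var / relay.Function / tvm.IRModule plumbing is standard Relay usage rather than part of the operator list, and the shapes and variable names are invented for the example.

    import tvm
    from tvm import relay

    # Element-wise and broadcasting primitives compose into a Relay expression.
    x = relay.var("x", shape=(1, 16), dtype="float32")
    b = relay.var("b", shape=(16,), dtype="float32")
    y = relay.nn.relu(relay.add(x, b))   # broadcasted add followed by ReLU
    z = relay.nn.softmax(y)              # softmax over the last axis by default
    f = relay.Function([x, b], z)
    print(tvm.IRModule.from_expr(f))     # print the Relay IR for inspection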
Level 2: Convolutions
This level enables typical convnet models. A short usage sketch follows the operator list below.
tvm.relay.nn.conv2d : 2D convolution.
tvm.relay.nn.conv2d_transpose : Two dimensional transposed convolution operator.
tvm.relay.nn.conv3d : 3D convolution.
tvm.relay.nn.conv3d_transpose : 3D transpose convolution.
tvm.relay.nn.dense : Dense operator.
tvm.relay.nn.max_pool2d : 2D maximum pooling operator.
tvm.relay.nn.max_pool3d : 3D maximum pooling operator.
tvm.relay.nn.avg_pool2d : 2D average pooling operator.
tvm.relay.nn.avg_pool3d : 3D average pooling operator.
tvm.relay.nn.global_max_pool2d : 2D global maximum pooling operator.
tvm.relay.nn.global_avg_pool2d : 2D global average pooling operator.
tvm.relay.nn.upsampling : Upsampling.
tvm.relay.nn.upsampling3d : 3D Upsampling.
tvm.relay.nn.batch_flatten : BatchFlatten.
tvm.relay.nn.pad : Padding.
tvm.relay.nn.lrn : This operator takes data as input and does local response normalization.
tvm.relay.nn.l2_normalize : Perform L2 normalization on the input data.
tvm.relay.nn.bitpack : Tensor packing for bitserial operations.
tvm.relay.nn.bitserial_dense : Bitserial Dense operator.
tvm.relay.nn.bitserial_conv2d : 2D convolution using bitserial computation.
tvm.relay.nn.contrib_conv2d_winograd_without_weight_transform : 2D convolution with winograd algorithm.
tvm.relay.nn.contrib_conv2d_winograd_weight_transform : Weight Transformation part for 2D convolution with winograd algorithm.
tvm.relay.nn.contrib_conv3d_winograd_without_weight_transform : 3D convolution with winograd algorithm.
tvm.relay.nn.contrib_conv3d_winograd_weight_transform : Weight Transformation part for 3D convolution with winograd algorithm.
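The following sketch, intended as an illustration rather than canonical usage, chains a few of the Level 2 operators (tvm.relay.nn.conv2d, nn.max_pool2d, nn.batch_flatten, nn.dense) into a small convnet fragment. All shapes, channel counts, and variable names are made up for the example.

    import tvm
    from tvm import relay

    data = relay.var("data", shape=(1, 3, 32, 32), dtype="float32")       # NCHW input
    conv_w = relay.var("conv_w", shape=(8, 3, 3, 3), dtype="float32")     # OIHW kernel
    dense_w = relay.var("dense_w", shape=(10, 8 * 16 * 16), dtype="float32")

    conv = relay.nn.conv2d(data, conv_w, kernel_size=(3, 3), padding=(1, 1), channels=8)
    pool = relay.nn.max_pool2d(conv, pool_size=(2, 2), strides=(2, 2))    # 32x32 -> 16x16
    flat = relay.nn.batch_flatten(pool)                                   # (1, 2048)
    out = relay.nn.dense(flat, dense_w)                                   # (1, 10)
    f = relay.Function([data, conv_w, dense_w], out)
    print(tvm.IRModule.from_expr(f))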
Level 3: Additional Math and Transform Operators
This level enables additional math and transform operators. A short usage sketch follows the operator list below.
tvm.relay.nn.leaky_relu : This operator takes data as input and does Leaky version of a Rectified Linear Unit.
tvm.relay.nn.prelu : This operator takes data as input and does Leaky version of a Rectified Linear Unit.
tvm.relay.reshape : Reshape the input array.
tvm.relay.reshape_like : Reshapes the input tensor by the size of another tensor.
tvm.relay.copy : Copy a tensor.
tvm.relay.transpose : Permutes the dimensions of an array.
tvm.relay.squeeze : Squeeze axes in the array.
tvm.relay.floor : Compute element-wise floor of data.
tvm.relay.ceil : Compute element-wise ceil of data.
tvm.relay.sign : Compute element-wise sign of data.
tvm.relay.trunc : Compute element-wise trunc of data.
tvm.relay.clip : Clip the elements in a between a_min and a_max.
tvm.relay.round : Compute element-wise round of data.
tvm.relay.abs : Compute element-wise absolute of data.
tvm.relay.negative : Compute element-wise negative of data.
tvm.relay.take : Take elements from an array along an axis.
tvm.relay.zeros : Fill array with zeros.
tvm.relay.zeros_like : Returns an array of zeros, with same type and shape as the input.
tvm.relay.ones : Fill array with ones.
tvm.relay.ones_like : Returns an array of ones, with same type and shape as the input.
tvm.relay.gather : Gather values along given axis from given indices.
tvm.relay.gather_nd : Gather elements or slices from data and store to a tensor whose shape is defined by indices.
tvm.relay.full : Fill array with scalar value.
tvm.relay.full_like : Return a scalar value array with the same shape and type as the input array.
tvm.relay.cast : Cast input tensor to data type.
tvm.relay.reinterpret : Reinterpret input tensor to data type.
tvm.relay.split : Split input tensor along axis by sections or indices.
tvm.relay.arange : Return evenly spaced values within a given interval.
tvm.relay.meshgrid : Create coordinate matrices from coordinate vectors.
tvm.relay.stack : Join a sequence of arrays along a new axis.
tvm.relay.repeat : Repeats elements of an array.
tvm.relay.tile : Repeats the whole array multiple times.
tvm.relay.reverse : Reverses the order of elements along given axis while preserving array shape.
tvm.relay.reverse_sequence : Reverse the tensor for variable length slices.
tvm.relay.unravel_index : Convert a flat index or array of flat indices into a tuple of coordinate arrays.
tvm.relay.sparse_to_dense : Converts a sparse representation into a dense tensor.
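To show how the transform operators in this level are typically strung together, here is a small, non-authoritative sketch using tvm.relay.reshape, transpose, clip, and take. The concrete shapes and indices are placeholders.

    import numpy as np
    import tvm
    from tvm import relay

    x = relay.var("x", shape=(2, 3, 4), dtype="float32")
    idx = relay.const(np.array([0, 2], dtype="int32"))   # indices for take

    y = relay.reshape(x, newshape=(2, 12))               # merge the trailing axes
    y = relay.transpose(y, axes=(1, 0))                  # (12, 2)
    y = relay.clip(y, a_min=0.0, a_max=6.0)              # ReLU6-style clipping
    y = relay.take(y, idx, axis=0)                       # pick rows 0 and 2 -> (2, 2)
    f = relay.Function([x], y)
    print(tvm.IRModule.from_expr(f))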
Level 4: Broadcast and Reduction Operators
tvm.relay.right_shift : Right shift with numpy-style broadcasting.
tvm.relay.left_shift : Left shift with numpy-style broadcasting.
tvm.relay.equal : Broadcasted elementwise test for (lhs == rhs).
tvm.relay.not_equal : Broadcasted elementwise test for (lhs != rhs).
tvm.relay.greater : Broadcasted elementwise test for (lhs > rhs).
tvm.relay.greater_equal : Broadcasted elementwise test for (lhs >= rhs).
tvm.relay.less : Broadcasted elementwise test for (lhs < rhs).
tvm.relay.less_equal : Broadcasted elementwise test for (lhs <= rhs).
tvm.relay.all : Computes the logical AND of boolean array elements over given axes.
tvm.relay.any : Computes the logical OR of boolean array elements over given axes.
tvm.relay.logical_and : logical AND with numpy-style broadcasting.
tvm.relay.logical_or : logical OR with numpy-style broadcasting.
tvm.relay.logical_not : Compute element-wise logical not of data.
tvm.relay.logical_xor : logical XOR with numpy-style broadcasting.
tvm.relay.maximum : Maximum with numpy-style broadcasting.
tvm.relay.minimum : Minimum with numpy-style broadcasting.
tvm.relay.power : Power with numpy-style broadcasting.
tvm.relay.where : Selecting elements from either x or y depending on the value of the condition.
tvm.relay.argmax : Returns the indices of the maximum values along an axis.
tvm.relay.argmin : Returns the indices of the minimum values along an axis.
tvm.relay.sum : Computes the sum of array elements over given axes.
tvm.relay.max : Computes the max of array elements over given axes.
tvm.relay.min : Computes the min of array elements over given axes.
tvm.relay.mean : Computes the mean of array elements over given axes.
tvm.relay.variance : Computes the variance of data over given axes.
tvm.relay.std : Computes the standard deviation of data over given axes.
tvm.relay.mean_variance : Computes the mean and variance of data over given axes.
tvm.relay.mean_std : Computes the mean and standard deviation of data over given axes.
tvm.relay.prod : Computes the products of array elements over given axes.
tvm.relay.strided_slice : Strided slice of an array.
tvm.relay.broadcast_to : Return a scalar value array with the same type, broadcast to the provided shape.
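As an informal example of the broadcast and reduction group, the sketch below masks one tensor against another with tvm.relay.greater and tvm.relay.where, then reduces with tvm.relay.sum. Names and shapes are illustrative only.

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(4, 8), dtype="float32")
    y = relay.var("y", shape=(1, 8), dtype="float32")

    mask = relay.greater(x, y)                          # broadcasted elementwise comparison
    masked = relay.where(mask, x, relay.zeros_like(x))  # keep entries where x > y
    row_sum = relay.sum(masked, axis=1)                 # reduce over the last axis -> (4,)
    f = relay.Function([x, y], row_sum)
    print(tvm.IRModule.from_expr(f))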
Level 5: Vision/Image Operators
tvm.relay.image.resize1d : Image resize1d operator.
tvm.relay.image.resize2d : Image resize2d operator.
tvm.relay.image.resize3d : Image resize3d operator.
tvm.relay.image.crop_and_resize : Crop input images and resize them.
tvm.relay.image.dilation2d : Morphological Dilation 2D.
tvm.relay.vision.multibox_prior : Generate prior(anchor) boxes from data, sizes and ratios.
tvm.relay.vision.multibox_transform_loc : Location transformation for multibox detection.
tvm.relay.vision.non_max_suppression : Non-maximum suppression operations.
tvm.relay.vision.yolo_reorg : Yolo reorg operation used in darknet models.
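A hedged sketch of the image dialect: it upsamples an NCHW tensor with tvm.relay.image.resize2d (older TVM releases expose a similar tvm.relay.image.resize instead). The input shape and target size are invented.

    import tvm
    from tvm import relay

    img = relay.var("img", shape=(1, 3, 32, 32), dtype="float32")
    up = relay.image.resize2d(img, size=(64, 64), layout="NCHW", method="linear")
    f = relay.Function([img], up)
    print(tvm.IRModule.from_expr(f))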
Level 6: Algorithm Operators
tvm.relay.argsort : Performs sorting along the given axis and returns an array of indices having same shape as an input array that index data in sorted order.
tvm.relay.topk : Get the top k elements in an input tensor along the given axis.
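For the algorithm operators, a brief sketch with tvm.relay.topk; with ret_type="both" it yields a tuple of values and indices. The value of k and the input shape are arbitrary.

    import tvm
    from tvm import relay

    scores = relay.var("scores", shape=(1, 10), dtype="float32")
    out = relay.topk(scores, k=3, axis=-1, ret_type="both")      # tuple of (values, indices)
    f = relay.Function([scores], relay.Tuple([out[0], out[1]]))
    print(tvm.IRModule.from_expr(f))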
Level 10: Temporary Operators
This level supports backpropagation of broadcast operators. It is temporary. A short usage sketch follows the operator list below.
tvm.relay.broadcast_to_like : Broadcast the first input to match the shape of the second input.
tvm.relay.collapse_sum_like : Sum-reduce the first input so that the result matches the shape of the second input.
tvm.relay.slice_like : Slice the first input with respect to the second input.
tvm.relay.shape_of : Get shape of a tensor.
tvm.relay.ndarray_size : Get number of elements of input tensor.
tvm.relay.layout_transform : Transform the layout of a tensor.
tvm.relay.device_copy : Copy data from the source device to the destination device.
tvm.relay.annotation.on_device : Annotates a body expression with device constraints.
tvm.relay.reverse_reshape : Reshapes the input array where the special values are inferred from right to left.
tvm.relay.sequence_mask : Sets all elements outside the expected length of the sequence to a constant value.
tvm.relay.nn.batch_matmul : Compute batch matrix multiplication of tensor_a and tensor_b.
tvm.relay.nn.adaptive_max_pool2d : 2D adaptive max pooling operator.
tvm.relay.nn.adaptive_avg_pool2d : 2D adaptive average pooling operator.
tvm.relay.one_hot : Returns a one-hot tensor where the locations represented by indices take value on_value, other locations take value off_value.
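To hint at how the temporary operators appear in graphs, this illustrative snippet combines tvm.relay.nn.batch_matmul (which treats the second operand as transposed by default) with tvm.relay.shape_of. All shapes are invented.

    import tvm
    from tvm import relay

    a = relay.var("a", shape=(4, 8, 16), dtype="float32")
    b = relay.var("b", shape=(4, 32, 16), dtype="float32")

    bmm = relay.nn.batch_matmul(a, b)     # (4, 8, 32); second operand transposed by default
    shape = relay.shape_of(bmm)           # 1-D tensor holding the result shape
    f = relay.Function([a, b], relay.Tuple([bmm, shape]))
    print(tvm.IRModule.from_expr(f))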
Level 11: Dialect Operators
This level supports dialect operators.
tvm.relay.qnn.op.add : Quantized addition with numpy-style broadcasting.
tvm.relay.qnn.op.batch_matmul : Computes batch matrix multiplication of x and y when x and y are data in batch.
tvm.relay.qnn.op.concatenate : Concatenate the quantized input tensors along the given axis.
tvm.relay.qnn.op.conv2d : Quantized 2D convolution.
tvm.relay.qnn.op.conv2d_transpose : This operator deconvolves quantized data with quantized kernel.
tvm.relay.qnn.op.dense : Qnn Dense operator.
tvm.relay.qnn.op.dequantize : Dequantize op. This operator takes quantized int8 and uint8 as input and produces dequantized float32 as output.
tvm.relay.qnn.op.mul : Quantized multiplication with numpy-style broadcasting.
tvm.relay.qnn.op.quantize : Quantize op. This operator takes float32 as input and produces quantized int8 or uint8 as output.
tvm.relay.qnn.op.requantize : Requantize operator.
tvm.relay.qnn.op.rsqrt : Quantized reciprocal square root.
tvm.relay.qnn.op.simulated_dequantize : Simulated Dequantize op. Mimics the dequantize op but has more flexibility in valid inputs and always outputs the same type as the input.
tvm.relay.qnn.op.simulated_quantize : Simulated Quantize op. Mimics the quantize op but has more flexibility in valid inputs and always outputs the same type as the input.
tvm.relay.qnn.op.subtract : Quantized subtraction with numpy-style broadcasting.
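Finally, a non-authoritative sketch of a QNN round trip: tvm.relay.qnn.op.quantize followed by tvm.relay.qnn.op.dequantize. The scale and zero-point constants are made up for the example.

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(1, 8), dtype="float32")
    scale = relay.const(0.05, "float32")       # quantization scale (illustrative)
    zero_point = relay.const(0, "int32")       # quantization zero point (illustrative)

    q = relay.qnn.op.quantize(x, scale, zero_point, out_dtype="int8")
    dq = relay.qnn.op.dequantize(q, scale, zero_point)
    f = relay.Function([x], dq)
    print(tvm.IRModule.from_expr(f))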