Custom TVM Datatypes#

Authors: Gus Smith, Andrew Liu

In this tutorial, we will show you how to use the Bring Your Own Datatypes framework to work with your own custom datatypes in TVM. Note that the Bring Your Own Datatypes framework currently only handles software emulated versions of datatypes; the framework does not support compiling for custom accelerator datatypes out of the box.

Datatype Libraries#

The Bring Your Own Datatypes framework allows users to register their own datatype implementations alongside TVM's native datatypes (such as float). In general, these datatype implementations appear in the form of libraries.

In this section, we will use an example library that is already implemented, located at 3rdparty/byodt/myfloat.cc. The datatype, which we dub "myfloat", is really just IEEE 754 floating point under the hood, but it provides a useful example to show that any datatype can be used in the BYODT framework.

Setup#

Since we do not use any 3rdparty library, no setup is needed.

If you would like to try this with your own datatype library, first bring the library's functions into the process space with CDLL:

import ctypes

ctypes.CDLL('my-datatype-lib.so', ctypes.RTLD_GLOBAL)

A Simple TVM Program#

We will begin by writing a simple program in TVM; afterwards, we will rewrite it to use custom datatypes.

import tvm
from tvm import relay

# Our basic program: Z = X + Y
x = relay.var("x", shape=(3,), dtype="float32")
y = relay.var("y", shape=(3,), dtype="float32")
z = x + y
program = relay.Function([x, y], z)
module = tvm.IRModule.from_expr(program)

Now, we use numpy to create random inputs to feed into this program:

import numpy as np

np.random.seed(23)  # for reproducibility

x_input = np.random.rand(3).astype("float32")
y_input = np.random.rand(3).astype("float32")
print(f"x: {x_input}")
print(f"y: {y_input}")
x: [0.51729786 0.9469626  0.7654598 ]
y: [0.28239584 0.22104536 0.6862221 ]

Finally, we're ready to run the program:

z_output = relay.create_executor(mod=module).evaluate()(x_input, y_input)
print("z: {}".format(z_output))
z: [0.7996937 1.168008  1.4516819]

Adding Custom Datatypes#

Now, we will do the same thing, but we will use a custom datatype for the intermediate computation.

We use the same input variables x and y as above, but before computing x + y, we first cast both x and y to a custom datatype via the relay.cast(...) call.

Note how we specify the custom datatype: we use the special custom[...] syntax. Additionally, note the "32" after the datatype: this is the bitwidth of the custom datatype. This tells TVM that each instance of myfloat is 32 bits wide.

try:
    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
        x_myfloat = relay.cast(x, dtype="custom[myfloat]32")
        y_myfloat = relay.cast(y, dtype="custom[myfloat]32")
        z_myfloat = x_myfloat + y_myfloat
        z = relay.cast(z_myfloat, dtype="float32")
except tvm.TVMError as e:
    # Print last line of error
    print(str(e).split("\n")[-1])

Trying to generate this program throws an error from TVM. TVM does not know how to handle any custom datatype out of the box! We first have to register the custom type with TVM, giving it a name and a type code:


tvm.target.datatype.register("myfloat", 150)
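
As a quick, optional sanity check: tvm.target.datatype also exposes lookup helpers for registered types, so we would expect the registration above to round-trip (a hedged aside; the tutorial itself does not rely on these calls):

# Look up the registration we just made
print(tvm.target.datatype.get_type_code("myfloat"))  # 150
print(tvm.target.datatype.get_type_name(150))  # myfloat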

Note that the type code, 150, is currently chosen manually by the user. See TVMTypeCode::kCustomBegin in include/tvm/runtime/c_runtime_api.h. Now we can generate our program again:


x_myfloat = relay.cast(x, dtype="custom[myfloat]32")
y_myfloat = relay.cast(y, dtype="custom[myfloat]32")
z_myfloat = x_myfloat + y_myfloat
z = relay.cast(z_myfloat, dtype="float32")
program = relay.Function([x, y], z)
module = tvm.IRModule.from_expr(program)
module = relay.transform.InferType()(module)

Now we have a Relay program that uses myfloat!

print(program)
fn (%x: Tensor[(3), float32], %y: Tensor[(3), float32]) {
  %0 = cast(%x, dtype="custom[myfloat]32");
  %1 = cast(%y, dtype="custom[myfloat]32");
  %2 = add(%0, %1);
  cast(%2, dtype="float32")
}

Now that we can express our program without errors, let's try running it!

try:
    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
        z_output_myfloat = relay.create_executor("graph", mod=module).evaluate()(x_input, y_input)
        print("z: {}".format(y_myfloat))
except tvm.TVMError as e:
    # Print last line of error
    print(str(e).split("\n")[-1])
  Check failed: (lower) is false: Cast lowering function for target llvm destination type 150 source type 2 not found

Now, trying to compile this program throws an error. Let’s dissect this error.

The error is occurring during the process of lowering the custom datatype code to code that TVM can compile and run. TVM is telling us that it cannot find a lowering function for the Cast operation, when casting from source type 2 (float, in TVM), to destination type 150 (our custom datatype). When lowering custom datatypes, if TVM encounters an operation over a custom datatype, it looks for a user-registered lowering function, which tells it how to lower the operation to an operation over datatypes it understands. We have not told TVM how to lower Cast operations for our custom datatypes; thus, the source of this error.

To fix this error, we simply need to specify a lowering function:

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func(
        {
            (32, 32): "FloatToCustom32",  # cast from float32 to myfloat32
        }
    ),
    "Cast",
    "llvm",
    "float",
    "myfloat",
)

The register_op(...) call takes a lowering function, and a number of parameters which specify exactly the operation which should be lowered with the provided lowering function. In this case, the arguments we pass specify that this lowering function is for lowering a Cast from float to myfloat for target "llvm".

The lowering function passed into this call is very general: it should take an operation of the specified type (in this case, Cast) and return another operation which only uses datatypes which TVM understands.

In the general case, we expect users to implement operations over their custom datatypes using calls to an external library. In our example, our myfloat library implements a Cast from float to 32-bit myfloat in the function FloatToCustom32. To provide for the general case, we have made a helper function, create_lower_func(...), which does just this: given a dictionary, it replaces the given operation with a Call to the appropriate function name provided based on the op and the bit widths. It additionally removes usages of the custom datatype by storing the custom datatype in an opaque uint of the appropriate width; in our case, a uint32_t. For more information, see the source code.
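
To make the mechanism concrete, here is roughly what such a lowering function could look like if written by hand, sketched under the assumption that it mirrors what create_lower_func generates for our float-to-myfloat Cast (illustrative only; the tutorial itself uses the helper above):

from tvm import tir

def lower_cast_float_to_myfloat(op):
    # op is the tir.Cast node being lowered; op.value is its float32 operand.
    # Replace the cast with a call into our library, carrying the result in
    # an opaque uint32, since TVM cannot represent myfloat natively.
    return tir.call_pure_extern("uint32", "FloatToCustom32", op.value)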

# We can now re-try running the program:
try:
    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
        z_output_myfloat = relay.create_executor("graph", mod=module).evaluate()(x_input, y_input)
        print("z: {}".format(z_output_myfloat))
except tvm.TVMError as e:
    # Print last line of error
    print(str(e).split("\n")[-1])
  Check failed: (lower) is false: Add lowering function for target llvm type 150 not found

This new error tells us that the Add lowering function is not found, which is good news, as it’s no longer complaining about the Cast! We know what to do from here: we just need to register the lowering functions for the other operations in our program.

Note that for Add, create_lower_func takes in a dict where the key is an integer. For Cast operations, we require a 2-tuple to specify the src_bit_length and the dest_bit_length, while for all other operations, the bit length is the same between the operands so we only require one integer to specify bit_length.

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Add"}),
    "Add",
    "llvm",
    "myfloat",
)
tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({(32, 32): "Custom32ToFloat"}),
    "Cast",
    "llvm",
    "myfloat",
    "float",
)

# Now, we can run our program without errors.
with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
    z_output_myfloat = relay.create_executor(mod=module).evaluate()(x_input, y_input)
print("z: {}".format(z_output_myfloat))

print("x:\t\t{}".format(x_input))
print("y:\t\t{}".format(y_input))
print("z (float32):\t{}".format(z_output))
print("z (myfloat32):\t{}".format(z_output_myfloat))

# Perhaps as expected, the ``myfloat32`` results and the ``float32`` results are exactly the same!
z: [0.7996937 1.168008  1.4516819]
x:		[0.51729786 0.9469626  0.7654598 ]
y:		[0.28239584 0.22104536 0.6862221 ]
z (float32):	[0.7996937 1.168008  1.4516819]
z (myfloat32):	[0.7996937 1.168008  1.4516819]

Running Models With Custom Datatypes#

We will first choose the model which we would like to run with myfloat. In this case we use Mobilenet. We choose Mobilenet due to its small size. In this alpha state of the Bring Your Own Datatypes framework, we have not implemented any software optimizations for running software emulations of custom datatypes; the result is poor performance due to many calls into our datatype emulation library.

First, let us define two helper functions to get the MobileNet model and a cat image.

def get_mobilenet():
    dshape = (1, 3, 224, 224)
    from mxnet.gluon.model_zoo.vision import get_model

    block = get_model("mobilenet0.25", pretrained=True)
    shape_dict = {"data": dshape}
    return relay.frontend.from_mxnet(block, shape_dict)


def get_cat_image():
    from tvm.contrib.download import download_testdata
    from PIL import Image

    url = "https://gist.githubusercontent.com/zhreshold/bcda4716699ac97ea44f791c24310193/raw/fa7ef0e9c9a5daea686d6473a62aacd1a5885849/cat.png"
    dst = "cat.png"
    real_dst = download_testdata(url, dst, module="data")
    img = Image.open(real_dst).resize((224, 224))
    # CoreML's standard model image format is BGR
    img_bgr = np.array(img)[:, :, ::-1]
    img = np.transpose(img_bgr, (2, 0, 1))[np.newaxis, :]
    return np.asarray(img, dtype="float32")


module, params = get_mobilenet()
Downloading /home/xinet/.mxnet/models/mobilenet0.25-9f83e440.zipeb3c4f5d-55fd-40c7-b2a0-1981acc156d2 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...

It’s easy to execute MobileNet with native TVM:

ex = tvm.relay.create_executor("graph", mod=module)
input = get_cat_image()
# Bind the pretrained parameters at call time
result = ex.evaluate()(input, **params).numpy()
# print first 10 elements
print(result.flatten()[:10])

Now, we would like to change the model to use myfloat internally. To do so, we need to convert the network. We begin by defining a function which will help us convert tensors:

def convert_ndarray(dst_dtype, array):
    """Converts an NDArray into the specified datatype"""
    x = relay.var("x", shape=array.shape, dtype=str(array.dtype))
    cast = relay.Function([x], x.astype(dst_dtype))
    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
        return relay.create_executor("graph").evaluate(cast)(array)
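
For example, since the Cast lowerings for myfloat are already registered, we can convert one of our earlier inputs directly (a quick check reusing x_input from above):

x_input_myfloat = convert_ndarray("custom[myfloat]32", x_input)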

Now, to actually convert the entire network, we have written a pass in Relay which simply converts all nodes within the model to use the new datatype.

from tvm.relay.frontend.change_datatype import ChangeDatatype

src_dtype = "float32"
dst_dtype = "custom[myfloat]32"

module = relay.transform.InferType()(module)

# Currently, custom datatypes only work if you run simplify_inference beforehand
module = tvm.relay.transform.SimplifyInference()(module)

# Run type inference before changing datatype
module = tvm.relay.transform.InferType()(module)

# Change datatype from float to myfloat and re-infer types
cdtype = ChangeDatatype(src_dtype, dst_dtype)
expr = cdtype.visit(module["main"])
module = tvm.relay.transform.InferType()(module)

# We also convert the parameters:
params = {k: convert_ndarray(dst_dtype, v) for k, v in params.items()}

# We also need to convert our input:
input = convert_ndarray(dst_dtype, input)

# Finally, we can try to run the converted model:
try:
    # Vectorization is not implemented with custom datatypes.
    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
        result_myfloat = tvm.relay.create_executor("graph", mod=module).evaluate(expr)(
            input, **params
        )
except tvm.TVMError as e:
    print(str(e).split("\n")[-1])

When we attempt to run the model, we get a familiar error telling us that more functions need to be registered for myfloat.

Because this is a neural network, many more operations are required. Here, we register all the needed functions:

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "FloatToCustom32"}),
    "FloatImm",
    "llvm",
    "myfloat",
)

tvm.target.datatype.register_op(
    tvm.target.datatype.lower_ite, "Call", "llvm", "myfloat", intrinsic_name="tir.if_then_else"
)

tvm.target.datatype.register_op(
    tvm.target.datatype.lower_call_pure_extern,
    "Call",
    "llvm",
    "myfloat",
    intrinsic_name="tir.call_pure_extern",
)

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Mul"}),
    "Mul",
    "llvm",
    "myfloat",
)
tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Div"}),
    "Div",
    "llvm",
    "myfloat",
)

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Sqrt"}),
    "Call",
    "llvm",
    "myfloat",
    intrinsic_name="tir.sqrt",
)

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Sub"}),
    "Sub",
    "llvm",
    "myfloat",
)

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Exp"}),
    "Call",
    "llvm",
    "myfloat",
    intrinsic_name="tir.exp",
)

tvm.target.datatype.register_op(
    tvm.target.datatype.create_lower_func({32: "Custom32Max"}),
    "Max",
    "llvm",
    "myfloat",
)

tvm.target.datatype.register_min_func(
    tvm.target.datatype.create_min_lower_func({32: "MinCustom32"}, "myfloat"),
    "myfloat",
)

Note that we are making use of two new functions: register_min_func and create_min_lower_func.

register_min_func takes in a function to register; that function takes in an integer num_bits for the bit length, and should return an operation representing the minimum finite representable value for the custom datatype with the specified bit length.

Similar to register_op and create_lower_func, create_min_lower_func handles the general case where the minimum representable custom datatype value is implemented using calls to an external library.
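
For illustration, a hand-written equivalent might look roughly like the sketch below. It assumes, as create_min_lower_func does, that the extern MinCustom32 takes no arguments and returns the minimum value typed as the custom datatype (a sketch only; we registered the helper-generated version above):

from tvm import tir

def myfloat_min_func(num_bits):
    # Our example library only implements a 32-bit myfloat.
    assert num_bits == 32, "only 32-bit myfloat is implemented"
    # Call into the library; the value comes back typed as the custom datatype.
    return tir.call_pure_extern("custom[myfloat]32", "MinCustom32")

# Equivalent registration (commented out, since we already registered above):
# tvm.target.datatype.register_min_func(myfloat_min_func, "myfloat")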

Now we can finally run the model:

# Vectorization is not implemented with custom datatypes.
with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
    result_myfloat = relay.create_executor(mod=module).evaluate(expr)(input, **params)
    result_myfloat = convert_ndarray(src_dtype, result_myfloat).numpy()
    # print first 10 elements
    print(result_myfloat.flatten()[:10])

# Again, note that the output using 32-bit myfloat is exactly the same as with 32-bit floats,
# because myfloat is exactly a float!
np.testing.assert_array_equal(result, result_myfloat)