AI Development Platform MODELARTS - Open-Clip Training Guide for Adapting PyTorch to NPU on DevServer: Step 7 Inference Verification

Time: 2024-05-16 21:27:22

Step 7 Inference Verification

First, copy the final model file epoch_29.pt produced by the training above to the /home/ma-user/open_clip directory.
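Before writing the inference script, you can optionally confirm that the checkpoint loads. The sketch below is only a sanity check; the key layout it prints depends on how the training run saved epoch_29.pt and is not guaranteed.

# Optional sanity check: confirm the checkpoint file loads before running inference.
# The dict/keys assumption below depends on how the training run saved epoch_29.pt.
import torch

ckpt = torch.load("/home/ma-user/open_clip/epoch_29.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print("checkpoint keys:", list(ckpt.keys()))
else:
    print("checkpoint object type:", type(ckpt))

Then, in /home/ma-user/open_clip, run the following command.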

vi inference.py

Copy the following code into the file and save it.

import os
import torch
import torch_npu  # registers the "npu" device with PyTorch on Ascend
from PIL import Image
import open_clip

# Bind the process to a specific NPU card; default to card 0 if not set.
if 'DEVICE_ID' in os.environ:
    print("DEVICE_ID:", os.environ['DEVICE_ID'])
else:
    os.environ['DEVICE_ID'] = "0"

# Load the fine-tuned ViT-B-32 checkpoint and its preprocessing pipeline.
model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='/home/ma-user/open_clip/epoch_29.pt')
model = model.to("npu")
tokenizer = open_clip.get_tokenizer('ViT-B-32')

# Prepare one input image and three candidate text prompts.
image = preprocess(Image.open("./docs/CLIP.png")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

print("input image shape:", image.shape)
print("input text shape:", text.shape)

# Run inference without gradients, under an autocast context.
with torch.no_grad(), torch.cuda.amp.autocast():
    image = image.to("npu")
    text = text.to("npu")
    # Encode both modalities into the shared embedding space.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    print("output image shape:", image_features.shape)
    print("output text shape:", text_features.shape)

    # L2-normalize the features, then softmax the scaled image-text
    # similarities to get a probability per text prompt.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[1., 0., 0.]]

Run the inference script.

python inference.py

Because the ./docs/CLIP.png image is a diagram, the result matches the first text prompt "a diagram", and the output will be close to [[1., 0., 0.]].
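If you want the output paired with its prompt text, you can append a few lines to the end of inference.py. This is only a small sketch: the labels list simply repeats the prompts already passed to the tokenizer, and text_probs is the tensor computed above.

# Optional: print each prompt alongside its predicted probability.
# Append to the end of inference.py; labels repeat the prompts used above.
labels = ["a diagram", "a dog", "a cat"]
for label, prob in zip(labels, text_probs.squeeze(0).cpu().tolist()):
    print(f"{label}: {prob:.4f}")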

support.huaweicloud.com/bestpractice-modelarts/modelarts_10_3003.html