AI Development Platform ModelArts - Creating a Custom Image for Inference from Scratch Using a Dockerfile in Notebook: Appendix 1: Dockerfile Template

Date: 2025-02-12 15:11:47

Appendix 1: Dockerfile Template

The following is a sample Dockerfile; it can be saved directly as a Dockerfile file and used as-is. For the base images that can be used here, see the list of preset dedicated images for inference.
FROM swr.cn-north-4.myhuaweicloud.com/atelier/tensorflow_2_1:tensorflow_2.1.0-cuda_10.1-py_3.7-ubuntu_18.04-x86_64-20221121111529-d65d817

# Create a soft link from '/home/ma-user/anaconda/lib/python3.7/site-packages/model_service' to '/home/ma-user/infer/model_service'. It is the built-in inference framework code dir.
# If the installed Python version of this base image is Python 3.8, create the soft link from '/home/ma-user/anaconda/lib/python3.8/site-packages/model_service' to '/home/ma-user/infer/model_service' instead.
USER root
RUN ln -s /home/ma-user/anaconda/lib/python3.7/site-packages/model_service  /home/ma-user/infer/model_service
USER ma-user

# Here we supply a demo; change it to your own model files.
ADD model/  /home/ma-user/infer/model/1

USER root
RUN chown -R ma-user:ma-group  /home/ma-user/infer/model/1
USER ma-user

# The default MODELARTS_SSL_CLIENT_VERIFY switch is "true". For debugging, we set it to "false".
ENV MODELARTS_SSL_CLIENT_VERIFY="false"

# Change your port and protocol here; the defaults are 8443 and HTTPS.
# ENV MODELARTS_SERVICE_PORT=8080
# ENV MODELARTS_SSL_ENABLED="false"

# Add pip installs here.
# RUN pip install numpy==1.16.4
# RUN pip install -r requirements.txt

# Default CMD; change it here if needed.
# CMD sh /home/ma-user/infer/run.sh
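The soft-link step in the template can be sketched outside the container as follows. This is only an illustration of what `RUN ln -s` does; the `/tmp/model_service_demo` paths are hypothetical stand-ins for the image's real `site-packages` and `/home/ma-user/infer` directories:

```shell
# Hypothetical scratch directories standing in for the image paths.
DEMO=/tmp/model_service_demo
mkdir -p "$DEMO/site-packages/model_service" "$DEMO/infer"

# Mirrors the Dockerfile's RUN ln -s step: expose the built-in
# inference framework code under the infer directory.
ln -s "$DEMO/site-packages/model_service" "$DEMO/infer/model_service"

# The link resolves back to the original site-packages directory.
readlink "$DEMO/infer/model_service"
```

After saving the template above as `Dockerfile` next to a `model/` directory containing your model files, the image would typically be built from that directory with a standard `docker build -t <swr-address>/<organization>/<image>:<tag> .` command, then pushed to SWR.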
support.huaweicloud.com/usermanual-standard-modelarts/docker-modelarts_0031.html