The endpoint deploys successfully, but every request fails with an error saying one of my libraries is missing.
I have already set up Python in each virtual environment, so is something misconfigured?
Each file runs fine on its own locally.
- Layout of model.tar.gz:

```
best.pt
yolov5/
    ... (omitted)
code/
code/requirements.txt
code/inference.py
```
- My endpoint configuration code:

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

def main():
    session = sagemaker.Session()
    role = DUMMY_IAM_ROLE  # placeholder for the actual IAM role ARN
    model_dir = "s3://stopscooterpic-training/model.tar.gz"
    model = PyTorchModel(
        model_data=model_dir,
        role=role,
        framework_version='1.8',
        py_version='py3',
        entry_point='inference.py',  # inference script inside source_dir
        source_dir='./code',
    )
    endpoint_name = 'endpoint'
    # Deploy the model to an ml.m4.xlarge endpoint
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type='ml.m4.xlarge',
        endpoint_name=endpoint_name,
    )
```
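For reference, here is a minimal sketch of how I build the model.tar.gz with the layout shown above (the `package_model` helper and its paths are my own names, not part of the SageMaker SDK); the key point is that `best.pt` and `code/` sit at the archive root:

```python
import tarfile
from pathlib import Path

def package_model(workdir: Path, out_path: Path) -> None:
    """Bundle the model artifacts into model.tar.gz with code/ at the archive root."""
    with tarfile.open(out_path, "w:gz") as tar:
        # arcname keeps entries at the top level of the archive rather than
        # nesting them under the local directory path.
        tar.add(workdir / "best.pt", arcname="best.pt")
        tar.add(workdir / "code", arcname="code")
```

The resulting archive is then uploaded to S3 and referenced via `model_data`.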