Python Crashes when Using pycaffe for Inference

Hi!

I'm learning Intel Caffe 1.0.6, which I built with MLSL. When I tried to run inference through pycaffe, it crashed.

The code:


import caffe

model = 'lenet_infer.prototxt'
weights = 'lenet_iter_4000.caffemodel'
caffe.set_mode_cpu()
net = caffe.Net(model, weights, caffe.TEST)  # It crashed here

The error output:


WARNING: Logging before InitGoogleLogging() is written to STDERR
W1213 07:43:04.077935   303 _caffe.cpp:184] DEPRECATION WARNING - deprecated use of Python interface
W1213 07:43:04.077976   303 _caffe.cpp:185] Use this instead (with the named "weights" parameter):
W1213 07:43:04.077985   303 _caffe.cpp:187] Net('examples/mnist/lenet_infer.prototxt', 1, weights='/var/train/caffe/mnist/lenet_iter_4000.caffemodel')
I1213 07:43:04.084452   303 upgrade_proto.cpp:109] Attempting to upgrade input file specified using deprecated input fields: examples/mnist/lenet_infer.prototxt
I1213 07:43:04.084493   303 upgrade_proto.cpp:112] Successfully upgraded file specified using deprecated input fields.
W1213 07:43:04.084498   303 upgrade_proto.cpp:114] Note that future Caffe releases will only support input layers and not input fields.
I1213 07:43:04.090289   303 cpu_info.cpp:453] Processor speed [MHz]: 2400
I1213 07:43:04.090304   303 cpu_info.cpp:456] Total number of sockets: 2
I1213 07:43:04.090307   303 cpu_info.cpp:459] Total number of CPU cores: 12
I1213 07:43:04.090312   303 cpu_info.cpp:462] Total number of processors: 24
I1213 07:43:04.090314   303 cpu_info.cpp:465] GPU is used: no
I1213 07:43:04.090317   303 cpu_info.cpp:468] OpenMP environmental variables are specified: no
I1213 07:43:04.090322   303 cpu_info.cpp:471] OpenMP thread bind allowed: yes
I1213 07:43:04.090325   303 cpu_info.cpp:474] Number of OpenMP threads: 12
Attempting to use an MPI routine before initializing MPI

The net:


name: "LeNet"
input: "data"
input_shape {
  dim: 1 # batch size
  dim: 3 # number of colour channels (RGB)
  dim: 32 # height
  dim: 32 # width
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "Softmax"
  bottom: "ip2"
  top: "loss"
}

I dug a little into pycaffe and found no support for MPI or MLSL: there appears to be no code that initializes MPI before the net is constructed.
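As an experiment (this is purely my own assumption, not anything from the Intel Caffe docs), I considered initializing MPI myself before importing caffe, on the theory that the MLSL build expects MPI_Init to have been called already. Importing mpi4py triggers MPI_Init as a side effect, so a sketch would look like:

```python
# Hypothetical workaround: initialize MPI before caffe loads.
# Assumes mpi4py is installed and built against the same MPI
# library that MLSL links to -- otherwise this will not help.
try:
    from mpi4py import MPI  # importing mpi4py calls MPI_Init
    mpi_initialized = MPI.Is_initialized()
except ImportError:
    MPI = None
    mpi_initialized = False

# Only after this would I import caffe, so that any MPI calls
# inside MLSL find an already-initialized runtime:
# import caffe  # deferred; uncomment in a real run
```

I have not verified that this avoids the crash; it only addresses the literal "Attempting to use an MPI routine before initializing MPI" message.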

My questions are:

1. Does this mean that if I want to use pycaffe for inference, I have to use the original Caffe?
2. Is there a plan to support pycaffe in Intel Caffe?
3. What is the preferred way to do inference in Intel Caffe?
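For reference, this is what I would expect a minimal inference call to look like with stock pycaffe (a sketch only: the `weights=` constructor form is taken from the deprecation warning above, and the blob names `data` and `loss` match my prototxt):

```python
import numpy as np

def run_inference(net, image):
    """Run one forward pass through an already-constructed net.

    image: np.ndarray of shape (3, 32, 32), matching input_shape.
    Returns the softmax output of the "loss" top, shape (1, 10).
    """
    net.blobs['data'].data[...] = image[np.newaxis, ...]
    out = net.forward()
    return out['loss']

# Construction with the named "weights" parameter, as the
# deprecation warning in the log suggests:
#   import caffe
#   caffe.set_mode_cpu()
#   net = caffe.Net('lenet_infer.prototxt', caffe.TEST,
#                   weights='lenet_iter_4000.caffemodel')
#   probs = run_inference(net, np.zeros((3, 32, 32), dtype=np.float32))
```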

This question originates from the open-source project: intel/caffe

weixin_39943442 · 2020/12/04 21:20