Error when running the OpenVINO demo files.

When building and running the object detection demo from the demos folder, the following error appears:
Severity  Code  Description  Project  File  Line  Suppression State
Error C4996 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>::copy': Call to 'std::basic_string::copy' with parameters that may be unsafe - this call relies on the caller to check that the passed values are correct. To disable this warning, use -D_SCL_SECURE_NO_WARNINGS. See documentation on how to use Visual C++ 'Checked Iterators' 88999 d:\open_model_zoo-2018\demos\extension\ext_list.hpp 56

What is going on here? Any guidance would be appreciated.
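For context, C4996 here is MSVC's "checked iterators" diagnostic for std::basic_string::copy, and the message itself names the usual workaround (-D_SCL_SECURE_NO_WARNINGS); it normally only stops the build when the project promotes warnings to errors. A minimal sketch of the two common ways to silence it, assuming a stock Visual Studio 2015 C++ project (the sample program below is only illustrative, not code from ext_list.hpp):

```cpp
// Sketch: _SCL_SECURE_NO_WARNINGS must be defined before any standard header
// is included, otherwise the checked-iterator machinery is already configured.
#define _SCL_SECURE_NO_WARNINGS

#include <string>

int main() {
    std::string name = "cpu_extension";
    char buf[64] = {};
    // std::basic_string::copy is exactly the call the C4996 message complains about.
    name.copy(buf, name.size());
    return 0;
}

// Project-wide alternative: add _SCL_SECURE_NO_WARNINGS under
// Project Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions,
// which is what the suggested -D_SCL_SECURE_NO_WARNINGS switch does.
```

Suppressing the warning does not change the code's behavior; it only tells the compiler not to flag the "unsafe" standard-library call.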

Other related questions
Android: error when running the official Alipay demo, urgent!

This is my first time integrating Alipay payment. I downloaded the payment demo from the official site, imported it, and configured the parameters shown below. ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445418250_887322.png) When I run the demo and tap Pay, it reports the errors below. Could someone please take a look and tell me what is wrong? I'm lost... ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445418388_510994.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445418343_655306.png)

IM demo errors at runtime and won't run

![screenshot](https://img-ask.csdn.net/upload/201508/11/1439274465_151914.png) The IM demo reports an error at runtime and won't run: SQLiteLog (1) no such column: isblack. Does this mean the database is missing that column? If so, how do I fix it?
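The message means whatever SQL the demo runs references a column isblack that the table does not contain, so yes, the bundled database schema is out of date (on Android the usual route is to bump the database version and add the column during the upgrade). Purely as an illustration of the schema-level fix, here is a minimal sketch using the SQLite C API; the database file name and the table name friends are placeholders, since the demo's real schema isn't shown:

```cpp
// Hypothetical sketch: add the missing column so queries that mention
// "isblack" can run. File and table names below are placeholders.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("im_demo.db", &db) != SQLITE_OK) {
        std::printf("open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    char* err = nullptr;
    const char* sql = "ALTER TABLE friends ADD COLUMN isblack INTEGER DEFAULT 0;";
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::printf("alter failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```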

Why does OpenVINO's object detection demo produce so many errors when run?

代码如下:/* // Copyright (c) 2018 Intel Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. */ #include <gflags/gflags.h> #include <algorithm> #include <functional> #include <iostream> #include <fstream> #include <random> #include <string> #include <memory> #include <vector> #include <limits> #include <chrono> #include <format_reader_ptr.h> #include <inference_engine.hpp> #include <ext_list.hpp> #include <samples/common.hpp> #include <samples/slog.hpp> #include <samples/args_helper.hpp> #include "object_detection_demo.h" #include "detectionoutput.h" using namespace InferenceEngine; bool ParseAndCheckCommandLine(int argc, char *argv[]) { // ---------------------------Parsing and validation of input args-------------------------------------- slog::info << "Parsing input parameters" << slog::endl; gflags::ParseCommandLineNonHelpFlags(&argc, &argv, true); if (FLAGS_h) { showUsage(); return false; } if (FLAGS_ni < 1) { throw std::logic_error("Parameter -ni should be greater than 0 (default: 1)"); } if (FLAGS_i.empty()) { throw std::logic_error("Parameter -i is not set"); } if (FLAGS_m.empty()) { throw std::logic_error("Parameter -m is not set"); } return true; } /** * \brief The entry point for the Inference Engine object_detection demo application * \file object_detection_demo/main.cpp * \example object_detection_demo/main.cpp */ int main(int argc, char *argv[]) { try { /** This demo covers certain topology and cannot be generalized for any object detection one **/ slog::info << "InferenceEngine: " << GetInferenceEngineVersion() << "\n"; // ------------------------------ Parsing and validation of input args --------------------------------- if (!ParseAndCheckCommandLine(argc, argv)) { return 0; } /** This vector stores paths to the processed images **/ std::vector<std::string> images; parseImagesArguments(images); if (images.empty()) throw std::logic_error("No suitable images were found"); // ----------------------------------------------------------------------------------------------------- // --------------------------- 1. Load Plugin for inference engine ------------------------------------- slog::info << "Loading plugin" << slog::endl; InferencePlugin plugin = PluginDispatcher({ FLAGS_pp, "../../../lib/intel64" , "" }).getPluginByDevice(FLAGS_d); /*If CPU device, load default library with extensions that comes with the product*/ if (FLAGS_d.find("CPU") != std::string::npos) { /** * cpu_extensions library is compiled from "extension" folder containing * custom MKLDNNPlugin layer implementations. These layers are not supported * by mkldnn, but they can be useful for inferencing custom topologies. 
**/ plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>()); } if (!FLAGS_l.empty()) { // CPU(MKLDNN) extensions are loaded as a shared library and passed as a pointer to base extension IExtensionPtr extension_ptr = make_so_pointer<IExtension>(FLAGS_l); plugin.AddExtension(extension_ptr); slog::info << "CPU Extension loaded: " << FLAGS_l << slog::endl; } if (!FLAGS_c.empty()) { // clDNN Extensions are loaded from an .xml description and OpenCL kernel files plugin.SetConfig({ { PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c } }); slog::info << "GPU Extension loaded: " << FLAGS_c << slog::endl; } /** Setting plugin parameter for per layer metrics **/ if (FLAGS_pc) { plugin.SetConfig({ { PluginConfigParams::KEY_PERF_COUNT, PluginConfigParams::YES } }); } /** Printing plugin version **/ printPluginVersion(plugin, std::cout); // ----------------------------------------------------------------------------------------------------- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------ std::string binFileName = fileNameNoExt(FLAGS_m) + ".bin"; slog::info << "Loading network files:" "\n\t" << FLAGS_m << "\n\t" << binFileName << slog::endl; CNNNetReader networkReader; /** Read network model **/ networkReader.ReadNetwork(FLAGS_m); /** Extract model name and load weigts **/ networkReader.ReadWeights(binFileName); CNNNetwork network = networkReader.getNetwork(); Precision p = network.getPrecision(); // ----------------------------------------------------------------------------------------------------- // --------------------------- 3. Configure input & output --------------------------------------------- // ------------------------------ Adding DetectionOutput ----------------------------------------------- /** * The only meaningful difference between Faster-RCNN and SSD-like topologies is the interpretation * of the output data. Faster-RCNN has 2 output layers which (the same format) are presented inside SSD. * * But SSD has an additional post-processing DetectionOutput layer that simplifies output filtering. * So here we are adding 3 Reshapes and the DetectionOutput to the end of Faster-RCNN so it will return the * same result as SSD and we can easily parse it. 
*/ std::string firstLayerName = network.getInputsInfo().begin()->first; int inputWidth = network.getInputsInfo().begin()->second->getTensorDesc().getDims()[3]; int inputHeight = network.getInputsInfo().begin()->second->getTensorDesc().getDims()[2]; DataPtr bbox_pred_reshapeInPort = ((ICNNNetwork&)network).getData(FLAGS_bbox_name.c_str()); if (bbox_pred_reshapeInPort == nullptr) { throw std::logic_error(std::string("Can't find output layer named ") + FLAGS_bbox_name); } SizeVector bbox_pred_reshapeOutDims = { bbox_pred_reshapeInPort->getTensorDesc().getDims()[0] * bbox_pred_reshapeInPort->getTensorDesc().getDims()[1], 1 }; DataPtr rois_reshapeInPort = ((ICNNNetwork&)network).getData(FLAGS_proposal_name.c_str()); if (rois_reshapeInPort == nullptr) { throw std::logic_error(std::string("Can't find output layer named ") + FLAGS_proposal_name); } SizeVector rois_reshapeOutDims = { rois_reshapeInPort->getTensorDesc().getDims()[0] * rois_reshapeInPort->getTensorDesc().getDims()[1], 1 }; DataPtr cls_prob_reshapeInPort = ((ICNNNetwork&)network).getData(FLAGS_prob_name.c_str()); if (cls_prob_reshapeInPort == nullptr) { throw std::logic_error(std::string("Can't find output layer named ") + FLAGS_prob_name); } SizeVector cls_prob_reshapeOutDims = { cls_prob_reshapeInPort->getTensorDesc().getDims()[0] * cls_prob_reshapeInPort->getTensorDesc().getDims()[1], 1 }; /* Detection output */ int normalized = 0; int prior_size = normalized ? 4 : 5; int num_priors = rois_reshapeOutDims[0] / prior_size; // num_classes guessed from the output dims if (bbox_pred_reshapeOutDims[0] % (num_priors * 4) != 0) { throw std::logic_error("Can't guess number of classes. Something's wrong with output layers dims"); } int num_classes = bbox_pred_reshapeOutDims[0] / (num_priors * 4); slog::info << "num_classes guessed: " << num_classes << slog::endl; LayerParams detectionOutParams; detectionOutParams.name = "detection_out"; detectionOutParams.type = "DetectionOutput"; detectionOutParams.precision = p; CNNLayerPtr detectionOutLayer = CNNLayerPtr(new CNNLayer(detectionOutParams)); detectionOutLayer->params["background_label_id"] = "0"; detectionOutLayer->params["code_type"] = "caffe.PriorBoxParameter.CENTER_SIZE"; detectionOutLayer->params["eta"] = "1.0"; detectionOutLayer->params["input_height"] = std::to_string(inputHeight); detectionOutLayer->params["input_width"] = std::to_string(inputWidth); detectionOutLayer->params["keep_top_k"] = "200"; detectionOutLayer->params["nms_threshold"] = "0.3"; detectionOutLayer->params["normalized"] = std::to_string(normalized); detectionOutLayer->params["num_classes"] = std::to_string(num_classes); detectionOutLayer->params["share_location"] = "0"; detectionOutLayer->params["top_k"] = "400"; detectionOutLayer->params["variance_encoded_in_target"] = "1"; detectionOutLayer->params["visualize"] = "False"; detectionOutLayer->insData.push_back(bbox_pred_reshapeInPort); detectionOutLayer->insData.push_back(cls_prob_reshapeInPort); detectionOutLayer->insData.push_back(rois_reshapeInPort); SizeVector detectionOutLayerOutDims = { 7, 200, 1, 1 }; DataPtr detectionOutLayerOutPort = DataPtr(new Data("detection_out", detectionOutLayerOutDims, p, TensorDesc::getLayoutByDims(detectionOutLayerOutDims))); detectionOutLayerOutPort->creatorLayer = detectionOutLayer; detectionOutLayer->outData.push_back(detectionOutLayerOutPort); DetectionOutputPostProcessor detOutPostProcessor(detectionOutLayer.get()); network.addOutput(FLAGS_bbox_name, 0); network.addOutput(FLAGS_prob_name, 0); 
network.addOutput(FLAGS_proposal_name, 0); // --------------------------- Prepare input blobs ----------------------------------------------------- slog::info << "Preparing input blobs" << slog::endl; /** Taking information about all topology inputs **/ InputsDataMap inputsInfo(network.getInputsInfo()); /** SSD network has one input and one output **/ if (inputsInfo.size() != 1 && inputsInfo.size() != 2) throw std::logic_error("Demo supports topologies only with 1 or 2 inputs"); std::string imageInputName, imInfoInputName; InputInfo::Ptr inputInfo = inputsInfo.begin()->second; SizeVector inputImageDims; /** Stores input image **/ /** Iterating over all input blobs **/ for (auto & item : inputsInfo) { /** Working with first input tensor that stores image **/ if (item.second->getInputData()->getTensorDesc().getDims().size() == 4) { imageInputName = item.first; slog::info << "Batch size is " << std::to_string(networkReader.getNetwork().getBatchSize()) << slog::endl; /** Creating first input blob **/ Precision inputPrecision = Precision::U8; item.second->setPrecision(inputPrecision); } else if (item.second->getInputData()->getTensorDesc().getDims().size() == 2) { imInfoInputName = item.first; Precision inputPrecision = Precision::FP32; item.second->setPrecision(inputPrecision); if ((item.second->getTensorDesc().getDims()[1] != 3 && item.second->getTensorDesc().getDims()[1] != 6) || item.second->getTensorDesc().getDims()[0] != 1) { throw std::logic_error("Invalid input info. Should be 3 or 6 values length"); } } } // ------------------------------ Prepare output blobs ------------------------------------------------- slog::info << "Preparing output blobs" << slog::endl; OutputsDataMap outputsInfo(network.getOutputsInfo()); const int maxProposalCount = detectionOutLayerOutDims[1]; const int objectSize = detectionOutLayerOutDims[0]; /** Set the precision of output data provided by the user, should be called before load of the network to the plugin **/ outputsInfo[FLAGS_bbox_name]->setPrecision(Precision::FP32); outputsInfo[FLAGS_prob_name]->setPrecision(Precision::FP32); outputsInfo[FLAGS_proposal_name]->setPrecision(Precision::FP32); // ----------------------------------------------------------------------------------------------------- // --------------------------- 4. Loading model to the plugin ------------------------------------------ slog::info << "Loading model to the plugin" << slog::endl; ExecutableNetwork executable_network = plugin.LoadNetwork(network, {}); // ----------------------------------------------------------------------------------------------------- // --------------------------- 5. Create infer request ------------------------------------------------- InferRequest infer_request = executable_network.CreateInferRequest(); // ----------------------------------------------------------------------------------------------------- // --------------------------- 6. Prepare input -------------------------------------------------------- /** Collect images data ptrs **/ std::vector<std::shared_ptr<unsigned char>> imagesData, originalImagesData; std::vector<int> imageWidths, imageHeights; for (auto & i : images) { FormatReader::ReaderPtr reader(i.c_str()); if (reader.get() == nullptr) { slog::warn << "Image " + i + " cannot be read!" 
<< slog::endl; continue; } /** Store image data **/ std::shared_ptr<unsigned char> originalData(reader->getData()); std::shared_ptr<unsigned char> data(reader->getData(inputInfo->getTensorDesc().getDims()[3], inputInfo->getTensorDesc().getDims()[2])); if (data.get() != nullptr) { originalImagesData.push_back(originalData); imagesData.push_back(data); imageWidths.push_back(reader->width()); imageHeights.push_back(reader->height()); } } if (imagesData.empty()) throw std::logic_error("Valid input images were not found!"); size_t batchSize = network.getBatchSize(); slog::info << "Batch size is " << std::to_string(batchSize) << slog::endl; if (batchSize != imagesData.size()) { slog::warn << "Number of images " + std::to_string(imagesData.size()) + \ " doesn't match batch size " + std::to_string(batchSize) << slog::endl; slog::warn << std::to_string(std::min(imagesData.size(), batchSize)) + \ " images will be processed" << slog::endl; batchSize = std::min(batchSize, imagesData.size()); } /** Creating input blob **/ Blob::Ptr imageInput = infer_request.GetBlob(imageInputName); /** Filling input tensor with images. First b channel, then g and r channels **/ size_t num_channels = imageInput->getTensorDesc().getDims()[1]; size_t image_size = imageInput->getTensorDesc().getDims()[3] * imageInput->getTensorDesc().getDims()[2]; unsigned char* data = static_cast<unsigned char*>(imageInput->buffer()); /** Iterate over all input images **/ for (size_t image_id = 0; image_id < std::min(imagesData.size(), batchSize); ++image_id) { /** Iterate over all pixel in image (b,g,r) **/ for (size_t pid = 0; pid < image_size; pid++) { /** Iterate over all channels **/ for (size_t ch = 0; ch < num_channels; ++ch) { /** [images stride + channels stride + pixel id ] all in bytes **/ data[image_id * image_size * num_channels + ch * image_size + pid] = imagesData.at(image_id).get()[pid*num_channels + ch]; } } } if (imInfoInputName != "") { Blob::Ptr input2 = infer_request.GetBlob(imInfoInputName); auto imInfoDim = inputsInfo.find(imInfoInputName)->second->getTensorDesc().getDims()[1]; /** Fill input tensor with values **/ float *p = input2->buffer().as<PrecisionTrait<Precision::FP32>::value_type*>(); for (size_t image_id = 0; image_id < std::min(imagesData.size(), batchSize); ++image_id) { p[image_id * imInfoDim + 0] = static_cast<float>(inputsInfo[imageInputName]->getTensorDesc().getDims()[2]); p[image_id * imInfoDim + 1] = static_cast<float>(inputsInfo[imageInputName]->getTensorDesc().getDims()[3]); for (int k = 2; k < imInfoDim; k++) { p[image_id * imInfoDim + k] = 1.0f; // all scale factors are set to 1.0 } } } // ----------------------------------------------------------------------------------------------------- // ---------------------------- 7. Do inference -------------------------------------------------------- slog::info << "Start inference (" << FLAGS_ni << " iterations)" << slog::endl; typedef std::chrono::high_resolution_clock Time; typedef std::chrono::duration<double, std::ratio<1, 1000>> ms; typedef std::chrono::duration<float> fsec; double total = 0.0; /** Start inference & calc performance **/ for (int iter = 0; iter < FLAGS_ni; ++iter) { auto t0 = Time::now(); infer_request.Infer(); auto t1 = Time::now(); fsec fs = t1 - t0; ms d = std::chrono::duration_cast<ms>(fs); total += d.count(); } // ----------------------------------------------------------------------------------------------------- // ---------------------------- 8. 
Process output ------------------------------------------------------ slog::info << "Processing output blobs" << slog::endl; Blob::Ptr bbox_output_blob = infer_request.GetBlob(FLAGS_bbox_name); Blob::Ptr prob_output_blob = infer_request.GetBlob(FLAGS_prob_name); Blob::Ptr rois_output_blob = infer_request.GetBlob(FLAGS_proposal_name); std::vector<Blob::Ptr> detOutInBlobs = { bbox_output_blob, prob_output_blob, rois_output_blob }; Blob::Ptr output_blob = std::make_shared<TBlob<float>>(Precision::FP32, Layout::NCHW, detectionOutLayerOutDims); output_blob->allocate(); std::vector<Blob::Ptr> detOutOutBlobs = { output_blob }; detOutPostProcessor.execute(detOutInBlobs, detOutOutBlobs, nullptr); const float* detection = static_cast<PrecisionTrait<Precision::FP32>::value_type*>(output_blob->buffer()); std::vector<std::vector<int> > boxes(batchSize); std::vector<std::vector<int> > classes(batchSize); /* Each detection has image_id that denotes processed image */ for (int curProposal = 0; curProposal < maxProposalCount; curProposal++) { float image_id = detection[curProposal * objectSize + 0]; float label = detection[curProposal * objectSize + 1]; float confidence = detection[curProposal * objectSize + 2]; float xmin = detection[curProposal * objectSize + 3] * imageWidths[image_id]; float ymin = detection[curProposal * objectSize + 4] * imageHeights[image_id]; float xmax = detection[curProposal * objectSize + 5] * imageWidths[image_id]; float ymax = detection[curProposal * objectSize + 6] * imageHeights[image_id]; /* MKLDnn and clDNN have little differente in DetectionOutput layer, so we need this check */ if (image_id < 0 || confidence == 0) { continue; } std::cout << "[" << curProposal << "," << label << "] element, prob = " << confidence << " (" << xmin << "," << ymin << ")-(" << xmax << "," << ymax << ")" << " batch id : " << image_id; if (confidence > 0.5) { /** Drawing only objects with >50% probability **/ classes[image_id].push_back(static_cast<int>(label)); boxes[image_id].push_back(static_cast<int>(xmin)); boxes[image_id].push_back(static_cast<int>(ymin)); boxes[image_id].push_back(static_cast<int>(xmax - xmin)); boxes[image_id].push_back(static_cast<int>(ymax - ymin)); std::cout << " WILL BE PRINTED!"; } std::cout << std::endl; } for (size_t batch_id = 0; batch_id < batchSize; ++batch_id) { addRectangles(originalImagesData[batch_id].get(), imageHeights[batch_id], imageWidths[batch_id], boxes[batch_id], classes[batch_id]); const std::string image_path = "out_" + std::to_string(batch_id) + ".bmp"; if (writeOutputBmp(image_path, originalImagesData[batch_id].get(), imageHeights[batch_id], imageWidths[batch_id])) { slog::info << "Image " + image_path + " created!" << slog::endl; } else { throw std::logic_error(std::string("Can't create a file: ") + image_path); } } // ----------------------------------------------------------------------------------------------------- std::cout << std::endl << "total inference time: " << total << std::endl; std::cout << "Average running time of one iteration: " << total / static_cast<double>(FLAGS_ni) << " ms" << std::endl; std::cout << std::endl << "Throughput: " << 1000 * static_cast<double>(FLAGS_ni) * batchSize / total << " FPS" << std::endl; std::cout << std::endl; /** Show performace results **/ if (FLAGS_pc) { printPerformanceCounts(infer_request, std::cout); } } catch (const std::exception& error) { slog::err << error.what() << slog::endl; return 1; } catch (...) { slog::err << "Unknown/internal exception happened." 
<< slog::endl; return 1; } slog::info << "Execution successful" << slog::endl; return 0; } 有如下报错:严重性 代码 说明 项目 文件 行 禁止显示状态 错误 LNK2019 无法解析的外部符号 CreateFormatReader,该符号在函数 "public: __cdecl FormatReader::ReaderPtr::ReaderPtr(char const *)" (??0ReaderPtr@FormatReader@@QEAA@PEBD@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误(活动) 无法引用 函数 "InferenceEngine::make_so_pointer<T>(const std::string &name) [其中 T=InferenceEngine::IExtension]" (已声明 所在行数:164,所属文件:"c:\Users\颜俊毅\Desktop\dldt-2018\inference-engine\include\details\ie_so_pointer.hpp") -- 它是已删除的函数 88999 c:\Users\颜俊毅\Documents\Visual Studio 2015\Projects\88999\88999\7521.cpp 102 错误 LNK2019 无法解析的外部符号 __imp_CreateDefaultAllocator,该符号在函数 "protected: virtual class std::shared_ptr<class InferenceEngine::IAllocator> const & __cdecl InferenceEngine::TBlob<int,struct std::enable_if<1,void> >::getAllocator(void)const " (?getAllocator@?$TBlob@HU?$enable_if@$00X@std@@@InferenceEngine@@MEBAAEBV?$shared_ptr@VIAllocator@InferenceEngine@@@std@@XZ) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::BlockingDesc::BlockingDesc(class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > const &,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > const &)" (__imp_??0BlockingDesc@InferenceEngine@@QEAA@AEBV?$vector@_KV?$allocator@_K@std@@@std@@0@Z),该符号在函数 "public: __cdecl DetectionOutputPostProcessor::DetectionOutputPostProcessor(class InferenceEngine::CNNLayer const *)" (??0DetectionOutputPostProcessor@@QEAA@PEBVCNNLayer@InferenceEngine@@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: virtual __cdecl InferenceEngine::BlockingDesc::~BlockingDesc(void)" (__imp_??1BlockingDesc@InferenceEngine@@UEAA@XZ),该符号在函数 "public: __cdecl DetectionOutputPostProcessor::DetectionOutputPostProcessor(class InferenceEngine::CNNLayer const *)" (??0DetectionOutputPostProcessor@@QEAA@PEBVCNNLayer@InferenceEngine@@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::TensorDesc::TensorDesc(class InferenceEngine::Precision const &,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> >,class InferenceEngine::BlockingDesc const &)" (__imp_??0TensorDesc@InferenceEngine@@QEAA@AEBVPrecision@1@V?$vector@_KV?$allocator@_K@std@@@std@@AEBVBlockingDesc@1@@Z),该符号在函数 "public: __cdecl DetectionOutputPostProcessor::DetectionOutputPostProcessor(class InferenceEngine::CNNLayer const *)" (??0DetectionOutputPostProcessor@@QEAA@PEBVCNNLayer@InferenceEngine@@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::TensorDesc::TensorDesc(class InferenceEngine::Precision const &,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> >,enum InferenceEngine::Layout)" (__imp_??0TensorDesc@InferenceEngine@@QEAA@AEBVPrecision@1@V?$vector@_KV?$allocator@_K@std@@@std@@W4Layout@1@@Z),该符号在函数 "public: __cdecl InferenceEngine::Blob::Blob(class InferenceEngine::Precision,enum InferenceEngine::Layout,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > const &)" (??0Blob@InferenceEngine@@QEAA@VPrecision@1@W4Layout@1@AEBV?$vector@_KV?$allocator@_K@std@@@std@@@Z) 中被引用 
88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: virtual __cdecl InferenceEngine::TensorDesc::~TensorDesc(void)" (__imp_??1TensorDesc@InferenceEngine@@UEAA@XZ),该符号在函数 "public: __cdecl InferenceEngine::Blob::Blob(class InferenceEngine::TensorDesc)" (??0Blob@InferenceEngine@@QEAA@VTensorDesc@1@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > & __cdecl InferenceEngine::TensorDesc::getDims(void)" (__imp_?getDims@TensorDesc@InferenceEngine@@QEAAAEAV?$vector@_KV?$allocator@_K@std@@@std@@XZ),该符号在函数 "public: virtual void __cdecl InferenceEngine::TBlob<int,struct std::enable_if<1,void> >::allocate(void)" (?allocate@?$TBlob@HU?$enable_if@$00X@std@@@InferenceEngine@@UEAAXXZ) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > const & __cdecl InferenceEngine::TensorDesc::getDims(void)const " (__imp_?getDims@TensorDesc@InferenceEngine@@QEBAAEBV?$vector@_KV?$allocator@_K@std@@@std@@XZ),该符号在函数 "public: unsigned __int64 __cdecl InferenceEngine::Blob::byteSize(void)const " (?byteSize@Blob@InferenceEngine@@QEBA_KXZ) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: static enum InferenceEngine::Layout __cdecl InferenceEngine::TensorDesc::getLayoutByDims(class std::vector<unsigned __int64,class std::allocator<unsigned __int64> >)" (__imp_?getLayoutByDims@TensorDesc@InferenceEngine@@SA?AW4Layout@2@V?$vector@_KV?$allocator@_K@std@@@std@@@Z),该符号在函数 main 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::TensorDesc::TensorDesc(class InferenceEngine::TensorDesc const &)" (__imp_??0TensorDesc@InferenceEngine@@QEAA@AEBV01@@Z),该符号在函数 "public: __cdecl InferenceEngine::TBlob<int,struct std::enable_if<1,void> >::TBlob<int,struct std::enable_if<1,void> >(class InferenceEngine::TensorDesc const &)" (??0?$TBlob@HU?$enable_if@$00X@std@@@InferenceEngine@@QEAA@AEBVTensorDesc@1@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::Data::Data(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > const &,class InferenceEngine::Precision,enum InferenceEngine::Layout)" (__imp_??0Data@InferenceEngine@@QEAA@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@AEBV?$vector@_KV?$allocator@_K@std@@@3@VPrecision@1@W4Layout@1@@Z),该符号在函数 main 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: class InferenceEngine::TensorDesc const & __cdecl InferenceEngine::Data::getTensorDesc(void)const " (__imp_?getTensorDesc@Data@InferenceEngine@@QEBAAEBVTensorDesc@2@XZ),该符号在函数 "public: virtual class std::map<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> >,struct std::less<class std::basic_string<char,struct 
std::char_traits<char>,class std::allocator<char> > >,class std::allocator<struct std::pair<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const ,class std::vector<unsigned __int64,class std::allocator<unsigned __int64> > > > > __cdecl InferenceEngine::CNNNetwork::getInputShapes(void)" (?getInputShapes@CNNNetwork@InferenceEngine@@UEAA?AV?$map@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V?$vector@_KV?$allocator@_K@std@@@2@U?$less@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V?$vector@_KV?$allocator@_K@std@@@2@@std@@@2@@std@@XZ) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: void __cdecl InferenceEngine::Data::setPrecision(class InferenceEngine::Precision const &)" (__imp_?setPrecision@Data@InferenceEngine@@QEAAXAEBVPrecision@2@@Z),该符号在函数 "public: void __cdecl InferenceEngine::InputInfo::setPrecision(class InferenceEngine::Precision)" (?setPrecision@InputInfo@InferenceEngine@@QEAAXVPrecision@2@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::Data::~Data(void)" (__imp_??1Data@InferenceEngine@@QEAA@XZ),该符号在函数 "public: void * __cdecl InferenceEngine::Data::`scalar deleting destructor'(unsigned int)" (??_GData@InferenceEngine@@QEAAPEAXI@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 __imp_findPlugin,该符号在函数 "public: class InferenceEngine::details::SOPointer<class InferenceEngine::IInferencePlugin,class InferenceEngine::details::SharedObjectLoader> __cdecl InferenceEngine::PluginDispatcher::getSuitablePlugin(enum InferenceEngine::TargetDevice)const " (?getSuitablePlugin@PluginDispatcher@InferenceEngine@@QEBA?AV?$SOPointer@VIInferencePlugin@InferenceEngine@@VSharedObjectLoader@details@2@@details@2@W4TargetDevice@2@@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 __imp_GetInferenceEngineVersion,该符号在函数 main 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 __imp_CreateCNNNetReader,该符号在函数 "public: __cdecl InferenceEngine::CNNNetReader::CNNNetReader(void)" (??0CNNNetReader@InferenceEngine@@QEAA@XZ) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::CpuExtensions(void)" (__imp_??0CpuExtensions@Cpu@Extensions@InferenceEngine@@QEAA@XZ),该符号在函数 "public: __cdecl std::_Ref_count_obj<class InferenceEngine::Extensions::Cpu::CpuExtensions>::_Ref_count_obj<class InferenceEngine::Extensions::Cpu::CpuExtensions><>(void)" (??$?0$$V@?$_Ref_count_obj@VCpuExtensions@Cpu@Extensions@InferenceEngine@@@std@@QEAA@XZ) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2019 无法解析的外部符号 "__declspec(dllimport) public: virtual __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::~CpuExtensions(void)" (__imp_??1CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA@XZ),该符号在函数 "public: virtual void * __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::`scalar deleting destructor'(unsigned int)" (??_GCpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAPEAXI@Z) 中被引用 88999 c:\Users\颜俊毅\documents\visual studio 
2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::GetVersion(struct InferenceEngine::Version const * &)const " (?GetVersion@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEBAXAEAPEBUVersion@4@@Z) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::Release(void)" (?Release@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAXXZ) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::SetLogCallback(class InferenceEngine::IErrorListener &)" (?SetLogCallback@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAXAEAVIErrorListener@4@@Z) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::Unload(void)" (?Unload@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAXXZ) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual enum InferenceEngine::StatusCode __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::getFactoryFor(class InferenceEngine::ILayerImplFactory * &,class InferenceEngine::CNNLayer const *,struct InferenceEngine::ResponseDesc *)" (?getFactoryFor@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA?AW4StatusCode@4@AEAPEAVILayerImplFactory@4@PEBVCNNLayer@4@PEAUResponseDesc@4@@Z) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual enum InferenceEngine::StatusCode __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::getPrimitiveTypes(char * * &,unsigned int &,struct InferenceEngine::ResponseDesc *)" (?getPrimitiveTypes@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA?AW4StatusCode@4@AEAPEAPEADAEAIPEAUResponseDesc@4@@Z) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK2001 无法解析的外部符号 "public: virtual enum InferenceEngine::StatusCode __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::getShapeInferImpl(class std::shared_ptr<class InferenceEngine::IShapeInferImpl> &,char const *,struct InferenceEngine::ResponseDesc *)" (?getShapeInferImpl@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA?AW4StatusCode@4@AEAV?$shared_ptr@VIShapeInferImpl@InferenceEngine@@@std@@PEBDPEAUResponseDesc@4@@Z) 88999 c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj 1 错误 LNK1120 27 个无法解析的外部命令 88999 c:\users\颜俊毅\documents\visual studio 2015\Projects\88999\x64\Debug\88999.exe 1
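All of the LNK2019/LNK2001 entries above are unresolved Inference Engine, format-reader, and CPU-extension symbols, which usually means the demo's .cpp was compiled in a hand-made Visual Studio project without the import libraries the official sample build links via CMake. Below is a minimal sketch of making those linker inputs explicit in the source; the library names are assumptions based on a default OpenVINO 2018 layout, and the folder containing them still has to be added under Linker -> General -> Additional Library Directories (or the libraries listed in Additional Dependencies instead of using pragmas):

```cpp
// Hypothetical sketch for MSVC: name the import libraries the demo needs.
// The .lib names below are assumptions for an OpenVINO 2018 install; adjust them
// to whatever your inference_engine\lib folder and samples build actually produce.
#pragma comment(lib, "inference_engine.lib")  // CNNNetReader, TensorDesc, Data, plugin dispatcher, ...
#pragma comment(lib, "format_reader.lib")     // CreateFormatReader used by FormatReader::ReaderPtr
#pragma comment(lib, "cpu_extension.lib")     // Extensions::Cpu::CpuExtensions
```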

TensorFlow Object Detection API demo errors when run, any ideas?

![screenshot](https://img-ask.csdn.net/upload/201810/12/1539335033_129004.png) This runs fine here, but running the demo in Jupyter reports the error below. What's going on..... ![screenshot](https://img-ask.csdn.net/upload/201810/12/1539335054_656491.png)

MFC UI in VS2013: compiles successfully but errors at runtime!!! Please advise

Problem description: the same project compiles successfully under Debug but errors at runtime, as shown: ![screenshot](https://img-ask.csdn.net/upload/201812/02/1543744285_332719.png) Repairing VS2013 didn't help, still errors. Checking the error log (Computer -> Manage -> Event Viewer -> Windows Logs -> Application) shows: ![screenshot](https://img-ask.csdn.net/upload/201812/02/1543744499_272557.png) The logged error is: Activation context generation failed for "F:\DEMO\PoliceImgSys4.0\Debug\opencv_imgproc230d.dll". Dependent assembly Microsoft.VC90.DebugCRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis. What I tried: the DLLs whose dependent assemblies cannot be found include opencv_core230d.dll, opencv_legacy230d.dll, opencv_highgui230d.dll, opencv_imgproc230d.dll, etc. I located the corresponding missing files and put them next to the .exe, but it still errors. Problem description: the same project also compiles under Release but errors at runtime, as shown: ![screenshot](https://img-ask.csdn.net/upload/201812/02/1543744898_726633.png) If I click Ignore, the UI runs normally, but launching the .exe directly from the output folder reports the error below, so something is still wrong: ![screenshot](https://img-ask.csdn.net/upload/201812/02/1543745079_413182.png) I've tried many approaches without success; any help would be much appreciated!

Beginner: a Spring demo managed with Maven errors when run, please advise

Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/beans/factory/ListableBeanFactory at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637) at java.lang.ClassLoader.defineClass(ClassLoader.java:621) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141) at java.net.URLClassLoader.defineClass(URLClassLoader.java:283) at java.net.URLClassLoader.access$000(URLClassLoader.java:58) at java.net.URLClassLoader$1.run(URLClassLoader.java:197) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) Caused by: java.lang.ClassNotFoundException: org.springframework.beans.factory.ListableBeanFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) ... 12 more

Help: Flutter project errors when run

libdvm.so has text relocations. This is wasting memory and is a security risk

Baidu Maps demo errors at runtime

I downloaded the Baidu Maps reference demo and changed the key value, but running it immediately throws java.lang.NoClassDefFoundError: com.baidu.mapapi.SDKInitializer... I'd like to know which part of the configuration is wrong. Any help appreciated.

WeChat SDK demo errors when run, help needed

java.lang.SecurityException: Permission Denial: get/set setting for user asks to run as user -2 but is calling from user 0; this requires android.permission.INTERACT_ACROSS_USERS_FULL. What does this error mean and how do I fix it?

Errors when using pandoc on Linux with a custom template to convert a Markdown file to a Chinese PDF

## Errors when using pandoc on Linux with a custom template to convert a Markdown file to a Chinese PDF --- On Linux I am using pandoc to convert a Markdown file into a (Chinese) PDF with a custom template: the pm-template.latex file from [tzengyuxio](https://github.com/tzengyuxio/pages/tree/gh-pages/pandoc), which I placed in pandoc's template folder (the Linux default is /usr/share/pandoc/data/templates). --- **pandoc (1.16.0.2) and TeX Live 2017 are installed on Linux, and the Chinese environment is already configured.** *Conversion command:* `pandoc demo.md -o new.pdf --latex-engine=xelatex --template=pm-template.latex` - Errors - - 1. First error (solved): ![screenshot](https://img-ask.csdn.net/upload/201912/19/1576724614_579189.png) This one was resolved following [killercup](https://github.com/killercup/trpl-ebook/issues/18) by adding the following to the preamble of pm-template.latex: `\newcommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}` - - 2. **Second error (the one I'd like to ask about):** ![screenshot](https://img-ask.csdn.net/upload/201912/19/1576724974_966113.png) If anyone knows what causes this error, please point me in the right direction. Many thanks.

AR demo built with Xcode 9 errors when run

As the title says, the console says it cannot run that demo, but my Xcode and iOS versions are both fine.

tkinter program errors at runtime

I'm working through a tkinter project and typed the code exactly as the instructor wrote it, yet it still errors. The code is here: ![screenshot](https://img-ask.csdn.net/upload/201912/17/1576544180_961000.jpg) The highlighted line always fails at runtime with: File "E:/Program Files/Python/FollowMyHeart/GUI/Phase6/Demo.py", line 21, in addfile info['text'] = '\n'.join(filelists) NameError: name 'info' is not defined. Supposedly label_info has already been defined, and the instructor's version runs fine. Why won't it run on my machine?

Xcode 10 build-and-run error

Today I upgraded from Xcode 10.1 to Xcode 10.3, and building and running on a real device keeps failing with "This application's bundle identifier does not match its code signing identifier." Could someone help? Build environments: Xcode 10.2 and Xcode 10.3. How the libraries were obtained: Carthage, with github "NordicSemiconductor/IOS-Pods-DFU-Library" ~> 4.5. Dynamic frameworks used: iOSDFULibrary.framework 4.5.0 and ZIPFoundation.framework 0.9.9. On-device error: "This application's bundle identifier does not match its code signing identifier." At first I assumed my certificates and signing configuration were wrong, so I swapped the bundle ID and signing certificate, and even went to the developer site, regenerated the developer certificate, downloaded it, and reconfigured the app's signing; the error persisted. Then I suspected the Xcode version, deleted Xcode, and reinstalled the latest Xcode 10.3; same problem. Finally I wrote a small demo just to test iOSDFULibrary.framework 4.5.0 and ZIPFoundation.framework 0.9.9, and it shows the same error on a real device, while the simulator builds and runs normally. ![screenshot](https://img-ask.csdn.net/upload/201908/21/1566353221_502387.png)

Error while learning Servlets, how do I fix it?

**JSP页面错误**: type Exception report message Error instantiating servlet class com.rl.servlet.ServeltDemo1 description The server encountered an internal error that prevented it from fulfilling this request. exception javax.servlet.ServletException: Error instantiating servlet class com.rl.servlet.ServeltDemo1 org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:522) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1095) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1500) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1456) java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) java.lang.Thread.run(Unknown Source) root cause java.lang.IllegalAccessException: Class org.apache.catalina.core.DefaultInstanceManager can not access a member of class com.rl.servlet.ServeltDemo1 with modifiers "" sun.reflect.Reflection.ensureMemberAccess(Unknown Source) java.lang.Class.newInstance(Unknown Source) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:522) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1095) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1500) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1456) java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) java.lang.Thread.run(Unknown Source) note The full stack trace of the root cause is available in the Apache Tomcat/8.0.32 logs. 
**控制台报错**: 严重: Allocate exception for servlet helloServlet java.lang.IllegalAccessException: Class org.apache.catalina.core.DefaultInstanceManager can not access a member of class com.rl.servlet.ServeltDemo1 with modifiers "" at sun.reflect.Reflection.ensureMemberAccess(Unknown Source) at java.lang.Class.newInstance(Unknown Source) at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:119) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1102) at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:828) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:135) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:522) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1095) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1500) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1456) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Unknown Source) **实现类代码**: class ServeltDemo1 implements Servlet { @Override public void init(ServletConfig arg0) throws ServletException { System.out.println("相应请求"); System.out.println("Servlet组件被创建了"); } @Override public void service(ServletRequest request, ServletResponse response) throws ServletException, IOException { response.getOutputStream().write("<font color = 'red'>Hello Servlet</font>".getBytes()); } @Override public void destroy() { System.out.println("Servlet销毁了"); } } **web.xml代码**: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0"> <servlet> <!-- 设置Servlet名称 --> <servlet-name>helloServlet</servlet-name> <!-- 具体的Servlet类 --> <servlet-class>com.rl.servlet.ServeltDemo1</servlet-class> </servlet> <servlet-mapping> <servlet-name>helloServlet</servlet-name> <url-pattern>/hello</url-pattern> </servlet-mapping> </web-app>

Baidu Maps: compiles successfully but errors at runtime

![screenshot](https://img-ask.csdn.net/upload/201603/27/1459067238_93898.png) I'm learning the Baidu Maps SDK and typed the code following someone else's demo, but I don't know why these errors appear. Could someone take a look? Error:(27, 22) error: constructor BMapManager in class BMapManager cannot be applied to given types; required: no arguments; found: MainActivity; reason: actual and formal argument lists differ in length Error:(31, 20) error: method init in class BMapManager cannot be applied to given types; required: no arguments; found: String,<anonymous MKGeneralListener>; reason: actual and formal argument lists differ in length

Error when running the RN Android project.

After setting up the RN environment and running the Android demo for the first time, the error is: ![screenshot](https://img-ask.csdn.net/upload/201905/26/1558856313_132103.png) Then, following a fix I found on Baidu, I changed the local gradle version and got this error instead: ![screenshot](https://img-ask.csdn.net/upload/201905/26/1558856420_301284.png) How do I solve this?

How do I run the Java version of the downloaded UnionPay demo? I've already configured src/upmp.properties

When I test it in the browser it always says the page cannot be displayed. How should I test it? I've just started learning Java and don't really understand this.

Running the jar reports an error, Exception:

![screenshot](https://img-ask.csdn.net/upload/201708/25/1503631633_39319.png)

ArcGIS Android: plotting points throws VerifyError, though the same code runs fine in the demo

The code runs perfectly in the demo, but after integrating it into my project the map displays fine, yet plotting points doesn't work: it gets stuck at GraphicsLayer.addGraphic() and crashes with this error every time: java.lang.VerifyError: com/esri/core/internal/util/d at com.esri.core.symbol.PictureMarkerSymbol.toJson(SourceFile:264) at com.esri.android.map.GraphicsLayer.addGraphic(SourceFile:257) at com.mlight.chat.activities.jqxx.JqxxMapActivity.sprinkle(JqxxMapActivity.java:427) at com.mlight.chat.activities.jqxx.JqxxMapActivity.access$300(JqxxMapActivity.java:52) at com.mlight.chat.activities.jqxx.JqxxMapActivity$2.handleMessage(JqxxMapActivity.java:349) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:136) at android.app.ActivityThread.main(ActivityThread.java:5336) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:515) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:873) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:689) at dalvik.system.NativeStart.main(Native Method)
