Why does the object detection demo in OpenVINO report so many errors when I run it?

The code is as follows:

/*
// Copyright (c) 2018 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
*/
#include <gflags/gflags.h>
#include <algorithm>
#include <functional>
#include <iostream>
#include <fstream>
#include <random>
#include <string>
#include <memory>
#include <vector>
#include <limits>
#include <chrono>

#include <format_reader_ptr.h>
#include <inference_engine.hpp>
#include <ext_list.hpp>

#include <samples/common.hpp>
#include <samples/slog.hpp>
#include <samples/args_helper.hpp>
#include "object_detection_demo.h"
#include "detectionoutput.h"

using namespace InferenceEngine;

bool ParseAndCheckCommandLine(int argc, char *argv[]) {
    // ---------------------------Parsing and validation of input args--------------------------------------
    slog::info << "Parsing input parameters" << slog::endl;

    gflags::ParseCommandLineNonHelpFlags(&argc, &argv, true);
    if (FLAGS_h) {
        showUsage();
        return false;
    }

    if (FLAGS_ni < 1) {
        throw std::logic_error("Parameter -ni should be greater than 0 (default: 1)");
    }

    if (FLAGS_i.empty()) {
        throw std::logic_error("Parameter -i is not set");
    }

    if (FLAGS_m.empty()) {
        throw std::logic_error("Parameter -m is not set");
    }

    return true;
}

/**
* \brief The entry point for the Inference Engine object_detection demo application
* \file object_detection_demo/main.cpp
* \example object_detection_demo/main.cpp
*/
int main(int argc, char *argv[]) {
    try {
    /** This demo covers certain topology and cannot be generalized for any object detection one **/
    slog::info << "InferenceEngine: " << GetInferenceEngineVersion() << "\n";

    // ------------------------------ Parsing and validation of input args ---------------------------------
    if (!ParseAndCheckCommandLine(argc, argv)) {
        return 0;
    }
    
    /** This vector stores paths to the processed images **/
    std::vector<std::string> images;
    parseImagesArguments(images);
    if (images.empty()) throw std::logic_error("No suitable images were found");
    // -----------------------------------------------------------------------------------------------------
    
    // --------------------------- 1. Load Plugin for inference engine -------------------------------------
    slog::info << "Loading plugin" << slog::endl;
    InferencePlugin plugin = PluginDispatcher({ FLAGS_pp, "../../../lib/intel64" , "" }).getPluginByDevice(FLAGS_d);
    
    /* If CPU device, load the default library with extensions that come with the product */
    if (FLAGS_d.find("CPU") != std::string::npos) {
        /**
        * cpu_extensions library is compiled from "extension" folder containing
        * custom MKLDNNPlugin layer implementations. These layers are not supported
        * by mkldnn, but they can be useful for inferencing custom topologies.
        **/
        plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
    }
    
    if (!FLAGS_l.empty()) {
        // CPU(MKLDNN) extensions are loaded as a shared library and passed as a pointer to base extension
        IExtensionPtr extension_ptr = make_so_pointer<IExtension>(FLAGS_l);
        plugin.AddExtension(extension_ptr);
        slog::info << "CPU Extension loaded: " << FLAGS_l << slog::endl;
    }
    
    if (!FLAGS_c.empty()) {
        // clDNN Extensions are loaded from an .xml description and OpenCL kernel files
        plugin.SetConfig({ { PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c } });
        slog::info << "GPU Extension loaded: " << FLAGS_c << slog::endl;
    }
    
    /** Setting plugin parameter for per layer metrics **/
    if (FLAGS_pc) {
        plugin.SetConfig({ { PluginConfigParams::KEY_PERF_COUNT, PluginConfigParams::YES } });
    }
    
    /** Printing plugin version **/
    printPluginVersion(plugin, std::cout);
    // -----------------------------------------------------------------------------------------------------
    
    // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
    std::string binFileName = fileNameNoExt(FLAGS_m) + ".bin";
    slog::info << "Loading network files:"
        "\n\t" << FLAGS_m <<
        "\n\t" << binFileName <<
        slog::endl;
    
    CNNNetReader networkReader;
    /** Read network model **/
    networkReader.ReadNetwork(FLAGS_m);
    
    /** Extract model name and load weights **/
    networkReader.ReadWeights(binFileName);
    CNNNetwork network = networkReader.getNetwork();
    
    Precision p = network.getPrecision();
    // -----------------------------------------------------------------------------------------------------
    
    // --------------------------- 3. Configure input & output ---------------------------------------------
    
    // ------------------------------ Adding DetectionOutput -----------------------------------------------
    
    /**
    * The only meaningful difference between Faster-RCNN and SSD-like topologies is the interpretation
    * of the output data. Faster-RCNN has 2 output layers in the same format as the ones found inside SSD.
    *
    * But SSD has an additional post-processing DetectionOutput layer that simplifies output filtering.
    * So here we are adding 3 Reshapes and the DetectionOutput to the end of Faster-RCNN so it will return the
    * same result as SSD and we can easily parse it.
    */
    
    std::string firstLayerName = network.getInputsInfo().begin()->first;
    
    int inputWidth = network.getInputsInfo().begin()->second->getTensorDesc().getDims()[3];
    int inputHeight = network.getInputsInfo().begin()->second->getTensorDesc().getDims()[2];
    
    DataPtr bbox_pred_reshapeInPort = ((ICNNNetwork&)network).getData(FLAGS_bbox_name.c_str());
    if (bbox_pred_reshapeInPort == nullptr) {
        throw std::logic_error(std::string("Can't find output layer named ") + FLAGS_bbox_name);
    }
    
    SizeVector bbox_pred_reshapeOutDims = {
        bbox_pred_reshapeInPort->getTensorDesc().getDims()[0] *
        bbox_pred_reshapeInPort->getTensorDesc().getDims()[1], 1
    };
    DataPtr rois_reshapeInPort = ((ICNNNetwork&)network).getData(FLAGS_proposal_name.c_str());
    if (rois_reshapeInPort == nullptr) {
        throw std::logic_error(std::string("Can't find output layer named ") + FLAGS_proposal_name);
    }
    
    SizeVector rois_reshapeOutDims = { rois_reshapeInPort->getTensorDesc().getDims()[0] * rois_reshapeInPort->getTensorDesc().getDims()[1], 1 };
    
    DataPtr cls_prob_reshapeInPort = ((ICNNNetwork&)network).getData(FLAGS_prob_name.c_str());
    if (cls_prob_reshapeInPort == nullptr) {
        throw std::logic_error(std::string("Can't find output layer named ") + FLAGS_prob_name);
    }
    
    SizeVector cls_prob_reshapeOutDims = { cls_prob_reshapeInPort->getTensorDesc().getDims()[0] * cls_prob_reshapeInPort->getTensorDesc().getDims()[1], 1 };
    
    /*
    Detection output
    */
    
    int normalized = 0;
    int prior_size = normalized ? 4 : 5;
    int num_priors = rois_reshapeOutDims[0] / prior_size;
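    // Note: prior_size is 5 here because a non-normalized ROI from the proposal
    // layer carries a batch index plus 4 box coordinates ([batch_id, x1, y1, x2, y2]),
    // while normalized SSD-style priors carry only the 4 coordinates.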
    
    // num_classes guessed from the output dims
    if (bbox_pred_reshapeOutDims[0] % (num_priors * 4) != 0) {
        throw std::logic_error("Can't guess number of classes. Something's wrong with output layers dims");
    }
    int num_classes = bbox_pred_reshapeOutDims[0] / (num_priors * 4);
    slog::info << "num_classes guessed: " << num_classes << slog::endl;
    
    LayerParams detectionOutParams;
    detectionOutParams.name = "detection_out";
    detectionOutParams.type = "DetectionOutput";
    detectionOutParams.precision = p;
    CNNLayerPtr detectionOutLayer = CNNLayerPtr(new CNNLayer(detectionOutParams));
    detectionOutLayer->params["background_label_id"] = "0";
    detectionOutLayer->params["code_type"] = "caffe.PriorBoxParameter.CENTER_SIZE";
    detectionOutLayer->params["eta"] = "1.0";
    detectionOutLayer->params["input_height"] = std::to_string(inputHeight);
    detectionOutLayer->params["input_width"] = std::to_string(inputWidth);
    detectionOutLayer->params["keep_top_k"] = "200";
    detectionOutLayer->params["nms_threshold"] = "0.3";
    detectionOutLayer->params["normalized"] = std::to_string(normalized);
    detectionOutLayer->params["num_classes"] = std::to_string(num_classes);
    detectionOutLayer->params["share_location"] = "0";
    detectionOutLayer->params["top_k"] = "400";
    detectionOutLayer->params["variance_encoded_in_target"] = "1";
    detectionOutLayer->params["visualize"] = "False";
    
    detectionOutLayer->insData.push_back(bbox_pred_reshapeInPort);
    detectionOutLayer->insData.push_back(cls_prob_reshapeInPort);
    detectionOutLayer->insData.push_back(rois_reshapeInPort);
    
    SizeVector detectionOutLayerOutDims = { 7, 200, 1, 1 };
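    // The output holds up to keep_top_k (200) detections of 7 values each:
    // [image_id, label, confidence, xmin, ymin, xmax, ymax]; the parsing loop
    // in step 8 below reads exactly this layout.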
    DataPtr detectionOutLayerOutPort = DataPtr(new Data("detection_out", detectionOutLayerOutDims, p,
        TensorDesc::getLayoutByDims(detectionOutLayerOutDims)));
    detectionOutLayerOutPort->creatorLayer = detectionOutLayer;
    detectionOutLayer->outData.push_back(detectionOutLayerOutPort);
    
    DetectionOutputPostProcessor detOutPostProcessor(detectionOutLayer.get());
    
    network.addOutput(FLAGS_bbox_name, 0);
    network.addOutput(FLAGS_prob_name, 0);
    network.addOutput(FLAGS_proposal_name, 0);
    
    // --------------------------- Prepare input blobs -----------------------------------------------------
    slog::info << "Preparing input blobs" << slog::endl;
    
    /** Taking information about all topology inputs **/
    InputsDataMap inputsInfo(network.getInputsInfo());
    
    /** The demo expects a network with one input (the image) or two inputs (image plus image info) **/
    if (inputsInfo.size() != 1 && inputsInfo.size() != 2) throw std::logic_error("Demo supports topologies only with 1 or 2 inputs");
    
    std::string imageInputName, imInfoInputName;
    
    InputInfo::Ptr inputInfo = inputsInfo.begin()->second;
    
    SizeVector inputImageDims;
    /** Stores input image **/
    
    /** Iterating over all input blobs **/
    for (auto & item : inputsInfo) {
        /** Working with first input tensor that stores image **/
        if (item.second->getInputData()->getTensorDesc().getDims().size() == 4) {
            imageInputName = item.first;
    
            slog::info << "Batch size is " << std::to_string(networkReader.getNetwork().getBatchSize()) << slog::endl;
    
            /** Creating first input blob **/
            Precision inputPrecision = Precision::U8;
            item.second->setPrecision(inputPrecision);
    
        }
        else if (item.second->getInputData()->getTensorDesc().getDims().size() == 2) {
            imInfoInputName = item.first;
    
            Precision inputPrecision = Precision::FP32;
            item.second->setPrecision(inputPrecision);
            if ((item.second->getTensorDesc().getDims()[1] != 3 && item.second->getTensorDesc().getDims()[1] != 6) ||
                item.second->getTensorDesc().getDims()[0] != 1) {
                throw std::logic_error("Invalid input info. Should be 3 or 6 values length");
            }
        }
    }
    
    // ------------------------------ Prepare output blobs -------------------------------------------------
    slog::info << "Preparing output blobs" << slog::endl;
    
    OutputsDataMap outputsInfo(network.getOutputsInfo());
    
    const int maxProposalCount = detectionOutLayerOutDims[1];
    const int objectSize = detectionOutLayerOutDims[0];
    
    /** Set the precision of the output data; this must be done before loading the network to the plugin **/
    
    outputsInfo[FLAGS_bbox_name]->setPrecision(Precision::FP32);
    outputsInfo[FLAGS_prob_name]->setPrecision(Precision::FP32);
    outputsInfo[FLAGS_proposal_name]->setPrecision(Precision::FP32);
    // -----------------------------------------------------------------------------------------------------
    
    // --------------------------- 4. Loading model to the plugin ------------------------------------------
    slog::info << "Loading model to the plugin" << slog::endl;
    
    ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
    // -----------------------------------------------------------------------------------------------------
    
    // --------------------------- 5. Create infer request -------------------------------------------------
    InferRequest infer_request = executable_network.CreateInferRequest();
    // -----------------------------------------------------------------------------------------------------
    
    // --------------------------- 6. Prepare input --------------------------------------------------------
    /** Collect images data ptrs **/
    std::vector<std::shared_ptr<unsigned char>> imagesData, originalImagesData;
    std::vector<int> imageWidths, imageHeights;
    for (auto & i : images) {
        FormatReader::ReaderPtr reader(i.c_str());
        if (reader.get() == nullptr) {
            slog::warn << "Image " + i + " cannot be read!" << slog::endl;
            continue;
        }
        /** Store image data **/
        std::shared_ptr<unsigned char> originalData(reader->getData());
        std::shared_ptr<unsigned char> data(reader->getData(inputInfo->getTensorDesc().getDims()[3], inputInfo->getTensorDesc().getDims()[2]));
        if (data.get() != nullptr) {
            originalImagesData.push_back(originalData);
            imagesData.push_back(data);
            imageWidths.push_back(reader->width());
            imageHeights.push_back(reader->height());
        }
    }
    if (imagesData.empty()) throw std::logic_error("Valid input images were not found!");
    
    size_t batchSize = network.getBatchSize();
    slog::info << "Batch size is " << std::to_string(batchSize) << slog::endl;
    if (batchSize != imagesData.size()) {
        slog::warn << "Number of images " + std::to_string(imagesData.size()) + \
            " doesn't match batch size " + std::to_string(batchSize) << slog::endl;
        slog::warn << std::to_string(std::min(imagesData.size(), batchSize)) + \
            " images will be processed" << slog::endl;
        batchSize = std::min(batchSize, imagesData.size());
    }
    
    /** Creating input blob **/
    Blob::Ptr imageInput = infer_request.GetBlob(imageInputName);
    
    /** Filling input tensor with images. First b channel, then g and r channels **/
    size_t num_channels = imageInput->getTensorDesc().getDims()[1];
    size_t image_size = imageInput->getTensorDesc().getDims()[3] * imageInput->getTensorDesc().getDims()[2];
    
    unsigned char* data = static_cast<unsigned char*>(imageInput->buffer());
    
    /** Iterate over all input images **/
    for (size_t image_id = 0; image_id < std::min(imagesData.size(), batchSize); ++image_id) {
        /** Iterate over all pixel in image (b,g,r) **/
        for (size_t pid = 0; pid < image_size; pid++) {
            /** Iterate over all channels **/
            for (size_t ch = 0; ch < num_channels; ++ch) {
                /**          [images stride + channels stride + pixel id ] all in bytes            **/
                data[image_id * image_size * num_channels + ch * image_size + pid] = imagesData.at(image_id).get()[pid*num_channels + ch];
            }
        }
    }
    
    if (imInfoInputName != "") {
        Blob::Ptr input2 = infer_request.GetBlob(imInfoInputName);
        auto imInfoDim = inputsInfo.find(imInfoInputName)->second->getTensorDesc().getDims()[1];
    
        /** Fill input tensor with values **/
        float *p = input2->buffer().as<PrecisionTrait<Precision::FP32>::value_type*>();
    
        for (size_t image_id = 0; image_id < std::min(imagesData.size(), batchSize); ++image_id) {
            p[image_id * imInfoDim + 0] = static_cast<float>(inputsInfo[imageInputName]->getTensorDesc().getDims()[2]);
            p[image_id * imInfoDim + 1] = static_cast<float>(inputsInfo[imageInputName]->getTensorDesc().getDims()[3]);
            for (int k = 2; k < imInfoDim; k++) {
                p[image_id * imInfoDim + k] = 1.0f;  // all scale factors are set to 1.0
            }
        }
    }
    // -----------------------------------------------------------------------------------------------------
    
    // ---------------------------- 7. Do inference --------------------------------------------------------
    slog::info << "Start inference (" << FLAGS_ni << " iterations)" << slog::endl;
    
    typedef std::chrono::high_resolution_clock Time;
    typedef std::chrono::duration<double, std::ratio<1, 1000>> ms;
    typedef std::chrono::duration<float> fsec;
    
    double total = 0.0;
    /** Start inference & calc performance **/
    for (int iter = 0; iter < FLAGS_ni; ++iter) {
        auto t0 = Time::now();
        infer_request.Infer();
        auto t1 = Time::now();
        fsec fs = t1 - t0;
        ms d = std::chrono::duration_cast<ms>(fs);
        total += d.count();
    }
    // -----------------------------------------------------------------------------------------------------
    
    // ---------------------------- 8. Process output ------------------------------------------------------
    slog::info << "Processing output blobs" << slog::endl;
    
    Blob::Ptr bbox_output_blob = infer_request.GetBlob(FLAGS_bbox_name);
    Blob::Ptr prob_output_blob = infer_request.GetBlob(FLAGS_prob_name);
    Blob::Ptr rois_output_blob = infer_request.GetBlob(FLAGS_proposal_name);
    
    std::vector<Blob::Ptr> detOutInBlobs = { bbox_output_blob, prob_output_blob, rois_output_blob };
    
    Blob::Ptr output_blob = std::make_shared<TBlob<float>>(Precision::FP32, Layout::NCHW, detectionOutLayerOutDims);
    output_blob->allocate();
    std::vector<Blob::Ptr> detOutOutBlobs = { output_blob };
    
    detOutPostProcessor.execute(detOutInBlobs, detOutOutBlobs, nullptr);
    
    const float* detection = static_cast<PrecisionTrait<Precision::FP32>::value_type*>(output_blob->buffer());
    
    std::vector<std::vector<int> > boxes(batchSize);
    std::vector<std::vector<int> > classes(batchSize);
    
    /* Each detection has image_id that denotes processed image */
    for (int curProposal = 0; curProposal < maxProposalCount; curProposal++) {
        float image_id = detection[curProposal * objectSize + 0];
        float label = detection[curProposal * objectSize + 1];
        float confidence = detection[curProposal * objectSize + 2];

        /* MKLDNN and clDNN fill the DetectionOutput layer slightly differently, so we need this check.
           It must run before image_id is used to index the image arrays below. */
        if (image_id < 0 || confidence == 0) {
            continue;
        }

        float xmin = detection[curProposal * objectSize + 3] * imageWidths[image_id];
        float ymin = detection[curProposal * objectSize + 4] * imageHeights[image_id];
        float xmax = detection[curProposal * objectSize + 5] * imageWidths[image_id];
        float ymax = detection[curProposal * objectSize + 6] * imageHeights[image_id];
    
        std::cout << "[" << curProposal << "," << label << "] element, prob = " << confidence <<
            "    (" << xmin << "," << ymin << ")-(" << xmax << "," << ymax << ")" << " batch id : " << image_id;
    
        if (confidence > 0.5) {
            /** Drawing only objects with >50% probability **/
            classes[image_id].push_back(static_cast<int>(label));
            boxes[image_id].push_back(static_cast<int>(xmin));
            boxes[image_id].push_back(static_cast<int>(ymin));
            boxes[image_id].push_back(static_cast<int>(xmax - xmin));
            boxes[image_id].push_back(static_cast<int>(ymax - ymin));
            std::cout << " WILL BE PRINTED!";
        }
        std::cout << std::endl;
    }
    
    for (size_t batch_id = 0; batch_id < batchSize; ++batch_id) {
        addRectangles(originalImagesData[batch_id].get(), imageHeights[batch_id], imageWidths[batch_id], boxes[batch_id], classes[batch_id]);
        const std::string image_path = "out_" + std::to_string(batch_id) + ".bmp";
        if (writeOutputBmp(image_path, originalImagesData[batch_id].get(), imageHeights[batch_id], imageWidths[batch_id])) {
            slog::info << "Image " + image_path + " created!" << slog::endl;
        }
        else {
            throw std::logic_error(std::string("Can't create a file: ") + image_path);
        }
    }
    // -----------------------------------------------------------------------------------------------------
    std::cout << std::endl << "total inference time: " << total << std::endl;
    std::cout << "Average running time of one iteration: " << total / static_cast<double>(FLAGS_ni) << " ms" << std::endl;
    std::cout << std::endl << "Throughput: " << 1000 * static_cast<double>(FLAGS_ni) * batchSize / total << " FPS" << std::endl;
    std::cout << std::endl;
    
    /** Show performance results **/
    if (FLAGS_pc) {
        printPerformanceCounts(infer_request, std::cout);
    }
    

    }
    catch (const std::exception& error) {
        slog::err << error.what() << slog::endl;
        return 1;
    }
    catch (...) {
        slog::err << "Unknown/internal exception happened." << slog::endl;
        return 1;
    }

    slog::info << "Execution successful" << slog::endl;
    return 0;
}
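
For context, the flags validated in ParseAndCheckCommandLine imply an invocation along these lines (a sketch only: the executable name and paths are placeholders, and -bbox_name/-prob_name/-proposal_name fall back to defaults declared in object_detection_demo.h, which is not shown here):

object_detection_demo.exe -i <path_to_image>.bmp -m <path_to_faster_rcnn_ir>.xml -d CPU -ni 1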

The following errors are reported (Visual Studio error list: Severity / Code / Description / Project / File / Line / Suppression State; unless noted otherwise, each entry is attributed to project 88999, file c:\Users\颜俊毅\documents\visual studio 2015\Projects\88999\88999\7521.obj, line 1):

Error LNK2019: unresolved external symbol CreateFormatReader, referenced in function "public: __cdecl FormatReader::ReaderPtr::ReaderPtr(char const *)" (??0ReaderPtr@FormatReader@@QEAA@PEBD@Z)

Error (active): a reference to function "InferenceEngine::make_so_pointer(const std::string &name) [with T=InferenceEngine::IExtension]" (declared at line 164 of "c:\Users\颜俊毅\Desktop\dldt-2018\inference-engine\include\details\ie_so_pointer.hpp") is not allowed: it is a deleted function (reported in 7521.cpp, line 102)

Error LNK2019: unresolved external symbol __imp_CreateDefaultAllocator, referenced in function "protected: virtual class std::shared_ptr const & __cdecl InferenceEngine::TBlob >::getAllocator(void)const " (?getAllocator@?$TBlob@HU?$enable_if@$00X@std@@@InferenceEngine@@MEBAAEBV?$shared_ptr@VIAllocator@InferenceEngine@@@std@@XZ)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::BlockingDesc::BlockingDesc(class std::vector > const &,class std::vector > const &)" (__imp_??0BlockingDesc@InferenceEngine@@QEAA@AEBV?$vector@_KV?$allocator@_K@std@@@std@@0@Z), referenced in function "public: __cdecl DetectionOutputPostProcessor::DetectionOutputPostProcessor(class InferenceEngine::CNNLayer const *)" (??0DetectionOutputPostProcessor@@QEAA@PEBVCNNLayer@InferenceEngine@@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl InferenceEngine::BlockingDesc::~BlockingDesc(void)" (__imp_??1BlockingDesc@InferenceEngine@@UEAA@XZ), referenced in function "public: __cdecl DetectionOutputPostProcessor::DetectionOutputPostProcessor(class InferenceEngine::CNNLayer const *)" (??0DetectionOutputPostProcessor@@QEAA@PEBVCNNLayer@InferenceEngine@@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::TensorDesc::TensorDesc(class InferenceEngine::Precision const &,class std::vector >,class InferenceEngine::BlockingDesc const &)" (__imp_??0TensorDesc@InferenceEngine@@QEAA@AEBVPrecision@1@V?$vector@_KV?$allocator@_K@std@@@std@@AEBVBlockingDesc@1@@Z), referenced in function "public: __cdecl DetectionOutputPostProcessor::DetectionOutputPostProcessor(class InferenceEngine::CNNLayer const *)" (??0DetectionOutputPostProcessor@@QEAA@PEBVCNNLayer@InferenceEngine@@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::TensorDesc::TensorDesc(class InferenceEngine::Precision const &,class std::vector >,enum InferenceEngine::Layout)" (__imp_??0TensorDesc@InferenceEngine@@QEAA@AEBVPrecision@1@V?$vector@_KV?$allocator@_K@std@@@std@@W4Layout@1@@Z), referenced in function "public: __cdecl InferenceEngine::Blob::Blob(class InferenceEngine::Precision,enum InferenceEngine::Layout,class std::vector > const &)" (??0Blob@InferenceEngine@@QEAA@VPrecision@1@W4Layout@1@AEBV?$vector@_KV?$allocator@_K@std@@@std@@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl InferenceEngine::TensorDesc::~TensorDesc(void)" (__imp_??1TensorDesc@InferenceEngine@@UEAA@XZ), referenced in function "public: __cdecl InferenceEngine::Blob::Blob(class InferenceEngine::TensorDesc)" (??0Blob@InferenceEngine@@QEAA@VTensorDesc@1@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: class std::vector > & __cdecl InferenceEngine::TensorDesc::getDims(void)" (__imp_?getDims@TensorDesc@InferenceEngine@@QEAAAEAV?$vector@_KV?$allocator@_K@std@@@std@@XZ), referenced in function "public: virtual void __cdecl InferenceEngine::TBlob >::allocate(void)" (?allocate@?$TBlob@HU?$enable_if@$00X@std@@@InferenceEngine@@UEAAXXZ)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: class std::vector > const & __cdecl InferenceEngine::TensorDesc::getDims(void)const " (__imp_?getDims@TensorDesc@InferenceEngine@@QEBAAEBV?$vector@_KV?$allocator@_K@std@@@std@@XZ), referenced in function "public: unsigned __int64 __cdecl InferenceEngine::Blob::byteSize(void)const " (?byteSize@Blob@InferenceEngine@@QEBA_KXZ)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: static enum InferenceEngine::Layout __cdecl InferenceEngine::TensorDesc::getLayoutByDims(class std::vector >)" (__imp_?getLayoutByDims@TensorDesc@InferenceEngine@@SA?AW4Layout@2@V?$vector@_KV?$allocator@_K@std@@@std@@@Z), referenced in function main

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::TensorDesc::TensorDesc(class InferenceEngine::TensorDesc const &)" (__imp_??0TensorDesc@InferenceEngine@@QEAA@AEBV01@@Z), referenced in function "public: __cdecl InferenceEngine::TBlob >::TBlob >(class InferenceEngine::TensorDesc const &)" (??0?$TBlob@HU?$enable_if@$00X@std@@@InferenceEngine@@QEAA@AEBVTensorDesc@1@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::Data::Data(class std::basic_string,class std::allocator > const &,class std::vector > const &,class InferenceEngine::Precision,enum InferenceEngine::Layout)" (__imp_??0Data@InferenceEngine@@QEAA@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@AEBV?$vector@_KV?$allocator@_K@std@@@3@VPrecision@1@W4Layout@1@@Z), referenced in function main

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: class InferenceEngine::TensorDesc const & __cdecl InferenceEngine::Data::getTensorDesc(void)const " (__imp_?getTensorDesc@Data@InferenceEngine@@QEBAAEBVTensorDesc@2@XZ), referenced in function "public: virtual class std::map,class std::allocator >,class std::vector >,struct std::less,class std::allocator > >,class std::allocator,class std::allocator > const ,class std::vector > > > > __cdecl InferenceEngine::CNNNetwork::getInputShapes(void)" (?getInputShapes@CNNNetwork@InferenceEngine@@UEAA?AV?$map@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V?$vector@_KV?$allocator@_K@std@@@2@U?$less@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V?$vector@_KV?$allocator@_K@std@@@2@@std@@@2@@std@@XZ)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: void __cdecl InferenceEngine::Data::setPrecision(class InferenceEngine::Precision const &)" (__imp_?setPrecision@Data@InferenceEngine@@QEAAXAEBVPrecision@2@@Z), referenced in function "public: void __cdecl InferenceEngine::InputInfo::setPrecision(class InferenceEngine::Precision)" (?setPrecision@InputInfo@InferenceEngine@@QEAAXVPrecision@2@@Z)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::Data::~Data(void)" (__imp_??1Data@InferenceEngine@@QEAA@XZ), referenced in function "public: void * __cdecl InferenceEngine::Data::`scalar deleting destructor'(unsigned int)" (??_GData@InferenceEngine@@QEAAPEAXI@Z)

Error LNK2019: unresolved external symbol __imp_findPlugin, referenced in function "public: class InferenceEngine::details::SOPointer<class InferenceEngine::IInferencePlugin,class InferenceEngine::details::SharedObjectLoader> __cdecl InferenceEngine::PluginDispatcher::getSuitablePlugin(enum InferenceEngine::TargetDevice)const " (?getSuitablePlugin@PluginDispatcher@InferenceEngine@@QEBA?AV?$SOPointer@VIInferencePlugin@InferenceEngine@@VSharedObjectLoader@details@2@@details@2@W4TargetDevice@2@@Z)

Error LNK2019: unresolved external symbol __imp_GetInferenceEngineVersion, referenced in function main

Error LNK2019: unresolved external symbol __imp_CreateCNNNetReader, referenced in function "public: __cdecl InferenceEngine::CNNNetReader::CNNNetReader(void)" (??0CNNNetReader@InferenceEngine@@QEAA@XZ)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::CpuExtensions(void)" (__imp_??0CpuExtensions@Cpu@Extensions@InferenceEngine@@QEAA@XZ), referenced in function "public: __cdecl std::_Ref_count_obj<class InferenceEngine::Extensions::Cpu::CpuExtensions>::_Ref_count_obj<class InferenceEngine::Extensions::Cpu::CpuExtensions><>(void)" (??$?0$$V@?$_Ref_count_obj@VCpuExtensions@Cpu@Extensions@InferenceEngine@@@std@@QEAA@XZ)

Error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::~CpuExtensions(void)" (__imp_??1CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA@XZ), referenced in function "public: virtual void * __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::`scalar deleting destructor'(unsigned int)" (??_GCpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAPEAXI@Z)

Error LNK2001: unresolved external symbol "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::GetVersion(struct InferenceEngine::Version const * &)const " (?GetVersion@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEBAXAEAPEBUVersion@4@@Z)

Error LNK2001: unresolved external symbol "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::Release(void)" (?Release@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAXXZ)

Error LNK2001: unresolved external symbol "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::SetLogCallback(class InferenceEngine::IErrorListener &)" (?SetLogCallback@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAXAEAVIErrorListener@4@@Z)

Error LNK2001: unresolved external symbol "public: virtual void __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::Unload(void)" (?Unload@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAAXXZ)

Error LNK2001: unresolved external symbol "public: virtual enum InferenceEngine::StatusCode __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::getFactoryFor(class InferenceEngine::ILayerImplFactory * &,class InferenceEngine::CNNLayer const *,struct InferenceEngine::ResponseDesc *)" (?getFactoryFor@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA?AW4StatusCode@4@AEAPEAVILayerImplFactory@4@PEBVCNNLayer@4@PEAUResponseDesc@4@@Z)

Error LNK2001: unresolved external symbol "public: virtual enum InferenceEngine::StatusCode __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::getPrimitiveTypes(char * * &,unsigned int &,struct InferenceEngine::ResponseDesc *)" (?getPrimitiveTypes@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA?AW4StatusCode@4@AEAPEAPEADAEAIPEAUResponseDesc@4@@Z)

Error LNK2001: unresolved external symbol "public: virtual enum InferenceEngine::StatusCode __cdecl InferenceEngine::Extensions::Cpu::CpuExtensions::getShapeInferImpl(class std::shared_ptr &,char const *,struct InferenceEngine::ResponseDesc *)" (?getShapeInferImpl@CpuExtensions@Cpu@Extensions@InferenceEngine@@UEAA?AW4StatusCode@4@AEAV?$shared_ptr@VIShapeInferImpl@InferenceEngine@@@std@@PEBDPEAUResponseDesc@4@@Z)

Error LNK1120: 27 unresolved externals (reported against c:\users\颜俊毅\documents\visual studio 2015\Projects\88999\x64\Debug\88999.exe, line 1)
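
Every LNK2019/LNK2001 above is an unresolved external: the sources compile, but the import libraries that define those symbols are never handed to the linker. As a minimal sketch of the fix, assuming the dldt-2018 Windows layout (inference_engine.lib shipped under the package's lib directory, with format_reader and cpu_extension built from the samples tree; the exact library names and paths are assumptions to adjust for your build), the libraries can be named directly in the source with MSVC's #pragma comment, which is equivalent to listing them under Project Properties -> Linker -> Input -> Additional Dependencies:

// Sketch only: library names assume the dldt-2018 sample build layout.
#pragma comment(lib, "inference_engine.lib")  // TensorDesc, Data, CreateCNNNetReader, findPlugin, GetInferenceEngineVersion, ...
#pragma comment(lib, "format_reader.lib")     // CreateFormatReader
#pragma comment(lib, "cpu_extension.lib")     // InferenceEngine::Extensions::Cpu::CpuExtensions

The matching DLLs (e.g. inference_engine.dll, format_reader.dll) must then sit next to the executable or on PATH at run time. The single "Error (active)" entry about make_so_pointer being a deleted function comes from IntelliSense rather than the compiler, and usually clears once the project builds consistently against one set of headers and libraries.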

yuzying: I ran into the same problem. Have you solved it?
(replied 10 months ago)

其他相关推荐
Tensorflow Object Detection API Demo运行报错求?

![图片说明](https://img-ask.csdn.net/upload/201810/12/1539335033_129004.png) 这个能运行但是在jupyter运行Demo就报下面的错误是怎么回事..... ![图片说明](https://img-ask.csdn.net/upload/201810/12/1539335054_656491.png)

openvino demo 文件运行报错问题。

demo里面的 object detection demo 运行的时候出现错误如下 严重性 代码 说明 项目 文件 行 禁止显示状态 错误 C4996 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>::copy': Call to 'std::basic_string::copy' with parameters that may be unsafe - this call relies on the caller to check that the passed values are correct. To disable this warning, use -D_SCL_SECURE_NO_WARNINGS. See documentation on how to use Visual C++ 'Checked Iterators' 88999 d:\open_model_zoo-2018\demos\extension\ext_list.hpp 56 是怎么回事,求各位老师解答

安装Tensorflow object detection API之后运行model_builder_test.py报错?

``` Traceback (most recent call last): File "G:\python\models\research\object_detection\builders\model_builder_test.py", line 23, in <module> from object_detection.builders import model_builder File "G:\python\models\research\object_detection\builders\model_builder.py", line 20, in <module> from object_detection.builders import anchor_generator_builder File "G:\python\models\research\object_detection\builders\anchor_generator_builder.py", line 22, in <module> from object_detection.protos import anchor_generator_pb2 File "G:\python\models\research\object_detection\protos\anchor_generator_pb2.py", line 29, in <module> dependencies=[object__detection_dot_protos_dot_flexible__grid__anchor__generator__pb2.DESCRIPTOR,object__detection_dot_protos_dot_grid__anchor__generator__pb2.DESCRIPTOR,object__detection_dot_protos_dot_multiscale__anchor__generator__pb2.DESCRIPTOR,object__detection_dot_protos_dot_ssd__anchor__generator__pb2.DESCRIPTOR,]) File "G:\python\python setup\lib\site-packages\google\protobuf\descriptor.py", line 879, in __new__ return _message.default_pool.AddSerializedFile(serialized_pb) TypeError: Couldn't build proto file into descriptor pool! Invalid proto descriptor for file "object_detection/protos/anchor_generator.proto": object_detection/protos/flexible_grid_anchor_generator.proto: Import "object_detection/protos/flexible_grid_anchor_generator.proto" has not been loaded. object_detection/protos/multiscale_anchor_generator.proto: Import "object_detection/protos/multiscale_anchor_generator.proto" has not been loaded. object_detection.protos.AnchorGenerator.multiscale_anchor_generator: "object_detection.protos.MultiscaleAnchorGenerator" seems to be defined in "protos/multiscale_anchor_generator.proto", which is not imported by "object_detection/protos/anchor_generator.proto". To use it here, please add the necessary import. object_detection.protos.AnchorGenerator.flexible_grid_anchor_generator: "object_detection.protos.FlexibleGridAnchorGenerator" seems to be defined in "protos/flexible_grid_anchor_generator.proto", which is not imported by "object_detection/protos/anchor_generator.proto". To use it here, please add the necessary import. ``` 网上找了各种方法都没用,有些可能有用的但是不够详细。

TensorFlow Object Detection API 训练过程相关问题?

``` 2019-03-22 11:47:37.264972: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:08:00.0, compute capability: 6.1) ``` Tensorflow能正常值用。 启动model_main().py后卡在这不动、相关文件夹中没有生成.ckpt文件。 ![图片说明](https://img-ask.csdn.net/upload/201903/22/1553233226_55747.jpg) 是我显卡太垃圾、计算慢还是其他原因啊???求大神。

在jupyter notebook上运行tensorflow目标识别官方测试代码object_detection_tutorial.ipynb,每次都是最后一个模块运行时出现“服务器挂了”,如何解决?

在annaconda中创建了tensorflow-gpu的环境,代码可以跑通,没有报错,但是每次到最后一块检测test_image 的时候就服务器挂了。 创建tensorflowcpu环境可以正常跑下来(最后显示那个输出结果),请问是为什么?如何解决呢? 对该环境用代码测试过,pycharm里,可以显示应用的显卡信息,算力等信息,应该是没有问题的。

提问:测试Tensorflow object detection API,然后就出问题了?

# AttributeError: module 'tensorflow.python.keras' has no attribute 'Model' ![图片说明](https://img-ask.csdn.net/upload/201901/19/1547905831_372525.png) 大神们帮我看看怎么弄

关于object detection运行视频检测代码出现报错:ValueError:assignment destination is read-only

我参考博主 withzheng的博客:https://blog.csdn.net/xiaoxiao123jun/article/details/76605928 在视频物体识别的部分中,我用的是Anaconda自带的spyder(python3.6)来运行他给的视频检测代码,出现了如下报错,![图片说明](https://img-ask.csdn.net/upload/201904/20/1555752185_448895.jpg) 具体报错: Moviepy - Building video video1_out.mp4. Moviepy - Writing video video1_out.mp4 t: 7%|▋ | 7/96 [00:40<09:17, 6.26s/it, now=None]Traceback (most recent call last): File "", line 1, in runfile('C:/models-master1/research/object_detection/object_detection_tutorial (1).py', wdir='C:/models-master1/research/object_detection') File "C:\Users\Administrator\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile execfile(filename, namespace) File "C:\Users\Administrator\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile exec(compile(f.read(),filename,'exec'), namespace) File "C:/models-master1/research/object_detection/object_detection_tutorial (1).py", line 273, in white_clip.write_videofile(white_output, audio=False) File "", line 2, in write_videofile File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 54, in requires_duration return f(clip, *a, **k) File "", line 2, in write_videofile File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 137, in use_clip_fps_by_default return f(clip, *new_a, **new_kw) File "", line 2, in write_videofile File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 22, in convert_masks_to_RGB return f(clip, *a, **k) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 326, in write_videofile logger=logger) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 216, in ffmpeg_write_video fps=fps, dtype="uint8"): File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\Clip.py", line 475, in iter_frames frame = self.get_frame(t) File "", line 2, in get_frame File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 89, in wrapper return f(*new_a, **new_kw) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\Clip.py", line 95, in get_frame return self.make_frame(t) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\Clip.py", line 138, in newclip = self.set_make_frame(lambda t: fun(self.get_frame, t)) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 511, in return self.fl(lambda gf, t: image_func(gf(t)), apply_to) File "C:/models-master1/research/object_detection/object_detection_tutorial (1).py", line 267, in process_image image_process=detect_objects(image,sess,detection_graph) File "C:/models-master1/research/object_detection/object_detection_tutorial (1).py", line 258, in detect_objects line_thickness=8) File "C:\models-master1\research\object_detection\utils\visualization_utils.py", line 743, in visualize_boxes_and_labels_on_image_array use_normalized_coordinates=use_normalized_coordinates) File "C:\models-master1\research\object_detection\utils\visualization_utils.py", line 129, in draw_bounding_box_on_image_array np.copyto(image, np.array(image_pil)) ValueError: assignment destination is read-only 想问问各位大神有遇到过类似的问题吗。。如何解决?

Tensorflow object detection API 使用VOC数据集出现错误。

环境:Win7+Anaconda+Python3.6+tensorflow 1.12.0 在进行目标检测时,运行train.py,跳转到array____ops.py在903行出错, ``` if ops.is_dense_tensor_like(elem): if dtype is not None and elem.dtype.base_dtype != dtype: raise TypeError("Cannot convert a list containing a tensor of dtype " "%s to %s (Tensor is: %r)" % (elem.dtype, dtype, elem)) converted_elems.append(elem) must_pack = True elif isinstance(elem, (list, tuple)): converted_elem = _autopacking_helper(elem, dtype, str(i)) if ops.is_dense_tensor_like(converted_elem): must_pack = True converted_elems.append(converted_elem) else: converted_elems.append(elem) ``` 错误显示为: TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'Preprocessor/stack_1:0' shape=(1, 3) dtype=int32>) 有没有遇到相同问题的,怎么解决啊,找了好多,都没有遇到靠谱的。

WIN10环境object_detection api训练时报错:Windows fatal exception: access violation

报错内容: Windows fatal exception: access violation Current thread 0x00000e40 (most recent call first): File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 84 in _preread_check File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 122 in read File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\label_map_util.py", line 138 in load_labelmap File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\label_map_util.py", line 169 in get_label_map_dict File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\data_decoders\tf_example_decoder.py", line 64 in __init__ File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\data_decoders\tf_example_decoder.py", line 319 in __init__ File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py", line 130 in build File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\inputs.py", line 579 in train_input File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\inputs.py", line 476 in _train_input_fn File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1116 in _call_input_fn File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1025 in _get_features_and_labels_from_input_fn File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1188 in _train_model_default File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161 in _train_model File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370 in train File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 714 in run_local File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 613 in run File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 473 in train_and_evaluate File ".\object_detection\model_main.py", line 105 in main File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app.py", line 250 in _run_main File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app.py", line 299 in run File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\platform\app.py", line 40 in run File ".\object_detection\model_main.py", line 109 in <module>

如何对使用ssd检测出来的目标进行计数

![图片说明](https://img-ask.csdn.net/upload/201902/19/1550586482_193932.png) ![图片说明](https://img-ask.csdn.net/upload/201902/19/1550586497_236815.png) 我使用了ssd对图像进行检测,检测结果如图所示,请问如何对每检测结果中的每一个对象计数。如果对视频进行物体检测的计数,需要往哪个方向进行。有好的博客可以推荐一下。谢谢。 源代码如下 ``` # coding: utf-8 # # Object Detection Demo # Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. # # Imports # In[1]: import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') # ## Env setup # In[2]: # This is needed to display the images. get_ipython().magic('matplotlib inline') # ## Object detection imports # Here are the imports from the object detection module. # In[3]: from utils import label_map_util from utils import visualization_utils as vis_util # # Model preparation # ## Variables # # Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. # # By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. # In[4]: # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') # ## Download Model # In[5]: opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) # ## Load a (frozen) Tensorflow model into memory. # In[6]: detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') # ## Loading label map # Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine # In[7]: category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) # ## Helper code # In[8]: def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) # # Detection # In[9]: # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) # In[ ]: def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict # In[ ]: for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. 
vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ```

Tensorflow object detection API 训练自己数据时报错 Windows fatal exception: access violation

python3.6, tf 1.14.0,Tensorflow object detection API 跑demo图片和改为摄像头进行物体识别均正常, 训练自己的数据训练自己数据时报错 Windows fatal exception: access violation 用的ssd_mobilenet_v1_coco_2018_01_28模型, 命令:python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr 其实就是按照网上基础的训练来的,一直报这个,具体错误输出如下: (py36) D:\pythonpro\TensorFlowLearn\face_tf_model>python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr WARNING: Logging before flag parsing goes to stderr. W0622 16:50:30.230578 14180 lazy_loader.py:50] The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons * https://github.com/tensorflow/io (for I/O related ops) If you depend on functionality not listed there, please file an issue. W0622 16:50:30.317274 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\inception_resnet_v2.py:373: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead. W0622 16:50:30.355400 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\mobilenet\mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead. W0622 16:50:30.388313 14180 deprecation_wrapper.py:119] From model_main.py:109: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead. W0622 16:50:30.397290 14180 deprecation_wrapper.py:119] From D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py:98: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead. Windows fatal exception: access violation Current thread 0x00003764 (most recent call first): File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 84 in _preread_check File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 122 in read File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py", line 99 in get_configs_from_pipeline_file File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\model_lib.py", line 606 in create_estimator_and_inputs File "model_main.py", line 71 in main File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 251 in _run_main File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 300 in run File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run File "model_main.py", line 109 in <module> (py36) D:\pythonpro\TensorFlowLearn\face_tf_model> 请大神指点下

Tensorflow object-detection api 报错

我尝试使用ssd_mobilenet_v1模型,报错TypeError: `pred` must be a Tensor, or a Python bool, or 1 or 0. Found instead: None 不知道是什么原因引起的,is_training改成true的方法我已经试过了,没有用

在训练Tensorflow模型(object_detection)时,训练在第一次评估后退出,怎么使训练继续下去?

当我进行ssd模型训练时,训练进行了10分钟,然后进入评估阶段,评估之后程序就自动退出了,没有看到误和警告,这是为什么,怎么让程序一直训练下去? 训练命令: ``` python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr ``` 配置文件: ``` training exit after the first evaluation(only one evaluation) in Tensorflow model(object_detection) without error and waring System information What is the top-level directory of the model you are using:models/research/object_detection/ Have I written custom code (as opposed to using a stock example script provided in TensorFlow):NO OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Windows-10(64bit) TensorFlow installed from (source or binary):conda install tensorflow-gpu TensorFlow version (use command below):1.13.1 Bazel version (if compiling from source):N/A CUDA/cuDNN version:cudnn-7.6.0 GPU model and memory:GeForce GTX 1060 6GB Exact command to reproduce:See below my command for training : python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr This is my config : train_config { batch_size: 24 data_augmentation_options { random_horizontal_flip { } } data_augmentation_options { ssd_random_crop { } } optimizer { rms_prop_optimizer { learning_rate { exponential_decay_learning_rate { initial_learning_rate: 0.00400000018999 decay_steps: 800720 decay_factor: 0.949999988079 } } momentum_optimizer_value: 0.899999976158 decay: 0.899999976158 epsilon: 1.0 } } fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt" from_detection_checkpoint: true num_steps: 200000 train_input_reader { label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt" tf_record_input_reader { input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord" } } eval_config { num_examples: 8000 max_evals: 10 use_moving_averages: false } eval_input_reader { label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt" shuffle: false num_readers: 1 tf_record_input_reader { input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord" } ``` 窗口输出: (default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue. WARNING:tensorflow:Forced number of epochs for all eval validations to be 1. WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1. 
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn..model_fn at 0x0000027CBAB7BB70>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.experimental.parallel_interleave(...).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
seed2 arg is deprecated. Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.batch(..., drop_remainder=True).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:04:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=2.43s).
Accumulating evaluation results...
DONE (t=0.14s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.287
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.529
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.278
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.162
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384
(default) D:\gitcode\models\research>
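A plausible cause, judging only from the command above (an educated guess, not confirmed by these logs): --model_dir points at the extracted pre-trained model's saved_model directory, so Estimator restores the shipped checkpoint. If that checkpoint's global step is already at or beyond --num_train_steps=50000, train_and_evaluate treats training as finished, runs one final evaluation, and exits cleanly, which matches the behavior described. A minimal sketch to check this hypothesis, reusing the asker's own paths (TF 1.x API; "global_step" is the standard Estimator step variable):

```python
# Minimal sketch (TF 1.x): inspect the global step stored in the checkpoint
# that --model_dir resolves to. The path is copied from the question above.
import tensorflow as tf

model_dir = ("D:/gitcode/models/research/object_detection/"
             "ssd_mobilenet_v1_coco_2018_01_28/saved_model")
ckpt = tf.train.latest_checkpoint(model_dir)
if ckpt is None:
    print("no checkpoint found in", model_dir)
else:
    reader = tf.train.NewCheckpointReader(ckpt)
    if reader.has_tensor("global_step"):
        print(ckpt, "global_step =", reader.get_tensor("global_step"))
# If global_step >= num_train_steps (50000 here), Estimator.train_and_evaluate
# considers training complete: it runs one last evaluation and exits without
# any error or warning.
```

If that turns out to be the case, pointing --model_dir at a fresh, empty directory (keeping fine_tune_checkpoint in pipeline.config for warm-starting) and, if needed, raising --num_train_steps should let training continue.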

Faster R-CNN: error when running the demo

After setting up Faster R-CNN, running ./tools/demo.py fails with the following error: Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Python

In JS, when does output show [object Object], and when does it show all of an object's keys and values?

In JS, when does output show [object Object], and when does it show all of an object's keys and values?

console.log(options); // [object Object]
options = options || {}; // what does this line mean?
console.log(options); // prints the object's keys and values: {url: "url", id: "id", pid: "pid"}

In JS I have an array containing N Object elements, and each object has a date. How do I sort the array by that date?

In JS I have an array containing N Object elements, and each object has a date. How do I sort the array by that date? Could you provide a demo?

[TensorFlow 2.0] Can the object_detection API be used with TensorFlow 2.0?

My machine has TensorFlow 2.0 installed. While setting up the object_detection API I ran into AttributeError: module 'tensorflow' has no attribute 'contrib'. Could someone who knows this explain it? Many thanks.
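For context: tf.contrib was removed entirely in TensorFlow 2.0, and the TF1-era object_detection code imports it, so this AttributeError is expected on 2.0. A minimal sketch of how to confirm which line you are on (the version pin below is illustrative, not prescriptive):

```python
# Minimal check: tf.contrib only exists on the TensorFlow 1.x line, so any
# `tf.contrib...` reference raises AttributeError on 2.x.
import tensorflow as tf

print(tf.__version__)          # e.g. '2.0.0' in the asker's environment
print(hasattr(tf, "contrib"))  # False on 2.x, True on 1.x
# Common workarounds at the time:
#   pip install "tensorflow==1.15.*"   # run the TF1 object_detection code as-is
# or upgrade to a later models/research release that ships TF2-compatible
# object detection code.
```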

object vs. Object in C#

In C#, are object and Object the same thing? Are string and String the same? Is object a class?

Java List<Object>: how do I get the values out???!!!

```
List<User> list = ...; // result returned by MyBatis from Oracle
for (User user : list) {
    System.out.println(user.getName());
    System.out.println(user.getAge());
}
```
As in the example above, I can obtain the list, but written this way every entity class needs its own set of user.getName()-style calls; if an entity class has many fields, I end up writing a great many of them. I would like a generic method: no matter which entity class the object in List<Object> is, and no matter how many fields it contains, I can get all the values by iterating over the list. Is there a way to do this? A reference example would be ideal. Thanks!

