How can I successfully install the face_recognition library on Python 3.5?

Windows 7, Python 3.5. I have already installed the dlib library successfully, but when I run `pip install face_recognition` I still get "Failed building wheel for dlib". Why?



1 Answer

If dlib fails to install, check whether boost-python is installed on your machine, and if it is, make sure it matches your Python version. Note that boost-python is also used when configuring caffe: if caffe was configured against Python 2.x while face_recognition is being installed under Python 3.x, you will get an error. The fix (on macOS with Homebrew) is:

```
brew install boost-python --with-python3 --without-python2
```

Author: roguesir
Source: CSDN
Original post: https://blog.csdn.net/roguesir/article/details/77104246
Copyright notice: this is the author's original article; please include a link to the original when reposting.
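A note on platforms: the Homebrew command above is macOS-only, and recent Homebrew versions removed per-formula install options (the Python 3 bindings now come from the separate `boost-python3` formula). For the asker's actual platform (Windows 7), the "Failed building wheel for dlib" error usually means pip is trying to compile dlib from source without its build prerequisites. A commonly suggested sequence — an assumption, adjust to your toolchain; a C++ compiler such as the Visual Studio build tools must also be present:

```shell
pip install cmake           # dlib's setup.py drives a CMake build
pip install dlib            # build the wheel that pip previously failed on
pip install face_recognition
```

If `pip install dlib` succeeds, the face_recognition install no longer needs to build dlib itself.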

Other related questions
Everything else in Python's face_recognition library runs fine, but face_encodings alone throws an error — why?
dlib and face_recognition are both installed correctly. The code:

```
import face_recognition
face_recognition.face_encodings("F:/multi-media/video frames/260.png")
```

The error:

```
PS F:\multi-media> & C:/Users/ONE7/AppData/Local/Programs/Python/Python37/python.exe "f:/multi-media/vedio analysis.py"
Traceback (most recent call last):
  File "f:/multi-media/vedio analysis.py", line 118, in <module>
    face_recognition.face_encodings("F:/multi-media/video frames/260.png", known_face_locations=None, num_jitters=1)
  File "C:\Users\ONE7\AppData\Local\Programs\Python\Python37\lib\site-packages\face_recognition\api.py", line 209, in face_encodings
    raw_landmarks = _raw_face_landmarks(face_image, known_face_locations, model="small")
  File "C:\Users\ONE7\AppData\Local\Programs\Python\Python37\lib\site-packages\face_recognition\api.py", line 153, in _raw_face_landmarks
    face_locations = _raw_face_locations(face_image)
  File "C:\Users\ONE7\AppData\Local\Programs\Python\Python37\lib\site-packages\face_recognition\api.py", line 102, in _raw_face_locations
    return face_detector(img, number_of_times_to_upsample)
TypeError: __call__(): incompatible function arguments. The following argument types are supported:
    1. (self: dlib.fhog_object_detector, image: array, upsample_num_times: int=0) -> dlib.rectangles

Invoked with: <dlib.fhog_object_detector object at 0x0000026A438B91B0>, 'F:/multi-media/video frames/260.png', 1
```
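The last line of the traceback is the clue: dlib's `fhog_object_detector` was invoked with the path string itself, but its binding only accepts a numpy image array — `face_encodings` expects the array returned by `face_recognition.load_image_file`, not a filename. A minimal sketch of that contract, in pure numpy so it runs without dlib (`ensure_image_array` is a hypothetical helper name):

```python
import numpy as np

def ensure_image_array(face_image):
    # dlib's detector binding accepts a numpy array, not a str path; a path
    # triggers "TypeError: __call__(): incompatible function arguments".
    if not isinstance(face_image, np.ndarray):
        raise TypeError("pass face_recognition.load_image_file(path), not the path string")
    return face_image

image = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for a loaded image
assert ensure_image_array(image) is image
try:
    ensure_image_array("F:/multi-media/video frames/260.png")
except TypeError:
    print("rejected the bare path")
```

In other words, `face_recognition.face_encodings(face_recognition.load_image_file(path))` is the intended call shape.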
What is the command to set the recognition threshold in face_recognition?
I recently tried out face_recognition. Everyone online says the accuracy is above 99.9%, but my own tests turned out very inaccurate. I suspect the default threshold is too loose and would like to adjust it, but I can't figure out how. Can anyone explain how to set this threshold on Ubuntu? Many thanks!
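face_recognition does expose this knob: `compare_faces` takes a `tolerance` keyword (default 0.6; lower is stricter), and `face_distance` returns the raw distances so you can pick your own cutoff. A pure-numpy sketch of what the library computes, assuming standard 128-d encodings:

```python
import numpy as np

def compare_faces_sketch(known_encodings, candidate, tolerance=0.6):
    # face_recognition.compare_faces is essentially a threshold on the
    # Euclidean distance between 128-d encodings; tolerance is the cutoff.
    distances = np.linalg.norm(np.asarray(known_encodings) - candidate, axis=1)
    return list(distances <= tolerance)

known = [np.zeros(128), np.full(128, 0.1)]
print(compare_faces_sketch(known, np.zeros(128), tolerance=0.5))  # → [True, False]
```

With the real library the equivalent is `face_recognition.compare_faces(known_encodings, candidate, tolerance=0.5)`.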
A data-type problem with a Python face-recognition library
```
import face_recognition
import cv2
import os

def file_name(dir):
    names = os.listdir(dir)
    i = 0
    for name in names:
        index = name.rfind('.')
        name = name[:index]
        names[i] = name
        i = i + 1
    return names

def file_list(dir):
    list_name = os.listdir(dir)
    return list_name

video_capture = cv2.VideoCapture(0)
face_dir = "E:\\face"
names1 = file_name(face_dir)
root = file_list(face_dir)
for name1 in names1:
    image = face_recognition.load_image_file("E:\\face\\" + name1 + ".jpg")
    name1 = face_recognition.face_encodings(image)[0]
    # name1 = name1.astype('float64')

# Create arrays of known face encodings and their names
known_face_encodings = names1
known_face_names = names1
print(known_face_encodings)

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            # face_encoding = face_encoding.astype('float64')
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"
            print(matches)

            # If a match was found in known_face_encodings, just use the first one.
            if True in matches:
                first_match_index = matches.index(True)
                name = known_face_names[first_match_index]
                print(first_match_index)

            face_names.append(name)

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
```

It always reports:

```
return np.linalg.norm(face_encodings - face_to_compare, axis=1)
TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
```

What is going on? Converting the data type did not help either.
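The `dtype('S32')` in the error is the clue: the loop above rebinds `name1` to each encoding but never stores it, so `names1` still holds the file-name strings, and `known_face_encodings = names1` ends up as a list of strings that `compare_faces` then tries to subtract. A numpy sketch reproducing the failure, and the shape a fixed list (collecting the `face_encodings(...)[0]` results into a separate list) would have:

```python
import numpy as np

# Bug: known_face_encodings ends up holding the file-name strings
known = np.array(["alice", "bob"])
try:
    known - known[0]
except TypeError:
    print("ufunc 'subtract' has no loop for string dtypes")  # the reported error

# Fix: accumulate real float encodings in their own list
encodings = [np.random.rand(128), np.random.rand(128)]
dists = np.linalg.norm(np.asarray(encodings) - encodings[0], axis=1)
print(dists.shape)  # → (2,)
```

So the fix is along the lines of `known_face_encodings = [face_recognition.face_encodings(img)[0] for img in images]`, keeping `known_face_names` as the string list.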
An algorithm question about face recognition with python + opencv
I built face recognition with python + opencv using the open-source face_recognition library. The program runs, but I don't really understand the compare_faces algorithm inside face_recognition. Does anyone know how it works? Reference: https://yq.aliyun.com/articles/460276
python3.6 + anaconda3: prompt shows "... was unexpected at this time" (此时不应有...)
![图片说明](https://img-ask.csdn.net/upload/201806/04/1528103058_350649.png) I tried to install face_recognition today and kept getting errors. Opening the prompt shows this strange problem, and I don't know whether my configuration is broken or something else is wrong. Does anyone know how to deal with this?
Using static files in a Django project
First, my directory layout: ![图片说明](https://img-ask.csdn.net/upload/201804/17/1523949857_707464.png) I don't want to serve the static files to HTML templates; only my faceIdentify.py file needs to load two static files:

```
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_rec_model = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")
```

In settings I configured:

```
STATIC_URL = '/static/'
STATICFILES_DIRS = os.path.join(BASE_DIR, 'static')
```

But as soon as the project runs it reports that the file cannot be found:

RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat

The tutorials online mostly cover using static files in HTML; I just want to load them in code, but I can't work out how. Any pointers would be greatly appreciated!
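One hedged note: `runserver` does not execute from your app folder, so a bare filename is resolved against whatever the process's current working directory happens to be. Building an absolute path from the module's own location sidesteps that entirely (the `static` subfolder name here is an assumption matching the settings above):

```python
import os

# Resolve the .dat files relative to this module, not the working directory
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
DAT_PATH = os.path.join(BASE_DIR, "static", "shape_predictor_68_face_landmarks.dat")

# predictor = dlib.shape_predictor(DAT_PATH)  # then hand dlib the absolute path
print(os.path.isabs(DAT_PATH))  # → True
```

Django's static-files machinery (`STATIC_URL`, `STATICFILES_DIRS`) only affects how files are served over HTTP; it plays no role in opening them from Python code.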
QT5.9 + OpenCV3.10_contrib: the face-related classes and face.hpp from GitHub cannot be found
I have been configuring the environment for many days. People online say that since OpenCV 3, face.hpp lives in the contrib extension modules, but after setting everything up I still hit the following problems (the same happened earlier with OpenCV 2.4.10):

If I add to the header file

```
#include<opencv2/contrib/contrib.hpp>
#include<opencv2/face.hpp>
```

I get

```
error: opencv2/face.hpp: No such file or directory
```

If I comment that include out (still keeping #include<opencv2/contrib/contrib.hpp>) I get: ![图片说明](https://img-ask.csdn.net/upload/201709/13/1505307425_541519.png) — i.e. the face namespace is still not found, and neither are the face-related classes. I have searched extensively without solving this; I would really like to finish a face-recognition feature and hope someone can point me in the right direction. The code is below (the UI is omitted).

1. The .pro file:

```
#-------------------------------------------------
#
# Project created by QtCreator 2015-11-11T08:11:51
#
#-------------------------------------------------

QT += core gui

greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

TARGET = face_recognition
TEMPLATE = app

SOURCES += main.cpp\
    mainwindow.cpp

HEADERS += mainwindow.h

FORMS += mainwindow.ui

INCLUDEPATH += C:\OpenCV_contrib\include\opencv \
    C:\OpenCV_contrib\include\opencv2 \
    C:\OpenCV_contrib\include

LIBS += -LC:/OpenCV_contrib/lib -lopencv_core2410.dll \
    -lopencv_highgui2410.dll -lopencv_imgproc2410.dll -lopencv_features2d2410.dll \
    -lopencv_calib3d2410.dll \
    -lopencv_objdetect2410.dll \
    -lopencv_contrib2410.dll
```

MainWindow.h:

```
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include<QMainWindow>
#include<QCloseEvent>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<opencv2/core/core.hpp>
#include<opencv2/objdetect/objdetect.hpp>
#include<opencv2/contrib/contrib.hpp>
//#include<opencv2/face.hpp>
#include<iostream>

using namespace std;
using namespace cv;
using namespace face;

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

private slots:
    void on_loadButton_clicked();
    void on_testButton_clicked();
    void on_regButton_clicked();
    void closeEvent(QCloseEvent *e);

private:
    Ui::MainWindow *ui;
    Ptr<LBPHFaceRecognizer> model;
    QString fileName, saveXml, saveName, name[10];
};

#endif // MAINWINDOW_H
```

The .cpp file:

```
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include<QDebug>
#include<QFileDialog>
#include<QPixmap>
#include<QFile>
#include<QTextStream>

// 5 images: frontal, up, down, left, right. Threshold 85.00
MainWindow::MainWindow(QWidget *parent):
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    saveName = "Names.txt";
    saveXml = "att_model.xml";
    model = createLBPHFaceRecognizer();
    if(QFile::exists(saveXml) && QFile::exists(saveName))
    {
        model->load(saveXml.toStdString());
        QFile file(saveName);
        if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
            return;
        QTextStream in(&file);
        QString lineText;
        while(!in.atEnd())
        {
            lineText = in.readLine();
            QString i = lineText.split(":").first();
            name[i.toInt()] = lineText.split(":").last();
        }
    }
//    for(int i=1;i<11;i++)
//        for(int j=1;j<10;j++)
//        {
//            QString file = "att_faces/s%1/%2.pgm";
//            images.push_back(imread(file.arg(i).arg(j).toStdString(), CV_LOAD_IMAGE_GRAYSCALE));
//            labels.push_back(i);
//        }
//    model = createLBPHFaceRecognizer();
//    //model->train(images, labels);
//    //model->save("att_model.xml");
//    model->load("att_model.xml");
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::closeEvent(QCloseEvent *e)
{
    model->save(saveXml.toStdString());
    QFile file(saveName);
    if(!file.open(QIODevice::WriteOnly|QIODevice::Text))
        return;
    QTextStream out(&file);
    for(int i=0;i<10;i++)
    {
        if(name[i].isEmpty()) continue;
        out<<i<<":"<<name[i]<<"\n";
    }
    e->accept();
}

void MainWindow::on_loadButton_clicked()
{
    fileName = QFileDialog::getOpenFileName(this,tr("选择图片"),tr("."));
    if(fileName.isEmpty()) return;
    ui->showLabel->setPixmap(QPixmap(fileName));
    ui->textBrowser->append(tr("打开图片%1").arg(fileName.split("/").last()));
}

void MainWindow::on_testButton_clicked()
{
    if(fileName.isEmpty()||ui->nameEdit->text().isEmpty()) return;
    vector<Mat> images;
    vector<int> labels;
    images.push_back(imread(fileName.toStdString(),CV_LOAD_IMAGE_GRAYSCALE));
    labels.push_back(ui->labelBox->value());
    name[ui->labelBox->value()] = ui->nameEdit->text();
    ui->textBrowser->append(tr("准备训练: 姓名:%1 标签:%2 ...").arg(ui->nameEdit->text()).arg(ui->labelBox->value()));
    model->update(images,labels);
    ui->textBrowser->append(tr("训练完成"));
}

void MainWindow::on_regButton_clicked()
{
    ui->nameLabel->clear();
    if(fileName.isEmpty()) return;
    Mat image = imread(fileName.toStdString(), CV_LOAD_IMAGE_GRAYSCALE);
    model->setThreshold(ui->doubleSpinBox->value());
    ui->textBrowser->append(tr("准备识别Threshold:%1 ...").arg(ui->doubleSpinBox->value()));
    int result = model->predict(image);
    ui->textBrowser->append(tr("识别完成"));
    if(result < 0)
        ui->nameLabel->setText(tr("无法识别此人"));
    else
        ui->nameLabel->setText(tr("%1").arg(name[result]));
}
```

main.cpp:

```
#include "mainwindow.h"
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    return a.exec();
}
```
How do I handle a Python "list index out of range"?
# This throws "list index out of range" — how do I fix it?

```
import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("C:\work\AI\AI-picture\Face recognition\\face_trainer\\trainer.yml")
cascadePath = "C:\work\AI\AI-picture\Face recognition\haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX

idnum = 0
names = ['A', 'Bob']

cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(int(minW), int(minH))
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
        idnum, confidence = recognizer.predict(gray[y:y+h, x:x+w])
        if confidence < 100:
            idnum = names[idnum]  # the problem is here
            confidence = "{0}%".format(round(100 - confidence))
        else:
            idnum = "unknown"
            confidence = "{0}%".format(round(100 - confidence))
        cv2.putText(img, str(idnum), (x+5, y-5), font, 1, (0, 0, 255), 1)
        cv2.putText(img, str(confidence), (x+5, y+h-5), font, 1, (0, 0, 0), 1)
    cv2.imshow('camera', img)
    k = cv2.waitKey(10)
    if k == 27:
        break

cam.release()
cv2.destroyAllWindows()
```
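`recognizer.predict` returns the raw integer label the model was trained with, so if the training labels started at 1 (or reach any id >= len(names)), `names[idnum]` walks off the end of the two-element list. A hedged sketch of a bounds-checked lookup (`label_for` is a hypothetical helper; the fuller fix is to make the training labels line up with the `names` indices):

```python
names = ['A', 'Bob']

def label_for(idnum, confidence, names, threshold=100):
    # Guard the list lookup: LBPH labels come from the training data and
    # need not start at 0 or stay below len(names).
    if confidence < threshold and 0 <= idnum < len(names):
        return names[idnum]
    return "unknown"

print(label_for(1, 42.0, names))  # → Bob
print(label_for(2, 42.0, names))  # → unknown (would have raised IndexError)
```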
Could someone help fix this generated code?
```
Blockly.Blocks['sound_recognition'] = {
  init: function() {
    this.appendDummyInput()
        .appendField("Speech Synthesis");
    this.appendDummyInput()
        .appendField("Choose Language:")
        .appendField(new Blockly.FieldDropdown([["US English Male","E"], ["Chinese Female","CF"], ["Chinese Male","CM"], ["Korean Female","K"]]), "lang_");
    this.appendStatementInput("recognition_");
    this.setPreviousStatement(true);
    this.setNextStatement(true);
    this.setColour(230);
    this.setTooltip("");
    this.setHelpUrl("");
  }
};

Blockly.JavaScript['sound_recognition'] = function(block) {
  dropdown_lang_ = block.getFieldValue('lang_');
  var statements_recognition_ = Blockly.JavaScript.statementToCode(block, 'recognition_');
  if(dropdown_lang_=="E"){
    responsiveVoice.setDefaultVoice("US English Male");
    var code=responsiveVoice.speak(statements_recognition_);
  }
  return code;
};
```
Where do I write my own business logic in Django?
First, my directory structure: ![图片说明](https://img-ask.csdn.net/upload/201804/16/1523849100_831822.png) I can already do the simplest flow: define a URL in urls.py, have it call a function in views.py, and the page renders. But now I want to write a feature that needs to load the file shape_predictor_68_face_landmarks.dat (file A below). I created a new .py file inside the faceRecognition folder, and loading the file fails. The code:

```
face_rec_model = dlib.face_recognition_model_v1("C:\\Users\\51530\\Desktop\\openFace\\shape_predictor_68_face_landmarks.dat")
```

(Note: file A sits inside the faceRecognition folder. Using 'shape_predictor_68_face_landmarks.dat' directly says the file cannot be opened, so in the code above I pointed at an absolute local path instead.)

The error:

RuntimeError: An error occurred while trying to read the first object from the file C:\Users\51530\Desktop\openFace\shape_predictor_68_face_landmarks.dat. ERROR: Error deserializing object of type unsigned long while deserializing object of type std::string

This is my first Django project — I just wanted to get a simple service running, and it has been one pitfall after another. Any pointers appreciated.
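A hedged aside: this particular deserialization error typically appears when a .dat file is handed to the wrong dlib constructor — `shape_predictor_68_face_landmarks.dat` is a landmark model for `dlib.shape_predictor`, while `dlib.face_recognition_model_v1` expects `dlib_face_recognition_resnet_model_v1.dat`. A tiny lookup sketch of the standard pairings (file names are the stock models from dlib.net; the dict itself is just illustrative):

```python
# Which stock dlib model file belongs to which constructor
DLIB_MODEL_FOR = {
    "shape_predictor_68_face_landmarks.dat": "dlib.shape_predictor",
    "shape_predictor_5_face_landmarks.dat": "dlib.shape_predictor",
    "dlib_face_recognition_resnet_model_v1.dat": "dlib.face_recognition_model_v1",
    "mmod_human_face_detector.dat": "dlib.cnn_face_detection_model_v1",
}

print(DLIB_MODEL_FOR["shape_predictor_68_face_landmarks.dat"])  # → dlib.shape_predictor
```

So the line in question should probably read `dlib.shape_predictor(...)`, with the resnet .dat file loaded separately for the recognition model.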
fr_utils.py error in Week 4 of Course 4 of Andrew Ng's deep learning specialization — has anyone hit this?
In Face Recognition/fr_utils.py, `_get_session()` at line 21 and `model` at line 140 cannot be resolved. What causes this? When loading the model I get the following error:

```
Using TensorFlow backend.
2018-08-26 21:30:53.046324: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Total Params: 3743280
Traceback (most recent call last):
  File "C:/Users/51530/PycharmProjects/DL/wuenda/Face/faceV3.py", line 60, in <module>
    load_weights_from_FaceNet(FRmodel)
  File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 133, in load_weights_from_FaceNet
    weights_dict = load_weights()
  File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 154, in load_weights
    conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
  File "E:\anaconda\lib\site-packages\numpy\lib\npyio.py", line 1867, in genfromtxt
    raise ValueError(errmsg)
ValueError: Some errors were detected !
    Line #7 (got 2 columns instead of 1)
    Line #12 (got 3 columns instead of 1)
    Line #15 (got 2 columns instead of 1)
```

The file in question:

```
#### PART OF THIS CODE IS USING CODE FROM VICTOR SY WANG: https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/utils.py ####

import tensorflow as tf
import numpy as np
import os
import cv2
from numpy import genfromtxt
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
import h5py
import matplotlib.pyplot as plt

_FLOATX = 'float32'

def variable(value, dtype=_FLOATX, name=None):
    v = tf.Variable(np.asarray(value, dtype=dtype), name=name)
    _get_session().run(v.initializer)
    return v

def shape(x):
    return x.get_shape()

def square(x):
    return tf.square(x)

def zeros(shape, dtype=_FLOATX, name=None):
    return variable(np.zeros(shape), dtype, name)

def concatenate(tensors, axis=-1):
    if axis < 0:
        axis = axis % len(tensors[0].get_shape())
    return tf.concat(axis, tensors)

def LRN2D(x):
    return tf.nn.lrn(x, alpha=1e-4, beta=0.75)

def conv2d_bn(x,
              layer=None,
              cv1_out=None,
              cv1_filter=(1, 1),
              cv1_strides=(1, 1),
              cv2_out=None,
              cv2_filter=(3, 3),
              cv2_strides=(1, 1),
              padding=None):
    num = '' if cv2_out == None else '1'
    tensor = Conv2D(cv1_out, cv1_filter, strides=cv1_strides, data_format='channels_first', name=layer+'_conv'+num)(x)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+num)(tensor)
    tensor = Activation('relu')(tensor)
    if padding == None:
        return tensor
    tensor = ZeroPadding2D(padding=padding, data_format='channels_first')(tensor)
    if cv2_out == None:
        return tensor
    tensor = Conv2D(cv2_out, cv2_filter, strides=cv2_strides, data_format='channels_first', name=layer+'_conv'+'2')(tensor)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+'2')(tensor)
    tensor = Activation('relu')(tensor)
    return tensor

WEIGHTS = [
    'conv1', 'bn1', 'conv2', 'bn2', 'conv3', 'bn3',
    'inception_3a_1x1_conv', 'inception_3a_1x1_bn',
    'inception_3a_pool_conv', 'inception_3a_pool_bn',
    'inception_3a_5x5_conv1', 'inception_3a_5x5_conv2', 'inception_3a_5x5_bn1', 'inception_3a_5x5_bn2',
    'inception_3a_3x3_conv1', 'inception_3a_3x3_conv2', 'inception_3a_3x3_bn1', 'inception_3a_3x3_bn2',
    'inception_3b_3x3_conv1', 'inception_3b_3x3_conv2', 'inception_3b_3x3_bn1', 'inception_3b_3x3_bn2',
    'inception_3b_5x5_conv1', 'inception_3b_5x5_conv2', 'inception_3b_5x5_bn1', 'inception_3b_5x5_bn2',
    'inception_3b_pool_conv', 'inception_3b_pool_bn', 'inception_3b_1x1_conv', 'inception_3b_1x1_bn',
    'inception_3c_3x3_conv1', 'inception_3c_3x3_conv2', 'inception_3c_3x3_bn1', 'inception_3c_3x3_bn2',
    'inception_3c_5x5_conv1', 'inception_3c_5x5_conv2', 'inception_3c_5x5_bn1', 'inception_3c_5x5_bn2',
    'inception_4a_3x3_conv1', 'inception_4a_3x3_conv2', 'inception_4a_3x3_bn1', 'inception_4a_3x3_bn2',
    'inception_4a_5x5_conv1', 'inception_4a_5x5_conv2', 'inception_4a_5x5_bn1', 'inception_4a_5x5_bn2',
    'inception_4a_pool_conv', 'inception_4a_pool_bn', 'inception_4a_1x1_conv', 'inception_4a_1x1_bn',
    'inception_4e_3x3_conv1', 'inception_4e_3x3_conv2', 'inception_4e_3x3_bn1', 'inception_4e_3x3_bn2',
    'inception_4e_5x5_conv1', 'inception_4e_5x5_conv2', 'inception_4e_5x5_bn1', 'inception_4e_5x5_bn2',
    'inception_5a_3x3_conv1', 'inception_5a_3x3_conv2', 'inception_5a_3x3_bn1', 'inception_5a_3x3_bn2',
    'inception_5a_pool_conv', 'inception_5a_pool_bn', 'inception_5a_1x1_conv', 'inception_5a_1x1_bn',
    'inception_5b_3x3_conv1', 'inception_5b_3x3_conv2', 'inception_5b_3x3_bn1', 'inception_5b_3x3_bn2',
    'inception_5b_pool_conv', 'inception_5b_pool_bn', 'inception_5b_1x1_conv', 'inception_5b_1x1_bn',
    'dense_layer'
]

conv_shape = {
    'conv1': [64, 3, 7, 7], 'conv2': [64, 64, 1, 1], 'conv3': [192, 64, 3, 3],
    'inception_3a_1x1_conv': [64, 192, 1, 1], 'inception_3a_pool_conv': [32, 192, 1, 1],
    'inception_3a_5x5_conv1': [16, 192, 1, 1], 'inception_3a_5x5_conv2': [32, 16, 5, 5],
    'inception_3a_3x3_conv1': [96, 192, 1, 1], 'inception_3a_3x3_conv2': [128, 96, 3, 3],
    'inception_3b_3x3_conv1': [96, 256, 1, 1], 'inception_3b_3x3_conv2': [128, 96, 3, 3],
    'inception_3b_5x5_conv1': [32, 256, 1, 1], 'inception_3b_5x5_conv2': [64, 32, 5, 5],
    'inception_3b_pool_conv': [64, 256, 1, 1], 'inception_3b_1x1_conv': [64, 256, 1, 1],
    'inception_3c_3x3_conv1': [128, 320, 1, 1], 'inception_3c_3x3_conv2': [256, 128, 3, 3],
    'inception_3c_5x5_conv1': [32, 320, 1, 1], 'inception_3c_5x5_conv2': [64, 32, 5, 5],
    'inception_4a_3x3_conv1': [96, 640, 1, 1], 'inception_4a_3x3_conv2': [192, 96, 3, 3],
    'inception_4a_5x5_conv1': [32, 640, 1, 1], 'inception_4a_5x5_conv2': [64, 32, 5, 5],
    'inception_4a_pool_conv': [128, 640, 1, 1], 'inception_4a_1x1_conv': [256, 640, 1, 1],
    'inception_4e_3x3_conv1': [160, 640, 1, 1], 'inception_4e_3x3_conv2': [256, 160, 3, 3],
    'inception_4e_5x5_conv1': [64, 640, 1, 1], 'inception_4e_5x5_conv2': [128, 64, 5, 5],
    'inception_5a_3x3_conv1': [96, 1024, 1, 1], 'inception_5a_3x3_conv2': [384, 96, 3, 3],
    'inception_5a_pool_conv': [96, 1024, 1, 1], 'inception_5a_1x1_conv': [256, 1024, 1, 1],
    'inception_5b_3x3_conv1': [96, 736, 1, 1], 'inception_5b_3x3_conv2': [384, 96, 3, 3],
    'inception_5b_pool_conv': [96, 736, 1, 1], 'inception_5b_1x1_conv': [256, 736, 1, 1],
}

def load_weights_from_FaceNet(FRmodel):
    # Load weights from csv files (which was exported from Openface torch model)
    weights = WEIGHTS
    weights_dict = load_weights()

    # Set layer weights of the model
    for name in weights:
        if FRmodel.get_layer(name) != None:
            FRmodel.get_layer(name).set_weights(weights_dict[name])
        elif model.get_layer(name) != None:
            model.get_layer(name).set_weights(weights_dict[name])

def load_weights():
    # Set weights path
    dirPath = './weights'
    fileNames = filter(lambda f: not f.startswith('.'), os.listdir(dirPath))
    paths = {}
    weights_dict = {}

    for n in fileNames:
        paths[n.replace('.csv', '')] = dirPath + '/' + n

    for name in WEIGHTS:
        if 'conv' in name:
            conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            conv_w = np.reshape(conv_w, conv_shape[name])
            conv_w = np.transpose(conv_w, (2, 3, 1, 0))
            conv_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            weights_dict[name] = [conv_w, conv_b]
        elif 'bn' in name:
            bn_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            bn_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            bn_m = genfromtxt(paths[name + '_m'], delimiter=',', dtype=None)
            bn_v = genfromtxt(paths[name + '_v'], delimiter=',', dtype=None)
            weights_dict[name] = [bn_w, bn_b, bn_m, bn_v]
        elif 'dense' in name:
            dense_w = genfromtxt(dirPath+'/dense_w.csv', delimiter=',', dtype=None)
            dense_w = np.reshape(dense_w, (128, 736))
            dense_w = np.transpose(dense_w, (1, 0))
            dense_b = genfromtxt(dirPath+'/dense_b.csv', delimiter=',', dtype=None)
            weights_dict[name] = [dense_w, dense_b]

    return weights_dict

def load_dataset():
    train_dataset = h5py.File('datasets/train_happy.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_happy.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

def img_to_encoding(image_path, model):
    img1 = cv2.imread(image_path, 1)
    img = img1[...,::-1]
    img = np.around(np.transpose(img, (2,0,1))/255.0, decimals=12)
    x_train = np.array([img])
    embedding = model.predict_on_batch(x_train)
    return embedding
```
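The ValueError comes from numpy, not from Keras: `np.genfromtxt` requires every row to have the same column count, so a weights CSV with ragged rows (often a corrupted or wrongly exported file) fails exactly like this. A small repro that assumes nothing about the course files themselves:

```python
import numpy as np
from io import StringIO

# A well-formed one-column CSV parses fine
good = StringIO("1.0\n2.0\n3.0\n")
print(np.genfromtxt(good, delimiter=','))  # → [1. 2. 3.]

# A ragged file (line 2 has two columns) reproduces the reported error:
# "ValueError: Some errors were detected ! Line #2 (got 2 columns instead of 1)"
bad = StringIO("1.0\n2.0,3.0\n")
try:
    np.genfromtxt(bad, delimiter=',')
except ValueError as e:
    print("ragged CSV rejected:", e)
```

So the lines flagged in the traceback point at malformed rows in the `./weights/*.csv` files rather than at the loading code.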
Google voice recognition sample code won't run on a Huawei Honor 8 (Android)
I wanted to learn the google voice recognition sample on Android Studio. After finding sample code online and running it on a physical Huawei Honor 8, the button shows "recognizer not present". The device reports Android version 7.0, API level 27. ![图片说明](https://img-ask.csdn.net/upload/201803/21/1521646543_62155.png) Below is MainActivity.java:

```
public class MainActivity extends Activity {

    private static final String TAG = "VoiceRecognition";
    private static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;
    private final int AUDIO_REQUEST_CODE = 3;
    private TextView textView;
    private Button button;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        initWidget();
        // Check to see if a recognition activity is present
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (activities.size() != 0) {
            button.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View v) {
                    if (ContextCompat.checkSelfPermission(MainActivity.this,
                            Manifest.permission.RECORD_AUDIO) == PackageManager.PERMISSION_GRANTED) {
                        Toast.makeText(MainActivity.this, "You have already granted this permission!",
                                Toast.LENGTH_SHORT).show();
                    } else {
                        requestStoragePermission();
                    }
                    startVoiceRecognitionActivity();
                }
            });
        } else {
            button.setEnabled(false);
            button.setText("Recognizer not present");
        }
    }

    private void requestStoragePermission() {
        if (ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
            new AlertDialog.Builder(this)
                    .setTitle("Permission needed")
                    .setMessage("This permission is needed because of this and that")
                    .setPositiveButton("ok", new DialogInterface.OnClickListener() {
                        @Override
                        public void onClick(DialogInterface dialog, int which) {
                            ActivityCompat.requestPermissions(MainActivity.this,
                                    new String[] {Manifest.permission.RECORD_AUDIO}, AUDIO_REQUEST_CODE);
                        }
                    })
                    .setNegativeButton("cancel", new DialogInterface.OnClickListener() {
                        @Override
                        public void onClick(DialogInterface dialog, int which) {
                            dialog.dismiss();
                        }
                    })
                    .create().show();
        } else {
            ActivityCompat.requestPermissions(this,
                    new String[] {Manifest.permission.RECORD_AUDIO}, AUDIO_REQUEST_CODE);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                           @NonNull int[] grantResults) {
        if (requestCode == AUDIO_REQUEST_CODE) {
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                Toast.makeText(this, "Permission GRANTED", Toast.LENGTH_SHORT).show();
            } else {
                Toast.makeText(this, "Permission DENIED", Toast.LENGTH_SHORT).show();
            }
        }
    }

    private void initWidget() {
        textView = (TextView) findViewById(R.id.tv);
        button = (Button) findViewById(R.id.btn);
    }

    /**
     * Fire an intent to start the speech recognition activity.
     */
    private void startVoiceRecognitionActivity() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // Display a hint to the user about what he should say.
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Please speak now"); // note: avoid hard-coding
        // Give a hint to the recognizer about what the user is going to say
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // Specify how many results you want to receive. The results will be sorted
        // where the first result is the one with higher confidence.
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1); // usually the first result is the most accurate
        startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
            // Fill the list view with the strings the recognizer thought it could have heard
            ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            StringBuilder stringBuilder = new StringBuilder();
            int Size = matches.size();
            for (int i = 0; i < Size; ++i) {
                stringBuilder.append(matches.get(i));
                stringBuilder.append("\n");
            }
            textView.setText(stringBuilder);
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```

Incidentally, this code runs fine on an emulator with API level 22 / Android 5.1, but on API level 27 / Android 8.0 it jumps straight to a "try again" screen (second screenshot below).

On Android 5.1: ![图片说明](https://img-ask.csdn.net/upload/201803/21/1521646678_861804.png)

On Android 8.0: ![图片说明](https://img-ask.csdn.net/upload/201803/21/1521646712_807160.png)

To sum up, there are really two problems: on the physical Huawei device with Android 7.0 the code cannot bring up the google voice recognition dialog at all, and on the Android 8.0 emulator the dialog appears but voice recognition does not work; only Android 5.1 runs it correctly. A previous answer suggested the runtime-permission changes after Android 6.0, but the code already requests the RECORD_AUDIO runtime permission to make sure the microphone is allowed, and the problem persists. Could someone please help find where the problem actually lies? Thanks.
Weak Number Recognition
Problem Description
Wiskey wants to develop an image retrieval system, but he can't deal with the number recognition. The number is neat: no aliasing, no redundant points, and drawn with '#'. You can use the software "ASCII Art Studio"; the data come from there. The font is Arial, the font size is between "五号" and "一号", and the style is bold. The data is weak — not all numbers of the various sizes need to be recognized. Come on baby; please improve your program as much as possible.

Input
The first line contains one integer: the number of cases that follow. Each case contains two integers N, M, followed by the N*M matrix. (10 <= N, M <= 26)

Output
Print the number you recognized.

Sample Input
2
10 7
### ##### ### ### ## ## ## ## ## ## ## ## ### ### ##### ###
14 9
##### ####### ### ### ### ### ### ### ### ### ##### ##### ### ### ### ### ### ### ### ### ####### #####

Sample Output
0
8
【cudaFree() failed. Reason: driver shutting down 】
Problem: when the detection networks are defined as global variables this error occurs, but defined as local variables there is no problem. Has anyone run into something similar? Please advise.

```
//net_type detect_net;
//anet_type feature_net;
```

Sample code:

```
#include <iostream>
#include <dlib/dnn.h>
#include <dlib/data_io.h>
#include <dlib/image_processing.h>
//#include <dlib/gui_widgets.h>
#include <stdio.h>

using namespace std;
using namespace dlib;

// ----------------------------------------------------------------------------------------
// hog face detecting
template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual = add_prev1<block<N,BN,1,tag1<SUBNET>>>;

template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual_down = add_prev2<avg_pool<2,2,2,2,skip1<tag2<block<N,BN,2,tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET>
using block = BN<con<N,3,3,1,1,relu<BN<con<N,3,3,stride,stride,SUBNET>>>>>;

template <int N, typename SUBNET> using ares      = relu<residual<block,N,affine,SUBNET>>;
template <int N, typename SUBNET> using ares_down = relu<residual_down<block,N,affine,SUBNET>>;

template <typename SUBNET> using alevel0 = ares_down<256,SUBNET>;
template <typename SUBNET> using alevel1 = ares<256,ares<256,ares_down<256,SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128,ares<128,ares_down<128,SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64,ares<64,ares<64,ares_down<64,SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32,ares<32,ares<32,SUBNET>>>;

using anet_type = loss_metric<fc_no_bias<128,avg_pool_everything<
    alevel0<
    alevel1<
    alevel2<
    alevel3<
    alevel4<
    max_pool<3,3,2,2,relu<affine<con<32,7,7,2,2,
    input_rgb_image_sized<150>
    >>>>>>>>>>>>;

// ------------------------------------------------------------------------------------------------------------
// dnn face detecting
template <long num_filters, typename SUBNET> using con5d = con<num_filters,5,5,2,2,SUBNET>;
template <long num_filters, typename SUBNET> using con5  = con<num_filters,5,5,1,1,SUBNET>;

template <typename SUBNET> using downsampler = relu<affine<con5d<32, relu<affine<con5d<32, relu<affine<con5d<16,SUBNET>>>>>>>>>;
template <typename SUBNET> using rcon5 = relu<affine<con5<45,SUBNET>>>;

using net_type = loss_mmod<con<1,9,9,1,1,rcon5<rcon5<rcon5<downsampler<input_rgb_image_pyramid<pyramid_down<6>>>>>>>>;

// ------------------------------------------------------------------------------------------------------------
// define the global variable
//shape_predictor sp;
//net_type detect_net;
//anet_type feature_net;

void sc_face_recog_init();

void sc_face_recog_init()
{
    /*
    shape_predictor sp;
    net_type detect_net;
    anet_type feature_net;
    */
    deserialize("mmod_human_face_detector.dat") >> detect_net;
    deserialize("shape_predictor_5_face_landmarks.dat") >> sp;
    deserialize("dlib_face_recognition_resnet_model_v1.dat") >> feature_net;
}

int main(int argc, char *argv[])
{
    printf("hello\n\n\n\n");

    // at runtime the errors are:
    //   cudaFree() failed. Reason: driver shutting down
    //   cudaFreeHost() failed. Reason: driver shutting down
    sc_face_recog_init();
    detect_net.clean();
    return 0;
}
```
Newbie question: why does my app crash as soon as it launches?
The install succeeds, and then the app crashes immediately after launch. Logcat output (install, dex2oat, and post-crash system noise omitted):

```
04-17 22:29:39.861 2546-2546/? E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.example.hasee.myapplication, PID: 2546
    java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.hasee.myapplication/com.example.hasee.myapplication.MainActivity}: java.lang.RuntimeException: Your content must have a ListView whose id attribute is 'android.R.id.list'
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2325)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2387)
        at android.app.ActivityThread.access$800(ActivityThread.java:151)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
        at android.os.Handler.dispatchMessage(Handler.java:102)
        at android.os.Looper.loop(Looper.java:135)
        at android.app.ActivityThread.main(ActivityThread.java:5254)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:905)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:700)
     Caused by: java.lang.RuntimeException: Your content must have a ListView whose id attribute is 'android.R.id.list'
        at android.app.ListActivity.onContentChanged(ListActivity.java:243)
        at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:382)
        at android.app.Activity.setContentView(Activity.java:2145)
        at com.example.hasee.myapplication.MainActivity.onCreate(MainActivity.java:34)
        at android.app.Activity.performCreate(Activity.java:5990)
        at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1106)
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2278)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2387)
        ...
04-17 22:29:39.861 1541-1708/system_process W/ActivityManager: Force finishing activity 1 com.example.hasee.myapplication/.MainActivity
```
C#: how to pass an Emgu.CV Mat parameter when calling a C++ DLL
I wrote a video-recognition DLL in C++ with this exported function:

```cpp
extern "C" __declspec(dllexport) float Recognition(Mat frame, int type);
```

The C# main program imports it as:

```csharp
[DllImport(@"Recognition.dll", EntryPoint = "Recognition", CallingConvention = CallingConvention.Cdecl)]
public extern static float Recognition(Mat frame, int iType);
```

and calls it like this:

```csharp
Bitmap bmp = cameraControl1.TakeSnapshot();
Image<Bgr, byte> image = new Image<Bgr, byte>(bmp);
// convert Image<Bgr, byte> to Mat
Mat mat = image.Mat;
Mat Graymat = mat.Clone();
float Value = Recognition(Graymat, 1);
```

When execution reaches `float Value = Recognition(Graymat, 1);`, the contents of the `Graymat` variable look normal. But stepping into the C++ project with F11, the width, height, and data of the incoming Mat variable `frame` are all wrong. I don't know why; changing the parameter to pass-by-reference doesn't help either. The debug info is in the screenshots. Can some expert help me out?

![图片说明](https://img-ask.csdn.net/upload/201908/20/1566271847_310078.png)![图片说明](https://img-ask.csdn.net/upload/201908/20/1566271855_169575.png)
Why does the statements_do code never execute?
```javascript
Blockly.Blocks['speech_recognition'] = {
  init: function() {
    this.appendValueInput("condition")
        .setCheck(null)
        .appendField("if")
        .appendField("you")
        .appendField("hear");
    this.appendStatementInput("do")
        .setCheck(null)
        .appendField("do");
    this.setInputsInline(false);
    this.setPreviousStatement(true, null);
    this.setNextStatement(true, null);
    this.setColour(230);
    this.setTooltip("");
    this.setHelpUrl("");
  }
};

Blockly.JavaScript['speech_recognition'] = function(block) {
  var value_condition = Blockly.JavaScript.valueToCode(block, 'condition', Blockly.JavaScript.ORDER_ATOMIC);
  var statements_do = Blockly.JavaScript.statementToCode(block, 'do');
  var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  var recognition = new SpeechRecognition();
  recognition.continuous = true;
  recognition.onend = function() {
    return;
    recognition.start();
  }
  recognition.onresult = function(event) {
    var current = event.resultIndex;
    var transcript = event.results[current][0].transcript;
    var code = "if(" + transcript + "==" + value_condition + "){\n" + statements_do + ";\n}";
  }
  recognition.start();
  return code;
};
```
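The block definition above is fine; the bug is in the generator. `code` is only assigned inside the asynchronous `onresult` callback, so by the time `return code;` runs nothing has been assigned and the generator returns `undefined`. Worse, the SpeechRecognition object is created at code-generation time rather than when the generated program actually runs. The usual fix is to build the entire output string synchronously and put the recognition setup inside the generated code. A minimal sketch of that idea follows; the function name `generateSpeechRecognition` and its plain-string parameters are stand-ins for the real `Blockly.JavaScript['speech_recognition']` generator so the logic can run standalone:

```javascript
// Sketch of a generator that builds its output synchronously.
// In real Blockly this function would receive `block` and call
// valueToCode / statementToCode; here the two inputs are plain
// strings so the string-building logic can be tested on its own.
function generateSpeechRecognition(valueCondition, statementsDo) {
  // Everything, including the SpeechRecognition setup, goes into
  // the generated string, so it executes when the generated program
  // runs in the browser, not at code-generation time.
  var code =
    "var SR = window.SpeechRecognition || window.webkitSpeechRecognition;\n" +
    "var recognition = new SR();\n" +
    "recognition.continuous = true;\n" +
    "recognition.onresult = function(event) {\n" +
    "  var current = event.resultIndex;\n" +
    "  var transcript = event.results[current][0].transcript.trim();\n" +
    "  if (transcript == " + valueCondition + ") {\n" +
    statementsDo +
    "  }\n" +
    "};\n" +
    "recognition.start();\n";
  return code;  // fully built before the generator returns
}
```

In the real generator the two parameters would come from `Blockly.JavaScript.valueToCode(block, 'condition', ...)` and `Blockly.JavaScript.statementToCode(block, 'do')`; the key point is only that `return` must see a completely built string, which can never happen while the assignment lives inside `onresult`.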
The "Maple is a good teacher" problem in C
Problem Description
Recently, Mr. Maple has found it more and more boring to check students' homework, especially homework with multiple-choice questions. So he came up with the idea of using a computer to help him check the answers. Mr. Maple needs a program that can recognize which letters the students filled in after the answer sheets are scanned into the computer. So far, the answer sheets have been transferred into square-like patterns (refer to the sample), in which 'X' represents a painted pixel and '.' represents blank. It's your turn to write a program for recognition. Go!

Some details about the patterns:
1) The size of the patterns is always 16 * 16.
2) Each pattern has one and only one character.
3) The character belongs to {A, B, C, D}.
4) The written character won't be too small.
5) The character may be distorted or rotated (a little).
6) Redundant pixels turn up in a few cases.
7) Necessary pixels are missing in a few cases.
8) It is guaranteed that all the test data can be easily judged by eye.

Input
The first line contains the number T of test cases (1 ≤ T ≤ 50), followed by T patterns. Note that there is a blank line between the patterns.

Output
For each pattern print a line consisting of the corresponding character.

Sample Input
3
................
........X.......
.......XX.......
......X..X......
.....XX..X......
.....X...X......
....XX...X......
....X....X......
...X.....XX.....
...XXXXXXXXXX...
..XX......XX....
..XX.......X....
..XX.......XX...
..XX........X...
.XXX.X.......XX.
................

..X.............
................
......XXXXXX....
............X...
.....XX......XX.
.....X.......XXX
....XX........X.
....X.........X.
...X.....XX..X..
...XXXXXXXXX....
...X......XX.X..
..X...........X.
..X...........X.
..X.........XX..
.XXXXXXXXXXX....
................

................
................
.........XXX....
......X.........
...XX...........
.XX.............
X...............
X...............
.XX.............
.XX.........X...
...XX.XXXXXXX...
.....XXXXXXX....
................
................
................

Sample Output
A
B
C