Using cv2.inRange in opencv-python to extract a specific color onto a white background
```
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
cv2.imshow("hsv", hsv)
mask = cv2.inRange(hsv, lower, upper)
```

Here `mask` captures the pixels whose color lies between `lower` and `upper`, but they are still shown inside the original image. How do I extract them onto a white background, or alternatively turn every other part of the original image white?

1 answer

```
CV_EXPORTS_W void inRange(InputArray src, InputArray lowerb,
                          InputArray upperb, OutputArray dst);
```

Look at how the OpenCV version you use defines its constants in the Python bindings and check whether `CV_THRESH_BINARY_INV` (in modern bindings, `cv2.THRESH_BINARY_INV`) is available; with it you can invert the selection and set everything outside the interval to white.
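For illustration, a minimal sketch of the inversion idea (my own code, not the answerer's exact approach; `image`, `lower` and `upper` are reused from the question). The mask's zero pixels are exactly the out-of-range ones, so painting them white gives the desired result:

```
import cv2
import numpy as np

hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
mask = cv2.inRange(hsv, np.array(lower, dtype="uint8"), np.array(upper, dtype="uint8"))

# keep the in-range pixels, turn everything else white
result = image.copy()
result[mask == 0] = (255, 255, 255)  # boolean indexing on the inverted mask

cv2.imshow("white background", result)
cv2.waitKey(0)
```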

Other related questions
How do I use the inRange function in OpenCV? Urgently seeking guidance, thanks

What do the parameters of the inRange function mean? Is there a detailed explanation anywhere?
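For reference, a hedged mini-demo of the semantics (my own): `cv2.inRange(src, lowerb, upperb)` sets an output pixel to 255 exactly when every channel of the source pixel lies within the corresponding bounds, and to 0 otherwise:

```
import cv2
import numpy as np

src = np.array([[[10, 50, 200], [10, 50, 20]]], dtype=np.uint8)
lowerb = np.array([0, 0, 100], dtype=np.uint8)
upperb = np.array([255, 255, 255], dtype=np.uint8)

mask = cv2.inRange(src, lowerb, upperb)
print(mask)  # [[255   0]] -- only the first pixel is inside the range in all channels
```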

Handwritten digit recognition with the ANN built into OpenCV-Python throws an error. Help needed

![screenshot](https://img-ask.csdn.net/upload/202001/31/1580479207_695592.png)![screenshot](https://img-ask.csdn.net/upload/202001/31/1580479217_497206.png)

Following the code in the book *OpenCV 3 Computer Vision with Python* on Python 3.6, I'm doing handwritten digit recognition. The recognition rate is very low, and at runtime it also reports an error:

OpenCV(3.4.1) Error: Assertion failed ((type == 5 || type == 6) && inputs.cols == layer_sizes[0]) in cv::ml::ANN_MLPImpl::predict, file C:\projects\opencv-python\opencv\modules\ml\src\ann_mlp.cpp, line 411

The code is below; any guidance appreciated:

```
import cv2
import numpy as np
import digits_ann as ANN

def inside(r1, r2):
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    if (x1 > x2) and (y1 > y2) and (x1 + w1 < x2 + w2) and (y1 + h1 < y2 + h2):
        return True
    else:
        return False

def wrap_digit(rect):
    x, y, w, h = rect
    padding = 5
    hcenter = x + w / 2
    vcenter = y + h / 2
    if (h > w):
        w = h
        x = hcenter - (w / 2)
    else:
        h = w
        y = vcenter - (h / 2)
    return (int(x - padding), int(y - padding), int(w + padding), int(h + padding))

'''
Note: for a first test it is recommended to use the full training set
and iterate multiple times until convergence, e.g.:
ann, test_data = ANN.train(ANN.create_ANN(100), 50000, 30)
'''
ann, test_data = ANN.train(ANN.create_ANN(10), 50000, 1)

# load and preprocess the image to recognize
path = "C:\\Users\\64601\\PycharmProjects\Ann\\images\\numbers.jpg"
img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
bw = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bw = cv2.GaussianBlur(bw, (7, 7), 0)
ret, thbw = cv2.threshold(bw, 127, 255, cv2.THRESH_BINARY_INV)
thbw = cv2.erode(thbw, np.ones((2, 2), np.uint8), iterations=2)
image, cntrs, hier = cv2.findContours(thbw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

rectangles = []
for c in cntrs:
    r = x, y, w, h = cv2.boundingRect(c)
    a = cv2.contourArea(c)
    b = (img.shape[0] - 3) * (img.shape[1] - 3)
    is_inside = False
    for q in rectangles:
        if inside(r, q):
            is_inside = True
            break
    if not is_inside:
        if not a == b:
            rectangles.append(r)

for r in rectangles:
    x, y, w, h = wrap_digit(r)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    roi = thbw[y:y + h, x:x + w]
    try:
        digit_class = ANN.predict(ann, roi)[0]
    except:
        print("except")
        continue
    cv2.putText(img, "%d" % digit_class, (x, y - 1), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))

cv2.imshow("thbw", thbw)
cv2.imshow("contours", img)
cv2.waitKey()
cv2.destroyAllWindows()

#######
import cv2
import pickle
import numpy as np
import gzip

"""OpenCV ANN Handwritten digit recognition example

Wraps OpenCV's own ANN by automating the loading of data and supplying default parameters,
such as 20 hidden layers, 10000 samples and 1 training epoch.

The load data code is taken from http://neuralnetworksanddeeplearning.com/chap1.html
by Michael Nielsen
"""

def vectorized_result(j):
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e

def load_data():
    with gzip.open('C:\\Users\\64601\\PycharmProjects\\Ann\\mnist.pkl.gz') as fp:
        # note: depending on the version, encoding='bytes' must be passed as the
        # second argument, otherwise a decoding error occurs
        training_data, valid_data, test_data = pickle.load(fp, encoding='bytes')
    fp.close()
    return (training_data, valid_data, test_data)

def wrap_data():
    # tr_d has 50000 samples, va_d has 10000, te_d has 10000
    tr_d, va_d, te_d = load_data()
    # training set
    training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]]
    training_results = [vectorized_result(y) for y in tr_d[1]]
    training_data = list(zip(training_inputs, training_results))
    # validation set
    validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]]
    validation_data = list(zip(validation_inputs, va_d[1]))
    # test set
    test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]
    test_data = list(zip(test_inputs, te_d[1]))
    return (training_data, validation_data, test_data)

def create_ANN(hidden=20):
    ann = cv2.ml.ANN_MLP_create()  # build the model
    ann.setTrainMethod(cv2.ml.ANN_MLP_RPROP | cv2.ml.ANN_MLP_UPDATE_WEIGHTS)  # backpropagation-style training
    ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)  # SIGMOID activation; alternatives: cv2.ml.ANN_MLP_IDENTITY, cv2.ml.ANN_MLP_GAUSSIAN
    ann.setLayerSizes(np.array([784, hidden, 10]))  # layer sizes: 784 inputs, 10 outputs
    ann.setTermCriteria((cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 0.1))  # termination criteria
    return ann

def train(ann, samples=10000, epochs=1):
    # tr: training set; val: validation set; test: test set
    tr, val, test = wrap_data()
    for x in range(epochs):
        counter = 0
        for img in tr:
            if (counter > samples):
                break
            if (counter % 1000 == 0):
                print("Epoch %d: Trained %d/%d" % (x, counter, samples))
            counter += 1
            data, digit = img
            ann.train(np.array([data.ravel()], dtype=np.float32), cv2.ml.ROW_SAMPLE,
                      np.array([digit.ravel()], dtype=np.float32))
        print("Epoch %d complete" % x)
    return ann, test

def predict(ann, sample):
    resized = sample.copy()
    rows, cols = resized.shape
    if rows != 28 and cols != 28 and rows * cols > 0:
        resized = cv2.resize(resized, (28, 28), interpolation=cv2.INTER_CUBIC)
    return ann.predict(np.array([resized.ravel()], dtype=np.float32))
```
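One hedged observation on the error (mine, not from the original thread): the assertion `inputs.cols == layer_sizes[0]` means `predict` received a row that is not 784 columns wide. In the `predict` function above, the resize is skipped whenever either dimension already equals 28, because the two conditions are joined with `and`; joining them with `or` guarantees a 28×28 input. A sketch of the corrected function:

```
import cv2
import numpy as np

def predict(ann, sample):
    resized = sample.copy()
    rows, cols = resized.shape
    # resize whenever the ROI is not already 28x28 -- note `or`, where the
    # original used `and` and therefore skipped the resize for most ROIs
    if (rows != 28 or cols != 28) and rows * cols > 0:
        resized = cv2.resize(resized, (28, 28), interpolation=cv2.INTER_CUBIC)
    return ann.predict(np.array([resized.ravel()], dtype=np.float32))
```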

Learning OpenCV, and argparse reports that required arguments are missing

I just started with OpenCV and tried to run the program below in PyCharm on Windows 10, but I hit an error. This may be a very beginner question; any advice is appreciated. The error is: deep-learning-object-detection.py: error: the following arguments are required: The full code is:

```
# USAGE
# python deep_learning_object_detection.py --image images/example_01.jpg \
#   --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel

# import the necessary packages
import numpy as np
import argparse
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-i", r"--C:\Users\52314\Desktop\deep\images\example_01.jpg", required=True,
                help="path to input image")
ap.add_argument("-p", r"--C:\Users\52314\Desktop\deepMobileNetSSD_deploy.prototxt.txt", required=True,
                help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", r"--C:\Users\52314\Desktop\deep\deep_learning_object_detection.py", required=True,
                help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
                help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(Args[prototxt], Args[model])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(Args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843, (300, 300), 127.5)

# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]
    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > Args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY), COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)
```

I read online that argparse has compatibility issues on Windows 10, so I tried another form, but that didn't work either. What is actually the problem, and how do I fix it? Many thanks!
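A hedged note (not part of the original thread): argparse treats the second string of `add_argument` as the long option name, so `ap.add_argument("-i", r"--C:\...jpg", required=True)` defines a required option literally named `--C:\...jpg` instead of supplying a default path. Since nothing is passed on the command line, argparse correctly reports the arguments as missing; Windows is not at fault. A minimal corrected sketch keeps the canonical option names and passes the file paths as values (paths are the asker's, reused for illustration):

```
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to input image")
ap.add_argument("-p", "--prototxt", required=True, help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True, help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
                help="minimum probability to filter weak detections")
args = vars(ap.parse_args())  # later lookups must be args["image"] etc., not Args[image]

# invoked from cmd as, e.g.:
# python deep_learning_object_detection.py -i C:\Users\52314\Desktop\deep\images\example_01.jpg ^
#     -p MobileNetSSD_deploy.prototxt.txt -m MobileNetSSD_deploy.caffemodel
```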

Python OpenCV foreground/background segmentation: how do I fix this error?

I found an online example that uses the K-means algorithm to separate an image's foreground from its background, which fits what I'm studying right now, but there's one error I can't fix:

```
# -*- coding: utf-8 -*-
import cv2
import numpy as np
import math

def panelAbstract(srcImage):
    # read pic shape
    imgHeight, imgWidth = srcImage.shape[:2]
    imgHeight = int(imgHeight); imgWidth = int(imgWidth)
    # k-means clustering to extract the foreground: reshape 2-D to 1-D
    imgVec = np.float32(srcImage.reshape((-1, 3)))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    flags = cv2.KMEANS_RANDOM_CENTERS
    label, clusCenter = cv2.kmeans(imgVec, 2, None, criteria, 10, flags)
    clusCenter = np.uint8(clusCenter)
    clusResult = clusCenter[label.flatten()]
    imgres = clusResult.reshape((srcImage.shape))
    imgres = cv2.cvtColor(imgres, cv2.COLOR_BGR2GRAY)
    bwThresh = int((np.max(imgres) + np.min(imgres)) / 2)
    _, thresh = cv2.threshold(imgres, bwThresh, 255, cv2.THRESH_BINARY_INV)
    threshRotate = cv2.merge([thresh, thresh, thresh])
    # determine the foreground's bounding rectangle
    # find contours
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    minvalx = np.max([imgHeight, imgWidth]); maxvalx = 0
    minvaly = np.max([imgHeight, imgWidth]); maxvaly = 0
    maxconArea = 0; maxAreaPos = -1
    for i in range(len(contours)):
        if maxconArea < cv2.contourArea(contours[i]):
            maxconArea = cv2.contourArea(contours[i])
            maxAreaPos = i
    objCont = contours[maxAreaPos]
    # rotate to rectify the foreground
    rect = cv2.minAreaRect(objCont)
    for j in range(len(objCont)):
        minvaly = np.min([minvaly, objCont[j][0][0]])
        maxvaly = np.max([maxvaly, objCont[j][0][0]])
        minvalx = np.min([minvalx, objCont[j][0][1]])
        maxvalx = np.max([maxvalx, objCont[j][0][1]])
    if rect[2] <= -45:
        rotAgl = 90 + rect[2]
    else:
        rotAgl = rect[2]
    if rotAgl == 0:
        panelImg = srcImage[minvalx:maxvalx, minvaly:maxvaly, :]
    else:
        rotCtr = rect[0]
        rotCtr = (int(rotCtr[0]), int(rotCtr[1]))
        rotMdl = cv2.getRotationMatrix2D(rotCtr, rotAgl, 1)
        imgHeight, imgWidth = srcImage.shape[:2]
        # rotate the image
        dstHeight = math.sqrt(imgWidth * imgWidth + imgHeight * imgHeight)
        dstRotimg = cv2.warpAffine(threshRotate, rotMdl, (int(dstHeight), int(dstHeight)))
        dstImage = cv2.warpAffine(srcImage, rotMdl, (int(dstHeight), int(dstHeight)))
        dstRotimg = cv2.cvtColor(dstRotimg, cv2.COLOR_BGR2GRAY)
        _, dstRotBW = cv2.threshold(dstRotimg, 127, 255, 0)
        contours = cv2.findContours(dstRotBW, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        maxcntArea = 0; maxAreaPos = -1
        for i in range(len(contours)):
            if maxcntArea < cv2.contourArea(contours[i]):
                maxcntArea = cv2.contourArea(contours[i])
                maxAreaPos = i
        x, y, w, h = cv2.boundingRect(contours[maxAreaPos])
        # extract the foreground panel
        panelImg = dstImage[int(y):int(y + h), int(x):int(x + w), :]
    return panelImg

if __name__ == "__main__":
    srcImage = cv2.imread('11.jpg')
    a = panelAbstract(srcImage)
    cv2.imshow('figa', a)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```

The original source is https://blog.csdn.net/Dawn__Z/article/details/82115160 and the error is as follows (I understand what it means, I just don't know how to fix it):

```
Traceback (most recent call last):
  File "D:\Workspaces\MyEclipse 2015\pythonTest\src\cc.py", line 70, in <module>
    a=panelAbstract(srcImage)
  File "D:\Workspaces\MyEclipse 2015\pythonTest\src\cc.py", line 7, in panelAbstract
    imgHeight,imgWidth = srcImage.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'
```
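A hedged diagnosis (mine, not from the thread): the traceback says `srcImage` is `None`, and `cv2.imread` returns `None` rather than raising when it cannot read the file, so `'11.jpg'` is not being found relative to the interpreter's working directory. Resolving the path against the script and guarding the read makes the failure explicit; a sketch:

```
import os
import cv2

# resolve '11.jpg' relative to the script, not the IDE's working directory
img_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), '11.jpg')
srcImage = cv2.imread(img_path)
if srcImage is None:
    # cv2.imread does not raise on failure; it just returns None
    raise IOError("cv2.imread could not read %s" % img_path)
```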

Modifying a specific matrix/array with Python OpenCV

How do I implement image watermarking with odd/even quantization? For example, I want to change the numbers in the image's block [1][2], replacing every 1 with 0 and every 0 with 1 (or -1). How do I get hold of this [1][2] block in the code below?

```
import numpy as np
from scipy import ndimage
import cv2
import random
import os
import math

# quantization table
Q = np.array(((16, 11, 10, 16, 24, 40, 51, 61),
              (12, 12, 14, 19, 26, 58, 60, 55),
              (14, 13, 16, 24, 40, 57, 69, 56),
              (14, 17, 22, 29, 51, 87, 80, 62),
              (18, 22, 37, 56, 68, 109, 103, 77),
              (24, 35, 55, 64, 81, 104, 113, 92),
              (49, 64, 78, 87, 103, 121, 120, 101),
              (72, 92, 95, 98, 112, 100, 103, 99)), dtype=np.float32)

y = cv2.imread(r'C:\Users\Owner\Desktop\so\sample.jpg', 0)

def psnr1(img1, img2):
    mse = np.mean((img1/1.0 - img2/1.0) ** 2)
    if mse < 1.0e-10:
        return 100
    return 10 * math.log10(255.0**2/mse)

def get_FileSize(filePath):
    fsize = os.path.getsize(filePath)
    fsize = fsize/float(1024 * 1024)
    return round(fsize, 2)

y1 = y.astype(np.float32)
# print(y1.dtype)
m, n = y1.shape
hdata = np.vsplit(y1, n/8)  # split into 8-row strips vertically
for i in range(0, n//8):
    blockdata = np.hsplit(hdata[i], m/8)  # split into 8-column blocks horizontally too
    for j in range(0, m//8):
        block = blockdata[j]
        # print("block[{},{}] data \n{}".format(i,j,blockdata[j]))
        Yb = cv2.dct(block.astype(np.float))
        F1 = Yb * Q
        F = F1 // Q
        # print("block[{},{}] data\n{}".format(i,j,F))
        iblock = cv2.idct(Yb)
        # print(iblock)

Y = cv2.dct(y1)
print(Y.shape)
cv2.imshow("Dct", Y)
y2 = cv2.idct(Y)
print(psnr1(y, y2))

size1 = get_FileSize(r"C:\Users\Owner\Desktop\so\sample.jpg")
print("File size: %.2f MB" % (size1))
size = get_FileSize(r"C:\Users\Owner\Desktop\so\sample1.jpg")
print("File size: %.2f MB" % (size))
print(size/size1 - 1)

cv2.imshow("iDCT", y2.astype(np.uint8))
cv2.waitKey(0)
cv2.imwrite(r'C:\Users\Owner\Desktop\so\sample1.jpg', y2)
```
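A hedged sketch (names reused from the question): inside the loop, `blockdata[j]` during the `i`-th iteration is block [i][j], so block [1][2] is reachable by plain indexing, and the 0/1 swap can be done with `np.where` so both replacements read the original values:

```
import cv2
import numpy as np

# block [1][2]: second 8-row strip, third 8-column block within it
block_1_2 = np.hsplit(np.vsplit(y1, n // 8)[1], m // 8)[2]

Yb = cv2.dct(block_1_2.astype(np.float32))
F = (Yb * Q) // Q  # quantized coefficients, computed as in the question

# every 1 becomes 0 and every 0 becomes 1 (use -1 instead of 1 if preferred);
# np.where evaluates both tests against the ORIGINAL F, so the swaps don't collide
F_swapped = np.where(F == 1, 0, np.where(F == 0, 1, F))
```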

The ViBe algorithm runs too slowly in Python

I'm working on my graduation project, part of which needs foreground extraction, and I plan to use the ViBe algorithm. With the same algorithm, initialization alone takes 20-50 s in Python, while my friend's MATLAB version finishes initialization in about one second. Python is slow, but surely not this much slower, and ViBe is supposed to be a fast algorithm. Could someone check whether it's my code?

```
def initial_background(I_gray, N):
    t1 = cv2.getTickCount()
    I_pad = np.pad(I_gray, 1, 'symmetric')  # symmetric padding
    height = I_pad.shape[0]
    width = I_pad.shape[1]
    samples = np.zeros((height, width, N))
    t2 = cv2.getTickCount()
    time = (t2 - t1) * 1000 / cv2.getTickFrequency()
    print(time)
    for i in range(1, height - 1):
        for j in range(1, width - 1):
            for n in range(N):
                x, y = 0, 0
                while (x == 0 and y == 0):
                    x = np.random.randint(-1, 1)
                    y = np.random.randint(-1, 1)
                ri = i + x
                rj = j + y
                samples[i, j, n] = I_pad[ri, rj]
    t3 = cv2.getTickCount()
    time2 = (t3 - t1) * 1000 / cv2.getTickFrequency()
    print(time2)
    samples = samples[1:height - 1, 1:width - 1]
    return samples
```

Above is the Python initialization code; please take a look at what can be optimized. I can't read the MATLAB code, so I don't know which part to post. This is my first question on CSDN, so I don't know whether anyone will answer...
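A hedged rewrite (mine, not from the thread): the per-pixel triple Python loop is what dominates the runtime; drawing all random neighbour offsets at once with NumPy removes it. Note also that `np.random.randint(-1, 1)` excludes the upper bound, so the original only ever samples offsets from {-1, 0}; `randint(-1, 2)` covers all eight neighbours. A vectorized sketch:

```
import numpy as np

def initial_background_vectorized(I_gray, N):
    I_pad = np.pad(I_gray, 1, 'symmetric')
    height, width = I_gray.shape
    # index grids for every interior pixel of the padded image
    ii, jj = np.meshgrid(np.arange(1, height + 1), np.arange(1, width + 1), indexing='ij')
    samples = np.zeros((height, width, N))
    for n in range(N):
        # random offsets in {-1, 0, 1}; remap (0, 0) to (1, 1) rather than
        # re-drawing, a simplification of the original rejection loop
        x = np.random.randint(-1, 2, size=(height, width))
        y = np.random.randint(-1, 2, size=(height, width))
        both_zero = (x == 0) & (y == 0)
        x[both_zero] = 1
        y[both_zero] = 1
        samples[:, :, n] = I_pad[ii + x, jj + y]
    return samples
```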

The Python program can import its external libraries, but running the OpenCV tool from cmd reports the libraries are missing

```
#-*- encoding: UTF-8 -*-
import cv2
import numpy
import argparse
import Image
import time
from naoqi import ALProxy
from naoqi import ALBroker

def nothing(x):
    pass

def choseHSV(filePath):
    img = cv2.imread(filePath, 1)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    cv2.namedWindow("image", cv2.WINDOW_NORMAL)
    cv2.createTrackbar('minH', 'image', 0, 179, nothing)
    cv2.createTrackbar('minS', 'image', 0, 255, nothing)
    cv2.createTrackbar('minV', 'image', 0, 255, nothing)
    cv2.createTrackbar('maxH', 'image', 0, 179, nothing)
    cv2.createTrackbar('maxS', 'image', 0, 255, nothing)
    cv2.createTrackbar('maxV', 'image', 0, 255, nothing)
    while(1):
        minH = cv2.getTrackbarPos('minH', 'image')
        minS = cv2.getTrackbarPos('minS', 'image')
        minV = cv2.getTrackbarPos('minV', 'image')
        maxH = cv2.getTrackbarPos('maxH', 'image')
        maxS = cv2.getTrackbarPos('maxS', 'image')
        maxV = cv2.getTrackbarPos('maxV', 'image')
        thresholdMin = numpy.array([minH, minS, minV])
        thresholdMax = numpy.array([maxH, maxS, maxV])
        mask = cv2.inRange(hsv, thresholdMin, thresholdMax)
        res = cv2.bitwise_and(img, img, mask=mask)
        cv2.imshow('image', res)
        k = cv2.waitKey(1)
        if k == ord('q'):
            break
    cv2.destroyAllWindows()

def readRobotTemperature():
    pass

def takePhotos(cameraID, robotIP, port):
    CAMERA = ALProxy("ALVideoDevice", robotIP, port)
    CAMERA.setActiveCamera(cameraID)
    # VGA: resolution 2 = 640*480, 0 = 160*120
    resolution = 2
    # colour space 11 = RGB
    colorSpace = 11
    videoClient = CAMERA.subscribe("python_client", resolution, colorSpace, 5)
    # set the exposure mode
    CAMERA.setCamerasParameter(videoClient, 22, 2)
    time.sleep(0.5)
    # grab the frame
    naoImage = CAMERA.getImageRemote(videoClient)
    CAMERA.unsubscribe(videoClient)
    imageWidth = naoImage[0]
    imageHeight = naoImage[1]
    array = naoImage[6]
    # convert to PIL image format
    img = Image.fromstring("RGB", (imageWidth, imageHeight), array)
    img.save("photo.png", "PNG")

def test():
    broker = ALBroker("broker", "0.0.0.0", 0, "127.0.0.1", 9559)
    MOTION = ALProxy("ALMotion")
    MOTION.moveTo(1, 0, 0)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--filePath", type=str, help="file path.")
    args = parser.parse_args()
    choseHSV(args.filePath)
    # takePhotos(0, "192.168.1.103", 9559)
    # test()
```

That is the whole code. It is a small tools utility and works fine in itself, but as soon as I run it from cmd with: tools.py --filePath pic.png (the last argument is an image) it reports that library XXX is missing. I debugged it and found that it complains about every single external library. I have no idea what the problem is; could someone take a look? Thanks in advance.
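A hedged guess (mine, not from the thread): typing `tools.py --filePath pic.png` makes Windows launch whatever interpreter is registered for the `.py` file type, which can be a different Python installation from the one your IDE uses, so its site-packages contains none of cv2/PIL/naoqi. Printing `sys.executable` from both launch methods makes the mismatch visible; a sketch:

```
# probe.py -- run it both ways:  `python probe.py`  and plain  `probe.py`
import sys
print(sys.executable)  # which python.exe is actually running
print(sys.prefix)
# if the two runs print different interpreters, invoke the tool explicitly:
#   C:\Python27\python.exe tools.py --filePath pic.png   (path is illustrative)
```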

Calling someone else's OpenCV image-matching program from Python 3 raises an error

Beginner here. I get an error when running someone else's Python image-matching algorithm. The code:

```
import cv2
from matplotlib import pyplot as plt
import numpy as np
import os
import math

def getMatchNum(matches, ratio):
    '''Return the number of feature matches and the match mask'''
    matchesMask = [[0, 0] for i in range(len(matches))]
    matchNum = 0
    for i, (m, n) in enumerate(matches):
        if m.distance < ratio * n.distance:  # keep matches whose distance ratio is below `ratio`
            matchesMask[i] = [1, 0]
            matchNum += 1
    return (matchNum, matchesMask)

path = 'D:/code/'
queryPath = path + 'yangben/'            # gallery path
samplePath = path + 'yuanjian/image1.jpg'  # sample image
comparisonImageList = []                 # comparison results

# create the SIFT feature extractor
sift = cv2.xfeatures2d.SIFT_create()
# create the FLANN matcher
FLANN_INDEX_KDTREE = 0
indexParams = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
searchParams = dict(checks=50)
flann = cv2.FlannBasedMatcher(indexParams, searchParams)

sampleImage = cv2.imread(samplePath, 0)
kp1, des1 = sift.detectAndCompute(sampleImage, None)  # extract features of the sample image
for parent, dirnames, filenames in os.walk(queryPath):
    for p in filenames:
        p = queryPath + p
        queryImage = cv2.imread(p, 0)
        kp2, des2 = sift.detectAndCompute(queryImage, None)  # extract features of the gallery image
        matches = flann.knnMatch(des1, des2, k=2)  # k=2 so each sample keypoint gets two candidates for the ratio test
        (matchNum, matchesMask) = getMatchNum(matches, 0.9)  # score the match via the ratio test
        matchRatio = matchNum * 100 / len(matches)
        drawParams = dict(matchColor=(0, 255, 0),
                          singlePointColor=(255, 0, 0),
                          matchesMask=matchesMask,
                          flags=0)
        comparisonImage = cv2.drawMatchesKnn(sampleImage, kp1, queryImage, kp2, matches, None, **drawParams)
        comparisonImageList.append((comparisonImage, matchRatio))  # record the result

comparisonImageList.sort(key=lambda x: x[1], reverse=True)  # sort by similarity
count = len(comparisonImageList)
column = 4
row = math.ceil(count / column)
# plot the results
figure, ax = plt.subplots(row, column)
for index, (image, ratio) in enumerate(comparisonImageList):
    ax[int(index / column)][index % column].set_title('Similiarity %.2f%%' % ratio)
    ax[int(index / column)][index % column].imshow(image)
plt.show()
```

The error:

```
Traceback (most recent call last):
  File "sift7.py", line 55, in <module>
    ax[int(index/column)][index%column].set_title('Similiarity %.2f%%' % ratio)
TypeError: 'AxesSubplot' object does not support indexing
```

Any pointers appreciated.
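A hedged fix (mine; `count`, `column` and `comparisonImageList` are the question's variables): when `row` is 1, `plt.subplots` squeezes the axes array down to a 1-D array, so the second `[...]` lands on an `AxesSubplot` and fails. Passing `squeeze=False` always yields a 2-D array of axes:

```
import math
import matplotlib.pyplot as plt

row = math.ceil(count / column)
# squeeze=False guarantees a 2-D array of Axes even when row == 1,
# so the ax[r][c] indexing below keeps working
figure, ax = plt.subplots(row, column, squeeze=False)
for index, (image, ratio) in enumerate(comparisonImageList):
    ax[index // column][index % column].set_title('Similarity %.2f%%' % ratio)
    ax[index // column][index % column].imshow(image)
plt.show()
```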

gocv: how to cut an image out of a blue background with OpenCV

I started playing with [gocv](https://github.com/hybridgroup/gocv). I'm trying to figure out a simple thing: how to cut out an object from an image which has a background of certain colour. In this case the object is pizza and background colour is blue.

![pizza on a blue background](https://i.stack.imgur.com/PnoWu.png)

I'm using the [InRange](https://godoc.org/gocv.io/x/gocv#InRange) function ([inRange](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga48af0ab51e36436c5d04340e036ce981) in OpenCV) to define the upper and lower threshold for blue colour to create a mask and then the [CopyToWithMask](https://godoc.org/gocv.io/x/gocv#Mat.CopyToWithMask) function ([copyTo](https://docs.opencv.org/master/d3/d63/classcv_1_1Mat.html#a626fe5f96d02525e2604d2ad46dd574f) in OpenCV) to apply the mask on the original image. I expect the result to be the blue background with the pizza cut out of it.

The code is very simple:

```
package main

import (
	"fmt"
	"os"

	"gocv.io/x/gocv"
)

func main() {
	imgPath := "pizza.png"
	// read in an image from filesystem
	img := gocv.IMRead(imgPath, gocv.IMReadColor)
	if img.Empty() {
		fmt.Printf("Could not read image %s ", imgPath)
		os.Exit(1)
	}
	// Create a copy of an image
	hsvImg := img.Clone()
	// Convert BGR to HSV image
	gocv.CvtColor(img, hsvImg, gocv.ColorBGRToHSV)
	lowerBound := gocv.NewMatFromScalar(gocv.NewScalar(110.0, 100.0, 100.0, 0.0), gocv.MatTypeCV8U)
	upperBound := gocv.NewMatFromScalar(gocv.NewScalar(130.0, 255.0, 255.0, 0.0), gocv.MatTypeCV8U)
	// Blue mask
	mask := gocv.NewMat()
	gocv.InRange(hsvImg, lowerBound, upperBound, mask)
	// maskedImg: output array that has the same size and type as the input arrays.
	maskedImg := gocv.NewMatWithSize(hsvImg.Rows(), hsvImg.Cols(), gocv.MatTypeCV8U)
	hsvImg.CopyToWithMask(maskedImg, mask)
	// save the masked image
	newImg := gocv.NewMat()
	// Convert back to BGR before saving
	gocv.CvtColor(maskedImg, newImg, gocv.ColorHSVToBGR)
	gocv.IMWrite("no_pizza.jpeg", newImg)
}
```

However the resulting image is basically almost completely black except for a slight hint of a pizza edge:

![almost completely black result](https://i.stack.imgur.com/KFWI2.jpg)

As for the chosen upper and lower bound of blue colours, I followed the guide mentioned in the official [documentation](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html?highlight=hsv_green):

```
blue = np.uint8([[[255, 0, 0]]])
hsv_blue = cv2.cvtColor(blue, cv2.COLOR_BGR2HSV)
print(hsv_blue)
[[[120 255 255]]]
```

> Now you take [H-10, 100,100] and [H+10, 255, 255] as lower bound and upper bound respectively.

I'm sure I'm missing something fundamental, but can't figure out what it is.
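A hedged note (mine, not from the original post): `NewMatFromScalar(..., gocv.MatTypeCV8U)` produces a 1×1 single-channel Mat, which does not describe a full 3-channel HSV bound the way a Scalar would, and in any case the mask is white where the background is blue, so copying with it keeps the background rather than the pizza. The intended mask logic, expressed in OpenCV-Python purely for illustration (porting it back to gocv means scalar-based bounds, e.g. an `InRangeWithScalar`-style call if your gocv version provides one, plus a bitwise NOT on the mask):

```
import cv2
import numpy as np

img = cv2.imread("pizza.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# full 3-channel bounds for blue, per the HSV guide quoted above
lower = np.array([110, 100, 100], dtype=np.uint8)
upper = np.array([130, 255, 255], dtype=np.uint8)

mask = cv2.inRange(hsv, lower, upper)  # white where the BACKGROUND is blue
pizza_mask = cv2.bitwise_not(mask)     # invert: white where the pizza is

pizza_only = cv2.bitwise_and(img, img, mask=pizza_mask)
cv2.imwrite("no_background.jpg", pizza_only)
```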

A setSVMDetector() problem in OpenCV detection

* My program tries to adapt OpenCV's pedestrian detection to a different detection task, but my images have a 1:1 aspect ratio and can't be decomposed into 3780 dimensions. Is there a way around this?
* Can I directly change setSVMDetector()'s default 3780 dimensions?
* Or should I resize the images? I still want the originals to keep their 1:1 ratio, and factoring 3780 only gives:

```
3780 = 2*2*3*5*7*9  # 9 is the 9 bins
```

Here is my detection code:

```
PosNum = 1997  # 1997 positive samples
NegNum = 1931
winSize = (20, 20)      # the window is exactly one image
blockSize = (10, 10)    # 4 blocks
blockStride = (5, 5)    # block stride 5,5
cellSize = (5, 5)       # 4 cells
nBin = 9                # 9 bins; the 3780 total dimensions are fixed by opencv

# create the hog object
hog = cv2.HOGDescriptor(winSize, blockSize, blockStride, cellSize, nBin)
# create the svm object
svm = cv2.ml.SVM_create()

# compute hog
# set the feature dimension
featureNum = int(((20-10)/5+1)*((20-10)/5+1)*4*9)  # 324, which doesn't match 3780
# initialize the feature array
featureArray = np.zeros(((PosNum+NegNum), featureNum), np.float32)  # 2-D array, accessed via i, j
# initialize the label array
labelArray = np.zeros(((PosNum+NegNum), 1), np.int32)  # svm supervised learning: samples + labels

for i in range(0, PosNum):
    fileName = r'pic_pos\\' + str(i+1) + '.jpg'
    img = cv2.imread(fileName)        # load the image
    hist = hog.compute(img, (5, 5))   # compute hog
    for j in range(0, featureNum):
        featureArray[i, j] = hist[j]
    labelArray[i, 0] = 1

for i in range(0, NegNum):
    fileName = r'pic_neg\\' + str(i + 1) + '.jpg'
    img = cv2.imread(fileName)        # load the image
    hist = hog.compute(img, (2, 2))   # compute hog
    for j in range(0, featureNum):
        featureArray[i+PosNum, j] = hist[j]
    labelArray[i+PosNum, 0] = -1      # negative samples get label = -1

svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setC(0.01)

# train
ret = svm.train(featureArray, cv2.ml.ROW_SAMPLE, labelArray)

# detection
# rho is the hog description the svm produced (not discussed in detail)
alpha = np.zeros((1), np.float32)  # 1-row, 1-column array
rho = svm.getDecisionFunction(0, alpha)  # fetch rho via the getter
print(rho)
print(alpha)
alphaArray = np.zeros((1, 1), np.float32)           # size 1*1
supportVArray = np.zeros((1, featureNum), np.float32)  # 1*featureNum
resultArray = np.zeros((1, featureNum), np.float32)    # 1*featureNum
alphaArray[0, 0] = alpha
resultArray = -1 * alphaArray * supportVArray

# detection
myDetect = np.zeros((325), np.float32)  # holds the detector
# myDetect = np.zeros((8101), np.uint8)  # I suspect the grayscale uint8 is to blame, since the source program used RGB colour images
for i in range(0, 324):
    myDetect[i] = resultArray[0, i]  # the first entries come from resultArray[0, i]
    # print(i, myDetect[i])
myDetect[324] = rho[0]  # the last entry comes from rho[0]

# build the hog (important)
myHog = cv2.HOGDescriptor()
myHog.setSVMDetector(myDetect)  # hand myDetect over to myHog

imageSrc = cv2.imread('s2.jpg', 0)  # read the image to detect (colour: 1, grayscale: 0)
objs = myHog.detectMultiScale(imageSrc, 0, (5, 5), (20, 20), 1.05, 2)  # run the detection
x = int(objs[0][0][0])
y = int(objs[0][0][1])
w = int(objs[0][0][2])
h = int(objs[0][0][3])
cv2.rectangle(imageSrc, (x, y), (x+w, y+h), (255, 0, 0), 2)  # draw the rectangle
cv2.imshow('dst', imageSrc)  # show the final result
cv2.waitKey(0)
```

Additional note: running the program raises error: (-215) checkDetectorSize() in function cv::HOGDescriptor::setSVMDetector at line 69, myHog.setSVMDetector(myDetect). My current guess is that the error is caused by the dimensionality, so I'm wondering how to change the dimensions.
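A hedged note (mine, not from the thread): `checkDetectorSize()` fails because `cv2.HOGDescriptor()` with no arguments constructs the default 64×128 people detector, which expects a 3780(+1)-dimensional coefficient vector. There is no need to force your features into 3780 dimensions; building the detection-side HOGDescriptor with the same geometry used for training makes the 324+1 vector acceptable. A sketch:

```
import cv2
import numpy as np

# the detection HOG must use the SAME geometry as training, instead of
# cv2.HOGDescriptor(), which defaults to the 64x128 people detector
winSize, blockSize, blockStride, cellSize, nBin = (20, 20), (10, 10), (5, 5), (5, 5), 9
myHog = cv2.HOGDescriptor(winSize, blockSize, blockStride, cellSize, nBin)

# descriptor size for this geometry: 3*3 block positions * 4 cells * 9 bins = 324,
# plus 1 for rho -> 325, which checkDetectorSize() will now accept
myDetect = np.zeros((325,), np.float32)  # filled from the trained SVM as in the question
myHog.setSVMDetector(myDetect)
```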

Why does OpenCV's Canny also accept the source image directly? Is there any difference from feeding it a grayscale image?

OpenCV's Canny function is said to require an 8-bit single-channel source, but why does passing the BGR source image directly also work? The result looks practically identical to feeding it the grayscale image... Please advise.
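A hedged explanation (my understanding, worth verifying against your OpenCV version): Canny only asserts 8-bit depth, not a single channel; on a multi-channel image it computes the gradient per channel and keeps the strongest response at each pixel, so the result usually looks close to the grayscale run but can differ wherever an edge is purely chromatic (a colour change with similar brightness). A quick experiment sketch ('input.jpg' is a placeholder):

```
import cv2
import numpy as np

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

edges_bgr = cv2.Canny(img, 100, 200)   # 3-channel input is accepted
edges_gray = cv2.Canny(gray, 100, 200)

# count the pixels where the two edge maps disagree; typically small,
# but non-zero around edges that exist only as a colour difference
print(np.count_nonzero(edges_bgr != edges_gray))
```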

Running AlexNet on cats vs. dogs hits IndexError: too many indices for array

I've been working through an example in the book OpenCV+TensorFlow, typed it in, and found it won't run. The problem seems to be in the second block, but I really don't understand it! Please advise on how to fix it.

![screenshot](https://img-ask.csdn.net/upload/201906/30/1561889895_192060.png)

The first block resizes the photos:

```
import cv2
import os

def resize(dir):
    for root, dirs, files in os.walk(dir):
        for file in files:
            filepath = os.path.join(root, file)
            try:
                image = cv2.imread(filepath)
                dim = (227, 227)
                resized = cv2.resize(image, dim)
                path = "C:\\Users\\Telon_Hu\\Desktop\\ANNs\\train1\\" + file
                cv2.imwrite(path, resized)
            except:
                print(filepath)
                # os.remove(filepath)
    cv2.waitKey(0)

resize('C:\\Users\\Telon_Hu\\Desktop\\ANNs\\train')
```

The second block reads the files and builds batches:

```
import os
import numpy as np
import tensorflow as tf
import cv2

def get_file(file_dir):
    images = []
    temp = []
    for root, sub_folders, files in os.walk(file_dir):
        '''
        os.walk(path) returns a 3-tuple (root, dirs, files):
        root: the directory currently being traversed
        dirs: a list of the sub-directory names in that directory (no recursion into them here)
        files: a list of the file names in that directory (sub-directories excluded)
        '''
        for name in files:
            images.append(os.path.join(root, name))
        for name in sub_folders:
            temp.append(os.path.join(root, name))
    labels = []
    for one_folder in temp:
        n_img = len(os.listdir(one_folder))  # os.listdir() lists the entries of a folder
        letter = one_folder.split('\\')[-1]  # split() slices the string on the separator
        if letter == 'cat':
            labels = np.append(labels, n_img * [0])
        else:
            labels = np.append(labels, n_img * [1])
    temp = np.array([images, labels])
    temp = temp.transpose()   # matrix transpose
    np.random.shuffle(temp)   # shuffle the rows
    image_list = list(temp[:, 0])
    label_list = list(temp[:, 1])
    label_list = [int(float(i)) for i in label_list]
    return image_list, label_list

def get_batch(image_list, label_list, img_width, img_height, batch_size, capacity):
    image = tf.cast(image_list, tf.string)
    label = tf.cast(label_list, tf.int32)
    input_queue = tf.train.slice_input_producer([image, label])
    label = input_queue[1]
    image_contents = tf.read_file(input_queue[0])       # read the image from its path
    image = tf.image.decode_jpeg(image_contents, channels=3)  # decode into a matrix
    image = tf.image.resize_image_with_crop_or_pad(image, img_width, img_height)
    '''
    tf.image.resize_images does not preserve the aspect ratio, which could hurt
    this kind of recognition; tf.image.resize_image_with_crop_or_pad keeps it unchanged
    '''
    image = tf.image.per_image_standardization(image)   # standardize the image
    image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size,
                                              num_threads=64, capacity=capacity)
    '''
    tf.train.batch([example, label], batch_size=batch_size, capacity=capacity):
    1. [example, label] is a sample and its label
    2. batch_size is the number of samples in a returned batch
    3. num_threads is the number of threads
    4. capacity is the queue capacity
    '''
    label_batch = tf.reshape(label_batch, [batch_size])
    return image_batch, label_batch

def one_hot(labels):
    '''one-hot encoding'''
    n_sample = len(labels)
    n_class = max(labels) + 1
    onehot_labels = np.zeros((n_sample, n_class))
    onehot_labels[np.arange(n_sample), labels] = 1
    return onehot_labels

get_file('C:\\Users\\Telon_Hu\\Desktop\\ANNs\\dogs_vs_cats\\')
```

The third block is the training script:

```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import time
import os
import sys
import creat_and_read_TFReacord as reader

x_train, y_train = reader.get_file('dogs_vs_cats')
image_batch, label_batch = reader.get_batch(x_train, y_train, 227, 227, 50, 2048)

# Batch Normalization
def batch_norm(inputs, is_train, is_conv_out=True, decay=0.999):
    scale = tf.Variable(tf.ones([inputs.get_shape()[-1]]))
    beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]]))
    pop_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable=False)
    pop_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable=False)
    if is_train:
        if is_conv_out:
            batch_mean, batch_var = tf.nn.moments(inputs, [0, 1, 2])
        else:
            batch_mean, batch_var = tf.nn.moments(inputs, [0])
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
        with tf.control_dependencies([train_mean, train_var]):
            return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, scale, 0.001)
    else:
        return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)

with tf.device('/gpu:0'):
    # model parameters
    learning_rate = 1e-4
    training_iters = 200
    batch_size = 50
    display_step = 5
    n_classes = 2
    n_fc1 = 4096
    n_fc2 = 2048

    # build the model
    x = tf.placeholder(tf.float32, [None, 227, 227, 3])
    y = tf.placeholder(tf.float32, [None, n_classes])

    W_conv = {
        'conv1': tf.Variable(tf.truncated_normal([11, 11, 3, 96], stddev=0.0001)),
        'conv2': tf.Variable(tf.truncated_normal([5, 5, 96, 256], stddev=0.01)),
        'conv3': tf.Variable(tf.truncated_normal([3, 3, 256, 384], stddev=0.01)),
        'conv4': tf.Variable(tf.truncated_normal([3, 3, 384, 384], stddev=0.01)),
        'conv5': tf.Variable(tf.truncated_normal([3, 3, 384, 256], stddev=0.01)),
        'fc1': tf.Variable(tf.truncated_normal([6 * 6 * 256, n_fc1], stddev=0.1)),
        'fc2': tf.Variable(tf.truncated_normal([n_fc1, n_fc2], stddev=0.1)),
        'fc3': tf.Variable(tf.truncated_normal([n_fc2, n_classes], stddev=0.1))
    }
    b_conv = {
        'conv1': tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[96])),
        'conv2': tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[256])),
        'conv3': tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[384])),
        'conv4': tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[384])),
        'conv5': tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[256])),
        'fc1': tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[n_fc1])),
        'fc2': tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[n_fc2])),
        'fc3': tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[n_classes]))
    }

    x_image = tf.reshape(x, [-1, 227, 227, 3])

    # convolution layer 1
    conv1 = tf.nn.conv2d(x_image, W_conv['conv1'], strides=[1, 4, 4, 1], padding='VALID')
    conv1 = tf.nn.bias_add(conv1, b_conv['conv1'])
    conv1 = batch_norm(conv1, True)
    conv1 = tf.nn.relu(conv1)
    # pooling layer 1
    pool1 = tf.nn.avg_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')
    norm1 = tf.nn.lrn(pool1, 5, bias=1.0, alpha=0.001 / 9.0, beta=0.75)

    # convolution layer 2
    conv2 = tf.nn.conv2d(pool1, W_conv['conv2'], strides=[1, 1, 1, 1], padding='SAME')
    conv2 = tf.nn.bias_add(conv2, b_conv['conv2'])
    conv2 = batch_norm(conv2, True)
    conv2 = tf.nn.relu(conv2)
    # pooling layer 2
    pool2 = tf.nn.avg_pool(conv2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')

    # convolution layer 3
    conv3 = tf.nn.conv2d(pool2, W_conv['conv3'], strides=[1, 1, 1, 1], padding='SAME')
    conv3 = tf.nn.bias_add(conv3, b_conv['conv3'])
    conv3 = batch_norm(conv3, True)
    conv3 = tf.nn.relu(conv3)

    # convolution layer 4
    conv4 = tf.nn.conv2d(conv3, W_conv['conv4'], strides=[1, 1, 1, 1], padding='SAME')
    conv4 = tf.nn.bias_add(conv4, b_conv['conv4'])
    conv4 = batch_norm(conv4, True)
    conv4 = tf.nn.relu(conv4)

    # convolution layer 5
    conv5 = tf.nn.conv2d(conv4, W_conv['conv5'], strides=[1, 1, 1, 1], padding='SAME')
    conv5 = tf.nn.bias_add(conv5, b_conv['conv5'])
    conv5 = batch_norm(conv5, True)
    conv5 = tf.nn.relu(conv5)
    # pooling layer 5
    pool5 = tf.nn.avg_pool(conv5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')

    reshape = tf.reshape(pool5, [-1, 6 * 6 * 256])
    # fully connected layer 1
    fc1 = tf.add(tf.matmul(reshape, W_conv['fc1']), b_conv['fc1'])
    fc1 = batch_norm(fc1, True, False)
    fc1 = tf.nn.relu(fc1)
    # fully connected layer 2
    fc2 = tf.add(tf.matmul(fc1, W_conv['fc2']), b_conv['fc2'])
    fc2 = batch_norm(fc2, True, False)
    fc2 = tf.nn.relu(fc2)
    fc3 = tf.add(tf.matmul(fc2, W_conv['fc3']), b_conv['fc3'])

    # define the loss
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=fc3))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)
    # evaluate the model
    correct_pred = tf.equal(tf.argmax(fc3, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    init = tf.global_variables_initializer()

def onehot(labels):
    '''one-hot encoding'''
    n_sample = len(labels)
    n_class = max(labels) + 1
    onehot_labels = np.zeros((n_sample, n_class))
    onehot_labels[np.arange(n_sample), labels] = 1
    return onehot_labels

save_model = ".//model//AlexNetModel.ckpt"

def train(opech):
    with tf.Session() as sess:
        sess.run(init)
        train_writer = tf.summary.FileWriter(".//log", sess.graph)  # where the logs go
        saver = tf.train.Saver()
        c = []
        start_time = time.time()
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        step = 0
        for i in range(opech):
            step = i
            image, label = sess.run([image_batch, label_batch])
            labels = onehot(label)
            acc = []
            sess.run(optimizer, feed_dict={x: image, y: labels})
            loss_record = sess.run(loss, feed_dict={x: image, y: labels})
            acc = sess.run(accuracy, feed_dict={x: image, y: labels})
            print("now the loss is %f " % loss_record)
            print("now the accuracy is %f " % acc)
            c.append(loss_record)
            end_time = time.time()
            print('time: ', (end_time - start_time))
            start_time = end_time
            print("---------------%d onpech is finished-------------------" % i)
        print("Optimization Finished!")
        # checkpoint_path = os.path.join(".//model", 'model.ckpt')  # where the model goes
        saver.save(sess, save_model)
        print("Model Save Finished!")
        coord.request_stop()
        coord.join(threads)
        plt.plot(c)
        plt.xlabel('Iter')
        plt.ylabel('loss')
        plt.title('lr=%f, ti=%d, bs=%d' % (learning_rate, training_iters, batch_size))
        plt.tight_layout()
        plt.savefig('cat_and_dog_AlexNet.jpg', dpi=200)

train(training_iters)
```
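A hedged diagnosis (mine, speculative): if `file_dir` does not point at the expected `cat`/`dog` sub-folder layout, `images` and `labels` in `get_file` end up with different lengths (or one of them empty), `np.array([images, labels])` then silently becomes a 1-D object array instead of a 2×N matrix, and `temp[:, 0]` raises exactly "IndexError: too many indices for array". Note also that the training script calls `reader.get_file('dogs_vs_cats')` with a relative path while the reader was tested with an absolute one. A quick check sketch:

```
import numpy as np

image_list, label_list = get_file('C:\\Users\\Telon_Hu\\Desktop\\ANNs\\dogs_vs_cats\\')

# mismatched lengths are what turn np.array([images, labels]) into a
# 1-D object array, which later breaks temp[:, 0]
print(len(image_list), len(label_list))
assert len(image_list) == len(label_list) and len(image_list) > 0, \
    "check file_dir: it must contain cat/ and dog/ sub-folders with images"
```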

A yolo3 darknet.py problem

I compiled the GPU build of darknetAB (https://github.com/AlexeyAB/darknet), which produced the darknet.py file, and I also built yolo_cpp_dll.sln to get the DLL. But when I run darknet.py, no image is displayed and it exits abnormally. ![screenshot](https://img-ask.csdn.net/upload/201911/02/1572688446_628910.png) I searched for this problem; some people said to switch to Python 3.5, which I tried, but it still doesn't show the image. How do I actually solve this? Urgent!!! Thanks!!!

```
#!python3
"""
Python 3 wrapper for identifying objects in images

Requires DLL compilation

Both the GPU and no-GPU version should be compiled; the no-GPU version should be renamed "yolo_cpp_dll_nogpu.dll".

On a GPU system, you can force CPU evaluation by any of:

- Set global variable DARKNET_FORCE_CPU to True
- Set environment variable CUDA_VISIBLE_DEVICES to -1
- Set environment variable "FORCE_CPU" to "true"

To use, either run performDetect() after import, or modify the end of this file.

See the docstring of performDetect() for parameters.

Directly viewing or returning bounding-boxed images requires scikit-image to be installed (`pip install scikit-image`)

Original *nix 2.7: https://github.com/pjreddie/darknet/blob/0f110834f4e18b30d5f101bf8f1724c34b7b83db/python/darknet.py
Windows Python 2.7 version: https://github.com/AlexeyAB/darknet/blob/fc496d52bf22a0bb257300d3c79be9cd80e722cb/build/darknet/x64/darknet.py

@author: Philip Kahn
@date: 20180503
"""
#pylint: disable=R, W0401, W0614, W0703
from ctypes import *
import math
import random
import os

def sample(probs):
    s = sum(probs)
    probs = [a/s for a in probs]
    r = random.uniform(0, 1)
    for i in range(len(probs)):
        r = r - probs[i]
        if r <= 0:
            return i
    return len(probs)-1

def c_array(ctype, values):
    arr = (ctype*len(values))()
    arr[:] = values
    return arr

class BOX(Structure):
    _fields_ = [("x", c_float),
                ("y", c_float),
                ("w", c_float),
                ("h", c_float)]

class DETECTION(Structure):
    _fields_ = [("bbox", BOX),
                ("classes", c_int),
                ("prob", POINTER(c_float)),
                ("mask", POINTER(c_float)),
                ("objectness", c_float),
                ("sort_class", c_int)]

class IMAGE(Structure):
    _fields_ = [("w", c_int),
                ("h", c_int),
                ("c", c_int),
                ("data", POINTER(c_float))]

class METADATA(Structure):
    _fields_ = [("classes", c_int),
                ("names", POINTER(c_char_p))]

#lib = CDLL("/home/pjreddie/documents/darknet/libdarknet.so", RTLD_GLOBAL)
#lib = CDLL("libdarknet.so", RTLD_GLOBAL)
hasGPU = True
if os.name == "nt":
    cwd = os.path.dirname(__file__)
    os.environ['PATH'] = cwd + ';' + os.environ['PATH']
    winGPUdll = os.path.join(cwd, "yolo_cpp_dll.dll")
    winNoGPUdll = os.path.join(cwd, "yolo_cpp_dll_nogpu.dll")
    envKeys = list()
    for k, v in os.environ.items():
        envKeys.append(k)
    try:
        try:
            tmp = os.environ["FORCE_CPU"].lower()
            if tmp in ["1", "true", "yes", "on"]:
                raise ValueError("ForceCPU")
            else:
                print("Flag value '"+tmp+"' not forcing CPU mode")
        except KeyError:
            # We never set the flag
            if 'CUDA_VISIBLE_DEVICES' in envKeys:
                if int(os.environ['CUDA_VISIBLE_DEVICES']) < 0:
                    raise ValueError("ForceCPU")
            try:
                global DARKNET_FORCE_CPU
                if DARKNET_FORCE_CPU:
                    raise ValueError("ForceCPU")
            except NameError:
                pass
            # print(os.environ.keys())
            # print("FORCE_CPU flag undefined, proceeding with GPU")
        if not os.path.exists(winGPUdll):
            raise ValueError("NoDLL")
        lib = CDLL(winGPUdll, RTLD_GLOBAL)
    except (KeyError, ValueError):
        hasGPU = False
        if os.path.exists(winNoGPUdll):
            lib = CDLL(winNoGPUdll, RTLD_GLOBAL)
            print("Notice: CPU-only mode")
        else:
            # Try the other way, in case no_gpu was
            # compile but not renamed
            lib = CDLL(winGPUdll, RTLD_GLOBAL)
            print("Environment variables indicated a CPU run, but we didn't find `"+winNoGPUdll+"`. Trying a GPU run anyway.")
else:
    lib = CDLL("./libdarknet.so", RTLD_GLOBAL)

lib.network_width.argtypes = [c_void_p]
lib.network_width.restype = c_int
lib.network_height.argtypes = [c_void_p]
lib.network_height.restype = c_int

copy_image_from_bytes = lib.copy_image_from_bytes
copy_image_from_bytes.argtypes = [IMAGE, c_char_p]

def network_width(net):
    return lib.network_width(net)

def network_height(net):
    return lib.network_height(net)

predict = lib.network_predict_ptr
predict.argtypes = [c_void_p, POINTER(c_float)]
predict.restype = POINTER(c_float)

if hasGPU:
    set_gpu = lib.cuda_set_device
    set_gpu.argtypes = [c_int]

make_image = lib.make_image
make_image.argtypes = [c_int, c_int, c_int]
make_image.restype = IMAGE

get_network_boxes = lib.get_network_boxes
get_network_boxes.argtypes = [c_void_p, c_int, c_int, c_float, c_float, POINTER(c_int), c_int, POINTER(c_int), c_int]
get_network_boxes.restype = POINTER(DETECTION)

make_network_boxes = lib.make_network_boxes
make_network_boxes.argtypes = [c_void_p]
make_network_boxes.restype = POINTER(DETECTION)

free_detections = lib.free_detections
free_detections.argtypes = [POINTER(DETECTION), c_int]

free_ptrs = lib.free_ptrs
free_ptrs.argtypes = [POINTER(c_void_p), c_int]

network_predict = lib.network_predict_ptr
network_predict.argtypes = [c_void_p, POINTER(c_float)]

reset_rnn = lib.reset_rnn
reset_rnn.argtypes = [c_void_p]

load_net = lib.load_network
load_net.argtypes = [c_char_p, c_char_p, c_int]
load_net.restype = c_void_p

load_net_custom = lib.load_network_custom
load_net_custom.argtypes = [c_char_p, c_char_p, c_int, c_int]
load_net_custom.restype = c_void_p

do_nms_obj = lib.do_nms_obj
do_nms_obj.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]

do_nms_sort = lib.do_nms_sort
do_nms_sort.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]

free_image = lib.free_image
free_image.argtypes = [IMAGE]

letterbox_image = lib.letterbox_image
letterbox_image.argtypes = [IMAGE, c_int, c_int]
letterbox_image.restype = IMAGE

load_meta = lib.get_metadata
lib.get_metadata.argtypes = [c_char_p]
lib.get_metadata.restype = METADATA

load_image = lib.load_image_color
load_image.argtypes = [c_char_p, c_int, c_int]
load_image.restype = IMAGE

rgbgr_image = lib.rgbgr_image
rgbgr_image.argtypes = [IMAGE]

predict_image = lib.network_predict_image
predict_image.argtypes = [c_void_p, IMAGE]
predict_image.restype = POINTER(c_float)

predict_image_letterbox = lib.network_predict_image_letterbox
predict_image_letterbox.argtypes = [c_void_p, IMAGE]
predict_image_letterbox.restype = POINTER(c_float)

def array_to_image(arr):
    import numpy as np
    # need to return old values to avoid python freeing memory
    arr = arr.transpose(2, 0, 1)
    c = arr.shape[0]
    h = arr.shape[1]
    w = arr.shape[2]
    arr = np.ascontiguousarray(arr.flat, dtype=np.float32) / 255.0
    data = arr.ctypes.data_as(POINTER(c_float))
    im = IMAGE(w, h, c, data)
    return im, arr

def classify(net, meta, im):
    out = predict_image(net, im)
    res = []
    for i in range(meta.classes):
        if altNames is None:
            nameTag = meta.names[i]
        else:
            nameTag = altNames[i]
        res.append((nameTag, out[i]))
    res = sorted(res, key=lambda x: -x[1])
    return res

def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45, debug= False):
    """
    Performs the meat of the detection
    """
    #pylint: disable= C0321
    im = load_image(image, 0, 0)
    if debug: print("Loaded image")
    ret = detect_image(net, meta, im, thresh, hier_thresh, nms, debug)
    free_image(im)
    if debug: print("freed image")
    return ret

def detect_image(net, meta, im, thresh=.5, hier_thresh=.5, nms=.45, debug= False):
    #import cv2
    #custom_image_bgr = cv2.imread(image) # use: detect(,,imagePath,)
    #custom_image = cv2.cvtColor(custom_image_bgr, cv2.COLOR_BGR2RGB)
    #custom_image = cv2.resize(custom_image,(lib.network_width(net), lib.network_height(net)), interpolation = cv2.INTER_LINEAR)
    #import scipy.misc
    #custom_image = scipy.misc.imread(image)
    #im, arr = array_to_image(custom_image)  # you should comment line below: free_image(im)
    num = c_int(0)
    if debug: print("Assigned num")
    pnum = pointer(num)
    if debug: print("Assigned pnum")
    predict_image(net, im)
    letter_box = 0
    #predict_image_letterbox(net, im)
    #letter_box = 1
    if debug: print("did prediction")
    #dets = get_network_boxes(net, custom_image_bgr.shape[1], custom_image_bgr.shape[0], thresh, hier_thresh, None, 0, pnum, letter_box) # OpenCV
    dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, None, 0, pnum, letter_box)
    if debug: print("Got dets")
    num = pnum[0]
    if debug: print("got zeroth index of pnum")
    if nms:
        do_nms_sort(dets, num, meta.classes, nms)
    if debug: print("did sort")
    res = []
    if debug: print("about to range")
    for j in range(num):
        if debug: print("Ranging on "+str(j)+" of "+str(num))
        if debug: print("Classes: "+str(meta), meta.classes, meta.names)
        for i in range(meta.classes):
            if debug: print("Class-ranging on "+str(i)+" of "+str(meta.classes)+"= "+str(dets[j].prob[i]))
            if dets[j].prob[i] > 0:
                b = dets[j].bbox
                if altNames is None:
                    nameTag = meta.names[i]
                else:
                    nameTag = altNames[i]
                if debug:
                    print("Got bbox", b)
                    print(nameTag)
                    print(dets[j].prob[i])
                    print((b.x, b.y, b.w, b.h))
                res.append((nameTag, dets[j].prob[i], (b.x, b.y, b.w, b.h)))
    if debug: print("did range")
    res = sorted(res, key=lambda x: -x[1])
    if debug: print("did sort")
    free_detections(dets, num)
    if debug: print("freed detections")
    return res

netMain = None
metaMain = None
altNames = None

def performDetect(imagePath="data/dog.jpg", thresh= 0.25, configPath = "./cfg/yolov3.cfg", weightPath = "yolov3.weights", metaPath= "./cfg/coco.data", showImage= True, makeImageOnly = False, initOnly= False):
    """
    Convenience function to handle the detection and returns of objects.

    Displaying bounding boxes requires libraries scikit-image and numpy

    Parameters
    ----------------
    imagePath: str
        Path to the image to evaluate. Raises ValueError if not found

    thresh: float (default= 0.25)
        The detection threshold

    configPath: str
        Path to the configuration file. Raises ValueError if not found

    weightPath: str
        Path to the weights file. Raises ValueError if not found

    metaPath: str
        Path to the data file. Raises ValueError if not found

    showImage: bool (default= True)
        Compute (and show) bounding boxes. Changes return.

    makeImageOnly: bool (default= False)
        If showImage is True, this won't actually *show* the image, but will create the array and return it.

    initOnly: bool (default= False)
        Only initialize globals. Don't actually run a prediction.

    Returns
    ----------------------

    When showImage is False, list of tuples like
        ('obj_label', confidence, (bounding_box_x_px, bounding_box_y_px, bounding_box_width_px, bounding_box_height_px))
        The X and Y coordinates are from the center of the bounding box. Subtract half the width or height to get the lower corner.

    Otherwise, a dict with
        {
            "detections": as above
            "image": a numpy array representing an image, compatible with scikit-image
            "caption": an image caption
        }
    """
    # Import the global variables. This lets us instance Darknet once, then just call performDetect() again without instancing again
    global metaMain, netMain, altNames  #pylint: disable=W0603
    assert 0 < thresh < 1, "Threshold should be a float between zero and one (non-inclusive)"
    if not os.path.exists(configPath):
        raise ValueError("Invalid config path `"+os.path.abspath(configPath)+"`")
    if not os.path.exists(weightPath):
        raise ValueError("Invalid weight path `"+os.path.abspath(weightPath)+"`")
    if not os.path.exists(metaPath):
        raise ValueError("Invalid data file path `"+os.path.abspath(metaPath)+"`")
    if netMain is None:
        netMain = load_net_custom(configPath.encode("ascii"), weightPath.encode("ascii"), 0, 1)  # batch size = 1
    if metaMain is None:
        metaMain = load_meta(metaPath.encode("ascii"))
    if altNames is None:
        # In Python 3, the metafile default access craps out on Windows (but not Linux)
        # Read the names file and create a list to feed to detect
        try:
            with open(metaPath) as metaFH:
                metaContents = metaFH.read()
                import re
                match = re.search("names *= *(.*)$", metaContents, re.IGNORECASE | re.MULTILINE)
                if match:
                    result = match.group(1)
                else:
                    result = None
                try:
                    if os.path.exists(result):
                        with open(result) as namesFH:
                            namesList = namesFH.read().strip().split("\n")
                            altNames = [x.strip() for x in namesList]
                except TypeError:
                    pass
        except Exception:
            pass
    if initOnly:
        print("Initialized detector")
        return None
    if not os.path.exists(imagePath):
        raise ValueError("Invalid image path `"+os.path.abspath(imagePath)+"`")
    # Do the detection
    #detections = detect(netMain, metaMain, imagePath, thresh)  # if is used cv2.imread(image)
    detections = detect(netMain, metaMain, imagePath.encode("ascii"), thresh)
    if showImage:
        try:
            from skimage import io, draw
            import numpy as np
            image = io.imread(imagePath)
            print("*** "+str(len(detections))+" Results, color coded by confidence ***")
            imcaption = []
            for detection in detections:
                label = detection[0]
                confidence = detection[1]
                pstring = label+": "+str(np.rint(100 * confidence))+"%"
                imcaption.append(pstring)
                print(pstring)
                bounds = detection[2]
                shape = image.shape
                # x = shape[1]
                # xExtent = int(x * bounds[2] / 100)
                # y = shape[0]
                # yExtent = int(y * bounds[3] / 100)
                yExtent = int(bounds[3])
                xEntent = int(bounds[2])
                # Coordinates are around the center
                xCoord = int(bounds[0] - bounds[2]/2)
                yCoord = int(bounds[1] - bounds[3]/2)
                boundingBox = [
                    [xCoord, yCoord],
                    [xCoord, yCoord + yExtent],
                    [xCoord + xEntent, yCoord + yExtent],
                    [xCoord + xEntent, yCoord]
                ]
                # Wiggle it around to make a 3px border
                rr, cc = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] for x in boundingBox], shape= shape)
                rr2, cc2 = draw.polygon_perimeter([x[1] + 1 for x in boundingBox], [x[0] for x in boundingBox], shape= shape)
                rr3, cc3 = draw.polygon_perimeter([x[1] - 1 for x in boundingBox], [x[0] for x in boundingBox], shape= shape)
                rr4, cc4 = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] + 1 for x in boundingBox], shape= shape)
                rr5, cc5 = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] - 1 for x in boundingBox], shape= shape)
                boxColor = (int(255 * (1 - (confidence ** 2))), int(255 * (confidence ** 2)), 0)
                draw.set_color(image, (rr, cc), boxColor, alpha= 0.8)
                draw.set_color(image, (rr2, cc2), boxColor, alpha= 0.8)
                draw.set_color(image, (rr3, cc3), boxColor, alpha= 0.8)
                draw.set_color(image, (rr4, cc4), boxColor, alpha= 0.8)
                draw.set_color(image, (rr5, cc5), boxColor, alpha= 0.8)
            if not makeImageOnly:
                io.imshow(image)
                io.show()
            detections = {
                "detections": detections,
                "image": image,
                "caption": "\n<br/>".join(imcaption)
            }
        except Exception as e:
            print("Unable to show image: "+str(e))
    return detections

if __name__ == "__main__":
    print(performDetect())
```

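A hedged check on the darknet.py question above (mine, speculative): the display path of `performDetect` needs scikit-image; if importing it fails, the code falls into the "Unable to show image" branch and returns without opening a window, which matches the symptom. One way to verify is `pip install scikit-image` and rerun; failing that, the detections (whose return format is documented in `performDetect`'s own docstring) can be drawn with OpenCV instead. A sketch:

```
import cv2
from darknet import performDetect

result = performDetect(imagePath="data/dog.jpg", showImage=False)
img = cv2.imread("data/dog.jpg")
for label, confidence, (cx, cy, w, h) in result:
    # coordinates are box centers, per the performDetect docstring
    x, y = int(cx - w / 2), int(cy - h / 2)
    cv2.rectangle(img, (x, y), (int(x + w), int(y + h)), (0, 255, 0), 2)
    cv2.putText(img, "%s %.2f" % (label, confidence), (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imshow("detections", img)
cv2.waitKey(0)
```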
How is the sliding-window HOG feature vector in this example computed? How do I arrive at the result by hand?

```
# -*- coding: utf-8 -*-
import cv2
import numpy as np
from skimage.feature import hog

# initialization
image_height = 48
image_width = 48
window_size = 24
window_step = 6

image = cv2.imread('a.jpg', 0)  # a.jpg is a 48x48 grayscale image

def sliding_hog_windows(image):
    hog_windows = []
    for y in range(0, image_height, window_step):
        for x in range(0, image_width, window_step):
            window = image[y:y+window_size, x:x+window_size]
            hog_windows.extend(hog(window, orientations=8, pixels_per_cell=(8, 8),
                                   cells_per_block=(1, 1), visualise=False))
    return hog_windows

features = sliding_hog_windows(image)
# Debugging shows features is a 2592-dimensional vector. How is that computed by hand?
```
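A worked calculation (mine, not from the thread): the loops place a window at x, y ∈ {0, 6, 12, ..., 42}, i.e. 8 positions per axis, and NumPy slicing silently truncates the windows that overhang the 48-pixel image, so the window edge lengths per position are 24, 24, 24, 24, 24, 18, 12, 6. With pixels_per_cell=(8, 8) and cells_per_block=(1, 1), hog keeps ⌊edge/8⌋ cells per axis: 3, 3, 3, 3, 3, 2, 1, 0, summing to 18. Each window contributes cells_y × cells_x × 8 orientation bins, and summing over all positions factorizes into 8 × 18 × 18 = 2592. A two-line check:

```
cells = [min(48 - s, 24) // 8 for s in range(0, 48, 6)]  # cells per axis at each window position
print(cells, sum(cells))        # [3, 3, 3, 3, 3, 2, 1, 0] 18
print(sum(cells) ** 2 * 8)      # 2592, matching the debugger
```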

Calling one .py from another .py fails with an error?

The main program is below. `num` is meant to number the photos, but the call fails:

```
# -*- coding: UTF-8 -*-
#!/usr/bin/env python
# Move forward 1 m, then trace a pentagon with side length 0.6 m
import rospy
from geometry_msgs.msg import Twist
from math import radians
import trying

class ROUTE():
    count = 0
    num = 0

    def __init__(self):
        # initialize the node
        rospy.init_node('route', anonymous=True)
        # exit on ctrl + c
        rospy.on_shutdown(self.shutdown)
        self.cmd_vel = rospy.Publisher('cmd_vel_mux/input/navi', Twist, queue_size=10)
        # 5 Hz update rate
        r = rospy.Rate(5);
        # two different Twists: turn, and go straight
        # go forward
        move_cmd = Twist()
        move_cmd.linear.x = 0.2
        # rotate at 18 deg/s
        turn_cmd = Twist()
        turn_cmd.linear.x = 0
        turn_cmd.angular.z = radians(18);
        # move forward 1 m
        rospy.loginfo("Looking for a spot...")
        for x in range(0, 25):
            self.cmd_vel.publish(move_cmd)
            r.sleep()
        while not rospy.is_shutdown():
            # time for a photo
            rospy.loginfo("1, 2, 3, cheese!")
            for x in range(0, 15):
                trying.TakePhoto( num )
                num = num + 1
                r.sleep()
            # turn 54 degrees
            rospy.loginfo("Ugly shot... let me try another angle")
            for x in range(0, 15):
                self.cmd_vel.publish(turn_cmd)
                r.sleep()
            # move forward 0.5 m
            rospy.loginfo("Looking for the next spot...")
            for x in range(0, 15):
                self.cmd_vel.publish(move_cmd)
                r.sleep()
            # turn 234 degrees
            rospy.loginfo("This is the place!")
            for x in range(0, 65):
                self.cmd_vel.publish(turn_cmd)
                r.sleep()
            count = count + 1
            if (count == 5):
                count = 0
            if (count == 0):
                rospy.loginfo("Time to wrap up? I'm getting tired...")
            if (num > 7):
                rospy.loginfo("Battery almost dead... let's stop, robots get tired too!")

    def shutdown(self):
        # stop
        rospy.loginfo("Stopping. Remember to recharge!")
        self.cmd_vel.publish(Twist())
        rospy.sleep(1)

if __name__ == '__main__':
    try:
        ROUTE()
    except:
        rospy.loginfo("Bye~~")
        rospy.sleep(1)
```

The trying.py being called is:

```
# -*- coding: UTF-8 -*-
#!/usr/bin/env python
from __future__ import print_function
import sys
import rospy
import cv2
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class TakePhoto ( num ):
    def __init__(self):
        self.bridge = CvBridge()
        self.image_received = False
        # Connect image topic
        img_topic = "/camera/rgb/image_raw"
        self.image_sub = rospy.Subscriber(img_topic, Image, self.callback)
        # Allow up to one second to connection
        rospy.sleep(1)

    def callback(self, data):
        # Convert image to OpenCV format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
        self.image_received = True
        self.image = cv_image

    def take_picture(self, img_title):
        if self.image_received:
            # Save an image
            cv2.imwrite(img_title, self.image)
            return True
        else:
            return False

# Initialize
rospy.init_node('take_photo', anonymous=False)
camera = TakePhoto()

# Take a photo
# Use '_image_title' parameter from command line
# Default value is 'photo.jpg'
img_title = rospy.get_param('~image_title', num + '.jpg')
if camera.take_picture(img_title):
    rospy.loginfo("Saved " + img_title)
else:
    rospy.loginfo("No image taken...")

# Sleep to give the last log messages time to be sent
rospy.sleep(1)
```

The error:

```
Traceback (most recent call last):
  File "route.py", line 10, in <module>
    import trying
  File "/home/hazel/helloworld/turtlebot/trying.py", line 13, in <module>
    class TakePhoto ( num ):
NameError: name 'num' is not defined
```

I don't see what's wrong... The previous two errors were indentation problems, and I've since switched everything to tabs. Even if I delete `num` entirely, it exits without calling anything. I think it's a problem with how the function is called.
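A hedged restructuring (mine): `class TakePhoto(num)` declares `num` as a *base class*, which Python evaluates the moment `import trying` runs, hence the NameError; parameters belong to methods, not to the class statement. The module-level statements in trying.py also run on import, which is another reason it misbehaves. A sketch keeping the question's imports (rospy, cv2, CvBridge, Image):

```
# trying.py, restructured
class TakePhoto(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.image_received = False
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.callback)
        rospy.sleep(1)  # allow up to one second for the connection

    def callback(self, data):
        try:
            self.image = self.bridge.imgmsg_to_cv2(data, "bgr8")
            self.image_received = True
        except CvBridgeError as e:
            print(e)

    def take_picture(self, img_title):
        # the photo number arrives here, at call time, as part of the filename
        if self.image_received:
            cv2.imwrite(img_title, self.image)
            return True
        return False

# route.py then constructs the camera once and passes the running number:
#     camera = trying.TakePhoto()
#     camera.take_picture(str(num) + '.jpg')
```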

Why do I get AttributeError: 'NoneType' object has no attribute 'tobytes'? Please help

The code (its first lines are cut off in the post):

```
...ndex1 for i in range(img_list.__len__())]
    index1 += 1
    img_data += img_list
    labels_data += label_list

recorder_file = 'images/train.tfrecord'
writer = tf.python_io.TFRecordWriter(recorder_file)
for i in range(img_data.__len__()):
    im_d = img_data[i]
    im_l = labels_data[i]
    data = cv2.imread(im_d)
    # data = tf.gfile.FastGFile(im_d, 'rb').read()  # yields bytes directly, no conversion
    # needed; the resulting file is smaller than going through OpenCV
    # pack the image and label together
    ex = tf.train.Example(
        features=tf.train.Features(
            feature={
                'image': tf.train.Feature(
                    bytes_list=tf.train.BytesList(
                        value=[data.tobytes()]
                    )),
                'label': tf.train.Feature(
                    int64_list=tf.train.Int64List(
                        value=[im_l]
                    )),
            }
        )
    )
    writer.write(ex.SerializeToString())
print('everything is ok!')
writer.close()
```

The error:

```
File "D:/2D/imgWrite.py", line 46, in <module>
    value=[data.tobytes()]
AttributeError: 'NoneType' object has no attribute 'tobytes'
```
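A hedged diagnosis (mine): cv2.imread returns None instead of raising when a path is wrong, the format is unsupported, or the path contains non-ASCII characters, and `None.tobytes()` then fails exactly as shown. Guarding the read surfaces the offending path; a sketch reusing `im_d` from the loop above:

```
data = cv2.imread(im_d)
if data is None:
    # make the bad path visible instead of crashing later at data.tobytes()
    raise IOError("cv2.imread failed for: %s" % im_d)
```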
