Attempt to execute SCRIPT im2double as a function:
D:\MATLAB\R2018a\toolbox\matlab\images\im2double.m
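This error usually means MATLAB resolves `im2double` to a file it cannot treat as a function, either because a user file of the same name shadows the Image Processing Toolbox version or because the toolbox path cache is stale; the exact cause on this machine is an assumption. A minimal diagnostic sketch using only base MATLAB commands:

```
% List every im2double on the path, in order of precedence; anything that is
% not the toolbox file shadows it and should be renamed or removed.
which im2double -all
% Show the source MATLAB actually resolves the name to.
type im2double
% If only the toolbox file is listed, rebuild the toolbox path cache.
rehash toolboxcache
% Last resort: restore the default search path, then save it.
% restoredefaultpath; savepath
```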

Other related questions
C:\Users\86186>calc
C:\Users\86186>taskkill /f /im calc.exe
ERROR: The process "calc.exe" was not found.
My machine did open calc successfully, and Calculator shows up in Task Manager, but taskkill still cannot find the process.
A question about MATLAB function definitions: the function is defined but MATLAB reports it as undefined. Hoping someone can explain.
![This is the .m file](https://img-ask.csdn.net/upload/201904/18/1555578183_437205.png)

Below is the function definition:

```
function p = init_phi(im,type)
im = dimensionz(im);
%m = im(150:250,150:250);
[dim1, dim2] = size(im);
p = zeros(dim1+2,dim2+2);
switch lower(type)
    case 'circle'
        for i = 1:dim1+2
            for j = 1:dim2+2
                p(i,j) = (sqrt((i/dim1-0.5)^2 + (j/dim2-0.5)^2) - 0.2) * 30;
            end
        end
    case 'grid'
        for i = 1:dim1+1
            for j = 1:dim2+1
                p(i,j) = sin(i*pi/5) + sin(j*pi/5);
            end
        end
    case 'circle 2'
        for i = 1:dim1+2
            for j = 1:dim2+2
                p(i,j) = (sqrt(((i+2)/dim1-0.3)^2 + (j/dim2-0.9)^2) - 0.2) * 30;
            end
        end
    case 'square'
        %p = zeros(dim1+2,dim2+2);
        p(floor((dim1+2)/3:(dim1+2)*2/3),floor((dim2+2)/3:(dim2+2)*2/3)) = 1;
        % p(floor(((dim1+2)/3)+1):floor((dim1+2)*2/3)-1,floor((dim2+2)/3)+1:floor((dim2+2)*2/3)-1) = 0;
        p = bwdist(p)-bwdist(1-p)+im2double(p)-.5;
end
end
```

When I run it, MATLAB reports: Undefined function or variable 'init'. Could someone walk me through this in detail? ORZ
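For what it is worth, the error names `init` rather than `init_phi`, which usually points at the call site or the file name rather than the definition above. A minimal usage sketch, assuming the code above is saved as init_phi.m on the MATLAB path together with the asker's own dimensionz.m helper:

```
% Confirm MATLAB can find the function file under its exact name.
which init_phi              % should print the full path of init_phi.m

% Call it by the full name; a stray space ("init phi(...)") or a file saved
% under a different name makes MATLAB look for a function called 'init'.
im = rand(100, 120);        % any 2-D array stands in for the image here
p  = init_phi(im, 'circle');
```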
The MNIST deep-learning example does not converge; accuracy stays in the low teens (percent)
minst深度学习程序不收敛 是关于tensorflow的问题。我是tensorflow的初学者。从书上抄了minst的学习程序。但是运行之后,无论学习了多少批次,成功率基本不变。 我做了许多尝试,去掉了正则化,去掉了滑动平均,还是不行。把batch_size改成了2,观察变量运算情况,输入x是正确的,但神经网络的输出y很多情况下在x不一样的情况下y的两个结果是完全一样的。进而softmax的结果也是一样的。百思不得其解,找不到造成这种情况的原因。这里把代码和运行情况都贴出来,请大神帮我找找原因。大过年的,祝大家春节快乐万事如意。 补充一下,进一步的测试表明,不是不能完成训练,而是要到700000轮以上,且最高达到65%左右就不能提高了。仔细看每一步的参数,是regularization值过大10e15以上,一点点减少,前面的训练都在训练它了。这东西我不是很明白。 ``` import struct import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider, Button import tensorflow as tf import time #把MNIST的操作封装在一个类中,以后用起来方便。 class MyMinst(): def decode_idx3_ubyte(self,idx3_ubyte_file): with open(idx3_ubyte_file, 'rb') as f: print('解析文件:', idx3_ubyte_file) fb_data = f.read() offset = 0 fmt_header = '>iiii' # 以大端法读取4个 unsinged int32 magic_number, num_images, num_rows, num_cols = struct.unpack_from(fmt_header, fb_data, offset) print('idex3 魔数:{},图片数:{}'.format(magic_number, num_images)) offset += struct.calcsize(fmt_header) fmt_image = '>' + str(num_rows * num_cols) + 'B' images = np.empty((num_images, num_rows*num_cols)) #做了修改 for i in range(num_images): im = struct.unpack_from(fmt_image, fb_data, offset) images[i] = np.array(im)#这里用一维数组表示图片,np.array(im).reshape((num_rows, num_cols)) offset += struct.calcsize(fmt_image) return images def decode_idx1_ubyte(self,idx1_ubyte_file): with open(idx1_ubyte_file, 'rb') as f: print('解析文件:', idx1_ubyte_file) fb_data = f.read() offset = 0 fmt_header = '>ii' # 以大端法读取两个 unsinged int32 magic_number, label_num = struct.unpack_from(fmt_header, fb_data, offset) print('idex1 魔数:{},标签数:{}'.format(magic_number, label_num)) offset += struct.calcsize(fmt_header) labels = np.empty(shape=[0,10],dtype=float) #神经网络需要把label变成10位float的数组 fmt_label = '>B' # 每次读取一个 byte for i in range(label_num): n=struct.unpack_from(fmt_label, fb_data, offset) labels=np.append(labels,[[0,0,0,0,0,0,0,0,0,0]],axis=0) labels[i][n]=1 offset += struct.calcsize(fmt_label) return labels def __init__(self): #固定的训练文件位置 self.img=self.decode_idx3_ubyte("/home/zhangyl/Downloads/mnist/train-images.idx3-ubyte") self.result=self.decode_idx1_ubyte("/home/zhangyl/Downloads/mnist/train-labels.idx1-ubyte") print(self.result[0]) print(self.result[1000]) print(self.result[25000]) #固定的验证文件位置 self.validate_img=self.decode_idx3_ubyte("/home/zhangyl/Downloads/mnist/t10k-images.idx3-ubyte") self.validate_result=self.decode_idx1_ubyte("/home/zhangyl/Downloads/mnist/t10k-labels.idx1-ubyte") #每一批读训练数据的起始位置 self.train_read_addr=0 #每一批读训练数据的batchsize self.train_batchsize=100 #每一批读验证数据的起始位置 self.validate_read_addr=0 #每一批读验证数据的batchsize self.validate_batchsize=100 #定义用于返回batch数据的变量 self.train_img_batch=self.img self.train_result_batch=self.result self.validate_img_batch=self.validate_img self.validate_result_batch=self.validate_result def get_next_batch_traindata(self): n=len(self.img) #对参数范围适当约束 if self.train_read_addr+self.train_batchsize<=n : self.train_img_batch=self.img[self.train_read_addr:self.train_read_addr+self.train_batchsize] self.train_result_batch=self.result[self.train_read_addr:self.train_read_addr+self.train_batchsize] self.train_read_addr+=self.train_batchsize #改变起始位置 if self.train_read_addr==n : self.train_read_addr=0 else: self.train_img_batch=self.img[self.train_read_addr:n] self.train_img_batch.append(self.img[0:self.train_read_addr+self.train_batchsize-n]) self.train_result_batch=self.result[self.train_read_addr:n] self.train_result_batch.append(self.result[0:self.train_read_addr+self.train_batchsize-n]) 
self.train_read_addr=self.train_read_addr+self.train_batchsize-n #改变起始位置,这里没考虑batchsize大于n的情形 return self.train_img_batch,self.train_result_batch #测试一下用临时变量返回是否可行 def set_train_read_addr(self,addr): self.train_read_addr=addr def set_train_batchsize(self,batchsize): self.train_batchsize=batchsize if batchsize <1 : self.train_batchsize=1 def set_validate_read_addr(self,addr): self.validate_read_addr=addr def set_validate_batchsize(self,batchsize): self.validate_batchsize=batchsize if batchsize<1 : self.validate_batchsize=1 myminst=MyMinst() #minst类的实例 batch_size=2 #设置每一轮训练的Batch大小 learning_rate=0.8 #初始学习率 learning_rate_decay=0.999 #学习率的衰减 max_steps=300000 #最大训练步数 #定义存储训练轮数的变量,在使用tensorflow训练神经网络时, #一般会将代表训练轮数的变量通过trainable参数设置为不可训练的 training_step = tf.Variable(0,trainable=False) #定义得到隐藏层和输出层的前向传播计算方式,激活函数使用relu() def hidden_layer(input_tensor,weights1,biases1,weights2,biases2,layer_name): layer1=tf.nn.relu(tf.matmul(input_tensor,weights1)+biases1) return tf.matmul(layer1,weights2)+biases2 x=tf.placeholder(tf.float32,[None,784],name="x-input") y_=tf.placeholder(tf.float32,[None,10],name="y-output") #生成隐藏层参数,其中weights包含784*500=39200个参数 weights1=tf.Variable(tf.truncated_normal([784,500],stddev=0.1)) biases1=tf.Variable(tf.constant(0.1,shape=[500])) #生成输出层参数,其中weights2包含500*10=5000个参数 weights2=tf.Variable(tf.truncated_normal([500,10],stddev=0.1)) biases2=tf.Variable(tf.constant(0.1,shape=[10])) #计算经过神经网络前后向传播后得到的y值 y=hidden_layer(x,weights1,biases1,weights2,biases2,'y') #初始化一个滑动平均类,衰减率为0.99 #为了使模型在训练前期可以更新的更快,这里提供了num_updates参数,并设置为当前网络的训练轮数 #averages_class=tf.train.ExponentialMovingAverage(0.99,training_step) #定义一个更新变量滑动平均值的操作需要向滑动平均类的apply()函数提供一个参数列表 #train_variables()函数返回集合图上Graph.TRAINABLE_VARIABLES中的元素。 #这个集合的元素就是所有没有指定trainable_variables=False的参数 #averages_op=averages_class.apply(tf.trainable_variables()) #再次计算经过神经网络前向传播后得到的y值,这里使用了滑动平均,但要牢记滑动平均值只是一个影子变量 #average_y=hidden_layer(x,averages_class.average(weights1), # averages_class.average(biases1), # averages_class.average(weights2), # averages_class.average(biases2), # 'average_y') #softmax,计算交叉熵损失,L2正则,随机梯度优化器,学习率采用指数衰减 #函数原型为sparse_softmax_cross_entropy_with_logits(_sential,labels,logdits,name) #与softmax_cross_entropy_with_logits()函数的计算方式相同,更适用于每个类别相互独立且排斥 #的情况,即每一幅图只能属于一类 #在1.0.0版本的TensorFlow中,这个函数只能通过命名参数的方式来使用,在这里logits参数是神经网 #络不包括softmax层的前向传播结果,lables参数给出了训练数据的正确答案 softmax=tf.nn.softmax(y) cross_entropy=tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y+1e-10,labels=tf.argmax(y_,1)) #argmax()函数原型为argmax(input,axis,name,dimension)用于计算每一个样例的预测答案,其中 # input参数y是一个batch_size*10(batch_size行,10列)的二维数组。每一行表示一个样例前向传 # 播的结果,axis参数“1”表示选取最大值的操作只在第一个维度进行。即只在每一行选取最大值对应的下标 # 于是得到的结果是一个长度为batch_size的一维数组,这个一维数组的值就表示了每一个样例的数字识别 # 结果。 regularizer=tf.contrib.layers.l2_regularizer(0.0001) #计算L2正则化损失函数 regularization=regularizer(weights1)+regularizer(weights2) #计算模型的正则化损失 loss=tf.reduce_mean(cross_entropy)#+regularization #总损失 #用指数衰减法设置学习率,这里staircase参数采用默认的False,即学习率连续衰减 learning_rate=tf.train.exponential_decay(learning_rate,training_step, batch_size,learning_rate_decay) #使用GradientDescentOptimizer优化算法来优化交叉熵损失和正则化损失 train_op=tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=training_step) #在训练这个模型时,每过一遍数据既需要通过反向传播来更新神经网络中的参数,又需要 # 更新每一个参数的滑动平均值。control_dependencies()用于这样的一次性多次操作 #同样的操作也可以使用下面这行代码完成: #train_op=tf.group(train_step,average_op) #with tf.control_dependencies([train_step,averages_op]): # train_op=tf.no_op(name="train") #检查使用了滑动平均模型的神经网络前向传播结果是否正确 #equal()函数原型为equal(x,y,name),用于判断两个张量的每一维是否相等。 
#如果相等返回True,否则返回False crorent_predicition=tf.equal(tf.argmax(y,1),tf.argmax(y_,1)) #cast()函数的原型为cast(x,DstT,name),在这里用于将一个布尔型的数据转换为float32类型 #之后对得到的float32型数据求平均值,这个平均值就是模型在这一组数据上的正确率 accuracy=tf.reduce_mean(tf.cast(crorent_predicition,tf.float32)) #创建会话和开始训练过程 with tf.Session() as sess: #在稍早的版本中一般使用initialize_all_variables()函数初始化全部变量 tf.global_variables_initializer().run() #准备验证数据 validate_feed={x:myminst.validate_img,y_:myminst.validate_result} #准备测试数据 test_feed= {x:myminst.img,y_:myminst.result} for i in range(max_steps): if i%1000==0: #计算滑动平均模型在验证数据上的结果 #为了能得到百分数输出,需要将得到的validate_accuracy扩大100倍 validate_accuracy= sess.run(accuracy,feed_dict=validate_feed) print("After %d trainning steps,validation accuracy using average model is %g%%" %(i,validate_accuracy*100)) #产生这一轮使用一个batch的训练数据,并进行训练 #input_data.read_data_sets()函数生成的类提供了train.next_batch()函数 #通过设置函数的batch_size参数就可以从所有的训练数据中读取一个小部分作为一个训练batch myminst.set_train_batchsize(batch_size) xs,ys=myminst.get_next_batch_traindata() var_print=sess.run([x,y,y_,loss,train_op,softmax,cross_entropy,regularization,weights1],feed_dict={x:xs,y_:ys}) print("after ",i," trainning steps:") print("x=",var_print[0][0],var_print[0][1],"y=",var_print[1],"y_=",var_print[2],"loss=",var_print[3], "softmax=",var_print[5],"cross_entropy=",var_print[6],"regularization=",var_print[7],var_print[7]) time.sleep(0.5) #使用测试数据集检验神经网络训练之后的正确率 #为了能得到百分数输出,需要将得到的test_accuracy扩大100倍 test_accuracy=sess.run(accuracy,feed_dict=test_feed) print("After %d training steps,test accuracy using average model is %g%%"%(max_steps,test_accuracy*100)) 下面是运行情况的一部分: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 76. 202. 254. 255. 163. 37. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 182. 253. 253. 253. 253. 253. 253. 23. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 179. 253. 253. 212. 91. 218. 253. 253. 179. 109. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 105. 253. 253. 160. 35. 156. 253. 253. 253. 253. 250. 113. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 19. 212. 253. 253. 88. 121. 253. 233. 128. 91. 245. 253. 248. 114. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 104. 253. 253. 110. 2. 142. 253. 90. 0. 0. 26. 199. 253. 248. 63. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 173. 253. 253. 29. 0. 84. 228. 39. 0. 0. 0. 72. 251. 253. 215. 29. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 203. 13. 0. 0. 0. 0. 0. 0. 0. 0. 82. 253. 253. 170. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 164. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 198. 253. 184. 6. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 82. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 138. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 128. 253. 253. 47. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 48. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 154. 253. 253. 47. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 48. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 102. 253. 253. 99. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 48. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 164. 0. 0. 0. 0. 0. 0. 0. 0. 0. 16. 208. 253. 211. 17. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 32. 244. 253. 175. 4. 0. 0. 0. 0. 0. 0. 0. 0. 44. 253. 253. 156. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 171. 253. 253. 29. 0. 0. 0. 0. 0. 0. 0. 30. 217. 253. 188. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 171. 253. 253. 59. 0. 0. 0. 0. 0. 0. 60. 217. 253. 253. 70. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 78. 253. 253. 231. 48. 0. 0. 0. 26. 128. 249. 253. 244. 94. 15. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 151. 253. 253. 234. 101. 121. 219. 229. 253. 253. 201. 80. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 38. 232. 253. 253. 253. 253. 253. 253. 253. 201. 66. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 232. 253. 253. 95. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 86. 46. 0. 0. 0. 0. 0. 0. 91. 246. 252. 232. 57. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 103. 252. 187. 13. 0. 0. 0. 0. 22. 219. 252. 252. 175. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 0. 0. 0. 0. 8. 181. 252. 246. 30. 0. 0. 0. 0. 65. 252. 237. 197. 64. 0. 0. 0. 0. 0. 0. 0. 0. 0. 87. 0. 0. 0. 13. 172. 252. 252. 104. 0. 0. 0. 0. 5. 184. 252. 67. 103. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 172. 252. 248. 145. 14. 0. 0. 0. 0. 109. 252. 183. 137. 64. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 5. 224. 252. 248. 134. 0. 0. 0. 0. 0. 53. 238. 252. 245. 86. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 12. 174. 252. 223. 88. 0. 0. 0. 0. 0. 0. 209. 252. 252. 179. 9. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 171. 252. 246. 61. 0. 0. 0. 0. 0. 0. 83. 241. 252. 211. 14. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 129. 252. 252. 249. 220. 220. 215. 111. 192. 220. 221. 243. 252. 252. 149. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 144. 253. 253. 253. 253. 253. 253. 253. 253. 253. 255. 253. 226. 153. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 44. 77. 77. 77. 77. 77. 77. 77. 77. 153. 253. 235. 32. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 74. 214. 240. 114. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 221. 243. 57. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 180. 252. 119. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 136. 252. 153. 7. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 136. 251. 226. 34. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 123. 252. 246. 39. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 165. 252. 127. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 165. 175. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] 
y= [[ 0.58273095 0.50121385 -0.74845004 0.35842288 -0.13741069 -0.5839622 0.2642774 0.5101677 -0.29416046 0.5471707 ] [ 0.58273095 0.50121385 -0.74845004 0.35842288 -0.13741069 -0.5839622 0.2642774 0.5101677 -0.29416046 0.5471707 ]] y_= [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]] loss= 2.2801425 softmax= [[0.14659645 0.13512042 0.03872566 0.11714067 0.07134604 0.04564939 0.10661562 0.13633572 0.06099501 0.14147504] [0.14659645 0.13512042 0.03872566 0.11714067 0.07134604 0.04564939 0.10661562 0.13633572 0.06099501 0.14147504]] cross_entropy= [1.9200717 2.6402135] regularization= 50459690000000.0 50459690000000.0 after 45 trainning steps: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 25. 214. 225. 90. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 145. 212. 253. 253. 60. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 106. 253. 253. 246. 188. 23. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 45. 164. 254. 253. 223. 108. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 236. 253. 252. 124. 28. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 100. 217. 253. 218. 116. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 158. 175. 225. 253. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 217. 241. 248. 114. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 21. 201. 253. 253. 114. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 107. 253. 253. 213. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 170. 254. 254. 169. 0. 0. 0. 0. 0. 2. 13. 100. 133. 89. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 18. 210. 253. 253. 100. 0. 0. 0. 19. 76. 116. 253. 253. 253. 176. 4. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 41. 222. 253. 208. 18. 0. 0. 93. 209. 232. 217. 224. 253. 253. 241. 31. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 157. 253. 253. 229. 32. 0. 154. 250. 246. 36. 0. 49. 253. 253. 168. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 128. 253. 253. 253. 195. 125. 247. 166. 69. 0. 0. 37. 236. 253. 168. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 37. 253. 253. 253. 253. 253. 135. 32. 0. 7. 130. 73. 202. 253. 133. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 185. 253. 253. 253. 253. 64. 0. 10. 210. 253. 253. 253. 153. 9. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 66. 253. 253. 253. 253. 238. 218. 221. 253. 253. 235. 156. 37. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 5. 111. 228. 253. 253. 253. 253. 254. 253. 168. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 9. 110. 178. 253. 253. 249. 63. 5. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 121. 121. 240. 253. 218. 121. 121. 44. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 17. 107. 184. 240. 253. 252. 252. 252. 252. 252. 252. 219. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 75. 122. 230. 252. 252. 252. 253. 252. 252. 252. 252. 252. 252. 239. 56. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 77. 129. 213. 244. 252. 252. 252. 252. 252. 253. 252. 252. 209. 252. 252. 252. 225. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 240. 252. 252. 252. 252. 252. 252. 213. 185. 53. 53. 53. 89. 252. 252. 252. 120. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 240. 232. 198. 93. 164. 108. 66. 28. 0. 0. 0. 0. 81. 252. 252. 222. 24. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 76. 50. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 171. 252. 243. 108. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 144. 238. 252. 115. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 70. 241. 248. 133. 28. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 121. 252. 252. 172. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 64. 255. 253. 209. 21. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 246. 253. 207. 21. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 172. 252. 209. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 168. 252. 252. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 43. 208. 252. 241. 53. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 166. 252. 204. 62. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 166. 243. 191. 29. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 168. 231. 177. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 6. 172. 241. 50. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 177. 202. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] y= [[ 0.8592988 0.3954708 -0.77875614 0.26675048 0.19804694 -0.61968666 0.18084174 0.4034736 -0.34189415 0.43645462] [ 0.8592988 0.3954708 -0.77875614 0.26675048 0.19804694 -0.61968666 0.18084174 0.4034736 -0.34189415 0.43645462]] y_= [[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]] loss= 2.2191708 softmax= [[0.19166051 0.12052987 0.0372507 0.10597225 0.09893605 0.04367344 0.09724841 0.12149832 0.05765821 0.12557226] [0.19166051 0.12052987 0.0372507 0.10597225 0.09893605 0.04367344 0.09724841 0.12149832 0.05765821 0.12557226]] cross_entropy= [2.3304868 2.1078548] regularization= 50459690000000.0 50459690000000.0 after 46 trainning steps: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 196. 99. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 5. 49. 0. 0. 0. 0. 0. 0. 34. 244. 98. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 89. 135. 0. 0. 0. 0. 0. 0. 40. 253. 98. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 171. 150. 0. 0. 0. 0. 0. 0. 40. 253. 98. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 254. 233. 0. 0. 0. 0. 0. 0. 77. 253. 98. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 255. 136. 0. 0. 0. 0. 0. 0. 77. 254. 99. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 254. 135. 0. 0. 0. 0. 0. 0. 123. 253. 98. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 254. 135. 0. 0. 0. 0. 0. 0. 136. 253. 98. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 16. 254. 135. 0. 0. 0. 0. 0. 0. 136. 237. 8. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 98. 254. 135. 0. 0. 38. 99. 98. 98. 219. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 196. 255. 208. 186. 254. 254. 255. 254. 254. 254. 254. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 105. 254. 253. 239. 180. 135. 39. 39. 39. 237. 170. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 137. 92. 24. 0. 0. 0. 0. 0. 234. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 237. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 79. 253. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 31. 242. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 61. 248. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 234. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 234. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 196. 155. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 50. 236. 255. 124. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 53. 231. 253. 253. 107. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 9. 193. 253. 253. 230. 4. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 156. 253. 253. 149. 36. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 253. 253. 190. 8. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 175. 253. 253. 72. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 123. 253. 253. 138. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 244. 253. 230. 34. 0. 9. 24. 23. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 181. 253. 249. 123. 0. 69. 195. 253. 249. 146. 15. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 21. 231. 253. 202. 0. 70. 236. 253. 253. 253. 253. 170. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 22. 139. 253. 213. 26. 13. 200. 253. 253. 183. 252. 253. 220. 22. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 72. 253. 253. 129. 0. 86. 253. 253. 129. 4. 105. 253. 253. 70. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 72. 253. 253. 77. 22. 245. 253. 183. 4. 0. 2. 105. 253. 70. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 132. 253. 
253. 11. 24. 253. 253. 116. 0. 0. 1. 150. 253. 70. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 189. 253. 241. 10. 24. 253. 253. 59. 0. 0. 82. 253. 212. 30. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 189. 253. 147. 0. 24. 253. 253. 150. 30. 44. 208. 212. 31. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 189. 253. 174. 3. 7. 185. 253. 253. 227. 247. 184. 30. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 150. 253. 253. 145. 95. 234. 253. 253. 253. 126. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 72. 253. 253. 253. 253. 253. 253. 253. 169. 14. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 5. 114. 240. 253. 253. 234. 135. 44. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] y= [[ 0.7093834 0.30119324 -0.80789334 0.1838598 0.12065991 -0.6538477 0.49587095 0.6995347 -0.38699397 0.33823296] [ 0.7093834 0.30119324 -0.80789334 0.1838598 0.12065991 -0.6538477 0.49587095 0.6995347 -0.38699397 0.33823296]] y_= [[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]] loss= 2.2107558 softmax= [[0.16371341 0.10884525 0.03590371 0.09679484 0.09086671 0.04188326 0.1322382 0.16210894 0.05469323 0.11295244] [0.16371341 0.10884525 0.03590371 0.09679484 0.09086671 0.04188326 0.1322382 0.16210894 0.05469323 0.11295244]] cross_entropy= [2.3983614 2.0231504] regularization= 50459690000000.0 50459690000000.0 after 47 trainning steps: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 139. 212. 253. 159. 86. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 34. 89. 203. 253. 252. 252. 252. 252. 74. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 49. 184. 234. 252. 252. 184. 110. 100. 208. 252. 199. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 95. 233. 252. 252. 176. 56. 0. 0. 0. 17. 234. 249. 75. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 220. 253. 178. 54. 4. 0. 0. 0. 0. 43. 240. 243. 50. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 221. 255. 180. 55. 5. 0. 0. 0. 7. 160. 253. 168. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 116. 253. 252. 252. 67. 0. 0. 0. 91. 252. 231. 42. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 32. 190. 252. 252. 185. 38. 0. 119. 234. 252. 54. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 177. 252. 252. 179. 155. 236. 227. 119. 4. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 26. 221. 252. 252. 253. 252. 130. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 32. 229. 253. 255. 144. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 66. 236. 252. 253. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 66. 234. 252. 252. 253. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 19. 236. 252. 252. 252. 253. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 53. 181. 252. 168. 43. 232. 253. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 179. 255. 218. 32. 93. 253. 252. 84. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 81. 244. 239. 33. 0. 114. 252. 209. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 207. 252. 237. 70. 153. 240. 252. 32. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 207. 252. 253. 252. 252. 252. 210. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 61. 242. 253. 252. 168. 96. 12. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 68. 254. 255. 254. 107. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 176. 230. 253. 253. 253. 212. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 28. 197. 253. 253. 253. 253. 253. 229. 107. 14. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 194. 253. 253. 253. 253. 253. 253. 253. 253. 53. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 69. 241. 253. 253. 253. 253. 241. 186. 253. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 161. 253. 253. 253. 246. 40. 57. 231. 253. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 140. 253. 253. 253. 253. 154. 0. 25. 253. 253. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 213. 253. 253. 253. 135. 8. 0. 3. 128. 253. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 77. 238. 253. 253. 253. 7. 0. 0. 0. 116. 253. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 165. 253. 253. 231. 70. 1. 0. 0. 0. 78. 237. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 33. 253. 253. 253. 182. 0. 0. 0. 0. 0. 0. 200. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 98. 253. 253. 253. 24. 0. 0. 0. 0. 0. 0. 42. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 197. 253. 253. 253. 24. 0. 0. 0. 0. 0. 0. 163. 253. 195. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 197. 253. 253. 189. 13. 0. 0. 0. 0. 0. 53. 227. 253. 121. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 197. 253. 253. 114. 0. 0. 0. 0. 0. 21. 227. 253. 231. 27. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 197. 253. 253. 114. 0. 0. 0. 5. 131. 143. 253. 231. 59. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 197. 253. 253. 236. 73. 58. 217. 223. 253. 253. 253. 174. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 197. 253. 253. 253. 253. 253. 253. 253. 253. 253. 253. 48. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 149. 253. 253. 253. 253. 253. 253. 253. 253. 182. 15. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 12. 168. 253. 253. 253. 253. 253. 248. 89. 23. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] 
y= [[ 0.5813921 0.21609789 -0.8359629 0.10818548 0.44052082 -0.6865921 0.78338754 0.5727978 -0.4297532 0.24992661] [ 0.5813921 0.21609789 -0.8359629 0.10818548 0.44052082 -0.6865921 0.78338754 0.5727978 -0.4297532 0.24992661]] y_= [[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.] [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] loss= 2.452383 softmax= [[0.14272858 0.09905256 0.03459087 0.08892009 0.1239742 0.04016358 0.1746773 0.14150718 0.05192496 0.10246069] [0.14272858 0.09905256 0.03459087 0.08892009 0.1239742 0.04016358 0.1746773 0.14150718 0.05192496 0.10246069]] cross_entropy= [2.9579558 1.9468105] regularization= 50459690000000.0 50459690000000.0 已终止 ```
A question about extracting moving objects from video in MATLAB
I want to build a MATLAB routine that extracts moving objects from a video, as in the picture ![screenshot](https://img-ask.csdn.net/upload/201911/23/1574496675_713294.png). My code is below, but the result is extremely poor and I cannot tell where the mistake is. Any pointers would be appreciated.
```
fileName = 'video.avi';
obj = VideoReader(fileName);        % open the input video
numFrames = obj.NumberOfFrames;     % total number of frames
for k = 1 : 206                     % read frames
    frame  = read(obj, k);          % read the k-th frame
    frame1 = read(obj, k+1);
    frame2 = read(obj, k+2);
    grayframe  = rgb2gray(frame);   % convert color to grayscale
    grayframe1 = rgb2gray(frame1);
    grayframe2 = rgb2gray(frame2);
    difgrayframe  = grayframe  - grayframe1;    % adjacent-frame difference
    difgrayframe2 = grayframe1 - grayframe2;    % adjacent-frame difference
    level1 = graythresh(difgrayframe);
    level2 = graythresh(difgrayframe2);
    fdiff1 = im2bw(abs(difgrayframe),  level1); % threshold into a binary image
    fdiff2 = im2bw(abs(difgrayframe2), level2); % threshold into a binary image
    f = fdiff1 & fdiff2;            % the moving region
    pause(0.01);
    k1 = medfilt2(f, [3,3]);        % 3x3 median filtering
    k2 = bwmorph(k1, 'close');      % morphological closing
    figure(1);
    imshow(k2);
end
```
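One thing worth checking in the code above: `grayframe - grayframe1` is computed on uint8 frames, where negative differences saturate to 0 before `abs()` ever sees them, so half of the motion is discarded; also, with `k` running to 206, `read(obj, k+2)` can run past the last frame if the clip is short. A sketch of the differencing step only, reusing the variable names from the question (the rest of the loop stays as it is):

```
% Take the absolute difference before thresholding; imabsdiff avoids the
% uint8 saturation of plain subtraction (converting with im2double first
% would work as well).
difgrayframe  = imabsdiff(grayframe,  grayframe1);  % |f(k)   - f(k+1)|
difgrayframe2 = imabsdiff(grayframe1, grayframe2);  % |f(k+1) - f(k+2)|
level1 = graythresh(difgrayframe);
level2 = graythresh(difgrayframe2);
fdiff1 = im2bw(difgrayframe,  level1);
fdiff2 = im2bw(difgrayframe2, level2);
f = fdiff1 & fdiff2;                                % moving region, as before
```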
Help needed!!! After installing OpenCV 4.0.0 from source on Ubuntu 18.04, the following problem keeps appearing
```
./src/image_opencv.cpp: In function ‘IplImage* image_to_ipl(image)’:
./src/image_opencv.cpp:16:5: error: ‘IPlImage’ was not declared in this scope
 IPlImage *disp = cvCreateImage(cvSize(im.w,im.h), IPL_DEPTH_8U, im.c);
 ^~~~~~~~
compilation terminated due to -Wfatal-errors.
Makefile:86: recipe for target 'obj/image_opencv.o' failed
make: *** [obj/image_opencv.o] Error 1
make: *** Waiting for unfinished jobs....
```
But I already added #include "opencv2/imgproc/imgproc_c.h" to image_opencv.cpp in the src folder. I really do not know what else to try.
Problems encountered while developing an IM system with Spring Boot + WebSocket
I am building an IM system for a multi-merchant platform, covering customer service for different merchants and a social mini program. On the backend I implemented the IM feature with WebSocket; push works and users can send each other chat messages. One thing I do not understand: the receiving user may not be online, in which case the push fails. Where should the pending message be stored in that case, Redis or the database? And then, when the recipient comes online, should the server look up whatever is waiting to be pushed and deliver it?
A C program produces the error value 1.#QNAN0000000; could someone take a look at where the problem is? Thanks.
![图片说明](https://img-ask.csdn.net/upload/201911/11/1573471041_542290.png) # 数值分析大作业 ## c语言 这个是数值分析大作业,我是按照书上一步一步编写的,前366行都是没问题的,问题就出在双步位移QR分解那里,但是我真的无能为力了,感谢大佬们相救,感谢。 ``` 代码如下:(有些printf是我测试用的) #include<stdio.h> #include<math.h> const int n=10,L=10; double err=1e-12; //定义主函数 int main() { int i,j; double A[n][n],B[n][n]; //B[][]是为保证矩阵A(n-1)[][]不被破坏的中间矩阵 //调用函数声明 void faketriangle(double a[n][n]); //构造拟上三角化矩阵函数 void QR(double a[n][n]); void doublestepQR(double a[n][n]); printf("矩阵A为:\n"); //定义矩阵A for(i=0;i<n;i++) { for(j=0;j<n;j++) { if(i==j) { A[i][j]=1.52*cos(1+i+1.2*(1+j)); }else { A[i][j]=sin(0.5*(1+i)+0.2*(1+j)); } printf("%0.12f\t",A[i][j]); } printf("\n"); } printf("\n"); printf("矩阵A(n-1)为:\n"); //调用构造拟上三角化矩阵函数A(n-1) faketriangle(A); for(i=0;i<n;i++) { for(j=0;j<n;j++) { B[i][j]=A[i][j]; } } printf("\n"); printf("矩阵A(n-1)分解为QR矩阵\n"); QR(A); printf("\n"); doublestepQR(B); } //构造拟上三角化矩阵函数A(n-1) 函数主体 void faketriangle(double a[n][n]) { int i,j,r; double sum,d,c,h,t; double u[n],w[n],q[n],p[n]; //算法开始迭代 for(r=0;r<n-2;r++) { sum=0; for(i=r+2;i<n;i++) { sum+=fabs(a[i][r]); } //判断是否满足a[i][r]=0(i=r+2,..,n)? if(sum>0) { sum=0; //计算d for(i=r+1;i<n;i++) { sum+=a[i][r]*a[i][r]; } d=sqrt(sum); //计算c if(a[r+1][r]>0) { c=-d; }else { c=d; } //计算h h=c*c-c*a[r+1][r]; //向量u[r]的建立 for(i=0;i<n;i++) { if(i<r+1) { u[i]=0; }else if(i==r+1) { u[i]=a[i][r]-c; }else { u[i]=a[i][r]; } // printf("%0.12f\n",u[i]); } //求解向量p for(i=0;i<n;i++) { sum=0; for(j=0;j<n;j++) { sum+=a[j][i]*u[j]; } p[i]=sum/h; // printf("%0.12f\n",p[i]); } // //求解矩阵q for(i=0;i<n;i++) { sum=0; for(j=0;j<n;j++) { sum+=a[i][j]*u[j]; } q[i]=sum/h; // printf("%0.12f\n",q[i]); } //求t sum=0; for(i=0;i<n;i++) { sum+=p[i]*u[i]; } t=sum/h; // printf("%0.12f\n",t); //求w[] for(i=0;i<n;i++) { w[i]=q[i]-t*u[i]; // printf("%0.12f\n",w[l]); } //求a(r+1) for(i=0;i<n;i++) { for(j=0;j<n;j++) { a[i][j]=a[i][j]-(w[i]*u[j]+u[i]*p[j]); // printf("%0.12f\t",a[i][j]); } // printf("\n"); } } } //输出矩阵A(n-1) for(i=0;i<n;i++) { for(j=0;j<n;j++) { printf("%0.12f\t",a[i][j]); } printf("\n"); } } //将 A(n-1)矩阵进行QR分解 void QR(double a[n][n]) { int i,j,k,r; double sum,d,c,h; double Q[n][n],u[n],w[n],p[n],R[n][n],b[n][n]; //定义矩阵Q[][] for(i=0;i<n;i++) { for(j=0;j<n;j++) { if(i==j) { Q[i][j]=1; }else { Q[i][j]=0; } // printf("%0.12f\t",Q[i][j]); } // printf("\n"); } // printf("\n"); //算法迭代 for(r=0;r<n-1;r++) { sum=0; for(i=r+1;i<n;i++) { sum+=fabs(a[i][r]); } if(sum>0) { //计算d sum=0; for(i=r;i<n;i++) { sum+=a[i][r]*a[i][r]; } d=sqrt(sum); //计算c if(a[r][r]>0) { c=-d; }else { c=d; } //计算h h=c*(c-a[r][r]); //构造向量u[] for(i=0;i<n;i++) { if(i<r) { u[i]=0; }else if(i==r) { u[i]=a[i][r]-c; }else { u[i]=a[i][r]; } } //计算w[] for(i=0;i<n;i++) { sum=0; for(j=0;j<n;j++) { sum+=Q[i][j]*u[j]; } w[i]=sum; } //计算Q(r+1) for(i=0;i<n;i++) { for(j=0;j<n;j++) { Q[i][j]=Q[i][j]-w[i]*u[j]/h; } } //计算P[] for(j=0;j<n;j++) { sum=0; for(i=0;i<n;i++) { sum+=a[i][j]*u[i]/h; } p[j]=sum; } //计算a[r+1] for(i=0;i<n;i++) { for(j=0;j<n;j++) { a[i][j]=a[i][j]-u[i]*p[j]; } } } } printf("Q矩阵为:\n"); //输出Q[][]、R[][] for(i=0;i<n;i++) { for(j=0;j<n;j++) { printf("%0.12f\t",Q[i][j]); } printf("\n"); } printf("\n"); printf("R矩阵为:\n"); for(i=0;i<n;i++) { for(j=0;j<n;j++) { R[i][j]=a[i][j]; printf("%0.12f\t",R[i][j]); } printf("\n"); } printf("RQ相乘后的矩阵为:\n"); //求解R[][]*Q[][] for(i=0;i<n;i++) { for(j=0;j<n;j++) { sum=0; for(k=0;k<n;k++) { sum+=R[i][k]*Q[k][j]; } b[i][j]=sum; printf("%0.12f\t",b[i][j]); } printf("\n"); } // printf("QR相乘后的矩阵为:\n"); // //求解Q[][]*R[][] // for(i=0;i<n;i++) // { // for(j=0;j<n;j++) // { // sum=0; // 
for(k=0;k<n;k++) // { // sum+=Q[i][k]*R[k][j]; // } // b[i][j]=sum; // printf("%0.12f\t",b[i][j]); // } // printf("\n"); // } } //以上没问题 //定义复数结构体 struct complex { double re; double im; }; //对A(n-1)进行双步位移的QR分解 void doublestepQR(double a[n][n]) { //M[][]QR分解函数声明 void MQR(double a[n][n],double M[n][n],int m); int i,j,k,m,r,l; double A[n][n],M[n][n],I[n][n],A2[n][n]; double x,y,z,sum; double s,t; struct complex lambda[n]; struct complex s1,s2; k=0;m=n-1; step3: if(fabs(a[m][m-1])<=err) { lambda[m].re=a[m][m]; lambda[m].im=0; m--; goto step4; }else { goto step5; } step4: if(m==0) { lambda[m].re=a[0][0]; lambda[m].im=0; goto step11; }else if(m==-1) { goto step11; }else { goto step3; } step5: x=a[m-1][m-1]+a[m][m]; y=a[m-1][m-1]*a[m][m]-a[m-1][m]*a[m][m-1]; z=x*x-4*y; if(z>=0) { z=sqrt(z); s1.re=(x+z)/2; s1.im=0; s2.re=(x-z)/2; s2.im=0; }else { z=sqrt(fabs(z)); s1.re=(x)/2; s1.im=(z)/2; s2.re=x/2; s2.im=(-z)/2; } step6: if(m==1) { lambda[m].re=s1.re; lambda[m].im=0; lambda[m-1].re=s2.re; lambda[m-1].im=s2.im; goto step11; }else { goto step7; } step7: if(fabs(a[m-1][m-2])<=err) { if(z>=0) { lambda[m-1].re=(x+sqrt(z))/2; //两个特征值 lambda[m-1].im=0; lambda[m-2].re=(x-sqrt(z))/2; lambda[m-2].im=0; }else { lambda[m-1].re=(x)/2; lambda[m-1].im=(sqrt(fabs(z)))/2; lambda[m-2].re=x/2; lambda[m-2].im=(-sqrt(fabs(z)))/2; } m=m-2; goto step4; }else { goto step8; } step8: if(k==L) { goto step12; }else { goto step9; } step9: // for(i=0;i<m;i++) // { // for(j=0;j<m;j++) // { // a[i][j]=a[i][j]; // } // } s=a[m-1][m-1]+a[m][m]; t=a[m-1][m-1]*a[m][m]-a[m][m-1]*a[m-1][m]; //定义矩阵I[][] for(i=0;i<m;i++) { for(j=0;j<m;j++) { if(i==j) { I[i][j]=1; }else { I[i][j]=0; } } } // printf("%M[][]为:\n"); //计算矩阵M[][] for(i=0;i<m;i++) { for(j=0;j<m;j++) { sum=0; for(l=0;l<m;l++) { sum+=a[i][l]*a[l][j]; } M[i][j]=sum-s*a[i][j]+t*I[i][j]; printf("%0.12f\t",M[i][j]); } printf("\n"); } printf("\n"); //调用M[][]QR分解、计算A(k+1)的函数 MQR(a,M,m); step10: k++; goto step3; step11: printf("特征值已计算完毕。\n"); for(i=0;i<n;i++) { printf("%0.12f+%0.12fi\n",lambda[i].re,lambda[i].im); } for(i=0;i<n;i++) { if(lambda[i].im==0) { // vector(); } } step12: printf("未得到所有特征值。\n"); } //M[][]QR分解、计算A(k+1)的函数 void MQR(double a[n][n],double M[n][n],int m) { int i,j,r; double sum,d,c,h,t; double B[n][n],C[n][n],p[n],q[n],v[n],w[n],u[n]; printf("B:\n"); for(i=0;i<m;i++) { for(j=0;j<m;j++) { B[i][j]=M[i][j]; printf("%f\t",B[i][j]); } printf("\n"); } for(i=0;i<m;i++) { for(j=0;j<m;j++) { C[i][j]=a[i][j]; } } //循环计算矩阵A(k+1) for(r=0;r<m;r++) //r的范围 { sum=0; for(i=r+1;i<m;i++) { sum+=fabs(B[i][r]); } printf("sum:\n"); printf("%0.12f\n",sum); //sum有问题 printf("\n\n\n"); printf("r:%d\n\n\n\n\n",r); if(sum>0) { //计算d sum=0; for(i=r;i<m+1;i++) { sum+=B[i][r]*B[i][r]; printf("B:%0.12f\n",B[i][r]); //B[][]? printf("\n"); } printf("sum:%0.12f\n",sum); //sum有问题 fiest!! 
printf("\n"); d=sqrt(sum); printf("%0.12f\n",d); //d有问题 printf("\n"); //计算c if(B[r][r]>0) { c=-d; }else { c=d; } printf("%0.12f\n",c); //c有问题 printf("\n"); //计算h h=c*(c-B[r][r]); printf("%0.12f\n",h); //h有问题 printf("\n"); //构造向量u[] for(i=0;i<m;i++) { if(i<r) { u[i]=0; }else if(i==r) { u[i]=B[i][r]-c; }else { u[i]=B[i][r]; } } //计算v[] for(i=0;i<m;i++) { sum=0; for(j=r;j<m;j++) { sum+=B[j][i]*u[j]; } v[i]=sum/h; } //计算B(r+1) for(i=0;i<m;i++) { for(j=0;j<m;j++) { B[i][j]=B[i][j]-u[i]*v[j]; } } //计算p[] for(i=0;i<m;i++) { sum=0; for(j=r;j<m;j++) { sum+=C[j][i]*u[j]; } p[i]=sum/h; } //计算q[] for(i=0;i<m;i++) { sum=0; for(j=r;j<m;j++) { sum+=C[i][j]*u[j]; } q[i]=sum/h; } //计算t sum=0; for(i=r;i<m;i++) { sum+=p[i]*u[i]; } t=sum/h; //计算w[] for(i=0;i<m;i++) { w[i]=q[i]-t*u[i]; } //计算C[r+1] for(i=0;i<m;i++) { for(j=0;j<m;j++) { C[i][j]=C[i][j]-w[i]*u[j]-u[i]*p[j]; } } }else { ; } } for(i=0;i<m;i++) { for(j=0;j<m;j++) { a[i][j]=C[i][j]; printf("%f\t",a[i][j]); } printf("\n"); } } ```
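For reference, the `step9` block in `doublestepQR` above is forming the standard double-shift matrix. In the notation of the code, with $A_k$ the current quasi-upper-triangular iterate and $m$ the index of its trailing row, it computes

$$s = a_{m-1,m-1} + a_{m,m},\qquad t = a_{m-1,m-1}\,a_{m,m} - a_{m,m-1}\,a_{m-1,m},\qquad M_k = A_k^2 - s\,A_k + t\,I,$$

and `MQR` then applies Householder transformations built from the columns of $M_k$ to $A_k$. One detail worth checking against this: inside `MQR` the loop `for(i=r;i<m+1;i++) sum+=B[i][r]*B[i][r];` also reads `B[m][r]`, but `B` was only filled for indices 0..m-1, so `sum`, `d`, `c` and `h` pick up an uninitialized value; that would explain the suspicious `sum` printouts flagged in the comments.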
A question about a MATLAB function file
我下面的MATLAB程序,我想输出三个结果,为啥结果只会出现一个,就是b,不会出现其他的结果呢?不知道是什么原因,请大神帮帮我,谢谢了。 function[b,e,h]=lvboshibie(a) a=imread('lena.bmp'); %figure;imshow(a); %title('原图像'); a=im2double(a);%将数变为double型数 b=imnoise(a,'gaussian',0.015); C=im2double(b); F=im2col(C,[3,3],'distinct');%将加躁图像分割成3*3的小窗口 [i,j]=size(F); m=zeros(i,j); n=size(i,j); for i=1:9 for j=1:5180 m(i,j)=F(i,j)-F(5,j);%用3*3的窗口中的中心值对窗口中的数据求出估计的受躁程度 m(5,j)=0.001; n(i,j)=m(i,j)./F(5,j); end end %为用accumarray函数把数据变回原来的排列方式做准备 %制造c矩阵 K=[1 1 0;1 2 0;1 3 0;2 1 0;2 2 0;2 3 0;3 1 0;3 2 0;3 3 0]; A=K; for ii=1:5179; A=[A;K]; end g=zeros(1,5180); F=[1:5180]; F=[g,g,F]; F=[F;F;F;F;F;F;F;F;F]; F=reshape(F,[46620,3]); c=A+F; val=reshape(n,[46620,1]); A=accumarray(c,val);%使用此函数把原来用im2col变成9*5180的矩阵变为一个个3*3的小矩阵 D=reshape(A,[3,15540]); %将矩阵变为原来的222*210的形式 M = []; B=[]; for i = 1:3:15538 temp=D(:,i:i+2); M = [M; temp]; % 使矩阵变为15540*3 end for j=1:222:15539 TEMP=M(j:j+221,:); B = [B, TEMP]; % 使矩阵变为220*210 end %使矩阵为222*208 B=B(1:222,1:208); E=B+100; d=100.*a;%求出估计的灰度时用数据 r=d./E; e=im2uint8(r); [m,n]=size(e); for i=1:m for j=1:n if (e(i,j)>=0)&&(e(i,j)<=50) u=23; q=23/3; v(i,j)=e(i,j)-u; V(1)=2*q^2; elseif (e(i,j)>=51)&&(e(i,j)<=100) u=83; q=47/3; v(i,j)=e(i,j)-u; V(2)=2*q^2; else u=117; q=138/3; v(i,j)=e(i,j)-u; V(3)=2*q^2; end end end v=im2double(v); Q=v.^2; for i=1:m for j=1:n if (e(i,j)>=0)&&(e(i,j)<=50) H(i,j)=im2double(Q(i,j)/V(1)); N(i,j)=exp(-H(i,j)); elseif (e(i,j)>=51)&&(e(i,j)<=100) H(i,j)=im2double(Q(i,j)/V(2)); N(i,j)=exp(-H(i,j)); else H(i,j)=im2double(Q(i,j)/V(3)); N(i,j)=exp(-H(i,j)); end end end s=ones(224,210); l=s*26; l(2:223,2:209)=e; o=zeros(224,210); o(2:223,2:209)=N; [m,n]=find(l<=25); q=size(m); for i=1:q x(m(i),n(i))=l(m(i)-1,n(i)-1)*o(m(i)-1,n(i)-1)+l(m(i)-1,n(i))*o(m(i)-1,n(i))+l(m(i)-1,n(i)+1)*o(m(i)-1,n(i)+1)+l(m(i),n(i)-1)*o(m(i),n(i)-1)+l(m(i),n(i))*o(m(i),n(i))+l(m(i),n(i)+1)*o(m(i),n(i)+1)+l(m(i)+1,n(i)-1)*o(m(i),n(i)-1)+l(m(i)+1,n(i))*o(m(i)+1,n(i))+l(m(i)+1,n(i)+1)*o(m(i)+1,n(i)+1); y(m(i),n(i))=o(m(i)+1,n(i))+o(m(i),n(i))+o(m(i)-1,n(i))+o(m(i)+1,n(i)-1)+o(m(i)-1,n(i)-1)+o(m(i),n(i)-1)+o(m(i)-1,n(i)+1)+o(m(i),n(i)+1)+o(m(i)+1,n(i)+1); l(m(i),n(i))=x(m(i),n(i))/y(m(i),n(i)); end h=l(2:223,2:209); h=round(h); %figure,imshow(h,[]); %title('第二次去噪效果'); b e h end
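One possibility worth ruling out before digging into the function body: a MATLAB function only returns as many outputs as the caller requests, so invoking `lvboshibie(a)` with no output list (or assigning it to a single variable) hands back only the first return value `b`. A minimal usage sketch, assuming lena.bmp is on the path as in the code:

```
a = imread('lena.bmp');
b1 = lvboshibie(a);          % only the first output, b, comes back
[b, e, h] = lvboshibie(a);   % request all three outputs explicitly
figure, imshow(h, []);       % now e and h exist in the caller's workspace too
```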
Could someone help me figure out how this deadlock formed?
按道理主键+RR不会形成死锁啊 ``` InnoDB: * WE ROLL BACK TRANSACTION (1) InnoDB: * (2) WAITING FOR THIS LOCK TO BE GRANTED: InnoDB: * (2) HOLDS THE LOCK(S): InnoDB: * (2) TRANSACTION: InnoDB: InnoDB: * (1) WAITING FOR THIS LOCK TO BE GRANTED: InnoDB: Transactions deadlock detected, dumping detailed information. update examinees set token = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOlwvXC9leGFtLmNvZGVwa3UuY29tXC9hcGlcL2V4YW1pbmVlXC9sb2dpbiIsImlhdCI6MTU3NjM3MjE4MCwiZXhwIjoxNjEyMzcyMTgwLCJuYmYiOjE1NzYzNzIxODAsImp0aSI6Im15TGZ6NmhqVFJzYTNjb1EiLCJzdWIiOjg4NCwicHJ2IjoiNDY3MGJjYmM4YzU5NTJjZTdmN2ExZjQ4OTliOTU0YzU1YjU2NDMyZCJ9.pIIe2xY-eh_3_HhUOURdj325sq9Od1koOVPlRKZ6hXM', login_count = 7, last_login_at = '2019-12-15 09:09:40', examinees.updated_at = '2019-12-15 09:09:40' where id = 884 1: len 6; hex 0000003d236f; asc =#o;; 5: len 15; hex 38353836343036304071712e636f6d; asc 85864060@qq.com;; 14: len 30; hex 68747470733a2f2f6578616d2d313235333338363431342e66696c652e6d; asc https://exam-1253386414.file.m; (total 92 bytes); 10: len 18; hex 343431333233323030383130303333343131; asc 441323200810033411;; 14: len 30; hex 68747470733a2f2f6578616d2d313235333338363431342e66696c652e6d; asc https://exam-1253386414.file.m; (total 92 bytes); RECORD LOCKS space id 301 page no 83 n bits 88 index PRIMARY of table exam.examinees trx id 4006767 lock_mode X locks rec but not gap waiting Record lock, heap no 12 PHYSICAL RECORD: n_fields 27; compact format; info bits 0 3: len 9; hex e69d8ee59bbde7909b; asc ;; 25: len 4; hex 5d68cbfc; asc ]h ;; 18: len 24; hex e883a1e88081e5b888e280ad203135333237323433333934; asc 15327243394;; 7: len 30; hex 24327924313024526e656676315a48716755427072383455456c71732e45; asc $2y$10$Rnefv1ZHqgUBpr84UElqs.E; (total 60 bytes); mysql tables in use 1, locked 1 LOCK WAIT 19 lock struct(s), heap size 1136, 8 row lock(s), undo log entries 224 3: len 6; hex e69cb1e888aa; asc ;; 7: len 30; hex 243279243130247756783376524c2e38756457306945644f6e5061734f53; asc $2y$10$wVx3vRL.8udW0iEdOnPasOS; (total 60 bytes); * (1) TRANSACTION: 0: len 4; hex 00000374; asc t;; 13: len 0; hex ; asc ;; 22: len 4; hex 00000008; asc ;; 23: len 4; hex 5df58817; asc ] ;; Record lock, heap no 19 PHYSICAL RECORD: n_fields 27; compact format; info bits 0 8: len 1; hex 01; asc ;; 18: len 24; hex e883a1e88081e5b888e280ad203135333237323433333934; asc 15327243394;; 11: len 1; hex 81; asc ;; 13: len 0; hex ; asc ;; 16: len 1; hex 81; asc ;; 20: len 4; hex 00000000; asc ;; 22: len 4; hex 00000004; asc ;; 24: SQL NULL; 6: len 11; hex 3133383032353838323832; asc 13802588282;; 21: len 30; hex 65794a30655841694f694a4b563151694c434a68624763694f694a49557a; asc eyJ0eXAiOiJKV1QiLCJhbGciOiJIUz; (total 337 bytes); 24: SQL NULL; TRANSACTION 4006767, ACTIVE 1426 sec starting index read 19: len 4; hex 00000013; asc ;; RECORD LOCKS space id 301 page no 28 n bits 88 index PRIMARY of table exam.examinees trx id 4005686 lock_mode X locks rec but not gap waiting Record lock, heap no 19 PHYSICAL RECORD: n_fields 27; compact format; info bits 0 MySQL thread id 10516222, OS thread handle 139667263854336, query id 300969200 *.190 exam updating 10: len 18; hex 343431333233323030383130303333343131; asc 441323200810033411;; 11: len 1; hex 81; asc ;; 12: len 4; hex 00000000; asc ;; 16: len 1; hex 81; asc ;; mysql tables in use 1, locked 1 MySQL thread id 10520696, OS thread handle 139667256985344, query id 301142683 *.190 exam updating 26: len 4; hex 5df58817; asc ] ;; 3: len 6; hex e69cb1e888aa; asc ;; 13: len 0; hex ; asc ;; 10: len 18; hex 
343430333034323030383032303230343334; asc 440304200802020434;; 12: len 4; hex 00000000; asc ;; 17: len 1; hex 01; asc ;; 15: len 10; hex 323030382d31302d3033; asc 2008-10-03;; 19: len 4; hex 00000013; asc ;; 20: len 4; hex 00000000; asc ;; 7: len 30; hex 243279243130247756783376524c2e38756457306945644f6e5061734f53; asc $2y$10$wVx3vRL.8udW0iEdOnPasOS; (total 60 bytes); 9: len 30; hex 5b2268747470733a5c2f5c2f6578616d2d313235333338363431342e6669; asc ["https://exam-1253386414.fi; (total 205 bytes); 28 lock struct(s), heap size 3520, 28 row lock(s), undo log entries 1850 26: len 4; hex 5df58817; asc ] ;; 23: len 4; hex 5df5863f; asc ] ?;; 17: len 1; hex 02; asc ;; 26: len 4; hex 5df5863f; asc ] ?;; RECORD LOCKS space id 301 page no 28 n bits 88 index PRIMARY of table exam.examinees trx id 4006767 lock_mode X locks rec but not gap 0: len 4; hex 00000374; asc t;; 4: len 9; hex e69e97e7bfa0e78fa0; asc ;; 15: len 10; hex 323030382d30322d3032; asc 2008-02-02;; TRANSACTION 4005686, ACTIVE 1709 sec starting index read 2: len 7; hex 29000001cf11da; asc ) ;; 8: len 1; hex 01; asc ;; 4: len 9; hex e69e97e7bfa0e78fa0; asc ;; 5: len 15; hex 38353836343036304071712e636f6d; asc 85864060@qq.com;; 9: len 30; hex 5b2268747470733a5c2f5c2f6578616d2d313235333338363431342e6669; asc ["https://exam-1253386414.fi; (total 205 bytes); 18: len 24; hex e883a1e88081e5b888e280ad203135333237323433333934; asc 15327243394;; 6: len 11; hex 3138313239393331303638; asc 18129931068;; 14: len 30; hex 68747470733a2f2f6578616d2d313235333338363431342e66696c652e6d; asc https://exam-1253386414.file.m; (total 92 bytes); update examinees set token = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOlwvXC9leGFtLmNvZGVwa3UuY29tXC9hcGlcL2V4YW1pbmVlXC9sb2dpbiIsImlhdCI6MTU3NjM3MzQ1NCwiZXhwIjoxNjEyMzczNDU0LCJuYmYiOjE1NzYzNzM0NTQsImp0aSI6IlhXWmZNSUdNTjVLSTd6U0kiLCJzdWIiOjE2MjYsInBydiI6IjQ2NzBiY2JjOGM1OTUyY2U3ZjdhMWY0ODk5Yjk1NGM1NWI1NjQzMmQifQ.zUGTJnfnGREKzbkASKjVtPVaac_3bMXXUVuDs7vdDKE', login_count = 4, last_login_at = '2019-12-15 09:30:54', examinees.updated_at = '2019-12-15 09:30:54' where id = 1626 1: len 6; hex 0000003d236f; asc =#o;; 2: len 7; hex 29000001cf11da; asc ) ;; 25: len 4; hex 5d68cbfc; asc ]h ;; 6: len 11; hex 3138313239393331303638; asc 18129931068;; 15: len 10; hex 323030382d31302d3033; asc 2008-10-03;; 23: len 4; hex 5df58817; asc ] ;; 0: len 4; hex 0000065a; asc Z;; 11: len 1; hex 81; asc ;; 12: len 4; hex 00000000; asc ;; 15: len 10; hex 323030382d31302d3033; asc 2008-10-03;; 23: len 4; hex 5df58817; asc ] ;; 0: len 4; hex 0000065a; asc Z;; 9: len 30; hex 5b2268747470733a5c2f5c2f736372617463682d776f726b732d31323533; asc ["https://scratch-works-1253; (total 215 bytes); 21: len 30; hex 65794a30655841694f694a4b563151694c434a68624763694f694a49557a; asc eyJ0eXAiOiJKV1QiLCJhbGciOiJIUz; (total 337 bytes); 16: len 1; hex 81; asc ;; 21: len 30; hex 65794a30655841694f694a4b563151694c434a68624763694f694a49557a; asc eyJ0eXAiOiJKV1QiLCJhbGciOiJIUz; (total 339 bytes); 17: len 1; hex 01; asc ;; 4: len 6; hex e983ade99c9e; asc ;; 5: len 15; hex 31323038383438374071712e636f6d; asc 12088487@qq.com;; 19: len 4; hex 00000013; asc ;; 24: SQL NULL; 2: len 7; hex 38000001a3284a; asc 8 (J;; 20: len 4; hex 00000000; asc ;; 25: len 4; hex 5dea312c; asc ] 1,;; ```
Which blind deconvolution algorithm does MATLAB's built-in deconvblind function use? I would like to understand the principle and the derivation.
% Parse inputs to verify valid function calling syntaxes and arguments
[J,P,NUMIT,DAMPAR,READOUT,WEIGHT,sizeI,classI,sizePSF,FunFcn,FunArg] = ...
    parse_inputs(varargin{:});

% 1. Prepare parameters for iterations
%
% Create indexes for image according to the sampling rate
idx = repmat({':'},[1 length(sizeI)]);

wI = max(WEIGHT.*(READOUT + J{1}),0);% at this point - positivity constraint
fw = fftn(WEIGHT);
clear WEIGHT;
DAMPAR22 = (DAMPAR.^2)/2;

% 2. L_R Iterations
%
lambda = 2*any(J{4}(:)~=0);
for k = (lambda + 1) : (lambda + NUMIT),

  % 2.a Make an image and PSF predictions for the next iteration
  if k > 2,% image
    lambda = (J{4}(:,1).'*J{4}(:,2))/(J{4}(:,2).'*J{4}(:,2) + eps);
    lambda = max(min(lambda,1),0); % stability enforcement, keeps lambda within (0,1)
  end
  Y = max(J{2} + lambda*(J{2} - J{3}),0);% image positivity constraint

  if k > 2,% PSF
    lambda = (P{4}(:,1).'*P{4}(:,2))/(P{4}(:,2).'*P{4}(:,2) + eps);
    lambda = max(min(lambda,1),0);% stability enforcement
  end
  B = max(P{2} + lambda*(P{2} - P{3}),0);% PSF positivity constraint
  sumPSF = sum(B(:));
  B = B/(sum(B(:)) + (sumPSF==0)*eps);% normalization is a necessary constraint,
  % because given only input image, the algorithm cannot even know how much
  % power is in the image vs PSF. Therefore, we force PSF to satisfy this
  % type of normalization: sum to one.

  % 2.b Make core for the LR estimation
  CC = corelucy(Y,psf2otf(B,sizeI),DAMPAR22,wI,READOUT,1,idx,[],[]);

  % 2.c Determine next iteration image & apply positivity constraint
  J{3} = J{2};
  H = psf2otf(P{2},sizeI);
  scale = real(ifftn(conj(H).*fw)) + sqrt(eps);
  J{2} = max(Y.*real(ifftn(conj(H).*CC))./scale,0);
  clear scale;
  J{4} = [J{2}(:)-Y(:) J{4}(:,1)];
  clear Y H;

  % 2.d Determine next iteration PSF & apply positivity constraint + normalization
  P{3} = P{2};
  H = fftn(J{3});
  scale = otf2psf(conj(H).*fw,sizePSF) + sqrt(eps);
  P{2} = max(B.*otf2psf(conj(H).*CC,sizePSF)./scale,0);
  clear CC H;

  sumPSF = sum(P{2}(:));
  P{2} = P{2}/(sumPSF + (sumPSF==0)*eps);

  if ~isempty(FunFcn),
    FunArg{1} = P{2};
    P{2} = feval(FunFcn,FunArg{:});
  end;
  P{4} = [P{2}(:)-B(:) P{4}(:,1)];
end
clear fw wI;

% 3. Convert the right array (for cell it is first array, for notcell it is
% second array) to the original image class & output the whole
num = 1 + strcmp(classI{1},'notcell');
if ~strcmp(classI{2},'double'),
  J{num} = images.internal.changeClass(classI{2},J{num});
end

if num == 2,% the input & output is NOT a cell
  P = P{2};
  J = J{2};
end;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Function: parse_inputs
function [J,P,NUMIT,DAMPAR,READOUT,WEIGHT,sizeI,classI,sizePSF,FunFcn,FunArg] ...
    = parse_inputs(varargin)
%
% Outputs:
% I=J{1}   the input array (could be any numeric class, 2D, 3D)
% P=P{1}   the operator that distorts the ideal image
%
% Defaults:
%
NUMIT = [];NUMIT_d = 10;   % Number of iterations, usually produces good
                           % result by 10.
DAMPAR = [];DAMPAR_d = 0;  % No damping is default
WEIGHT = [];               % All pixels are of equal quality, flat-field is one
READOUT = [];READOUT_d = 0;% Zero readout noise or any other
                           % back/fore/ground noise associated with CCD camera.
                           % Or the Image is corrected already for this noise by user.
FunFcn = '';FunFcn_d = '';
FunArg = {};FunArg_d = {};
funnum = [];funnum_d = nargin+1;

narginchk(2,inf);% no constraint on max number
                 % because of FUN args

% First, assign the inputs starting with the cell/not cell image & PSF
switch iscell(varargin{1}) + iscell(varargin{2}),
  case 0, % no-cell array is used to do a single set of iterations
    classI{1} = 'notcell';
    J{1} = varargin{1};% create a cell array in order to do the iterations
    P{1} = varargin{2};
  case 1,
    error(message('images:deconvblind:IandInitpsfMustBeOfSameType'))
  case 2,% input cell is used to resume the interrupted iterations or
    classI{1} = 'cell';% to interrupt the iteration to resume them later
    J = varargin{1};
    P = varargin{2};
    if length(J)~=length(P),
      error(message('images:deconvblind:IandInitpsfMustBeOfSameSize'))
    end
end;

% check the Image, which is the first array of the J-cell
[sizeI, sizePSF] = padlength(size(J{1}), size(P{1}));
classI{2} = class(J{1});

validateattributes(J{1},{'uint8' 'uint16' 'double' 'int16' 'single'},...
    {'real' 'nonempty' 'finite'},mfilename,'I',1);

if prod(sizeI)<2,
  error(message('images:deconvblind:inputImageMustHaveAtLeast2Elements'))
elseif ~isa(J{1},'double'),
  J{1} = im2double(J{1});
end

% check the PSF, which is the first array of the P-cell
validateattributes(P{1},{'uint8' 'uint16' 'double' 'int16' 'single'},...
    {'real' 'nonempty' 'finite' 'nonzero'},mfilename,'INITPSF',2);

if prod(sizePSF)<2,
  error(message('images:deconvblind:initPSFMustHaveAtLeast2Elements'))
elseif ~isa(P{1},'double'),
  P{1} = double(P{1});
end

% now since the image & PSF are OK & double, we assign the rest of the J & P cells
len = length(J);
if len == 1,% J = {I} will be reassigned to J = {I,I,0,0}
  J{2} = J{1};
  J{3} = 0;
  J{4}(prod(sizeI),2) = 0;
  P{2} = P{1};
  P{3} = 0;
  P{4}(prod(sizePSF),2) = 0;
elseif len ~= 4,% J = {I,J,Jm1,gk} has to have 4 or 1 arrays
  error(message('images:deconvblind:inputCellsMustBe1or4ElementNumArrays'))
else % check if J,Jm1,gk are double in the input cell
  if ~all([isa(J{2},'double'),isa(J{3},'double'),isa(J{4},'double')]),
    error(message('images:deconvblind:ImageCellElementsMustBeDouble'))
  elseif ~all([isa(P{2},'double'),isa(P{3},'double'),isa(P{4},'double')]),
    error(message('images:deconvblind:psfCellElementsMustBeDouble'))
  end
end;

% Second, Find out if we have a function to put additional constraints on the PSF
%
function_classes = {'inline','function_handle','char'};
idx = [];
for n = 3:nargin,
  idx = strmatch(class(varargin{n}),function_classes);
  if ~isempty(idx),
    [FunFcn,msgStruct] = fcnchk(varargin{n}); %only works on char, making it inline
    if ~isempty(msgStruct)
      error(msgStruct)
    end
    FunArg = [{0},varargin(n+1:nargin)];
    try % how this function works, just in case.
      feval(FunFcn,FunArg{:});
    catch ME
      Ftype = {'inline object','function_handle','expression ==>'};
      Ffcnstr = {' ',' ',varargin{n}};
      error(message('images:deconvblind:userSuppliedFcnFailed', Ftype{ idx }, Ffcnstr{ idx }, ME.message))
    end
    funnum = n;
    break
  end
end

if isempty(idx),
  FunFcn = FunFcn_d;
  FunArg = FunArg_d;
  funnum = funnum_d;
end

%
% Third, Assign the inputs for general deconvolution:
%
if funnum>7
  error(message('images:validate:tooManyInputs',mfilename));
end

switch funnum,
  case 4,% deconvblind(I,PSF,NUMIT,fun,...)
    NUMIT = varargin{3};
  case 5,% deconvblind(I,PSF,NUMIT,DAMPAR,fun,...)
    NUMIT = varargin{3};
    DAMPAR = varargin{4};
  case 6,% deconvblind(I,PSF,NUMIT,DAMPAR,WEIGHT,fun,...)
    NUMIT = varargin{3};
    DAMPAR = varargin{4};
    WEIGHT = varargin{5};
  case 7,% deconvblind(I,PSF,NUMIT,DAMPAR,WEIGHT,READOUT,fun,...)
    NUMIT = varargin{3};
    DAMPAR = varargin{4};
    WEIGHT = varargin{5};
    READOUT = varargin{6};
end

% Forth, Check validity of the gen.conv. input parameters:
%
% NUMIT check number of iterations
if isempty(NUMIT),
  NUMIT = NUMIT_d;
else %verify validity
  validateattributes(NUMIT,{'double'},...
      {'scalar' 'positive' 'integer' 'finite'},...
      mfilename,'NUMIT',3);
end

% DAMPAR check damping parameter
if isempty(DAMPAR),
  DAMPAR = DAMPAR_d;
elseif (numel(DAMPAR)~=1) && ~isequal(size(DAMPAR),sizeI),
  error(message('images:deconvblind:damparMustBeSizeOfInputImage'));
elseif ~isa(DAMPAR,classI{2}),
  error(message('images:deconvblind:damparMustBeSameClassAsInputImage'));
elseif ~strcmp(classI{2},'double'),
  DAMPAR = im2double(DAMPAR);
end

if ~isfinite(DAMPAR),
  error(message('images:deconvblind:damparMustBeFinite'));
end

% WEIGHT check weighting
if isempty(WEIGHT),
  WEIGHT = ones(sizeI);
else
  numw = numel(WEIGHT);
  validateattributes(WEIGHT,{'double'},{'finite'},mfilename,'WEIGHT',5);
  if (numw ~= 1) && ~isequal(size(WEIGHT),sizeI),
    error(message('images:deconvblind:weightMustBeSizeOfInputImage'));
  elseif numw == 1,
    WEIGHT = repmat(WEIGHT,sizeI);
  end;
end

% READOUT check read-out noise
if isempty(READOUT),
  READOUT = READOUT_d;
elseif (numel(READOUT)~=1) && ~isequal(size(READOUT),sizeI),
  error(message('images:deconvblind:readoutMustBeSizeOfInputImage'));
elseif ~isa(READOUT,classI{2}),
  error(message('images:deconvblind:readoutMustBeSameClassAsInputImage'));
elseif ~strcmp(classI{2},'double'),
  READOUT = im2double(READOUT);
end

if ~isfinite(READOUT),
  error(message('images:deconvblind:readoutMustBeFinite'));
end;
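For context, the code above is the iteration loop and input parsing of the Image Processing Toolbox function deconvblind. Note that parse_inputs converts a non-double input image with im2double(J{1}), so a broken or shadowed im2double.m will also surface inside deconvblind. A minimal usage sketch for reference; the test image, blur kernel and iteration count below are made up for illustration:

```
% Blind Lucy-Richardson deconvolution, minimal sketch
I = im2double(imread('cameraman.tif'));        % any grayscale test image
PSF_true = fspecial('gaussian', 7, 2);         % "unknown" blur used to degrade I
blurred  = imfilter(I, PSF_true, 'symmetric', 'conv');
INITPSF  = ones(7)/49;                         % flat initial guess for the PSF
[J, PSF] = deconvblind(blurred, INITPSF, 10);  % 10 blind iterations
figure, imshowpair(blurred, J, 'montage');
title('blurred input vs. blind-deconvolution estimate');
```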
A question about the PIL Image library and NumPy arrays
```
import numpy as np
from PIL import Image

# Shouldn't the argument passed to np.array be a list or a tuple? Printing shows
# that what Image.open returns is neither, so why can it be passed in directly?
a = np.array(Image.open(r"C:\Users\21977\Desktop\QQ.jpg"))
print(a)

# The shapes of [255, 255, 255] and a do not match either, so how is this
# subtraction actually carried out?
a = [255, 255, 255] - a
im = Image.fromarray(a.astype("uint8"))
im.save(r"C:\Users\21977\Desktop\newQQ.jpg")
```
QQ Wallet integration keeps failing with a signature error
My site is being integrated with QQ Wallet payments. Everything is in place, but the API keeps returning a signature error. I have dumped the full request string, the signature and the XML and cannot find anything wrong. Does anyone have a signature-verification tool they could share, or could someone check whether anything below is incorrect?

At first I suspected the API secret key was wrong, so I reset it several times and confirmed the key is correct. Then I checked whether some parameter was missing, and that is not the case either.

The string built from the parameters sorted by key name, with the secret key appended:
```
body=121545445444&fee_type=CNY&mch_id=1508061891&nonce_str=1qva73hnb3q3ui5mz8d6fmbsf1b7og6v&notify_url=http://127.0.0.1:8088/pay/qqpay_notify.php&out_trade_no=202001081240075600250253&spbill_create_ip=127.0.0.1&total_fee=1000&trade_type=NATIVE&key=mQiw2feH4xqyd98q0B9ALdFxbxISo7iM
```
Its MD5 value:
```
3dfd38cdc138f4c2f80d992660f2c922
```
The request XML:
```
<xml>
 <out_trade_no>202001081240075600250253</out_trade_no>
 <body>121545445444</body>
 <fee_type><![CDATA[CNY]]></fee_type>
 <notify_url><![CDATA[http://127.0.0.1:8088/pay/qqpay_notify.php]]></notify_url>
 <spbill_create_ip><![CDATA[127.0.0.1]]></spbill_create_ip>
 <total_fee>1000</total_fee>
 <trade_type><![CDATA[NATIVE]]></trade_type>
 <mch_id>1508061891</mch_id>
 <nonce_str><![CDATA[1qva73hnb3q3ui5mz8d6fmbsf1b7og6v]]></nonce_str>
 <sign><![CDATA[3DFD38CDC138F4C2F80D992660F2C922]]></sign>
</xml>
```
As for the domain: I have also uploaded this to a server and tested it under the domain already approved in the QQ Wallet console; the result is the same as the local test, always SIGNERROR.
New to MATLAB: could someone explain this code and the meaning of its parameters?
function pushbutton3_Callback(hObject, eventdata, handles)
global tu
d=10; n=2;
im=double(tu);
[r,c,td]=size(im);
fr=im(:,:,1);
fg=im(:,:,2);
fb=im(:,:,3);
aftr=homofil(fr,d,r,c,n);
aftg=homofil(fg,d,r,c,n);
aftb=homofil(fb,d,r,c,n);
zztx=cat(3,aftr,aftg,aftb);
axes(handles.axes3);
imshow(zztx);
title('同态滤波增强效果图');

function im_eu=homofil(im,d,r,c,n)
% Butterworth high-pass filter (BHPF)
A=zeros(r,c);
for i=1:r
    for j=1:c
        A(i,j)=(((i-r/2).^2+(j-c/2).^2)).^(.5);
        H(i,j)=1/(1+((d/A(i,j))^(2*n)));
    end
end
% set the low- and high-frequency gain parameters
alphaL=0.0999;
alphaH=2.555;
H=((alphaH-alphaL).*H)+alphaL;
H=1-H;
im_l=log2(1+im);
im_f=fft2(im_l);
im_nf=H.*im_f;
im_n=abs(ifft2(im_nf));
im_e=exp(im_n);
immin=min(min(im_e));
immax=max(max(im_e));
im_eu=uint8((im_e-immin)*255/(immax-immin));
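In short: pushbutton3_Callback splits the global image tu into its R, G and B channels, runs each channel through homofil, and recombines them with cat(3,...). homofil performs homomorphic filtering with a Butterworth-type transfer function built from each pixel's distance to the spectrum centre (d acts as the cutoff distance, n as the filter order); the result is rescaled between alphaL and alphaH and then applied in the log domain: log, FFT, multiply by H, inverse FFT, exp, and finally a stretch back to uint8. A stripped-down sketch of the same pipeline on one grayscale channel; the image name is a placeholder and the parameter values are simply the ones used in the question:

```
% Minimal homomorphic-filtering sketch for a single grayscale channel
im = im2double(imread('pout.tif'));
[r, c] = size(im);
d = 10; n = 2; alphaL = 0.0999; alphaH = 2.555;   % values from the question

[J, I] = meshgrid(1:c, 1:r);                      % pixel coordinates
A = sqrt((I - r/2).^2 + (J - c/2).^2);            % distance to the array centre
H = 1 ./ (1 + (d ./ A).^(2*n));                   % Butterworth high-pass
H = (alphaH - alphaL) .* H + alphaL;              % rescale the gains
H = 1 - H;                                        % as in the original code
                                                  % (note: like the original, no fftshift)
im_log  = log2(1 + im);                           % log domain
im_filt = abs(ifft2(H .* fft2(im_log)));          % filter in the frequency domain
im_exp  = exp(im_filt);                           % leave the log domain
out = uint8(mat2gray(im_exp) * 255);              % same min/max stretch as the question
imshow(out); title('homomorphic filtering result');
```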
MATLAB reports "Undefined function or variable 'net'" when running the program
My environment is MATLAB 2016a + VS2015 + GPU on Windows. I want to run the training source code of a person re-identification project downloaded from GitHub, but it keeps reporting "Undefined function or variable 'net'", as shown in the screenshot below.
![图片说明](https://img-ask.csdn.net/upload/201809/14/1536887989_682483.png)
Source code: https://github.com/layumi/2016_person_re-ID.git
How should I deal with this? Many thanks. Here is my code:
```
function train_id_net_res_2stream(varargin)
% -------------------------------------------------------------------------
% Part 4.1: prepare the data
% -------------------------------------------------------------------------
% Load character dataset
imdb = load('./url_data.mat') ;
imdb = imdb.imdb;
% -------------------------------------------------------------------------
% Part 4.2: initialize a CNN architecture
% -------------------------------------------------------------------------
net = resnet52_2stream();
net.params(net.getParamIndex('fc751f')).learningRate = 0.01;
net.params(net.getParamIndex('fc751b')).learningRate = 0.2;
net.conserveMemory = true;
net.meta.normalization.averageImage = reshape([105.6920,99.1345,97.9152],1,1,3);
% -------------------------------------------------------------------------
% Part 4.3: train and evaluate the CNN
% -------------------------------------------------------------------------
opts.train.averageImage = net.meta.normalization.averageImage;
opts.train.batchSize = 48;
opts.train.continue = true;
opts.train.gpus = 1;  %Select gpu card. The gpu id in Matlab start from 1.
opts.train.prefetch = false ;
opts.train.expDir = './data/resnet52_2stream_drop0.9_new' ; % your model will store here
opts.train.learningRate = [0.1*ones(1,70),0.01*ones(1,5)] ;
opts.train.derOutputs = {'objective', 0.5,'objective_2', 0.5,'objective_final', 1} ;
opts.train.weightDecay = 0.0005;
opts.train.numEpochs = numel(opts.train.learningRate) ;
[opts, ~] = vl_argparse(opts.train, varargin) ;
% Call training function in MatConvNet
[~,~] = cnn_train_dag(net, imdb, @getBatch,opts) ;

% --------------------------------------------------------------------
function inputs = getBatch(imdb, batch,opts)
% --------------------------------------------------------------------
im1_url = imdb.images.data(batch) ;
label1 = imdb.images.label(:,batch) ;
batchsize = numel(batch);
% every epoch we will add negative pairs until 1:4
dividor = 2;
dividor = min(5,dividor*power(1.01,opts.epoch));
half = round(batchsize/dividor);
label_f = cat(1,ones(half,1,'single'),ones(batchsize-half,1,'single')*2);
% select half from same class, second half from different class;
batch2 = zeros(batchsize,1);
for i=1:batchsize
    if(i<=half)
        batch2(i) = rand_same_class(imdb, batch(i));
    else
        batch2(i) = rand_diff_class(imdb, batch(i));
    end
end
im2_url = imdb.images.data(batch2) ;
im1 = vl_imreadjpeg(im1_url,'Flip');
im2 = vl_imreadjpeg(im2_url,'Flip');
label2 = imdb.images.label(:,batch2) ;
%------------------------------process data
oim1 = zeros(224,224,3,batchsize,'single');
oim2 = zeros(224,224,3,batchsize,'single');
for i=1:batchsize
    x1 = randi(33); x2 = randi(33);
    y1 = randi(33); y2 = randi(33);
    tim1 = im1{i};
    tim2 = im2{i};
    temp1 = tim1(x1:x1+223,y1:y1+223,:);
    temp2 = tim2(x2:x2+223,y2:y2+223,:);
    oim1(:,:,:,i) = temp1;
    oim2(:,:,:,i) = temp2;
end
oim1 = bsxfun(@minus,oim1,opts.averageImage);
oim2 = bsxfun(@minus,oim2,opts.averageImage);
inputs = {'data',gpuArray(oim1),'data_2',gpuArray(oim2),'label',label1,'label_2',label2,'label_f',label_f};
```
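It is hard to be sure without seeing the screenshot, but one common cause is that the line net = resnet52_2stream(); never executed successfully, for example when the repository's model-definition files or MatConvNet are not on the path, or when only part of the file is run as a selection, so net does not exist by the time net.params(...) is reached. A rough set of checks, with hypothetical folder names that need to be replaced by your local paths:

```
% Rough sanity checks before calling train_id_net_res_2stream
which resnet52_2stream        % should print a .m file inside the cloned repo
exist('vl_nnconv', 'file')    % 3 means a compiled MEX file was found, i.e. MatConvNet is built

% If either check fails, put the code on the path and set MatConvNet up:
addpath(genpath('D:\code\2016_person_re-ID'));   % the cloned repository (hypothetical path)
run('D:\code\matconvnet\matlab\vl_setupnn.m');   % MatConvNet setup script (hypothetical path)
% vl_compilenn('enableGpu', true);               % (re)compile the MEX files if needed
```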
How to send logs from a client (Ubuntu 18.04, nxlog + graylog-collector-sidecar) to the server (CentOS 7, Graylog 2.0 + MongoDB + Elasticsearch)?
Work completed so far:
1. The Graylog web interface on the server is already reachable.
2. graylog-collector-sidecar is installed and configured on the client and starts normally.
3. The configuration file /etc/graylog/collector-sidecar/collector_sidecar.yml was modified as follows:
```
server_url: http://192.168.1.7:12900    # IP of the Graylog server
node_id: graylog-collector-sidecar
collector_id: file:/etc/graylog/collector-sidecar/collector-id
tags:
    - linux
    - apache
    - redis
update_interval: 10
log_path: /var/log/graylog/collector-sidecar
backends:
    - name: nxlog
      enabled: true
      binary_path: /usr/bin/nxlog
      configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
```
4. nxlog is installed and /etc/nxlog/nxlog.conf was modified as follows, but nxlog fails to start:
```
########################################
# Modules                              #
########################################
#<Extension _syslog>
#    Module      xm_syslog
#</Extension>

<Input in1>
    Module      im_file
    File        "/var/tmp/opencanary.log"   # the log file I actually want to collect
    SavePos     TRUE
</Input>

#<Input in2>
#    Module      im_tcp
#    Port        514
#</Input>

<Output fileout1>
    Module      om_udp
    Host        192.168.1.7
    Port        12201                       # matches the Graylog input port
</Output>

#<Output fileout2>
#    Module      om_file
#    File        "/var/log/nxlog/logmsg2.txt"
#</Output>

########################################
# Routes                               #
########################################
<Route 1>
    Path        in1 => fileout1
</Route>

#<Route tcproute>
#    Path        in2 => fileout2
#</Route>
```
5. Running systemctl start nxlog.service reports an error:
![图片说明](https://img-ask.csdn.net/upload/201912/24/1577199315_599493.png)

Question: what do I need to do so that Graylog actually receives the logs? Any pointers would be appreciated.
How do I define global variables in a MATLAB GUI and pass data between push buttons?
I need to implement the requirement shown in the figure below.
![图片说明](https://img-ask.csdn.net/upload/201908/28/1567007408_541348.png)
button1 loads an image:
```
axes(handles.axes1)
[filename,pathname]=uigetfile({'*.bmp;*.jpg;*.png;*.jpeg;*.tif'},'Pick an image','C:\Users');
str=[pathname filename];
if isequal(filename,0)||isequal(pathname,0)
    warndlg('选一张图片','Warning');
    return;
else
    I = imread(str);
    imshow(I);
end;
I=im2double(I);
```
button2 reads a value from edit1 and assigns it to d:
```
d = get(handles.edit1,'string');
d = str2num(d);
```
button3 runs a series of image-processing steps and finally shows the value f in edit2:
```
f=d/2;
f=num2str(f);
set(handles.edit2,'string',f);
```
Why is the value not displayed correctly? Is it because the d computed in button2 cannot be passed on to button3?
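The d computed inside button2's callback is only a local variable, so button3 never sees it. The usual GUIDE pattern is to store shared values in the handles structure and save it with guidata (setappdata/getappdata would also work). A minimal sketch, assuming a field name handles.d:

```
% --- button2 callback: read d from edit1 and store it for other callbacks
function pushbutton2_Callback(hObject, eventdata, handles)
d = str2double(get(handles.edit1, 'String'));
handles.d = d;             % put the value into the shared handles structure
guidata(hObject, handles); % important: save the modified handles back

% --- button3 callback: fetch d again and show the result in edit2
function pushbutton3_Callback(hObject, eventdata, handles)
d = handles.d;             % value stored by button2
f = d / 2;                 % ... your image processing here ...
set(handles.edit2, 'String', num2str(f));
```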
MATLAB moving-object detection program reports an error
I am doing moving-object detection; both the input video and the output video look fine. The main program is as follows:

clear data
disp('input video');
avi = VideoReader('samplevideo.avi');
numFrames = avi.NumberOfFrames;% number of frames in the video
vidHeight = avi.Height;
vidWidth = avi.Width;
for i = 1 : numFrames
    frame = read(avi,i);% read each frame
    imshow(frame);% display each frame
    imwrite(frame,strcat(num2str(i),'.jpg'),'jpg');% save each frame
end
mov(1:numFrames) = ...
    struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'),...
           'colormap', []);
for k = 1 : numFrames
    mov(k).cdata = read(avi, k);
end
video={mov.cdata};
for a = 1:length(video)
    imagesc(video{a});
    axis image off
    drawnow;
end;
disp('output video');
tracking(video);

It reports the error: Conversion to double from cell is not possible, Error in tracking (line 9), pixels = double(cat(4,video(1:2:end)))/255. Following advice found online I switched to cell2mat, pixels = double(cat(4,cell2mat(video(1:2:end))))/255, but then the tracking function seems to do nothing and produces no output. Please help. The tracking function is as follows:

function d = tracking(video)
if ischar(video)
    % Load the video from an avi file.
    avi = mmread(video);
    pixels = double(cat(4,avi(1:2:end).cdata))/255;
    clear avi
else
    % Compile the pixel data into a single array
    pixels = double(cat(4,cell2mat(video(1:2:end))))/255;
    clear video
end
% Convert to RGB to GRAY SCALE image.
nFrames = size(pixels,4);
for f = 1:nFrames
    % F = getframe(gcf);
    % [x,map]=frame2im(F);
    % imwrite(x,'fln.jpg','jpg');
    % end
    pixel(:,:,f) = (rgb2gray(pixels(:,:,:,f)));
end
rows=240;
cols=320;
nrames=f;
for l = 2:nrames
    d(:,:,l)=(abs(pixel(:,:,l)-pixel(:,:,l-1)));
    k=d(:,:,l);
    % imagesc(k);
    % drawnow;
    % himage = imshow('d(:,:,l)');
    % hfigure = figure;
    % impixelregionpanel(hfigure, himage);
    % datar=imageinfo(imagesc(d(:,:,l)));
    % disp(datar);
    bw(:,:,l) = im2bw(k, .2);
    bw1=bwlabel(bw(:,:,l));
    figure;imshow(bw(:,:,l))
    hold on
    % % for h=1:rows
    % for w=1:cols
    % % if(d(:,:,l)< 0.1)
    % d(h,w,l)=0;
    % end
    % % end
    % % disp(d(:,:,l));
    % % size(d(:,:,l))
    cou=1;
    for h=1:rows
        for w=1:cols
            if(bw(h,w,l)>0.5)
                % disp(d(h,w,l));
                toplen = h;
                if (cou == 1)
                    tpln=toplen;
                end
                cou=cou+1;
                break
            end
        end
    end
    disp(toplen);
    coun=1;
    for w=1:cols
        for h=1:rows
            if(bw(h,w,l)>0.5)
                leftsi = w;
                if (coun == 1)
                    lftln=leftsi;
                    coun=coun+1;
                end
                break
            end
        end
    end
    disp(leftsi);
    disp(lftln);
    % % drawnow;
    % % d = abs(pixel(:, :, l), pixel(:, :, l-1));
    % % disp(d);
    % s = regionprops(bw1, 'BoundingBox');
    % % centroids = cat(1, s.Centroid);
    % % % ang=s.Orientation;
    % % % plot(centroids(:,1), centroids(:,2), 'r*')
    % for r = 1 : length(s)
    %     rectangle('Position',s(r).BoundingBox,'EdgeColor','r');
    %     % % plot('position',s(r).BoundingBox,'faceregion','r');
    % end
    % % % disp(ang);
    % % imaqmontage(k);
    widh=leftsi-lftln;
    heig=toplen-tpln;
    widt=widh/2;
    disp(widt);
    heit=heig/2;
    with=lftln+widt;
    heth=tpln+heit;
    wth(l)=with;
    hth(l)=heth;
    disp(heit);
    disp(widh);
    disp(heig);
    rectangle('Position',[lftln tpln widh heig],'EdgeColor','r');
    disp(with);
    disp(heth);
    plot(with,heth, 'r*');
    drawnow;
    hold off
end;
% wh=square(abs(wth(2)-wth(nrames)));
% ht=square(abs(hth(2)-hth(nrames)));
% disp(wth(1
% distan=sqrt(wh+ht);
% % disp(distan);
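The error comes from indexing the cell array with parentheses: video(1:2:end) is still a cell array, so double(cat(4, ...)) cannot convert it. cell2mat avoids the error but concatenates the frames side by side in 2-D, so size(pixels,4) becomes 1 and the per-frame loop in tracking effectively never runs, which is why it appears to do nothing. Indexing with curly braces expands the cells into a comma-separated list of frames, which is what cat(4, ...) expects. A small sketch, with names following the question:

```
% video is a 1-by-N cell array, each cell an H-by-W-by-3 uint8 frame
% (built in the main program as video = {mov.cdata})

% pixels = double(cat(4, video(1:2:end))) / 255;            % () keeps it a cell -> conversion error
% pixels = double(cat(4, cell2mat(video(1:2:end)))) / 255;  % one wide 2-D mosaic, only 1 "frame"

pixels = double(cat(4, video{1:2:end})) / 255;               % {} expands the frames:
                                                             % result is H-by-W-by-3-by-K double
```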
WeChat Mini Program with Tencent Cloud IM: how do I use the open-source GenerateTestUserSig module?
这是GenerateTestUserSig.js的代码 ``` global.webpackJsonpMpvue([16],{ /***/ "dutN": /***/ (function(module, __webpack_exports__, __webpack_require__) { "use strict"; /* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return _SDKAPPID; }); /* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return genTestUserSig; }); /* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__lib_generate_test_usersig_es_min_js__ = __webpack_require__("n7IX"); /*eslint-disable*/ const _SDKAPPID = 0; const _SECRETKEY = ''; /* * Module: GenerateTestUserSig * * Function: 用于生成测试用的 UserSig,UserSig 是腾讯云为其云服务设计的一种安全保护签名。 * 其计算方法是对 SDKAppID、UserID 和 EXPIRETIME 进行加密,加密算法为 HMAC-SHA256。 * * Attention: 请不要将如下代码发布到您的线上正式版本的 App 中,原因如下: * * 本文件中的代码虽然能够正确计算出 UserSig,但仅适合快速调通 SDK 的基本功能,不适合线上产品, * 这是因为客户端代码中的 SECRETKEY 很容易被反编译逆向破解,尤其是 Web 端的代码被破解的难度几乎为零。 * 一旦您的密钥泄露,攻击者就可以计算出正确的 UserSig 来盗用您的腾讯云流量。 * * 正确的做法是将 UserSig 的计算代码和加密密钥放在您的业务服务器上,然后由 App 按需向您的服务器获取实时算出的 UserSig。 * 由于破解服务器的成本要高于破解客户端 App,所以服务器计算的方案能够更好地保护您的加密密钥。 * * Reference:https://cloud.tencent.com/document/product/647/17275#Server */ function genTestUserSig(userID) { /** * 腾讯云 SDKAppId,需要替换为您自己账号下的 SDKAppId。 * * 进入腾讯云实时音视频[控制台](https://console.cloud.tencent.com/rav ) 创建应用,即可看到 SDKAppId, * 它是腾讯云用于区分客户的唯一标识。 */ var SDKAPPID = _SDKAPPID; /** * 签名过期时间,建议不要设置的过短 * <p> * 时间单位:秒 * 默认时间:7 x 24 x 60 x 60 = 604800 = 7 天 */ var EXPIRETIME = 604800; /** * 计算签名用的加密密钥,获取步骤如下: * * step1. 进入腾讯云实时音视频[控制台](https://console.cloud.tencent.com/rav ),如果还没有应用就创建一个, * step2. 单击“应用配置”进入基础配置页面,并进一步找到“帐号体系集成”部分。 * step3. 点击“查看密钥”按钮,就可以看到计算 UserSig 使用的加密的密钥了,请将其拷贝并复制到如下的变量中 * * 注意:该方案仅适用于调试Demo,正式上线前请将 UserSig 计算代码和密钥迁移到您的后台服务器上,以避免加密密钥泄露导致的流量盗用。 * 文档:https://cloud.tencent.com/document/product/647/17275#Server */ var SECRETKEY = _SECRETKEY; var generator = new __WEBPACK_IMPORTED_MODULE_0__lib_generate_test_usersig_es_min_js__["a" /* default */](SDKAPPID, SECRETKEY, EXPIRETIME); var userSig = generator.genTestUserSig(userID); return { sdkappid: SDKAPPID, userSig: userSig }; } /***/ }) }); ``` 我该怎么去引用GenerateTestUserSig.js, 使用里面的function genTestUserSig(userID) ``` var GenerateTestUserSig=require("../../debug/GenerateTestUserSig.js"); ``` 是这么引用吗?这么引用就会报错 ``` Uncaught TypeError: global.webpackJsonpMpvue is not a function at GenerateTestUserSig.js? [sm]:1 at require (VM167 WAService.js:1) at VM167 WAService.js:1 at person.js? [sm]:5 at require (VM167 WAService.js:1) at <anonymous>:92:7 at HTMLScriptElement.scriptLoaded (appservice?t=1574470090358:1736) at HTMLScriptElement.script.onload (appservice?t=1574470090358:1748) ```
function m=func(~,~): warning that the function return value 'm' might be unset; how do I fix it? (beginner asking for pointers)
function m=func(~,~)
im1=imread('E:\\im2.jpg');
im2=imread('E:\\im1.jpg');
im1= rgb2gray(im1);
im2= rgb2gray(im2);% this program works on grayscale images
im1 = im2double(im1);
im2 = im2double(im2);
im1_size=size(im1);
im1_len=im1_size(1,1);
im1_hei=im1_size(1,2);% take the minimum width/height of the two images, then crop both to the same size
im2_size=size(im2);
im2_len=im2_size(1,1);
im2_hei=im2_size(1,2);
im_prolen=min(im1_len,im2_len);
im_prohei=min(im1_hei,im2_hei);
im1 = imresize(im1,[im_prolen,im_prohei]);
im2 = imresize(im2,[im_prolen,im_prohei]);
[~, imOut] = alignImages(im1, im2);% call alignImages.m
subplot(1,3,1);% the statements below can be removed
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(imOut);
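The message is the Code Analyzer warning that the declared output m is never assigned, so calling the function with an output argument (for example m = func(a,b)) would fail at run time. Either remove the output from the declaration (function func(~,~)) or assign m before the function returns. A minimal runnable sketch of the second option; the image is a placeholder, since alignImages is your own file:

```
function m = func(~, ~)
% Placeholder data so the sketch runs on its own; in your code imOut would be
% the result of alignImages(im1, im2).
im1   = im2double(rgb2gray(imread('peppers.png')));
imOut = im1;                 % pretend this is the aligned/stitched image
subplot(1,2,1); imshow(im1);
subplot(1,2,2); imshow(imOut);
m = imOut;                   % assigning the declared output removes the warning
end
```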