Triangle Problem: Enumerate the Triangles

Problem Description
Little E is doing geometry work. After drawing a lot of points on a plane, he wants to enumerate all the triangles whose vertices are three of the points, in order to find the one with the minimum perimeter. Your task is to implement his work.

Input
The input contains several test cases. The first line of input contains only one integer denoting the number of test cases.
The first line of each test case contains a single integer N, denoting the number of points. (3 <= N <= 1000)
Each of the next N lines contains two integers X and Y, denoting the coordinates of a point. (0 <= X, Y <= 1000)

Output
For each test case, output the minimum perimeter; if no triangle exists, output "No Solution". (As the sample shows, each answer is printed as "Case X: " followed by the result, with the perimeter given to three decimal places.)

Sample Input
2
3
0 0
1 1
2 2
4
0 0
0 2
2 1
1 1

Sample Output
Case 1: No Solution
Case 2: 4.650
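
In Case 1 the three points (0,0), (1,1) and (2,2) are collinear, so no triangle exists; in Case 2 the triangle (0,2), (2,1), (1,1) gives the minimum perimeter of about 4.650. The sketch below is only a minimal brute-force illustration of the required logic, not an official or performance-tuned solution: it enumerates every triple of points, skips collinear (degenerate) triples with a cross-product test, and keeps the smallest perimeter. With N up to 1000 this is O(N^3) and may be too slow under a strict time limit, in which case a smarter enumeration (for example, one exploiting the bounded 0..1000 coordinates) would be needed.

```
#include <cstdio>
#include <cmath>
#include <vector>

int main() {
    int t;
    if (scanf("%d", &t) != 1) return 0;
    for (int cas = 1; cas <= t; ++cas) {
        int n;
        scanf("%d", &n);
        std::vector<long long> x(n), y(n);
        for (int i = 0; i < n; ++i) scanf("%lld %lld", &x[i], &y[i]);

        double best = -1.0;  // -1 means "no triangle found yet"
        for (int i = 0; i < n; ++i)
            for (int j = i + 1; j < n; ++j)
                for (int k = j + 1; k < n; ++k) {
                    // Cross product of vectors (j-i) and (k-i); zero means collinear.
                    long long cross = (x[j] - x[i]) * (y[k] - y[i]) -
                                      (y[j] - y[i]) * (x[k] - x[i]);
                    if (cross == 0) continue;
                    double per = std::hypot(double(x[i] - x[j]), double(y[i] - y[j])) +
                                 std::hypot(double(x[j] - x[k]), double(y[j] - y[k])) +
                                 std::hypot(double(x[k] - x[i]), double(y[k] - y[i]));
                    if (best < 0 || per < best) best = per;
                }

        if (best < 0) printf("Case %d: No Solution\n", cas);
        else          printf("Case %d: %.3f\n", cas, best);
    }
    return 0;
}
```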
