ValueError: No gradients provided for any variable?

While training a sentence-similarity model built from a convolutional network plus a fully connected network, I hit a "no gradients" error.
Here is the source code:

```
import numpy as np
import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Build the input tensors
sentence_x = np.random.randn(1000, 38, 300)
sentence_x = tf.cast(tf.reshape(sentence_x, [1000, 38, 300, 1]), dtype=tf.float32)

sentence_y = np.random.randn(1000, 38, 300)
sentence_y = tf.cast(tf.reshape(sentence_y, [1000, 38, 300, 1]), dtype=tf.float32)

label = np.random.randint(0, 2, (1, 1000))
label = tf.reshape(label, [1000])

train_db = tf.data.Dataset.from_tensor_slices((sentence_x, sentence_y, label))
train_db = train_db.shuffle(100).batch(20)


# Convolutional layers: 2 units of 2 * conv + maxpooling
conv_layers = [
    # unit 1
    layers.Conv2D(3, kernel_size=[2, 2], strides=[2, 2], padding='same', activation=tf.nn.relu),
    layers.Conv2D(3, kernel_size=[2, 2], padding='same', activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),
    # unit 2
    layers.Conv2D(3, kernel_size=[2, 2], strides=[2, 2], padding='same', activation=tf.nn.relu),
    layers.Conv2D(3, kernel_size=[2, 2], padding='same', activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),
]

fc_net = Sequential([
    layers.Dense(150, activation=tf.nn.relu),
    layers.Dense(80, activation=tf.nn.relu),
    layers.Dense(20, activation=None),
])
conv_net = Sequential(conv_layers)
conv_net.build(input_shape=[None, 38, 300, 1])
fc_net.build(input_shape=[None, 171])
optimizer = tf.keras.optimizers.Adam(1e-3)
variables = conv_net.trainable_variables + fc_net.trainable_variables

def main():
    for epoch in range(50):
        for step, (sentence_x, sentence_y, label) in enumerate(train_db):
            with tf.GradientTape() as tape:
                out1 = conv_net(sentence_x)
                out2 = conv_net(sentence_y)
                fc_input_x = tf.reshape(out1, [-1, 171])
                fc_input_y = tf.reshape(out2, [-1, 171])
                vec_x = fc_net(fc_input_x)
                vec_y = fc_net(fc_input_y)
                # Compute a similarity score from the two sentence vectors
                output = tf.exp(-tf.reduce_sum(tf.abs(vec_x - vec_y), axis=1))
                output = tf.reshape(output, [-1])

                output = tf.math.ceil(output)
                output1 = tf.one_hot(tf.cast(output, dtype=tf.int32), depth=2)
                label = tf.cast(label, dtype=tf.int32)
                label = tf.one_hot(label, depth=2)
                print("output1", output1)
                print("label", label)
                loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=output1, labels=label))
                # loss = tf.reduce_sum(tf.square(output1 - label))
            grad = tape.gradient(loss, variables)
            optimizer.apply_gradients(zip(grad, variables))

            if step % 10 == 0:
                print("epoch={0}, step={1}, loss={2}".format(epoch, step, loss))


if __name__ == '__main__':
    main()
```



Hoping someone can point me in the right direction - I'm an absolute beginner.
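A likely culprit, noted here for other readers: `tf.math.ceil` and the `tf.cast`/`tf.one_hot` step between the network output and the loss carry no usable gradient (TensorFlow registers a `None` gradient for `Ceil`, and `one_hot` of an integer cast is not differentiable), so `tape.gradient` returns `None` for every variable and `apply_gradients` raises exactly this ValueError. Below is a minimal sketch of one differentiable alternative, assuming the goal is binary same/different classification - binary cross-entropy on the raw similarity score (this loss formulation is my assumption, not the only option):

```
# Inside the GradientTape block, replace the ceil/one_hot/softmax path with a
# loss that stays differentiable all the way back to the network weights.
sim = tf.exp(-tf.reduce_sum(tf.abs(vec_x - vec_y), axis=1))  # similarity in (0, 1]
sim = tf.clip_by_value(sim, 1e-7, 1.0 - 1e-7)                # avoid log(0)
y_true = tf.cast(label, tf.float32)
loss = -tf.reduce_mean(y_true * tf.math.log(sim) +
                       (1.0 - y_true) * tf.math.log(1.0 - sim))
```

With this, `grad` contains real tensors and `apply_gradients` runs; for accuracy reporting the score can still be thresholded at 0.5 outside the tape.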

Other related questions
Importing an npy pretrained file raises No gradients provided for any variable

```
Traceback (most recent call last):
  File "/data2/test/cxj/fcn_vgg16/test_fcn16_vgg.py", line 134, in <module>
    loss, optimizer, fcn_prob, fcn_pred, fcn_pred_up, lr = train_net(vgg_fcn=fcn, input_tensor=images, out_tensor=true_out)
  File "/data2/test/cxj/fcn_vgg16/test_fcn16_vgg.py", line 72, in train_net
    apply_gradient_op = optimizer.apply_gradients(grads)
  File "/home/test/anaconda3/envs/mask_rcnn/lib/python3.4/site-packages/tensorflow/python/training/optimizer.py", line 591, in apply_gradients
    ([str(v) for _, v, _ in converted_grads_and_vars],))
ValueError: No gradients provided for any variable: ["<tf.Variable 'upscore2/up_filter:0' shape=(4, 4, 2, 2) dtype=float32_ref>", "<tf.Variable 'upscore32/up_filter:0' shape=(32, 32, 2, 2) dtype=float32_ref>"].
```

Waiting online - please advise!

Can the logits inside a TensorFlow loss be a placeholder?

I'm implementing handwritten digit recognition with TensorFlow. I want the logits inside softmax_cross_entropy_with_logits to be a placeholder first, and then feed the computed values into that placeholder at run time, but I get the error ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients. I know that using outputs directly as the logits fixes it, but if I insist on the logits being a placeholder first, how can I solve this?

```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/as/下载/resnet-152_mnist-master/mnist_dataset", one_hot=True)
from tensorflow.contrib.layers import fully_connected

x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
y = tf.placeholder(dtype=tf.float32, shape=[None, 1])
hidden1 = fully_connected(x, 100, activation_fn=tf.nn.elu, weights_initializer=tf.random_normal_initializer())
hidden2 = fully_connected(hidden1, 200, activation_fn=tf.nn.elu, weights_initializer=tf.random_normal_initializer())
hidden3 = fully_connected(hidden2, 200, activation_fn=tf.nn.elu, weights_initializer=tf.random_normal_initializer())
outputs = fully_connected(hidden3, 10, activation_fn=None, weights_initializer=tf.random_normal_initializer())
a = tf.placeholder(tf.float32, [None, 10])
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a)
reduce_mean_loss = tf.reduce_mean(loss)
equal_result = tf.equal(tf.argmax(outputs, 1), tf.argmax(y, 1))
cast_result = tf.cast(equal_result, dtype=tf.float32)
accuracy = tf.reduce_mean(cast_result)
train_op = tf.train.AdamOptimizer(0.001).minimize(reduce_mean_loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(30000):
        xs, ys = mnist.train.next_batch(128)
        result = outputs.eval(feed_dict={x: xs})
        sess.run(train_op, feed_dict={a: result, y: ys})
        print(i)
```
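For reference, a minimal sketch of why the placeholder breaks training: gradients only flow along ops recorded in the graph, and a value fed through feed_dict enters as a constant leaf, so the loss has no path back to any variable (names below are illustrative, not from the question):

```
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 2])
w = tf.Variable(tf.random_normal([4, 2]))

logits = tf.matmul(x, w)  # depends on the variable w
ok_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_ok = tf.train.AdamOptimizer(0.001).minimize(ok_loss)  # fine

a = tf.placeholder(tf.float32, [None, 2])  # a leaf node: no path back to w
bad_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a))
# tf.train.AdamOptimizer(0.001).minimize(bad_loss)  # ValueError: No gradients provided
```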

Python error: ValueError: No JSON object could be decoded

```
# -*- coding: utf-8 -*-
import requests
from operator import itemgetter

# Make the API call and store the response
url = 'http://hacker-news.firebaseio.com/v0/topstories.json'
r = requests.get(url)
print("Status code:", r.status_code)

# Process information about each submission
submission_ids = r.json()
submission_dicts = []
for submission_id in submission_ids[:30]:
    # Make a separate API call for each submission
    url = ('http://hacker-news.firebaseio.com/v0/item/' + str(submission_id) + '.json')
    submission_r = requests.get(url)
    print(submission_r.status_code)
    response_dict = submission_r.json()
    submission_dict = {
        'title': response_dict['title'],
        'link': 'http://news.ycombinator.com/item?id=' + str(submission_id),
        'comments': response_dict.get('descendants', 0),
    }
    submission_dicts.append(submission_dict)

submission_dicts = sorted(submission_dicts, key=itemgetter('comments'), reverse=True)

for submission_dict in submission_dicts:
    print("\nTitle:", submission_dict['title'])
    print("Discussion link:", submission_dict['link'])
    print("Comments:", submission_dict['comments'])
```

Error: ValueError: unsupported format character

It should be this section, which replaces the Host field in the request body:

```
def get_raw_body(self, req, ip):
    '''Replace the Host field in the request body'''
    ip = self.get_host_from_url(ip)
    host_reg = re.compile(r'Host:\s([a-z\.A-Z0-9]+)')
    host = host_reg.findall(req)
    if not host or host[0] == '':
        print('[-]ERROR MESSAGE! Wrong format for request body')
        sys.exit()
    req, num = re.subn(host_reg, "Host: %s", req)
    return req % ip
```

Error message:

```
return req % (ip)
ValueError: unsupported format character '{' (0x7b) at index 31
```

The original program targets Python 2.7 and mine is 3.6; I don't want to uninstall and go back to 2.7 just for this one program...
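A hedged fix sketch: `%`-formatting chokes on a literal percent sign already present in the request body (here followed by `{`). Escaping existing percent signs before inserting the single `%s` placeholder should make the substitution safe:

```
# Inside get_raw_body: protect literal % already in the body, then substitute.
req = req.replace('%', '%%')                   # escape pre-existing percent signs
req, num = re.subn(host_reg, "Host: %s", req)  # our placeholder is the only active %s
return req % ip
```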

Keras error: ValueError: 'pool5' is not in list

A long project implementing VGG16 under Keras. This is the failing code section:

```
for roi, roi_context in zip(rois, rois_context):
    ins = [im_in, dmap_in, np.array([roi]), np.array([roi_context])]
    print("Testing ROI {c}")
    subtimer.tic()
    blobs_out = model.predict(ins)
    subtimer.toc()
    print("Storing Results")
    print(layer_names)
    post_roi_layers = set(layer_names[layer_names.index("pool5"):])
    for name, val in zip(layer_names, blobs_out):
        if name not in outs:
            outs[name] = val
        else:
            if name in post_roi_layers:
                outs[name] = np.concatenate([outs[name], val])
    c += 1
```

Error message:

```
Loading Test Data
data is loaded from roidb_test_19_smol.pkl
Number of Images to test: 10
Testing ROI {c}
Storing Results
['cls_score', 'bbox_pred_3d']
Traceback (most recent call last):
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/main.py", line 6, in <module>
    results = test_main.test_tf_implementation(cache_file="roidb_test_19_smol.pkl", weights_path="rgbd_det_iter_40000.h5")
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test_main.py", line 36, in test_tf_implementation
    results = test.test_net(tf_model, roidb)
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test.py", line 324, in test_net
    im_detect_3d(net, im, dmap, test['boxes'], test['boxes_3d'], test['rois_context'])
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test.py", line 200, in im_detect_3d
    post_roi_layers = set(layer_names[layer_names.index("pool5"):])
ValueError: 'pool5' is not in list
```

ValueError: invalid literal for int() with base 10: 'aer'

```
# coding=utf-8
# Version: python3.6.0  Tools: Pycharm 2017.3.2
import numpy as np
import tensorflow as tf
import re

TRAIN_PATH = "data/ptb.train.txt"
EVAL_PATH = "data/ptb.valid.txt"
TEST_PATH = "data/ptb.test.txt"
HIDDEN_SIZE = 300
NUM_LAYERS = 2
VOCAB_SIZE = 10000
TRAIN_BATCH_SIZE = 20
TRAIN_NUM_STEP = 35
EVAL_BATCH_SIZE = 1
EVAL_NUM_STEP = 1
NUM_EPOCH = 5
LSTM_KEEP_PROB = 0.9
EMBEDDING_KEEP_PROB = 0.9
MAX_GRED_NORM = 5
SHARE_EMB_AND_SOFTMAX = True

class PTBModel(object):
    def __init__(self, is_training, batch_size, num_steps):
        self.batch_size = batch_size
        self.num_steps = num_steps
        self.input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
        self.targets = tf.placeholder(tf.int32, [batch_size, num_steps])

        dropout_keep_prob = LSTM_KEEP_PROB if is_training else 1.0
        lstm_cells = [
            tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_SIZE),
                                          output_keep_prob=dropout_keep_prob)
            for _ in range(NUM_LAYERS)]
        cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)
        self.initial_state = cell.zero_state(batch_size, tf.float32)

        embedding = tf.get_variable("embedding", [VOCAB_SIZE, HIDDEN_SIZE])
        inputs = tf.nn.embedding_lookup(embedding, self.input_data)
        if is_training:
            inputs = tf.nn.dropout(inputs, EMBEDDING_KEEP_PROB)

        outputs = []
        state = self.initial_state
        with tf.variable_scope("RNN"):
            for time_step in range(num_steps):
                if time_step > 0: tf.get_variable_scope().reuse_variables()
                cell_output, state = cell(inputs[:, time_step, :], state)
                outputs.append(cell_output)
        # Unroll the outputs to [batch, hidden_size*num_steps], then reshape to
        # [batch*num_steps, hidden_size]
        output = tf.reshape(tf.concat(outputs, 1), [-1, HIDDEN_SIZE])

        if SHARE_EMB_AND_SOFTMAX:
            weight = tf.transpose(embedding)
        else:
            weight = tf.get_variable("weight", [HIDDEN_SIZE, VOCAB_SIZE])
        bias = tf.get_variable("bias", [VOCAB_SIZE])
        logits = tf.matmul(output, weight) + bias

        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.reshape(self.targets, [-1]),
            logits=logits)
        self.cost = tf.reduce_sum(loss) / batch_size
        self.final_state = state

        # Define the backpropagation ops only when training
        if not is_training: return
        trainable_variables = tf.trainable_variables()
        # Clip the gradient norm
        grads, _ = tf.clip_by_global_norm(
            tf.gradients(self.cost, trainable_variables), MAX_GRED_NORM)
        # Define the optimizer
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
        # Define the training step
        self.train_op = optimizer.apply_gradients(
            zip(grads, trainable_variables))

def run_epoch(session, model, batches, train_op, output_log, step):
    total_costs = 0.0
    iters = 0
    state = session.run(model.initial_state)
    for x, y in batches:
        cost, state, _ = session.run(
            [model.cost, model.final_state, train_op],
            {model.input_data: x, model.targets: y,
             model.initial_state: state})
        total_costs += cost
        iters += model.num_steps
        # Log only during training
        if output_log and step % 100 == 0:
            print("After %d steps, perplexity is %.3f" % (
                step, np.exp(total_costs / iters)))
        step += 1
    return step, np.exp(total_costs / iters)

# Read the data file and return an array of word ids
def read_data(file_path):
    with open(file_path, "r") as fin:
        id_string = " ".join([line.strip() for line in fin.readlines()])
    id_list = [int(w) for w in id_string.split()]  # convert the word ids to integers
    return id_list

def make_batches(id_list, batch_size, num_step):
    # Total number of batches; each batch holds batch_size*num_step words
    num_batches = (len(id_list) - 1) // (batch_size * num_step)
    data = np.array(id_list[:num_batches * batch_size * num_step])
    data = np.reshape(data, [batch_size, num_batches * num_step])
    data_batches = np.split(data, num_batches, axis=1)
    label = np.array(id_list[1:num_batches * batch_size * num_step + 1])
    label = np.reshape(label, [batch_size, num_batches * num_step])
    label_batches = np.split(label, num_batches, axis=1)
    return list(zip(data_batches, label_batches))

def main():
    # Define the initializer
    initializer = tf.random_uniform_initializer(-0.05, 0.05)
    with tf.variable_scope("language_model", reuse=None, initializer=initializer):
        train_model = PTBModel(True, TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)
    with tf.variable_scope("language_model", reuse=True, initializer=initializer):
        eval_model = PTBModel(False, EVAL_BATCH_SIZE, EVAL_NUM_STEP)
    with tf.Session() as session:
        tf.global_variables_initializer().run()
        train_batches = make_batches(read_data(TRAIN_PATH), TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)
        eval_batches = make_batches(read_data(EVAL_PATH), EVAL_BATCH_SIZE, EVAL_NUM_STEP)
        test_batches = make_batches(read_data(TEST_PATH), EVAL_BATCH_SIZE, EVAL_NUM_STEP)
        step = 0
        for i in range(NUM_EPOCH):
            print("In iteration: %d" % (i + 1))
            step, train_pplx = run_epoch(session, train_model, train_batches, train_model.train_op, True, step)
            print("Epoch: %d Train perplexity: %.3f" % (i + 1, train_pplx))
            _, eval_pplx = run_epoch(session, eval_model, eval_batches, tf.no_op(), False, 0)
            print("Epoch: %d Eval perplexity: %.3f" % (i + 1, eval_pplx))
        _, test_pplx = run_epoch(session, eval_model, test_batches, tf.no_op(), False, 0)
        print("Test perplexity: %.3f" % test_pplx)

if __name__ == '__main__':
    main()
```
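A guess at the root cause, offered with hedging: read_data() calls int(w) on every token, but the stock data/ptb.train.txt begins with raw words (the first is literally 'aer'), so this script expects a corpus that has already been converted to word ids. A conversion sketch along the lines of that preprocessing step (the vocab file name and the <unk>/<eos> handling are assumptions):

```
# Build a word -> id mapping from a previously generated vocab file (assumed name).
with open("ptb.vocab") as f:
    word_to_id = {w.strip(): i for i, w in enumerate(f)}

# Rewrite the raw corpus as ids, appending <eos> at each line end.
with open("data/ptb.train.txt") as fin, open("data/ptb.train.ids", "w") as fout:
    for line in fin:
        words = line.split() + ["<eos>"]
        ids = [str(word_to_id.get(w, word_to_id["<unk>"])) for w in words]
        fout.write(" ".join(ids) + "\n")
```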

ValueError: No data files found in satellite/data\satellite_train_*.tfrecord

Following the book's "build your own image recognition model" project, I get an error saying no data files can be found, but the file path and the data are both fine.

```
D:\Anaconda\anaconda\envs\tensorflow\python.exe D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py --train_dir=satellite/train_dir --dataset_name=satellite --dataset_split_name=train --dataset_dir=satellite/data --model_name=inception_v3 --checkpoint_path=satellite/pretrained/inception_v3.ckpt --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --max_number_of_steps=100000 --batch_size=32 --learning_rate=0.001 --learning_rate_decay_type=fixed --save_interval_secs=300 --save_summaries_secs=2 --log_every_n_steps=10 --optimizer=rmsprop --weight_decay=0.00004
WARNING:tensorflow:From D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py:397: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
  File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 572, in <module>
    tf.app.run()
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 430, in main
    common_queue_min=10 * FLAGS.batch_size)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\dataset_data_provider.py", line 94, in __init__
    scope=scope)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 238, in parallel_read
    data_files = get_data_files(data_sources)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 311, in get_data_files
    raise ValueError('No data files found in %s' % (data_sources,))
ValueError: No data files found in satellite/data\satellite_train_*.tfrecord
```

ValueError: Unknown mat file type, version 0, 0

The following error occurs when loading a .mat file for model training:
```
ValueError: Unknown mat file type, version 0, 0
```
The loading code is:
```
np.array(sio.loadmat(image[0][i])['section'], dtype=np.float32)
```
Any guidance would be much appreciated!
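One hedged reading of the message: "version 0, 0" means scipy read all-zero version bytes from the MAT header, which usually points at an empty, truncated, or non-MAT file rather than a scipy bug. A quick check of the raw header (path expression copied from the snippet above):

```
# Inspect the first bytes of the suspect file.
path = image[0][i]
with open(path, 'rb') as f:
    header = f.read(128)
print(len(header), header[:20])
# A valid MAT-5 file starts with a text header like b'MATLAB 5.0 MAT-file';
# all zero bytes suggest the file is empty or was written incorrectly.
```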

ValueError: too many values to unpack (expected 2)

Posts online say it means elements can't be paired up. Code below:
```
import turtle

file = open("C:/Users/jyz_1/Desktop/新建文本文档.txt")
file = file.read()
lines = file.split("重庆")
i = 0
lsy = []
for line in lines:
    # index the temperature
    inn = line.index('\n')  # the first \n
    inc = line.index("C")   # the first C
    if i == 0:
        tu = int(line[line.find('\n', inn + 1) + 1:inc])  # the second \n
        if "~" in line:
            tl = int(line[line.index('~') + 1:line.rindex('C')])
        else:
            tl = tu
        i = i + 1
    else:
        fn = line.find('\n', inn + 1)
        tu = int(line[line.find('\n', fn + 1) + 1:inc])  # the third \n
        if "~" in line:
            tl = int(line[line.index('~') + 1:line.rindex('C')])
        else:
            tl = tu
    t = (tl + tu) / 2  # daily average temperature
    lsy.append(t)

# find the date
lsx = []
dates = file.split("\n")
for date in dates:
    if "-" in date:
        if date.replace("-", "").isnumeric() == True:
            p1 = date.index('-')         # the first -
            p2 = date.find('-', p1 + 1)  # the second -
            month = date[p1 + 1:p2]
            day = date[p2 + 1:]
            date_on_x = int(month + day)
            lsx.append(date_on_x)

# draw the axes
def drawx():
    turtle.pu()
    turtle.goto(-50, -50)
    turtle.pd()
    turtle.fd(240)

def drawy():
    turtle.pu()
    turtle.goto(-50, -50)
    turtle.seth(90)
    turtle.pd()
    turtle.fd(160)

# label the axes
def comx():
    turtle.pu()
    turtle.goto(-50, -65)
    turtle.seth(0)
    for i in range(1, 13):
        turtle.write(i)
        turtle.fd(20)

def comy():
    turtle.pu()
    turtle.goto(-75, -50)
    turtle.seth(90)
    for i in range(-30, 51, 10):
        turtle.write(float(i))
        turtle.fd(20)

# draw the rainbow
def rainbow():
    # define the color
    if t < 8:
        turtle.color("purple")
    elif 8 <= t < 12:
        turtle.color("lightblue")
    elif 12 <= t < 22:
        turtle.color("green")
    elif 22 <= t < 28:
        turtle.color("yellow")
    elif 28 <= t < 30:
        turtle.color("orange")
    elif t >= 30:
        turtle.color("red")
    # let's draw!
    for x, t in lsx, lsy:
        turtle.pu()
        turtle.goto(x, t)
        turtle.pd()
        turtle.circle(10)

drawx()
drawy()
comx()
comy()
rainbow()
```
Error:
```
Traceback (most recent call last):
  File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python37-32\32rx.py", line 92, in <module>
    rainbow(t)
  File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python37-32\32rx.py", line 83, in rainbow
    for x,t in lsx,lsy:
ValueError: too many values to unpack (expected 2)
```
But len() shows lsx and lsy have the same length - that is, their elements correspond one to one. So what is this error about?
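The equal lengths are beside the point: `for x, t in lsx, lsy:` iterates over the two-element tuple `(lsx, lsy)`, so the very first pass tries to unpack the entire lsx list into x and t. Pairing the lists element-wise with zip() should be the fix:

```
# Iterate over (x, t) pairs instead of over the tuple of two lists.
for x, t in zip(lsx, lsy):
    turtle.pu()
    turtle.goto(x, t)
    turtle.pd()
    turtle.circle(10)
```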

ValueError: invalid literal for int() with base 10: '05.png'

The original post is https://www.cnblogs.com/skyfsm/p/8051705.html - I only swapped the materials in that post for my own. It keeps failing: ![screenshot](https://img-ask.csdn.net/upload/202005/04/1588582796_8539.png) Someone please save me, I've been stuck here for ages. Also asking: is there a training/data set of invoices usable for deep learning? Willing to pay.
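Judging only from the message, some step in the referenced training script does int() on a file name like '05.png'; stripping the extension before converting is the usual fix (the variable names below are assumptions, since the failing line lives in the blog's code):

```
# '05.png' -> 5: drop the extension before parsing the numeric label.
label = int(filename.split('.')[0])
```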

Error while using a machine-learning algorithm: ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 224 while Y.shape[1] == 334

While using the KNN algorithm I ran into ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 224 while Y.shape[1] == 334. Code screenshot: ![screenshot](https://img-ask.csdn.net/upload/202005/26/1590474323_300759.png) Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/202005/26/1590474354_165863.png) Please help!
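Hard to say without the code, but with KNN this error usually means the test features were built with a different transformer than the training features, so predict() sees a different column count than fit() did. A hedged sketch assuming a vectorizer-style preprocessing step (train_texts/test_texts are placeholder names):

```
from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)  # fit once, on training data only
X_test = vec.transform(test_texts)        # reuse it: same feature count at predict time
```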

Reproducing a published paper's neural-network training on CentOS: ValueError: low >= high

```
Traceback (most recent call last):
  File "trainIEEE39LoadSheddingAgent.py", line 139, in <module>
    env.reset()
  File "/root/RLGC/src/py/PowerDynSimEnvDef_v3.py", line 251, in reset
    fault_bus_idx = np.random.randint(0, total_fault_buses)  # an integer, in the range of [0, total_bus_num-1]
  File "mtrand.pyx", line 630, in numpy.random.mtrand.RandomState.randint
  File "bounded_integers.pyx", line 1228, in numpy.random.bounded_integers._rand_int64
ValueError: low >= high
```
The error is above. Why does it happen, and how can I fix it? Thanks!
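np.random.randint(low, high) requires high > low, so total_fault_buses is presumably 0 when reset() runs - that is, no fault buses were loaded from the case/config files. A guard sketch (the message text is mine):

```
# Fail early with a readable message instead of ValueError: low >= high.
if total_fault_buses <= 0:
    raise RuntimeError("no fault buses loaded - check the case/config files")
fault_bus_idx = np.random.randint(0, total_fault_buses)
```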

Python error when calling cv2.findContours: ValueError: not enough values to unpack (expected 3, got 2)

Full code:
```
import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
img[50:150, 50:150] = 255

ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
image, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
color = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
img = cv2.drawContours(color, contours, -1, (0, 255, 0), 2)
cv2.imshow("contours", color)
cv2.waitKey()
cv2.destroyAllWindows()
```
But cv2.findContours reports this error:
ValueError: not enough values to unpack (expected 3, got 2)
Python version is 3.6, OpenCV is 4.0.0
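OpenCV 4.x returns two values from findContours, (contours, hierarchy); the three-value form belongs to OpenCV 3.x. Unpacking two values should fix it:

```
# OpenCV 4.x signature: no `image` return value.
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
```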

Django error when creating a superuser: ValueError: invalid literal for int() with base 10: ''

ERROR exception 135 Internal Server Error: /users/
```
Traceback (most recent call last):
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/core/handlers/exception.py", line 41, in inner
    response = get_response(request)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
    return view_func(*args, **kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/views/generic/base.py", line 68, in view
    return self.dispatch(request, *args, **kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 505, in dispatch
    response = self.handle_exception(exc)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 465, in handle_exception
    self.raise_uncaught_exception(exc)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
    raise exc
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 502, in dispatch
    response = handler(request, *args, **kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/generics.py", line 242, in post
    return self.create(request, *args, **kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/mixins.py", line 19, in create
    self.perform_create(serializer)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/mixins.py", line 24, in perform_create
    serializer.save()
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/serializers.py", line 213, in save
    self.instance = self.create(validated_data)
  File "/home/python/dihai02/per02/apps/users/serializers/user.py", line 25, in create
    user = User.objects.create_user(**validated_data)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/contrib/auth/models.py", line 159, in create_user
    return self._create_user(username, email, password, **extra_fields)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/contrib/auth/models.py", line 153, in _create_user
    user.save(using=self._db)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/contrib/auth/base_user.py", line 80, in save
    super(AbstractBaseUser, self).save(*args, **kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 808, in save
    force_update=force_update, update_fields=update_fields)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 838, in save_base
    updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 924, in _save_table
    result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 963, in _do_insert
    using=using, raw=raw)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/query.py", line 1076, in _insert
    return query.get_compiler(using=using).execute_sql(return_id)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1111, in execute_sql
    for sql, params in self.as_sql():
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1064, in as_sql
    for obj in self.query.objs
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1064, in <listcomp>
    for obj in self.query.objs
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1063, in <listcomp>
    [self.prepare_value(field, self.pre_save_val(field, obj)) for field in fields]
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1003, in prepare_value
    value = field.get_db_prep_save(value, connection=self.connection)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/fields/__init__.py", line 770, in get_db_prep_save
    prepared=False)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/fields/__init__.py", line 762, in get_db_prep_value
    value = self.get_prep_value(value)
  File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/fields/__init__.py", line 1853, in get_prep_value
    return int(value)
ValueError: invalid literal for int() with base 10: ''
```

ValueError: operands could not be broadcast together with shapes when inverting normalization

When calling scaler.inverse_transform(y_test) to undo the normalization, I get ValueError: operands could not be broadcast together with shapes (984,2) (4,) (984,2). Stepping through with the debugger, it fails at this point: ![screenshot](https://img-ask.csdn.net/upload/202005/14/1589426709_978169.png)
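The shape tuple (4,) suggests the scaler was fitted on 4 feature columns, so inverse_transform expects 4 columns but receives the 2-column target array. One hedged workaround is a dedicated scaler for the targets (the variable names are assumptions):

```
from sklearn.preprocessing import MinMaxScaler

y_scaler = MinMaxScaler()
y_train_scaled = y_scaler.fit_transform(y_train)  # fit on the 2 target columns only
# ... train, predict ...
y_pred = y_scaler.inverse_transform(y_pred_scaled)  # column counts now match
```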

Importing qgis.core raises ValueError: PyCapsule_GetPointer called with incorrect name

Traceback (most recent call last): File "D:/pyCode/first/index.py", line 1, in <module> from qgis.core import * File "E:\QGIS\apps\qgis\python\qgis\__init__.py", line 78, in <module> import qgis.gui File "E:\QGIS\apps\qgis\python\qgis\gui\__init__.py", line 25, in <module> from qgis._gui import * ValueError: PyCapsule_GetPointer called with incorrect name

ValueError: Invalid file name (invalid file name) when running a Python script

The relevant code:
```
parser = argparse.ArgumentParser(description='evaluate.py')
parser.add_argument('INPUT', help='path to input image')
parser.add_argument('REF', default="", nargs="?",
                    help='path to reference image, if omitted NR IQA is assumed')
parser.add_argument('--model', '-m', default='',
                    help='path to the trained model')
parser.add_argument('--top', choices=('patchwise', 'weighted'), default='weighted',
                    help='top layer and loss definition')
parser.add_argument('--gpu', '-g', default=0, type=int, help='GPU ID')
args = parser.parse_args()
```
After running
```
python evaluate.py D:\PyCharm 2019.3.3\test-code\deepIQA-master\img.jpg
```
it tells me
```
Traceback (most recent call last):
  File "D:/PyCharm 2019.3.3/test-code/deepIQA-master/evaluate.py", line 75, in <module>
    serializers.load_hdf5(args.model, model)
  File "D:\PyCharm 2019.3.3\lib\site-packages\chainer\serializers\hdf5.py", line 195, in load_hdf5
    with h5py.File(filename, 'r') as f:
  File "D:\PyCharm 2019.3.3\lib\site-packages\h5py\_hl\files.py", line 408, in __init__
    swmr=swmr)
  File "D:\PyCharm 2019.3.3\lib\site-packages\h5py\_hl\files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 88, in h5py.h5f.open
ValueError: Invalid file name (invalid file name)
```
Is my image path wrong?
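Probably not the path: judging from the argparse setup above, --model defaults to '' and the script goes straight to serializers.load_hdf5(args.model, model), so h5py is asked to open an empty file name. Passing a trained model file should clear it (the model path below is a placeholder):

```
python evaluate.py --model path\to\trained_model.h5 "D:\PyCharm 2019.3.3\test-code\deepIQA-master\img.jpg"
```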

What causes the error ValueError: multilabel-indicator format is not supported?

Error: ValueError: multilabel-indicator format is not supported. The message seems clear enough - multilabel isn't supported - but my model's y labels are defined as 0 and 1, binary, so why this error? It's a binary image-classification Keras model, 120 samples total, 110 with label "0" and 10 with label "1", imbalanced. Code below:
```
data = np.load('D:/a.npz')
image_data, label_data = data['image'], data['label']
skf = StratifiedKFold(n_splits=3, shuffle=True)
for train, test in skf.split(image_data, label_data):
    train_x = image_data[train]
    test_x = image_data[test]
    train_y = label_data[train]
    test_y = label_data[test]

    train_x = train_x.reshape(81, 50176)
    test_x = test_x.reshape(39, 50176)
    train_y = keras.utils.to_categorical(train_y, 2)
    test_y = keras.utils.to_categorical(test_y, 2)

    model = Sequential()
    model.add(Dense(units=128, activation="relu", input_shape=(50176,)))
    model.add(Dense(units=128, activation="relu"))
    model.add(Dense(units=128, activation="relu"))
    model.add(Dense(units=2, activation="sigmoid"))
    model.compile(optimizer=SGD(0.001), loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_x, train_y, batch_size=32, epochs=5, verbose=1)

    y_pred_model = model.predict_proba(test_x)[:, 1]
    fpr_model, tpr_model, _ = roc_curve(test_y, y_pred_model)
```
Error message:
```
---> 63 fpr_model, tpr_model, _ = roc_curve(test_y, y_pred_model)
ValueError: multilabel-indicator format is not supported
```
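roc_curve expects 1-D binary labels, but test_y was one-hot encoded by to_categorical, and sklearn reads a two-column 0/1 matrix as multilabel-indicator. Passing the positive-class column (or keeping an un-encoded copy of the labels) should work:

```
# Use the positive-class column of the one-hot matrix as the binary labels.
fpr_model, tpr_model, _ = roc_curve(test_y[:, 1], y_pred_model)
```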

ValueError: None values not supported.

Traceback (most recent call last): File "document_summarizer_training_testing.py", line 296, in <module> tf.app.run() File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "document_summarizer_training_testing.py", line 291, in main train() File "document_summarizer_training_testing.py", line 102, in train model = MY_Model(sess, len(vocab_dict)-2) File "/home/lyliu/Refresh-master-self-attention/my_model.py", line 70, in __init__ self.train_op_policynet_expreward = model_docsum.train_neg_expectedreward(self.rewardweighted_cross_entropy_loss_multi File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in train_neg_expectedreward grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars] File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in <listcomp> grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars] File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/ops/clip_ops.py", line 107,rm t = ops.convert_to_tensor(t, name="t") File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 676o_tensor as_ref=False) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 741convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", onstant tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape)) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", ake_tensor_proto raise ValueError("None values not supported.") ValueError: None values not supported. 使用tensorflow gpu版本 tensorflow 1.2.0。希望找到解决方法或者出现这个错误的原因
