Beginner question: selecting data with boolean indexing in NumPy

The book says that "selecting data from an array via boolean indexing always creates a copy of the data", yet in the example below `data[data < 0] = 0` manages to set the negative numbers of the original `data` array to 0. Isn't it supposed to be a copy?
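The two observations are not actually in conflict: boolean indexing used as an *expression* (on the right-hand side) goes through `__getitem__` and returns a copy, while boolean indexing used as an *assignment target* goes through `__setitem__` and writes straight into the original array. A minimal sketch of the difference:

```
import numpy as np

data = np.array([[1.0, -2.0], [-3.0, 4.0]])

# Case 1: boolean indexing as an expression returns a copy;
# writing into that copy leaves the original array untouched.
negatives = data[data < 0]
negatives[:] = 0
print(data)            # still contains -2 and -3

# Case 2: boolean indexing on the left-hand side of '=' calls __setitem__,
# which writes directly into data's own buffer -- no copy is created.
data[data < 0] = 0
print(data)            # the negatives are now 0
```

So `data[data < 0] = 0` never materialises a copy at all; NumPy modifies the selected elements of `data` in place.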

Other related questions
Error when using numpy's reshape function
I ran into an error while learning TensorFlow and haven't been able to solve it despite a lot of searching.
```
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train,y_train),(x_test,y_test) = mnist.load_data()
print(x_train.shape,y_train.shape)
print(x_test.shape,y_test.shape)
import matplotlib.pyplot as plt
import numpy as np
np.pad(x_train,((0,0),(2,2),(2,2)),'constant',constant_values=0)
x_train = x_train.astype('float32')
x_train /= 255
x_train = x_train.reshape(x_train.shape[0],32,32,1)
```
It fails with ValueError: cannot reshape array of size 47040000 into shape (60000,32,32,1). I'm on Windows 10 and the downloaded data is saved as mnist.npz; I don't understand why it errors out. Any pointers would be appreciated, thanks!
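A likely cause (a sketch, not a verified diagnosis of your setup): `np.pad` returns a new array rather than padding in place, so `x_train` is still 28x28 (60000*28*28 = 47040000 elements) when the reshape to (60000, 32, 32, 1) = 61440000 elements is attempted. Assigning the padded result back removes the size mismatch:

```
import numpy as np

# Stand-in array with the same shape/dtype as the MNIST training images.
x_train = np.random.randint(0, 256, (60000, 28, 28), dtype=np.uint8)

# np.pad does NOT modify its argument; the padded copy has to be assigned back.
x_train = np.pad(x_train, ((0, 0), (2, 2), (2, 2)), 'constant', constant_values=0)
print(x_train.shape)                                      # (60000, 32, 32)

x_train = x_train.astype('float32') / 255
x_train = x_train.reshape(x_train.shape[0], 32, 32, 1)    # now the element count matches
```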
TensorFlow training error InvalidArgumentError: Incompatible shapes: [15] vs. [15,6] — the label placeholder doesn't match the format of the label data being fed in; how can I fix this?
InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6] 报错的详细信息如下所示: ``` INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] Caused by op 'input_producer/input_producer_EnqueueMany', defined at: File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-19-6fa659dba762>", line 320, in <module> 
batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) File "<ipython-input-19-6fa659dba762>", line 147, in batch_test tf_image,tf_label = read_records(record_file,resize_height,resize_width,type='normalization') File "<ipython-input-19-6fa659dba762>", line 84, in read_records filename_queue = tf.train.string_input_producer([filename]) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 232, in string_input_producer cancel_op=cancel_op) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 164, in input_producer enq = q.enqueue_many([input_tensor]) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 367, in enqueue_many self._queue_ref, vals, name=scope) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 1556, in _queue_enqueue_many_v2 name=name) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() CancelledError (see above for traceback): Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1038 try: -> 1039 return fn(*args) 1040 except errors.OpError as e: H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1020 feed_dict, fetch_list, target_list, -> 1021 status, run_metadata) 1022 H:\aa\Anaconda\anaconda\envs\tensorflow\lib\contextlib.py in __exit__(self, type, value, traceback) 87 try: ---> 88 next(self.gen) 89 except StopIteration: H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status() 465 compat.as_text(pywrap_tensorflow.TF_Message(status)), --> 466 pywrap_tensorflow.TF_GetCode(status)) 467 finally: InvalidArgumentError: Incompatible shapes: [15] vs. 
[15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] During handling of the above exception, another exception occurred: InvalidArgumentError Traceback (most recent call last) <ipython-input-19-6fa659dba762> in <module>() 318 range_num = 5 319 --> 320 batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) 321 <ipython-input-19-6fa659dba762> in batch_test(record_file, resize_height, resize_width, n_batch, train_op, loss, acc, range_num, val_batch) 187 images_x = np.reshape(images, (-1, 30000)) 188 labels_y = np.reshape(labels, (-1, 6)) --> 189 _,err,ac = sess.run([train_op,loss,acc],feed_dict={x:images, y_:labels_y}) # 50% 神经元在工作中 190 train_loss = train_loss + err 191 train_acc = train_acc + ac H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata) 776 try: 777 result = self._run(None, fetches, feed_dict, options_ptr, --> 778 run_metadata_ptr) 779 if run_metadata: 780 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata) 980 if final_fetches or final_targets: 981 results = self._do_run(handle, final_targets, final_fetches, --> 982 feed_dict_string, options, run_metadata) 983 else: 984 results = [] H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 1030 if handle is None: 1031 return self._do_call(_run_fn, self._session, feed_dict, fetch_list, -> 1032 target_list, options, run_metadata) 1033 else: 1034 return self._do_call(_prun_fn, self._session, handle, feed_dict, H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1050 except KeyError: 1051 pass -> 1052 raise type(e)(node_def, op, message) 1053 1054 def _extend_graph(self): InvalidArgumentError: Incompatible shapes: [15] vs. 
[15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op 'Equal', defined at: File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-19-6fa659dba762>", line 311, in <module> correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.float32), y_) File 
"H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 672, in equal result = _op_def_lib.apply_op("Equal", x=x, y=y, name=name) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] ``` x,y- 占位符打印的信息如下: ``` x: Tensor("x-input:0", shape=(?, 100, 100, 3), dtype=float32) y_:Tensor("y_:0", shape=(?, 6), dtype=float32) ``` image 和 labels 的打印信息如下: ``` shape:(15, 100, 100, 3),tpye:float32,labels:[[ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 0. 1. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 1.] [ 0. 0. 1. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 1. 0.] [ 0. 0. 0. 0. 1. 0.]] ``` 整个运行的代码如下: ``` import tensorflow as tf import numpy as np import os import cv2 import matplotlib.pyplot as plt import random import time from PIL import Image os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' data_path = 'people_pictures_train/record/one_train_demo_people_train.tfrecords' # tfrecords 文件的地址 data_path_val = 'people_pictures_train/record/one_test_demo_people_val.tfrecords' # tfrecords 文件的地址 print("----------------------------") tf.reset_default_graph() def get_example_nums(tf_records_filenames): ''' 统计tf_records图像的个数(example)个数 :param tf_records_filenames: tf_records文件路径 :return: ''' nums= 0 for record in tf.python_io.tf_record_iterator(tf_records_filenames): nums += 1 return nums def show_image(title,image): ''' 显示图片 :param title: 图像标题 :param image: 图像的数据 :return: ''' # plt.figure("show_image") # print(image.dtype) plt.imshow(image) plt.axis('on') # 关掉坐标轴为 off plt.title(title) # 图像题目 plt.show() def get_batch_images(images,labels,batch_size,labels_nums,one_hot=False,shuffle=False,num_threads=1): ''' :param images:图像 :param labels:标签 :param batch_size: :param labels_nums:标签个数 :param one_hot:是否将labels转为one_hot的形式 :param shuffle:是否打乱顺序,一般train时shuffle=True,验证时shuffle=False :return:返回batch的images和labels ''' min_after_dequeue = 200 capacity = min_after_dequeue + 3 * batch_size # 保证capacity必须大于min_after_dequeue参数值 if shuffle: images_batch, labels_batch = tf.train.shuffle_batch([images,labels], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue, num_threads=num_threads) else: images_batch, labels_batch = tf.train.batch([images,labels], batch_size=batch_size, capacity=capacity, num_threads=num_threads) if one_hot: labels_batch = tf.one_hot(labels_batch, labels_nums, 1, 0) return images_batch,labels_batch def read_records(filename,resize_height, resize_width,type=None): ''' 
解析record文件:源文件的图像数据是RGB,uint8,[0,255],一般作为训练数据时,需要归一化到[0,1] :param filename: :param resize_height: :param resize_width: :param type:选择图像数据的返回类型 None:默认将uint8-[0,255]转为float32-[0,255] normalization:归一化float32-[0,1] standardization:归一化float32-[0,1],再减均值中心化 :return: ''' # 创建文件队列,不限读取的数量 filename_queue = tf.train.string_input_producer([filename]) # create a reader from file queue reader = tf.TFRecordReader() # reader从文件队列中读入一个序列化的样本 _, serialized_example = reader.read(filename_queue) # get feature from serialized example # 解析符号化的样本 features = tf.parse_single_example( serialized_example, features={ 'image_raw': tf.FixedLenFeature([], tf.string), 'height': tf.FixedLenFeature([], tf.int64), 'width': tf.FixedLenFeature([], tf.int64), 'depth': tf.FixedLenFeature([], tf.int64), 'labels': tf.FixedLenFeature([], tf.string) } ) tf_image = tf.decode_raw(features['image_raw'], tf.uint8)#获得图像原始的数据 tf_height = features['height'] tf_width = features['width'] tf_depth = features['depth'] # tf_label = tf.cast(features['labels'], tf.float32) tf_label = tf.decode_raw(features['labels'],tf.float32) # PS:恢复原始图像数据,reshape的大小必须与保存之前的图像shape一致,否则出错 # tf_image=tf.reshape(tf_image, [-1]) # 转换为行向量 tf_image=tf.reshape(tf_image, [resize_height, resize_width, 3]) # 设置图像的维度 tf_label=tf.reshape(tf_label, [6]) # 设置图像的维度 # 恢复数据后,才可以对图像进行resize_images:输入uint->输出float32 # tf_image=tf.image.resize_images(tf_image,[224, 224]) # [3]数据类型处理 # 存储的图像类型为uint8,tensorflow训练时数据必须是tf.float32 if type is None: tf_image = tf.cast(tf_image, tf.float32) elif type == 'normalization': # [1]若需要归一化请使用: # 仅当输入数据是uint8,才会归一化[0,255] # tf_image = tf.cast(tf_image, dtype=tf.uint8) # tf_image = tf.image.convert_image_dtype(tf_image, tf.float32) tf_image = tf.cast(tf_image, tf.float32) * (1. / 255.0) # 归一化 elif type == 'standardization': # 标准化 # tf_image = tf.cast(tf_image, dtype=tf.uint8) # tf_image = tf.image.per_image_standardization(tf_image) # 标准化(减均值除方差) # 若需要归一化,且中心化,假设均值为0.5,请使用: tf_image = tf.cast(tf_image, tf.float32) * (1. 
/ 255) - 0.5 # 中心化 # 这里仅仅返回图像和标签 # return tf_image, tf_height,tf_width,tf_depth,tf_label return tf_image,tf_label def batch_test(record_file,resize_height,resize_width,n_batch,train_op,loss,acc,range_num,val_batch): ''' :param record_file: record文件路径 :param resize_height: :param resize_width: :return: :PS:image_batch, label_batch一般作为网络的输入 ''' # 读取record函数 tf_image,tf_label = read_records(record_file,resize_height,resize_width,type='normalization') image_batch, label_batch= get_batch_images(tf_image,tf_label,batch_size=15,labels_nums=6,one_hot=False,shuffle=True) a = image_batch.get_shape() a2 = a.as_list() b = label_batch.get_shape() b2 = b.as_list() print('image_batch: '+ str(image_batch) + ' label_batch: ' + str(label_batch)) print('image_batch-len:' + str(len(a2)) + ' label_batch-len: ' + str(len(b2))) # 测试的数据 images_val,labels_val = read_records(data_path_val,100,100,type='normalization') image_batch_val, label_batch_val = get_batch_images(images_val,labels_val,batch_size=15,labels_nums=6,one_hot=False,shuffle=True) # print('image_batch_val: '+ str(image_batch_val) + ' label_batch_val: ' + str(label_batch_val)) init = tf.global_variables_initializer() with tf.Session() as sess: # 开始一个会话 sess.run(init) # train_writer = tf.summary.FileWriter('logs/train',sess.graph) # 当前目录下的 logs 文件夹,如果没有这个文件夹,会自己键, 写入graph 的图 # test_writer = tf.summary.FileWriter('logs/test',sess.graph) # 当前目录下的 logs 文件夹,如果没有这个文件夹,会自己键, 写入graph 的图 coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for epoch in range(range_num) : start_time = time.time() train_loss, train_acc = 0,0 for i in range(n_batch): images, labels = sess.run([image_batch, label_batch]) print('shape:{},tpye:{},labels:{}'.format(images.shape,images.dtype,labels)) print('images-len:' + str(len(images)) + ' labels-len: ' + str(len(labels))) for i in range(len(images)): show_image("image0", images[i, :, :, :]) a = np.zeros( (len(labels)) ) print(' a: ' +str(a)) for i in range(len(labels)): for j in range(len(labels[i])): if labels[i][j] > 0: a[i] = j print(' a: ' +str(a)) print('x: ' + str(x) + ' y_:' + str(y_)) images_x = np.reshape(images, (-1, 30000)) labels_y = np.reshape(labels, (-1, 6)) _,err,ac = sess.run([train_op,loss,acc],feed_dict={x:images, y_:labels_y}) # 50% 神经元在工作中 train_loss = train_loss + err train_acc = train_acc + ac print(" train loss: %f" % (np.sum(train_err)/n_batch)) print(" train acc: %f" % (np.sum(train_acc)/n_batch)) val_loss, val_acc = 0, 0 for i in range(val_batch): # test 在会话中取出images和labels测试数据, images_val2 主要是为了与 images_val 进行区分 images_val2, labels_val2 = sess.run([image_batch_val, label_batch_val]) val_loss, val_acc = sess.run([loss,acc], feed_dict={x:images_val_x, y_:labels_val2}) # 测试一下准确率,喂的数据是,图片和图片的标签 val_loss = val_loss + err val_acc = val_acc + ac print(" validation loss: %f" % (np.sum(val_loss)/val_batch)) print(" validation acc: %f" % (np.sum(val_acc)/val_batch)) # 停止所有线程 coord.request_stop() coord.join(threads) # 每个批次的大小 batch_size = 15 #每个批次 10,一次性放入100张图,放到神经网络中进行训练,以矩阵的形式放入 # 计算一共有多少个批次 # n_batch = mnist.train.num_examples // batch_size #整除 n_batch = get_example_nums(data_path) // batch_size val_batch = get_example_nums(data_path_val) // batch_size # 测试图片的数量 转换格式时以一个batch 放所有的图片 # val_num = get_example_nums(data_path_val) # 测试图片的数量 转换格式时以一个batch 放所有的图片 # train_num = get_example_nums(data_path) # 测试图片的数量 转换格式时以一个batch 放所有的图片 print ("-----------------" + str(n_batch) + " batch------------") #将所有的图片resize成100*100 w=100 h=100 c=3 #-----------------构建网络---------------------- #占位符 
#-----------------构建网络---------------------- #占位符 x = tf.placeholder(tf.float32,[None,100,100,3],name='x-input') #图片像素 转换 一维向量,行与批次有关,none 代表行,列是784 y_=tf.placeholder(tf.float32,shape=[None,6],name='y_') def inference(input_tensor, train, regularizer): with tf.variable_scope('layer1-conv1'): conv1_weights = tf.get_variable("weight",[5,5,3,32],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0)) conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME') relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases)) with tf.name_scope("layer2-pool1"): pool1 = tf.nn.max_pool(relu1, ksize = [1,2,2,1],strides=[1,2,2,1],padding="VALID") with tf.variable_scope("layer3-conv2"): conv2_weights = tf.get_variable("weight",[5,5,32,64],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0)) conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME') relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases)) with tf.name_scope("layer4-pool2"): pool2 = tf.nn.max_pool(relu2, ksize=[1, 2 , 2, 1], strides=[1, 2, 2, 1], padding='VALID') with tf.variable_scope("layer5-conv3"): conv3_weights = tf.get_variable("weight",[3,3,64,128],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0)) conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME') relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases)) with tf.name_scope("layer6-pool3"): pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') with tf.variable_scope("layer7-conv4"): conv4_weights = tf.get_variable("weight",[3,3,128,128],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0)) conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME') relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases)) with tf.name_scope("layer8-pool4"): pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') nodes = 6*6*128 reshaped = tf.reshape(pool4,[-1,nodes]) with tf.variable_scope('layer9-fc1'): fc1_weights = tf.get_variable("weight", [nodes, 1024], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc1_weights)) fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1)) fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases) if train: fc1 = tf.nn.dropout(fc1, 0.5) with tf.variable_scope('layer10-fc2'): fc2_weights = tf.get_variable("weight", [1024, 512], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc2_weights)) fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1)) fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases) if train: fc2 = tf.nn.dropout(fc2, 0.5) with tf.variable_scope('layer11-fc3'): fc3_weights = tf.get_variable("weight", [512, 6], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc3_weights)) fc3_biases = tf.get_variable("bias", [6], initializer=tf.constant_initializer(0.1)) logit = tf.matmul(fc2, fc3_weights) + fc3_biases return logit 
#---------------------------网络结束--------------------------- regularizer = tf.contrib.layers.l2_regularizer(0.0001) logits = inference(x,False,regularizer) #(小处理)将logits乘以1赋值给logits_eval,定义name,方便在后续调用模型时通过tensor名字调用输出tensor b = tf.constant(value=1,dtype=tf.float32) logits_eval = tf.multiply(logits,b,name='logits_eval') # loss=tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_) loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits) train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss) correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.float32), y_) acc= tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print("----------------------------") if __name__ == '__main__': range_num = 5 batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) ```
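Judging from the traceback (the failing node is the `Equal` op built by `correct_prediction`), one likely cause is that `tf.argmax(logits, 1)` has shape `[15]` while the one-hot placeholder `y_` has shape `[15, 6]`, and `tf.equal` cannot broadcast them. A sketch of the usual accuracy computation for one-hot labels (TF 1.x API, with hypothetical placeholders standing in for the ones in the question):

```
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 6])
y_ = tf.placeholder(tf.float32, [None, 6])

# Compare class indices with class indices, not indices with one-hot vectors.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```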
How to use numpy's lexsort in Python
I saw the following written online: import numpy as np >>> a array([[ 2, 7, 4, 2], [35, 9, 1, 5], [22, 12, 3, 2]]) Sort by the last column, ascending: >>> a[np.lexsort(a.T)] array([[22, 12, 3, 2], [ 2, 7, 4, 2], [35, 9, 1, 5]]) Sort by the last column, descending: >>> a[np.lexsort(-a.T)] array([[35, 9, 1, 5], [ 2, 7, 4, 2], [22, 12, 3, 2]]) Sort by the first column, ascending: >>> a[np.lexsort(a[:,::-1].T)] array([[ 2, 7, 4, 2], [22, 12, 3, 2], [35, 9, 1, 5]]) What does a[:,::-1].T actually mean? And if I now have a 10x8 array and want to sort it by the sixth column in ascending order, how should I write the code?
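A short note on both sub-questions (a sketch; "sixth column" is taken to mean index 5):

```
import numpy as np

a = np.random.randint(0, 100, (10, 8))

# a[:, ::-1] reverses the column order of a, and .T turns those columns into rows.
# np.lexsort treats its LAST row as the primary sort key, so a[np.lexsort(a[:, ::-1].T)]
# ends up sorting the rows of a by a's first column.

# Sorting a 10x8 array by its sixth column (index 5) in ascending order:
sorted_by_col6 = a[a[:, 5].argsort()]            # simplest: argsort on that one column
sorted_by_col6_lex = a[np.lexsort((a[:, 5],))]   # lexsort with a single key does the same (stably)
print(sorted_by_col6[:, 5])                      # the sixth column is now non-decreasing
```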
How can I modify this Python code so the data is written into the Excel file by column?
![图片说明](https://img-ask.csdn.net/upload/202002/16/1581860871_193142.png) **问题一:**找了挺多方法都没能顺利解决,目前觉得问题可能是在最后一个for循环。因为最后一个循环不是一行一行输出,而是排序后直接输出三行,所以使用 for i range()的办法也行不通。 <mark>截取最后一段代码 ``` for node_id in top_k: human_string = node_lookup.id_to_string(node_id) # 获取置信度 score = predictions[node_id] print('%s' % (human_string)) sheet.write(count,1,'%s\n' % (human_string)) ``` **问题二:**虽然图片我是按1.jpg、2.jpg、3.jpg....这样编码,但是有时候读取输出却不是按顺序输出的,而是乱的,比如2、3、1如下图。 ![图片说明](https://img-ask.csdn.net/upload/202002/16/1581863363_333569.png) **辛苦各位大佬啦!** **下面是完整代码** ``` import tensorflow as tf import os import xlwt import numpy as np import re from PIL import Image import matplotlib.pyplot as plt count=0 workbook = xlwt.Workbook() sheet = workbook.add_sheet("Sheet Name1") class NodeLookup(object): def __init__(self): label_lookup_path = 'inception_model/imagenet_2012_challenge_label_map_proto.pbtxt' uid_lookup_path = 'inception_model/imagenet_synset_to_human_label_map.txt' self.node_lookup = self.load(label_lookup_path, uid_lookup_path) def load(self, label_lookup_path, uid_lookup_path): # 通过tensorflow读文件方法把文件读入,加载分类字符转 'n*******' 对应各分类名称的文件 proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines() uid_to_human={} # 一行一行读取数据 for line in proto_as_ascii_lines: # 去掉换行符 line = line.strip('\n') # 按照 '\t' 分割 parsed_items = line.split('\t') uid = parsed_items[0] human_string = parsed_items[1] # 保存编号字符串与分类名称的关系 uid_to_human[uid] = human_string # 加载分类字符串n*******对应分类编号1-1000的文件 proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines() node_id_to_uid = {} for line in proto_as_ascii: if line.strip().startswith('target_class:'): target_class = int(line.strip().split(':')[1]) elif line.strip().startswith('target_class_'): target_class_string = line.strip().split(':')[1].strip() node_id_to_uid[target_class] = target_class_string[1:-1] # 建立分类编号 1-1000 与对应分类名称的映射关系 node_id_to_name = {} for key,val in node_id_to_uid.items(): name = uid_to_human[val] node_id_to_name[key] = name # 最后得到如 {449: 'tench, Tinca tinca', ...} return node_id_to_name # 传入分类编号1-1000 返回分类名称,因为 inception-v3 分类结果返回的是编号不是直接给名称 def id_to_string(self, node_id): if node_id not in self.node_lookup: return '' return self.node_lookup[node_id] # 创建一个图来存放google训练好的模型 with tf.gfile.FastGFile('inception_model/classify_image_graph_def.pb','rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) tf.import_graph_def(graph_def, name='') node_lookup = NodeLookup() with tf.Session() as sess: # 拿到softmax的op # 'softmax:0'这个名字,可以在网络中找到这个节点,它的名字就'(softmax)', softmax_tensor = sess.graph.get_tensor_by_name('softmax:0') for root,dirs,files in os.walk('images/'): ###把要测图片放入Images文件夹 for file in files: image_data = tf.gfile.FastGFile(os.path.join(root,file),'rb').read() # 运行softmax节点,向其中feed值 # 可以在网络中找到这个名字,DecodeJpeg/contents, # 据此可以发现,根据名字取网络中op时,如果其名字带括号,就用括号内的名字,如果不带括号,就用右上角介绍的名字。 # 而带个0,是默认情况,如果网络中出现同名节点,这个编号会递增 predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0':image_data}) predictions = np.squeeze(predictions)# 把结果转化为1维数据 image_path = os.path.join(root, file) print(image_path) img = Image.open(image_path) sheet.write(count,0, image_path) # row, column, value # 排序,拿概率最大的3个值,然后再对这3个值倒序 top_k = predictions.argsort()[-3:][::-1] for node_id in top_k: human_string = node_lookup.id_to_string(node_id) # 获取置信度 score = predictions[node_id] print('%s' % (human_string)) sheet.write(count,1,'%s\n' % (human_string)) count=count+1 workbook.save('Excel_test1.xls') ```
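Two small sketches addressing the two sub-questions, with stand-in data in place of the Inception pipeline (the variable names follow the posted code but are hypothetical here):

```
import os
import xlwt

# Question 2: os.walk returns file names in arbitrary order; sorting them numerically
# (by the digits before ".jpg") makes 1.jpg, 2.jpg, ... come out in sequence.
files = ['2.jpg', '10.jpg', '1.jpg', '3.jpg']
files = sorted(files, key=lambda f: int(os.path.splitext(f)[0]))
print(files)                                       # ['1.jpg', '2.jpg', '3.jpg', '10.jpg']

# Question 1: write the three predictions of one image into columns 1-3 of the same row,
# instead of writing them all to column 1, where each write overwrites the previous one.
workbook = xlwt.Workbook()
sheet = workbook.add_sheet("Sheet Name1")
for count, name in enumerate(files):
    sheet.write(count, 0, name)                    # column 0: the image path/name
    top_3_labels = ['labelA', 'labelB', 'labelC']  # stand-in for the top_k lookup results
    for col, human_string in enumerate(top_3_labels, start=1):
        sheet.write(count, col, human_string)
workbook.save('Excel_test1.xls')
```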
Questions about normalization versus numpy.log for preprocessing data
In data mining, data is often normalized, for example with StandardNormalization; the benefit of that kind of normalization is that it handles abnormal, outlying values well. numpy.log can also deal with some of those outlying values, and after the log transform the histogram looks much closer to a Gaussian distribution. My questions are: 1. In some big-data-mining material I saw online, LogisticRegressor was used without normalizing the data first — is normalization not strictly necessary? 2. If StandardNormalization is applied, is that roughly equivalent to the effect of a log transform, with the values additionally confined to a smaller range? 3. When using LogisticRegressor in data mining, can the outlying values simply be standardized directly with StandardNormalization, without a log transform?
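On question 2, standardization and a log transform are not equivalent: z-scoring is a linear rescaling, so it leaves the shape of the distribution (including the skewness caused by outliers) unchanged, while log is nonlinear and actually compresses the long tail. A small sketch that makes the difference visible:

```
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0, sigma=1, size=10000)   # heavily right-skewed data

z = (x - x.mean()) / x.std()                     # StandardScaler-style normalization

print(round(skew(x), 2), round(skew(z), 2))      # identical skewness: a linear transform changes nothing
print(round(skew(np.log(x)), 2))                 # close to 0: log makes it near-Gaussian
```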
Using stack/unstack without automatic re-sorting
I have a dataset that was already sorted by the value of C1 in descending order, but after reshaping it with unstack the rows are re-sorted alphabetically by the row index. Is there any way to keep stack/unstack from re-sorting?
```
import pandas as pd
import numpy as np
df = pd.DataFrame(data={'i1':['x','x','x','x','y','y','y','y'],
                        'i2':['f','a','w','h','f','a','w','h'],
                        'c1':[20,15,10,9,8,7,6,4],
                        'c2':[23,6,34,78,13,45,67,8]})
df.set_index(['i1','i2'],inplace=True)
cols = df.columns
df1 = df.unstack('i1')
print (df)
print (df1)
```
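`unstack` always sorts the resulting index; one common workaround (a sketch, not the only option) is to remember the original `i2` order and reindex afterwards:

```
import pandas as pd

df = pd.DataFrame(data={'i1': ['x','x','x','x','y','y','y','y'],
                        'i2': ['f','a','w','h','f','a','w','h'],
                        'c1': [20,15,10,9,8,7,6,4],
                        'c2': [23,6,34,78,13,45,67,8]})
df.set_index(['i1','i2'], inplace=True)

# Remember the i2 order as it appears in the (already sorted) data ...
original_order = df.index.get_level_values('i2').unique()   # Index(['f', 'a', 'w', 'h'])

# ... unstack (which sorts), then restore the original row order.
df1 = df.unstack('i1').reindex(original_order)
print(df1)
```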
LSTM encoder plus a three-layer perceptron for predicting pedestrian trajectories: why won't the loss go down?
我用两层的lstm编码坐标,然后用三层感知器解码,预测后五帧的轨迹,用的是mse和adam,尝试了从0.00001到0.3的学习率,batch size也改过train loss一直在小幅度的波动,test loss一直不变,想请问出现这种情况是可能是什么原因? ``` import torch import torch.nn as nn import torch.utils.data as data import numpy as np import random from torch import optim from helper import AverageMeter, adjust_learning_rate class net(torch.nn.Module): def __init__(self): super(net, self).__init__() self.gru = nn.GRU(2, 128, 2) self.lstm = nn.LSTM(2, 128, 2) def forward(self, input1, hidden): next_h_list = [] out_list = [] for i in range(20): input_tr = input1[:,i,:].unsqueeze(0) output, next_h = self.lstm(input_tr, (hidden[i][0] , hidden[i][1])) next_h_list.append(next_h) out_list.append(output.squeeze(0)) output_tr = torch.stack(out_list, 1) return output_tr, next_h_list def init_hidden(self, batch_size): return [[torch.zeros(2, batch_size, 128, requires_grad=True).cuda() for _ in range(2)] for _ in range(20)] #这里的维度(2,5,128)是从外向内的,最里面是128维 class decoder(torch.nn.Module): def __init__(self): super(decoder, self).__init__() self.fc1 = torch.nn.Linear(128 , 2) def forward(self, input): de_list = [] for i in range(20): output = self.fc1(input[:,i,:]) de_list.append(output) out = torch.stack(de_list, 1) return out class Model: def init(self): self.lr = 0.1 self.weight_decay = 5e-3 self.n_epochs = 500 self.loss_list = [] self.time_window = 300 self.window_size = 1000 self.m1 = net().cuda() self.m2 = decoder().cuda() self.m1_optim = optim.Adam(self.m1.parameters(), lr = self.lr, weight_decay = self.weight_decay) self.m2_optim = optim.Adam(self.m1.parameters(), lr = self.lr, weight_decay = self.weight_decay) self.batch_size = 256 self.test_times = 5 self.dataload('data/GC.npz') def dataload(self, path): data = np.load(path) tr_X, tr_Y = data['train_X'], data['train_Y'] te_X, te_Y = data['test_X'], data['test_Y'] tr_input = torch.FloatTensor(tr_X) tr_target = torch.FloatTensor(tr_Y) self.tr_input = tr_input.cuda() self.tr_target = tr_target.cuda() self.te_input = torch.FloatTensor(te_X).cuda() self.te_target = torch.FloatTensor(te_Y).cuda() # data loader train = torch.utils.data.TensorDataset(tr_input, tr_target) self.train_loader = torch.utils.data.DataLoader(train, batch_size=self.batch_size, shuffle=True, num_workers=4) def run(self, tr_input, tr_target): batch_size = tr_input.size(0) encoder_hidden = self.m1.init_hidden(batch_size) #print('batch_size: ',batch_size) tr_final = tr_input[:,:,4] #选择第五帧 for i in range(4): output , hidden = self.m1(tr_input[:,:,i], encoder_hidden) re_list = [] for i in range(5): output , hidden = self.m1(tr_final, encoder_hidden) output_decoder = self.m2(output) tr_final = output_decoder re_list.append(output_decoder) predict = torch.stack(re_list, 2) #loss L2_loss = ((tr_target - predict) **2).sum() / 20 MSE_loss = ((tr_target - predict) **2).sum(3).sqrt().mean() self.loss = L2_loss return predict, L2_loss.item(), MSE_loss.item() def train(self, epoch): MSE_loss_meter = AverageMeter() L2_square_loss_meter = AverageMeter() adjust_learning_rate([self.m1_optim, self.m2_optim], self.lr, epoch) for i ,(tr_input, tr_target)in enumerate(self.train_loader): tr_input = tr_input.cuda() tr_target = tr_target.cuda() self.m1_optim.zero_grad() self.m2_optim.zero_grad() predict, L2_loss, MSE_loss = self.run(tr_input, tr_target) MSE_loss_meter.update(MSE_loss) L2_square_loss_meter.update(L2_loss) self.loss.backward() self.m1_optim.step() self.m2_optim.step() return MSE_loss_meter.avg, L2_square_loss_meter.avg def test(self): with torch.no_grad(): predi, L2_square_lo, MSE_lo = 
self.run(self.te_input, self.te_target) return MSE_lo, L2_square_lo def final(self, epoch): self.init() for i in range(1, epoch + 1): MSE_loss, L2_square_loss = self.train(epoch) print('----------------epoch------------------: ',i+1) print('mse: ', MSE_loss) print('l2: ', L2_square_loss) self.loss_list.append(MSE_loss) if i % self.test_times == 0: test_loss_MSE , test_loss_L2 = self.test() print('----TEST----\n' + 'MSE Loss:%s' % test_loss_MSE) print('----TEST----\n' + 'L2 Loss:%s' % test_loss_L2) def set_random_seed(random_seed=0): np.random.seed(random_seed) torch.manual_seed(random_seed) torch.cuda.manual_seed_all(random_seed) random.seed(random_seed) def main(): set_random_seed() M = Model() M.final(1000) if __name__ == '__main__': main() ```
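One thing that stands out in the posted code (a guess, not a full diagnosis): both optimizers are constructed from `self.m1.parameters()`, so the decoder `m2` never receives gradient updates and its random initial weights never improve; a learning rate of 0.1 with Adam is also unusually high. A sketch of the usual setup, with hypothetical stand-ins for the encoder/decoder:

```
import torch
from torch import nn, optim

m1 = nn.LSTM(2, 128, 2)     # stand-in for the encoder in the question
m2 = nn.Linear(128, 2)      # stand-in for the MLP decoder

# Give each optimizer its own module's parameters ...
m1_optim = optim.Adam(m1.parameters(), lr=1e-3)
m2_optim = optim.Adam(m2.parameters(), lr=1e-3)   # <-- not m1.parameters()

# ... or simply use one optimizer over both modules.
optimizer = optim.Adam(list(m1.parameters()) + list(m2.parameters()), lr=1e-3)
```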
Problem encountered while processing data in Python
import pandas as pd import numpy as np import matplotlib.pyplot as plt import csv import fileinput import time pd.options.display.max_columns=None start = time.time() data=pd.read_csv('C:\\Users\\丹心傲雪\\Desktop\\毕业论文冲鸭\\1001-CD\\1001-CD.txt') #The path of data file data.columns=['carid','orderid','time','longitude','latitude'] #添加列标签 orderid_list=np.array(data['orderid'].drop_duplicates()) #订单号列表 columns=['carid','orderid','starttime','endtime','longitude','latitude'] data_bak = pd.DataFrame(columns=columns) append_dic = {} data_end_time = [] for i in range(len(orderid_list)): order=data[data['orderid']==orderid_list[i]]#根据订单号筛选数据 order.sort_values("time",inplace=True) #对同一订单的时间进行排序 order=np.array(order) #将df转为array for j in range(len(order[0])): append_dic[columns[j]] = order[0][j] append_dic['endtime'] = order[-1][2] data_bak = data_bak.append([append_dic],ignore_index=True) data_bak.to_csv('data.csv',index=False) end = time.time() print("运行时间:%.2f秒"%(end-start)) ``` ```
Matrix multiplication error (Python 3.7; jupyter notebook 6.0.0; numpy 1.16.4) — what should I do?
本人六年级小学生,很想进军人工智能领域,于是搞了本《Python神经网络编程》来看(图1),安装搭建了jupyter,可是在训练神经网络(更新权重那一步)的时候出了问题,也不只是怎么回事,打了ipdb断点,一查,阵矩大小、形状都没有问题,可是jupyter一直报错矩阵无法相乘,求大神解答(更多信息见正文) ![《Python神经网络编程》](https://img-ask.csdn.net/upload/202001/22/1579668143_1165.jpg) 先贴一下错误信息: > ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 3 is different from 5) 我百度了一下,这个好像是阵矩1的行与阵矩2的列不相同,我再次断点调试,各种一番捯饬,最后……额还是没找出问题…… 再贴一下诡异的jupyter运行截图: ![运行截图](https://img-ask.csdn.net/upload/202001/22/1579680986_443189.png) 图中可以看到框1是将要执行的语句,但是上一句打了ipdb断点,之后我在断点环境下执行了代码,结果成功了!!!成功了!!!可是我按下c之后就又报错了!!!求教啊,神马蛇皮走位?于是我再把断点去掉,又是同样的报错,放到导出py文件后本地执行也报错,神马情况??? 再贴一下train函数截图: ![图片说明](https://img-ask.csdn.net/upload/202001/22/1579680754_198010.png) 最后附上全部代码,方便大家解答: ```python import numpy import scipy.special # 神经网络 class Network: """神经网络类""" def __init__(self, nodes, l_rate): """神经网络初始化方法""" self.nodes = nodes self.lr = l_rate self.layer_num = len(nodes) self.weight_num = self.layer_num -1 self._init_layers() self._init_weights() self.activ = scipy.special.expit def _init_layers(self): """初始化所有神经层""" self.layers = [None for _ in range(self.layer_num)] for i in range(self.layer_num): # 本层的神经数 node_num = self.nodes[i] # 初始化本层神经,用0填充 self.layers[i] = numpy.zeros((node_num,)).T def _init_weights(self): """初始化所有权重""" self.weights = [None for _ in range(self.weight_num)] for i in range(self.weight_num): # 初始化本层权重,用[1/三√下一层神经数)]的正态分布填充 # 参数:1.正态分布中心 2.[1/(√下一层神经数)] 3.权重阵矩大小 self.weights[i] = numpy.random.normal(0.0, pow(self.nodes[i+1], -0.5), (self.nodes[i+1], self.nodes[i])) def query(self, inputs): """查询神经网络的结果""" assert len(inputs) is self.nodes[0] for i in range(self.layer_num): # 如果第是一遍遍历,将最后一次结果设为输入值 if i is 0: self.layers[0] = numpy.array(inputs, ndmin=2).T continue # 本轮神经计算值 self.layers[i] = self.weights[i - 1] @ self.layers[i - 1] self.layers[i] = self.activ(self.layers[i]) return self.layers[-1] def train(self, inputs, targets): """对神经网络进行训练""" assert len(targets) is self.nodes[-1] last_result = self.query(inputs) for i in reversed(range(self.weight_num)): if i is self.weight_num-1: errors = numpy.array(targets, ndmin=2).T - last_result else: errors = self.weights[i + 1] @ errors import ipdb; ipdb.set_trace() # 问题出在这一行↓↓↓ self.weights[i] += self.lr *((errors * self.layers[i+1] * (1.0 - self.layers[i+1])) @ self.layers[i].T) if __name__ == '__main__': nw = Network((3, 5, 5, 3), 0.01) for _ in range(10000): nw.train([1, 1, 1], [1, 1, 1]) results = nw.query([1, 1, 1]) print(results) ```
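A likely cause, given the (3, 5, 5, 3) layer sizes: when the error is propagated backwards, the weight matrix has to be transposed; `self.weights[i + 1]` has shape (3, 5) while `errors` has shape (3, 1), which is exactly the "size 3 is different from 5" mismatch. A small sketch of the shapes involved:

```
import numpy as np

# Shapes in the (3, 5, 5, 3) network from the question: the weights between the last
# hidden layer and the output layer are (3, 5), and the output-layer error column is (3, 1).
w_last = np.random.normal(0.0, 5 ** -0.5, (3, 5))
output_errors = np.random.randn(3, 1)

hidden_errors = w_last.T @ output_errors   # (5, 3) @ (3, 1) -> (5, 1): works
# w_last @ output_errors                   # (3, 5) @ (3, 1): raises the ValueError above

# In train(), the corresponding fix would be (hypothetical, mirroring the posted code):
#     errors = self.weights[i + 1].T @ errors
```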
Getting TypeError: title() missing 1 required positional argument: 'label' — looking for a solution.
```
import numpy as np
import matplotlib.pyplot as plt
plt.subplot(111,polar = True)
dataLenth = 5
angles = np.linspace(0,2*np.pi,dataLenth,endpoint=False)
labels =['沟通能力','业务理解能力','逻辑思维能力','快速学习能力','工具使用能力']
data = [2,3.5,4,4.5,5]
data = np.concatenate((data, [data[0]]))
angles = np.concatenate((angles, [angles[0]]))
plt.polar(angles,data,color = "r",marker = "o")
plt.xticks(angles,labels)
plt.title(t = "某数据分析师的综合评级")
plt.savefig("D:/pyx/polarplot.jpg")
```
That is the relevant code; running it raises TypeError: title() missing 1 required positional argument: 'label'. How do I fix it?
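`plt.title` takes the text as its first positional argument, whose parameter name is `label`, so `plt.title(t=...)` leaves `label` unfilled. A sketch of the fix (the save path here is a placeholder, everything else follows the posted code):

```
import numpy as np
import matplotlib.pyplot as plt

dataLenth = 5
angles = np.linspace(0, 2 * np.pi, dataLenth, endpoint=False)
labels = ['沟通能力', '业务理解能力', '逻辑思维能力', '快速学习能力', '工具使用能力']
data = [2, 3.5, 4, 4.5, 5]
data = np.concatenate((data, [data[0]]))
angles = np.concatenate((angles, [angles[0]]))

plt.subplot(111, polar=True)
plt.polar(angles, data, color="r", marker="o")
plt.xticks(angles[:-1], labels)              # one tick angle per label
plt.title("某数据分析师的综合评级")            # pass the text positionally ...
# plt.title(label="某数据分析师的综合评级")    # ... or with the documented keyword
plt.savefig("polarplot.jpg")
```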
TensorFlow error: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3] — but the fed data looks fine however I check it; please advise
调试googlenet的代码,总是报错 InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3],但是我怎么看喂的数据都没问题,请大神们指点 ``` # -*- coding: utf-8 -*- """ GoogleNet也被称为InceptionNet Created on Mon Feb 10 12:15:35 2020 @author: 月光下的云海 """ import tensorflow as tf from keras.datasets import cifar10 import numpy as np import tensorflow.contrib.slim as slim tf.reset_default_graph() tf.reset_default_graph() (x_train,y_train),(x_test,y_test) = cifar10.load_data() x_train = x_train.astype('float32') x_test = x_test.astype('float32') y_train = y_train.astype('int32') y_test = y_test.astype('int32') y_train = y_train.reshape(y_train.shape[0]) y_test = y_test.reshape(y_test.shape[0]) x_train = x_train/255 x_test = x_test/255 #************************************************ 构建inception ************************************************ #构建一个多分支的网络结构 #INPUTS: # d0_1:最左边的分支,分支0,大小为1*1的卷积核个数 # d1_1:左数第二个分支,分支1,大小为1*1的卷积核的个数 # d1_3:左数第二个分支,分支1,大小为3*3的卷积核的个数 # d2_1:左数第三个分支,分支2,大小为1*1的卷积核的个数 # d2_5:左数第三个分支,分支2,大小为5*5的卷积核的个数 # d3_1:左数第四个分支,分支3,大小为1*1的卷积核的个数 # scope:参数域名称 # reuse:是否重复使用 #*************************************************************************************************************** def inception(x,d0_1,d1_1,d1_3,d2_1,d2_5,d3_1,scope = 'inception',reuse = None): with tf.variable_scope(scope,reuse = reuse): #slim.conv2d,slim.max_pool2d的默认参数都放在了slim的参数域里面 with slim.arg_scope([slim.conv2d,slim.max_pool2d],stride = 1,padding = 'SAME'): #第一个分支 with tf.variable_scope('branch0'): branch_0 = slim.conv2d(x,d0_1,[1,1],scope = 'conv_1x1') #第二个分支 with tf.variable_scope('branch1'): branch_1 = slim.conv2d(x,d1_1,[1,1],scope = 'conv_1x1') branch_1 = slim.conv2d(branch_1,d1_3,[3,3],scope = 'conv_3x3') #第三个分支 with tf.variable_scope('branch2'): branch_2 = slim.conv2d(x,d2_1,[1,1],scope = 'conv_1x1') branch_2 = slim.conv2d(branch_2,d2_5,[5,5],scope = 'conv_5x5') #第四个分支 with tf.variable_scope('branch3'): branch_3 = slim.max_pool2d(x,[3,3],scope = 'max_pool') branch_3 = slim.conv2d(branch_3,d3_1,[1,1],scope = 'conv_1x1') #连接 net = tf.concat([branch_0,branch_1,branch_2,branch_3],axis = -1) return net #*************************************** 使用inception构建GoogleNet ********************************************* #使用inception构建GoogleNet #INPUTS: # inputs-----------输入 # num_classes------输出类别数目 # is_trainning-----batch_norm层是否使用训练模式,batch_norm和is_trainning密切相关 # 当is_trainning = True 时候,它使用一个batch数据的平均移动,方差值 # 当is_trainning = Flase时候,它就使用固定的值 # verbos-----------控制打印信息 # reuse------------是否重复使用 #*************************************************************************************************************** def googlenet(inputs,num_classes,reuse = None,is_trainning = None,verbose = False): with slim.arg_scope([slim.batch_norm],is_training = is_trainning): with slim.arg_scope([slim.conv2d,slim.max_pool2d,slim.avg_pool2d], padding = 'SAME',stride = 1): net = inputs #googlnet的第一个块 with tf.variable_scope('block1',reuse = reuse): net = slim.conv2d(net,64,[5,5],stride = 2,scope = 'conv_5x5') if verbose: print('block1 output:{}'.format(net.shape)) #googlenet的第二个块 with tf.variable_scope('block2',reuse = reuse): net = slim.conv2d(net,64,[1,1],scope = 'conv_1x1') net = slim.conv2d(net,192,[3,3],scope = 'conv_3x3') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block2 output:{}'.format(net.shape)) #googlenet第三个块 with tf.variable_scope('block3',reuse = reuse): net = inception(net,64,96,128,16,32,32,scope = 'inception_1') net = 
inception(net,128,128,192,32,96,64,scope = 'inception_2') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block3 output:{}'.format(net.shape)) #googlenet第四个块 with tf.variable_scope('block4',reuse = reuse): net = inception(net,192,96,208,16,48,64,scope = 'inception_1') net = inception(net,160,112,224,24,64,64,scope = 'inception_2') net = inception(net,128,128,256,24,64,64,scope = 'inception_3') net = inception(net,112,144,288,24,64,64,scope = 'inception_4') net = inception(net,256,160,320,32,128,128,scope = 'inception_5') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block4 output:{}'.format(net.shape)) #googlenet第五个块 with tf.variable_scope('block5',reuse = reuse): net = inception(net,256,160,320,32,128,128,scope = 'inception1') net = inception(net,384,182,384,48,128,128,scope = 'inception2') net = slim.avg_pool2d(net,[2,2],stride = 2,scope = 'avg_pool') if verbose: print('block5 output:{}'.format(net.shape)) #最后一块 with tf.variable_scope('classification',reuse = reuse): net = slim.flatten(net) net = slim.fully_connected(net,num_classes,activation_fn = None,normalizer_fn = None,scope = 'logit') if verbose: print('classification output:{}'.format(net.shape)) return net #给卷积层设置默认的激活函数和batch_norm with slim.arg_scope([slim.conv2d],activation_fn = tf.nn.relu,normalizer_fn = slim.batch_norm) as sc: conv_scope = sc is_trainning_ph = tf.placeholder(tf.bool,name = 'is_trainning') #定义占位符 x_train_ph = tf.placeholder(shape = (None,x_train.shape[1],x_train.shape[2],x_train.shape[3]),dtype = tf.float32) x_test_ph = tf.placeholder(shape = (None,x_test.shape[1],x_test.shape[2],x_test.shape[3]),dtype = tf.float32) y_train_ph = tf.placeholder(shape = (None,),dtype = tf.int32) y_test_ph = tf.placeholder(shape = (None,),dtype = tf.int32) #实例化网络 with slim.arg_scope(conv_scope): train_out = googlenet(x_train_ph,10,is_trainning = is_trainning_ph,verbose = True) val_out = googlenet(x_test_ph,10,is_trainning = is_trainning_ph,reuse = True) #定义loss和acc with tf.variable_scope('loss'): train_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_train_ph,logits = train_out,scope = 'train') val_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_test_ph,logits = val_out,scope = 'val') with tf.name_scope('accurcay'): train_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(train_out,axis = -1,output_type = tf.int32),y_train_ph),tf.float32)) val_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(val_out,axis = -1,output_type = tf.int32),y_test_ph),tf.float32)) #定义训练op lr = 1e-2 opt = tf.train.MomentumOptimizer(lr,momentum = 0.9) #通过tf.get_collection获得所有需要更新的op update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS) #使用tesorflow控制流,先执行update_op再进行loss最小化 with tf.control_dependencies(update_op): train_op = opt.minimize(train_loss) #开启会话 sess = tf.Session() saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) batch_size = 64 #开始训练 for e in range(10000): batch1 = np.random.randint(0,50000,size = batch_size) t_x_train = x_train[batch1][:][:][:] t_y_train = y_train[batch1] batch2 = np.random.randint(0,10000,size = batch_size) t_x_test = x_test[batch2][:][:][:] t_y_test = y_test[batch2] sess.run(train_op,feed_dict = {x_train_ph:t_x_train, is_trainning_ph:True, y_train_ph:t_y_train}) # if(e%1000 == 999): # loss_train,acc_train = sess.run([train_loss,train_acc], # feed_dict = {x_train_ph:t_x_train, # is_trainning_ph:True, # y_train_ph:t_y_train}) # loss_test,acc_test = sess.run([val_loss,val_acc], # feed_dict = {x_test_ph:t_x_test, # 
is_trainning_ph:False, # y_test_ph:t_y_test}) # print('STEP{}:train_loss:{:.6f} train_acc:{:.6f} test_loss:{:.6f} test_acc:{:.6f}' # .format(e+1,loss_train,acc_train,loss_test,acc_test)) saver.save(sess = sess,save_path = 'VGGModel\model.ckpt') print('Train Done!!') print('--'*60) sess.close() ``` 报错信息是 ``` Using TensorFlow backend. block1 output:(?, 16, 16, 64) block2 output:(?, 8, 8, 192) block3 output:(?, 4, 4, 480) block4 output:(?, 2, 2, 832) block5 output:(?, 1, 1, 1024) classification output:(?, 10) Traceback (most recent call last): File "<ipython-input-1-6385a760fe16>", line 1, in <module> runfile('F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py', wdir='F:/Project/TEMP/LearnTF/GoogleNet') File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py", line 177, in <module> y_train_ph:t_y_train}) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 900, in run run_metadata_ptr) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run run_metadata) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call raise type(e)(node_def, op, message) InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3] [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,32,32,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]] [[Node: gradients/block4/inception_4/concat_grad/ShapeN/_45 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_23694_gradients/block4/inception_4/concat_grad/ShapeN", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] ``` 看了好多遍都不是喂数据的问题,百度说是summary出了问题,可是我也没有summary呀,头晕~~~~
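A plausible cause (a hypothesis based on the posted code, not a verified fix): slim's `batch_norm` registers its moving-average updates in `tf.GraphKeys.UPDATE_OPS`, and `tf.get_collection` is called after *both* the training and the validation graphs are built, so `update_op` also contains ops that depend on `x_test_ph` ('Placeholder_1'); wrapping `train_op` in `control_dependencies(update_op)` then forces those validation-graph updates to run at training time without their placeholder being fed. One workaround is to collect the update ops right after the training graph, before the validation graph exists. A minimal TF 1.x sketch of that ordering (using `tf.layers` as a stand-in for the slim network):

```
import tensorflow as tf

x_train_ph = tf.placeholder(tf.float32, [None, 32, 32, 3])
x_test_ph = tf.placeholder(tf.float32, [None, 32, 32, 3])
is_training = tf.placeholder(tf.bool)

def tiny_net(x, reuse=None):
    with tf.variable_scope('net', reuse=reuse):
        h = tf.layers.conv2d(x, 8, 3)
        h = tf.layers.batch_normalization(h, training=is_training)
        return tf.reduce_mean(h)

train_out = tiny_net(x_train_ph)

# Grab the batch-norm update ops *now*, before the validation graph is built,
# so none of them depend on x_test_ph.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

val_out = tiny_net(x_test_ph, reuse=True)

with tf.control_dependencies(update_ops):
    train_op = tf.train.MomentumOptimizer(1e-2, 0.9).minimize(train_out)
```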
TensorFlow downloads the Fashion MNIST dataset the first time it runs; the download was probably interrupted by a flaky connection and now it errors out — what should I do?
```**tensorflow在第一次运行Fashion MNIST会下载数据集,应该网络不好中断了报错不知咋办?** 代码如下: !/usr/bin/python _*_ coding: utf-8 -*- from __future__ import print_function import tensorflow as tf import matplotlib as mpl import matplotlib.pyplot as plt %matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time from tensorflow import keras print (tf.__version__) print (sys.version_info) for module in mpl ,np, pd, sklearn, tf, keras: print (module.__name__,module.__version__) fashion_mnist = keras.datasets.fashion_mnist (x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data() x_valid,x_train = x_train_all[:5000],x_train_all[5000:] y_valid,y_train = y_train_all[:5000],y_train_all[5000:] print (x_valid.shape, y_valid.shape) print (x_train.shape, y_train.shape) print (x_test.shape, y_test.shape) ``` ``` 报错如下: 2.1.0 sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0) matplotlib 2.2.5 numpy 1.16.6 pandas 0.24.2 sklearn 0.20.4 tensorflow 2.1.0 tensorflow_core.python.keras.api._v2.keras 2.2.4-tf Traceback (most recent call last): File "/home/join/test_demo/test2.py", line 26, in <module> (x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data() File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/keras/data sets/fashion_mnist.py", line 59, in load_data imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28) File "/usr/lib/python2.7/gzip.py", line 261, in read self._read(readsize) File "/usr/lib/python2.7/gzip.py", line 315, in _read self._read_eof() File "/usr/lib/python2.7/gzip.py", line 354, in _read_eof hex(self.crc))) IOError: CRC check failed 0xa445bb78 != 0xe7f80d 3fL ``` ```
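The CRC failure usually means the cached download was interrupted and the archive is corrupt, rather than anything being wrong with the code. Keras normally caches the Fashion-MNIST archives under `~/.keras/datasets/fashion-mnist/` (the exact path can vary per install); deleting them forces a clean re-download, as in this sketch:

```
import glob
import os

# Remove the (corrupt) cached archives so load_data() downloads them again.
cache_dir = os.path.expanduser('~/.keras/datasets/fashion-mnist')
for path in glob.glob(os.path.join(cache_dir, '*.gz')):
    os.remove(path)

# from tensorflow import keras
# (x_train_all, y_train_all), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
```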
Python program written in VS Code shows no result after running?
```
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0, 4, 0.1)
plt.plot(t,t,t,t+2,t,t**2)
```
Both numpy and matplotlib are installed at their latest versions.
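In a plain script (as opposed to a Jupyter notebook with `%matplotlib inline`), the figure window is only drawn when `plt.show()` is called; without it the script simply exits before anything appears. A sketch:

```
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 4, 0.1)
plt.plot(t, t, t, t + 2, t, t ** 2)
plt.show()   # blocks and displays the window; omit this and a plain script shows nothing
```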
How to delete specific rows of a data matrix with numpy in Python
I'm new to Python. The data is about 40 MB in txt format; after loading it with numpy's loadtxt I need to delete every row whose second column is empty. My code is:
```
import numpy as np
data=np.loadtxt('GSE4187.txt',delimiter='\t',skiprows=0,dtype='str')
def row(c,data):
    a,b=np.where(data==c)
    return int(a)
for i in range(data.shape[0]):
    c=data[i][1]
    if c is '':
        data=np.delete(arr,row(c,data),axis=0)
```
I've tried for a long time but the result is still the original data with no change at all. Any pointers would be greatly appreciated!
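Two issues in the loop are worth noting: `c is ''` tests object identity rather than equality (it should be `c == ''`), and `np.delete(arr, ...)` refers to `arr` instead of `data`. But the whole loop can be replaced by a single boolean mask; a sketch with a small stand-in array:

```
import numpy as np

# Stand-in for the loadtxt result: string array where some rows have an empty second column.
data = np.array([['g1', '12.3'],
                 ['g2', ''],
                 ['g3', '7.8']], dtype=str)

# Keep only the rows whose second column is non-empty -- no per-row deletion needed.
data = data[data[:, 1] != '']
print(data)
```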
Looking for an approach: does a numpy array contain regions where 0 repeats more than n times?
I'm a complete beginner and need Python for work, so I'm looking for an approach. I need to scan a one-dimensional numpy array of roughly 100,000 entries, where the "empty" regions to detect consist of the value 0. The material I found online says to use a loop; I can work out how to write that for a single empty region, but how do I handle multiple regions?
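One loop-free approach (a sketch): mark where runs of zeros start and end with `np.diff` on a padded 0/1 mask, then keep only the runs whose length is at least n. This finds every qualifying region, not just the first, and scales comfortably to ~100,000 elements:

```
import numpy as np

def zero_runs_at_least(a, n):
    """Return (start, end) index pairs of runs of zeros in a whose length is >= n."""
    is_zero = np.concatenate(([0], (a == 0).astype(int), [0]))
    edges = np.diff(is_zero)
    starts = np.where(edges == 1)[0]    # index where a zero-run begins
    ends = np.where(edges == -1)[0]     # index one past where it ends
    runs = np.stack([starts, ends], axis=1)
    return runs[(ends - starts) >= n]

a = np.array([1, 0, 0, 0, 5, 0, 0, 7, 0, 0, 0, 0])
print(zero_runs_at_least(a, 3))         # [[1 4] [8 12]]: zeros at 1..3 and 8..11
```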
Errors when installing a Python package from a tar.gz file: No module named 'numpy.distutils._msvccompiler' in numpy.distutils, and Unable to find vcvarsall.bat
系统win10 64位,python版本3.7.4。 在网上下载了scikit-learn-0.22.tar,解压后利用python setup.py install进行安装时报错。 代码如下: ``` PS C:\Users\TH.Liu> cd E:\python\Scripts\scikit-learn-0.22\scikit-learn-0.22 PS E:\python\Scripts\scikit-learn-0.22\scikit-learn-0.22> python setup.py install Partial import of sklearn during the build process. E:\python\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'project_urls' warnings.warn(msg) E:\python\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg) E:\python\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'install_requires' warnings.warn(msg) No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils Traceback (most recent call last): File "setup.py", line 303, in <module> setup_package() File "setup.py", line 299, in setup_package setup(**metadata) File "E:\python\lib\site-packages\numpy\distutils\core.py", line 137, in setup config = configuration() File "setup.py", line 182, in configuration config.add_subpackage('sklearn') File "E:\python\lib\site-packages\numpy\distutils\misc_util.py", line 1035, in add_subpackage caller_level = 2) File "E:\python\lib\site-packages\numpy\distutils\misc_util.py", line 1004, in get_subpackage caller_level = caller_level + 1) File "E:\python\lib\site-packages\numpy\distutils\misc_util.py", line 941, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "sklearn\setup.py", line 86, in configuration cythonize_extensions(top_path, config) File "E:\python\Scripts\scikit-learn-0.22\scikit-learn-0.22\sklearn\_build_utils\__init__.py", line 50, in cythonize_extensions basic_check_build() File "E:\python\Scripts\scikit-learn-0.22\scikit-learn-0.22\sklearn\_build_utils\pre_build_helpers.py", line 70, in basic_check_build compile_test_program(code) File "E:\python\Scripts\scikit-learn-0.22\scikit-learn-0.22\sklearn\_build_utils\pre_build_helpers.py", line 40, in compile_test_program extra_postargs=extra_postargs) File "E:\python\lib\distutils\_msvccompiler.py", line 346, in compile self.initialize() File "E:\python\lib\distutils\_msvccompiler.py", line 239, in initialize vc_env = _get_vc_env(plat_spec) File "E:\python\lib\distutils\_msvccompiler.py", line 135, in _get_vc_env raise DistutilsPlatformError("Unable to find vcvarsall.bat") distutils.errors.DistutilsPlatformError: Unable to find vcvarsall.bat ``` 实际上,这俩错误我都在网上找过许多解决方式,但都没有用。下载了VS2010并尝试过在cmd中设置VS90COMNTOOLS =%VS100COMNTOOLS%,依然无效。 抱着最后的希望来这里求助……希望能得到解决。 ———————————————————— 2019.12.17 更新: 在安装了VS2015之后,vcvarsall.bat的问题消失了,剩下的只有: ``` E:\python\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'project_urls' warnings.warn(msg) E:\python\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg) E:\python\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'install_requires' warnings.warn(msg) No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils ``` 这些问题了。 大家说PowerShell容易出BUG,我换用CMD后依旧如此。 我依旧没有找到这些报错的解决方案,希望得到大佬的帮助!
Drawbacks of saving images with numpy?
As the title says. I've been wondering about this and have done a little experimenting: images saved in the npy format are extremely slow to read and take up more than twice the space of PNG. I'm not sure, though, so I'd like to ask everyone: is it a problem with my program, or is npy simply not suited to storing large amounts of data? Thanks!
Handwritten digit recognition with OpenCV-Python's built-in ANN throws an error. Help wanted
![图片说明](https://img-ask.csdn.net/upload/202001/31/1580479207_695592.png)![图片说明](https://img-ask.csdn.net/upload/202001/31/1580479217_497206.png)

Using Python 3.6, I followed the code from the book "OpenCV 3 Computer Vision" (《OpenCV3计算机视觉》) to do handwritten digit recognition. The recognition rate is very low, and at runtime it also throws an error: OpenCV(3.4.1) Error: Assertion failed ((type == 5 || type == 6) && inputs.cols == layer_sizes[0]) in cv::ml::ANN_MLPImpl::predict, file C:\projects\opencv-python\opencv\modules\ml\src\ann_mlp.cpp, line 411

The full code is below; any pointers would be greatly appreciated:
```
import cv2
import numpy as np
import digits_ann as ANN


def inside(r1, r2):
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    if (x1 > x2) and (y1 > y2) and (x1 + w1 < x2 + w2) and (y1 + h1 < y2 + h2):
        return True
    else:
        return False


def wrap_digit(rect):
    x, y, w, h = rect
    padding = 5
    hcenter = x + w / 2
    vcenter = y + h / 2
    if (h > w):
        w = h
        x = hcenter - (w / 2)
    else:
        h = w
        y = vcenter - (h / 2)
    return (int(x - padding), int(y - padding), int(w + padding), int(h + padding))


'''
Note: for a first real test it is recommended to use the full training set and run
several epochs until the network converges, e.g.:
ann, test_data = ANN.train(ANN.create_ANN(100), 50000, 30)
'''
ann, test_data = ANN.train(ANN.create_ANN(10), 50000, 1)

# Load the image to recognize and preprocess it
path = "C:\\Users\\64601\\PycharmProjects\Ann\\images\\numbers.jpg"
img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
bw = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bw = cv2.GaussianBlur(bw, (7, 7), 0)
ret, thbw = cv2.threshold(bw, 127, 255, cv2.THRESH_BINARY_INV)
thbw = cv2.erode(thbw, np.ones((2, 2), np.uint8), iterations=2)

image, cntrs, hier = cv2.findContours(thbw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

rectangles = []

for c in cntrs:
    r = x, y, w, h = cv2.boundingRect(c)
    a = cv2.contourArea(c)
    b = (img.shape[0] - 3) * (img.shape[1] - 3)
    is_inside = False
    for q in rectangles:
        if inside(r, q):
            is_inside = True
            break
    if not is_inside:
        if not a == b:
            rectangles.append(r)

for r in rectangles:
    x, y, w, h = wrap_digit(r)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    roi = thbw[y:y + h, x:x + w]
    try:
        digit_class = ANN.predict(ann, roi)[0]
    except:
        print("except")
        continue
    cv2.putText(img, "%d" % digit_class, (x, y - 1), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))

cv2.imshow("thbw", thbw)
cv2.imshow("contours", img)
cv2.waitKey()
cv2.destroyAllWindows()

#######

import cv2
import pickle
import numpy as np
import gzip

"""OpenCV ANN Handwritten digit recognition example

Wraps OpenCV's own ANN by automating the loading of data and supplying default parameters,
such as 20 hidden layers, 10000 samples and 1 training epoch.

The load data code is taken from http://neuralnetworksanddeeplearning.com/chap1.html
by Michael Nielsen
"""


def vectorized_result(j):
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e


def load_data():
    with gzip.open('C:\\Users\\64601\\PycharmProjects\\Ann\\mnist.pkl.gz') as fp:
        # Note: on newer Python versions the second argument encoding='bytes' is required,
        # otherwise a decoding error occurs
        training_data, valid_data, test_data = pickle.load(fp, encoding='bytes')
    fp.close()
    return (training_data, valid_data, test_data)


def wrap_data():
    # tr_d has 50000 samples, va_d has 10000, te_d has 10000
    tr_d, va_d, te_d = load_data()
    # training set
    training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]]
    training_results = [vectorized_result(y) for y in tr_d[1]]
    training_data = list(zip(training_inputs, training_results))
    # validation set
    validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]]
    validation_data = list(zip(validation_inputs, va_d[1]))
    # test set
    test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]
    test_data = list(zip(test_inputs, te_d[1]))
    return (training_data, validation_data, test_data)


def create_ANN(hidden=20):
    ann = cv2.ml.ANN_MLP_create()  # create the model
    ann.setTrainMethod(cv2.ml.ANN_MLP_RPROP | cv2.ml.ANN_MLP_UPDATE_WEIGHTS)  # training method: backpropagation (RPROP)
    ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)  # activation: SIGMOID; cv2.ml.ANN_MLP_IDENTITY and cv2.ml.ANN_MLP_GAUSSIAN also exist
    ann.setLayerSizes(np.array([784, hidden, 10]))  # layer sizes: 784 inputs, 10 outputs
    ann.setTermCriteria((cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 0.1))  # termination criteria
    return ann


def train(ann, samples=10000, epochs=1):
    # tr: training set; val: validation set; test: test set
    tr, val, test = wrap_data()
    for x in range(epochs):
        counter = 0
        for img in tr:
            if (counter > samples):
                break
            if (counter % 1000 == 0):
                print("Epoch %d: Trained %d/%d" % (x, counter, samples))
            counter += 1
            data, digit = img
            ann.train(np.array([data.ravel()], dtype=np.float32), cv2.ml.ROW_SAMPLE,
                      np.array([digit.ravel()], dtype=np.float32))
        print("Epoch %d complete" % x)
    return ann, test


def predict(ann, sample):
    resized = sample.copy()
    rows, cols = resized.shape
    if rows != 28 and cols != 28 and rows * cols > 0:
        resized = cv2.resize(resized, (28, 28), interpolation=cv2.INTER_CUBIC)
    return ann.predict(np.array([resized.ravel()], dtype=np.float32))
```
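Since the assertion complains that inputs.cols does not match layer_sizes[0] (784 here), one way to narrow it down (a diagnostic sketch, not a confirmed fix; the ROI size below is made up) is to print the shape and dtype of the sample right before it is handed to predict, which shows whether the resize branch in predict() was actually taken for that ROI:
```python
# Hypothetical stand-in for one cropped ROI; in the script above this would be thbw[y:y + h, x:x + w].
import cv2
import numpy as np

roi = np.zeros((28, 40), dtype=np.uint8)

resized = cv2.resize(roi, (28, 28), interpolation=cv2.INTER_CUBIC)
sample = np.array([resized.ravel()], dtype=np.float32)

# cv::ml::ANN_MLPImpl::predict expects a float32/float64 row whose column count
# equals layer_sizes[0], i.e. this should print (1, 784) float32.
print(sample.shape, sample.dtype)
```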
After installing Anaconda and creating a new environment inside it, jupyter and numpy cannot be installed in that environment?
After setting up the new environment I need to install jupyter and numpy, but the installation fails with restricted-permission and package errors. The original base environment in Anaconda can run jupyter and numpy without problems. I'd appreciate an explanation from someone experienced.
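One quick check that may help locate the problem (a sketch only; the environment name "myenv" is made up): permission errors while installing into a new environment sometimes mean the commands are still resolving to the base installation, so confirming which interpreter and site-packages directory the activated environment actually uses can rule that out.
```python
# Run inside the activated new environment (hypothetical name: myenv).
import site
import sys

print(sys.executable)          # expected to point somewhere under ...\envs\myenv
print(site.getsitepackages())  # the directories an install into this interpreter would write to
```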