numpy errors out on a CSV file that contains Chinese text?

```
# coding: gbk
import numpy as np

c, v, m = np.loadtxt('e:\000718.csv', delimiter=',', usecols=(3, 6, 13), unpack=True)
```

The second column of the CSV contains Chinese characters, and the call fails no matter whether I declare the coding as utf-8 or gbk. If I delete the second column, the program runs. I also tried re-saving 000718.csv in UTF-8, and saving it as a UTF-8 .txt file, but it still fails, so I suspect it is still an encoding problem. How can I solve this?
This is my first post here; any help would be greatly appreciated.

Error message:

```
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 0-1: ordinal not in range(256)
```
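On Python 2, and on numpy versions before 1.14, np.loadtxt decodes the file as latin-1, so Chinese text anywhere in the file breaks it even when only numeric columns are selected. A sketch of two ways around this, assuming the file really is GBK-encoded (note the raw string, so backslashes in the Windows path are not treated as escape sequences):

```
import pandas as pd

# Option 1: read through pandas, which takes an explicit encoding, and keep
# only the numeric columns from the question.
df = pd.read_csv(r'e:\000718.csv', encoding='gbk', header=None, usecols=[3, 6, 13])
c, v, m = (df[col].to_numpy() for col in (3, 6, 13))

# Option 2: numpy >= 1.14 accepts an encoding argument directly:
# c, v, m = np.loadtxt(r'e:\000718.csv', delimiter=',', usecols=(3, 6, 13),
#                      unpack=True, encoding='gbk')
```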

Related questions
Converting str to float after reading a CSV file in Python

```
# coding: utf-8
import csv
import xlwt

# create a new Excel workbook and sheet
myexcel = xlwt.Workbook()
mysheet = myexcel.add_sheet("testsheet")

# read the CSV file
csvfile = open("data.csv", "r")
reader = csv.reader(csvfile)

l = 0
# outer loop: one row at a time
for line in reader:
    r = 0
    # inner loop: one cell at a time, written into the Excel sheet
    for i in line:
        x = 0
        # from row 7 on, convert the str cells in column 2 onwards to float
        if l > 6:
            if r >= 1:
                # x=float(i)
                x = float(i)
                # print(i)
                if x > 160:
                    print(l, r)
                    mysheet.write(l, r, "high")
                else:
                    mysheet.write(l, r, i)
        r = r + 1
    l = l + 1

# finally, save the workbook
myexcel.save("myexcel.xls")
```

As above, the line x = float(i) raises ValueError: could not convert string to float: '-'. The data there normally looks like 16.02. How can I fix this?
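The ValueError means some cells hold the placeholder '-' rather than a number, so float() cannot parse them. A minimal sketch of one common fix (the helper name to_float is invented for illustration; it patches into the question's loop and reuses its i, l, r and mysheet):

```
# Treat '-' (or any unparsable cell) as missing data instead of crashing.
def to_float(cell, default=None):
    try:
        return float(cell)
    except ValueError:
        return default   # '-' and other placeholders land here

x = to_float(i)
if x is not None and x > 160:
    mysheet.write(l, r, "high")
else:
    mysheet.write(l, r, i)
```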

Reading data from a CSV file and sampling it at random raises object of type 'float' has no len()

I want to add a channel loss rate read from CSV loss-rate data. I modelled the loss setup on the way the wifi rate was set up earlier, with the loss-rate data stored in lossrate.csv inside the stats folder. The program crashes inside the random library; before my change, or when loss is simply set to a constant, it runs fine. Where did I go wrong? I wrote my part entirely by imitating the earlier rate definition (my own additions are marked with comments). ![图片说明](https://img-ask.csdn.net/upload/201905/09/1557413980_977708.png)![图片说明](https://img-ask.csdn.net/upload/201905/09/1557414084_275476.png)![图片说明](https://img-ask.csdn.net/upload/201905/09/1557414109_173608.png)

```
Process Process-22:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/lingjie/mp-congestion/scenarios/../scripts/multimod.py", line 41, in run
    loss = rand.choice(loss)
  File "/usr/lib/python3.6/random.py", line 258, in choice
    i = self._randbelow(len(seq))
TypeError: object of type 'float' has no len()
Process Process-21:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/lingjie/mp-congestion/scenarios/../scripts/multimod.py", line 41, in run
    loss = rand.choice(loss)
  File "/usr/lib/python3.6/random.py", line 258, in choice
    i = self._randbelow(len(seq))
TypeError: object of type 'float' has no len()
```
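random.choice() needs a sequence, and the traceback says loss is already a plain float when it is called: typically either a single CSV value was read instead of the whole column, or the list was overwritten by its own chosen element. A sketch of the usual repair (the file name and one-column layout are assumed from the question):

```
import csv
import random

# Hypothetical loader mirroring the question's setup: gather the loss rates
# into a list so choice() has a sequence to draw from.
with open('stats/lossrate.csv') as f:
    loss_rates = [float(row[0]) for row in csv.reader(f) if row]

# Draw into a *different* name: writing `loss = rand.choice(loss)` rebinds
# the list to a single float, so the next choice(loss) call raises
# "object of type 'float' has no len()".
loss = random.choice(loss_rates)
```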

A seaborn usage problem (the second-to-last line of code errors out)

```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols, glm

# read the data set into a pandas DataFrame
wine = pd.read_csv('winequality-both.csv', sep=',', header=0)
wine.columns = wine.columns.str.replace(' ', '_')
print(wine.head())

# descriptive statistics for all variables
print(wine.describe())

# unique values
print(sorted(wine.quality.unique()))

# value counts
print(wine.quality.value_counts())

# descriptive statistics of quality, by wine type
print(wine.groupby('type')[['quality']].describe().unstack('type'))

# specific quantiles of quality, by wine type
print(wine.groupby('type')[['quality']].quantile([0.25, 0.75]).unstack('type'))

# quality distribution by wine type
red_wine = wine.loc[wine['type'] == 'red', 'quality']
white_wine = wine.loc[wine['type'] == 'white', 'quality']
sns.set_style("dark")
print(sns.distplot(red_wine, norm_hist=True, kde=False, color="red", label="Red Wine"))
print(sns.distplot(white_wine, norm_hist=True, kde=False, color="white", label="White Wine"))
sns.axlabel("Quality Score", "Density")
plt.title("Distribution of Quality by Wine Type")
plt.legend()
plt.show()

# test whether mean quality differs between red and white wine
print(wine.groupby(['type'])[['quality']].agg(['std'])
tstat, pvalue, df = sm.stats.ttest_ind(red_wine, white_wine)
print('tstat:%.3f pvalue:%.4f' % (tstat, pvalue))
```

![图片说明](https://img-ask.csdn.net/upload/201811/10/1541840583_803202.png)
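The error reported at the second-to-last line almost certainly comes from the line above it: the groupby print is missing its closing parenthesis, so Python's SyntaxError surfaces at the following line. A sketch of the corrected tail of the script:

```
# The groupby line was unbalanced; with the parenthesis closed, the last
# three lines parse and run:
print(wine.groupby(['type'])[['quality']].agg(['std']))
tstat, pvalue, df = sm.stats.ttest_ind(red_wine, white_wine)
print('tstat:%.3f pvalue:%.4f' % (tstat, pvalue))
```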

A python NameError problem

NameError: name 'GrackGeetest' is not defined. I have gone over the program and there is no mistake, and nothing is misspelled either; GrackGeetest is exactly the class name, so there should be no problem. What else can I try?
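Hard to say more without the code, but NameError means the name is not bound in the scope where it is used, however correctly it is spelled at its definition. A minimal illustration of the two usual causes (the module name below is invented):

```
# 1) The class is used above its definition at import time:
#
#     crawler = GrackGeetest()   # NameError: not defined *yet*
#     class GrackGeetest: ...
#
# 2) The class lives in another file and was never imported:
#
#     from geetest_crack import GrackGeetest   # hypothetical module
#
# Defining (or importing) first, then instantiating, removes the error:
class GrackGeetest:
    pass

crawler = GrackGeetest()
```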

A problem writing TFRecord files with tensorflow

This code follows the tensorflow official tutorial essentially line for line, so it should be identical, yet it errors out:

```
def _bytes_feature(value):
    if isinstance(value, type(tf.constant(0))):
        value = value.numpy()
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def feature_to_string(feature):
    strings = feature.SerializerToString()
    return strings

n_boservations = int(1e4)
feature0 = np.random.choice([False, True], n_boservations)
feature1 = np.random.randint(0, 5, n_boservations)
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
feature3 = np.random.randn(n_boservations)

# build an Example
def serialize_example(feature0, feature1, feature2, feature3):
    feature = {
        'feature0': _int64_feature(feature0),
        'feature1': _int64_feature(feature1),
        'feature2': _bytes_feature(feature2),
        'feature3': _float_feature(feature3),
    }
    example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
    return example_proto.SerializeToString()

features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))

def generator():
    for features in features_dataset:
        yield serialize_example(*features)

# build the serialized dataset
serialized_features_dataset = tf.data.Dataset.from_generator(
    generator, output_types=tf.string, output_shapes=())

# write the file
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
```

![图片说明](https://img-ask.csdn.net/upload/201908/08/1565249741_842362.png)
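The screenshot with the actual traceback is not legible here, so these are guesses rather than a diagnosis: one literal typo relative to the tutorial, plus the usual EagerTensor pitfall in this exact example:

```
# 1) The protobuf method is SerializeToString, not SerializerToString:
def feature_to_string(feature):
    return feature.SerializeToString()

# 2) Iterating features_dataset eagerly yields EagerTensors; if the
#    tf.train.Feature builders reject their types, unwrap the raw values
#    before building the Example:
def generator():
    for f0, f1, f2, f3 in features_dataset:
        yield serialize_example(f0.numpy(), f1.numpy(), f2.numpy(), f3.numpy())
```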

Why is the first row missing when the data is read?

![图片说明](https://img-ask.csdn.net/upload/201908/06/1565101807_406702.png)
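Only a screenshot is attached, so the cause is a guess; the most common one is that the reader consumes the first line as a header. With pandas, for instance (the file name here is hypothetical):

```
import pandas as pd

# By default read_csv treats the file's first line as the header row, so it
# never shows up as data. header=None keeps it as a data row instead.
df = pd.read_csv('data.csv', header=None)
```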

python IOError: write error

A python service reports an error: base.py IOError: write error

On bytes-like object vs str in Python 3.5

While learning scrapy recently, I used CsvItemExporter to write Items to a CSV file and got

```
TypeError: write() argument must be str, not bytes
```

The code:

```
class StockPipelineCSV(object):
    def open_spider(self, spider):
        self.file = open('stocks_01.csv', 'w')
        self.exporter = CsvItemExporter(self.file)
        self.exporter.start_exporting()

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item
```

Then I looked at export_item() in scrapy's CsvItemExporter class, plus the related helpers:

```
# exporters.py
def export_item(self, item):
    if self._headers_not_written:
        self._headers_not_written = False
        self._write_headers_and_set_fields_to_export(item)
    fields = self._get_serialized_fields(item, default_value='', include_empty=True)
    values = list(self._build_row(x for _, x in fields))
    self.csv_writer.writerow(values)

def _build_row(self, values):
    for s in values:
        try:
            yield to_native_str(s, self.encoding)
        except TypeError:
            yield s
```

```
# python.py
def to_native_str(text, encoding=None, errors='strict'):
    """ Return str representation of `text`
    (bytes in Python 2.x and unicode in Python 3.x). """
    if six.PY2:
        return to_bytes(text, encoding, errors)
    else:
        return to_unicode(text, encoding, errors)
```

As far as I can tell there is no problem here: the item has already been converted to a list of str objects. So where does the error actually come from? I'm new to scrapy; could it be something in my pipeline code? Answers online say the file must be opened in binary ('w+b') mode. That indeed stops the error, but what gets written is garbled, and the order is random (the ordering may be because I haven't configured settings.py). So I wrote a small test of my own, opening a CSV file in 'w+b' mode:

```
# coding: utf-8
import csv

csvfile = open('D://t.csv', 'w+b')
writer = csv.writer(csvfile)
writer.writerow([str.encode('列1'), str.encode('列2'), str.encode('列3')])
data = [str.encode('值1'), str.encode('值2'), str.encode('值3')]
writer.writerow(data)
csvfile.close()
```

which fails with:

```
TypeError: a bytes-like object is required, not 'str'
```

But I clearly converted everything to bytes! In the end I dropped the str.encode() calls and wrote in 'w' mode, and it worked. I'm still confused about bytes-like objects vs str, and I don't understand why the earlier attempts failed.
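The two failures are mirror images of each other, which is what makes them confusing. Scrapy's CsvItemExporter encodes rows to bytes itself, so it must be handed a file opened in binary mode; the stdlib csv module does the opposite and writes str, so it must be handed a text-mode file (csv.writer even builds each row as one str before writing, which is why hand-encoded bytes cells still blow up). A sketch of both conventions side by side (file names are just examples):

```
import csv
from scrapy.exporters import CsvItemExporter

# scrapy's CsvItemExporter writes bytes -> open the file in binary mode:
f = open('stocks_01.csv', 'wb')
exporter = CsvItemExporter(f)

# the stdlib csv module writes str -> open the file in text mode and let
# the encoding argument do the work; no manual str.encode() needed:
with open('t.csv', 'w', encoding='utf-8', newline='') as g:
    writer = csv.writer(g)
    writer.writerow(['列1', '列2', '列3'])
```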

python: object of type 'float' has no len()?

I want to split a spreadsheet into sheets based on one of its columns, but it raises object of type 'float' has no len(). How do I solve this?

```
import pandas as pd
import xlsxwriter

# location of the Excel file to split
file = r"C:\准备表.xlsx"
# where to save the split result
result = r"C:\\拆分文件\\拆好的表.xlsx"

# read the Excel file to be split
df = pd.read_excel(file)

# splitting criterion: the deduplicated values of the column
jg_list = df[u'所属销售'].unique()

# save a new sheet for each value
for jg in jg_list:
    df = df[df[u'所属销售'] == jg]
    df.to_excel(result, sheet_name=jg, index=False, engine='xlsxwriter')

print('拆分完成!')
```
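A sketch of two fixes, assuming the same file layout as the question (file and result are the paths defined above). First, a NaN in the column survives .unique() as a float, and passing sheet_name=<float> is what triggers "object of type 'float' has no len()", so drop NaN from the split keys. Second, df is overwritten inside the loop, so only the first group ever gets written; keep the original frame intact and route all sheets through one ExcelWriter:

```
import pandas as pd

df = pd.read_excel(file)
jg_list = df[u'所属销售'].dropna().unique()

with pd.ExcelWriter(result, engine='xlsxwriter') as writer:
    for jg in jg_list:
        part = df[df[u'所属销售'] == jg]          # filter, don't overwrite df
        part.to_excel(writer, sheet_name=str(jg), index=False)
```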

Training a neural-network model raises could-not-convert-string-to-float; how do I solve it?

```
import matplotlib.pyplot as plt
from math import sqrt
from matplotlib import pyplot
import pandas as pd
from numpy import concatenate
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import Adam
import tensorflow

'''
keras regression model
'''

# read the data
path = 'data001.csv'
# drop the unused string fields
train = pd.read_csv(path)
dataset = train.iloc[1:, :]
# DataFrame -> array
values = dataset.values

# normalise the raw data to speed up convergence
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
y = scaled[:, -1]
X = scaled[:, 0:-1]

# random train/test split
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.3)

# fully connected network
model = Sequential()
input = X.shape[1]
# hidden layer, 128 units
model.add(Dense(128, input_shape=(input,)))
model.add(Activation('relu'))
# a Dropout layer would guard against overfitting
# model.add(Dropout(0.2))
# hidden layer, 128 units
model.add(Dense(128))
model.add(Activation('relu'))
# model.add(Dropout(0.2))
# no activation on the output layer: this is regression, we want the raw
# predicted value without any transformation
model.add(Dense(1))
# ADAM optimiser, mean squared error loss
model.compile(loss='mean_squared_error', optimizer=Adam())

# early stopping
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=50, verbose=2)

# train
history = model.fit(train_X, train_y, epochs=300, batch_size=20,
                    validation_data=(test_X, test_y), verbose=2,
                    shuffle=False, callbacks=[early_stopping])

# loss curves
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()

# predict
yhat = model.predict(test_X)
# un-scale the predictions
inv_yhat0 = concatenate((test_X, yhat), axis=1)
inv_yhat1 = scaler.inverse_transform(inv_yhat0)
inv_yhat = inv_yhat1[:, -1]
# un-scale the original y
test_y = test_y.reshape((len(test_y), 1))
inv_y0 = concatenate((test_X, test_y), axis=1)
inv_y1 = scaler.inverse_transform(inv_y0)
inv_y = inv_y1[:, -1]

# compute RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
plt.plot(inv_y)
plt.plot(inv_yhat)
plt.show()
```

The error:

```
Traceback (most recent call last):
  File "F:/SSD/CNN.py", line 24, in <module>
    scaled = scaler.fit_transform(values)
  File "D:\anaconda\lib\site-packages\sklearn\base.py", line 464, in fit_transform
    return self.fit(X, **fit_params).transform(X)
  File "D:\anaconda\lib\site-packages\sklearn\preprocessing\data.py", line 334, in fit
    return self.partial_fit(X, y)
  File "D:\anaconda\lib\site-packages\sklearn\preprocessing\data.py", line 362, in partial_fit
    force_all_finite="allow-nan")
  File "D:\anaconda\lib\site-packages\sklearn\utils\validation.py", line 527, in check_array
    array = np.asarray(array, dtype=dtype, order=order)
  File "D:\anaconda\lib\site-packages\numpy\core\numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not convert string to float: 'label'
```

'label' is a column name in the csv file, but even after removing it, the same error is raised.
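MinMaxScaler needs an all-numeric array, so a single stray string anywhere in values triggers this ValueError; 'label' appearing in the data usually means a header line was read as data (for example the file contains the header twice, or any column still has object dtype). A sketch of how to hunt the strings down before scaling (no column names are known here, so it inspects everything):

```
train = pd.read_csv(path)           # the header row is consumed as column names
print(train.dtypes)                 # any 'object' column still holds strings

# Coerce every column to numbers; cells that cannot be parsed become NaN,
# which shows exactly where the non-numeric values sit:
numeric = train.apply(pd.to_numeric, errors='coerce')
print(train[numeric.isna().any(axis=1)])

values = numeric.dropna().values    # feed only clean numeric rows to the scaler
```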

fr_utils.py errors in week 4 of course 4 of Andrew Ng's deep-learning specialization; has anyone hit this?

In Face Recognition/fr_utils.py, _get_session() at line 21 and model at line 140 cannot be resolved; what is the reason? Loading the model reports the following error:

```
Using TensorFlow backend.
2018-08-26 21:30:53.046324: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Total Params: 3743280
Traceback (most recent call last):
  File "C:/Users/51530/PycharmProjects/DL/wuenda/Face/faceV3.py", line 60, in <module>
    load_weights_from_FaceNet(FRmodel)
  File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 133, in load_weights_from_FaceNet
    weights_dict = load_weights()
  File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 154, in load_weights
    conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
  File "E:\anaconda\lib\site-packages\numpy\lib\npyio.py", line 1867, in genfromtxt
    raise ValueError(errmsg)
ValueError: Some errors were detected !
    Line #7 (got 2 columns instead of 1)
    Line #12 (got 3 columns instead of 1)
    Line #15 (got 2 columns instead of 1)
```

The file in question:

```
#### PART OF THIS CODE IS USING CODE FROM VICTOR SY WANG: https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/utils.py ####

import tensorflow as tf
import numpy as np
import os
import cv2
from numpy import genfromtxt
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
import h5py
import matplotlib.pyplot as plt

_FLOATX = 'float32'

def variable(value, dtype=_FLOATX, name=None):
    v = tf.Variable(np.asarray(value, dtype=dtype), name=name)
    _get_session().run(v.initializer)
    return v

def shape(x):
    return x.get_shape()

def square(x):
    return tf.square(x)

def zeros(shape, dtype=_FLOATX, name=None):
    return variable(np.zeros(shape), dtype, name)

def concatenate(tensors, axis=-1):
    if axis < 0:
        axis = axis % len(tensors[0].get_shape())
    return tf.concat(axis, tensors)

def LRN2D(x):
    return tf.nn.lrn(x, alpha=1e-4, beta=0.75)

def conv2d_bn(x, layer=None, cv1_out=None, cv1_filter=(1, 1), cv1_strides=(1, 1),
              cv2_out=None, cv2_filter=(3, 3), cv2_strides=(1, 1), padding=None):
    num = '' if cv2_out == None else '1'
    tensor = Conv2D(cv1_out, cv1_filter, strides=cv1_strides,
                    data_format='channels_first', name=layer+'_conv'+num)(x)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+num)(tensor)
    tensor = Activation('relu')(tensor)
    if padding == None:
        return tensor
    tensor = ZeroPadding2D(padding=padding, data_format='channels_first')(tensor)
    if cv2_out == None:
        return tensor
    tensor = Conv2D(cv2_out, cv2_filter, strides=cv2_strides,
                    data_format='channels_first', name=layer+'_conv'+'2')(tensor)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+'2')(tensor)
    tensor = Activation('relu')(tensor)
    return tensor

WEIGHTS = [
    'conv1', 'bn1', 'conv2', 'bn2', 'conv3', 'bn3',
    'inception_3a_1x1_conv', 'inception_3a_1x1_bn',
    'inception_3a_pool_conv', 'inception_3a_pool_bn',
    'inception_3a_5x5_conv1', 'inception_3a_5x5_conv2', 'inception_3a_5x5_bn1', 'inception_3a_5x5_bn2',
    'inception_3a_3x3_conv1', 'inception_3a_3x3_conv2', 'inception_3a_3x3_bn1', 'inception_3a_3x3_bn2',
    'inception_3b_3x3_conv1', 'inception_3b_3x3_conv2', 'inception_3b_3x3_bn1', 'inception_3b_3x3_bn2',
    'inception_3b_5x5_conv1', 'inception_3b_5x5_conv2', 'inception_3b_5x5_bn1', 'inception_3b_5x5_bn2',
    'inception_3b_pool_conv', 'inception_3b_pool_bn',
    'inception_3b_1x1_conv', 'inception_3b_1x1_bn',
    'inception_3c_3x3_conv1', 'inception_3c_3x3_conv2', 'inception_3c_3x3_bn1', 'inception_3c_3x3_bn2',
    'inception_3c_5x5_conv1', 'inception_3c_5x5_conv2', 'inception_3c_5x5_bn1', 'inception_3c_5x5_bn2',
    'inception_4a_3x3_conv1', 'inception_4a_3x3_conv2', 'inception_4a_3x3_bn1', 'inception_4a_3x3_bn2',
    'inception_4a_5x5_conv1', 'inception_4a_5x5_conv2', 'inception_4a_5x5_bn1', 'inception_4a_5x5_bn2',
    'inception_4a_pool_conv', 'inception_4a_pool_bn',
    'inception_4a_1x1_conv', 'inception_4a_1x1_bn',
    'inception_4e_3x3_conv1', 'inception_4e_3x3_conv2', 'inception_4e_3x3_bn1', 'inception_4e_3x3_bn2',
    'inception_4e_5x5_conv1', 'inception_4e_5x5_conv2', 'inception_4e_5x5_bn1', 'inception_4e_5x5_bn2',
    'inception_5a_3x3_conv1', 'inception_5a_3x3_conv2', 'inception_5a_3x3_bn1', 'inception_5a_3x3_bn2',
    'inception_5a_pool_conv', 'inception_5a_pool_bn',
    'inception_5a_1x1_conv', 'inception_5a_1x1_bn',
    'inception_5b_3x3_conv1', 'inception_5b_3x3_conv2', 'inception_5b_3x3_bn1', 'inception_5b_3x3_bn2',
    'inception_5b_pool_conv', 'inception_5b_pool_bn',
    'inception_5b_1x1_conv', 'inception_5b_1x1_bn',
    'dense_layer'
]

conv_shape = {
    'conv1': [64, 3, 7, 7], 'conv2': [64, 64, 1, 1], 'conv3': [192, 64, 3, 3],
    'inception_3a_1x1_conv': [64, 192, 1, 1], 'inception_3a_pool_conv': [32, 192, 1, 1],
    'inception_3a_5x5_conv1': [16, 192, 1, 1], 'inception_3a_5x5_conv2': [32, 16, 5, 5],
    'inception_3a_3x3_conv1': [96, 192, 1, 1], 'inception_3a_3x3_conv2': [128, 96, 3, 3],
    'inception_3b_3x3_conv1': [96, 256, 1, 1], 'inception_3b_3x3_conv2': [128, 96, 3, 3],
    'inception_3b_5x5_conv1': [32, 256, 1, 1], 'inception_3b_5x5_conv2': [64, 32, 5, 5],
    'inception_3b_pool_conv': [64, 256, 1, 1], 'inception_3b_1x1_conv': [64, 256, 1, 1],
    'inception_3c_3x3_conv1': [128, 320, 1, 1], 'inception_3c_3x3_conv2': [256, 128, 3, 3],
    'inception_3c_5x5_conv1': [32, 320, 1, 1], 'inception_3c_5x5_conv2': [64, 32, 5, 5],
    'inception_4a_3x3_conv1': [96, 640, 1, 1], 'inception_4a_3x3_conv2': [192, 96, 3, 3],
    'inception_4a_5x5_conv1': [32, 640, 1, 1,], 'inception_4a_5x5_conv2': [64, 32, 5, 5],
    'inception_4a_pool_conv': [128, 640, 1, 1], 'inception_4a_1x1_conv': [256, 640, 1, 1],
    'inception_4e_3x3_conv1': [160, 640, 1, 1], 'inception_4e_3x3_conv2': [256, 160, 3, 3],
    'inception_4e_5x5_conv1': [64, 640, 1, 1], 'inception_4e_5x5_conv2': [128, 64, 5, 5],
    'inception_5a_3x3_conv1': [96, 1024, 1, 1], 'inception_5a_3x3_conv2': [384, 96, 3, 3],
    'inception_5a_pool_conv': [96, 1024, 1, 1], 'inception_5a_1x1_conv': [256, 1024, 1, 1],
    'inception_5b_3x3_conv1': [96, 736, 1, 1], 'inception_5b_3x3_conv2': [384, 96, 3, 3],
    'inception_5b_pool_conv': [96, 736, 1, 1], 'inception_5b_1x1_conv': [256, 736, 1, 1],
}

def load_weights_from_FaceNet(FRmodel):
    # Load weights from csv files (which was exported from Openface torch model)
    weights = WEIGHTS
    weights_dict = load_weights()

    # Set layer weights of the model
    for name in weights:
        if FRmodel.get_layer(name) != None:
            FRmodel.get_layer(name).set_weights(weights_dict[name])
        elif model.get_layer(name) != None:
            model.get_layer(name).set_weights(weights_dict[name])

def load_weights():
    # Set weights path
    dirPath = './weights'
    fileNames = filter(lambda f: not f.startswith('.'), os.listdir(dirPath))
    paths = {}
    weights_dict = {}

    for n in fileNames:
        paths[n.replace('.csv', '')] = dirPath + '/' + n

    for name in WEIGHTS:
        if 'conv' in name:
            conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            conv_w = np.reshape(conv_w, conv_shape[name])
            conv_w = np.transpose(conv_w, (2, 3, 1, 0))
            conv_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            weights_dict[name] = [conv_w, conv_b]
        elif 'bn' in name:
            bn_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            bn_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            bn_m = genfromtxt(paths[name + '_m'], delimiter=',', dtype=None)
            bn_v = genfromtxt(paths[name + '_v'], delimiter=',', dtype=None)
            weights_dict[name] = [bn_w, bn_b, bn_m, bn_v]
        elif 'dense' in name:
            dense_w = genfromtxt(dirPath+'/dense_w.csv', delimiter=',', dtype=None)
            dense_w = np.reshape(dense_w, (128, 736))
            dense_w = np.transpose(dense_w, (1, 0))
            dense_b = genfromtxt(dirPath+'/dense_b.csv', delimiter=',', dtype=None)
            weights_dict[name] = [dense_w, dense_b]

    return weights_dict

def load_dataset():
    train_dataset = h5py.File('datasets/train_happy.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_happy.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

def img_to_encoding(image_path, model):
    img1 = cv2.imread(image_path, 1)
    img = img1[..., ::-1]
    img = np.around(np.transpose(img, (2, 0, 1)) / 255.0, decimals=12)
    x_train = np.array([img])
    embedding = model.predict_on_batch(x_train)
    return embedding
```
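genfromtxt complaining that some lines have 2 or 3 columns instead of 1 means the weight CSVs themselves are malformed, not the code: half-downloaded files, HTML error pages saved as .csv, or Git LFS pointer stubs all produce exactly this. (The unresolved references the IDE flags are real but harmless here: fr_utils.py never defines _get_session or imports model, but those paths are not normally executed.) A quick sanity check over the weights directory, using the same paths fr_utils assumes:

```
import os
from numpy import genfromtxt

# Each *_w.csv should parse to a single numeric column per line; any file
# that was fetched incompletely shows up immediately.
dirPath = './weights'
for n in sorted(os.listdir(dirPath)):
    p = os.path.join(dirPath, n)
    try:
        arr = genfromtxt(p, delimiter=',', dtype=None, encoding='utf-8')
        print(n, arr.shape)
    except ValueError as e:
        print(n, 'FAILED:', e)
```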

python + opencv + pyqt5: batch license-plate recognition crashes

**I modified the single-image recognition code into batch recognition, and running the program reports:** **Process finished with exit code -1073740791 (0xC0000409)**

Below is the single-image code, which runs fine:

```
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtGui import *
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import *
from Recognition import PlateRecognition
import cv2
import sys, os, xlwt
import numpy as np


class Ui_MainWindow(object):
    def __init__(self):
        self.RowLength = 0
        self.Data = [['文件名称', '录入时间', '车牌号码', '车牌类型', '识别耗时', '车牌信息']]

    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(1213, 670)
        MainWindow.setFixedSize(1213, 670)  # fix the window size
        MainWindow.setToolButtonStyle(QtCore.Qt.ToolButtonIconOnly)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.scrollArea = QtWidgets.QScrollArea(self.centralwidget)
        self.scrollArea.setGeometry(QtCore.QRect(690, 10, 511, 491))
        self.scrollArea.setWidgetResizable(True)
        self.scrollArea.setObjectName("scrollArea")
        self.scrollAreaWidgetContents = QtWidgets.QWidget()
        self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 509, 489))
        self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
        self.label_0 = QtWidgets.QLabel(self.scrollAreaWidgetContents)
        self.label_0.setGeometry(QtCore.QRect(10, 10, 111, 20))
        font = QtGui.QFont()
        font.setPointSize(11)
        self.label_0.setFont(font)
        self.label_0.setObjectName("label_0")
        self.label = QtWidgets.QLabel(self.scrollAreaWidgetContents)
        self.label.setGeometry(QtCore.QRect(10, 40, 481, 441))
        self.label.setObjectName("label")
        self.label.setAlignment(Qt.AlignCenter)
        self.scrollArea.setWidget(self.scrollAreaWidgetContents)
        self.scrollArea_2 = QtWidgets.QScrollArea(self.centralwidget)
        self.scrollArea_2.setGeometry(QtCore.QRect(10, 10, 671, 631))
        self.scrollArea_2.setWidgetResizable(True)
        self.scrollArea_2.setObjectName("scrollArea_2")
        self.scrollAreaWidgetContents_1 = QtWidgets.QWidget()
        self.scrollAreaWidgetContents_1.setGeometry(QtCore.QRect(0, 0, 669, 629))
        self.scrollAreaWidgetContents_1.setObjectName("scrollAreaWidgetContents_1")
        self.label_1 = QtWidgets.QLabel(self.scrollAreaWidgetContents_1)
        self.label_1.setGeometry(QtCore.QRect(10, 10, 111, 20))
        font = QtGui.QFont()
        font.setPointSize(11)
        self.label_1.setFont(font)
        self.label_1.setObjectName("label_1")
        self.tableWidget = QtWidgets.QTableWidget(self.scrollAreaWidgetContents_1)
        self.tableWidget.setGeometry(QtCore.QRect(10, 40, 651, 581))
        self.tableWidget.setObjectName("tableWidget")
        self.tableWidget.setColumnCount(6)
        self.tableWidget.setColumnWidth(0, 140)  # width of column 1
        self.tableWidget.setColumnWidth(1, 130)  # width of column 2
        self.tableWidget.setColumnWidth(2, 65)   # width of column 3
        self.tableWidget.setColumnWidth(3, 75)   # width of column 4
        self.tableWidget.setColumnWidth(4, 65)   # width of column 5
        self.tableWidget.setColumnWidth(5, 174)  # width of column 6
        self.tableWidget.setHorizontalHeaderLabels(["图片名称", "录入时间", "识别耗时", "车牌号码", "车牌类型", "车牌信息"])
        self.tableWidget.setRowCount(self.RowLength)
        self.tableWidget.verticalHeader().setVisible(False)  # hide the vertical header
        # self.tableWidget.setStyleSheet("selection-background-color:blue")
        # self.tableWidget.setAlternatingRowColors(True)
        self.tableWidget.setEditTriggers(QAbstractItemView.NoEditTriggers)
        self.tableWidget.raise_()
        self.scrollArea_2.setWidget(self.scrollAreaWidgetContents_1)
        self.scrollArea_3 = QtWidgets.QScrollArea(self.centralwidget)
        self.scrollArea_3.setGeometry(QtCore.QRect(690, 510, 341, 131))
        self.scrollArea_3.setWidgetResizable(True)
        self.scrollArea_3.setObjectName("scrollArea_3")
        self.scrollAreaWidgetContents_3 = QtWidgets.QWidget()
        self.scrollAreaWidgetContents_3.setGeometry(QtCore.QRect(0, 0, 339, 129))
        self.scrollAreaWidgetContents_3.setObjectName("scrollAreaWidgetContents_3")
        self.label_2 = QtWidgets.QLabel(self.scrollAreaWidgetContents_3)
        self.label_2.setGeometry(QtCore.QRect(10, 10, 111, 20))
        font = QtGui.QFont()
        font.setPointSize(11)
        self.label_2.setFont(font)
        self.label_2.setObjectName("label_2")
        self.label_3 = QtWidgets.QLabel(self.scrollAreaWidgetContents_3)
        self.label_3.setGeometry(QtCore.QRect(10, 40, 321, 81))
        self.label_3.setObjectName("label_3")
        self.scrollArea_3.setWidget(self.scrollAreaWidgetContents_3)
        self.scrollArea_4 = QtWidgets.QScrollArea(self.centralwidget)
        self.scrollArea_4.setGeometry(QtCore.QRect(1040, 510, 161, 131))
        self.scrollArea_4.setWidgetResizable(True)
        self.scrollArea_4.setObjectName("scrollArea_4")
        self.scrollAreaWidgetContents_4 = QtWidgets.QWidget()
        self.scrollAreaWidgetContents_4.setGeometry(QtCore.QRect(0, 0, 159, 129))
        self.scrollAreaWidgetContents_4.setObjectName("scrollAreaWidgetContents_4")
        self.pushButton_2 = QtWidgets.QPushButton(self.scrollAreaWidgetContents_4)
        self.pushButton_2.setGeometry(QtCore.QRect(20, 50, 121, 31))
        self.pushButton_2.setObjectName("pushButton_2")
        self.pushButton = QtWidgets.QPushButton(self.scrollAreaWidgetContents_4)
        self.pushButton.setGeometry(QtCore.QRect(20, 90, 121, 31))
        self.pushButton.setObjectName("pushButton")
        self.label_4 = QtWidgets.QLabel(self.scrollAreaWidgetContents_4)
        self.label_4.setGeometry(QtCore.QRect(10, 10, 111, 20))
        font = QtGui.QFont()
        font.setPointSize(11)
        self.label_4.setFont(font)
        self.label_4.setObjectName("label_4")
        self.scrollArea_4.setWidget(self.scrollAreaWidgetContents_4)
        MainWindow.setCentralWidget(self.centralwidget)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)
        self.pushButton.clicked.connect(self.__openimage)     # click handler
        self.pushButton_2.clicked.connect(self.__writeFiles)  # click handler
        self.ProjectPath = os.getcwd()  # current project directory

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "车牌识别系统"))
        self.label_0.setText(_translate("MainWindow", "原始图片:"))
        self.label.setText(_translate("MainWindow", ""))
        self.label_1.setText(_translate("MainWindow", "识别结果:"))
        self.label_2.setText(_translate("MainWindow", "车牌区域:"))
        self.label_3.setText(_translate("MainWindow", ""))
        self.pushButton.setText(_translate("MainWindow", "打开文件"))
        self.pushButton_2.setText(_translate("MainWindow", "导出数据"))
        self.label_4.setText(_translate("MainWindow", "命令:"))
        self.scrollAreaWidgetContents_1.show()

    # recognition
    def __vlpr(self, path):
        PR = PlateRecognition()
        result = PR.VLPR(path)
        return result

    def __show(self, result, FileName):
        # fill the result table
        self.RowLength = self.RowLength + 1
        if self.RowLength > 18:
            self.tableWidget.setColumnWidth(5, 157)
        self.tableWidget.setRowCount(self.RowLength)
        self.tableWidget.setItem(self.RowLength - 1, 0, QTableWidgetItem(FileName))
        self.tableWidget.setItem(self.RowLength - 1, 1, QTableWidgetItem(result['InputTime']))
        self.tableWidget.setItem(self.RowLength - 1, 2, QTableWidgetItem(str(result['UseTime']) + '秒'))
        self.tableWidget.setItem(self.RowLength - 1, 3, QTableWidgetItem(result['Number']))
        self.tableWidget.setItem(self.RowLength - 1, 4, QTableWidgetItem(result['Type']))
        if result['Type'] == '蓝色牌照':
            self.tableWidget.item(self.RowLength - 1, 4).setBackground(QBrush(QColor(3, 128, 255)))
        elif result['Type'] == '绿色牌照':
            self.tableWidget.item(self.RowLength - 1, 4).setBackground(QBrush(QColor(98, 198, 148)))
        elif result['Type'] == '黄色牌照':
            self.tableWidget.item(self.RowLength - 1, 4).setBackground(QBrush(QColor(242, 202, 9)))
        self.tableWidget.setItem(self.RowLength - 1, 5, QTableWidgetItem(result['From']))
        # display the detected plate region
        size = (int(self.label_3.width()), int(self.label_3.height()))
        shrink = cv2.resize(result['Picture'], size, interpolation=cv2.INTER_AREA)
        shrink = cv2.cvtColor(shrink, cv2.COLOR_BGR2RGB)
        self.QtImg = QtGui.QImage(shrink[:], shrink.shape[1], shrink.shape[0],
                                  shrink.shape[1] * 3, QtGui.QImage.Format_RGB888)
        self.label_3.setPixmap(QtGui.QPixmap.fromImage(self.QtImg))

    def __writexls(self, DATA, path):
        wb = xlwt.Workbook()
        ws = wb.add_sheet('Data')
        # DATA.insert(0, ['文件名称','录入时间', '车牌号码', '车牌类型', '识别耗时', '车牌信息'])
        for i, Data in enumerate(DATA):
            for j, data in enumerate(Data):
                ws.write(i, j, data)
        wb.save(path)
        QMessageBox.information(None, "成功", "数据已保存!", QMessageBox.Yes)

    def __writecsv(self, DATA, path):
        f = open(path, 'w')
        # DATA.insert(0, ['文件名称','录入时间', '车牌号码', '车牌类型', '识别耗时', '车牌信息'])
        for data in DATA:
            f.write((',').join(data) + '\n')
        f.close()
        QMessageBox.information(None, "成功", "数据已保存!", QMessageBox.Yes)

    def __writeFiles(self):
        path, filetype = QFileDialog.getSaveFileName(None, "另存为", self.ProjectPath,
                                                     "Excel 工作簿(*.xls);;CSV (逗号分隔)(*.csv)")
        if path == "":  # nothing selected
            return
        if filetype == 'Excel 工作簿(*.xls)':
            self.__writexls(self.Data, path)
        elif filetype == 'CSV (逗号分隔)(*.csv)':
            self.__writecsv(self.Data, path)

    def __openimage(self):
        path, filetype = QFileDialog.getOpenFileName(None, "选择文件", self.ProjectPath,
                                                     "JPEG Image (*.jpg);;PNG Image (*.png);;JFIF Image (*.jfif)")  # ;;All Files (*)
        if path == "":  # no file selected
            return
        filename = path.split('/')[-1]
        # scale the preview to fit
        size = cv2.imdecode(np.fromfile(path, dtype=np.uint8), cv2.IMREAD_COLOR).shape
        if size[0] / size[1] > 1.0907:
            w = size[1] * self.label.height() / size[0]
            h = self.label.height()
            jpg = QtGui.QPixmap(path).scaled(w, h)
        elif size[0] / size[1] < 1.0907:
            w = self.label.width()
            h = size[0] * self.label.width() / size[1]
            jpg = QtGui.QPixmap(path).scaled(w, h)
        else:
            jpg = QtGui.QPixmap(path).scaled(self.label.width(), self.label.height())
        self.label.setPixmap(jpg)
        result = self.__vlpr(path)
        if result is not None:
            self.Data.append([filename, result['InputTime'], result['Number'],
                              result['Type'], str(result['UseTime']) + '秒', result['From']])
            self.__show(result, filename)
        else:
            QMessageBox.warning(None, "Error", "无法识别此图像!", QMessageBox.Yes)


# subclass QMainWindow to confirm on close
class MainWindow(QtWidgets.QMainWindow):
    def closeEvent(self, event):
        reply = QtWidgets.QMessageBox.question(self, '提示', "是否要退出程序?\n提示:退出后将丢失所有识别数据",
                                               QtWidgets.QMessageBox.Yes | QtWidgets.QMessageBox.No,
                                               QtWidgets.QMessageBox.No)
        if reply == QtWidgets.QMessageBox.Yes:
            event.accept()
        else:
            event.ignore()


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = MainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
```

**I made the changes shown in the screenshots below; the red marks are the added batch-recognition code, everything else is unchanged.**

![图片说明](https://img-ask.csdn.net/upload/202003/19/1584628510_866364.png) ![图片说明](https://img-ask.csdn.net/upload/202003/19/1584629107_129702.png)

Could someone check whether the added batch-recognition code is correct, and how to fix this error? A grateful newbie.
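The screenshots with the batch additions are not reproduced here, so the exact bug cannot be pinpointed, but exit code 0xC0000409 from a PyQt app very often means an exception escaped a slot (PyQt5 aborts the process) or a None/failed recognition result was pushed into the GUI. A defensive sketch of a batch handler built on the working single-image path; the method name __openimages is invented, and everything else follows the question's code:

```
def __openimages(self):
    # getOpenFileNames (plural) returns a list of paths for batch work
    paths, _ = QFileDialog.getOpenFileNames(None, "选择文件", self.ProjectPath,
                                            "JPEG Image (*.jpg);;PNG Image (*.png)")
    for path in paths:
        try:
            result = self.__vlpr(path)   # reuse the proven single-image path
        except Exception as e:
            print(path, e)               # an exception escaping a Qt slot
            continue                     # kills the process with 0xC0000409
        if result is None:
            continue                     # skip unrecognizable images quietly
        filename = path.split('/')[-1]
        self.Data.append([filename, result['InputTime'], result['Number'],
                          result['Type'], str(result['UseTime']) + '秒', result['From']])
        self.__show(result, filename)
        QtWidgets.QApplication.processEvents()  # keep the UI responsive
```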

Error: Traceback (most recent call last): File... .format(val=len(data), ind=len(index))) ValueError: Length of passed values is 400, index implies 1

I'm a complete beginner. While writing homework that plots a Gaussian distribution, I hit this error:

```
D:\Anaconda\python.exe "F:/All tasks in BFU/Study abroad/Internship2019.8 in Google/Homework/Course1/Exercise6/exercise6.py"
Traceback (most recent call last):
  File "F:/All tasks in BFU/Study abroad/Internship2019.8 in Google/Homework/Course1/Exercise6/exercise6.py", line 20, in <module>
    y = func(x, mean, std)
  File "F:/All tasks in BFU/Study abroad/Internship2019.8 in Google/Homework/Course1/Exercise6/exercise6.py", line 15, in func
    f = math.exp(-((x - mu) ^ 2)/(2*sigma ^ 2))/(sigma * math.sqrt(2 * math.pi))
  File "D:\Anaconda\lib\site-packages\pandas\core\ops.py", line 1071, in wrapper
    index=left.index, name=res_name, dtype=None)
  File "D:\Anaconda\lib\site-packages\pandas\core\ops.py", line 980, in _construct_result
    out = left._constructor(result, index=index, dtype=dtype)
  File "D:\Anaconda\lib\site-packages\pandas\core\series.py", line 262, in __init__
    .format(val=len(data), ind=len(index)))
ValueError: Length of passed values is 400, index implies 1

Process finished with exit code 1
```

I have anaconda installed, but the traceback seems to point inside the pandas package. What is wrong, and how should I fix it? I couldn't find the same question online, and I don't dare type commands into the command prompt like other answers suggest, in case I break something. My homework code:

```
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# import matplotlib.mlab as mlb

data = pd.read_csv('example-exercise6.csv')  # read the data file
# data = data_['time']
mean = data.mean()  # average of data
std = data.std()    # std


def func(x, mu, sigma):
    f = math.exp(-((x - mu) ^ 2)/(2*sigma ^ 2))/(sigma * math.sqrt(2 * math.pi))
    return f


x = np.arange(60, 100, 0.1)
y = func(x, mean, std)
plt.plot(x, y)
plt.hist(data, bins=10, rwidth=0.9, normed=True)
# x = np.arange(145, 155, 0.2)
# y = normfun(x, mean, std)
# plt.plot(x, y, 'g', linewidth=3)
# plt.hist(data, bins=6, color='b', alpha=0.5, rwidth=0.9, normed=True)
# plt.title('stakes distribution')
# plt.xlabel('stakes time')
# plt.ylabel('Probability')
plt.show()
```

(The csv file contains:)

```
87
88
83
83
86
80
84
90
84
80
94
89
76
```
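Two separate things break func(): `^` is bitwise XOR in Python, not a power (use `**`), and math.exp() only accepts scalars, while x is a numpy array and mean/std are 1-element pandas Series, which is where "Length of passed values is 400, index implies 1" comes from. A vectorised version, assuming the CSV holds a single column as shown:

```
import numpy as np

# Vectorised Gaussian pdf: np.exp handles arrays, ** is the power operator.
def func(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

mean = float(data.mean())   # collapse the 1-element Series to a scalar
std = float(data.std())
y = func(np.arange(60, 100, 0.1), mean, std)
```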

Python run-time error in a program that analyses data and generates a visualisation (I'm a novice; please spell out the fix)

The run fails with:

```
C:\Users\Administrator\PycharmProjects\untitled\venv\Scripts\python.exe C:/Users/Administrator/PycharmProjects/untitled/dianying/src/analysis_data.py
一共有:16590个
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\ADMINI~1\AppData\Local\Temp\jieba.cache
Loading model cost 0.808 seconds.
Prefix dict has been built succesfully.
Traceback (most recent call last):
  File "C:/Users/Administrator/PycharmProjects/untitled/dianying/src/analysis_data.py", line 252, in <module>
    jiebaclearText(content)
  File "C:/Users/Administrator/PycharmProjects/untitled/dianying/src/analysis_data.py", line 97, in jiebaclearText
    f_stop_text = f_stop.read()
  File "D:\python111\lib\codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa1 in position 3: invalid start byte

Process finished with exit code 1
```

The code:

```
'''
date : 2019.3.28
goal : visualise and analyse the scraped data
'''
import csv

time = []
nickName = []
gender = []
cityName = []
userLevel = []
score = []
content = ''

# read the data
def read_csv():
    content = ''
    # read the file contents
    with open(r'D:\maoyan.csv', 'r', encoding='utf_8_sig', newline='') as file_test:
        reader = csv.reader(file_test)
        i = 0
        for row in reader:
            if i != 0:
                time.append(row[0])
                nickName.append(row[1])
                gender.append(row[2])
                cityName.append(row[3])
                userLevel.append(row[4])
                score.append(row[5])
                content = content + row[6]
                # print(row)
            i = i + 1
        print('一共有:' + str(i - 1) + '个')
    return content

import re, jieba
# word-cloud generator
from wordcloud import WordCloud, ImageColorGenerator
# Chinese font handling
import matplotlib.font_manager as fm
from pylab import *

mpl.rcParams['font.sans-serif'] = ['SimHei']

from os import path
d = path.dirname(__file__)
stopwords_path = 'D:\ku\chineseStopWords.txt'

# comment word-cloud analysis
def word_cloud(content):
    import jieba, re, numpy
    from pyecharts import WordCloud
    import pandas as pd
    # strip all redundant characters from the comments
    content = content.replace(" ", ",")
    content = content.replace(" ", "、")
    content = re.sub('[,,。. \r\n]', '', content)
    segment = jieba.lcut(content)
    words_df = pd.DataFrame({'segment': segment})
    # quoting=3 means nothing in stopwords.txt is quoted
    stopwords = pd.read_csv(stopwords_path, index_col=False, quoting=3, sep="\t",
                            names=['stopword'], encoding='utf-8')
    words_df = words_df[~words_df.segment.isin(stopwords.stopword)]
    words_stat = words_df.groupby(by=['segment'])['segment'].agg({"计数": numpy.size})
    words_stat = words_stat.reset_index().sort_values(by=["计数"], ascending=False)
    test = words_stat.head(500).values
    codes = [test[i][0] for i in range(0, len(test))]
    counts = [test[i][1] for i in range(0, len(test))]
    wordcloud = WordCloud(width=1300, height=620)
    wordcloud.add("影评词云", codes, counts, word_size_range=[20, 100])
    wordcloud.render(d + "\picture\c_wordcloud.html")

# tokenizer: keeps the tokens that survive the stopword filter
def jiebaclearText(text):
    # an empty list to hold the filtered tokens
    mywordList = []
    text = re.sub('[,,。. \r\n]', '', text)
    # segment the text
    seg_list = jieba.cut(text, cut_all=False)
    # join the generator output with /
    listStr = '/'.join(seg_list)
    listStr = listStr.replace("class", "")
    listStr = listStr.replace("span", "")
    listStr = listStr.replace("悲伤逆流成河", "")
    # open the stopword list
    f_stop = open(stopwords_path, encoding="utf8")
    # read it
    try:
        f_stop_text = f_stop.read()
    finally:
        f_stop.close()  # release the handle
    # split the stopwords on \n into a list
    f_stop_seg_list = f_stop_text.split("\n")
    # walk the default-mode tokens and drop the stopwords
    for myword in listStr.split('/'):
        if not (myword.split()) in f_stop_seg_list and len(myword.strip()) > 1:
            mywordList.append(myword)
    return ' '.join(mywordList)

# render the word-cloud image
def make_wordcloud(text1):
    text1 = text1.replace("悲伤逆流成河", "")
    bg = plt.imread(d + "/static/znn1.jpg")
    wc = WordCloud(
        # FFFAE3
        background_color="white",  # white background (default is black)
        width=890,                 # image width
        height=600,                # image height
        mask=bg,
        # margin=10,               # image margin
        max_font_size=150,         # largest font size shown
        random_state=50,           # one PIL colour per word
        font_path=d + '/static/simkai.ttf'  # system font for Chinese
    ).generate_from_text(text1)
    # font for the figure
    my_font = fm.FontProperties(fname=d + '/static/simkai.ttf')
    # image background
    bg_color = ImageColorGenerator(bg)
    # draw
    plt.imshow(wc.recolor(color_func=bg_color))
    # no axes on the cloud
    plt.axis("off")
    # save the cloud
    wc.to_file(d + r"/picture/word_cloud.png")

# gender distribution of the commenters
def sex_distribution(gender):
    # print(gender)
    from pyecharts import Pie
    list_num = []
    list_num.append(gender.count('0'))  # unknown
    list_num.append(gender.count('1'))  # male
    list_num.append(gender.count('2'))  # female
    attr = ["其他", "男", "女"]
    pie = Pie("性别饼图")
    pie.add("", attr, list_num, is_label_show=True)
    pie.render(d + r"\picture\sex_pie.html")

# city distribution of the commenters
def city_distribution(cityName):
    city_list = list(set(cityName))
    city_dict = {city_list[i]: 0 for i in range(len(city_list))}
    for i in range(len(city_list)):
        city_dict[city_list[i]] = cityName.count(city_list[i])
    # sort by count (the dict values)
    sort_dict = sorted(city_dict.items(), key=lambda d: d[1], reverse=True)
    city_name = []
    city_num = []
    for i in range(len(sort_dict)):
        city_name.append(sort_dict[i][0])
        city_num.append(sort_dict[i][1])
    import random
    from pyecharts import Bar
    bar = Bar("评论者城市分布")
    bar.add("", city_name, city_num, is_label_show=True, is_datazoom_show=True)
    bar.render(d + r"\picture\city_bar.html")

# daily comment counts
def time_num_visualization(time):
    from pyecharts import Line
    time_list = list(set(time))
    time_dict = {time_list[i]: 0 for i in range(len(time_list))}
    time_num = []
    for i in range(len(time_list)):
        time_dict[time_list[i]] = time.count(time_list[i])
    # sort by date (the dict keys)
    sort_dict = sorted(time_dict.items(), key=lambda d: d[0], reverse=False)
    time_name = []
    time_num = []
    print(sort_dict)
    for i in range(len(sort_dict)):
        time_name.append(sort_dict[i][0])
        time_num.append(sort_dict[i][1])
    line = Line("评论数量日期折线图")
    line.add(
        "日期-评论数",
        time_name,
        time_num,
        is_fill=True,
        area_color="#000",
        area_opacity=0.3,
        is_smooth=True,
    )
    line.render(d + r"\picture\c_num_line.html")

# commenter level and score distributions
def level_score_visualization(userLevel, score):
    from pyecharts import Pie
    userLevel_list = list(set(userLevel))
    userLevel_num = []
    for i in range(len(userLevel_list)):
        userLevel_num.append(userLevel.count(userLevel_list[i]))
    score_list = list(set(score))
    score_num = []
    for i in range(len(score_list)):
        score_num.append(score.count(score_list[i]))
    pie01 = Pie("等级环状饼图", title_pos='center', width=900)
    pie01.add(
        "等级",
        userLevel_list,
        userLevel_num,
        radius=[40, 75],
        label_text_color=None,
        is_label_show=True,
        legend_orient="vertical",
        legend_pos="left",
    )
    pie01.render(d + r"\picture\level_pie.html")
    pie02 = Pie("评分玫瑰饼图", title_pos='center', width=900)
    pie02.add(
        "评分",
        score_list,
        score_num,
        center=[50, 50],
        is_random=True,
        radius=[30, 75],
        rosetype="area",
        is_legend_show=False,
        is_label_show=True,
    )
    pie02.render(d + r"\picture\score_pie.html")

time = []
nickName = []
gender = []
cityName = []
userLevel = []
score = []
content = ''
content = read_csv()
# 1 word cloud
jiebaclearText(content)
make_wordcloud(content)
# pyecharts word cloud
# word_cloud(content)
# 2 gender distribution
sex_distribution(gender)
# 3 city distribution
city_distribution(cityName)
# 4 comment counts
time_num_visualization(time)
# 5 level and score
level_score_visualization(userLevel, score)
```
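The traceback points at reading chineseStopWords.txt: byte 0xa1 is illegal as a UTF-8 start byte but ubiquitous in GBK text, so the stopword file is almost certainly GBK-encoded. Either re-save the file as UTF-8, or open it with the encoding it actually has:

```
# open the stopword list with its real encoding instead of utf8
f_stop = open(stopwords_path, encoding='gbk')   # or errors='ignore' while testing
```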

AttributeError: 'str' object has no attribute 'decode'; begging the experts for an answer.

```
from django.db import models

# Create your models here.
class Grades(models.Model):
    gname = models.CharField(max_length=20)
    gdate = models.DateTimeField()
    ggirlnum = models.IntegerField()
    gboynum = models.IntegerField()
    isDelete = models.BooleanField(default=False)

class Students(models.Model):
    sname = models.CharField(max_length=20)
    sgender = models.BooleanField(default=True)
    sage = models.IntegerField()
    scontend = models.CharField(max_length=20)
    isDelete = models.BooleanField(default=False)
    # foreign key
    sgrade = models.ForeignKey(Grades, on_delete=models.CASCADE)
```

Django 2.2, Python 3.7. Running the database migration (python manage.py makemigrations) raises AttributeError: 'str' object has no attribute 'decode'.
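This exact traceback from makemigrations is a known incompatibility between Django 2.2's MySQL backend and the PyMySQL shim (the backend calls .decode() on the query, which PyMySQL already hands over as str); the models themselves are fine. A sketch of the usual ways out, assuming pymysql.install_as_MySQLdb() is in use:

```
# Option 1 (preferred): install the native driver Django 2.2 expects...
#     pip install mysqlclient
# ...and delete the shim from the project's __init__.py / manage.py:
#     import pymysql
#     pymysql.install_as_MySQLdb()

# Option 2: stay on PyMySQL but pin a Django version that still
# cooperates with it, e.g.
#     pip install "Django<2.2"
```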

TensorFlow autoencoder placeholder error

```
import numpy as np
import tensorflow as tf

def xavier_init(fan_in, fan_out, constant=1):
    low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
    high = constant * np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform((fan_in, fan_out), minval=low, maxval=high, dtype=tf.float32)

class AdditiveGaussionNoiseAutoencoder(object):
    def __init__(self, n_input, n_hidden, transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(), scale=0.1):
        self.n_input = n_input
        self.n_hidden = n_hidden
        self.transfer = transfer_function
        self.scale = tf.placeholder(tf.float32)
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights
        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(tf.matmul(
            self.x + scale * tf.random_normal((n_input,)),
            self.weights['w1']), self.weights['b1']))
        self.reconstruction = tf.add(tf.matmul(self.hidden, self.weights['w2']),
                                     self.weights['b2'])
        self.cost = tf.sqrt(tf.reduce_mean(tf.pow(tf.subtract(
            self.reconstruction, self.x), 2.0)))
        self.optimizer = optimizer.minimize(self.cost)
        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

    def _initialize_weights(self):
        all_weights = dict()
        all_weights['w1'] = tf.Variable(xavier_init(self.n_input, self.n_hidden))
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
        all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype=tf.float32))
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
        return all_weights

    def partial_fit(self, X):
        cost, opt = self.sess.run((self.cost, self.optimizer),
                                  feed_dict={self.x: X, self.scale: self.training_scale})
        return cost

    def calc_total_cost(self, X):
        return self.sess.run(self.cost, feed_dict={self.x: X, self.scale: self.training_scale})

    def transform(self, X):
        return self.sess.run(self.hidden, feed_dict={self.x: X, self.scale: self.training_scale})

    def generate(self, hidden=None):
        if hidden is None:
            hidden = np.random.normal(size=self.weights['b1'])
        return self.sess.run(self.reconstruction, feed_dict={self.hidden: hidden})

    def reconstruct(self, X):
        return self.sess.run(self.reconstruction,
                             feed_dict={self.x: X, self.scale: self.training_scale})

    def getweights(self):
        return self.sess.run(self.weights['w1'])

    def getbiases(self):
        return self.sess.run(self.weights['b1'])
```

```
import numpy as np
import tensorflow as tf
from DSAE import AdditiveGaussionNoiseAutoencoder
import xlrd
import sklearn.preprocessing as prep

# data loading; could be converted to csv for easier handling, see ConvertData
train_input = "/Users/Patrick/Desktop/traffic_data/train_500010092_input.xls"
train_output = "/Users/Patrick/Desktop/traffic_data/train_500010092_output.xls"
test_input = "/Users/Patrick/Desktop/traffic_data/test_500010092_input.xls"
test_output = "/Users/Patrick/Desktop/traffic_data/test_500010092_output.xls"
book_train_input = xlrd.open_workbook(train_input, encoding_override='utf-8')
book_train_output = xlrd.open_workbook(train_output, encoding_override='utf-8')
book_test_input = xlrd.open_workbook(test_input, encoding_override='utf-8')
book_test_output = xlrd.open_workbook(test_output, encoding_override='utf-8')
sheet_train_input = book_train_input.sheet_by_index(0)
sheet_train_output = book_train_output.sheet_by_index(0)
sheet_test_input = book_test_input.sheet_by_index(0)
sheet_test_output = book_test_output.sheet_by_index(0)
data_train_input = np.asarray([sheet_train_input.row_values(i)
                               for i in range(2, sheet_train_input.nrows)])
data_train_output = np.asarray(([sheet_train_output.row_values(i)
                                 for i in range(2, sheet_train_output.ncols)]))
data_test_input = np.asarray([sheet_test_input.row_values(i)
                              for i in range(2, sheet_test_input.nrows)])
data_test_output = np.asarray(([sheet_test_output.row_values(i)
                                for i in range(2, sheet_test_output.ncols)]))

def standard_scale(X_train, X_test):
    preprocessor = prep.StandardScaler().fit(X_train)
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test

X_train, X_test = standard_scale(data_train_input, data_test_input)

def get_block_form_data(data, batch_size, k):
    # start_index = 0
    start_index = k * batch_size
    return data[start_index:(start_index + batch_size)]

training_epochs = 20
batch_size = 288
n_samples = sheet_test_output.nrows
display_step = 1
stack_size = 3
hidden_size = [10, 8, 10]
sdae = []
for i in range(stack_size):
    if i == 0:
        ae = AdditiveGaussionNoiseAutoencoder(n_input=12, n_hidden=hidden_size[i],
                                              transfer_function=tf.nn.relu,
                                              optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                                              scale=0.01)
        ae._initialize_weights()
        sdae.append(ae)
    else:
        ae = AdditiveGaussionNoiseAutoencoder(n_input=hidden_size[i-1], n_hidden=hidden_size[i],
                                              transfer_function=tf.nn.relu,
                                              optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                                              scale=0.01)
        ae._initialize_weights()
        sdae.append(ae)

W = []
b = []
hidden_feacture = []
X_train = np.array([0])
for j in range(stack_size):
    if j == 0:
        X_train = data_train_input
        X_test = data_test_input
    else:
        X_train_pre = X_train
        X_train = sdae[j-1].transform(X_train_pre)
        print(X_train.shape)
        hidden_feacture.append(X_train)
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(n_samples / batch_size)
        for i in range(total_batch):
            batch_xs = get_block_form_data(X_train, batch_size, i)
            cost = sdae[j].partial_fit(batch_xs)
            avg_cost += cost / n_samples * batch_size
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    weight = sdae[j].getweights()
    W.append(weight)
    print(np.shape(W))
    b.append(sdae[j].getbiases())
    print(np.shape(b))
```

The error:

```
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydev_run_in_console.py", line 53, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/Patrick/PycharmProjects/DSAE-SVM/DLmain.py", line 80, in <module>
    X_train = sdae[j-1].transform(X_train_pre)
File "/Users/Patrick/PycharmProjects/DSAE-SVM/DSAE.py", line 70, in transform
    feed_dict={self.x: X, self.scale: self.training_scale})
File "/Users/Patrick/anaconda3/envs/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
File "/Users/Patrick/anaconda3/envs/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1113, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (18143, 3) for Tensor 'Placeholder_1:0', which has shape '(?, 12)'
PyDev console: starting.
Python 3.4.5 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:47:57)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
```

I really don't know how to change the placeholder's shape; please help explain.
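The feed fails because the first autoencoder was built with the hard-coded n_input=12, while the spreadsheet actually yields rows of width 3 (hence the fed shape (18143, 3) against the placeholder (?, 12)). Deriving the input width from the data keeps the placeholder and the feed in sync, whatever the file contains (note also the ncols/nrows slip when reading the output sheets):

```
# width of one input row: 3 for this file, not the hard-coded 12
n_input = data_train_input.shape[1]
ae = AdditiveGaussionNoiseAutoencoder(n_input=n_input, n_hidden=hidden_size[0],
                                      transfer_function=tf.nn.relu,
                                      optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                                      scale=0.01)
```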

Fitting a power-law curve to data

Hi all: I'm trying to use Python 3 to fit the plot below to a power-law curve.

![图片说明](https://img-ask.csdn.net/upload/202005/04/1588554239_68515.png)

I followed an expert's post on this forum: https://blog.csdn.net/kevinelstri/article/details/52685934 . Unlike that post, where the data is generated automatically, my data is pulled from an Excel sheet. While running I hit this error:

TypeError: zip argument #1 must support iteration

![图片说明](https://img-ask.csdn.net/upload/202005/04/1588554270_841133.png)

My raw Excel data looks like this; could someone who knows this area point me in the right direction?

![图片说明](https://img-ask.csdn.net/upload/202005/04/1588554300_472811.png)

My code:

```
import matplotlib.pyplot as plt
import numpy as np
import xlrd
from sklearn import linear_model


def open_excel():
    # fetch the data from Excel
    try:
        book = xlrd.open_workbook('D:/data/Excel_data.xlsx')  # file name; keep the file next to the .py file
    except:
        print("open excel file failed!")
    try:
        sheet = book.sheet_by_name('data2')  # the designated sheet
        return sheet
    except:
        print("locate worksheet in excel failed!")


def extraction():
    # pull the column information out of the sheet
    sheet = open_excel()  # defined above
    Xs = []
    Ys = []
    logXs = []
    logYs = []
    for i in range(1, sheet.nrows):  # the first rows are field names, so data starts below them
        Z = sheet.cell(i, 0).value   # row i, column 1: APP name
        X = sheet.cell(i, 1).value   # APP ranking
        Y = sheet.cell(i, 2).value   # APP user count
        Xs.append(X)
        Ys.append(Y)
        logX = np.log10(X)
        logY = np.log10(Y)
        logXs.append(logX)
        logYs.append(logY)
    print("Xs 如下:")
    print(Xs)
    print("Ys 如下:")
    print(Ys)
    print("logXs 如下:")
    print(logXs)
    print("logYs 如下:")
    print(logYs)
    print("------------")
    plt.title("top50 APPs in 2020 China")
    plt.scatter(Xs, Ys, color='blue')
    plt.xlabel('Ranking', fontproperties='SimHei')
    plt.ylabel('Number of users (10000)', fontproperties='SimHei')
    plt.show()
    return logX, logY


def DataFitAndVisualization(logX, logY):
    X_parameter = []
    Y_parameter = []
    for single_square_feet, single_price_value in zip(logX, logY):
        X_parameter.append([float(single_square_feet)])
        Y_parameter.append(float(single_price_value))
    # fit the model
    regr = linear_model.LinearRegression()
    regr.fit(X_parameter, Y_parameter)
    # model results and score
    print('Coefficients: \n', regr.coef_,)
    print("Intercept:\n", regr.intercept_)
    # the mean square error
    print("Residual sum of squares: %.8f" % np.mean((regr.predict(X_parameter) - Y_parameter) ** 2))
    # visualise
    plt.title("Log Data")
    plt.scatter(X_parameter, Y_parameter, color='black')
    plt.plot(X_parameter, regr.predict(X_parameter), color='blue', linewidth=3)
    plt.show()


if __name__ == "__main__":
    logX, logY = extraction()
    DataFitAndVisualization(logX, logY)
```
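extraction() returns logX and logY, the scalar leftovers from the final loop iteration, rather than the accumulated lists logXs and logYs; zip() is then handed two floats, which is exactly "zip argument #1 must support iteration". A one-line sketch of the fix inside extraction():

```
# return the accumulated lists, not the last scalars
return logXs, logYs   # instead of: return logX, logY
```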

Using ajax to dynamically pull data from mysql and display it on the front-end page

The code is below. Front-end html:

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title></title>
</head>
<!--<script type="text/javascript" src="jquery.js"></script>-->
<script type="text/javascript" src="http://echarts.baidu.com/gallery/vendors/echarts/echarts.min.js"></script>
<script src="http://apps.bdimg.com/libs/jquery/2.1.4/jquery.min.js"></script>
<body>
<div id="main" style="width: 600px;height:400px;"></div>

<script>
    var app = {
        xvalue: [],
        yvalue: [],
        z: [],
    };

    // send the ajax request; fetch the json data from the backend
    $(document).ready(function () {
        getData();
        console.log(app.value1);
        console.log(app.timepoint)
        console.log(app.predictvalue1)
    });

    function getData() {
        $.ajax({
            url: '/test',
            data: {},
            type: 'POST',
            async: false,
            dataType: 'json',
            success: function (data) {
                app.value1 = data.value1;
                app.predictvalue1 = data.predictvalue1;
                value1 = app.value1;
                predictvalue1 = app.predictvalue1;

                function trueData(i) {
                    now = new Date(+now + oneDay);
                    value = value1[i];
                    return {
                        name: now.toString(),
                        value: [
                            [now.getFullYear(), now.getMonth() + 1, now.getDate()].join('/'),
                            Math.round(value)
                        ]
                    }
                }

                function predictData(i) {
                    now1 = new Date(+now1 + oneDay);
                    predictvalue = predictvalue1[i];
                    return {
                        name: now1.toString(),
                        value: [
                            [now1.getFullYear(), now1.getMonth() + 1, now1.getDate()].join('/'),
                            Math.round(predictvalue)
                        ]
                    }
                }

                var data = [];
                var predictdata = [];
                var now = +new Date(1997, 9, 3);
                var now1 = +new Date(1997, 9, 4);
                var oneDay = 24 * 3600 * 1000;
                for (var i = 0; i < value1.length; i++) {
                    data.push(trueData(i));
                }
                for (var i = 0; i < predictvalue1.length; i++) {
                    predictdata.push(predictData(i));
                }

                // init the echarts instance on the prepared dom
                var myChart = echarts.init(document.getElementById('main'));
                option = {
                    title: {
                        text: '动态数据 + 时间坐标轴'
                    },
                    tooltip: {
                        trigger: 'axis',
                        formatter: function (params) {
                            params = params[0];
                            var date = new Date(params.name);
                            return date.getDate() + '/' + (date.getMonth() + 1) + '/' + date.getFullYear() + ' : ' + params.value[1];
                        },
                        axisPointer: {
                            animation: false
                        }
                    },
                    xAxis: {
                        type: 'time',
                        splitLine: { show: false }
                    },
                    yAxis: {
                        type: 'value',
                        boundaryGap: [0, '100%'],
                        splitLine: { show: false }
                    },
                    series: [{
                        name: '真实数据',
                        type: 'line',
                        showSymbol: false,
                        hoverAnimation: false,
                        data: [],
                        markLine: {
                            itemStyle: {
                                normal: {
                                    borderWidth: 1,
                                    lineStyle: { type: "dash", color: 'red', width: 2 },
                                    show: true,
                                    color: '#4c5336'
                                }
                            },
                            data: [{ yAxis: 900 }]
                        }
                    }, {
                        name: '预测数据',
                        type: 'line',
                        showSymbol: false,
                        hoverAnimation: false,
                        data: [],
                        markLine: {
                            itemStyle: {
                                normal: {
                                    borderWidth: 1,
                                    lineStyle: { type: "dash", color: 'blue', width: 2 },
                                    show: true,
                                    color: '#4c5336'
                                }
                            },
                            data: [{ yAxis: 900 }]
                        }
                    }]
                };

                // apply the option and render the chart
                myChart.setOption(option);

                setInterval(function () {
                    for (var i = 0; i < 1; i++) {
                        data.shift();
                        data.push(trueData(i));
                    }
                    for (var i = 0; i < 1; i++) {
                        predictdata.shift();
                        predictdata.push(predictData(i));
                    }
                    myChart.setOption({
                        series: [{ data: data }, { data: predictdata }]
                    });
                }, 1000);
            }
        })
    }
</script>
</body>
</html>
```

Back-end py, using the flask framework:

```
import MySQLdb
from flask import Flask, render_template, url_for
import pymysql
import pandas as pd
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, Dropout
import json
import operator
from functools import reduce
import math
import tensorflow as tf
from keras import initializers
import time

# create the Flask instance
app = Flask(__name__)

@app.route("/")
def hello():
    return render_template('new_file.html')

# the /test route receives the front-end ajax request
@app.route('/test', methods=['POST'])
def my_echart():
    # connect to the database
    conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root',
                           passwd='123456', db='test', charset='utf8')
    cur = conn.cursor()
    sql = 'SELECT timepoint,value1 from timeseries'
    cur.execute(sql)
    u = cur.fetchall()
    timepoint = []
    value1 = []
    for data in u:
        value1.append(data[1])
        timepoint.append(data[0])
    print(value1)
    # convert to json
    jsonData = {}
    jsonData['value1'] = value1
    jsonData['timepoint'] = timepoint
    # json.dumps() turns the dict into a str; writing a dict into json directly errors out
    j = json.dumps(jsonData)
    cur.close()
    conn.close()
    # hand the data back to the page
    return (j)

if __name__ == '__main__':
    app.run(debug=True, port='5000')
```

The returned data is read from mysql. Now I want ajax to request the next data point from the database on a timer and refresh the page to display it; how should the code be changed? The database looks like this:

![图片说明](https://img-ask.csdn.net/upload/201905/24/1558685991_221903.jpg)
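A sketch of an incremental endpoint the page could poll with setInterval (the route name and the ?offset= parameter are invented for illustration): each call returns only the rows the client has not seen yet, so the JS side can append them to `data` and call myChart.setOption again instead of re-fetching everything.

```
from flask import request, jsonify

@app.route('/next', methods=['GET'])
def next_points():
    # the client passes how many rows it already has
    offset = int(request.args.get('offset', 0))
    conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root',
                           passwd='123456', db='test', charset='utf8')
    cur = conn.cursor()
    # fetch only rows past the client's offset
    cur.execute('SELECT timepoint, value1 FROM timeseries LIMIT %s, 100', (offset,))
    rows = cur.fetchall()
    cur.close()
    conn.close()
    return jsonify(timepoint=[r[0] for r in rows], value1=[r[1] for r in rows])
```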
