After a Python class inherits from dict, list, or set, how can a classmethod of that class get its length?

```python
class Account(list):
    def __init__(self, account):
        list.__init__([])
        self.append(account)

    def getlen1(self):
        print(len(self))

    @classmethod
    def getlen2(cls):
        print(len(cls))

if __name__ == '__main__':
    a = Account(['jone', 27, '36'])
    a.getlen1()
    a.getlen2()
```

Running the second method (getlen2) raises an error:

(screenshot of the error)

3 answers

cls is the class, self is the instance, and super refers to the base class. Inside a classmethod, cls is the class object itself rather than an instance, so you can't call len on it.
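To make the distinction in this answer concrete, here is a minimal sketch (the class name `Demo` is made up for illustration):

```python
class Demo(list):
    @classmethod
    def which(cls):
        # inside a classmethod, cls is bound to the class object itself
        return cls

d = Demo([1, 2, 3])
print(d.which() is Demo)   # True: cls is the Demo class, not the instance d

# a class object defines no __len__, so len() on it fails
try:
    len(Demo)
except TypeError as e:
    print(e)
```

This is exactly why `len(cls)` inside `getlen2` raises a TypeError: the argument is the class, not a list instance.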

Why write it this way? A classmethod can only access class attributes, which is why your version fails. I hacked it around a bit; take a look:

```python
class Account(list):
    ll = []

    def __init__(self, account):
        list.__init__([])
        self.append(account)

    def getlen1(self):
        print(len(self))
        print(self)

    @classmethod
    def getlen2(cls, self):
        # stash the instance in a class attribute so the classmethod can see it
        cls.ll = self
        print(cls.ll)
        print(len(cls.ll))

if __name__ == '__main__':
    a = Account(['jone', 27, '36'])
    a.getlen1()
    a.getlen2(a)
```

Output:

```
1
[['jone', 27, '36']]
[['jone', 27, '36']]
1
```

qq_32040767
@碎破星空 I wanted to use a classmethod to print the length of the elements. Looks like a classmethod can't do that. Thanks.
over 2 years ago · Reply

A classmethod can access class attributes; as for whether it can access instance attributes, look it up in a book.
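Putting the answers together: since only an instance has a length, the straightforward fix is to report the length from an instance method (and initialize the base list properly via super()). This is a sketch of that approach, not the original poster's code:

```python
class Account(list):
    def __init__(self, account):
        super().__init__()      # properly initialize the underlying list
        self.append(account)

    def getlen(self):
        # an instance method receives the object itself, so len(self) works
        return len(self)

a = Account(['jone', 27, '36'])
print(a.getlen())   # 1: the whole account list was appended as one element
print(len(a))       # len() also works directly on the instance
```

Note that because Account subclasses list, plain `len(a)` already does the job; a dedicated method is only needed if you want extra behavior around it.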

"E:\2345Downloads\版本2下载whl.py", line 141, in get1 res = requests.get(url) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\api.py", line 75, in get return request('get', url, params=params, **kwargs) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\api.py", line 60, in request return session.request(method=method, url=url, **kwargs) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', OSError("(10060, 'WSAETIMEDOUT')")) 求高手解决
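The AttributeError itself has a simple cause: `HTTPConnection` lives in the standard library's `http.client` module (the old Python 2 `httplib`), not in the third-party `httplib2` package. A minimal sketch of the intended HTTP/1.0 workaround using the stdlib module:

```python
# HTTPConnection is in the stdlib http.client (the Python 2 "httplib"),
# not in the third-party httplib2 package.
import http.client as httplib

# Force HTTP/1.0, a common workaround for servers that misbehave
# over persistent HTTP/1.1 connections.
httplib.HTTPConnection._http_vsn = 10
httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'

print(httplib.HTTPConnection._http_vsn_str)  # HTTP/1.0
```

Note that the separate 10060 (WSAETIMEDOUT) failure is a network-level timeout reaching the host, typically a firewall or proxy issue, which changing the HTTP version alone will not fix.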

Python 3.7 AttributeError: 'str' object has no attribute 'items': how do I fix this error?

![图片说明](https://img-ask.csdn.net/upload/201901/23/1548240235_359645.png) Could someone please take a look at what is causing this error?
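The code in the screenshot is not reproduced here, but this exception always means `.items()` was called on a `str` where a mapping was expected, typically a dict-typed argument (headers, settings, a config) passed as a plain string. A minimal sketch of the failure mode (the `show_config` name is hypothetical, not from the screenshot):

```python
def show_config(config):
    # Expects a mapping: iterating key/value pairs needs dict's .items()
    for key, value in config.items():
        print(key, value)

show_config({"host": "localhost"})  # works: dict provides .items()

try:
    show_config("host=localhost")   # a str has no .items()
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'items'
```

The fix is usually at the call site: pass the dict itself rather than a string rendering of it.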

Recently, on a whim, I tried to call an interface written in Python from Java, and ran into a problem.

The Python code is as follows:

```
import tornado
from wtforms_tornado import Form
import sys

class hello(tornado.web.RequestHandler):
    def get(self):
        self.render("hello.html")

class ajaxtest(tornado.web.RequestHandler):
    def set_default_headers(self):
        print("setting headers!!!")
        self.set_header("Access-Control-Allow-Origin", "*")
        self.set_header("Access-Control-Allow-Headers", "x-requested-with")
        self.set_header('Access-Control-Allow-Methods', 'POST, GET, OPTIONS')

    def get(self):
        data = "你好我是刘德华"
        # data.encode("utf-8")
        print("get")
        self.write(data)

    def post(self):
        import json
        res = dict(
            hel="你好我是刘德华",
            d="ee"
        )
        json = json.dumps(res)
        print("post")
        self.write(json)
```

The Python code should be fine; the problem is probably on the Java side, because requesting the interface directly with jQuery ajax works with no issues at all. The Java code is as follows:

```
package xiaoxiaomo;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLConnection;
import java.util.List;
import java.util.Map;

public class test {

    public test() {
        // TODO Auto-generated constructor stub
    }

    /**
     * Send a GET request to the given URL.
     *
     * @param url   the URL to send the request to
     * @param param request parameters, in the form name1=value1&name2=value2
     * @return the response of the remote resource
     */
    public static String sendGet(String url, String param) {
        String result = "";
        BufferedReader in = null;
        try {
            String urlNameString = url + "?" + param;
            URL realUrl = new URL(urlNameString);
            // Open the connection to the URL
            URLConnection connection = realUrl.openConnection();
            // Set common request properties
            connection.setRequestProperty("accept", "*/*");
            connection.setRequestProperty("connection", "Keep-Alive");
            connection.setRequestProperty("user-agent",
                    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;SV1)");
            // Establish the actual connection
            connection.connect();
            // Get all the response header fields
            Map<String, List<String>> map = connection.getHeaderFields();
            // Iterate over all the response headers
            for (String key : map.keySet()) {
                System.out.println(key + "--->" + map.get(key));
            }
            // Read the response with a BufferedReader
            in = new BufferedReader(new InputStreamReader(
                    connection.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                result += line;
            }
        } catch (Exception e) {
            System.out.println("发送GET请求出现异常!" + e);
            e.printStackTrace();
        }
        // Close the input stream in a finally block
        finally {
            try {
                if (in != null) {
                    in.close();
                }
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
        return result;
    }

    /**
     * Send a POST request to the given URL.
     *
     * @param url   the URL to send the request to
     * @param param request parameters, in the form name1=value1&name2=value2
     * @return the response of the remote resource
     */
    public static String sendPost(String url, String param) {
        PrintWriter out = null;
        BufferedReader in = null;
        String result = "";
        try {
            URL realUrl = new URL(url);
            // Open the connection to the URL
            URLConnection conn = realUrl.openConnection();
            // Set common request properties
            conn.setRequestProperty("accept", "*/*");
            conn.setRequestProperty("connection", "Keep-Alive");
            conn.setRequestProperty("user-agent",
                    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;SV1)");
            // A POST request requires the following two lines
            conn.setDoOutput(true);
            conn.setDoInput(true);
            // Get the output stream of the URLConnection
            out = new PrintWriter(conn.getOutputStream());
            // Send the request parameters
            out.print(param);
            // Flush the output stream buffer
            out.flush();
            // Read the response with a BufferedReader
            in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                result += line;
            }
        } catch (Exception e) {
            System.out.println("发送 POST 请求出现异常!" + e);
            e.printStackTrace();
        }
        // Close the output and input streams in a finally block
        finally {
            try {
                if (out != null) {
                    out.close();
                }
                if (in != null) {
                    in.close();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
        return result;
    }

    public static String getEncoding(String str) {
        String encode = "GB2312";
        try {
            // Check whether the string round-trips as GB2312
            if (str.equals(new String(str.getBytes(encode), encode))) {
                String s = encode;
                return s; // if so, return "GB2312"; the blocks below work the same way
            }
        } catch (Exception exception) {
        }
        encode = "ISO-8859-1";
        try {
            // Check whether the string round-trips as ISO-8859-1
            if (str.equals(new String(str.getBytes(encode), encode))) {
                String s1 = encode;
                return s1;
            }
        } catch (Exception exception1) {
        }
        encode = "UTF-8";
        try {
            // Check whether the string round-trips as UTF-8
            if (str.equals(new String(str.getBytes(encode), encode))) {
                String s2 = encode;
                return s2;
            }
        } catch (Exception exception2) {
        }
        encode = "GBK";
        try {
            // Check whether the string round-trips as GBK
            if (str.equals(new String(str.getBytes(encode), encode))) {
                String s3 = encode;
                return s3;
            }
        } catch (Exception exception3) {
        }
        return "";
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Send a GET request
        String s = test.sendGet("http://127.0.0.1:9999/ajax", "key=123&v=456");
        // String str = new String(s.getBytes(), "utf-8");
        String type = getEncoding(s);
        System.out.println("字符串的编码是:" + type);
        System.out.println(s);
        // Send a POST request
        // String sr = test.sendPost("http://localhost:6144/Home/RequestPostString", "key=123&v=456");
        // System.out.println(sr);
    }
}
```

The output is as follows; the Chinese text is garbled:

![图片说明](https://img-ask.csdn.net/upload/201805/16/1526480833_741806.png)

Then I converted the string to UTF-8; the result is below, with the last character shown as a question mark:

![图片说明](https://img-ask.csdn.net/upload/201805/16/1526480961_68315.png)

Having no better idea, I converted the string to UTF-8 on the Python side:

![图片说明](https://img-ask.csdn.net/upload/201805/16/1526481027_280369.png)

The result: after running, the encoding is indeed UTF-8, but the output is still garbled. What is going on?

![图片说明](https://img-ask.csdn.net/upload/201805/16/1526481133_525792.png)
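A likely root cause, offered as an assumption since the screenshots are not legible here: `new InputStreamReader(connection.getInputStream())` with no charset decodes the response using the JVM's platform default (GBK on a Chinese Windows install), while tornado sends UTF-8 bytes. The mismatch can be reproduced in pure Python:

```python
# Sketch of the suspected bug: the server sends UTF-8 bytes, but the
# Java client decodes them with the platform default charset (GBK).
data = "你好我是刘德华"
raw = data.encode("utf-8")                     # bytes tornado actually sends

garbled = raw.decode("gbk", errors="replace")  # what the charset-less reader sees
print(garbled)                                 # mojibake, not the original text

fixed = raw.decode("utf-8")                    # decode with the server's charset
print(fixed)  # 你好我是刘德华
```

On the Java side the equivalent fix is to name the charset explicitly: `new InputStreamReader(conn.getInputStream(), "UTF-8")`.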

TypeError: 'NoneType' object is not iterable

Part of the code is below. Running it always reports:

```
Traceback (most recent call last):
  File "E:\project1\src\neirongxiangguanxing.py", line 156, in <module>
    setstatus = set(statuslist)
TypeError: 'NoneType' object is not iterable
```

What is the cause? Could an expert point me in the right direction? Many thanks.

```
for status in allstatus:
    dictvalue = []
    words = jieba.cut(status.gettext()[:-1])
    for word in words:
        if word not in stopwordslist:
            dictvalue.append(word)
    allstatusdict[status.getid()] = dictvalue

allcomments = []
allcomments = readallcomments(inputfile)
for comment in allcomments:
    statuslist = allstatusdict.get(comment.getid())
    setstatus = set(statuslist)
    dictcomment = []
    words = jieba.cut(comment.gettext()[:-1])
    for word in words:
        if word not in stopwordslist:
            dictcomment.append(word)
    setcomment = set(dictcomment)
```
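The traceback points at `set(statuslist)`: `dict.get` returns `None` when the key is missing, and `set(None)` is what raises. A minimal sketch of the failure and a guarded alternative (the ids are made up for illustration):

```python
allstatusdict = {"id1": ["word_a", "word_b"]}

# dict.get returns None for a missing key, and set(None) then raises
missing = allstatusdict.get("id2")
try:
    set(missing)
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable

# Guard with a default so a missing id yields an empty set instead
setstatus = set(allstatusdict.get("id2", []))
print(setstatus)  # set()
```

So in the original loop, a comment whose id never appeared in `allstatus` produces exactly this error; `allstatusdict.get(comment.getid(), [])` or an explicit membership check avoids it.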

ValueError: shape mismatch: objects cannot be broadcast to a single shape

The error in the title occurs while doing dynamic plotting with matplotlib. Source:

```
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import numpy as np
import csv

f = open("C:/Users/jyz_1/Desktop/datamodi.csv", "r")
y_list = []
t0 = eval(input("时间间隔:"))
POINTS = 10 * t0 + 1
y_list = [0] * POINTS
indx = 0

fig, ax = plt.subplots()
ax.set_ylim([0, 40])
ax.set_xlim([0, POINTS])
ax.set_autoscale_on(False)
ax.set_xticks(range(0, 10 * t0, t0))
ax.set_yticks(range(0, 40, 5))
ax.grid(True)
line_y, = ax.plot(range(POINTS), y_list, label='y output', color='cornflowerblue')
ax.legend(loc='upper center', ncol=4, prop=font_manager.FontProperties(size=10))

def y_output(ax):
    global indx, y_list, line_y
    if indx == 20:
        indx = 0
    indx += 1
    f = open("C:/Users/jyz_1/Desktop/datamodi.csv", "r")
    y_list = []
    reader = csv.reader(f)
    for low in reader:
        for y in low:
            y_list = np.append(y_list, eval(y))
    line_y.set_ydata(y_list)
    ax.draw_artist(line_y)
    ax.figure.canvas.draw()

timer = fig.canvas.new_timer(interval=100)
timer.add_callback(y_output, ax)
timer.start()
plt.show()
```

Error:

> Exception in Tkinter callback
> Traceback (most recent call last):
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\tkinter\__init__.py", line 1883, in __call__
>     return self.func(*args)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\tkinter\__init__.py", line 804, in callit
>     func(*args)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\matplotlib\backends\_backend_tk.py", line 114, in _on_timer
>     TimerBase._on_timer(self)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\matplotlib\backend_bases.py", line 1187, in _on_timer
>     ret = func(*args, **kwargs)
>   File "C:\Users\jyz_1\Desktop\sensor_ver1.py", line 32, in y_output
>     ax.draw_artist(line_y)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\matplotlib\axes\_base.py", line 2644, in draw_artist
>     a.draw(self.figure._cachedRenderer)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
>     return draw(artist, renderer, *args, **kwargs)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\matplotlib\lines.py", line 759, in draw
>     self.recache()
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\matplotlib\lines.py", line 679, in recache
>     self._xy = np.column_stack(np.broadcast_arrays(x, y)).astype(float)
>   File "<__array_function__ internals>", line 5, in broadcast_arrays
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\numpy\lib\stride_tricks.py", line 264, in broadcast_arrays
>     shape = _broadcast_shape(*args)
>   File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\numpy\lib\stride_tricks.py", line 191, in _broadcast_shape
>     b = np.broadcast(*args[:32])
> ValueError: shape mismatch: objects cannot be broadcast to a single shape
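The traceback bottoms out in `Line2D.recache`: `set_ydata` was given an array whose length differs from the line's x data (`range(POINTS)`), so the two cannot be broadcast together. Whenever the CSV yields a different number of samples than `POINTS`, exactly this error appears. A minimal numpy-only sketch of the mismatch and one way to repair it (padding or truncating to the fixed length is an assumption about the intended behavior):

```python
import numpy as np

POINTS = 11                    # fixed length of the line's x data
x = np.arange(POINTS)

y = np.array([1.0, 2.0, 3.0])  # the CSV produced a different number of samples

# This mirrors what Line2D.recache does internally; unequal lengths
# cannot be broadcast into one (N, 2) array of points.
try:
    np.column_stack(np.broadcast_arrays(x, y))
except ValueError as e:
    print(e)  # shape mismatch: objects cannot be broadcast ...

# Pad (or truncate) each new batch of samples to the line's fixed length
y_fixed = np.zeros(POINTS)
y_fixed[:min(POINTS, len(y))] = y[:POINTS]
print(np.column_stack(np.broadcast_arrays(x, y_fixed)).shape)  # (11, 2)
```

In the question's `y_output`, applying the same idea before `line_y.set_ydata(y_list)` keeps the y data the same length as the x data on every timer tick.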
