Python 3 error, please help: list index out of range

The code:
```
c_d = dict(zip(candidate, dist))
cd_sorted = sorted(c_d.items(), key=lambda d: d[1])
print("\n The person is: ", cd_sorted[0][0])
dlib.hit_enter_to_continue()
```

The error when running it:

```
Traceback (most recent call last):
  File "girl-face-rec.py", line 66, in
    print ("\n The person is: " ,cd_sorted[1][5])
IndexError: list index out of range
```

Can anyone help me find where the problem is?

12 answers

The error message is list index out of range, meaning the list index is out of bounds. Check what indices the list you defined actually has; it is an easy problem to fix. Please accept if this helps, thanks.

The access to cd_sorted goes out of bounds. Check what data it actually holds.

[1][5] going out of bounds means your array data is not long enough. Print the data and you will see.

list index out of range means the list index exceeded its range, and the list you are indexing is cd_sorted. Check whether it is two-dimensional and whether it is empty.

list index out of range means the list index is out of bounds. For example, if your list has 5 elements, list[6] raises this error.

The list index is out of range.

list index out of range means the list index is out of bounds. For example, if your list has 5 elements, list[6] raises this error.

qq_28632639
replying to wanqian_2019: list[5] would also raise the error, since Python indexing starts at 0
(reply, 9 months ago)

The list index is out of range; most likely the second dimension has no element at index 5.

The list index is out of range. You returned a list; it is probably not two-dimensional.

Are you sure cd_sorted is two-dimensional?
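To illustrate the answers above: `sorted(c_d.items(), ...)` returns a list of `(key, value)` 2-tuples, so the only valid second index is 0 or 1, and the first index must stay below `len(cd_sorted)`. A minimal sketch with made-up candidate data (the names and distances are placeholders, not from the question):

```python
# Made-up data standing in for the asker's `candidate` and `dist` lists.
candidate = ["alice", "bob", "carol"]
dist = [0.62, 0.31, 0.87]

c_d = dict(zip(candidate, dist))
cd_sorted = sorted(c_d.items(), key=lambda d: d[1])

# Each element is a (name, distance) 2-tuple, so an index like [1][5]
# can never work; guard the list itself before indexing.
if cd_sorted:
    print("\n The person is:", cd_sorted[0][0])  # closest match
else:
    print("No candidates found")
```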

Other related questions
IndexError: list index out of range
```
#coding:utf-8 -*-
#! /user/bin/env/python
# python vectorsum.py 1000
import sys
import numpy as np
from datetime import datetime

"""input format: python vectorsum.py n
n: the expected size for the vector"""

def pythonSum(n):
    a = range(n)
    b = range(n)
    c = []
    for i in range(n):
        a[i] = i ** 2
        b[i] = i ** 3
        c.append(a[i] + b[i])
    return c

def numpysum(n):
    a = np.arange(n) ** 2
    b = np.arange(n) ** 3
    c = a + b
    return c

size = int(sys.argv[1])
start = datetime.now()
c = pythonSum(size)
delta = datetime.now() - start
print("The last 2 elements of the sum", c[-2:])
print("pythonSum elapsed time in microseconds", delta.microseconds)
start = datetime.now()
c = numpysum(size)
delta = datetime.now() - start
print("The last 2 elements of the sum", c[-2:])
print("numpysum elapsed time in microseconds", delta.microseconds)
```

Running it gives:

```
Traceback (most recent call last):
  File "D:\workspace\PythonLearn1\src\NumpyLearn\__init__.py", line 26, in <module>
    size = int(sys.argv[1])
IndexError: list index out of range
```

I am using Python under Eclipse. The index is out of range; where am I supposed to enter `python vectorsum.py n`? Could someone help me with this?
list index out of range error in Python
![image](https://img-ask.csdn.net/upload/201506/03/1433321842_786610.png) The goal is: whenever `falsh` contains equal elements, the corresponding elements of `arrow` are added together, and likewise the corresponding elements of `core`. ![image](https://img-ask.csdn.net/upload/201506/03/1433322077_791804.png) This is my program, but it raises an error: ![image](https://img-ask.csdn.net/upload/201506/03/1433322143_492805.png) I don't know how to solve it; I'm a beginner, thanks everyone.
A small sequence question from a Python beginner, please advise
The code:

```
# -*- coding: UTF-8 -*-
# Print the corresponding date for a given year, month, and day
mouths = [
    "January" "February" "March" "April"
    "May" "June" "July" "August"
    "September" "October" "November" "December"
]
endings = ["st", "nd", "rd"] + 17*["th"] \
        + ["st", "nd", "rd"] + 7*["th"] \
        + ["st"]
year = raw_input("Year: ")
mouth = raw_input("Mouth(1-12): ")
day = raw_input("Day(1-31): ")
mouth_number = int(mouth)
day_number = int(day)
mouth_name = mouths[mouth_number-1]
ordinal = day + endings[day_number-1]
print mouth_name + ' ' + ordinal + ',' + year
```

Personally I feel the code should be fine, but when running the program:

```
============= RESTART: C:/Python27/My Python Programs/suoyin.py =============
Year: 1997
Mouth(1-12): 3
Day(1-31): 5
Traceback (most recent call last):
  File "C:/Python27/My Python Programs/suoyin.py", line 34, in <module>
    mouth_name = mouths[mouth_number-1]
IndexError: list index out of range
```

I keep hitting this problem; please advise.
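For what it's worth, the list as posted has no commas between the month names, and Python concatenates adjacent string literals at compile time, so the list collapses to a single element and `mouths[2]` is already out of range. A minimal demonstration:

```python
# Adjacent string literals without commas are concatenated, so this
# list has one element, not three.
months = [
    "January" "February" "March"
]
print(len(months))   # 1
print(months[0])     # 'JanuaryFebruaryMarch'
```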
After scraping a page with BS and extracting tag attributes, I get AttributeError: 'NoneType' object has no attribute 'text'. With print I can extract the text successfully, but it fails inside the loop.
Extracting tag attributes after scraping with BS keeps failing with AttributeError: 'NoneType' object has no attribute 'text'. I tried print before the loop and it extracted the text fine; I don't know why it doesn't work inside the loop. Please help!
```
#s = content[0].find('h5', class_="result-sub-header")
#print(s.text.strip())
# Iterate over content and pull out the results.
# find_all returns a list, so before calling find_all again you must pick an element, e.g. [0].
for i in range(len(content)):
    # extract the title
    t = content[i].find('a', class_="title")
    title = t.text.strip()
    # extract the link
    url = 'https://www.forrester.com' + t['href']
    # extract the summary
    s = content[i].find('h5', class_="result-sub-header")
    summary = s.text.strip()
    # put the extracted fields into the list `paper`
    paper = [title, 'Cloud Migration', url, summary]
    # append each paper to paperlist
    paperlist.append(paper)
```
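`find` returns `None` whenever a result block lacks the requested tag, and calling `.text` on that `None` is exactly this AttributeError. A sketch of the guard pattern; a plain dict stands in for a parsed result so the snippet runs without bs4, and the keys and strings are made up:

```python
# Stand-in for a list of parsed result blocks: the middle one has no
# 'h5' sub-header, just like a page item that lacks the tag.
results = [{"h5": "Summary A"}, {}, {"h5": "Summary C"}]

summaries = []
for item in results:
    s = item.get("h5")          # stand-in for content[i].find('h5', ...)
    if s is None:               # guard before touching the value
        summaries.append("")    # keep list lengths aligned
    else:
        summaries.append(s.strip())
print(summaries)
```

With BeautifulSoup the same shape applies: check `if s is not None:` before using `s.text`.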
Implementing file operations with a Python script, please help
```
#!/usr/bin/python
# -*- coding: UTF-8 -*-
##
# Requirements:
# 1. By checking headers, move every csv whose header string is in the wanted set into a given folder
# 2. Merge the moved csvs, using only the first csv's header (the header spans several rows here)
# 3. My code... doesn't work at all

import pandas as pd
import os
import csv
import shutil
import glob

def search_file(path):
    headers = ['Test9', 'Test4', 'Development13', 'Development3']
    queue = []
    queue.append(path)
    while len(queue) > 0:
        tmp = queue.pop(0)
        if (os.path.isdir(tmp)):  # if it is folder
            for item in os.listdir(tmp):
                queue.append(os.path.join(tmp, item))  # add the path into queue
        elif (os.path.isfile(tmp)):  # if it is file
            csv_reader = csv.reader(open(tmp))
            for row in csv_reader:
                header = row
                if header in headers:
                    shutil.move('/Users/Downloads/Test')  # move file

search_file('/Users/Downloads')  # if can work

# file setting
cwd = os.getcwd()
read_path = 'Test'
save_path = cwd
save_name = 'Test1.csv'
os.chdir(read_path)
# Add the file into list
csv_name_list = os.listdir()
# get first file
df = pd.read_csv(csv_name_list[0])
df.to_csv(cwd + '\\' + save_path + '\\' + save_name, encoding="utf_8", index=False)
# Done
n = glob.glob(pathname='*.csv')
for i in range(n):
    df = pd.read_csv(csv_name_list[i])
    df.to_csv(cwd + '\\' + save_path + '\\' + save_name, encoding="utf_8", index=False, header=False, mode='a+')
```
After writing an Excel file with openpyxl, opening it reports "Removed records: comments (comments) from /xl/comments/comment1.xml"?
The code of CompareExcel.py:
```
from openpyxl.styles import Font, Alignment, Side, Border, Color, colors, PatternFill
from MMSpackagefile.codefile.ExcelData import ExcelData


class CompareExcel(object):

    def __init__(self):
        pass

    def settruecelltype(self, sheet, row, col):
        cell = sheet.cell(row, col)
        font = Font(size=12, bold=False, name='Arial', color=colors.BLACK)
        thin = Side(border_style='thin', color='0000FF')
        border = Border(left=thin, right=thin, top=thin, bottom=thin)
        cell.font = font
        cell.border = border

    def setfalsecelltype(self, sheet, row, col):
        cell = sheet.cell(row, col)
        font = Font(size=12, bold=False, name='Arial', color=colors.RED)
        fill = PatternFill(start_color=colors.YELLOW, end_color=colors.YELLOW, fill_type='solid')
        thin = Side(border_style='thin', color='0000FF')
        border = Border(left=thin, right=thin, top=thin, bottom=thin)
        cell.fill = fill
        cell.font = font
        cell.border = border

    # expectlist, actlist: the expected and actual returned lists;
    # data: the case data filtered from the case sheet;
    # full: all deduplicated field items (compared or not, i.e. the title fields of the result-detail sheet); fulladdfield: all deduplicated added field items;
    # categoryfield: the compare-field categories taken from the case;
    # sheetfield: the list from the compare-field sheet of the config Excel; list1 holds all sub-field lists, list2 the added-field lists, list3 two dicts keyed IsCompare and IsAdd
    def comp(self, expectlist, actlist, data, full, fulladdfield, categoryfield, sheetfield, sheet_result):
        comparefield = []  # compare items derived from the case's compare-field categories
        addfield = []      # added compare items derived from the case's compare-field categories
        rownum = sheet_result.max_row
        fullli = ExcelData().sort_list(full)
        fullfield = fullli[:-1]
        if len(fulladdfield) != 0:
            fullfield = [e for e in fullfield if e not in fulladdfield]
        for j in categoryfield:
            for m in sheetfield:
                if m.get('checkfield') == 'IsCompare':
                    comparefield.extend(list(m.get(j).split(',')))
                else:
                    if not m.get(j) != m.get(j):
                        addfield.extend(list(m.get(j).split(',')))
        if len(expectlist) == len(actlist) and len(expectlist) != 0:
            sumlogicvalue = CompareExcel().setresultsheet(expectlist, actlist, fullfield, fulladdfield, comparefield, addfield, data, sheet_result, rownum)
            if len(expectlist) != 1:
                sheet_result.merge_cells(start_row=rownum + 1, start_column=1, end_row=rownum + len(expectlist), end_column=1)
            # return sumlogicvalue
        elif len(expectlist) == len(actlist) and len(expectlist) == 0:
            rowvalue = []
            rowvalue.append(data.get('case_id'))
            sumlogicvalue = False
            for q in range(2, 2*len(fullfield)+len(fulladdfield)+2):
                rowvalue.append('')
            rowvalue.append(sumlogicvalue)
            sheet_result.append(rowvalue)
            CompareExcel().settruecelltype(sheet_result, rownum+1, 1)
            for w in range(2, 2*len(fullfield)+len(fulladdfield)+3):
                CompareExcel().setfalsecelltype(sheet_result, rownum+1, w)
            # return sumlogicvalue
        elif len(expectlist) > len(actlist):
            commonexpectlist = []
            commonactlist = []
            surplusexpectlist = []
            for i in expectlist:
                sign = 0
                for j in actlist:
                    if i['data']['id'] == j['data']['id']:
                        sign = 1
                        commonexpectlist.append(i)
                        commonactlist.append(j)
                if sign == 0:
                    surplusexpectlist.append(i)
            CompareExcel().setresultsheet(commonexpectlist, commonactlist, fullfield, fulladdfield, comparefield, addfield, data, sheet_result, rownum)
            si = 'ex'
            CompareExcel().setsurlistvalue(si, surplusexpectlist, fullfield, fulladdfield, sheet_result, rownum, commonexpectlist)
            sheet_result.merge_cells(start_row=rownum + 1, start_column=1, end_row=rownum + len(expectlist), end_column=1)
            sumlogicvalue = False
            # return sumlogicvalue
        else:
            commonexpectlist = []
            commonactlist = []
            surplusexpectlist = []
            for i in actlist:
                sign = 0
                for j in expectlist:
                    if i['data']['id'] == j['data']['id']:
                        sign = 1
                        commonactlist.append(i)
                        commonexpectlist.append(j)
                if sign == 0:
                    surplusexpectlist.append(i)
            CompareExcel().setresultsheet(commonexpectlist, commonactlist, fullfield, fulladdfield, comparefield, addfield, data, sheet_result, rownum)
            si = 'ac'
            CompareExcel().setsurlistvalue(si, surplusexpectlist, fullfield, fulladdfield, sheet_result, rownum, commonexpectlist)
            sheet_result.merge_cells(start_row=rownum + 1, start_column=1, end_row=rownum + len(actlist), end_column=1)
            sumlogicvalue = False
        return sumlogicvalue

    def setresultsheet(self, expectlist, actlist, fullfield, fulladdfield, comparefield, addfield, data, sheet_result, rownum):
        for i in range(len(expectlist)):
            rowvalue = []
            expectreturn = []
            actreturn = []
            addreturn = []
            logicvalue = True
            columnnum = []
            for j in range(len(fullfield)):
                if fullfield[j] in comparefield:
                    # if fullfield[j] == 'name':
                    #     actlist[i]['data'][fullfield[j]] = '浦发银'
                    if fullfield[j] in list(expectlist[i]['data'].keys()):
                        expectreturn.append(expectlist[i]['data'][fullfield[j]])
                    else:
                        expectreturn.append('')
                    if fullfield[j] in list(actlist[i]['data'].keys()):
                        actreturn.append(actlist[i]['data'][fullfield[j]])
                    else:
                        actreturn.append('')
                    if expectlist[i].get('data').get(fullfield[j]) != actlist[i].get('data').get(fullfield[j]):
                        logicvalue = False
                        columnnum.append(2+j)
                        columnnum.append(2+j+len(fullfield))
                        columnnum.append(2*len(fullfield)+len(fulladdfield)+2)
                else:
                    expectreturn.append('')
                    actreturn.append('')
            if len(fulladdfield) != 0:
                logicvalue = False
                columnnum.append(2 * len(fullfield) + len(fulladdfield) + 2)
                for m in range(len(fulladdfield)):
                    if fulladdfield[m] in addfield:
                        columnnum.append(2 + 2 * len(fullfield) + m)
                        if fulladdfield[m] in list(actlist[i].get('data').keys()):
                            addreturn.append(actlist[i].get('data').get(fulladdfield[m]))
                        else:
                            addreturn.append('')
                    else:
                        addreturn.append('')
            if i == 0:
                rowvalue.append(data.get('case_id'))
            else:
                rowvalue.append('')
            rowvalue.extend(expectreturn)
            rowvalue.extend(actreturn)
            if len(addreturn) != 0:
                if len(addreturn) == 1:
                    rowvalue.append(addreturn)
                else:
                    rowvalue.extend(addreturn)
            rowvalue.append(logicvalue)
            sheet_result.append(rowvalue)
            for o in range(1, 2*len(fullfield)+len(fulladdfield)+3):
                if o in columnnum:
                    CompareExcel().setfalsecelltype(sheet_result, rownum + 1 + i, o)
                else:
                    CompareExcel().settruecelltype(sheet_result, rownum + 1 + i, o)
        return logicvalue

    def setsurlistvalue(self, sig, surplusexpectlist, fullfield, fulladdfield, sheet_result, rownum, commonexpectlist):
        for k in surplusexpectlist:
            surrowvalue = ['']
            expectlis = []
            actlis = []
            for l in fullfield:
                sign = 0
                if l in list(k['data'].keys()):
                    sign = 1
                    expectlis.append(k['data'][l])
                if sign == 0:
                    expectlis.append('')
                actlis.append('')
            if sig == 'ex':
                surrowvalue.extend(expectlis)
                surrowvalue.extend(actlis)
            else:
                surrowvalue.extend(actlis)
                surrowvalue.extend(expectlis)
            if len(fulladdfield) != 0:
                addlis = []
                for m in fulladdfield:
                    if m in list(k['data'].keys()):
                        addlis.append(k['data'][m])
                    addlis.append('')
                surrowvalue.extend(addlis)
            surrowvalue.append('False')
            sheet_result.append(surrowvalue)
        for t in range(len(surplusexpectlist)):
            for n in range(2, len(surrowvalue) + 1):
                CompareExcel().setfalsecelltype(sheet_result, rownum + 1 + len(commonexpectlist) + t, n)
```
The content of test_stock_info.py:
```
import pytest
import os
import time
import openpyxl as op
from MMSpackagefile.codefile.ExcelData import ExcelData
from MMSpackagefile.codefile.CompareExcel import CompareExcel

path1 = 'E:\\MMS\\myfirstproject\\MMSpackagefile\\configdatasource\\stock_info_option12.xlsx'
path2 = 'E:\\MMS\\myfirstproject\\MMSpackagefile\\configdatasource\\mms_cofigfile.xlsx'
executecase = ExcelData().getexecutecase(path1, 1, 0)  # returns the list of executable cases
configcomparefield = ExcelData().getexcelfield(path1, 0, 0)  # returns the compare-field list of the first sheet of stock_info_option12.xlsx
configcomparefield1 = ExcelData().getexcelfield(path1, 0, 0)
dirpath = os.path.abspath('..')
version_list = ExcelData().getversion(path2, 0, 1)  # returns the version info of the config file
localtime = time.strftime('%Y%m%d%H%M%S', time.localtime())
path = dirpath + '\\Result\\' + version_list[0] + '_' + version_list[1] + '_' + localtime
os.makedirs(path)
excel = op.load_workbook(path2)
excel.save(path + '\\mms_cofigfile_' + localtime + '.xlsx')
exce = op.load_workbook(path1)
comparefield, addfield, sheetfield = ExcelData().getfullfielditem(configcomparefield1)
excel, sheet = ExcelData().setresulttitle(comparefield, addfield, exce)


@pytest.fixture(scope='module')
def stock_info(request):
    value = {}
    value['case_id'] = request.param['case_id']
    value['title'] = request.param['title']
    value['ticker'] = request.param['ticker']
    value['exchange'] = request.param['exchange']
    value['asset_class'] = request.param['asset_class']
    value['IsCompare'] = request.param['IsCompare']
    value['IsAdd'] = request.param['IsAdd']
    return value


@pytest.mark.parametrize('stock_info', executecase, indirect=True)
def test_stock_info(stock_info):
    configdata = ExcelData().getversion(path2, 0, 1)
    oldreturnlist = ExcelData().getrequestdata(configdata[2], stock_info, 'stock-info')
    newreturnlist = ExcelData().getrequestdata(configdata[3], stock_info, 'stock-info')
    categoryfield = stock_info.get('IsCompare').split(',')
    isbool = CompareExcel().comp(oldreturnlist, newreturnlist, stock_info, comparefield, addfield, categoryfield, configcomparefield, sheet)
    ExcelData().setcolumnautowidth(sheet)
    excel.save(path + '\\stock_info_option12_' + localtime + '.xlsx')
    assert isbool
```
I have two questions now. First, in CompareExcel.py the last line of setresultsheet highlights logicvalue with the hint "This inspection warns about local variables referenced before assignment." I never used the variable logicvalue outside the function, so why this hint? Second, when I open the Excel file written with openpyxl I get the prompt shown here: ![image](https://img-ask.csdn.net/upload/201912/17/1576586295_401469.png) I'm not sure what causes this prompt. It happened before, and back then it turned out to be caused by saving twice; saving only once fixed it. Now it is back and I haven't found the cause in days. Could someone take a look?
Python crawler results come back empty TT
A beginner here. I modified code found online to crawl Baidu News titles and summaries, but the result is empty and I don't know why. Running on the Mac's built-in Python 2:
```
from urllib import urlopen
import csv
import re
from bs4 import BeautifulSoup
import sys
reload(sys)
sys.setdefaultencoding("utf-8")

for k in range(1, 36):
    url = "http://news.baidu.com/ns?word=低保&pn=%s&cl=2&ct=1&tn=news&rn=20&ie=utf-8&bt=0&et=0" % ((k-1)*20)
    csvfile = file("Dibao.csv", "ab+")
    writer = csv.writer(csvfile)
    content = urlopen(url).read()
    soup = BeautifulSoup(content, "lxml")
    list0 = []
    list1 = []
    list2 = []
    list3 = []
    for i in range(1, 20):
        hotNews = soup.find_all("div", {"class", "result"})[i]
        a1 = hotNews.find(name="a", attrs={"target": re.compile("_blank")})
        list0.append(a1.text)
        a2 = hotNews.find(name="p", attrs={"class": re.compile("c-author")})
        t1 = a2.text.split()[0]
        list1.append(t1)
        t2 = a2.text.split()[1]
        list2.append(t2)
        if t2.find(u"年") == 4:
            t3 = a2.text.split()[2]
            list3.append(t3)
        else:
            list3.append(" ")
    # write the data to csv
    data = []
    for i in range(0, 20):
        data.append((list0[i], list1[i], list2[i], list3[i]))
    writer.writerows(data)
    csvfile.close()
    print "第" + str(k) + "页完成"
```
The error:
```
Traceback (most recent call last):
  File "<stdin>", line 12, in <module>
IndexError: list index out of range
```
I don't understand what "index out of range" means here; there are 37 pages of news with 20 items each. Hope someone can take a look, many thanks!
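One way to avoid this class of crash, whatever the page returns: loop over the list `find_all` actually produced instead of indexing a hard-coded `range(1, 20)`. A stand-in sketch with no network access (the `hot_news` list is invented and plays the role of `soup.find_all(...)`):

```python
# Stand-in for soup.find_all("div", {"class", "result"}): the real page
# may return fewer than 20 result divs, which is what makes a fixed
# range(1, 20) index past the end of the list.
hot_news = ["result div 1", "result div 2", "result div 3"]

titles = []
for i, div in enumerate(hot_news):  # never runs past what was found
    titles.append(div)
print(len(titles))
```

Note the original also skips element 0 and then reads `list0[i]` for `i` in `range(0, 20)` even though at most 19 items were collected, which is a second out-of-range access.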
Why does overriding run with threading for Python multithreading cause repeated calls?
[Problem description]
I use threading for multithreading by inheriting and overriding the run method. When the threads are invoked, run gets re-entered, and the effect grows with the number of threads. Could someone take a look?

[Code]
```
class CMyThread(threading.Thread):
    m_print = None
    m_result = None
    m_func = None
    m_args = None
    m_lock = None

    def __init__(self, func, *args):
        threading.Thread.__init__(self)
        self.m_print = CPrintMgt()
        self.m_func = func
        self.m_args = args
        print 'CMutilProcess[{}, {}, {}, {}]'.format(self, func, args, self.getName())

    def run(self):
        print 'CMutilProcess_run[{}, {}, {}, {}, {}]'.format(self, self.getName(), self.m_args, id(self), *self.m_args)
        self.m_result = self.m_func(*self.m_args)


def multhread_process_handle_func(func, args):
    bResult = False
    output_result = []
    try:
        threads = []
        threadNum = len(args)
        # create the threads
        for i in range(threadNum):
            t = CMyThread(func, *tuple(args[i]))
            threads.append(t)
        # start the threads
        for i in range(len(threads)):
            threads[i].start()
        # run the threads to completion
        for i in range(len(threads)):
            threads[i].join()
        # collect the results
        bResult = True
        # print_obj.fnPrintInfo('--------------------------------------------')
        print('--------------------------------------------')
        for i in range(len(threads)):
            if isinstance(threads[i].m_result, bool):
                bResultTemp = threads[i].m_result
                print('multhread[thread:{}]handle result:{}'.format(i, threads[i].m_result, bResultTemp))
                bResult &= bResultTemp
                output_result.append(None)
            elif isinstance(threads[i].m_result, list) or isinstance(threads[i].m_result, tuple):
                bResultTemp = threads[i].m_result[0]
                print('multhread[thread:{}]handle result:{}'.format(i, threads[i].m_result, bResultTemp))
                bResult &= bResultTemp
                output_result.append(list(threads[i].m_result[1:]))
        print('multhread[threadnum:{}]all handle result:{}'.format(len(threads), bResult))
        print('--------------------------------------------')
        # print_obj.fnPrintInfo('--------------------------------------------')
        # print_obj.fnPrintInfo('多线程数量({})执行结果:{}'.format(threadNum, bResult))
        # print_obj.fnPrintInfo('--------------------------------------------')
    except Exception as e:
        bResult = False
        # print_obj.fnPrintException(e)
    finally:
        return bResult, output_result
```
![image](https://img-ask.csdn.net/upload/201911/18/1574044674_742482.png)
TypeError: 'unicode' object is not callable in Python 2.7
I just started learning Python. The threaded functions I wrote with def didn't work, so I imitated someone else's single-threaded code and bolted multithreading straight onto it, and got the error below. It's a crawler; the threads do start, but unicode cannot be called. Please help!
```
# -*- coding: utf-8 -*
import sys
reload(sys)
sys.setdefaultencoding('utf8')
import requests
import re
import time
import threading
import sys
import Queue as queue
import sys
import datetime

live = open('未爬.txt', 'w')
die = open('已爬.txt', 'w')
input_queue = queue.Queue()
list = raw_input("--> Enter Lists : ")
thread = input(" -> Thread : ")
link = “************”
head = {'User-agent': 'Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30'}
s = requests.session()
g = s.get(link, headers=head)
list = open(list, 'r')
print('')
print("-"*50)
print("-"*50)
while True:
    网页导入 = list.readline().replace('\n', '')
    if not www:
        continue
    bacot = email.strip().split(':')
    xxx = {''************''}
    cek = s.post(link, headers=head, data=xxx).text
    if "************" in cek:
        print("|未爬|----->" + 网页 + "")
        live.write(网页 + "\n")
    else:
        print("|已爬 | -----> " + 网页 + " ")
        die.write(网页 + "\n")
for x in range(int(thread)):
    t = threading.Thread(target=cek)
    t.setDaemon(True)
    t.start()
print('')
print('-------------------------------------------------')
print('')
```
What does "Solution() takes no arguments" mean?
All the answers I found, on this site and elsewhere, are about mistakes in `__init__()`. But I didn't use `__init__()` at all. Could someone tell me where I went wrong??? This is LeetCode problem 45. I took someone else's answer, which runs fine on LeetCode, but I can't get it to run in my Jupyter notebook (I added the last two lines).
```python
from typing import List

class Solution:
    def jump(self, nums: List[int]) -> int:
        counter = 0        # number of jumps
        curr_end = 0       # end of the current jump's reach
        curr_farthest = 0  # index of the farthest reachable position
        for i in range(len(nums) - 1):
            curr_farthest = max(curr_farthest, i + nums[i])
            if i == curr_end:
                counter += 1
                curr_end = curr_farthest
        return counter

a = Solution([2,3,1,1,4])
print(a.jump())
```
The error message:
```
TypeError                                 Traceback (most recent call last)
<ipython-input-7-6e1137fe93e9> in <module>
     13         return counter
     14
---> 15 a = Solution([2,3,1,1,4])
     16 print(a.jump())

TypeError: Solution() takes no arguments
```
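Since the class defines no `__init__`, Python's default constructor accepts no arguments; the list belongs to `jump` (on LeetCode the judge calls `Solution().jump(nums)` for you). A sketch of the corrected call:

```python
from typing import List

class Solution:
    def jump(self, nums: List[int]) -> int:
        counter = 0        # number of jumps
        curr_end = 0       # end of the current jump's reach
        curr_farthest = 0  # farthest reachable index so far
        for i in range(len(nums) - 1):
            curr_farthest = max(curr_farthest, i + nums[i])
            if i == curr_end:
                counter += 1
                curr_end = curr_farthest
        return counter

# The argument goes to jump(), not to the constructor:
a = Solution()
print(a.jump([2, 3, 1, 1, 4]))  # 2
```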
Python crawler, urgent, please help, many thanks
I'm using a Python crawler to scrape a page and extract specific data. Why are most values null? And every time I run it, more data gets added. Is it a code problem or a page problem? Please advise. The code:
```
def get_content(self, html):
    # get the content of one page
    div_list = html.xpath("//div[contains(@class,'listtyle')]")
    item_list = []
    for div in div_list:
        for b in range(1, 19):
            food_img = div.xpath("./div[@class='listtyle1'][b]/a[@class='big']/img[@class='img']/@src")
            food_img = food_img[0] if len(food_img) > 0 else None
            food_name = div.xpath("./div[@class='listtyle1'][b]/a[@class='big']/div[@class='i_w']/div[@class='i']/div[@class='c1']/strong/text()")
            food_name = food_name[0] if len(food_name) > 0 else None
            food_effect = div.xpath("./div[@class='listtyle1'][b]/a[@class='big']/strong[@class='gx']/span/text()")
            food_effect = food_effect[0] if len(food_effect) > 0 else None
            food_time = div.xpath("./div[@class='listtyle1'][b]/a[@class='big']/div[@class='i_w']/div[@class='i']/div[@class='c2']/ul/li[@class='li1']/text()")
            food_time = food_time[0] if len(food_time) > 0 else None
            food_taste = div.xpath("./div[@class='listtyle1'][b]/a[@class='big']/div[@class='i_w']/div[@class='i']/div[@class='c2']/ul/li[@class='li2']/text()")
            food_taste = food_taste[0] if len(food_taste) > 0 else None
            food_commentnum_likenum = div.xpath("./div[@class='listtyle1'][b]/a[@class='big']/div[@class='i_w']/div[@class='i']/div[@class='c1']/span/text()")
            food_commentnum_likenum = food_commentnum_likenum[0] if len(food_commentnum_likenum) > 0 else None
            item = dict(
                food_img1=food_img,
                food_name1=food_name,
                food_effect1=food_effect,
                food_time1=food_time,
                food_taste1=food_taste,
                food_commentnum_likenum1=food_commentnum_likenum,
            )
            item_list.append(item)
    return item_list
```
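One likely cause of the nulls, stated as an assumption: inside the XPath string, `[b]` is a literal XPath predicate, not the Python loop variable `b`, so every iteration evaluates the same (usually empty) selection. Building the predicate with string formatting makes each pass select a different child:

```python
# In an XPath string, [b] does not substitute the Python variable `b`.
# Interpolate the positional predicate explicitly instead:
for b in range(1, 4):
    xpath = "./div[@class='listtyle1'][%d]/a[@class='big']" % b
    print(xpath)  # a different positional predicate each iteration
```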
A Python function with an indeterminate number of parameters: passing in a list loops only once. Why?
```
def buttown(*out):
    for i in out:
        print('greet ' + str(i))

buttown(list(range(1, 10)))
```
Please advise: why does this loop print only once instead of continuing, and why is the output `greet [1, 2, 3, 4, 5, 6, 7, 8, 9]`? Logically the code looks right to me: pass the list built from 1 to 10 into the function, then loop over the list `out` with the variable `i`, printing the value once per pass. Shouldn't the result look like the lines below? `*out` means the function takes an unknown number of parameters; when I pass in one list, how many actual arguments does the function receive? I'm new to Python and there is a lot I don't understand yet, many thanks!
```
greet 1
greet 2
greet 3
greet 4
greet 5
greet 6
greet 7
greet 8
greet 9
```
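With `def buttown(*out)`, the call `buttown(list(range(1, 10)))` binds `out` to a 1-tuple whose single element is the whole list, so the loop body runs exactly once. Unpacking the list at the call site passes nine separate arguments:

```python
def buttown(*out):
    # `out` is a tuple of however many positional arguments were passed
    for i in out:
        print('greet ' + str(i))

buttown(list(range(1, 10)))   # one argument (the whole list): one line
buttown(*list(range(1, 10)))  # unpacked into nine arguments: nine lines
```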
TensorFlow model inference: two lists processed serially, but the output keeps looping over the first list. Beginner asking for help.
```
from __future__ import print_function
import argparse
from datetime import datetime
import os
import sys
import time
import scipy.misc
import scipy.io as sio
import cv2
from glob import glob
import multiprocessing

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import tensorflow as tf
import numpy as np
from PIL import Image
from utils import *

N_CLASSES = 20
DATA_DIR = './datasets/CIHP'
LIST_PATH = './datasets/CIHP/list/val2.txt'
DATA_ID_LIST = './datasets/CIHP/list/val_id2.txt'
with open(DATA_ID_LIST, 'r') as f:
    NUM_STEPS = len(f.readlines())
RESTORE_FROM = './checkpoint/CIHP_pgn'

# Load reader.
with tf.name_scope("create_inputs") as scp1:
    reader = ImageReader(DATA_DIR, LIST_PATH, DATA_ID_LIST, None, False, False, False, None)
    image, label, edge_gt = reader.image, reader.label, reader.edge
    image_rev = tf.reverse(image, tf.stack([1]))
    image_list = reader.image_list

image_batch = tf.stack([image, image_rev])
label_batch = tf.expand_dims(label, dim=0)  # Add one batch dimension.
edge_gt_batch = tf.expand_dims(edge_gt, dim=0)
h_orig, w_orig = tf.to_float(tf.shape(image_batch)[1]), tf.to_float(tf.shape(image_batch)[2])
image_batch050 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.50)), tf.to_int32(tf.multiply(w_orig, 0.50))]))
image_batch075 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))]))
image_batch125 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.25)), tf.to_int32(tf.multiply(w_orig, 1.25))]))
image_batch150 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.50)), tf.to_int32(tf.multiply(w_orig, 1.50))]))
image_batch175 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.75)), tf.to_int32(tf.multiply(w_orig, 1.75))]))
```
Building the network:
```
# Create network.
with tf.variable_scope('', reuse=False) as scope:
    net_100 = PGNModel({'data': image_batch}, is_training=False, n_classes=N_CLASSES)
with tf.variable_scope('', reuse=True):
    net_050 = PGNModel({'data': image_batch050}, is_training=False, n_classes=N_CLASSES)
with tf.variable_scope('', reuse=True):
    net_075 = PGNModel({'data': image_batch075}, is_training=False, n_classes=N_CLASSES)
with tf.variable_scope('', reuse=True):
    net_125 = PGNModel({'data': image_batch125}, is_training=False, n_classes=N_CLASSES)
with tf.variable_scope('', reuse=True):
    net_150 = PGNModel({'data': image_batch150}, is_training=False, n_classes=N_CLASSES)
with tf.variable_scope('', reuse=True):
    net_175 = PGNModel({'data': image_batch175}, is_training=False, n_classes=N_CLASSES)

# parsing net
parsing_out1_050 = net_050.layers['parsing_fc']
parsing_out1_075 = net_075.layers['parsing_fc']
parsing_out1_100 = net_100.layers['parsing_fc']
parsing_out1_125 = net_125.layers['parsing_fc']
parsing_out1_150 = net_150.layers['parsing_fc']
parsing_out1_175 = net_175.layers['parsing_fc']
parsing_out2_050 = net_050.layers['parsing_rf_fc']
parsing_out2_075 = net_075.layers['parsing_rf_fc']
parsing_out2_100 = net_100.layers['parsing_rf_fc']
parsing_out2_125 = net_125.layers['parsing_rf_fc']
parsing_out2_150 = net_150.layers['parsing_rf_fc']
parsing_out2_175 = net_175.layers['parsing_rf_fc']

# edge net
edge_out2_100 = net_100.layers['edge_rf_fc']
edge_out2_125 = net_125.layers['edge_rf_fc']
edge_out2_150 = net_150.layers['edge_rf_fc']
edge_out2_175 = net_175.layers['edge_rf_fc']

# combine resize
parsing_out1 = tf.reduce_mean(tf.stack([tf.image.resize_images(parsing_out1_050, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out1_075, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out1_100, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out1_125, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out1_150, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out1_175, tf.shape(image_batch)[1:3,])]), axis=0)
parsing_out2 = tf.reduce_mean(tf.stack([tf.image.resize_images(parsing_out2_050, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out2_075, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out2_100, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out2_125, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out2_150, tf.shape(image_batch)[1:3,]),
                                        tf.image.resize_images(parsing_out2_175, tf.shape(image_batch)[1:3,])]), axis=0)
edge_out2_100 = tf.image.resize_images(edge_out2_100, tf.shape(image_batch)[1:3,])
edge_out2_125 = tf.image.resize_images(edge_out2_125, tf.shape(image_batch)[1:3,])
edge_out2_150 = tf.image.resize_images(edge_out2_150, tf.shape(image_batch)[1:3,])
edge_out2_175 = tf.image.resize_images(edge_out2_175, tf.shape(image_batch)[1:3,])
edge_out2 = tf.reduce_mean(tf.stack([edge_out2_100, edge_out2_125, edge_out2_150, edge_out2_175]), axis=0)

raw_output = tf.reduce_mean(tf.stack([parsing_out1, parsing_out2]), axis=0)
head_output, tail_output = tf.unstack(raw_output, num=2, axis=0)
tail_list = tf.unstack(tail_output, num=20, axis=2)
tail_list_rev = [None] * 20
for xx in range(14):
    tail_list_rev[xx] = tail_list[xx]
tail_list_rev[14] = tail_list[15]
tail_list_rev[15] = tail_list[14]
tail_list_rev[16] = tail_list[17]
tail_list_rev[17] = tail_list[16]
tail_list_rev[18] = tail_list[19]
tail_list_rev[19] = tail_list[18]
tail_output_rev = tf.stack(tail_list_rev, axis=2)
tail_output_rev = tf.reverse(tail_output_rev, tf.stack([1]))
raw_output_all = tf.reduce_mean(tf.stack([head_output, tail_output_rev]), axis=0)
raw_output_all = tf.expand_dims(raw_output_all, dim=0)
pred_scores = tf.reduce_max(raw_output_all, axis=3)
raw_output_all = tf.argmax(raw_output_all, axis=3)
pred_all = tf.expand_dims(raw_output_all, dim=3)  # Create 4-d tensor.

raw_edge = tf.reduce_mean(tf.stack([edge_out2]), axis=0)
head_output, tail_output = tf.unstack(raw_edge, num=2, axis=0)
tail_output_rev = tf.reverse(tail_output, tf.stack([1]))
raw_edge_all = tf.reduce_mean(tf.stack([head_output, tail_output_rev]), axis=0)
raw_edge_all = tf.expand_dims(raw_edge_all, dim=0)
pred_edge = tf.sigmoid(raw_edge_all)
res_edge = tf.cast(tf.greater(pred_edge, 0.5), tf.int32)

# prepare ground truth
preds = tf.reshape(pred_all, [-1,])
gt = tf.reshape(label_batch, [-1,])
weights = tf.cast(tf.less_equal(gt, N_CLASSES - 1), tf.int32)  # Ignoring all labels greater than or equal to n_classes.
mIoU, update_op_iou = tf.contrib.metrics.streaming_mean_iou(preds, gt, num_classes=N_CLASSES, weights=weights)
macc, update_op_acc = tf.contrib.metrics.streaming_accuracy(preds, gt, weights=weights)

# # Which variables to load.
# restore_var = tf.global_variables()
# # Set up tf session and initialize variables.
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# # gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
# # config=tf.ConfigProto(gpu_options=gpu_options)
# init = tf.global_variables_initializer()

# evaluate prosessing
parsing_dir = './output'

# Set up tf session and initialize variables.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
```
The code above initializes the network and loads the model parameters. Below, two functions process the data in the lists val1.txt and val2.txt respectively.
```
# Function that processes the first list
def humanParsing1():
    # Which variables to load.
    restore_var = tf.global_variables()
    init = tf.global_variables_initializer()
    with tf.Session(config=config) as sess:
        sess.run(init)
        sess.run(tf.local_variables_initializer())
        # Load weights.
        loader = tf.train.Saver(var_list=restore_var)
        if RESTORE_FROM is not None:
            if load(loader, sess, RESTORE_FROM):
                print(" [*] Load SUCCESS")
            else:
                print(" [!] Load failed...")
        # Create queue coordinator.
        coord = tf.train.Coordinator()
        # Start queue threads.
        threads = tf.train.start_queue_runners(coord=coord, sess=sess)
        # Iterate over training steps.
        for step in range(NUM_STEPS):
            # parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])  # , update_op
            parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])  # , update_op
            print('step {:d}'.format(step))
            print(image_list[step])
            img_split = image_list[step].split('/')
            img_id = img_split[-1][:-4]
            msk = decode_labels(parsing_, num_classes=N_CLASSES)
            parsing_im = Image.fromarray(msk[0])
            parsing_im.save('{}/{}_vis.png'.format(parsing_dir, img_id))
        coord.request_stop()
        coord.join(threads)


# Function that processes the second list
def humanParsing2():
    # Set up tf session and initialize variables.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    # gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
    # config=tf.ConfigProto(gpu_options=gpu_options)
    # Which variables to load.
    restore_var = tf.global_variables()
    init = tf.global_variables_initializer()
    with tf.Session(config=config) as sess:
        # Create queue coordinator.
        coord = tf.train.Coordinator()
        sess.run(init)
        sess.run(tf.local_variables_initializer())
        # Load weights.
        loader = tf.train.Saver(var_list=restore_var)
        if RESTORE_FROM is not None:
            if load(loader, sess, RESTORE_FROM):
                print(" [*] Load SUCCESS")
            else:
                print(" [!] Load failed...")
        LIST_PATH = './datasets/CIHP/list/val1.txt'
        DATA_ID_LIST = './datasets/CIHP/list/val_id1.txt'
        with open(DATA_ID_LIST, 'r') as f:
            NUM_STEPS = len(f.readlines())
        # with tf.name_scope("create_inputs"):
        with tf.name_scope(scp1):
            tf.get_variable_scope().reuse_variables()
            reader = ImageReader(DATA_DIR, LIST_PATH, DATA_ID_LIST, None, False, False, False, coord)
            image, label, edge_gt = reader.image, reader.label, reader.edge
            image_rev = tf.reverse(image, tf.stack([1]))
            image_list = reader.image_list
        # Start queue threads.
        threads = tf.train.start_queue_runners(coord=coord, sess=sess)
        # Load weights.
        loader = tf.train.Saver(var_list=restore_var)
        if RESTORE_FROM is not None:
            if load(loader, sess, RESTORE_FROM):
                print(" [*] Load SUCCESS")
            else:
                print(" [!] Load failed...")
        # Iterate over training steps.
        for step in range(NUM_STEPS):
            parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])  # , update_op
            print('step {:d}'.format(step))
            print(image_list[step])
            img_split = image_list[step].split('/')
            img_id = img_split[-1][:-4]
            msk = decode_labels(parsing_, num_classes=N_CLASSES)
            parsing_im = Image.fromarray(msk[0])
            parsing_im.save('{}/{}_vis.png'.format(parsing_dir, img_id))
        coord.request_stop()
        coord.join(threads)


if __name__ == '__main__':
    humanParsing1()
    humanParsing2()
```
The final output is always a loop over the first list. The reader uses `self.queue = tf.train.slice_input_producer([self.images, self.labels, self.edges], shuffle=shuffle)`, i.e. a queue for multithreaded inference, and the result keeps cycling through the first list only. Could someone tell me how to solve this?
Python: how do I change the axis tick units?
I need some help: my data has many digits after the decimal point, and I'd like the y-axis labels to read 0, 0.05, 0.10, 0.15... so the plot looks cleaner. How should I change the code? The approaches I found online didn't seem to work. ![图片说明](https://img-ask.csdn.net/upload/201801/16/1516092786_323009.png)
```
# X acceleration / threshold
pl.title('Acce_x')
pl.xlabel('seq')
pl.ylabel('')
x = list_seq[1:len(list_seq)]
# pl.plot(range(1, len(Acce_x_last) + 1), Acce_x_last, label='Acce_x')
pl.plot(range(len(list_gradient_x)), list_gradient_x, label='Acce_x gradient')
pl.legend()
pl.show()
```
Any help appreciated!
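For the y-axis formatting asked about above, one option is `Axes.set_yticks` plus fixed-format labels. A minimal sketch, with made-up data standing in for `list_gradient_x`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no window needed for the sketch
import matplotlib.pyplot as plt
import numpy as np

y = np.linspace(0, 0.17, 50)  # stand-in for list_gradient_x
fig, ax = plt.subplots()
ax.plot(range(len(y)), y, label='Acce_x gradient')

# Place ticks at 0, 0.05, 0.10, 0.15, 0.20 and format them to two decimals
ax.set_yticks(np.arange(0, 0.21, 0.05))
ax.set_yticklabels(['%.2f' % t for t in ax.get_yticks()])
ax.legend()
fig.savefig('acce_x.png')
```

`matplotlib.ticker.FormatStrFormatter('%.2f')` on `ax.yaxis` achieves the same label formatting without fixing the tick positions.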
Python 2: Chinese filenames come out garbled when opening files with tkinter
The goal of this program is a file-tree browser: the "打开" (Open) button brings up a directory dialog for choosing a folder. Double-clicking a folder opens it; double-clicking a file should open the file (that part isn't written yet). Right now, after opening a folder, many entries display as mojibake, and opening a directory doesn't immediately list its contents — I have to go in and back out before they show up. The program surely still has plenty of bugs that I'll fix bit by bit, but first: how do I solve the garbled Chinese characters? When it runs I get this error:

Traceback (most recent call last):
  File "D:\Python27\lib\lib-tk\Tkinter.py", line 1547, in __call__
    return self.func(*args)
  File "d:\Untitled-1.py", line 28, in setDirAndGo
    doLS()
  File "d:\Untitled-1.py", line 52, in doLS
    cwd.set(os.getcwd()+'\\'+tdir)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 3: ordinal not in range(128)

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from time import sleep
from Tkinter import *
import tkFileDialog  # library for the path-selection dialog
import tkMessageBox
import fnmatch  # keyword matching for file selection

def dakai():
    dirs.delete(0, END)
    default_dir = r"C:\Users\lenovo\Desktop"  # default directory to open
    path = tkFileDialog.askdirectory(title=u"选择文件", initialdir=(os.path.expanduser(default_dir)))
    doLS1(path)

# Called on double-click: set the selection background to red and open the selection via doLS
def setDirAndGo(ev=None):
    last = cwd.get()
    dirs.config(selectbackground='red')
    check = dirs.get(dirs.curselection())
    if not check:
        check = os.curdir
    cwd.set(check)
    doLS()

# Core function that refreshes the directory listing
def doLS(ev=None):
    error = ''
    tdir = cwd.get()
    if not tdir: tdir = os.curdir
    # If the path is wrong, or it is a file rather than a directory, update the error message
    if not os.path.exists(tdir):
        error = os.getcwd() + '\\' + tdir + ':未找到文件'
    elif not os.path.isdir(tdir):
        error = os.getcwd() + '\\' + tdir + ':未找到目录'
    if error:
        cwd.set(error)
        top2.update()
        sleep(1)
        if not (last):
            last = os.curdir
        cwd.set(os.curdir)
        dirs.config(selectbackground='LightSkyBlue')
        dirn.config(text=os.getcwd() + '\\' + tdir)
        top2.update()
        return
    cwd.set(os.getcwd() + '\\' + tdir)
    top2.update()
    dirlist = os.listdir(tdir)  # os.listdir() returns the names of the entries in the directory
    dirlist.sort()
    os.chdir(tdir)  # os.chdir() changes the current working directory
    # Refresh the label at the top of the window
    dirl.config(text=os.getcwd().decode("gbk").encode("utf-8"))
    top2.update()
    dirs.delete(0, END)
    dirs.insert(END, os.pardir)  # os.chdir(os.pardir) goes up one level, i.e. insert the parent (..) into dirs
    # Insert the names from the chosen directory into dirs one by one
    for eachFile in dirlist:
        dirs.insert(END, eachFile.decode("gbk").encode("utf-8"))  # decode first, then encode -- bingo!
    cwd.set(os.curdir)
    dirs.config(selectbackground='LightSkyBlue')

def doLS1(path):
    error = ''
    tdir = path
    if not tdir: tdir = os.curdir
    # If the path is wrong, or it is a file rather than a directory, update the error message
    if not os.path.exists(tdir):
        error = os.getcwd() + '\\' + tdir + ':未找到文件'
    elif not os.path.isdir(tdir):
        error = os.getcwd() + '\\' + tdir + ':未找到目录'
    if error:
        cwd.set(error)
        top2.update()
        sleep(1)
        if not (last):
            last = os.curdir
        cwd.set(os.curdir)
        dirs.config(selectbackground='LightSkyBlue')
        dirn.config(text=os.getcwd() + '\\' + tdir)
        top2.update()
        return
    cwd.set(os.getcwd() + '\\' + tdir)
    top2.update()
    dirlist = os.listdir(tdir)
    dirlist.sort()
    os.chdir(tdir)
    dirl.config(text=os.getcwd().decode("gbk").encode("utf-8"))
    top2.update()
    dirs.delete(0, END)
    dirs.insert(END, os.pardir)
    # Insert the names from the chosen directory into dirs one by one
    for eachFile in dirlist:
        dirs.insert(END, eachFile.decode("gbk").encode("utf-8"))
    cwd.set(os.curdir)
    dirs.config(selectbackground='LightSkyBlue')

top2 = Tk()
top2.title('营销集约管控中心-文件树')
cwd = StringVar(top2)
dirl = Label(top2, fg='blue')
dirl.pack()
dirfm = Frame(top2)
dirsb = Scrollbar(dirfm)
dirsb.pack(side=RIGHT, fill=Y)
dirs = Listbox(dirfm, height=15, width=50, yscrollcommand=dirsb.set)
# Bind double-click on the Listbox to setDirAndGo
dirs.bind('<Double-1>', setDirAndGo)
# Single-click should update the entry below with the selected path, but binding
# self.dirs.bind('<Button-1>', self.setDirn) errors out, so it is disabled
# dirs.bind("<<ListboxSelect>>", setDirn)
dirsb.config(command=dirs.yview)
dirs.pack(side=LEFT, fill=BOTH)
dirfm.pack()
# Second frame, bfm, holds the buttons
bfm = Frame(top2)
open = Button(bfm, text='打开', command=dakai, activeforeground='white', activebackground='blue')
open.pack(side=LEFT)
bfm.pack()

if __name__ == '__main__':
    # Start with the desktop as the initial directory
    mainloop()
```
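The `UnicodeDecodeError` in the traceback above is the classic Python 2 mix-up: `os.listdir()` on a byte-string path returns GBK-encoded byte strings on Chinese Windows, and combining them with unicode text makes Python fall back to the `ascii` codec, which cannot decode bytes above 127. A minimal sketch of the failure mode (written in Python 3 syntax for the demo; the byte values are what Python 2 would hand back):

```python
# -*- coding: utf-8 -*-
name = u"营销"
gbk_bytes = name.encode("gbk")  # what Python 2 os.listdir() returns on a GBK system

# The implicit conversion Python 2 attempts uses the ascii codec and fails:
try:
    gbk_bytes.decode("ascii")
except UnicodeDecodeError as exc:
    print("ascii codec fails:", exc.reason)

# Decoding with the correct codec recovers the text -- which is why
# eachFile.decode("gbk") works in the code above.
recovered = gbk_bytes.decode("gbk")
print(recovered == name)
```

In Python 2 specifically, passing a *unicode* path (e.g. `os.listdir(u".")`) makes `listdir` return unicode names directly, avoiding the manual `decode("gbk")` calls altogether.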
Problems scraping flight-ticket prices with a Python crawler
I'm a student who has taught myself some Python. I want a crawler that scrapes flight prices so I can track discounted tickets more easily, so I took some scraping code from the web and modified it. It roughly does what I want: it runs on a server and e-mails me as soon as a discounted ticket appears. But there are persistent problems. First, at runtime it throws the error below (it looks like a list index going out of range):

Exception in thread Thread-24:
Traceback (most recent call last):
  File "/usr/local/python27/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/local/python27/lib/python2.7/threading.py", line 755, in run
    self.function(*self.args, **self.kwargs)
  File "SpecialFlightPrice.py", line 72, in task_query_flight
    flights=getdate(city, today, enddate)
  File "SpecialFlightPrice.py", line 27, in getdate
    json_data = re.findall(pattern, price_html)[0]
IndexError: list index out of range

Second, I want to empty the ticket-info file at a set time every day, but the code I wrote doesn't manage it — please help me fix that too. Thanks in advance! Source below (I replaced my two e-mail addresses with xxxxxxxx; to run it, substitute your own two mailboxes. Since it runs on a server, it takes a few arguments: departure city, date, and how many days after that date to search):

```
# -*- coding: utf-8 -*-
import datetime
import time
import json
import urllib
import re
import sys
import threading
from email.mime.text import MIMEText
import smtplib
from time import sleep
from threading import Timer
from _ast import While

default_encoding = 'utf-8'
reload(sys)
sys.setdefaultencoding(default_encoding)

def getdate(city, startdate, enddate):
    url = 'https://sjipiao.alitrip.com/search/cheapFlight.htm?startDate=%s&endDate=%s&' \
          'routes=%s-&_ksTS=1469412627640_2361&callback=jsonp2362&ruleId=99&flag=1' % (startdate, enddate, city)
    price_html = urllib.urlopen(url).read().strip()
    pattern = r'jsonp2362\(\s+(.+?)\)'
    re_rule = re.compile(pattern)
    json_data = re.findall(pattern, price_html)[0]
    price_json = json.loads(json_data)
    flights = price_json['data']['flights']  # flights Info
    return flights

def sendmail(a, b, c, d):
    _user = "xxxxxxxxxxx@163.com"
    _pwd = "xxxxxxxxxxx"
    _to = "xxxxxxxxxxxxx@qq.com"
    msg = MIMEText('%s%s%s%s' % (a, b, c, d), 'plain', 'utf-8')
    msg["Subject"] = "有特价票啦~"
    msg["From"] = _user
    msg["To"] = _to
    try:
        s = smtplib.SMTP_SSL("smtp.163.com", 465)
        s.login(_user, _pwd)
        s.sendmail(_user, _to, msg.as_string())
        s.quit()
        print "Success!"
    except smtplib.SMTPException:
        print "Falied"

def task_query_flight():
    city = str(sys.argv[1])
    year = int(sys.argv[2])
    month = int(sys.argv[3])
    day = int(sys.argv[4])
    delay = int(sys.argv[5])
    if city == 'DL':
        city = 'DLC'
    elif city == 'NJ':
        city = 'NKG'
    elif city == 'BJ':
        city = 'BJS'
    today = datetime.date(year, month, day)
    enddate = today + datetime.timedelta(delay)
    print '从%s到%s的最便宜的机票价格是' % (today, enddate)
    flights = getdate(city, today, enddate)
    for f in flights:
        if f['discount'] <= 2:
            source = '从:%s-' % f['depName']
            dest = '到:%s\t' % f['arrName']
            price = '\t价格:%s%s(折扣:%s)\t' % ((f['price']), f['priceDesc'], f['discount'])
            depart_date = '\t日期:%s' % f['depDate']
            print source + dest + price + depart_date
            with open('store.txt', 'a') as f:
                f.write(' ')
            with open('store.txt', 'r') as f:
                for line in f.readlines():
                    if '%s%s%s%s' % (source, dest, price, depart_date) in line:
                        Timer(60, task_query_flight).start()
                    else:
                        sendmail(source, dest, price, depart_date)
                        with open('store.txt', 'a') as f:
                            f.write('%s%s%s%s' % (source, dest, price, depart_date))
    Timer(60, task_query_flight).start()

'''
Two problems:
1. list index out of range
2. The Timer only ever fires once -- no idea why.
If nothing with discount < 2 is found, keep looping, and at a set time
each day truncate the file contents.
'''

while True:
    task_query_flight()
    current_time = time.localtime(time.time())
    if ((current_time.tm_hour == 7) and (current_time.tm_min == 0)):
        with open('store1.txt', 'w') as f:
            f.truncate()
    time.sleep(60)

if __name__ == '__main__':
    task_query_flight()
```
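On the `IndexError` in `getdate` above: `re.findall` returns an empty list whenever the response body doesn't contain the `jsonp2362(...)` wrapper (blocked request, changed page layout, network hiccup), and indexing `[0]` then raises. A defensive sketch, using stand-in response strings rather than a live request:

```python
import re

pattern = r'jsonp2362\(\s+(.+?)\)'

def extract_json(price_html):
    """Return the jsonp payload, or None when the page doesn't match."""
    matches = re.findall(pattern, price_html)
    if not matches:  # empty list -> indexing [0] would raise IndexError
        return None
    return matches[0]

# Hypothetical response bodies, for illustration only:
ok_html = 'jsonp2362( {"data": {"flights": []}} )'
bad_html = '<html>Access denied</html>'

print(extract_json(ok_html))
print(extract_json(bad_html))  # None -- the caller can retry instead of crashing
```

Checking for `None` at the call site (retry, log, or skip the cycle) keeps the worker thread alive instead of dying inside `threading.Timer`.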
Python 3 multiprocessing: passing a Queue to the child processes stops them from running
```
#!/usr/bin/python
from multiprocessing import Pool, Queue
import time

def Foo(i, q):
    print("sub", i)

if __name__ == "__main__":
    q = Queue()
    pool = Pool(5)
    for i in range(10):
        pool.apply_async(func=Foo, args=(i, q, ))
    pool.close()
    pool.join()
    print('end')
```
As soon as I pass a queue to the child processes, none of them run at all. Passing a list or a plain number works fine. Please advise.
![图片说明](https://img-ask.csdn.net/upload/201812/28/1545996695_326091.png)
Scraping stock data: Python reports no error but produces no results! Urgently asking for help!
```
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import traceback
import re
import time
import requests

def GetHTMLSource(url):
    try:
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        print("异常")
        return ""

def SetFileName():
    dirname = time.strftime('%Y%m%d', time.localtime(time.time()))  # current date
    dirname += 'sh'
    return dirname

def getStockList(lst, stock_list_url):
    # Collect the stock codes starting with sh6 from the Eastmoney page
    html = GetHTMLSource(stock_list_url)
    soupdata = BeautifulSoup(html, 'html.parser')
    a = soupdata.find_all('a')
    # Walk every 'a' tag with find_all and pull out its 'href' attribute
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"sh6d{5}", href)[0])
        except:
            continue

def getStockInfo(lst, stock_info_url, fpath):
    ndate = time.strftime('%Y%m%d', time.localtime(time.time()))
    for stock in lst:
        url = stock_info_url + stock + '.html'
        html = GetHTMLSource(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
            if stockInfo == None:
                continue
            keyData = stockInfo.find_all('dt')
            valueData = stockInfo.find_all('dd')
            inp = stock + "," + ndate + ","
            for i in range(len(keyData)):
                key = keyData[i].text
                val = valueData[i].text
                infoDict[key] = val
            inp += infoDict['最高'] + "," + infoDict['换手率'] + "," + infoDict['成交量'] + "," + infoDict['成交额'] + ""
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(inp)
        except:
            traceback.print_exc()
            continue

def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D://a.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()
```
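One thing worth checking in `getStockList` above: as posted, the pattern `r"sh6d{5}"` matches the literal letter `d` repeated five times, so it never matches codes like `sh600000`, `lst` stays empty, and the script silently writes nothing — consistent with "no error but no results". A quick sketch comparing it against a digit-class pattern (the `\d` variant is an assumption about the intended regex; the hrefs are made up):

```python
import re

# Hypothetical hrefs like those on the stock-list page:
hrefs = ['http://quote.eastmoney.com/sh600000.html',
         'http://quote.eastmoney.com/sz000001.html']

as_posted = r"sh6d{5}"    # literal 'd' repeated five times -- never matches a code
digit_class = r"sh6\d{5}"  # five digits -- presumably what was intended

posted_hits = [code for h in hrefs for code in re.findall(as_posted, h)]
digit_hits = [code for h in hrefs for code in re.findall(digit_class, h)]
print(posted_hits)  # []
print(digit_hits)   # ['sh600000']
```

The bare `except: continue` in the loop also swallows every failure, which is why the empty result arrives without any traceback; printing the exception during debugging would have exposed this immediately.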
ValueError when implementing SegNet in Keras — could someone take a look?
![图片说明](https://img-ask.csdn.net/upload/201904/05/1554454470_801036.jpg)
The error is: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (32, 10)
Keras with the TensorFlow backend; code below:
```
# coding=utf-8
import matplotlib
from PIL import Image
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import argparse
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization, Reshape, Permute, Activation, Flatten
# from keras.utils.np_utils import to_categorical
# from keras.preprocessing.image import img_to_array
from keras.models import Model
from keras.layers import Input
from keras.callbacks import ModelCheckpoint
# from sklearn.preprocessing import LabelBinarizer
# from sklearn.model_selection import train_test_split
# import pickle
import matplotlib.pyplot as plt
import os
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

path = '/tmp/2'
os.chdir(path)

training_set = train_datagen.flow_from_directory(
    'trainset',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)

test_set = test_datagen.flow_from_directory(
    'testset',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)

def SegNet():
    model = Sequential()
    # encoder
    model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(64, 64, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (128,128)
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (64,64)
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (32,32)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (16,16)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (8,8)
    # decoder
    model.add(UpSampling2D(size=(2, 2)))
    # (16,16)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (32,32)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (64,64)
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (128,128)
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (256,256)
    model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(64, 64, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(10, (1, 1), strides=(1, 1), padding='valid', activation='relu'))
    model.add(BatchNormalization())
    model.add(Reshape((64*64, 10)))
    # Swap axis 1 and axis 2, equivalent to np.swapaxes(layer, 1, 2)
    model.add(Permute((2, 1)))
    # model.add(Flatten())
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    model.summary()
    return model

def main():
    model = SegNet()
    filepath = "/tmp/2/weights.best.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
    callbacks_list = [checkpoint]
    history = model.fit_generator(
        training_set,
        steps_per_epoch=(training_set.samples / 32),
        epochs=20,
        callbacks=callbacks_list,
        validation_data=test_set,
        validation_steps=(test_set.samples / 32))
    # Plotting the Loss and Classification Accuracy
    model.metrics_names
    print(history.history.keys())
    # "Accuracy"
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('Model Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()
    # "Loss"
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()

if __name__ == '__main__':
    main()
```
This is the crux: SegNet has no fully connected layers, so shouldn't the final output be a label map the same size as the input image, with a class decision per pixel...? How do I fix this? The input images are 64×64 with 3 channels, 10 classes in total, placed in the testset and trainset folders.
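On the shape mismatch in the error message: `flow_from_directory` with `class_mode='categorical'` yields one label vector *per image*, shape `(32, 10)`, but after `Reshape((64*64, 10))` and `Permute((2, 1))` the model expects a *per-pixel* target of shape `(batch, 10, 64*64)`. Segmentation needs mask images and a custom generator, not folder-derived class labels. A NumPy sketch (with random stand-in masks) of the target shape the model wants:

```python
import numpy as np

batch, h, w, n_classes = 32, 64, 64, 10

# Hypothetical integer masks: one class id per pixel (what a real
# segmentation generator would load from annotation images).
masks = np.random.randint(0, n_classes, size=(batch, h, w))

onehot = np.eye(n_classes)[masks]                 # (32, 64, 64, 10): one-hot per pixel
target = onehot.reshape(batch, h * w, n_classes)  # (32, 4096, 10): flatten spatial dims
target = target.transpose(0, 2, 1)                # (32, 10, 4096): matches Permute((2, 1))
print(target.shape)
```

An alternative is dropping the `Permute` so the model outputs `(batch, 4096, 10)` and feeding targets of that shape directly, which is the more common layout for `categorical_crossentropy` over pixels.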