Problem with urllib in Python 3.1, please help! Thanks!!

When using urllib, the line `data = urllib.urlencode(values)` raises `AttributeError: 'module' object has no attribute 'urlencode'`.
Searching online suggests that since Python 3 the call should be `data = urllib.parse.urlencode(values)`, but that in turn raises `AttributeError: 'module' object has no attribute 'parse'`. Does anyone know what is going on here? Any help would be much appreciated.

1 answer

Yes, in Python 3 the `urlencode()` function has indeed been moved into `urllib.parse`.
But it is odd that your environment seems to have no `parse`; note that the submodule must be imported explicitly (`import urllib.parse`), since `import urllib` alone does not load it. Also, the current Python release is already 3.5; consider using the latest version.
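A minimal sketch of the Python 3 spelling described in the answer above (the form fields and URL here are illustrative, not from the question):

```python
import urllib.parse    # importing urllib alone does NOT bind the submodule
import urllib.request

values = {'q': 'hello', 'lang': 'en'}    # illustrative form fields
data = urllib.parse.urlencode(values)    # 'q=hello&lang=en'
body = data.encode('utf-8')              # urlopen wants bytes for a POST body
req = urllib.request.Request('http://example.com/search', body)
# resp = urllib.request.urlopen(req)     # network call, shown but not run
print(data)  # q=hello&lang=en
```

The `import urllib.parse` line is the key: without it, `urllib.parse` raises exactly the AttributeError from the question.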

oyljerry
oyljerry replying to guanbingchichi: there is no urllib2 in Python 3 any more, and request has changed as well
about 3 years ago · Reply
guanbingchichi
guanbingchichi: I just searched online again and added `from urllib.parse import urlparse`. Also, urllib2 can no longer be used; it needs `import urllib.request as urllib2` instead
about 3 years ago · Reply
Other related questions
urllib.request.Request() in Python 3.6.5 errors out when a header is added
```python
# coding:utf-8
import urllib.request
import urllib.parse

url = 'http://192.168.**.**:9080/api/transactions'
header = {
    'Content-Type': 'application/json'
}
values = {
    "currentToken": {
        "simplifiedName": "ETH",
        "address": "0x5bcd4d0508bc86c48760d0805962261d260d7a88"
    },
    "txid": ""
}
data = urllib.parse.urlencode(values)
data = data.encode(encoding='UTF-8')
request = urllib.request.Request(url, data, header)
#request = urllib.request.Request(url, data)
print("111")
html = urllib.request.urlopen(request)
print("222")
html = html.read().decode('utf-8')
print(html)
```
Result: an error
```
D:\tool\Python36\python.exe D:/Users/Administrator/PycharmProjects/coinPlatform/test/test6.py
111
Traceback (most recent call last):
  File "D:/Users/Administrator/PycharmProjects/coinPlatform/test/test6.py", line 21, in <module>
    html = urllib.request.urlopen(request)
  File "D:\tool\Python36\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "D:\tool\Python36\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "D:\tool\Python36\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "D:\tool\Python36\lib\urllib\request.py", line 570, in error
    return self._call_chain(*args)
  File "D:\tool\Python36\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "D:\tool\Python36\lib\urllib\request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

Process finished with exit code 1
```
With the same script but the other `Request` line active instead (no header):
```python
#request = urllib.request.Request(url, data, header)
request = urllib.request.Request(url, data)
```
Result: the request succeeds, but I do not get the result I want
```
D:\tool\Python36\python.exe D:/Users/Administrator/PycharmProjects/coinPlatform/test/test6.py
111
222
{"code":0,"message":"success","data":{"currentToken":{},"transactions":[]}}

Process finished with exit code 0
```
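Not an answer from the thread, but one hedged observation: the request declares `Content-Type: application/json` while the body is built with `urlencode()`, which produces form encoding; a server routing on the content type could plausibly reject that with a 404. A sketch of sending an actual JSON body (the URL here is a placeholder, since the real address is masked in the question):

```python
import json
import urllib.request

values = {"currentToken": {"simplifiedName": "ETH"}, "txid": ""}
body = json.dumps(values).encode('utf-8')   # JSON bytes, matching the header
req = urllib.request.Request('http://example.com/api/transactions', body,
                             {'Content-Type': 'application/json'})
# html = urllib.request.urlopen(req).read().decode('utf-8')  # network call
print(body[:1])  # b'{'
```

Whether this fixes the 404 depends on the server; it only makes the body agree with the declared content type.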
Python 3 error: urllib.error.URLError: &lt;urlopen error unknown url type: "http&gt;
I am trying to crawl the Sina front-page news to local files. The program errors out. Source:
```python
import urllib.request, re

url = "https://www.sina.com.cn/"
req = urllib.request.Request(url)
req.add_header("User-Agent", "马赛克")
pat1 = '<a target="_blank" href=(.*?)>.*?</a>'
data1 = urllib.request.urlopen(req).read().decode("UTF-8", "ignore")
allink = re.compile(pat1).findall(data1)
for i in range(0, len(allink)):
    thislink = allink[i]
    pat2 = '<frame src=(.*?)>'
    req2 = urllib.request.Request(url)
    req2.add_header("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:65.0) Gecko/20100101 Firefox/65.0")
    thispage = urllib.request.urlopen(req2).read().decode("UTF-8", "ignore")
    isframe = re.compile(pat2).findall(thispage)
    if len(isframe) == 0:
        urllib.request.urlretrieve(thislink, "data/" + str(i) + ".html")
    else:
        flink = isframe[0]
        urllib.request.urlretrieve(flink, "data/" + str(i) + ".html")
```
The error:
```
Traceback (most recent call last):
  File "/Users/tanzhouyan/Desktop/python/新闻爬虫.py", line 73, in <module>
    urllib.request.urlretrieve(thislink,"data/"+str(i)+".html")
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 548, in _open
    'unknown_open', req)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1387, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: "http>
```
I have not been able to find a solution online. Thanks, everyone~
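A hedged guess at the cause: `pat1` captures the href value together with its surrounding quotes, so the string handed to `urlretrieve` literally begins with a `"` character, which is exactly what the `unknown url type: "http` message shows. A sketch of stripping the quotes first (the sample link is made up):

```python
# what a capture group like href=(.*?) typically returns for href="...":
thislink = '"https://news.sina.com.cn/china/"'
clean = thislink.strip('"\'')   # drop surrounding quotes before urlretrieve
print(clean)                    # https://news.sina.com.cn/china/
```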
Opening http/https with Python fails on ubuntu16.04
![Opening https with urllib.urlopen() in Python on ubuntu16.04 raises IOError](https://img-ask.csdn.net/upload/201708/27/1503834919_805174.png)
```
>>> import urllib
>>> urllib.urlopen('https://www.baidu.com')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/urllib.py", line 87, in urlopen
    return opener.open(url)
  File "/usr/local/lib/python2.7/urllib.py", line 210, in open
    return self.open_unknown(fullurl, data)
  File "/usr/local/lib/python2.7/urllib.py", line 222, in open_unknown
    raise IOError, ('url error', 'unknown url type', type)
IOError: [Errno url error] unknown url type: 'https'
```
Error message: IOError: [Errno url error] unknown url type: 'https'
![Opening https/http with urllib2.urlopen() in Python on ubuntu16.04 fails](https://img-ask.csdn.net/upload/201708/27/1503835100_415721.png)
```
>>> import urllib2
>>> urllib2.urlopen('https://www.baidu.com')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/local/lib/python2.7/urllib2.py", line 452, in _open
    'unknown_open', req)
  File "/usr/local/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/local/lib/python2.7/urllib2.py", line 1266, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib2.URLError: <urlopen error unknown url type: https>
>>> urllib2.urlopen('http://www.baidu.com')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/local/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/local/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/local/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/local/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>
>>>
```
Error messages:
urllib2.URLError: &lt;urlopen error unknown url type: https&gt;
urllib2.URLError: &lt;urlopen error [Errno -3] Temporary failure in name resolution&gt;
How can this be solved? Any pointers would be appreciated, thanks!
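One hedged diagnostic for the question above: on a self-compiled Python under /usr/local, `unknown url type: https` usually means the interpreter was built without the ssl module (the OpenSSL development headers were missing at build time); the second error is a plain DNS failure, separate from Python. A quick check:

```python
# If this import fails, Python was built without OpenSSL support;
# rebuilding after installing the OpenSSL dev package should restore
# https support in urllib/urllib2.
try:
    import ssl
    print('ssl available:', ssl.OPENSSL_VERSION)
except ImportError:
    print('ssl module missing - rebuild Python with OpenSSL headers installed')
```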
Installing requests with pip fails on python3.7; any advice?
Installing requests through pip on python3.7 fails. I have tried many fixes found online: 1) switching the package index mirror, 2) installing from a wheel, 3) `pip --timeout=60000`; none of them helped. Every computer in our company runs Symantec antivirus; could that interfere? A detailed solution would be appreciated. The error:
![screenshot](https://img-ask.csdn.net/upload/201906/10/1560136696_190131.jpg)
```
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x0000000003F7B588>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/requests/
```
What does the error ModuleNotFoundError: No module named 'pip._vendor.urllib3.packages' mean?
While installing pip I hit ModuleNotFoundError: No module named 'pip._vendor.urllib3.packages'. I downloaded the urllib3 package separately and installed it successfully:
```
Extracting urllib3-1.25.2-py3.7.egg to e:\计算机二级备考\python37\lib\site-packages
urllib3 1.25.2 is already the active version in easy-install.pth
Installed e:\计算机二级备考\python37\lib\site-packages\urllib3-1.25.2-py3.7.egg
Processing dependencies for urllib3==1.25.2
Finished processing dependencies for urllib3==1.25.2
```
But running `pip list` afterwards produces the same error. How should I deal with it? Thanks for any guidance. (My original goal was to install the requests module; after a lot of fiddling it still fails. A working solution would be much appreciated.)
How do I fix this error when upgrading pip? The old version has already been removed. I am on python 3.7
```
D:\python>python -m pip install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/30/db/9e38760b32e3e7f40cce46dd5fb107b8c73840df38f0046d8e6514e675a1/pip-19.2.3-py2.py3-none-any.whl (1.4MB)
     1% |▌ | 20kB 1.4kB/s eta 0:16:35
Exception:
Traceback (most recent call last):
  File "D:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 302, in _error_catcher
    yield
  File "D:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 384, in read
    data = self._fp.read(amt)
  File "D:\python\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 60, in read
    data = self.__fp.read(amt)
  File "D:\python\lib\http\client.py", line 447, in read
    n = self.readinto(b)
  File "D:\python\lib\http\client.py", line 491, in readinto
    n = self.fp.readinto(b)
  File "D:\python\lib\socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "D:\python\lib\ssl.py", line 1049, in recv_into
    return self.read(nbytes, buffer)
  File "D:\python\lib\ssl.py", line 908, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\python\lib\site-packages\pip\_internal\basecommand.py", line 228, in main
    status = self.run(options, args)
  File "D:\python\lib\site-packages\pip\_internal\commands\install.py", line 291, in run
    resolver.resolve(requirement_set)
  File "D:\python\lib\site-packages\pip\_internal\resolve.py", line 103, in resolve
    self._resolve_one(requirement_set, req)
  File "D:\python\lib\site-packages\pip\_internal\resolve.py", line 257, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "D:\python\lib\site-packages\pip\_internal\resolve.py", line 210, in _get_abstract_dist_for
    self.require_hashes
  File "D:\python\lib\site-packages\pip\_internal\operations\prepare.py", line 310, in prepare_linked_requirement
    progress_bar=self.progress_bar
  File "D:\python\lib\site-packages\pip\_internal\download.py", line 837, in unpack_url
    progress_bar=progress_bar
  File "D:\python\lib\site-packages\pip\_internal\download.py", line 674, in unpack_http_url
    progress_bar)
  File "D:\python\lib\site-packages\pip\_internal\download.py", line 898, in _download_http_url
    _download_url(resp, link, content_file, hashes, progress_bar)
  File "D:\python\lib\site-packages\pip\_internal\download.py", line 618, in _download_url
    hashes.check_against_chunks(downloaded_chunks)
  File "D:\python\lib\site-packages\pip\_internal\utils\hashes.py", line 48, in check_against_chunks
    for chunk in chunks:
  File "D:\python\lib\site-packages\pip\_internal\download.py", line 586, in written_chunks
    for chunk in chunks:
  File "D:\python\lib\site-packages\pip\_internal\utils\ui.py", line 159, in iter
    for x in it:
  File "D:\python\lib\site-packages\pip\_internal\download.py", line 575, in resp_read
    decode_content=False):
  File "D:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 436, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "D:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 401, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
  File "D:\python\lib\contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "D:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 307, in _error_catcher
    raise ReadTimeoutError(self._pool, None, 'Read timed out.')
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
```
Errors when writing a crawler with python 3.x
I have been searching for days without solving this; any help appreciated!
```python
import urllib.error
import urllib.request
import urllib.parse

url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=https://www.baidu.com/link HTTP/1.1'
data = {}
data['type'] = 'AUTO'
data['i'] = 'I am fine !'
data['doctype'] = 'json'
data['xmlVersion'] = '1.8'
data['keyfrom'] = 'fanyi.web'
data['ue'] = 'UTF-8'
data['action'] = 'FY_BY_CLICKBUTTON'
data['typoResult'] = 'true'
head = {}
head['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0'
try:
    data = urllib.parse.urlencode(data).encode('utf-8')
    req = urllib.request.Request(url, data, head)
    response = urllib.request.urlopen(req)
    html = response.read().decode('utf-8')
    print(html)
except urllib.error.HTTPError as e:
    print('Error code : ', e.code)
except urllib.error.URLError as e:
    print('The reason: ', e.reason)
```
The exception raised:
![screenshot](https://img-ask.csdn.net/upload/201703/09/1489022080_873412.png)
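One thing that stands out in the code above (an observation, not a confirmed fix): the url string ends with ` HTTP/1.1`, which belongs to the raw request line copied from the browser's dev tools, not to the URL itself. A sketch of cleaning it before building the Request:

```python
url = ('http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule'
       '&smartresult=ugc&sessionFrom=https://www.baidu.com/link HTTP/1.1')
url = url.split(' HTTP/')[0]    # drop the protocol-version fragment
print(url.endswith('/link'))    # True
```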
UnicodeEncodeError while crawling in python3.4.1
In python3.4.1 I get UnicodeEncodeError: 'ascii' codec can't encode characters in position 36-39: ordinal not in range(128). Here is the code:
```python
import urllib.request
import urllib.parse
from bs4 import BeautifulSoup
import re

def main():
    keyword = input("请输入关键词:")
    keyword = urllib.parse.urlencode({"word": keyword})
    response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)
    html = response.read()
    soup = BeautifulSoup(html, "html.parser")
    for each in soup.find_all(href=re.compile("view")):
        content = ''.join([each.text])
        url2 = ''.join(["http://baike.baidu.com", each["href"]])
        response2 = urllib.request.urlopen(url2)
        html2 = response2.read()
        soup2 = BeautifulSoup(html2, "html.parser")
        if soup2.h2:
            content = ''.join([content, soup2.h2.text])
        content = ''.join([content, " -> ", url2])
        print(content)

if __name__ == "__main__":
    main()
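A hedged reading of that error: it typically comes from handing urlopen a URL whose path still contains raw Chinese characters, here the `each["href"]` values pulled from the page. Percent-encoding the path first avoids it (the sample href below is made up):

```python
from urllib.parse import quote

href = '/view/哲学'   # illustrative non-ASCII href from the page
url2 = 'http://baike.baidu.com' + quote(href, safe='/')
print(url2)  # http://baike.baidu.com/view/%E5%93%B2%E5%AD%A6
```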
Cursor problem when connecting with pymysql on Python3.5
On Ubuntu with Python3.5, connecting through pymysql fails at `with connect.cursor() as cursor:` with AttributeError: 'function' object has no attribute 'cursor'. The code is below; clicking the error jumps to that with line. I have been stuck for two days; any help would be appreciated.
```python
# -*- coding:utf-8 -*-
import urllib
import urllib.request
import re
import random
import pymysql.cursors
from pymysql import connect

# content to scrape
user_agent = ["Mozilla/5.0 (Windows NT 10.0; WOW64)",
              'Mozilla/5.0 (Windows NT 6.3; WOW64)',
              'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
              'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
              'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36',
              ]
stock_total = []  # stock_total: stock data from all pages; stock_page: stock data from one page
url = 'http://quote.stockstar.com/stock/ranklist_a_3_1_1.html'
# fake a browser request header
request = urllib.request.Request(url=url, headers={"User-Agent": random.choice(user_agent)})
try:
    response = urllib.request.urlopen(request)
except urllib.error.HTTPError as e:  # error checking
    print(e.code)
except urllib.error.URLError as e:
    print(e.reason)
content = response.read().decode('gbk')  # read the page content
# print the successfully fetched page numbers
pattern = re.compile('<tbody[\s\S]*</tbody>')
body = re.findall(pattern, str(content))
pattern = re.compile('>(.*?)<')
stock_page = re.findall(pattern, body[0])  # regex matching
stock_total.extend(stock_page)
# remove blank entries
stock_last = stock_total[:]  # stock_last is the final stock data
for data in stock_total:
    if data == '':
        stock_last.remove('')
print('1')
db = pymysql.Connect(
    host='localhost',
    user='root',
    passwd='111111',
    db='patest1',
    charset='utf8',
    cursorclass=pymysql.cursors.DictCursor
)
try:
    with connect.cursor() as cursor:
        sql = "insert into pachong values (%s, %s, %s, %s, %s)"
        param = [(stock_last[0]), (stock_last[1]), (stock_last[2]), (stock_last[3]), (stock_last[4]), (stock_last[5])]
        n = cursor.executemany(sql, param)
        connect.commit()
finally:
    print('n')
    connect.close()
```
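A hedged reading of that traceback, demonstrated with a stand-in function rather than a live database: `from pymysql import connect` binds the name `connect` to a function, and the script later calls `connect.cursor()`. The cursor lives on the object that `pymysql.Connect(...)` returned, which the script stored in `db`:

```python
# stand-in for pymysql.connect - NOT the real library, just the same shape
def connect(**kwargs):
    class Conn:
        def cursor(self):
            return 'cursor'
    return Conn()

db = connect(host='localhost')     # the returned connection object
print(hasattr(db, 'cursor'))       # True
print(hasattr(connect, 'cursor'))  # False -> the AttributeError in the post
# Likely fix in the original script: use db.cursor(), db.commit() and
# db.close() instead of connect.cursor() / connect.commit() / connect.close().
```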
Problems packaging with cx_Freeze on Python3.4; not sure how to solve them
![CSDN移动问答][1]

[1]: http://imgsrc.baidu.com/forum/pic/item/3f73612762d0f703bb75eadb0bfa513d2797c556.jpg

Above is what happens when the EXE runs; the code is below:
```python
#! /usr/bin/python
# -*- coding: utf-8 -*-
import urllib.request, urllib.error

try:
    resp = urllib.request.urlopen('http://su.bdimg.com/static/superplus/img/logo_white_ee663702.png')
    html = resp.read()
    spath = "e:/1.png"
    f = open(spath, "wb")  # Opens file for writing. Creates this file if it doesn't exist.
    f.write(html)
    f.close()
    #print(html)
except urllib.error.HTTPError as err:
    print(err.code)
except urllib.error.URLError as err:
    print(err.code)
```
When I package simple input/output scripts with cx_freeze, they run fine.
Python web password brute-force program errors out
Running in a Linux environment. The error:
```
Traceback (most recent call last):
  File "brute.py", line 15, in <module>
    response = urllib2.urlopen(req,timeout=100)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 401, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 419, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1211, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1181, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno -2] Name or service not known>
```
The code:
```python
import urllib2, urllib

lista = ['0','1','2','3','4','5','6','7','8','9']
url = 'http://challenge.honyaedu.com:8886/hou15/10/login.php'
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36'}
for a in lista:
    for b in lista:
        for c in lista:
            for d in lista:
                for e in lista:
                    for f in lista:
                        passw = a+b+c+d+e+f
                        value = {'password': passw, 'Submit': '%E7%A1%AE%E5%AE%9A'}
                        data = urllib.urlencode(value)
                        req = urllib2.Request(url, data, header)
                        response = urllib2.urlopen(req, timeout=100)
                        the_page = response.read()
                        if passw == '000000':
                            page = the_page
                        else:
                            if page != the_page:
                                print passw
                                break
```
Question about Chinese characters in URLs in python3.7
```python
import string
import urllib
import json
import time
from quopri import quote

ISOTIMEFORMAT = '%Y-%m-%d %X'
outputFile = 'douban_movie.txt'
fw = open(outputFile, 'w')
fw.write('id;title;url;cover;rate\n')

headers = {}
headers["Accept"] = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
headers["Accept-Encoding"] = "gzip, deflate, sdch"
headers["Accept-Language"] = "zh-CN,zh;q=0.8,en;q=0.6,zh-TW;q=0.4,ja;q=0.2"
# headers["Cache-Control"] = "max-age=0"
headers["Connection"] = "keep-alive"
# headers["Cookie"] = 'bid="LJSWKkSUfZE"; ll="108296"; __utmt=1; regpop=1; _pk_id.100001.4cf6=32aff4d8271b3f15.1442223906.2.1442237186.1442224653.; _pk_ses.100001.4cf6=*; __utmt_douban=1; __utma=223695111.736177897.1442223906.1442223906.1442236473.2; __utmb=223695111.0.10.1442236473; __utmc=223695111; __utmz=223695111.1442223906.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utma=30149280.674845100.1442223906.1442236473.1442236830.3; __utmb=30149280.4.9.1442237186215; __utmc=30149280; __utmz=30149280.1442236830.3.2.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; ap=1'
headers["Host"] = "movie.douban.com"
headers["Referer"] = "http://movie.douban.com/"
headers["Upgrade-Insecure-Requests"] = 1
headers["User-Agent"] = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36"

# fetch the tags
request = urllib.request.Request(url="http://movie.douban.com/j/search_tags?type=movie")
response = urllib.request.urlopen(request)
tags = json.loads(response.read())['tags']

# start crawling
print("********** START **********")
print(time.strftime(ISOTIMEFORMAT, time.localtime()))
for tag in tags:
    print("Crawl movies with tag: " + tag)
    print(time.strftime(ISOTIMEFORMAT, time.localtime()))
    start = 0
    while True:
        url = "http://movie.douban.com/j/search_subjects?type=movie&tag=" + tag.encode("utf-8") + "&page_limit=20&page_start=" + str(start)
        #url = quote(url, safe=string.printable)
        request = urllib.request.Request(url=url)
        response = urllib.request.urlopen(request)
        movies = json.loads(response.read())['subjects']
        if len(movies) == 0:
            break
        for item in movies:
            rate = item['rate']
            title = item['title']
            url = item['url']
            cover = item['cover']
            movieId = item['id']
            record = str(movieId) + ';' + title + ';' + url + ';' + cover + ';' + str(rate) + '\n'
            fw.write(record.encode('utf-8'))
            print(tag + '\t' + title)
        start = start + 20
fw.close()
```
![screenshot](https://img-ask.csdn.net/upload/201906/02/1559463756_939891.png) ![screenshot](https://img-ask.csdn.net/upload/201906/02/1559463786_165838.png) ![screenshot](https://img-ask.csdn.net/upload/201906/02/1559463796_447639.png) ![screenshot](https://img-ask.csdn.net/upload/201906/02/1559463972_311111.png)
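One hedged observation about the script above: in Python 3, `"..." + tag.encode("utf-8")` concatenates str with bytes and raises TypeError, and `fw.write(record.encode('utf-8'))` writes bytes into a text-mode file. Percent-encoding the tag with `urllib.parse.quote` keeps everything str (the tag value below is illustrative):

```python
from urllib.parse import quote

tag = '热门'   # illustrative tag value
url = ('http://movie.douban.com/j/search_subjects?type=movie&tag='
       + quote(tag) + '&page_limit=20&page_start=0')
print(url)
# and for the output file: fw.write(record) with no .encode('utf-8'),
# since fw was opened in text mode.
```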
Python3.6 + Selenium cannot open the IE browser
[Environment] Python3.6 + Selenium3.0.2 + IE10 + win7
[Problem]
1. The following code cannot open the IE browser, although it can open Firefox:
```python
import unittest
import os
from selenium import webdriver

class TestAutoMethods(unittest.TestCase):
    # open the Firefox browser
    def test_openbrower(self):
        browser = webdriver.Firefox()
        browser.get("http://www.baidu.com")

    def test_FirstVase(self):
        #ie_driver = os.path.abspath(r"C:\Program Files(x86)\Internet Explorer\IEDriverServer.exe")
        #os.environ["webdriver.ie.driver"] = ie_driver
        browser = webdriver.Ie()
        browser.get("http://www.youdao.com")

if __name__ == '__main__':
    unittest.main()
```
2. The error:
```
Error
Traceback (most recent call last):
  File "D:\Users\chenle\PycharmProjects\untitled\test\FirstExam.py", line 14, in test_FirstVase
    browser = webdriver.Ie()
  File "C:\Program Files\Python36\lib\site-packages\selenium\webdriver\ie\webdriver.py", line 57, in __init__
    desired_capabilities=capabilities)
  File "C:\Program Files\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 92, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "C:\Program Files\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 179, in start_session
    response = self.execute(Command.NEW_SESSION, capabilities)
  File "C:\Program Files\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 234, in execute
    response = self.command_executor.execute(driver_command, params)
  File "C:\Program Files\Python36\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 408, in execute
    return self._request(command_info[0], url, body=data)
  File "C:\Program Files\Python36\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 478, in _request
    resp = opener.open(request, timeout=self._timeout)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 526, in open
    response = self._open(req, data)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 544, in _open
    '_open', req)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 1346, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 1321, in do_open
    r = h.getresponse()
  File "C:\Program Files\Python36\lib\http\client.py", line 1331, in getresponse
    response.begin()
  File "C:\Program Files\Python36\lib\http\client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "C:\Program Files\Python36\lib\http\client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
```
python3.5 error: 'module' object is not callable
(1)
```python
import urllib.request
from cons import headers

def getUrlList():
    req = urllib.request('https://mm.taobao.com/tstar/search/tstar_model.do?_input_charset=utf-8')
    req.add_header('user-agent', headers())
    # print (headers())
    html = urllib.urlopen(req, data='q&viewFlag=A&sortType=default&searchStyle=&searchRegion=city%3A&searchFansNum=&currentPage=1&pageSize=100').read()
    print(html)

getUrlList()
```
(2)
```python
import random

headerstr = '''Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)'''

def headers():
    header = headerstr.split('\n')
    length = len(header)
    return header[random.randint(0, length-1)]
```
Running (1) produces the following error:
```
D:\programmingtools\anaconda\python.exe D:/programmingtools/pycharmpro/files/201711112013/taobeauty.py
Traceback (most recent call last):
  File "D:/programmingtools/pycharmpro/files/201711112013/taobeauty.py", line 13, in <module>
    getUrlList()
  File "D:/programmingtools/pycharmpro/files/201711112013/taobeauty.py", line 6, in getUrlList
    req=urllib.request('https://mm.taobao.com/tstar/search/tstar_model.do?_input_charset=utf-8')
TypeError: 'module' object is not callable

Process finished with exit code 1
```
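The message points at `req=urllib.request(...)`: `urllib.request` is a module, not a callable; the class is `urllib.request.Request`, and `urllib.urlopen` is likewise `urllib.request.urlopen` in Python 3, which takes bytes for `data`. A sketch (the shortened header and data values are illustrative):

```python
import urllib.request

url = 'https://mm.taobao.com/tstar/search/tstar_model.do?_input_charset=utf-8'
req = urllib.request.Request(url)            # the class, not the module
req.add_header('user-agent', 'Mozilla/5.0')  # illustrative header value
data = 'q&viewFlag=A&currentPage=1'.encode('utf-8')  # bytes in Python 3
# html = urllib.request.urlopen(req, data=data).read()  # network call
print(type(req).__name__)  # Request
```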
JSONDecodeError when parsing JSON in python3.5
The code is below. The error is json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0). I also tried fetching the page with urllib and parsing it with json.loads; the same error occurs QAQ
```python
import requests

url = 'http://changyan.itc.cn/v2/asset/scsUtil.js?v=20150826191'
page = requests.get(url).json()
```
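A hedged guess: the endpoint is a `.js` file, so the response body is JavaScript rather than JSON, and both requests' `.json()` and `json.loads` fail on the very first character. Inspecting the raw text and extracting only the JSON literal is one way out (the body below is made up to illustrate; the real file's structure may differ):

```python
import json

body = 'var scsUtil = {"v": 1};'   # illustrative JavaScript response body
try:
    data = json.loads(body)        # fails: the body is not pure JSON
except json.JSONDecodeError:
    # extract the embedded object literal and parse just that part
    start, end = body.index('{'), body.rindex('}') + 1
    data = json.loads(body[start:end])
print(data)  # {'v': 1}
```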
python3.5: parse.unquote on a suspected url-encoded string still prints garbage
```python
import re
import requests
import fmt
import json
from urllib import parse

a = """
seatpolicys=x%C2%9C%C3%95%C2%8F%C2%B1j%C3%830%10%C2%86_E%C3%9Cl%C2%95%C2%93%C2%84%C2%9DHo%C2%A3Z%C2%A2%C2%886%C2%8E%C2%A9%C3%A5%C3%81%C2%84%40%20%5B%C2%B7Nq%C2%B24K%C2%A0%5D%3Aw%C3%88%C3%A3%C2%A4.y%C2%8B%C3%8AJp!d%C3%AAR%0A%C2%B7%C3%9Cw%C2%BF%C3%AE%C3%B4%C3%8D%60%24d%3A%025%C2%83r%C3%BA%C3%A0%C3%B2%26%C2%9F%1A%C3%AB%0C%C2%A8%C2%88%13(%C3%B4%C3%84%C2%82%C2%82n%C3%BD~%5Cl%C2%BEv%C2%AF%C2%90%40%C3%A5%C3%B5%C2%A37%C3%9A%C3%B7%C2%9C%23%C2%93%14%C2%B3P%04Q%C3%85%0A%09%5B%C2%98a%C3%8E%C2%91%C2%A2%C2%A4%02%09G%25P%C2%A52%C3%8C%C3%AB%C3%8Az%17%C3%B7%C2%86%07%C2%94%C2%8B%13%C2%BD%C2%AD%C3%83%C3%B1%C2%BA%C3%B0%013%C3%9A%C2%AFqU%C2%A9%C3%B3%7B%7D%17%C2%83%C2%A1%3FwC(%C2%A2fb%0B%C3%AF%C2%9B%C3%92%C2%9E%C2%89q%C3%95%10%40%C2%BC%C3%81%C3%B0%C2%A1y%C3%92Kf%C3%AC%C2%AAd%C3%86.%24%0F%C3%BB%C3%8D%C3%A7sK%C3%A4%C2%B8%7Bj%C3%BF%C2%B1-%C2%BFn%C3%8B%7FlWo%C3%87%C2%8F%C2%97%C3%83%C2%BE%C3%AD%C2%96%5B2N%7Fc%7B%C2%9A3A%C2%A2%C3%94%C3%9F%C3%98%C3%8E%C2%BF%01Cd%C3%93%C2%81 """
b = parse.unquote(a)
print(b)
```
The output:
```
seatpolicys=x聹脮聫卤j脙0聠_E脺l聲聯聞聺Ho拢Z垄聢6聨漏氓脕聞@ [路Nq虏4K聽]:w脠茫陇.y聥脢Jp!d锚R 路脺w驴卯么脥`$d:5聝r煤脿貌&聼毛篓聢(么脛聜聜n媒~\l戮v炉聬@氓玫拢7脷梅聹#聯鲁PQ脜 [聵a脦聭垄陇 G%P楼2脤毛脢z梅聠聰聥陆颅脙帽潞冒3脷炉qU漏贸{}聝隆?wC(垄fb茂聸脪聻聣q脮@录脕冒隆y脪Kf矛陋d脝.$没脥莽sK盲赂{j每卤-驴n脣lWo脟聫聴脙戮铆聳[2Nc{職3A垄脭脽脴脦驴Cd脫聛
```
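Not mojibake to decode further, but a format clue: after unquoting, the payload begins with `x\x9c`, the classic zlib stream header, so the bytes are compressed data rather than text. A round-trip sketch with synthetic data (whether the posted payload is complete enough to decompress is unknown):

```python
import zlib
from urllib.parse import quote, unquote

raw = zlib.compress('seat data'.encode('utf-8'))  # stand-in compressed payload
encoded = quote(raw.decode('latin-1'))            # yields %C2%9C-style escapes
decoded = unquote(encoded).encode('latin-1')      # back to the raw bytes
print(zlib.decompress(decoded).decode('utf-8'))   # seat data
```

The `latin-1` round trip matters: `unquote` alone returns a str, and re-encoding it as latin-1 recovers the original byte values that `zlib.decompress` needs.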
[Help] Writing a Python crawler in pycharm; cannot connect to the local MySQL server
# A self-taught beginner here: while writing a python crawler, I need to connect to my local MySQL server to store the scraped content.

Problem: the program written in pycharm cannot connect to the local MySQL service.

### Environment:
* python3.6
* ide: pycharm
* the local mysql service is running. mysql version: 5.7

## Source code:
```python
# import packages
from bs4 import BeautifulSoup as bs
from urllib.request import urlopen
import re
import pymysql

# get a database connection
connection = pymysql.connect(
    host='localhost',
    user='root',
    password='123456',
    db='baidu',
    charset='utf8mb4'
)
try:
    # get a cursor
    with connection.cursor() as cursor:
        # build the sql statement
        sql = "insert into urls ('urlname','urlhref') values (%s,%s)"
        # run the insert against the urls table
        cursor.execute(sql, ("1", "1"))
        # commit
        connection.commit()
finally:
    connection.close()
```

## The problem
```
Traceback (most recent call last):
  File "C:/Pycharm/pro_2020/百度百科爬虫/craw_url.py", line 12, in <module>
    db='baidu'
  raise exc
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' (timed out)")
```
I cannot connect to the local mysql database. I created a database named "baidu" in a third-party tool and confirmed on the command line that it exists; pycharm just cannot reach the local MySQL.

## Things I tried that failed
```
firewall disabled, connection still fails;
net start mysql, then connecting from the MySQL doc command line, still fails;
host = 127.0.0.1, still fails;
```
Sincerely asking for help here; much appreciated!
ERROR: Exception: Traceback (most recent call last) when installing software on Linux
edx@ubuntu:~$ git clone https://github.com/kan-bayashi/PytorchWaveNetVocoder.git 正克隆到 'PytorchWaveNetVocoder'... remote: Enumerating objects: 330, done. remote: Counting objects: 100% (330/330), done. remote: Compressing objects: 100% (159/159), done. remote: Total 2328 (delta 220), reused 258 (delta 171), pack-reused 1998 接收对象中: 100% (2328/2328), 436.01 KiB | 736.00 KiB/s, 完成. 处理 delta 中: 100% (1436/1436), 完成. edx@ubuntu:~$ cd PytorchWaveNetVocoder/tools edx@ubuntu:~/PytorchWaveNetVocoder/tools$ make test -d venv || virtualenv -p python3.6 venv Running virtualenv with interpreter /usr/bin/python3.6 Using base prefix '/usr' New python executable in /home/edx/PytorchWaveNetVocoder/tools/venv/bin/python3.6 Also creating executable in /home/edx/PytorchWaveNetVocoder/tools/venv/bin/python Installing setuptools, pkg_resources, pip, wheel...done. . venv/bin/activate && pip install --upgrade pip Requirement already up-to-date: pip in ./venv/lib/python3.6/site-packages (19.3.1) . venv/bin/activate && cd ../ && pip install torch==1.0.1 torchvision==0.2.2 Collecting torch==1.0.1 Downloading https://files.pythonhosted.org/packages/f7/92/1ae072a56665e36e81046d5fb8a2f39c7728c25c21df1777486c49b179ae/torch-1.0.1-cp36-cp36m-manylinux1_x86_64.whl (560.0MB) |████████████████████████████████| 560.0MB 98kB/s eta 0:00:01ERROR: Exception: Traceback (most recent call last): File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 153, in _main status = self.run(options, args) File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 382, in run resolver.resolve(requirement_set) File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/legacy_resolve.py", line 201, in resolve self._resolve_one(requirement_set, req) File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/legacy_resolve.py", line 365, in 
```
  ..., in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/legacy_resolve.py", line 313, in _get_abstract_dist_for
    req, self.session, self.finder, self.require_hashes
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 194, in prepare_linked_requirement
    progress_bar=self.progress_bar
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/download.py", line 465, in unpack_url
    progress_bar=progress_bar
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/download.py", line 316, in unpack_http_url
    progress_bar)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/download.py", line 551, in _download_http_url
    _download_url(resp, link, content_file, hashes, progress_bar)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/download.py", line 253, in _download_url
    hashes.check_against_chunks(downloaded_chunks)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/utils/hashes.py", line 80, in check_against_chunks
    for chunk in chunks:
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/download.py", line 223, in written_chunks
    for chunk in chunks:
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/utils/ui.py", line 160, in iter
    for x in it:
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_internal/download.py", line 212, in resp_read
    decode_content=False):
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 564, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 507, in read
    data = self._fp.read(amt) if not fp_closed else b""
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 65, in read
    self._close()
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 52, in _close
    self.__callback(self.__buf.getvalue())
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/cachecontrol/controller.py", line 300, in cache_response
    cache_url, self.serializer.dumps(request, response, body=body)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/cachecontrol/serialize.py", line 72, in dumps
    return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)])
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/__init__.py", line 46, in packb
    return Packer(**kwargs).pack(o)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/fallback.py", line 900, in pack
    self._pack(obj)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/fallback.py", line 891, in _pack
    nest_limit - 1)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/fallback.py", line 985, in _pack_map_pairs
    self._pack(v, nest_limit - 1)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/fallback.py", line 891, in _pack
    nest_limit - 1)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/fallback.py", line 984, in _pack_map_pairs
    self._pack(k, nest_limit - 1)
  File "/home/edx/PytorchWaveNetVocoder/tools/venv/lib/python3.6/site-packages/pip/_vendor/msgpack/fallback.py", line 847, in _pack
    return self._buffer.write(obj)
MemoryError
Makefile:6: recipe for target 'venv/bin/activate' failed
make: *** [venv/bin/activate] Error 2
```
This is the error I get when running it. It's my first time using this project, so I can't make sense of the message; hoping someone can help me work out how to fix it. ![图片说明](https://img-ask.csdn.net/upload/201911/07/1573114791_122624.png) The screenshot shows the Makefile being run with make.
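Reading the bottom of that traceback: the `MemoryError` is raised while pip serializes the just-downloaded file into its on-disk HTTP cache (`cachecontrol` handing the whole body to `msgpack`), which buffers the entire file in memory. On a low-RAM machine a common workaround is to skip that cache step; this is a sketch, and `<package>` is a placeholder for whatever package the failing step was installing:

```sh
# Re-run the failing pip step without the HTTP cache, so the downloaded
# file is never buffered through msgpack (where the MemoryError occurred)
pip install --no-cache-dir <package>

# Or disable pip's cache for the whole session:
export PIP_NO_CACHE_DIR=1
```

If the install is driven by the project's Makefile, the same effect can usually be had by exporting `PIP_NO_CACHE_DIR=1` before running `make`.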
Python crawler error: could someone take a look at what's going wrong? (Python 2.7)
```python
import urllib2
import urllib
import re


class BDTB:
    def __init__(self, baseUrl, see_LZ):
        self.baseURL = baseUrl
        self.seeLZ = '?see_lz=' + str(see_LZ)

    def getPage(self, pageNum):
        try:
            url = self.baseURL + self.seeLZ + '&pn=' + str(pageNum)
            request = urllib2.Request(url)
            response = urllib2.urlopen(request)
            return response
        except urllib2.URLError, e:
            if hasattr(e, "reason"):
                print u"link fail,reason", e.reason
            return None

    def getTitle(self):
        page = self.getPage(1)
        pattern = re.compile('<h3 class="core_title_txt.*?>(.*?)</h3>', re.S)
        result = re.search(pattern, page)
        if result:
            print result.group(1)
            return result.group(1).strip()
        else:
            return None

    def getPageNum(self):
        page = self.getPage(1)
        print page.read()
        pattern = re.compile('<li class="l_reply_num.*?</span>.*?<span.*?>(.*?)</span>', re.S)
        result = re.search(pattern, page)
        if result:
            print result.group(1)
            return result.group(1).strip()
        else:
            return None

    def getContent(self):
        page = self.getPage(1)
        pattern = re.complie('<div id="post_content_.*?>(.*?)</div>', re.S)
        items = re.findall(pattern, page)
        for item in items:
            print item


baseURL = "http://tieba.baidu.com/p/4866982459"
bdtb = BDTB(baseURL, 1)
#bdtb.getPage(1)
#bdtb.getTitle()
#bdtb.getPageNum()
bdtb.getContent()
```

Error when running getTitle():

```
Traceback (most recent call last):
  File "F:\python学习\程序代码\爬虫\123.py", line 51, in <module>
    bdtb.getTitle()
  File "F:\python学习\程序代码\爬虫\123.py", line 23, in getTitle
    result = re.search(pattern,page)
  File "C:\Python27\lib\re.py", line 146, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or buffer
```

Error when running getPageNum():

```
Traceback (most recent call last):
  File "F:\python学习\程序代码\爬虫\123.py", line 52, in <module>
    bdtb.getPageNum()
  File "F:\python学习\程序代码\爬虫\123.py", line 34, in getPageNum
    result = re.search(pattern, page)
  File "C:\Python27\lib\re.py", line 146, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or buffer
```

Error when running getContent():

```
Traceback (most recent call last):
  File "F:\python学习\程序代码\爬虫\123.py", line 53, in <module>
    bdtb.getContent()
  File "F:\python学习\程序代码\爬虫\123.py", line 43, in getContent
    pattern = re.complie('<div id="post_content_.*?>(.*?)</div>',re.S)
AttributeError: 'module' object has no attribute 'complie'
```

I'm really stuck; hoping someone can help!