pymongo raises "cannot pass logical session id" when connecting to MongoDB

My company recently started using MongoDB, and I'm trying to connect to it with Python's pymongo,
but every read or write against the database fails with:
"**pymongo.errors.WriteError: cannot pass logical session id unless fully upgraded to featureCompatibilityVersion 3.6.**"

The pymongo version is 3.9.0, the Python version is 3.7, and the server's MongoDB version is 3.6.3.

Robo 3T can operate on the database without problems. I've tried a few fixes myself, but none worked, and I still have no idea where to look next.

Here is the code block. Thanks!

```
from pymongo import MongoClient

# connect to the remote MongoDB server
client = MongoClient('10.10.112.11', 30010)

db = client.test
collection = db.test

student = {'name': 'Jordan'}

result = collection.insert_one(student)
```

1 Answer

Upgrade the MongoDB version on the server, or use an older version of pymongo.

best_wangwq
best_wangwq: Indeed, switching to pymongo 2.6 solved the problem.
about a month ago · Reply
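For reference: this error typically means the mongod binary is 3.6 but the server's featureCompatibilityVersion is still "3.4", while pymongo 3.6+ sends logical session ids by default. Besides pinning an older driver (e.g. `pip install "pymongo<3.6"`), a minimal sketch of checking and raising the FCV from pymongo, assuming you have admin rights on the server from the question:

```
from pymongo import MongoClient

client = MongoClient('10.10.112.11', 30010)

# check the current featureCompatibilityVersion (likely still "3.4" here)
fcv = client.admin.command({'getParameter': 1, 'featureCompatibilityVersion': 1})
print(fcv['featureCompatibilityVersion'])

# raise it to 3.6 so drivers that send logical session ids are accepted
client.admin.command('setFeatureCompatibilityVersion', '3.6')
```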
Other related questions
Problem querying MongoDB from Python with find()
In the mongo shell find() works, but in Python find() only returns what looks like an address. How do I get it to return the data? In the mongo shell everything is normal:
```
> db.posts.find()
{ "_id" : ObjectId("5587bf580e3c5241da958200"), "text" : "my first blog post", "tags" : [ "mongodb", "python", "pymongo" ], "author" : "jim" }
{ "_id" : ObjectId("5587c04d0e3c5241da958201"), "text" : "my second posts", "author" : "mike" }
```
In Python, find_one() works but find() does not:
```
>>> import pymongo
>>> from pymongo import MongoClient
>>> client = MongoClient()
>>> client = MongoClient('localhost',27017)
>>> db = client.testdel
>>> mycol = db.mycol
>>> mycol.find_one()
{u'description': u'MongoDB is no sql database', u'tags': [u'mongodb', u'database', u'NoSQL'], u'url': u'http://www.yiibai.com', u'title': u'MongoDB Overview', u'likes': 100.0, u'_id': ObjectId('5584b5a183a7c7ad13947748'), u'by': u'tutorials point'}
>>> for post in mycol.find()
  File "<stdin>", line 1
    for post in mycol.find()
                           ^
SyntaxError: invalid syntax
>>> for post in db.mycol.find()
  File "<stdin>", line 1
    for post in db.mycol.find()
                              ^
SyntaxError: invalid syntax
>>> mycol.find()
<pymongo.cursor.Cursor object at 0xb6d7bc4c>
```
I've included the errors as well, hopefully without making things too confusing. Why does find_one() work while find() does not? Any guidance would be appreciated.
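find() returns a lazy Cursor rather than the documents themselves, and the SyntaxError above is just the missing colon on the for statement. A minimal sketch of iterating the cursor, reusing the testdel/mycol names from the question:

```
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
mycol = client.testdel.mycol

# a Cursor yields documents as you iterate it (note the trailing colon)
for post in mycol.find():
    print(post)

# or materialize everything at once
posts = list(mycol.find())
```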
How to tell in pymongo whether a record matching a query was found
```
# *-* coding: utf-8 *-*
#!/usr/bin/python
import pymongo
import time

conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.test  # connect to the test database
num = db.posts.count({'text': '赵云'})
print num
```
I want to count the records whose text field is 赵云. The command db.posts.count({'text':'赵云'}) runs fine in the mongo shell, but the Python script raises the following error: TypeError: count() takes exactly 1 argument (2 given). Could someone take a look? Many thanks.
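In that old driver, Collection.count() takes no filter; the filter belongs on the cursor, and current pymongo has count_documents() for exactly this. A hedged sketch assuming a modern MongoClient setup:

```
from pymongo import MongoClient

db = MongoClient("127.0.0.1", 27017).test

# older drivers: put the filter on find() and count the cursor
num = db.posts.find({'text': '赵云'}).count()

# pymongo >= 3.7: pass the filter straight to count_documents()
num = db.posts.count_documents({'text': '赵云'})
print(num)
```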
PyCharm fails to connect to the MongoDB server
PyCharm and MongoDB were both installed following guides found online. To connect them I installed pymongo with pip, and pip list in cmd confirms pymongo is installed, so why can't PyCharm connect to MongoDB? The error shown is: [MongoPlugin] Error when connecting to 666: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: connect}}]
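The "Connection refused" at the bottom means nothing is listening on localhost:27017, so this is independent of pymongo (the PyCharm plugin uses its own Java driver). A quick sanity check from Python, assuming the default host and port, is to fail fast with serverSelectionTimeoutMS:

```
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# fail after 2 s instead of the default 30 s if no mongod is listening
client = MongoClient('localhost', 27017, serverSelectionTimeoutMS=2000)
try:
    print(client.server_info()['version'])  # forces a real round trip
except ServerSelectionTimeoutError as exc:
    print('mongod is not reachable:', exc)
```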
Error connecting to MongoDB from Python: module has no attribute 'Connection'
I'm on Ubuntu with Python 2.7 in a virtualenv. Inside that environment I installed MongoDB (version 2.4.9) and pymongo (version 3.0.2). To connect to the database I wrote:
```
from mongoengine import *
import pymongo

connection = pymongo.Connection('localhost', 27017)
# or the following, which gives the same error
from pymongo import Connection()
```
Both report AttributeError: 'module' object has no attribute 'Connection'. What is the problem? Thanks in advance.
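The Connection class was removed in pymongo 3.0; MongoClient is its replacement. A minimal sketch:

```
from pymongo import MongoClient

# Connection was removed in pymongo 3.x; MongoClient replaces it
client = MongoClient('localhost', 27017)
db = client.test
print(db.name)
```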
How do I do a fuzzy (regex-style) query on a numeric field with pymongo?
My current Python snippet:
```
from pymongo import MongoClient

conn = MongoClient('192.168.4.166', 27017)
db = conn.pinduoduo
yxyjDB = db.goodsYxyj
data = list(yxyjDB.find({'sales': {r'$regex': '^[1-9]\d*$'}}))
print(data)
```
The data in the database:
```
{
    "_id" : ObjectId("5bcedc7d3b541dbc309fc2fa"),
    "goods_id" : NumberLong(2332781120),
    "goods_type_id" : 1,
    "goods_name" : "test",
    "sales" : 3784,
    "sign_code" : "test",
    "link_url" : "test"
}
```
How can pymongo do a fuzzy match against a numeric MongoDB field? Regular expressions work fine for string fields, but no matter how I configure them they never match numeric fields.
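$regex only ever matches string values, so it silently skips numbers. Two common workarounds are storing a string copy of the field, or converting on the server with $where (which runs JavaScript per document and is slow, but works on MongoDB 3.x). A hedged sketch of the latter, with "^37" as an arbitrary example pattern:

```
from pymongo import MongoClient

coll = MongoClient('192.168.4.166', 27017).pinduoduo.goodsYxyj

# convert the numeric field to a string inside a $where JavaScript test
data = list(coll.find({'$where': '/^37/.test(String(this.sales))'}))
print(data)
```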
pymongo find() feels very slow when fetching data
Fetching data with pymongo's find() feels very slow: `for each in db.find(projection={"_id": 1, "title": 1, "core": 1, "content": 1}):`. There is no filter, only a projection, yet a test over 50,000 documents took more than 300 seconds. That speed can't be right. Is my pymongo usage wrong, or does mongo have a faster way to run find()?
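300 s for 50k documents usually points at the payload (a large content field) or the network rather than find() itself, so it is worth measuring with and without that field and with a larger batch size. A hedged sketch, where mydb/mycoll are placeholder names:

```
from pymongo import MongoClient
import time

coll = MongoClient('localhost', 27017).mydb.mycoll  # placeholder names

start = time.time()
cursor = coll.find({}, projection={"_id": 1, "title": 1, "core": 1},
                   batch_size=1000)  # fewer round trips; compare with/without "content"
count = sum(1 for _ in cursor)
print(count, "docs in", time.time() - start, "s")
```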
Python errors when using the pymongo module
I get an error when using the pymongo module, and as a result MongoDB won't connect. Please help, many thanks! The error is shown here: ![screenshot](https://img-ask.csdn.net/upload/201710/17/1508215097_443426.png)
How do I update or delete elements inside an embedded array with pymongo?
With pymongo, I have two documents like this:
```
{"a": "AAA", "b": [{"b11": "b11value", "b12": "b12value"}, {"b21": "b21value", "b22": "b22value"}]}
{"a": "BBB", "b": [{"b11": "b11value", "b12": "b12value"}, {"b21": "b21value", "b22": "b22value"}]}
```
First, in the document where a is "AAA" and b contains an element whose b11 is "b11value", I want to update that b11 to "b11111111value". The result should be:
```
{"a": "AAA", "b": [{"b11": "b11111111value", "b12": "b12value"}, {"b21": "b21value", "b22": "b22value"}]}
{"a": "BBB", "b": [{"b11": "b11value", "b12": "b12value"}, {"b21": "b21value", "b22": "b22value"}]}
```
Second, in the document where a is "AAA" and b contains an element whose b11 is "b11value", I want to delete that element, giving:
```
{"a": "AAA", "b": [{"b21": "b21value", "b22": "b22value"}]}
{"a": "BBB", "b": [{"b11": "b11value", "b12": "b12value"}, {"b21": "b21value", "b22": "b22value"}]}
```
One is an update and one is a deletion: how do I write these two operations, and what would the corresponding query statements look like? Thanks.
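A sketch of both operations with pymongo's update_one, assuming a collection handle named coll: the positional $ operator rewrites the matched array element, and $pull removes it.

```
from pymongo import MongoClient

coll = MongoClient('localhost', 27017).test.mycoll  # placeholder collection

# update: "$" refers to the array element matched by the filter ("b.b11")
coll.update_one(
    {'a': 'AAA', 'b.b11': 'b11value'},
    {'$set': {'b.$.b11': 'b11111111value'}},
)

# delete: $pull removes every element of b whose b11 equals "b11value"
coll.update_one(
    {'a': 'AAA'},
    {'$pull': {'b': {'b11': 'b11value'}}},
)
```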
How to use PyMongo with Flask blueprints
```
from flask import Flask
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config.update(
    MONGO_HOST='localhost',
    MONGO_PORT=27017,
    MONGO_USERNAME='bjhee',
    MONGO_PASSWORD='111111',
    MONGO_DBNAME='flask'
)
mongo = PyMongo(app)
```
This is how it is written directly in the entry file. Now I have introduced a blueprint, home.py:
```
from flask import Blueprint
import mysql.connector, logging
from flask_pymongo import PyMongo
import app

home = Blueprint('home', __name__, url_prefix='/home')
mongo = PyMongo(app)  # how should this be written? app does not exist here

@home.route('/test')
def move_tickets_to_mongo():
    res = mongo.db.acct_data_logs.find({})
```
This raises an error because app cannot be found.
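The usual pattern is the extension's init_app() two-step: create a module-level PyMongo() without an app, import that object inside the blueprint, and bind it to the app once in the entry file. A sketch under those assumptions (the extensions.py module name is made up, and the MONGO_URI style is flask-pymongo 2.x; older versions keep the MONGO_HOST/MONGO_PORT keys from the question):

```
# extensions.py
from flask_pymongo import PyMongo

mongo = PyMongo()  # not bound to any app yet

# home.py
from flask import Blueprint
from extensions import mongo

home = Blueprint('home', __name__, url_prefix='/home')

@home.route('/test')
def move_tickets_to_mongo():
    return str(mongo.db.acct_data_logs.count_documents({}))

# app.py (entry file)
from flask import Flask
from extensions import mongo
from home import home

app = Flask(__name__)
app.config['MONGO_URI'] = 'mongodb://bjhee:111111@localhost:27017/flask'
mongo.init_app(app)  # bind the shared PyMongo object here
app.register_blueprint(home)
```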
PyCharm connects to the MongoDB database but cannot visually browse its contents
1. MongoDB is installed and configured successfully. 2. The connection is configured in PyCharm and works: I can see the database and the size of what is stored in it, **but I cannot visually browse the actual contents of the database**. ![screenshot](https://img-ask.csdn.net/upload/201905/22/1558492699_944239.jpg) ![screenshot](https://img-ask.csdn.net/upload/201905/22/1558492770_713157.jpg)
```
import requests
import json
from bs4 import BeautifulSoup
import re
import csv
import time
import pymongo

# create the connection
client = pymongo.MongoClient('localhost', 27017)
# create a database named weather
book_weather = client['weather']
# create a collection named sheet_weather_3 in weather
sheet_weather = book_weather['sheet_weather_3']

csv_file = csv.reader(open('china-city-list.csv', 'r', encoding='UTF-8'))
print(csv_file)
data1 = list(csv_file)
# date1 = data1.split("\r")
for i in range(3):
    data1.remove(data1[0])
for item in data1[:10]:
    print(item[0:11])
    url = 'https://free-api.heweather.net/s6/weather/now?location=' + item[0] + '&key=756d9a91c4f74734847eab104367c984'
    print(url)
    strhtml = requests.get(url)
    strhtml.encoding = 'utf8'
    time.sleep(1)
    dic = strhtml.json()
    # print(strhtml.text)
    sheet_weather.insert_one(dic)
```
Can Python + Django use both the MongoDB and MySQL database engines at the same time?
Hello everyone. In a previous project the tables had no relations, so I used the non-relational database MongoDB. The stack was Python + Django deployed on Ubuntu, with pymongo connecting Python to MongoDB and mongoengine connecting Django to MongoDB. Now a new requirement introduces related tables, so I plan to add a MySQL database. When Django renders pages, can it show data from MongoDB tables and MySQL tables at the same time? How should the databases be configured in settings.py? Any pointers would be much appreciated.
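Yes: the usual arrangement is to keep MySQL under Django's DATABASES (queried through the ORM) and let mongoengine hold its own, separate connection, so a single view can pull from both and pass the results to one template. A hedged settings.py sketch, with every name and credential a placeholder:

```
# settings.py (placeholders throughout)
DATABASES = {
    'default': {                      # relational tables via the Django ORM
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'root',
        'PASSWORD': 'secret',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}

# MongoDB is not listed in DATABASES; mongoengine manages it separately
import mongoengine
mongoengine.connect(db='mydb_mongo', host='127.0.0.1', port=27017)
```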
Why can pymongo (and other modules) be imported from the command line but not from the web server?
The Apache error log shows: File "D:\\AppServ\\www\\XYingPY\\db\\MongoDB.py", line 1, in <module>\r [Wed Dec 17 10:17:13 2014] [error] [client 127.0.0.1] import pymongo\r [Wed Dec 17 10:17:13 2014] [error] [client 127.0.0.1] ImportError: No module named pymongo\r The import simply fails there, even though it works from the command line, and the versions should be fine.
pymongo export writes far fewer documents than the query reports?
I'm exporting data with pymongo, but the number of matching documents and the number actually written in the loop differ hugely. I've never run into this before and can't find anything about it online. Any help appreciated!
```python
doc = {"$and": [
    {"del": "0"},
    {"$or": [
        {"updatetime": {"$gt": update_time}},
        {"score": {"$nin": ["A", "B", "C", "D", "E", "F"]}}
    ]}
]}
ret = mongo_table.find(doc, {"@id": 1})
# ret.count() reports 180,000+ documents
if not ret or ret.count() < 1:
    return
off = 0
for item in ret:
    off += 1
# off only reaches 7388, why?
print off
```
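cursor.count() asks the server how many documents match the filter, while the loop counts what the cursor actually delivers before it stops, so a big gap usually means the cursor died part-way (for example the default idle timeout during a slow export) or the data changed underneath it. A hedged way to compare the two numbers while keeping the cursor alive, reusing the question's mongo_table and doc:

```
# assumes mongo_table and doc are defined as in the question
print(mongo_table.count_documents(doc))      # server-side match count (pymongo >= 3.7)

off = 0
cursor = mongo_table.find(doc, {"@id": 1}, no_cursor_timeout=True)
try:
    for item in cursor:
        off += 1
finally:
    cursor.close()                           # always release no-timeout cursors
print(off)                                   # documents the loop actually received
```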
pymongo.errors.AutoReconnect: [WinError 10053]
Using pymongo I get this error. I've deleted mongod.lock and restarted MongoDB several times, to no effect, and the server's firewall is disabled. What could be causing it? The Windows message says an established connection was aborted by the software on the host machine.
[Help] Why are MongoDB inserts slower than MySQL?
MySQL version: 5.7.13. MongoDB version: 3.2. OS: Windows Server 2008 R2. RAM: 8 GB. Python 2.7.11. I'm a MongoDB newbie; I wrote a looped-insert test script in Python for each database, inserting 300,000 records.
MongoDB:
```
from pymongo import MongoClient
import time

def get_db():
    # create the connection
    client = MongoClient("localhost", 27017)
    # the test database (there are other ways to write this)
    db = client.test
    print "建立MongoDB数据库连接"
    return db

def get_collection(db):
    # choose the collection
    collection = db['test']
    print "连接数据库:test"
    return collection

def insert(collection):
    i = 0
    f = open("phonenumbers.txt")
    f1 = open("result_mongo.txt", "w")  # returns a file object
    line = f.readline()  # call the file's readline() method
    # print line,
    start = time.clock()
    while line:
        user = {"name": "%s" % (line.strip('\n'))}
        collection.insert(user)
        line = f.readline()
        i = i + 1
        if i % 30000 == 0:
            end = time.clock()
            print "%f: %f s" % (i, end - start)
            f1.write("%f条记录用时:%f s \n" % (i, end - start))
    print "%f: %f s" % (i, end - start)
    print 'task over'
    f.close()
    f1.close()

db = get_db()
collection = get_collection(db)
insert(collection)
```
MySQL:
```
# MySQL
import MySQLdb
import time

conn = MySQLdb.connect(host='localhost', port=3306, user='root', passwd='root', db='test', charset='utf8')
cursor = conn.cursor()
print 'connect Mysql success!'
i = 0
f = open("phonenumbers.txt")
f1 = open("result.txt", "w")  # returns a file object
line = f.readline()  # call the file's readline() method
# print line,
start = time.clock()
while line:
    # print line.strip('\n')
    sql_content = """insert into t_phone(phone_number) values('%s')""" % (line.strip('\n'))
    # print sql_content
    cursor.execute(sql_content.decode('utf8').encode('gb18030'))
    # print(line, end='')  # Python 3 form
    i = i + 1
    if i % 30000 == 0:
        end = time.clock()
        print "%f: %f s" % (i, end - start)
        f1.write("%f条记录用时:%f s \n" % (i, end - start))
    line = f.readline()
print 'task over'
f.close()
f1.close()
conn.commit()
cursor.close()
conn.close()
```
The timings:
```
MySQL
30000.000000: 5.953894 s
60000.000000: 11.355339 s
90000.000000: 16.826594 s
120000.000000: 22.311345 s
150000.000000: 27.833271 s
180000.000000: 33.445231 s
210000.000000: 38.899494 s
240000.000000: 44.386738 s
270000.000000: 49.829280 s
300000.000000: 55.298867 s
MongoDB
30000.000000: 17.713415 s
60000.000000: 35.223699 s
90000.000000: 52.518638 s
120000.000000: 69.901784 s
150000.000000: 87.370721 s
180000.000000: 105.004178 s
210000.000000: 122.643773 s
240000.000000: 140.226097 s
270000.000000: 157.490818 s
300000.000000: 175.007099 s
```
What is going on here?
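One plausible reason for the gap: the MongoDB loop issues one acknowledged round trip per document via insert(), while the MySQL run buffers everything until the single commit() at the end. Batching the MongoDB side is the usual first fix; a hedged sketch using insert_many (Python 3 syntax):

```
from pymongo import MongoClient

collection = MongoClient("localhost", 27017).test.test

batch, batch_size = [], 10000
with open("phonenumbers.txt") as f:
    for line in f:
        batch.append({"name": line.strip("\n")})
        if len(batch) >= batch_size:
            collection.insert_many(batch)  # one round trip per 10,000 docs
            batch = []
if batch:
    collection.insert_many(batch)          # flush the remainder
```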
pip install pymongo keeps failing
报错代码如下: C:\Windows\system32>pip install pymongo Collecting pymongo Using cached pymongo-3.4.0.tar.gz Building wheels for collected packages: pymongo Running setup.py bdist_wheel for pymongo ... error Failed building wheel for pymongo Running setup.py clean for pymongo Failed cleaning build dir for pymongo Failed to build pymongo Installing collected packages: pymongo Running setup.py install for pymongo ... error Exception: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pip\compat\__init__.py", line 73, in console_to_str return s.decode(sys.__stdout__.encoding) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa1 in position 42: invalid start byte During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pip\commands\install.py", lin e 342, in run prefix=options.prefix_path, File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_set.py", line 784 , in install **kwargs File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_install.py", line 878, in install spinner=spinner, File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\__init__.py", line 676, in call_subprocess line = console_to_str(proc.stdout.readline()) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\compat\__init__.py", line 75, in console_to_str return s.decode('utf_8') UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa1 in position 42: invalid start byte During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pip\commands\install.py", lin e 385, in run requirement_set.cleanup_files() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_set.py", line 729 , in cleanup_files req.remove_temporary_source() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_install.py", line 977, in remove_temporary_source rmtree(self.source_dir) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 49, in wrapped_f return Retrying(*dargs, **dkw).call(f, *args, **kw) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 212, in call raise attempt.get() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 247, in get six.reraise(self.value[0], self.value[1], self.value[2]) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\six.py", line 686 , in reraise raise value File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 200, in call attempt = Attempt(fn(*args, **kwargs), attempt_number, False) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\__init__.py", line 102, in rmtree onerror=rmtree_errorhandler) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 488, in rmtree return _rmtree_unsafe(path, onerror) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 387, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\__init__.py", line 114, in rmtree_errorhandler func(path) PermissionError: [WinError 32] 另一个程序正在使用此文件,进程无法访问。: 'C:\\Us ers\\Aaron\\AppData\\Local\\Temp\\pip-build-v6n4yggt\\pymongo' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pip\basecommand.py", line 215 , in main status = self.run(options, args) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\commands\install.py", lin e 
385, in run requirement_set.cleanup_files() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\build.py", line 38, in __exit__ self.cleanup() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\build.py", line 42, in cleanup rmtree(self.name) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 49, in wrapped_f return Retrying(*dargs, **dkw).call(f, *args, **kw) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 212, in call raise attempt.get() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 247, in get six.reraise(self.value[0], self.value[1], self.value[2]) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\six.py", line 686 , in reraise raise value File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_vendor\retrying.py", lin e 200, in call attempt = Attempt(fn(*args, **kwargs), attempt_number, False) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\__init__.py", line 102, in rmtree onerror=rmtree_errorhandler) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 488, in rmtree return _rmtree_unsafe(path, onerror) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 378, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 387, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils\__init__.py", line 114, in rmtree_errorhandler func(path) PermissionError: [WinError 32] 另一个程序正在使用此文件,进程无法访问。: 'C:\\Us ers\\Aaron\\AppData\\Local\\Temp\\pip-build-v6n4yggt\\pymongo'
After Django reads a Chinese image path stored via pymongo, the image does not show on the web page
Here is the situation. My environment is Python 2.7 + mongoengine + pymongo + django + Ubuntu 14. Python processes some images whose paths contain Chinese characters and stores those paths in MongoDB via pymongo; when saving, the path is converted to UTF-8. Django then connects to the database through mongoengine and reads the image path back. I'd like to ask the following questions, many thanks.

1. When walking these image paths I use os.walk(), called like this:
```
 1 def visitDir_walk(path , sFileSuffix):
 2     codedetect = chardet.detect(path)["encoding"]
 3     path = unicode(path , codedetect).encode("gbk")
 4     fileNames = list()
 5     for root,dirs,files in os.walk(path):
 6         for i in range(0 , len(files)):
 7             file = files[i]
 8             if file.endswith(sFileSuffix):
 9                 sFileName = os.path.join(root , file)
10                 codedetect = chardet.detect(sRealFileName)["encoding"]
11                 sRealFileName = unicode(sRealFileName , "gbk").encode("gbk")
12                 fileNames.append( sRealFileName )
13     return fileNames
```
I found that if the path passed to os.walk is a unicode Chinese path, the walk raises an error. Why is that? And if I change line 11 to sRealFileName = unicode(sRealFileName , "gbk"), it also errors.

2. According to the chardet module, the path read back is a unicode object whose detected encoding is ascii, which is strange, because when storing it:
```
picture = {}
picture["path"] = unicode(picPath, "gbk").encode("utf-8")
```
the path was already converted to a UTF-8 str, yet what comes back from the database is an ascii-detected unicode path.

3. When the retrieved image path is used on the Django page, the image does not display. I suspect this is related to the Chinese characters in the path, but even after converting the retrieved unicode path to utf-8 or gbk before displaying it, the image still doesn't show; leaving the path untouched doesn't help either.

I'd really appreciate answers to these three questions. Chinese paths have tormented me for a month and I still haven't solved this. Thanks in advance, everyone.
A Baidu Muzhi Doctor crawler: I want to first scrape all the links for a given question, but nothing comes out. Could someone take a look at why?
#写在前面的话 在这个爬虫里我想实现把百度拇指医生里关于“咳嗽”的链接全部爬取下来,下一步要进行的是把爬取到的每个链接里的items里面的内容爬取下来,但是我在第一步就卡住了,求各位大神帮我看一下吧。之前刚刚发了一篇问答,但是不知道怎么回事儿,现在找不到了,(貌似是被删了...?)救救小白吧!感激不尽! 这个是我的爬虫的结构 ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574787999_274479.png) ##ks: ``` # -*- coding: utf-8 -*- import scrapy from kesou.items import KesouItem from scrapy.selector import Selector from scrapy.spiders import Spider from scrapy.http import Request ,FormRequest import pymongo class KsSpider(scrapy.Spider): name = 'ks' allowed_domains = ['kesou,baidu.com'] start_urls = ['https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M'] def parse(self, response): item = KesouItem() contents = response.xpath('.//h3[@class="t"]') for content in contents: url = content.xpath('.//a/@href').extract()[0] item['url'] = url yield item if self.offset < 760: self.offset += 10 yield scrapy.Request(url = "https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=" + str(self.offset) + "&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M",callback=self.parse,dont_filter=True) ``` ##items: ``` # -*- coding: utf-8 -*- # Define here the models for your scraped items # # See documentation in: # https://docs.scrapy.org/en/latest/topics/items.html import scrapy class KesouItem(scrapy.Item): # 问题ID question_ID = scrapy.Field() # 问题描述 question = scrapy.Field() # 医生回答发表时间 answer_pubtime = scrapy.Field() # 问题详情 description = scrapy.Field() # 医生姓名 doctor_name = scrapy.Field() # 医生职位 doctor_title = scrapy.Field() # 医生所在医院 hospital = scrapy.Field() ``` ##middlewares: ``` # -*- coding: utf-8 -*- # Define here the models for your spider middleware # # See documentation in: # https://docs.scrapy.org/en/latest/topics/spider-middleware.html from scrapy import signals class KesouSpiderMiddleware(object): # Not all methods need to be defined. If a method is not defined, # scrapy acts as if the spider middleware does not modify the # passed objects. @classmethod def from_crawler(cls, crawler): # This method is used by Scrapy to create your spiders. s = cls() crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) return s def process_spider_input(self, response, spider): # Called for each response that goes through the spider # middleware and into the spider. # Should return None or raise an exception. return None def process_spider_output(self, response, result, spider): # Called with the results returned from the Spider, after # it has processed the response. # Must return an iterable of Request, dict or Item objects. for i in result: yield i def process_spider_exception(self, response, exception, spider): # Called when a spider or process_spider_input() method # (from other spider middleware) raises an exception. # Should return either None or an iterable of Request, dict # or Item objects. pass def process_start_requests(self, start_requests, spider): # Called with the start requests of the spider, and works # similarly to the process_spider_output() method, except # that it doesn’t have a response associated. # Must return only requests (not items). for r in start_requests: yield r def spider_opened(self, spider): spider.logger.info('Spider opened: %s' % spider.name) class KesouDownloaderMiddleware(object): # Not all methods need to be defined. 
If a method is not defined, # scrapy acts as if the downloader middleware does not modify the # passed objects. @classmethod def from_crawler(cls, crawler): # This method is used by Scrapy to create your spiders. s = cls() crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) return s def process_request(self, request, spider): # Called for each request that goes through the downloader # middleware. # Must either: # - return None: continue processing this request # - or return a Response object # - or return a Request object # - or raise IgnoreRequest: process_exception() methods of # installed downloader middleware will be called return None def process_response(self, request, response, spider): # Called with the response returned from the downloader. # Must either; # - return a Response object # - return a Request object # - or raise IgnoreRequest return response def process_exception(self, request, exception, spider): # Called when a download handler or a process_request() # (from other downloader middleware) raises an exception. # Must either: # - return None: continue processing this exception # - return a Response object: stops process_exception() chain # - return a Request object: stops process_exception() chain pass def spider_opened(self, spider): spider.logger.info('Spider opened: %s' % spider.name) ``` ##piplines: ``` # -*- coding: utf-8 -*- # Define your item pipelines here # # Don't forget to add your pipeline to the ITEM_PIPELINES setting # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html import pymongo from scrapy.utils.project import get_project_settings settings = get_project_settings() class KesouPipeline(object): def __init__(self): host = settings["MONGODB_HOST"] port = settings["MONGODB_PORT"] dbname = settings["MONGODB_DBNAME"] sheetname= settings["MONGODB_SHEETNAME"] # 创建MONGODB数据库链接 client = pymongo.MongoClient(host = host, port = port) # 指定数据库 mydb = client[dbname] # 存放数据的数据库表名 self.sheet = mydb[sheetname] def process_item(self, item, spider): data = dict(item) self.sheet.insert(data) return item ``` ##settings: ``` # -*- coding: utf-8 -*- # Scrapy settings for kesou project # # For simplicity, this file contains only settings considered important or # commonly used. 
You can find more settings consulting the documentation: # # https://docs.scrapy.org/en/latest/topics/settings.html # https://docs.scrapy.org/en/latest/topics/downloader-middleware.html # https://docs.scrapy.org/en/latest/topics/spider-middleware.html BOT_NAME = 'kesou' SPIDER_MODULES = ['kesou.spiders'] NEWSPIDER_MODULE = 'kesou.spiders' # Crawl responsibly by identifying yourself (and your website) on the user-agent #USER_AGENT = 'kesou (+http://www.yourdomain.com)' # Obey robots.txt rules ROBOTSTXT_OBEY = False # Configure maximum concurrent requests performed by Scrapy (default: 16) #CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0) # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay # See also autothrottle settings and docs #DOWNLOAD_DELAY = 3 # The download delay setting will honor only one of: #CONCURRENT_REQUESTS_PER_DOMAIN = 16 #CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default) COOKIES_ENABLED = False # Disable Telnet Console (enabled by default) #TELNETCONSOLE_ENABLED = False USER_AGENT="Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67.0" # Override the default request headers: #DEFAULT_REQUEST_HEADERS = { # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', # 'Accept-Language': 'en', #} # Enable or disable spider middlewares # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html #SPIDER_MIDDLEWARES = { # 'kesou.middlewares.KesouSpiderMiddleware': 543, #} # Enable or disable downloader middlewares # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html #DOWNLOADER_MIDDLEWARES = { # 'kesou.middlewares.KesouDownloaderMiddleware': 543, #} # Enable or disable extensions # See https://docs.scrapy.org/en/latest/topics/extensions.html #EXTENSIONS = { # 'scrapy.extensions.telnet.TelnetConsole': None, #} # Configure item pipelines # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html ITEM_PIPELINES = { 'kesou.pipelines.KesouPipeline': 300, } # MONGODB 主机名 MONGODB_HOST = "127.0.0.1" # MONGODB 端口号 MONGODB_PORT = 27017 # 数据库名称 MONGODB_DBNAME = "ks" # 存放数据的表名称 MONGODB_SHEETNAME = "ks_urls" # Enable and configure the AutoThrottle extension (disabled by default) # See https://docs.scrapy.org/en/latest/topics/autothrottle.html #AUTOTHROTTLE_ENABLED = True # The initial download delay #AUTOTHROTTLE_START_DELAY = 5 # The maximum download delay to be set in case of high latencies #AUTOTHROTTLE_MAX_DELAY = 60 # The average number of requests Scrapy should be sending in parallel to # each remote server #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 # Enable showing throttling stats for every response received: #AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default) # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings #HTTPCACHE_ENABLED = True #HTTPCACHE_EXPIRATION_SECS = 0 #HTTPCACHE_DIR = 'httpcache' #HTTPCACHE_IGNORE_HTTP_CODES = [] #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage' ``` ##run.py: ``` # -*- coding: utf-8 -*- from scrapy import cmdline cmdline.execute("scrapy crawl ks".split()) ``` ##这个是运行出来的结果: ``` PS D:\scrapy_project\kesou> scrapy crawl ks 2019-11-27 00:14:17 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: kesou) 2019-11-27 00:14:17 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twis.7.0, Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC 
v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryphy 2.6.1, Platform Windows-10-10.0.18362-SP0 2019-11-27 00:14:17 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'kesou', 'COOKIES_ENABLED': False, 'NEWSPIDER_MODULE': 'spiders', 'SPIDER_MODULES': ['kesou.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67 2019-11-27 00:14:17 [scrapy.extensions.telnet] INFO: Telnet Password: 051629c46f34abdf 2019-11-27 00:14:17 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled item pipelines: ['kesou.pipelines.KesouPipeline'] 2019-11-27 00:14:19 [scrapy.core.engine] INFO: Spider opened 2019-11-27 00:14:19 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2019-11-27 00:14:19 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2019-11-27 00:14:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX% (referer: None) 2019-11-27 00:14:20 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFFSYX%2B1M> (referer: None) Traceback (most recent call last): File "d:\anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback yield next(it) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output for x in result: File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr> return (_set_referer(r) for r in result or ()) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr> return (r for r in result 
or () if _filter(r)) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr> return (r for r in result or () if _filter(r)) File "D:\scrapy_project\kesou\kesou\spiders\ks.py", line 19, in parse item['url'] = url File "d:\anaconda3\lib\site-packages\scrapy\item.py", line 73, in __setitem__ (self.__class__.__name__, key)) KeyError: 'KesouItem does not support field: url' 2019-11-27 00:14:20 [scrapy.core.engine] INFO: Closing spider (finished) 2019-11-27 00:14:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 438, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 68368, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.992207, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 11, 26, 16, 14, 20, 855804), 'log_count/DEBUG': 1, 2019-11-27 00:14:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 438, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 68368, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.992207, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 11, 26, 16, 14, 20, 855804), 'log_count/DEBUG': 1, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'response_received_count': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/KeyError': 1, 'start_time': datetime.datetime(2019, 11, 26, 16, 14, 19, 863597)} 2019-11-27 00:14:21 [scrapy.core.engine] INFO: Spider closed (finished) ```
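The traceback at the end pinpoints the problem: KeyError: 'KesouItem does not support field: url' means the spider assigns item['url'] but KesouItem never declares a url field (and parse() also reads self.offset before it is ever defined). A hedged sketch of the item fix:

```
import scrapy

# items.py: KesouItem must declare every field the spider assigns,
# including the missing "url" field that caused the KeyError
class KesouItem(scrapy.Item):
    url = scrapy.Field()
    question_ID = scrapy.Field()
    question = scrapy.Field()
    answer_pubtime = scrapy.Field()
    description = scrapy.Field()
    doctor_name = scrapy.Field()
    doctor_title = scrapy.Field()
    hospital = scrapy.Field()

# ks.py additionally needs "offset = 0" as a class attribute (or set in
# __init__), because parse() reads self.offset before ever assigning it
```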
The field order of documents returned by pymongo find() is scrambled
> For example, the original doc is: { "_id" : 3, "name" : "sss", "age" : 17 }
> but printing the result of find() gives {"age" : 17, "_id" : 3, "name" : "sss" }.
> I need to find the documents, modify a few things, and save them into another collection.
> How can I make find() return them in the original field order? Much appreciated!
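Older pymongo decodes documents into plain dicts, which (before Python 3.7) do not preserve insertion order. You can ask the driver to decode into an order-preserving type instead, such as collections.OrderedDict, via the document_class option. A minimal sketch, with placeholder database and collection names:

```
from collections import OrderedDict
from pymongo import MongoClient

# decode every document into an OrderedDict so the server's field order survives
client = MongoClient('localhost', 27017, document_class=OrderedDict)
coll = client.test.mycoll  # placeholder names

doc = coll.find_one({'_id': 3})
print(doc)  # fields come back as _id, name, age
coll.database.other_coll.insert_one(doc)  # order is kept when re-inserting
```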