Syn_Wll
2022-09-23 09:44
Acceptance rate: 78.6%
Views: 47

Python web scraper error: Max retries exceeded with url

Problem description and background

In the tutorial video this exact code runs successfully, but when I try to run it myself it fails.

import requests
from bs4 import BeautifulSoup as bs

# Load the webpage content
r = requests.get("https://keithgalli.github.io/web-scraping/example.html")

# Convert to a BeautifulSoup object (specifying the parser avoids a GuessedAtParserWarning)
soup = bs(r.content, "html.parser")

# Print out our html
print(soup)

Runtime output and error message

HTTPSConnectionPool(host='keithgalli.github.io', port=443): Max retries exceeded with url: /web-scraping/example.html (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000017C25B28AF0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Desired result

The code runs successfully.
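The `[Errno 11001] getaddrinfo failed` part of the traceback means DNS resolution failed before any HTTP request was sent, so this is usually a network, proxy, or firewall issue on the local machine rather than a bug in the scraping code. A minimal sketch of one common workaround, assuming the network is only intermittently flaky: configure `requests` to retry transient connection errors and catch `ConnectionError` cleanly (the retry counts and timeout below are illustrative choices, not values from the original post):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(retries=3, backoff=0.5):
    """Build a requests Session that retries transient connection failures."""
    session = requests.Session()
    retry = Retry(
        total=retries,
        backoff_factor=backoff,          # wait 0.5s, 1s, 2s, ... between attempts
        status_forcelist=[500, 502, 503, 504],
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

def fetch(url):
    # A ConnectionError surviving the retries usually means DNS, proxy,
    # or firewall trouble, not a problem with the scraping code itself.
    try:
        return make_session().get(url, timeout=10)
    except requests.exceptions.ConnectionError as e:
        print("Connection failed (check network/DNS/proxy settings):", e)
        return None
```

If the error persists even with retries, checking whether the host resolves at all (`ping keithgalli.github.io`, or disabling any system proxy/VPN) is the next step, since `getaddrinfo failed` happens before `requests` ever reaches the server.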

