CSDNRGY 2020-09-15 18:04 · acceptance rate: 88.6%
1004 views
Accepted

Why does selenium + browsermob-proxy capture an incomplete set of requests?

Stack: selenium + browsermob-proxy
selenium drives the page and reads its elements
browsermob-proxy records the request information

Scenario 1: open Chrome manually and go to localhost:8082. The Network tab shows 23 requests, including the business request I need.
[screenshot]

Scenario 2: open localhost:8082 through the selenium + browsermob-proxy program. Only 6 requests are captured; the rest are missing. Why is that?
[screenshot]

My code

from browsermobproxy import Server
from selenium import webdriver
import os
from urllib import parse
from time import sleep

server = Server(r'/Users/renguanyu/app/browsermob-proxy/2.1.4/bin/browsermob-proxy')
server.start()
proxy = server.create_proxy()

chromedriver = "/usr/local/bin/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
url = parse.urlparse(proxy.proxy).path
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument("--proxy-server={0}".format(url))
driver = webdriver.Chrome(chromedriver, chrome_options=chrome_options)
driver.implicitly_wait(60)
proxy.new_har("http://localhost:8082/", options={'captureHeaders': True, 'captureContent': True})
driver.get("http://localhost:8082/")
sleep(3)

# walk the HAR and collect (url, status) for every captured request
result = proxy.har
entries = result["log"]["entries"]
request_list = []
for entry in entries:
    request_url = entry["request"]["url"]
    status = entry["response"]["status"]
    request_list.append({
        "url": request_url,
        "status": status
    })

print("request_list")
for item in request_list:
    print(item)
print("request_list_length", len(request_list))

proxy.close()
driver.quit()
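The HAR-walking loop above can be factored into a small pure helper, which also makes it easy to inspect the captured entries without a browser. This is a sketch; the sample HAR below is fabricated for illustration and is not taken from the question:

```python
def har_to_request_list(har):
    """Collect {'url', 'status'} dicts from a BrowserMob-style HAR dict."""
    return [
        {"url": e["request"]["url"], "status": e["response"]["status"]}
        for e in har["log"]["entries"]
    ]

# illustrative fragment in the shape BrowserMob produces via proxy.har
sample_har = {
    "log": {
        "entries": [
            {"request": {"url": "http://localhost:8082/"},
             "response": {"status": 200}},
            {"request": {"url": "http://localhost:8082/app.js"},
             "response": {"status": 304}},
        ]
    }
}

print(har_to_request_list(sample_har))
```

With the real proxy you would call `har_to_request_list(proxy.har)` after the page has loaded.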

4 answers

  • 星光不问赶路人~ 2020-09-15 19:31

    from browsermobproxy import Server
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    import time

    # raw string so backslashes in the Windows path are not treated as escapes
    server = Server(r"D:\apk\lyl\browsermob-proxy-2.1.4\bin\browsermob-proxy.bat")
    server.start()
    proxy = server.create_proxy()

    chrome_options = Options()
    chrome_options.add_argument('--proxy-server={0}'.format(proxy.proxy))

    driver = webdriver.Chrome(chrome_options=chrome_options)
    # URL to visit
    base_url = "http://www.abc.com"
    proxy.new_har("ht_list2", options={'captureContent': True})

    driver.get(base_url)
    # pause a few seconds here to let the page finish loading,
    # otherwise you may get no results
    time.sleep(3)
    result = proxy.har

    for entry in result['log']['entries']:
        _url = entry['request']['url']
        print(_url)
        # locate the data API by URL; here we look for the
        # http://git.liuyanlin.cn/get_ht_list endpoint
        if "http://git.liuyanlin.cn/get_ht_list" in _url:
            _response = entry['response']
            _content = _response['content']
            # print the API response content
            print(_response)

    server.stop()
    driver.quit()
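    A fixed `time.sleep(3)` is a guess: XHRs that fire late can still be in flight when `proxy.har` is read, which is one plausible reason entries go missing. An alternative (a sketch, not part of the accepted answer) is to poll the HAR until the entry count stops growing:

    ```python
    import time

    def wait_for_har_stable(get_har, settle_rounds=3, interval=0.5, timeout=30):
        """Poll a HAR-returning callable until the entry count has been
        unchanged for `settle_rounds` consecutive polls (or the timeout
        expires), then return the last HAR seen."""
        deadline = time.time() + timeout
        last_count, stable, har = -1, 0, None
        while time.time() < deadline:
            har = get_har()
            count = len(har["log"]["entries"])
            stable = stable + 1 if count == last_count else 0
            if stable >= settle_rounds:
                break
            last_count = count
            time.sleep(interval)
        return har

    # usage with browsermob-proxy would be:
    #     har = wait_for_har_stable(lambda: proxy.har)
    ```

    This trades a fixed delay for a bounded wait that adapts to however long the page keeps issuing requests.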

    This answer was accepted by the asker as the best answer.
