About a BeautifulSoup crawler in Python

I've just started learning web scraping and I'm trying to crawl the new-home listings on Fang.com (房天下). Every other field comes out fine, but I just can't get the price. Could someone take a look?

import requests
from bs4 import BeautifulSoup
import time
import csv

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'}
articles = []
for i in range(0,2):
    # build the search-result URL for each page
    link = 'http://newhouse.xian.fang.com/house/s/a77-b91/?ctm=1.xian.xf_search.page.' + str(i)
    r = requests.get(link, headers=headers)
    r.encoding = 'gb2312'
    soup = BeautifulSoup(r.text, 'lxml')
    fang_list = soup.find_all('div', class_='nlc_details')

    for fang in fang_list:
        xiaoqvming = fang.find('div', class_='nlcd_name').a.text.strip()
        huxing = fang.find('div', class_='house_type clearfix').text.strip()
        qvyv = fang.find('span', class_='sngrey').text.strip()
        address = fang.find('div', class_='address').text.strip()
        zhuangtai = fang.find('span', class_='inSale').text.strip()
        tags = fang.find('div', class_=['fangyuan','pr']).a.text.strip()
        price = fang.find('div', class_='nhouse_price').span.get_text()  # this is the line that fails
        articles.append([xiaoqvming, huxing, qvyv, address, zhuangtai, tags, price])
        print(xiaoqvming, huxing, qvyv, address, zhuangtai, tags, price)

![screenshot](https://img-ask.csdn.net/upload/201711/27/1511760270_282359.png)
![screenshot](https://img-ask.csdn.net/upload/201711/27/1511760360_844086.png)
![screenshot](https://img-ask.csdn.net/upload/201711/27/1511760297_185914.png)
![screenshot](https://img-ask.csdn.net/upload/201711/27/1511760383_771431.png)
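Only the price line fails; the other fields resolve. One way to narrow it down is to dump the raw `nhouse_price` block for every card and spot the listing whose block has no `<span>` inside it. A minimal debugging sketch, assuming `soup` is built exactly as in the code above:

```python
# Debugging sketch: print the raw price block of each card so the odd one stands out.
# Assumes `soup` was created as in the snippet above (requests + BeautifulSoup, 'lxml').
for fang in soup.find_all('div', class_='nlc_details'):
    name_div = fang.find('div', class_='nlcd_name')
    name = name_div.a.text.strip() if name_div and name_div.a else '?'
    price_div = fang.find('div', class_='nhouse_price')
    print(name, '->', price_div)  # None, or a Tag; a Tag without a <span> breaks .span.get_text()
```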

1 answer

.span[0].get_text()
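Presumably this means taking the first `<span>` inside the price div via `find_all` rather than the `.span` attribute shortcut (as literally written, indexing a Tag with `[0]` does an attribute lookup, not list indexing). A sketch of that idea, with the selector names taken from the question and the fallback behaviour being an assumption:

```python
def extract_price(fang):
    """Return the price text of one listing card, or the div's plain text if no <span> exists."""
    price_div = fang.find('div', class_='nhouse_price')
    if price_div is None:                  # some cards may not carry a price block at all
        return ''
    spans = price_div.find_all('span')
    if spans:                              # typical case: the number sits in the first <span>
        return spans[0].get_text().strip()
    return price_div.get_text().strip()    # e.g. a "价格待定" (price pending) card

# usage inside the card loop:
# price = extract_price(fang)
```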
