Python code
```python
# -*- coding: utf-8 -*-
# @Time    : 2020/9/23 0023 14:40
# @Author  : Chiser
# @File    : IVI测试.py
# @Software: PyCharm
import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36'
}
page_url = 'http://ivi.bupt.edu.cn'
response = requests.get(page_url, headers=headers)
response.encoding = 'utf-8'  # the page is UTF-8; declare it to avoid mojibake

pagehtml = etree.HTML(response.text)
channels = pagehtml.xpath('//div/div[@style="margin-top:50px"]/div')

with open('直播链.m3u', 'w', encoding='utf-8') as f:
    f.write('#EXTM3U\n')  # playlist header goes at the top, once
    for item in channels:
        # xpath() returns a list; join it into a string, then strip whitespace
        title = ''.join(''.join(item.xpath('./p/text()')).split())
        url = ''.join(''.join(item.xpath('./a[2]/@href')).split())
        f.write(f'#EXTINF:-1,{title}\n{page_url}{url}\n')
```
Notes
I hit quite a few problems scraping http://ivi.bupt.edu.cn/. At first I used BeautifulSoup4 with regular expressions, but the extracted m3u8 addresses for the programmes came out incomplete, so I dropped BeautifulSoup4 altogether and switched to parsing with lxml.
The first problem with lxml was garbled (mojibake) output when printing. After some searching I learned that the text decoded from the requests response has to be declared as UTF-8:

```python
response.encoding = 'utf-8'
```
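The effect of this fix can be reproduced offline by building a `Response` by hand (the content here is made up, and setting `_content` directly is just a test trick, not normal API usage): with the wrong codec the text is garbled, and correcting `encoding` repairs it, because `Response.text` re-decodes the same bytes on every access.

```python
import requests

# Simulate a server that sends UTF-8 bytes without a charset header
resp = requests.models.Response()
resp._content = '北京邮电大学'.encode('utf-8')  # hypothetical page content

resp.encoding = 'ISO-8859-1'   # requests' fallback guess -> mojibake
garbled = resp.text

resp.encoding = 'utf-8'        # the fix from the note above
fixed = resp.text              # now decodes correctly
```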
Another approach is to skip the decoded text entirely and hand lxml the raw bytes via `.content`. Note that `requests.get(...).content` returns `bytes`, not a `Response`, so the bytes should be parsed directly rather than assigned back to `response`:

```python
pagehtml = etree.HTML(requests.get('http://ivi.bupt.edu.cn', headers=headers).content)
```
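This works because when lxml is given bytes, it performs encoding detection itself (e.g. from the page's `<meta charset>`), so no manual `response.encoding` step is needed. A minimal offline sketch with a stand-in page:

```python
from lxml import etree

# Hypothetical page: UTF-8 bytes with the charset declared in a meta tag
html_bytes = ('<html><head><meta charset="utf-8"></head>'
              '<body><p>节目单</p></body></html>').encode('utf-8')

tree = etree.HTML(html_bytes)   # bytes in: lxml handles decoding itself
texts = tree.xpath('//p/text()')
```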
Next came a data-extraction problem: the printed results were wrapped in brackets, like `['数据']`. After looking it up, the reason is that `xpath()` always returns a list; `''.join(...)` flattens it into a plain string, and splitting on whitespace and re-joining then strips stray spaces and newlines.
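The list-vs-string behaviour is easy to reproduce in isolation (the channel name below is made up). One side effect worth knowing: `split()` drops *all* whitespace, including spaces inside the title, which is exactly what the script above does too.

```python
from lxml import etree

tree = etree.HTML('<div><p>  CCTV-1 综合  </p></div>')
raw = tree.xpath('//p/text()')     # always a list, e.g. ['  CCTV-1 综合  ']
title = ''.join(raw)               # flatten the one-element list into a string
title = ''.join(title.split())    # remove all whitespace, inner spaces included
```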