Python Crawler Package BeautifulSoup: Learning Example (5)


This article shares a learning example for the Python crawler package BeautifulSoup; the details are as follows.

BeautifulSoup

Use BeautifulSoup to scrape some information about the movies currently showing on Douban.
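The script below is Python 2 code (it uses urllib2) and asks BeautifulSoup for the lxml parser, so to run it yourself both beautifulsoup4 and lxml need to be installed first, for example with "pip install beautifulsoup4 lxml" (assuming pip is available).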

# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:  2016-12-24 16:18:01
# @Last Modified by:  HaonanWu
# @Last Modified time: 2016-12-24 17:25:33

import urllib2
import json
from bs4 import BeautifulSoup

def nowplaying_movies(url):
  user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
  headers = {'User-Agent':user_agent}
  request = urllib2.Request(url = url, headers = headers)
  response = urllib2.urlopen(request)
  soup_packetpage = BeautifulSoup(response, 'lxml')
  items = soup_packetpage.findAll("li", class_="list-item")
  # items = soup_packetpage.findAll("li", {"class" : "list-item"})  # equivalent way to write the same query
  movies = []
  for item in items:
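    # each <li class="list-item"> on the page exposes the movie info as data-* attributes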
    if item.attrs['data-category'] == 'nowplaying':
      movie = {}
      movie['title'] = item.attrs['data-title']
      movie['score'] = item.attrs['data-score']
      movie['director'] = item.attrs['data-director']
      movie['actors'] = item.attrs['data-actors']
      movies.append(movie)
      print('%(title)s|%(score)s|%(director)s|%(actors)s' % movie)

  return movies



if __name__ == '__main__':
  url = 'https://movie.douban.com/nowplaying/beijing/'
  movies = nowplaying_movies(url)

  print('%s' % json.dumps(movies, sort_keys=True, indent=4, separators=(',', ': ')))
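Note that the script above runs only on Python 2, since the urllib2 module no longer exists in Python 3. A minimal Python 3 sketch of the same scraper (my own adaptation, assuming beautifulsoup4 and lxml are installed and that the Douban page still exposes the same data-* attributes) could look like this:

# -*- coding: utf-8 -*-
# Python 3 sketch of the same scraper (assumes beautifulsoup4 and lxml are installed)
import json
import urllib.request
from bs4 import BeautifulSoup

def nowplaying_movies(url):
  headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
  request = urllib.request.Request(url, headers=headers)
  with urllib.request.urlopen(request) as response:
    soup = BeautifulSoup(response.read(), 'lxml')
  movies = []
  # Same selection logic as above: each <li class="list-item"> carries the
  # movie information in its data-* attributes.
  for item in soup.find_all('li', class_='list-item'):
    if item.get('data-category') == 'nowplaying':
      movies.append({
        'title': item.get('data-title'),
        'score': item.get('data-score'),
        'director': item.get('data-director'),
        'actors': item.get('data-actors'),
      })
  return movies

if __name__ == '__main__':
  url = 'https://movie.douban.com/nowplaying/beijing/'
  movies = nowplaying_movies(url)
  print(json.dumps(movies, ensure_ascii=False, indent=4))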

HTMLParser

Implement the same functionality with HTMLParser.

There are some basic tutorials on HTMLParser available online.

Since HtmlParser has not been updated since 2006, many people now recommend using jsoup as a replacement.
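As a quick illustration of how HTMLParser is used (a minimal sketch of my own, unrelated to Douban): you subclass HTMLParser, override callbacks such as handle_starttag and handle_data, and then push HTML text into the parser with feed().

# Minimal HTMLParser example (Python 2 style, matching the script below;
# in Python 3 the module is html.parser instead of HTMLParser)
from HTMLParser import HTMLParser

class LinkPrinter(HTMLParser):
  def handle_starttag(self, tag, attrs):
    # attrs is a list of (name, value) pairs for the opening tag
    if tag == 'a':
      for name, value in attrs:
        if name == 'href':
          print(value)

if __name__ == '__main__':
  parser = LinkPrinter()
  parser.feed('<p>See <a href="https://movie.douban.com/">douban</a></p>')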

# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:  2016-12-24 15:57:54
# @Last Modified by:  HaonanWu
# @Last Modified time: 2016-12-24 17:03:27
from HTMLParser import HTMLParser
import urllib2
import json

class MovieParser(HTMLParser):
  def __init__(self):
    HTMLParser.__init__(self)
    self.movies = []

  def handle_starttag(self, tag, attrs):
    def _attr(attrlist, attrname):
      for attr in attrlist:
        if attr[0] == attrname:
          return attr[1]
      return None
    if tag == 'li' and _attr(attrs, 'data-title') and _attr(attrs, 'data-category') == 'nowplaying':
      movie = {}
      movie['title'] = _attr(attrs, 'data-title')
      movie['score'] = _attr(attrs, 'data-score')
      movie['director'] = _attr(attrs, 'data-director')
      movie['actors'] = _attr(attrs, 'data-actors')
      self.movies.append(movie)
      print('%(title)s|%(score)s|%(director)s|%(actors)s' % movie)


def nowplaying_movies(url):
  headers = {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'}  
  req = urllib2.Request(url, headers=headers)
  s = urllib2.urlopen(req)
  parser = MovieParser()
  parser.feed(s.read())
  s.close()
  return parser.movies


if __name__ == '__main__':
  url = 'https://movie.douban.com/nowplaying/beijing/'
  movies = nowplaying_movies(url)

  print('%s' % json.dumps(movies, sort_keys=True, indent=4, separators=(',', ': ')))
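Like the first script, this one is Python 2 only; in Python 3 the relevant modules were reorganized. A hedged sketch of the changes that would be needed (assuming the page is UTF-8 encoded):

# Python 3 equivalents of the imports used above
from html.parser import HTMLParser  # Python 2: from HTMLParser import HTMLParser
import urllib.request               # Python 2: import urllib2

# The request calls map almost one-to-one, but read() returns bytes in
# Python 3, so the page has to be decoded before feeding the parser:
#   req = urllib.request.Request(url, headers=headers)
#   html = urllib.request.urlopen(req).read().decode('utf-8')
#   parser.feed(html)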

That is the entire content of this article. I hope it is helpful for everyone's learning, and I also hope everyone will continue to support 【听图阁-专注于Python设计】.
