Scraping HR email addresses from 51job with Python


This article shares working code (Python 2) for crawling HR email addresses from 51job, for your reference. The details are as follows:

#encoding=utf8
# Python 2 code: urllib2 and cookielib became urllib.request and http.cookiejar in Python 3
import urllib2
import cookielib
import re
import lxml.html

# output file for the collected e-mail addresses
f = open('/root/Desktop/51-01.txt','a+')

def read(city):
  # fetch the city landing page with a cookie-aware opener and a browser-like User-Agent
  url = 'http://www.51job.com/'+city
  cj = cookielib.MozillaCookieJar()
  cookie_support = urllib2.HTTPCookieProcessor(cj)
  opener = urllib2.build_opener(cookie_support)
  opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
  urllib2.install_opener(opener)
  response = urllib2.urlopen(url)
  http = response.read()
  # collect links to individual job postings on the city page and crawl each one
  rex = 'http://jobs.51job.com/hot/.*?html'
  value = re.findall(rex, http)
  for i in value:
    print i
    try:
      readpage(i)
    except:
      # skip postings that fail to load or parse
      pass
    
def readpage(url):
  # fetch a single job posting with the same cookie-aware opener
  cj = cookielib.MozillaCookieJar()
  cookie_support = urllib2.HTTPCookieProcessor(cj)
  opener = urllib2.build_opener(cookie_support)
  opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
  urllib2.install_opener(opener)
  html = urllib2.urlopen(url,timeout = 2).read()
  doc = lxml.html.fromstring(html)
  # e-mail pattern, applied only to the text of the posting's description block
  rex = r'[\w\.-]+@(?:[A-Za-z0-9]+\.)+[A-Za-z]+'
  xx = re.compile(rex)
  results = doc.xpath('//div[@class="tmsg inbox"]/div[@class="con_msg"]/div[@class="in"]/p/text()')
  for i in results:
    for j in xx.findall(i):
      print j
      f.write(j+'\n')
      f.flush()
  

if __name__ == '__main__':
  # city slugs as they appear in 51job URLs, e.g. http://www.51job.com/zhanjiang
  city_list = ['zhangjiagang','zhanjiang','zhaoqing','zibo']
  for i in city_list:
    f.write(i+'\n')
    f.flush()
    try:
      read(i)
    except:
      # move on to the next city if this one fails entirely
      pass
  f.flush()
  f.close()

You will have to compile the full city_list yourselves; that is as far as I can take you here. Thanks for reading, and keep following [听图阁-专注于Python设计] for more great content.
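
The code above runs only on Python 2, since urllib2 and cookielib were renamed to urllib.request and http.cookiejar in Python 3. Below is a minimal sketch of the same crawl ported to Python 3. It assumes lxml is installed, that 51job pages are still GBK-encoded, and that the link pattern and XPath carried over from the code above still match the site's current markup; treat it as a starting point rather than a drop-in replacement.

# Minimal Python 3 sketch of the same crawl (assumptions: Python 3, lxml installed,
# GBK-encoded pages, and unchanged 51job markup for the XPath/regex below).
import re
import urllib.request
import http.cookiejar
import lxml.html

EMAIL_RE = re.compile(r'[\w\.-]+@(?:[A-Za-z0-9]+\.)+[A-Za-z]+')
JOB_LINK_RE = re.compile(r'http://jobs\.51job\.com/hot/.*?html')

# cookie-aware opener with a browser-like User-Agent, mirroring the Python 2 version
cj = http.cookiejar.MozillaCookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0')]

def read_city(city, out):
    # fetch the city landing page and crawl every job-posting link found on it
    html = opener.open('http://www.51job.com/' + city).read().decode('gbk', 'ignore')
    for link in JOB_LINK_RE.findall(html):
        try:
            read_page(link, out)
        except Exception:
            pass  # skip postings that fail to load or parse

def read_page(url, out):
    # fetch one posting and extract e-mail addresses from its description block
    html = opener.open(url, timeout=2).read()
    doc = lxml.html.fromstring(html)
    texts = doc.xpath('//div[@class="tmsg inbox"]/div[@class="con_msg"]/div[@class="in"]/p/text()')
    for text in texts:
        for mail in EMAIL_RE.findall(text):
            print(mail)
            out.write(mail + '\n')

if __name__ == '__main__':
    # output file (the original wrote to /root/Desktop/51-01.txt)
    with open('51-01.txt', 'a') as out:
        for city in ['zhangjiagang', 'zhanjiang', 'zhaoqing', 'zibo']:
            out.write(city + '\n')
            try:
                read_city(city, out)
            except Exception:
                pass

If 51job has since moved to HTTPS or changed its page structure, the link regex and the XPath will need to be updated accordingly.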
