Python Crawler Example: Querying China's National List of Dishonest Judgment Debtors


This article walks through a Python crawler that queries China's national list of dishonest judgment debtors (失信被执行人). The example is shared for reference; the details follow.

1. Requirements

Using a Baidu API, implement a lookup against the national list of dishonest judgment debtors: given a person's name, check whether that name appears on the list.
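The endpoint behind this feature is an undocumented Baidu data API. As a quick sanity check before the full scripts, a minimal probe might look like the sketch below; the URL and parameters are taken from the scripts in the next section, and there is no guarantee the API still responds the same way:

# -*- coding: utf-8 -*-
# Minimal probe of the Baidu endpoint (parameters inferred from the
# full scripts below; the API is undocumented and may have changed).
import json
import requests

url = "https://sp0.baidu.com/8aQDcjqpAAV3otqbppnN2DJv/api.php"
params = {
  "resource_id": "6899",        # resource id of the debtor-list lookup
  "query": u"失信被执行人名单",  # literal query string (URL-encoded in the scripts)
  "iname": u"郭**",             # name to search for
  "cardNum": "",
  "areaName": "",
  "pn": "0",                    # page offset (10 records per page)
  "rn": "10",                   # records per page
  "ie": "utf-8",
  "oe": "utf-8",
  "format": "json",
}
resp = requests.get(url, params=params)
print(json.loads(resp.content).get("data"))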

2. Python Implementation

Version 1:

# -*- coding: utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding('utf-8')  # Python 2: default to UTF-8 so the Chinese parameters round-trip
import time
import json
import requests
import pandas as pd
time1=time.time()
iname=[]  # collected names
icard=[]  # collected (masked) ID-card numbers
def person_executed(name):
  # Page through the results, 10 records per page, up to 30 pages.
  for i in range(0,30):
    try:
      url="https://sp0.baidu.com/8aQDcjqpAAV3otqbppnN2DJv/api.php?resource_id=6899" \
      "&query=%E5%A4%B1%E4%BF%A1%E8%A2%AB%E6%89%A7%E8%A1%8C%E4%BA%BA%E5%90%8D%E5%8D%95" \
      "&cardNum=&" \
      "iname="+str(name)+ \
      "&areaName=" \
      "&pn="+str(i*10)+ \
      "&rn=10" \
      "&ie=utf-8&oe=utf-8&format=json"
      html=requests.get(url).content
      html_json=json.loads(html)
      html_data=html_json['data']
      for each in html_data:
        for record in each['result']:
          print record['iname'],record['cardNum']
          iname.append(record['iname'])
          icard.append(record['cardNum'])
    except Exception:
      # Pages past the last result come back without usable data; skip them.
      pass
if __name__ == '__main__':
  name="郭**"
  person_executed(name)
  print len(iname)
  # Assemble the collected fields into a DataFrame
  data=pd.DataFrame({"name":iname,"IDCard":icard})
  # Drop duplicate rows
  data1=data.drop_duplicates()
  print data1
  print len(data1)
  # Write the result out to Excel
  data1.to_excel("F:\\iname_icard_query.xlsx",header=True,encoding='gbk',index=False)
  time2=time.time()
  print u'ok, crawler finished!'
  print u'Total time: '+str(time2-time1)+'s'

3. Sample Run

"D:\Program Files\Python27\python.exe" D:/PycharmProjects/learn2017/全国失信被执行人查询.py
郭** 34122319790****5119
郭** 32032119881****2419
郭** 32032119881****2419
3
                IDCard name
0  34122319790****5119  郭**
1  32032119881****2419  郭**
2
ok, crawler finished!
Total time: 7.72000002861s
Process finished with exit code 0
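Version 2 below extends version 1 by also keeping the court, region, case number, and related fields of each record. Judging from the keys both scripts read, the JSON payload presumably has the following shape (a reconstruction from the field names, not captured output):

# Reconstructed shape of the API response, inferred from the keys the
# scripts read. Field values are placeholders, not real data.
response = {
  "data": [
    {
      "result": [
        {
          "iname": "...",            # name of the person
          "cardNum": "...",          # masked ID-card number
          "courtName": "...",        # court that published the record
          "areaName": "...",         # region
          "caseCode": "...",         # case number
          "duty": "...",             # obligation set by the judgment
          "performance": "...",      # performance status
          "disruptTypeName": "...",  # type of noncompliance
          "publishDate": "...",      # publication date
        },
      ]
    }
  ]
}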

Version 2:

# -*- coding: utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding('utf-8')  # Python 2: default to UTF-8 so the Chinese parameters round-trip
import time
import json
import requests
import pandas as pd
time1=time.time()
# One list per field collected from each record.
iname=[]
icard=[]
courtName=[]
areaName=[]
caseCode=[]
duty=[]
performance=[]
disruptTypeName=[]
publishDate=[]
def person_executed(name):
  # Page through the results, 10 records per page, up to 30 pages.
  for i in range(0,30):
    try:
      url="https://sp0.baidu.com/8aQDcjqpAAV3otqbppnN2DJv/api.php?resource_id=6899" \
      "&query=%E5%A4%B1%E4%BF%A1%E8%A2%AB%E6%89%A7%E8%A1%8C%E4%BA%BA%E5%90%8D%E5%8D%95" \
      "&cardNum=&" \
      "iname="+str(name)+ \
      "&areaName=" \
      "&pn="+str(i*10)+ \
      "&rn=10" \
      "&ie=utf-8&oe=utf-8&format=json"
      html=requests.get(url).content
      html_json=json.loads(html)
      html_data=html_json['data']
      for each in html_data:
        for record in each['result']:
          print record['iname'],record['cardNum'],record['courtName'],record['areaName'],record['caseCode'],record['duty'],record['performance'],record['disruptTypeName'],record['publishDate']
          iname.append(record['iname'])
          icard.append(record['cardNum'])
          courtName.append(record['courtName'])
          areaName.append(record['areaName'])
          caseCode.append(record['caseCode'])
          duty.append(record['duty'])
          performance.append(record['performance'])
          disruptTypeName.append(record['disruptTypeName'])
          publishDate.append(record['publishDate'])
    except Exception:
      # Pages past the last result come back without usable data; skip them.
      pass
if __name__ == '__main__':
  name="郭**"
  person_executed(name)
  print len(iname)
  # Assemble all collected fields into a DataFrame
  detail_data=pd.DataFrame({"name":iname,"IDCard":icard,"courtName":courtName,"areaName":areaName,"caseCode":caseCode,"duty":duty,"performance":performance,\
               "disruptTypeName":disruptTypeName,"publishDate":publishDate})
  # Drop duplicate rows
  detail_data1=detail_data.drop_duplicates()
  # Write the result out to Excel
  detail_data1.to_excel("F:\\iname_icard_query.xlsx",header=True,encoding='gbk',index=False)
  time2=time.time()
  print u'ok, crawler finished!'
  print u'Total time: '+str(time2-time1)+'s'
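Both scripts target Python 2.7 (print statements, the reload(sys)/setdefaultencoding hack). For Python 3, a minimal port of version 1 might look like the following sketch; it is untested against the live API, which may have changed since this article was written:

# -*- coding: utf-8 -*-
# Python 3 sketch of version 1: the setdefaultencoding hack is
# unnecessary, print is a function, and requests builds the query string.
import json
import time

import pandas as pd
import requests

BASE = "https://sp0.baidu.com/8aQDcjqpAAV3otqbppnN2DJv/api.php"

def person_executed(name, max_pages=30):
    rows = []
    for page in range(max_pages):
        params = {
            "resource_id": "6899",
            "query": "失信被执行人名单",
            "iname": name,
            "cardNum": "",
            "areaName": "",
            "pn": page * 10,  # page offset
            "rn": 10,         # records per page
            "ie": "utf-8",
            "oe": "utf-8",
            "format": "json",
        }
        try:
            payload = requests.get(BASE, params=params).json()
            for block in payload["data"]:
                for record in block["result"]:
                    rows.append({"name": record["iname"],
                                 "IDCard": record["cardNum"]})
        except (KeyError, TypeError, ValueError):
            break  # no more pages, or an unexpected payload
    return rows

if __name__ == "__main__":
    start = time.time()
    data = pd.DataFrame(person_executed("郭**")).drop_duplicates()
    print(data)
    data.to_excel("iname_icard_query.xlsx", index=False)
    print("Total time: %.2fs" % (time.time() - start))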


We hope this article is helpful to readers working on Python programming.
