Capturing browser requests in Python with mitmproxy


I recently needed to write a passive vulnerability scanner. A passive scanner captures the requests the browser sends while you browse and hands them off to the scanner for processing. I originally planned to write the proxy part myself, but since it also has to intercept HTTPS traffic, I eventually settled on mitmproxy.

Installation:

pip install mitmproxy
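The example below is written against the older FlowMaster/controller API, which later mitmproxy releases removed, so the installed version matters. A quick sanity check (a sketch; mitmproxy ships a `version` module, but `pip show mitmproxy` works just as well):

from mitmproxy import version
print(version.VERSION)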

The following example program shows how to use it. Here is the directory layout:

sproxy
|-- utils
|   |-- __init__.py
|   |-- parser.py
|-- sproxy.py

The code for sproxy.py:

#coding=utf-8

from pprint import pprint
from mitmproxy import flow, proxy, controller, options
from mitmproxy.proxy.server import ProxyServer
from utils.parser import ResponseParser

# http static resource file extensions
static_ext = ['js', 'css', 'ico', 'jpg', 'png', 'gif', 'jpeg', 'bmp']
# media resource MIME types
media_types = ['image', 'video', 'audio']

# url filter keywords
url_filter = ['baidu', '360', 'qq.com']

static_files = [
    'text/css',
    'image/jpeg',
    'image/gif',
    'image/png',
]


class WYProxy(flow.FlowMaster):

    def __init__(self, opts, server, state):
        super(WYProxy, self).__init__(opts, server, state)

    def run(self):
        try:
            pprint("proxy started successfully...")
            flow.FlowMaster.run(self)
        except KeyboardInterrupt:
            pprint("Ctrl C - stopping proxy")
            self.shutdown()

    def get_extension(self, flow):
        # extract the file extension from the last path component, if any
        if not flow.request.path_components:
            return ''
        else:
            end_path = flow.request.path_components[-1:][0]
            split_ext = end_path.split('.')
            if not split_ext or len(split_ext) == 1:
                return ''
            else:
                return split_ext[-1:][0][:32]

    def capture_pass(self, flow):
        """Return True if the flow should be skipped: filtered URL,
        static resource extension, or media/static content type."""
        # filter url
        url = flow.request.url
        for i in url_filter:
            if i in url:
                return True

        # if the extension marks a static file, skip the capture
        extension = self.get_extension(flow)
        if extension in static_ext:
            return True

        # can't determine the content type
        content_type = flow.response.headers.get('Content-Type', '').split(';')[:1][0]
        if not content_type:
            return False

        if content_type in static_files:
            return True

        http_mime_type = content_type.split('/')[:1]
        if http_mime_type:
            return True if http_mime_type[0] in media_types else False
        else:
            return False

    @controller.handler
    def request(self, f):
        pass

    @controller.handler
    def response(self, f):
        try:
            if not self.capture_pass(f):
                parser = ResponseParser(f)
                result = parser.parser_data()
                if f.request.method == "GET":
                    print(result['url'])
                elif f.request.method == "POST":
                    print(result['request_content'])  # parameters submitted via POST

        except Exception as e:
            raise e

    @controller.handler
    def error(self, f):
        pass
        # print("error", f)

    @controller.handler
    def log(self, l):
        pass
        # print("log", l.msg)


def start_server(proxy_port, proxy_mode):
    port = int(proxy_port) if proxy_port else 8090
    mode = proxy_mode if proxy_mode else 'regular'

    if proxy_mode == 'http':
        mode = 'regular'

    opts = options.Options(
        listen_port=port,
        mode=mode,
        cadir="~/.mitmproxy/",
    )
    config = proxy.ProxyConfig(opts)
    state = flow.State()
    server = ProxyServer(config)
    m = WYProxy(opts, server, state)
    m.run()


if __name__ == '__main__':
    start_server("8090", "http")
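To try it out, run `python sproxy.py` and point a client at port 8090; for HTTPS pages the browser or client must also trust the CA certificate mitmproxy generates under ~/.mitmproxy (the `cadir` configured above). A minimal smoke test, assuming the third-party `requests` library is installed:

# Send one request through the local proxy; sproxy should print the URL.
import requests

proxies = {
    'http': 'http://127.0.0.1:8090',
    'https': 'http://127.0.0.1:8090',
}
# verify=False because HTTPS responses are re-signed by mitmproxy's CA
r = requests.get('http://httpbin.org/get?a=1', proxies=proxies, verify=False)
print(r.status_code)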

The code for parser.py:

# from __future__ import absolute_import
import re


class ResponseParser(object):
    """Extracts the interesting fields from a captured mitmproxy flow."""

    def __init__(self, f):
        super(ResponseParser, self).__init__()
        self.flow = f

    def parser_data(self):
        result = dict()
        result['url'] = self.flow.request.url
        result['path'] = '/{}'.format('/'.join(self.flow.request.path_components))
        result['host'] = self.flow.request.host
        result['port'] = self.flow.request.port
        result['scheme'] = self.flow.request.scheme
        result['method'] = self.flow.request.method
        result['status_code'] = self.flow.response.status_code
        result['content_length'] = int(self.flow.response.headers.get('Content-Length', 0))
        result['request_header'] = self.parser_header(self.flow.request.headers)
        result['request_content'] = self.flow.request.content
        return result

    @staticmethod
    def parser_multipart(content):
        # flatten a multipart/form-data body into key=value pairs
        if isinstance(content, str):
            res = re.findall(r'name=\"(\w+)\"\r\n\r\n(\w+)', content)
            if res:
                return "&".join([k + '=' + v for k, v in res])
            else:
                return ""
        else:
            return ""

    @staticmethod
    def parser_header(header):
        headers = {}
        for key, value in header.items():
            headers[key] = value
        return headers

    @staticmethod
    def decode_response_text(content):
        # try common encodings in turn; fall back to the raw bytes
        for charset in ['UTF-8', 'GB2312', 'GBK', 'iso-8859-1', 'big5']:
            try:
                return content.decode(charset)
            except Exception:
                continue
        return content
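The two static helpers can be exercised without a live flow. A minimal sketch (the sample body and the byte string below are made up for illustration):

from utils.parser import ResponseParser

# a stripped-down multipart/form-data body with two fields
body = 'name="user"\r\n\r\nadmin\r\nname="pass"\r\n\r\n123456'
print(ResponseParser.parser_multipart(body))  # user=admin&pass=123456

# decode_response_text tries UTF-8 first, then falls back to GB2312/GBK etc.
print(ResponseParser.decode_response_text(b'\xd6\xd0\xce\xc4'))  # GBK bytes for "中文"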

Reference:

https://github.com/ring04h/wyproxy

That is the whole walkthrough of capturing browser requests in Python with mitmproxy. I hope it serves as a useful reference.
