Example of crawling torrent downloads from web pages with urllib2 in Python


Grabbing torrents with the urllib2 and re modules.

Approach

1. Log in to the forum programmatically (only needed for boards that require login; a minimal login sketch follows this list).

2. Visit the target board.

3. Iterate over the threads (fetch a given list page, then collect the URLs of all threads on it).

4. Visit each thread URL in turn and extract the torrent download address from the thread's page source, either with regular expressions or with a third-party HTML parser (a parser-based sketch follows the full script below).

5. Visit the torrent page and download the .torrent file.
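
Step 1 is only needed when the board requires a logged-in session; the full script below skips it and fetches pages anonymously. A minimal sketch of that step with cookielib and urllib2 could look like the following. The login URL, form field names and the "logout" success check are hypothetical placeholders, not the real forum's:

import urllib
import urllib2
import cookielib

# keep the session cookie so that later urllib2.urlopen() calls stay logged in
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)

# the login URL and form field names below are assumed placeholders
loginData = urllib.urlencode({'username': 'xxx', 'password': 'yyy'})
resp = opener.open("http://xxx.yyy.zzz/login.php", loginData)
if "logout" in resp.read():  # crude check that a session was established
 print "login ok"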

The full script is as follows:

import urllib
import urllib2
import cookielib  # only needed if the board requires a logged-in session
import re
import sys
import os

# site is the forum address | fid is the board (forum) id
site = "http://xxx.yyy.zzz/"
source = "thread0806.php?fid=x&search=&page="

btSave = "./clyzwm/"
if os.path.isdir(btSave):
 print btSave + " already exists"
else:
 os.mkdir(btSave)

logfile = "./clyzwm/down.log"
errorfile = "./clyzwm/error.log"
sucfile = "./clyzwm/success.log"

headers = {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36', 
           'Referer' : 'http://xxx.yyy.zzz/'}

def btDown(url, dirPath):
 # extract the torrent id from the link-page URL, POST it to the download
 # site and save the returned .torrent file under dirPath
 logger(logfile, "download file : " + url)
 try:
  #pageCode = urllib2.urlopen(url).read()
  #print pageCode
  btStep1 = re.findall('http://[\w]+\.[\w]+\.[\w]{0,4}/[\w]{2,6}\.php\?[\w]{2,6}=([\w]+)', url, re.I)
  #print btStep1
  if len(btStep1)>0:
   ref = btStep1[0]
   downsite = ""
   downData = {}
   if len(ref)>20:
    downsite = re.findall('http://www.[\w]+\.[\w]+/', url)[0]
    downsite = downsite + "download.php"
    reff = re.findall('input\stype=\"hidden\"\sname=\"reff\"\svalue=\"([\w=]+)\"', urllib2.urlopen(url).read(), re.I)[0]
    downData = {'ref': ref, 'reff':reff, 'submit':'download'}
   else:
    downsite = "http://www.downhh.com/download.php"
    downData = {'ref': ref, 'rulesubmit':'download'}
   #print "bt site - " +  downsite + "\n downData:"
   #print downData
   downData = urllib.urlencode(downData)
   downReq = urllib2.Request(downsite, downData)
   downReq.add_header('User-Agent','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36')
   downPost = urllib2.urlopen(downReq)
   stream = downPost.read(-1)
   # responses much smaller than a real torrent file are treated as failures
   if (len(stream) > 1000):
    downPost.close()
    name = btStep1[0]+ ".torrent"
    fw = open(dirPath + name, 'wb')  # binary mode so the torrent data is not mangled
    fw.write(stream)
    fw.close()
    logger(sucfile, url+"\n")
   else:
    logger(errorfile, url+"\n")
 except urllib2.URLError, e:
  print e.reason

def logger(logfile, msg):
 print msg
 fw = open(logfile, 'a')
 fw.write(msg)
 fw.close()

# walk the board's thread-list pages one by one
for i in range(1, 1000):
 logger(logfile, "\n\n\n@ page " + str(i) + " ...")
 part = site + source + str(i)

 content = urllib2.urlopen(part).read()
 content = content.decode('gbk').encode('utf8')  # the board pages are GBK-encoded; convert to UTF-8
 #print content

 pages = re.findall('<a\s+href=\"(htm_data/[\d]+/[\d]+/[\d]+\.html).*?<\/a>', content,re.I)
 #print pages

 for page in pages:
  page = site + page  # turn the relative thread link into an absolute URL
  #logger(logfile, "\n# visiting " + page + " ...")
  pageCode = urllib2.urlopen(page).read()
  #print pageCode
  # look for the viidii.info redirect link that points at the torrent link page
  zzJump = re.findall('http://www.viidii.info/\?http://[\w]+/[\w]+\?[\w]{2,6}=[\w]+' ,pageCode)
  #zzJump = re.findall('http://www.viidii.info/\?http://[\w/\?=]*', pageCode)
  if len(zzJump) > 0:
   zzJump = zzJump[0]
   #print "- jump page - " + zzJump
   # fetch the jump page and look for the torrent link page inside it
   pageCode = urllib2.urlopen(zzJump).read()
   zzPage = re.findall('http://[\w]+\.[\w]+\.[\w]+/link[\w]?\.php\?[\w]{2,6}=[\w]+' ,pageCode)
   if len(zzPage) > 0:
    zzPage = zzPage[0]
    logger(logfile, "\n- zhongzi page -" + zzPage)
    btDown(zzPage, btSave)
   else:
    logger(logfile, "\n. NOT FOUND .")
  else:
   logger(logfile, "\n... NOT FOUND ...")
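
Step 4 notes that a third-party HTML parser can be used instead of regular expressions. As an illustration, the thread-list scan (the <a href="htm_data/..."> regex above) could be rewritten with BeautifulSoup. This is a minimal sketch under the assumption that the bs4 package is installed; the script above uses only re:

# hedged alternative to the thread-list regex, assuming bs4 is installed
from bs4 import BeautifulSoup

def listThreads(content):
 # collect the relative htm_data/... thread links from a board list page
 soup = BeautifulSoup(content, 'html.parser')
 urls = []
 for a in soup.find_all('a', href=True):
  if a['href'].startswith('htm_data/'):
   urls.append(site + a['href'])
 return urls

Each URL returned this way can then be fetched and scanned for the torrent link page exactly as in the loop above.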
