How to scrape classical Chinese poetry with Python and store it in a MySQL database


I use regular expressions to extract the data and the requests library to fetch the pages; the full code is below. When inserting into the database I hit the error ERROR 1054 (42S22): Unknown column 'title' in 'field list'. The cause was a mistake in my SQL statement:

sql = "insert into poem(title,author,content,create_time) values({},{},{},{})".format(title, author, content, create_time)

It should be written as:

sql = "insert into poem(title,author,content,create_time) values('{}','{}','{}','{}')".format(title, author, content, create_time)

That is, wrap each inserted value in quotes. Without the quotes, MySQL parses the formatted values as column names, which is what triggers the "unknown column" error.
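A safer alternative to formatting values into the string yourself, not used in the code below, is to pass them to pymysql as query parameters and let the driver handle quoting and escaping. A minimal sketch, assuming the same poem table and connection settings as in the article:

# Sketch of a parameterized insert with pymysql; table and connection
# settings are taken from the article, the sample values are made up.
import datetime
import pymysql

conn = pymysql.Connect(host="localhost", port=3306, user="root", password='mysql',
                       database='poem_data', charset="utf8")
cursor = conn.cursor()

sql = "insert into poem(title, author, content, create_time) values(%s, %s, %s, %s)"
create_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
# pymysql quotes and escapes each parameter itself, so quotes inside a poem
# title or body cannot break the statement and no manual '{}' quoting is needed.
cursor.execute(sql, ("静夜思", "李白.唐代", "床前明月光,疑是地上霜。", create_time))
conn.commit()

cursor.close()
conn.close()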

import datetime
import re

import pymysql
import requests

url = "https://www.gushiwen.org/"
headers = {
    'User-Agent': "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50"}


class Spiderpoem(object):
    # One MySQL connection and cursor shared by the whole class
    conn = pymysql.Connect(host="localhost", port=3306, user="root", password='mysql',
                           database='poem_data', charset="utf8")
    cs1 = conn.cursor()

    def get_requests(self, url, headers=None):
        """Send the request and return the page text on success."""
        resp = requests.get(url, headers=headers)
        if resp.status_code == 200:
            # print(resp.request.headers)
            return resp.text
        return None

    def get_parse(self, response):
        """Parse the page with regular expressions and yield one poem dict at a time."""
        re_data = {
            "title": r'<div\sclass="sons">.*?<b>(.*?)</b>.*?</div>',
            "author": r'<p>.*?class="source">.*?<a.*?>(.*?)</a>.*?<a.*?>(.*?)</a>.*?</p>',
            "content": r'<div\sclass="contson".*?>(.*?)</div>'
        }
        titles = self.reg_con(re_data["title"], response)
        authors = self.reg_con(re_data["author"], response)
        poems_list = self.reg_con(re_data["content"], response)
        contents = list()
        for item in poems_list:
            # Strip the remaining HTML tags and whitespace from the poem body
            ite = re.sub(r'<.*?>|\s', "", item)
            contents.append(ite.strip())
        for value in zip(titles, authors, contents):
            title, author, content = value
            # The author pattern has two capture groups; join them with a dot
            author = "".join([author[0], '.', author[1]])
            poem = {
                "title": title,
                "author": author,
                "content": content
            }
            yield poem

    def reg_con(self, params, response):
        """Run a regular-expression match against the page source."""
        if not response:
            return "request error"
        param = re.compile(params, re.DOTALL)  # re.DOTALL makes . match newlines, same as re.S
        result = re.findall(param, response)
        return result

    @classmethod
    def save_data(cls, poem):
        title = poem.get("title")
        author = poem.get("author")
        content = poem.get("content")
        create_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        # The values must be wrapped in quotes, otherwise MySQL treats them as column names
        sql = "insert into poem(title,author,content,create_time) values('{}','{}','{}','{}')".format(
            title, author, content, create_time)
        count = cls.cs1.execute(sql)
        print(count)
        cls.conn.commit()

    def main(self):
        resp = self.get_requests(url, headers)
        for it in self.get_parse(resp):
            self.save_data(it)
        self.cs1.close()
        self.conn.close()


if __name__ == '__main__':
    Spiderpoem().main()
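The script assumes that the poem_data database and the poem table already exist; the article does not show the schema. A plausible setup script, with column names taken from the insert statement and column types that are only an assumption, might look like this:

# Hypothetical schema for the poem table used above; the article does not
# give the real CREATE TABLE statement, so the column types are assumptions.
import pymysql

conn = pymysql.Connect(host="localhost", port=3306, user="root", password='mysql',
                       database='poem_data', charset="utf8")
with conn.cursor() as cursor:
    cursor.execute("""
        create table if not exists poem (
            id int primary key auto_increment,
            title varchar(255),
            author varchar(255),
            content text,
            create_time datetime
        ) default charset=utf8
    """)
conn.commit()
conn.close()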

Summary

The above is the method for scraping classical Chinese poetry with Python and storing it in a MySQL database. I hope it helps; if you have any questions, leave me a comment and I will reply as soon as possible.
