Scrapy Middleware


Scrapy architecture overview:

[Figure: Scrapy architecture diagram]

1. Downloader Middleware

As shown at positions 4 and 5 in the figure above, downloader middleware is a framework of hooks into Scrapy's request/response processing: for example, setting a proxy IP or custom headers on a request, or checking the HTTP status code of a response.

Scrapy already ships with a set of built-in downloader middlewares:

{
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}
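
The number is each middleware's priority. To enable your own middleware, register it in DOWNLOADER_MIDDLEWARES in settings.py. A minimal sketch, assuming the two middlewares written later in this article live in myproject/middlewares.py (paths and priority numbers are illustrative):

# settings.py -- module paths and priorities are illustrative
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.Proxy_Middleware': 543,
    'myproject.middlewares.My_RetryMiddleware': 550,
    # disable the stock retry middleware so the custom one replaces it
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
}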
Both custom middlewares below read their User-Agent and proxy pools from settings.py, so define those pools first:

USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0"
]

PROXY_LIST = [
    '1.85.220.195:8118',
    '60.255.186.169:8888',
    '118.187.58.34:53281',
    '116.224.191.141:8118',
    '120.27.5.62:9090',
    '119.132.250.156:53281',
    '139.129.166.68:3128'
]

Proxy IP middleware, which attaches a random User-Agent and proxy to every outgoing request:

import random

class Proxy_Middleware():

    def __init__(self, crawler):
        # getlist() is the proper way to read list settings off the crawler
        self.proxy_list = crawler.settings.getlist('PROXY_LIST')
        self.ua_list = crawler.settings.getlist('USER_AGENT_LIST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        try:
            # pick a random User-Agent and proxy for this request
            ua = random.choice(self.ua_list)
            request.headers.setdefault('User-Agent', ua)

            proxy_ip_port = random.choice(self.proxy_list)
            request.meta['proxy'] = 'http://' + proxy_ip_port
        except IndexError:
            # random.choice raises IndexError when a pool is empty
            spider.logger.error('proxy/UA pool is empty!')
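
A quick way to check that the middleware works is a throwaway spider against httpbin.org, which echoes the caller's IP; the spider below is purely illustrative:

import scrapy

class ProxyCheckSpider(scrapy.Spider):
    name = 'proxy_check'
    start_urls = ['https://httpbin.org/ip']

    def parse(self, response):
        # httpbin reports the origin IP, which should match the chosen proxy
        self.logger.info('seen as: %s', response.text)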


Retry middleware

When crawling through proxies, requests are sometimes refused by the remote host or time out. In those cases we want to switch to a new proxy IP and retry, so we subclass scrapy.downloadermiddlewares.retry.RetryMiddleware:

import random

from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.utils.response import response_status_message

class My_RetryMiddleware(RetryMiddleware):

    def __init__(self, crawler):
        # let RetryMiddleware set up retry_http_codes, max retry times, etc.
        super().__init__(crawler.settings)
        self.proxy_list = crawler.settings.getlist('PROXY_LIST')
        self.ua_list = crawler.settings.getlist('USER_AGENT_LIST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response

        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            try:
                # switch to a fresh User-Agent and proxy before retrying
                ua = random.choice(self.ua_list)
                request.headers['User-Agent'] = ua

                proxy_ip_port = random.choice(self.proxy_list)
                request.meta['proxy'] = 'http://' + proxy_ip_port
            except IndexError:
                spider.logger.error('failed to pick a new proxy IP!')

            return self._retry(request, reason, spider) or response
        return response
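
How aggressively _retry() re-queues a request is governed by Scrapy's standard retry settings; the values below are only an example:

# settings.py
RETRY_ENABLED = True
RETRY_TIMES = 3                                     # extra attempts per request
RETRY_HTTP_CODES = [500, 502, 503, 504, 408, 429]   # statuses that trigger a retry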
Integrating Selenium with Scrapy

For JavaScript-rendered pages, a downloader middleware can hand requests to a headless Chrome and return the rendered HTML as the response:

from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from gp.configs import *   # provides CHROME_PATH and CHROME_DRIVER_PATH


class ChromeDownloaderMiddleware(object):

    def __init__(self):
        options = webdriver.ChromeOptions()
        options.add_argument('--headless')  # run Chrome without a UI
        if CHROME_PATH:
            options.binary_location = CHROME_PATH
        if CHROME_DRIVER_PATH:
            # initialise the Chrome driver with an explicit chromedriver path
            self.driver = webdriver.Chrome(options=options, executable_path=CHROME_DRIVER_PATH)
        else:
            # fall back to a chromedriver found on PATH
            self.driver = webdriver.Chrome(options=options)

    def __del__(self):
        self.driver.quit()  # quit() ends the session; close() only closes one window

    def process_request(self, request, spider):
        try:
            print('Chrome driver begin...')
            self.driver.get(request.url)  # render the page in the browser
            # hand the rendered HTML back to Scrapy as a normal response
            return HtmlResponse(url=request.url, body=self.driver.page_source, request=request,
                                encoding='utf-8', status=200)
        except TimeoutException:
            return HtmlResponse(url=request.url, request=request, encoding='utf-8', status=500)
        finally:
            print('Chrome driver end...')
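
One caveat: Selenium 4 removed the executable_path argument, so on a current Selenium the driver would be built through a Service object instead. A sketch, assuming the same CHROME_DRIVER_PATH config:

from selenium.webdriver.chrome.service import Service

# Selenium 4 style: the chromedriver path goes into a Service object
driver = webdriver.Chrome(service=Service(CHROME_DRIVER_PATH), options=options)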

2. Spider Middleware

As the first figure in this article shows, spider middleware processes responses on their way into the spider, as well as the items and requests the spider generates.

To enable a custom spider middleware, register it in settings first:

SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}

As with downloader middleware, the number is a priority: the smaller it is, the closer the middleware sits to the engine and the earlier its process_spider_input() runs; the larger it is, the closer it sits to the spider and the earlier its process_spider_output() runs. Setting the value to None disables a middleware.

Writing a custom spider middleware

process_spider_input(response, spider)

Called for each response passing through the spider middleware on its way into the spider; it should return None.

process_spider_output(response, result, spider)

Called with the result the spider returns after processing a response; it must return an iterable of Request or Item objects, and usually just returns result.

process_spider_exception(response, exception, spider)

Called when the spider, or another middleware's process_spider_input(), raises an exception; it should return either None or an iterable of Request, dict, or Item objects.
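
Putting the three hooks together, here is a minimal sketch of the CustomSpiderMiddleware registered in the settings above (the logging is illustrative):

class CustomSpiderMiddleware:

    def process_spider_input(self, response, spider):
        # called for every response entering the spider; return None to continue,
        # or raise an exception to divert to process_spider_exception()
        spider.logger.debug('response from %s entering spider', response.url)
        return None

    def process_spider_output(self, response, result, spider):
        # inspect or filter what the spider yields; must yield Request/Item objects
        for element in result:
            yield element

    def process_spider_exception(self, response, exception, spider):
        # called when the spider (or an earlier process_spider_input) raises;
        # returning None falls through to the default error handling
        spider.logger.error('spider failed on %s: %r', response.url, exception)
        return None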

One more diagram for reference:

[Figure: supplementary middleware diagram]

References:

  1. https://docs.scrapy.org/en/latest/topics/spider-middleware.html
  2. https://docs.scrapy.org/en/latest/topics/downloader-middleware.html