- pip install scrapy -i https://pypi.tuna.tsinghua.edu.cn/simple
- or, if Anaconda is already installed:
- conda install -c conda-forge scrapy
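To confirm the installation worked, print the installed Scrapy version:
- scrapy version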
Create a folder named scrapyproject to hold the project, then open a terminal (cmd window) and cd into it.
Create the project:
scrapy startproject <project_name>
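For reference, startproject generates a skeleton roughly like the following (a sketch; the exact files may vary slightly across Scrapy versions):
- <project_name>/
-     scrapy.cfg                # deploy configuration file
-     <project_name>/           # the project's Python module
-         __init__.py
-         items.py              # item definitions
-         middlewares.py        # spider and downloader middlewares
-         pipelines.py          # item pipelines
-         settings.py           # project settings (annotated below)
-         spiders/              # spiders live here
-             __init__.py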
Open the project in PyCharm.
Create a spider: open the Terminal and run the following (the allowed domain must be a bare domain name, without the http:// prefix):
scrapy genspider <spider_name> <allowed_domain>
Once it finishes, a basic spider file appears under the spiders folder; the generated template is shown below.
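The generated file is a minimal template along these lines (example.com stands in for whatever you passed to genspider):
- import scrapy
-
- class ExampleSpider(scrapy.Spider):
-     name = 'example'                      # the name used with "scrapy crawl"
-     allowed_domains = ['example.com']     # requests outside these domains are filtered out
-     start_urls = ['http://example.com/']  # the first URLs to fetch
-
-     def parse(self, response):
-         pass                              # parsing logic goes here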
Create a file main.py in the project root with the following content (change the spider name to your own):
- from scrapy.cmdline import execute
- import os
- import sys
-
- if __name__ == '__main__':
-     # make sure the project root is on sys.path
-     sys.path.append(os.path.dirname(os.path.abspath(__file__)))
-     # equivalent to running "scrapy crawl <spider_name>" in the terminal
-     execute(['scrapy', 'crawl', '<spider_name>'])
Run main.py to start the crawler.
The freshly generated settings.py defines only four settings; everything else is commented out.
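Equivalently, the spider can be started from the terminal; Scrapy's -o option additionally exports the scraped items to a file:
scrapy crawl <spider_name> -o items.json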
- # Scrapy settings for testscrapy project
- #
- # For simplicity, this file contains only settings considered important or
- # commonly used. You can find more settings consulting the documentation:
- #
- # https://docs.scrapy.org/en/latest/topics/settings.html
- # https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
- # https://docs.scrapy.org/en/latest/topics/spider-middleware.html
-
- BOT_NAME = 'testscrapy' # name of the crawler project, used in logging (the USER_AGENT setting below controls the User-Agent header)
-
- SPIDER_MODULES = ['testscrapy.spiders'] # modules where Scrapy looks for spiders
- NEWSPIDER_MODULE = 'testscrapy.spiders' # module where genspider places newly created spiders
-
- # Crawl responsibly by identifying yourself (and your website) on the user-agent
- # USER_AGENT = 'testscrapy (+http://www.yourdomain.com)'
-
- # Obey robots.txt rules
- ROBOTSTXT_OBEY = True # whether to obey the site's robots.txt; False means "crawl regardless of whether the site permits it"
-
- # Configure maximum concurrent requests performed by Scrapy (default: 16)
- # CONCURRENT_REQUESTS = 32 # number of concurrent requests; if the target site has no anti-scraping measures, you can raise the concurrency and pull back large amounts of data at once
-
- # Configure a delay for requests for the same website (default: 0)
- # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
- # See also autothrottle settings and docs
- # DOWNLOAD_DELAY = 3 # delay in seconds between downloads; throttles the crawl rate to avoid bans and to avoid putting too much load on the target site
- # The download delay setting will honor only one of:
- # CONCURRENT_REQUESTS_PER_DOMAIN = 16 # limit of concurrent requests per single domain
- # CONCURRENT_REQUESTS_PER_IP = 16 # limit of concurrent requests per IP; if set, the per-domain limit is ignored and the download delay is applied per IP instead, since one domain can resolve to many IPs (a company may run many servers behind a single public domain)
-
- # Disable cookies (enabled by default)
- # COOKIES_ENABLED = False # whether to process cookies (parse them from responses and send them back on later requests)
-
- # Disable Telnet Console (enabled by default)
- # TELNETCONSOLE_ENABLED = False # the Telnet console lets you inspect the running crawler's status and control it: open cmd, run "telnet 127.0.0.1 6023", then call est() for an engine status report
-
- # Override the default request headers:
- # DEFAULT_REQUEST_HEADERS = {
- # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
- # 'Accept-Language': 'en',
- # } # default headers attached to every request; set them here if every request should carry the same headers
-
- # Enable or disable spider middlewares
- # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
- # SPIDER_MIDDLEWARES = {
- # 'testscrapy.middlewares.TestscrapySpiderMiddleware': 543,
- # } # spider middlewares
-
- # Enable or disable downloader middlewares
- # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
- # DOWNLOADER_MIDDLEWARES = {
- # 'testscrapy.middlewares.TestscrapyDownloaderMiddleware': 543,
- # } # downloader middlewares
-
- # Enable or disable extensions
- # See https://docs.scrapy.org/en/latest/topics/extensions.html
- # EXTENSIONS = {
- # 'scrapy.extensions.telnet.TelnetConsole': None,
- # } # other extensions
-
- # Configure item pipelines
- # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
- # ITEM_PIPELINES = {
- # 'testscrapy.pipelines.TestscrapyPipeline': 300,
- # } # custom item pipelines, mainly used to store the scraped data; the integer after each pipeline sets the running order: items flow through pipelines from low to high values, conventionally kept in the 0-1000 range (a minimal pipeline sketch follows this listing)
-
- # Enable and configure the AutoThrottle extension (disabled by default)
- # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
- # AUTOTHROTTLE_ENABLED = True
- # The initial download delay
- # AUTOTHROTTLE_START_DELAY = 5
- # The maximum download delay to be set in case of high latencies
- # AUTOTHROTTLE_MAX_DELAY = 60
- # The average number of requests Scrapy should be sending in parallel to
- # each remote server
- # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
- # Enable showing throttling stats for every response received:
- # AUTOTHROTTLE_DEBUG = False
-
- # Enable and configure HTTP caching (disabled by default)
- # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
- # HTTPCACHE_ENABLED = True
- # HTTPCACHE_EXPIRATION_SECS = 0
- # HTTPCACHE_DIR = 'httpcache'
- # HTTPCACHE_IGNORE_HTTP_CODES = []
- # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
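As the ITEM_PIPELINES comment mentions, pipelines are where scraped items are processed and stored. A minimal sketch, assuming the testscrapy project from above (writing items to a JSON-lines file items.jl is just an illustration):
- # pipelines.py
- import json
-
- class TestscrapyPipeline:
-     def open_spider(self, spider):
-         # called once when the spider starts: open the output file
-         self.file = open('items.jl', 'w', encoding='utf-8')
-
-     def process_item(self, item, spider):
-         # called for every item the spider yields
-         self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
-         return item  # pass the item on to the next pipeline (low to high priority)
-
-     def close_spider(self, spider):
-         # called once when the spider closes
-         self.file.close()
To activate it, un-comment the ITEM_PIPELINES block and keep the priority value (e.g. 300) within the usual 0-1000 range.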