Running Scrapy from Python


Problem Description

I am trying to run Scrapy from Python. I'm looking at this code (source):

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log
from testspiders.spiders.followall import FollowAllSpider

spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()    # set up the crawler's extensions and engine
crawler.crawl(spider)  # schedule the spider for crawling
crawler.start()        # start the crawl
log.start()            # enable Scrapy's logging
reactor.run()          # the script will block here

My issue is that I'm confused about how to adjust this code to run my own spider. I have called my spider project "spider_a", and it specifies the domain to crawl within the spider itself.

What I am asking is, if I run my spider with the following code:

scrapy crawl spider_a

How do I adjust the example Python code above to do the same?

Solution

Just import your spider and pass it to crawler.crawl(), like:

from testspiders.spiders.spider_a import MySpider

spider = MySpider()
crawler.crawl(spider)
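
Putting the two pieces together, a complete script might look like the following. This is a sketch under the assumption that your spider class is named MySpider and lives in testspiders/spiders/spider_a.py; adjust the import to match your actual project layout:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log
from testspiders.spiders.spider_a import MySpider  # hypothetical path, adjust to your project

spider = MySpider()    # no domain argument; the spider defines its own domain
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()          # the script will block here until the crawl finishes

Note that the Crawler/log API shown above is the old (pre-1.0) Scrapy interface. In current Scrapy, the documented way to run a spider from a script is CrawlerProcess, which manages the Twisted reactor for you. A minimal sketch, assuming the script is run from inside the project directory so that get_project_settings() can find your settings:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl("spider_a")  # the spider's name, as used with "scrapy crawl"
process.start()            # the script will block here until the crawl finishes

Either way, the effect is the same as running scrapy crawl spider_a from the command line.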
