Scrapy Could not find spider Error

Posted by Nacari on Stack Overflow. Published on 2010-05-22T00:57:46Z.

I have been trying to get a simple spider to run with Scrapy, but I keep getting the error:

Could not find spider for domain:stackexchange.com

when I run the command scrapy-ctl.py crawl stackexchange.com. The spider is as follows:

from scrapy.spider import BaseSpider
from __future__ import absolute_import


class StackExchangeSpider(BaseSpider):
    domain_name = "stackexchange.com"
    start_urls = [
        "http://www.stackexchange.com/",
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

SPIDER = StackExchangeSpider()
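One thing worth checking (an editorial note, not part of the original post): in Python, a `from __future__` import must appear before any other import in a module. Placing it second, as in the spider above, raises a SyntaxError at compile time, so the module never loads and Scrapy never registers the spider. This can be demonstrated in plain Python without Scrapy installed, since `compile()` triggers the same check:

```python
# A __future__ import placed after any other import is a SyntaxError,
# so the whole module fails to compile and the spider is never registered.
# Using compile() only checks syntax; nothing is actually imported here.
bad_module = (
    "from scrapy.spider import BaseSpider\n"
    "from __future__ import absolute_import\n"
)
good_module = (
    "from __future__ import absolute_import\n"
    "from scrapy.spider import BaseSpider\n"
)

def compiles(source):
    """Return True if the source compiles, False on SyntaxError."""
    try:
        compile(source, "<spider>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles(bad_module))   # False: __future__ import is not first
print(compiles(good_module))  # True: __future__ import comes first
```

If this is the cause, moving `from __future__ import absolute_import` to the top of the file should let the module load and the spider register.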

Another person posted almost exactly the same problem months ago but did not say how they fixed it: http://stackoverflow.com/questions/1806990/scrapy-spider-is-not-working. I have been following the tutorial at http://doc.scrapy.org/intro/tutorial.html exactly, and I cannot figure out why it is not working.

© Stack Overflow or respective owner
