[Newbie help] How do I fix scrapy.Request not firing its callback in a Scrapy project (Python)?
import scrapy
from Demo.items import DemoItem

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quores.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        quotes = response.css('.quote')
        for quote in quotes:
            item = DemoItem()
            text = quote.css('.text::text').extract_first()
            author = quote.css('.author::text').extract_first()
            tags = quote.css('.tags .tag::text').extract()
            item['text'] = text
            item['author'] = author
            item['tags'] = tags
            yield item
        next = response.css('.pager .next a::attr("href")').extract_first()
        url = response.urljoin(next)
        if next:
            yield scrapy.Request(url=url, callback=self.parse)
I wrote this following a tutorial, but my code only scrapes one page and then stops. Could someone take a look?
I've run into this before. In Scrapy, scrapy.Request not running its callback is usually because the callback argument wasn't passed correctly, or the callback function itself has a problem.

Core causes and fixes:

1. Most likely cause: the callback isn't passed as a reference. A common beginner mistake is to attach call parentheses () when writing callback=self.parse_detail, which executes the function immediately instead of passing a reference.

   # Wrong ❌
   yield scrapy.Request(url, callback=self.parse_detail())
   # Correct ✅
   yield scrapy.Request(url, callback=self.parse_detail)

2. Make sure the callback name is correct. Check that the name after callback= exactly matches the method defined in your Spider class, including case.

3. Check the callback's parameters. Scrapy passes the Response object to the callback by default, so make sure your function signature includes a response parameter:

   def parse_detail(self, response):  # ✅ must take response
       # your parsing logic
       pass
A complete, runnable example:
import scrapy

class MySpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def parse(self, response):
        # Extract the detail-page link
        detail_url = response.css('a.detail-link::attr(href)').get()
        if detail_url:
            # ✅ Correctly pass the callback by reference
            yield scrapy.Request(
                response.urljoin(detail_url),
                callback=self.parse_detail  # note: no parentheses!
            )

    def parse_detail(self, response):  # ✅ correctly defines a response parameter
        item = {
            'title': response.css('h1::text').get(),
            'url': response.url
        }
        yield item
Quick self-check list:
- Are there stray parentheses after the callback argument?
- Is the function name spelled correctly?
- Does the callback define a response parameter?
- Is the request actually emitted with yield?
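The parentheses pitfall from that checklist can be seen without running Scrapy at all: calling the method hands Request its return value instead of the function itself. A minimal stdlib-only sketch (this parse_detail is a stand-in for illustration, not the OP's spider):

```python
# Why callback=self.parse_detail() breaks: the parentheses call the
# function immediately, so Request would receive its *return value*
# rather than a reference to the function.
def parse_detail(response):
    return {'title': 'example'}

callback_ref = parse_detail           # a function reference: what Scrapy expects
callback_result = parse_detail(None)  # the result of calling it: just a dict

print(callable(callback_ref))     # True
print(callable(callback_result))  # False
```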
Summary: double-check how the callback argument is written and make sure you pass a function reference, not the result of calling it.
Still stuck, please help!
Print out next and see what you get.
On the line that sets next, try it without extract_first().
Just remove the if next: check and it'll work, tested it myself!
next prints as '/page/2/'
and url is 'http://quotes.toscrape.com/page/2/'
But it still only scrapes one page, no idea what's going on.
Post the log from when scrapy shuts down.
2019-01-10 11:35:18 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'http': <GET http://http//quotes.toscrape.com/page/2>
2019-01-10 11:35:18 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-10 11:35:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 446,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 2701,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 1, 10, 3, 35, 18, 314550),
 'item_scraped_count': 10,
 'log_count/DEBUG': 14,
 'log_count/INFO': 7,
 'offsite/domains': 1,
 'offsite/filtered': 9,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 1, 10, 3, 35, 14, 371325)}
2019-01-10 11:35:18 [scrapy.core.engine] INFO: Spider closed (finished)
Solved: I changed the code to yield scrapy.http.Request(url, callback=self.parse, dont_filter=True)
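A postscript for later readers: dont_filter=True makes the OffsiteMiddleware skip its check, which is why this "works", but the log above points at the underlying problems. The filtered request (http://http//quotes.toscrape.com/page/2) is a malformed URL, and the spider's allowed_domains contains a typo ('quores.toscrape.com' instead of 'quotes.toscrape.com'), which would keep filtering follow-up requests even with a correct URL. A rough stdlib-only sketch of the domain check involved (simplified; the real middleware also matches subdomains):

```python
from urllib.parse import urlparse

# Roughly what Scrapy's OffsiteMiddleware checks: is the request's
# host covered by allowed_domains? With the typo it never matches,
# so follow-up requests get dropped as offsite.
next_url = 'http://quotes.toscrape.com/page/2/'
host = urlparse(next_url).hostname

print(host in ['quores.toscrape.com'])  # False: filtered as offsite
print(host in ['quotes.toscrape.com'])  # True: request goes through
```

Fixing allowed_domains avoids the need for dont_filter=True, which also disables duplicate-request filtering and can cause crawl loops.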

