Python library LogParser v0.8.0 released: periodic incremental parsing of Scrapy crawler logs and progress visualization with ScrapydWeb

Open source on GitHub

https://github.com/my8100/logparser

Installation

  • Via pip:
pip install logparser
  • Via git:
git clone https://github.com/my8100/logparser.git
cd logparser
python setup.py install

Usage

Run as a service

  1. Make sure Scrapyd has been installed and started on the current host.
  2. Start LogParser with the command logparser.
  3. Visit http://127.0.0.1:6800/logs/stats.json (assuming Scrapyd runs on port 6800).
  4. Visit http://127.0.0.1:6800/logs/projectname/spidername/jobid.json for the parsed details of a particular crawl job; a minimal polling sketch follows this list.
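The stats endpoints are plain JSON over HTTP, so any client works. A minimal polling sketch using requests (the host and port assume the local setup from step 3; the exact top-level keys depend on your LogParser version):

import requests

# Poll the aggregated stats that LogParser serves alongside Scrapyd
# (assumes Scrapyd and LogParser are running locally on port 6800).
resp = requests.get('http://127.0.0.1:6800/logs/stats.json', timeout=10)
resp.raise_for_status()
stats = resp.json()
print(sorted(stats.keys()))  # inspect the top-level structure first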

Visualizing crawl progress with ScrapydWeb

See https://github.com/my8100/scrapydweb for details.

Using LogParser in Python code

In [1]: from logparser import parse

In [2]: log = """2018-10-23 18:28:34 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: demo)
   ...: 2018-10-23 18:29:41 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
   ...: {'downloader/exception_count': 3,
   ...:  'downloader/exception_type_count/twisted.internet.error.TCPTimedOutError': 3,
   ...:  'downloader/request_bytes': 1336,
   ...:  'downloader/request_count': 7,
   ...:  'downloader/request_method_count/GET': 7,
   ...:  'downloader/response_bytes': 1669,
   ...:  'downloader/response_count': 4,
   ...:  'downloader/response_status_count/200': 2,
   ...:  'downloader/response_status_count/302': 1,
   ...:  'downloader/response_status_count/404': 1,
   ...:  'dupefilter/filtered': 1,
   ...:  'finish_reason': 'finished',
   ...:  'finish_time': datetime.datetime(2018, 10, 23, 10, 29, 41, 174719),
   ...:  'httperror/response_ignored_count': 1,
   ...:  'httperror/response_ignored_status_count/404': 1,
   ...:  'item_scraped_count': 2,
   ...:  'log_count/CRITICAL': 5,
   ...:  'log_count/DEBUG': 14,
   ...:  'log_count/ERROR': 5,
   ...:  'log_count/INFO': 75,
   ...:  'log_count/WARNING': 3,
   ...:  'offsite/domains': 1,
   ...:  'offsite/filtered': 1,
   ...:  'request_depth_max': 1,
   ...:  'response_received_count': 3,
   ...:  'retry/count': 2,
   ...:  'retry/max_reached': 1,
   ...:  'retry/reason_count/twisted.internet.error.TCPTimedOutError': 2,
   ...:  'scheduler/dequeued': 7,
   ...:  'scheduler/dequeued/memory': 7,
   ...:  'scheduler/enqueued': 7,
   ...:  'scheduler/enqueued/memory': 7,
   ...:  'start_time': datetime.datetime(2018, 10, 23, 10, 28, 35, 70938)}
   ...: 2018-10-23 18:29:42 [scrapy.core.engine] INFO: Spider closed (finished)"""

In [3]: d = parse(log, headlines=1, taillines=1)

In [4]: d
Out[4]: OrderedDict([('head', '2018-10-23 18:28:34 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: demo)'),
             ('tail', '2018-10-23 18:29:42 [scrapy.core.engine] INFO: Spider closed (finished)'),
             ('first_log_time', '2018-10-23 18:28:34'),
             ('latest_log_time', '2018-10-23 18:29:42'),
             ('elapsed', '0:01:08'),
             ('first_log_timestamp', 1540290514),
             ('latest_log_timestamp', 1540290582),
             ('datas', []),
             ('pages', 3),
             ('items', 2),
             ('latest_matches',
              {'resuming_crawl': '',
               'latest_offsite': '',
               'latest_duplicate': '',
               'latest_crawl': '',
               'latest_scrape': '',
               'latest_item': '',
               'latest_stat': ''}),
             ('latest_crawl_timestamp', 0),
             ('latest_scrape_timestamp', 0),
             ('log_categories',
              {'critical_logs': {'count': 5, 'details': []},
               'error_logs': {'count': 5, 'details': []},
               'warning_logs': {'count': 3, 'details': []},
               'redirect_logs': {'count': 1, 'details': []},
               'retry_logs': {'count': 2, 'details': []},
               'ignore_logs': {'count': 1, 'details': []}}),
             ('shutdown_reason', 'N/A'),
             ('finish_reason', 'finished'),
             ('last_update_timestamp', 1547559048),
             ('last_update_time', '2019-01-15 21:30:48')])

In [5]: d['elapsed']
Out[5]: '0:01:08'

In [6]: d['pages']
Out[6]: 3

In [7]: d['items']
Out[7]: 2

In [8]: d['finish_reason']
Out[8]: 'finished'
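In practice you would usually read a Scrapyd job log from disk and feed it to parse(); a minimal sketch (the file path is illustrative, following the logs/projectname/spidername/jobid layout shown above):

from logparser import parse

# Parse a Scrapyd job log read from disk (path is illustrative).
with open('/path/to/scrapyd/logs/projectname/spidername/jobid.log', encoding='utf-8') as f:
    d = parse(f.read(), headlines=5, taillines=5)

print(d['pages'], d['items'], d['finish_reason'])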



3 Replies

The requirement is clear: hook LogParser up to ScrapydWeb. Here is a complete configuration scheme.

First, install the latest versions:

pip install logparser==0.8.0 scrapydweb

The core idea is to run LogParser on a schedule to parse Scrapyd's logs, then have ScrapydWeb display the parsed results. Create a config file logparser_config.py:

# logparser_config.py
from datetime import timedelta

# Scrapyd server(s) to cover
SCRAPYD_SERVERS = [
    ('127.0.0.1', 6800),  # (host, port)
]

# Log directories (adjust to your actual Scrapyd paths)
LOG_DIRS = [
    '/path/to/scrapyd/logs',  # Scrapyd log directory
]

# Parsing settings
PARSE_INTERVAL = timedelta(minutes=5)  # parse every 5 minutes
KEEP_DATA_DAYS = 7  # keep 7 days of data

# Database settings (SQLite)
DB_URL = 'sqlite:///scrapy_logs.db'

# Log fields to extract
LOG_FIELDS = [
    'spider_name',
    'job_id',
    'items_scraped',
    'pages_crawled',
    'finish_reason',
    'start_time',
    'finish_time',
    'log_level',
    'message'
]
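These settings are this reply's own convention (the logparser package does not read this file itself), so it is worth sanity-checking them at startup; a small sketch:

# Sanity-check the custom config above before starting the parsing loop.
# (LOG_DIRS / SCRAPYD_SERVERS are names defined in logparser_config.py,
# not options of the logparser package.)
import os
from logparser_config import LOG_DIRS, SCRAPYD_SERVERS

for log_dir in LOG_DIRS:
    assert os.path.isdir(log_dir), f'log dir not found: {log_dir}'
for host, port in SCRAPYD_SERVERS:
    assert 0 < port < 65536, f'bad port for {host}: {port}'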

Then create the scheduled parsing script schedule_parser.py:

# schedule_parser.py
import logging
import time
from datetime import datetime, timedelta

import schedule
# NOTE: this assumes a LogParser class exposing the incremental-parsing API
# used below; the published logparser package mainly exposes parse() and a
# standalone daemon, so treat this class as a wrapper you provide yourself.
from logparser import LogParser

from logparser_config import LOG_DIRS, SCRAPYD_SERVERS

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def parse_incremental_logs():
    """Incrementally parse the Scrapyd logs."""
    try:
        parser = LogParser()

        # Only parse log files modified since the previous run
        since_last_parse = datetime.now() - timedelta(minutes=5)

        stats = parser.parse_incremental(
            log_dirs=LOG_DIRS,
            since=since_last_parse,
            scrapyd_servers=SCRAPYD_SERVERS
        )

        logger.info(f"Parse finished: {stats['parsed_files']} files, "
                    f"{stats['new_records']} new records")

        # Export the parsed results for ScrapydWeb to consume
        parser.export_to_json('scrapy_logs_status.json')

    except Exception as e:
        logger.error(f"Parse failed: {e}")

# Schedule the recurring job
schedule.every(5).minutes.do(parse_incremental_logs)

if __name__ == '__main__':
    logger.info("Starting the scheduled log-parsing service...")
    parse_incremental_logs()  # run once immediately

    while True:
        schedule.run_pending()
        time.sleep(60)
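Note that the stock logparser command from the announcement above already runs as a daemon that periodically and incrementally parses the Scrapyd log directory; the custom script here is only needed if you want your own scheduling or the JSON export consumed by the dashboard below.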

Finally, configure ScrapydWeb to display the parsed results. Create scrapydweb_config.py:

# scrapydweb_config.py
import json
from flask import render_template

class LogVisualization:
    def __init__(self):
        self.log_data = self.load_log_data()
    
    def load_log_data(self):
        """加载LogParser导出的数据"""
        try:
            with open('scrapy_logs_status.json', 'r') as f:
                return json.load(f)
        except FileNotFoundError:
            return {}
    
    def get_spider_progress(self, spider_name):
        """获取指定爬虫的进度"""
        if spider_name in self.log_data.get('spiders', {}):
            spider_data = self.log_data['spiders'][spider_name]
            return {
                'total_items': spider_data.get('total_items', 0),
                'items_scraped': spider_data.get('items_scraped', 0),
                'progress_percent': spider_data.get('progress', 0),
                'last_activity': spider_data.get('last_log_time'),
                'status': spider_data.get('status', 'unknown')
            }
        return None
    
    def get_dashboard_data(self):
        """获取仪表板数据"""
        return {
            'active_spiders': self.log_data.get('active_spiders', 0),
            'total_items_today': self.log_data.get('daily_stats', {}).get('items', 0),
            'success_rate': self.log_data.get('success_rate', 0),
            'recent_errors': self.log_data.get('recent_errors', [])
        }

# Register the extension with ScrapydWeb
def setup_log_visualization(app):
    log_viz = LogVisualization()

    @app.route('/logparser_dashboard')
    def logparser_dashboard():
        log_viz.log_data = log_viz.load_log_data()  # reload so each request sees fresh data
        dashboard_data = log_viz.get_dashboard_data()
        return render_template('logparser_dashboard.html', **dashboard_data)

    @app.route('/spider_progress/<spider_name>')
    def spider_progress(spider_name):
        log_viz.log_data = log_viz.load_log_data()  # reload before reading progress
        progress = log_viz.get_spider_progress(spider_name)
        return render_template('spider_progress.html',
                               spider_name=spider_name,
                               progress=progress)
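For a quick test outside ScrapydWeb itself, the same setup function can be mounted on a bare Flask app (the app and port are illustrative, and the two templates must exist in your templates/ directory):

# Sketch: mount the visualization on a bare Flask app for local testing.
# ScrapydWeb is Flask-based, so the same setup function applies there.
from flask import Flask

app = Flask(__name__)
setup_log_visualization(app)

if __name__ == '__main__':
    app.run(port=5001)  # illustrative port; ScrapydWeb defaults to 5000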

Steps to run:

  1. Start the Scrapyd service.
  2. Run python schedule_parser.py to start the scheduled parsing.
  3. Start ScrapydWeb: scrapydweb
  4. Visit http://localhost:5000/logparser_dashboard to see the visualization.

That gives you periodic incremental log parsing and progress visualization. Adjust the log directory paths and the parsing frequency to your setup.

One-line tip: use the schedule library for timed triggering, and keep resource usage down with incremental parsing.


Why not use Grafana?

Respect to the OP. I'd been using SpiderKeeper all along, but that thing has way too many pitfalls...
