Multi-threaded Python Web Crawler Got Stuck

This article looks at why a multi-threaded Python web crawler gets stuck and never exits properly, and how to fix it.

Problem Description

I am writing a Python web crawler and I want to make it multi-threaded. I have finished the basic part; here is what it does:

  1. A thread gets a URL from the queue;

  2. The thread extracts the links from the page, checks whether each link already exists in the pool (a set), and puts new links into the queue and the pool;

  3. The thread writes the URL and the HTTP response status to a CSV file.

But when I run the crawler, it always ends up stuck and does not exit properly. I have gone through the official Python documentation but still have no clue.

Here is the code:

#!/usr/bin/env python
#!coding=utf-8

import requests, re, urlparse
import threading
from Queue import Queue
from bs4 import BeautifulSoup

#custom modules and files
from setting import config


class Page:

    def __init__(self, url):

        self.url = url
        self.status = ""
        self.rawdata = ""
        self.error = False

        r = ""

        try:
            r = requests.get(self.url, headers={'User-Agent': 'random spider'})
        except requests.exceptions.RequestException as e:
            self.status = e
            self.error = True
        else:
            if not r.history:
                self.status = r.status_code
            else:
                self.status = r.history[0]

        self.rawdata = r

    def outlinks(self):

        self.outlinks = []

        #links, contains URL, anchor text, nofollow
        raw = self.rawdata.text.lower()
        soup = BeautifulSoup(raw)
        outlinks = soup.find_all('a', href=True)

        for link in outlinks:
            d = {"follow":"yes"}
            d['url'] = urlparse.urljoin(self.url, link.get('href'))
            d['anchortext'] = link.text
            if link.get('rel'):
                if "nofollow" in link.get('rel'):
                    d["follow"] = "no"
            if d not in self.outlinks:
                self.outlinks.append(d)


pool = Queue()
exist = set()
thread_num = 10
lock = threading.Lock()
output = open("final.csv", "a")

#the domain is the start point
domain = config["domain"]
pool.put(domain)
exist.add(domain)


def crawl():

    while True:

        p = Page(pool.get())

        #write data to output file
        lock.acquire()
        output.write(p.url+" "+str(p.status)+"\n")
        print "%s crawls %s" % (threading.currentThread().getName(), p.url)
        lock.release()

        if not p.error:
            p.outlinks()
            outlinks = p.outlinks
            if urlparse.urlparse(p.url)[1] == urlparse.urlparse(domain)[1] :
                for link in outlinks:
                    if link['url'] not in exist:
                        lock.acquire()
                        pool.put(link['url'])
                        exist.add(link['url'])
                        lock.release()
        pool.task_done()            


for i in range(thread_num):
    t = threading.Thread(target = crawl)
    t.start()

pool.join()
output.close()

Any help would be greatly appreciated!

Thanks,

Marcus

Recommended Answer

The crawl function has an infinite while loop with no possible exit path. The condition True always evaluates to True, so the loop keeps running and, as you said, the program

does not exit properly

Change the while loop in crawl so that it has an exit condition. For example, leave the loop once the number of links saved to the CSV file exceeds some minimum:

def crawl():
    while len(exist) <= min_links:
        ...
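For illustration, below is a minimal sketch of what the modified crawl function could look like. It is a sketch built on assumptions, not the original poster's code: min_links is a hypothetical threshold you would choose yourself, and the timeout passed to pool.get() is an extra assumption so that an idle worker does not block forever on an empty queue.

#a minimal sketch of the suggested fix, not the original code
#min_links is a hypothetical threshold chosen by you
from Queue import Empty

min_links = 500

def crawl():
    while len(exist) <= min_links:
        try:
            #the timeout is an assumption: without it, get() blocks
            #forever once the queue is empty
            url = pool.get(timeout=5)
        except Empty:
            break

        p = Page(url)

        #...same body as before: write the CSV row, collect the outlinks,
        #put unseen URLs into the queue and the set...

        pool.task_done()

One caveat on this design: pool.join() in the main program only returns after task_done() has been called for every item that was ever put into the queue, so if the workers stop early you would probably want to join the worker threads themselves (t.join()) instead of calling pool.join().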
