Guoxuedashi (国学大师) Word-List Crawler

A crawler for looking up terms on the guoxuedashi.net dictionary; hopefully it is useful to others.

  1. Download link for the word list to be looked up: https://jhc001.lanzouw.com/iWAtlwcuixa
    Password: bxp6
    (an example of the expected input format follows the code below)
  2. Crawler code:
# coding=utf-8
import requests
from lxml import etree
import os


def spider(name):
    # Search the dictionary (词典) index first; if nothing matches,
    # fall back to the historical-figure (人物) index.
    try:
        response = requests.get('http://www.guoxuedashi.net/zidian/so.php?sokeyci=' + name + '&submit=&kz=12&cilen=0')
        tree = etree.HTML(response.text)
        lis = tree.xpath('//div[@class="info_txt2 clearfix"]/a[1]/@href')
        # print(lis)
        if lis:
            r_lis = 'http://www.guoxuedashi.net' + lis[0]
            detail_page1(name, r_lis)
        else:
            response = requests.get('http://www.guoxuedashi.net/renwu/?sokeylishi=' + name)
            tree = etree.HTML(response.text)
            lis = tree.xpath('//dl[@class="clearfix"]/dd[1]/a/@href')
            if lis:
                r_lis = 'http://www.guoxuedashi.net' + lis[0]
                detail_page2(name, r_lis)
            else:
                print('No result from either search')
    except Exception as e:
        print('Exception while crawling ' + name + ':', e)



# Parse a dictionary (词典) entry page
def detail_page1(name, r_lis):
    # e.g. r_lis = 'http://www.guoxuedashi.net/hydcd/7876o.html'
    response = requests.get(r_lis)
    # print(response.text)
    tree = etree.HTML(response.text)
    lis = tree.xpath('//div[@class="info_txt2 clearfix"]/p[2]/span/span/text()')
    if lis:
        # Keep only the first sentence of the definition
        detail = lis[0].split('。')[0]
        print(name + '\r\n' + detail)
        save_data(name, detail)
    else:
        # Alternate page layout: the definition sits directly in the div / font span
        lis = tree.xpath('//div[@class="info_txt2 clearfix"]/text() | //div[@class="info_txt2 clearfix"]/font/span/text()')
        detail = lis[1] + '\n' + lis[2] + '\n' + lis[3]
        print(name + '\r\n' + detail)
        save_data(name, detail)

# Parse a historical-figure (人物) page
def detail_page2(name, r_lis):
    # e.g. r_lis = 'http://www.guoxuedashi.net/renwu/10838abax/'
    response = requests.get(r_lis)
    # print(response.text)
    tree = etree.HTML(response.text)
    lis = tree.xpath('//div[@class="info_content zj clearfix"]/span/p/text()')
    # Keep only the first sentence of the biography
    detail = lis[2].split('。')[0]
    print(name + detail)
    save_data(name, detail)

# Read the word list: one term per line in ./words.txt
def read_word():
    with open('./words.txt', 'r', encoding='utf-8') as fp:
        words = fp.readlines()
        # print(words)
        for word in words:
            name = word.strip()
            # print(name)
            if name:
                spider(name)


# Append one "term:definition" line to ./results/results.txt
def save_data(name, detail):
    with open('./results/results.txt', 'a', encoding='utf-8') as fp:
        result = name + ':' + detail + '\n'
        fp.write(result)



if __name__ == '__main__':
    os.makedirs('./results', exist_ok=True)
    read_word()
    # spider('张飞')
    # detail_page2()
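For reference, read_word() expects ./words.txt to hold one lookup term per line, encoded in UTF-8. The entries below are only illustrative placeholders (张飞 is the example already commented out in the main block); the real input is the word list downloaded in step 1:

张飞
仁义
杜甫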
  1. The code is strictly single-threaded and therefore quite slow, and it still has plenty of rough edges; suggestions for improvement are very welcome (a threaded sketch follows below).
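One possible fix for the single-threaded bottleneck is to fan the lookups out over a small thread pool. The sketch below is not part of the original post; it assumes the spider() function defined above lives in the same file (or is importable) and uses Python's standard concurrent.futures module, with a deliberately low worker count so as not to hammer guoxuedashi.net:

# Hypothetical multi-threaded runner (a sketch, not the original code).
from concurrent.futures import ThreadPoolExecutor
import os


def read_words():
    # Same input format as before: one term per line in ./words.txt
    with open('./words.txt', 'r', encoding='utf-8') as fp:
        return [line.strip() for line in fp if line.strip()]


if __name__ == '__main__':
    os.makedirs('./results', exist_ok=True)
    # 5 workers is an arbitrary, conservative choice; tune as needed.
    # Note: save_data() then appends from several threads at once; wrapping
    # the file write in a threading.Lock would make that unambiguously safe.
    with ThreadPoolExecutor(max_workers=5) as pool:
        pool.map(spider, read_words())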
