1. Installing Scrapy

Prerequisites: the pip package manager, a working Python installation, and basic XPath and CSS selector knowledge.

Windows:

```shell
$ pip install scrapy
```

Linux (Debian/Ubuntu):

```shell
$ apt-get install python-scrapy
```

2. Creating a Scrapy Project

Before you can crawl anything, you must create a new Scrapy project. Change into a directory of your choice and run:

```shell
$ scrapy startproject mySpider
```

Here `mySpider` is the project name; the command creates a `mySpider` folder. You can inspect its layout with:

```shell
$ tree mySpider
```

3. Defining a Spider Class

Generate a basic spider from Scrapy's Spider template (you can also write one by hand instead of using the command):

```shell
$ scrapy genspider gzrbSpider dayoo.com
```

`scrapy genspider` is one of the most frequently used Scrapy commands. At this point a minimal crawler project is in place.

File descriptions:

| # | File | Description |
|---|------|-------------|
| 1 | scrapy.cfg | Configuration file for the whole Scrapy project |
| 2 | settings.py | Settings file referenced by scrapy.cfg (decides who processes the scraped content) |
| 3 | __init__.pyc | Bytecode file compiled from __init__.py |
| 4 | __init__.py | Makes its directory importable as a module (without __init__.py a folder cannot be imported) |
| 5 | items.py | Defines which fields the crawl should ultimately produce (decides what to scrape) |
| 6 | pipelines.py | Decides how the scraped content is handled after a page is crawled |
| 7 | gzrbSpider.py | The custom spider class; decides how to crawl |

Commands:

| # | Command | Description |
|---|---------|-------------|
| 1 | `scrapy shell https://www.dayoo.com` | Fetch the Guangzhou Daily homepage into an interactive shell |
| 2 | `response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]').extract()` | Inspect the matched node data in the shell |
| 3 | `scrapy crawl gzrbSpider` | Run the spider |

4. Scrapy Processing Logic

File: spiders/gzrbSpider.py

```python
import scrapy
from mySpider.items import MySpiderItem


class gzrbSpider(scrapy.Spider):
    name = 'gzrbSpider'
    allowed_domains = ['dayoo.com']
    start_urls = ('https://www.dayoo.com',)

    def parse(self, response):
        subSelector = response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]')
        items = []
        for sub in subSelector:
            item = MySpiderItem()
            item['newName'] = sub.xpath('./li/a/text()').extract()
            items.append(item)
        return items
```

File: items.py

```python
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class MySpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    newName = scrapy.Field()
```
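Before wiring the XPath into a full crawl, the extraction logic can be checked against a static snippet. A minimal stdlib-only sketch (using `xml.etree` instead of Scrapy's selectors, against made-up HTML that mimics the `mt35`/`news-list` structure of the target page):

```python
# Stdlib-only sketch: approximate what the spider's XPath does, without Scrapy.
# The HTML below is illustrative, not the real dayoo.com markup.
import xml.etree.ElementTree as ET

html = """
<div class="mt35">
  <ul class="news-list">
    <li><a href="/a">Headline one</a></li>
    <li><a href="/b">Headline two</a></li>
  </ul>
</div>
"""

def extract_titles(markup):
    root = ET.fromstring(markup)
    titles = []
    # Equivalent of .//ul[@class="news-list"] followed by ./li/a/text()
    for ul in root.iter('ul'):
        if ul.get('class') == 'news-list':
            for a in ul.findall('./li/a'):
                titles.append(a.text)
    return titles

print(extract_titles(html))  # ['Headline one', 'Headline two']
```

Note that `xml.etree` only supports a subset of XPath, hence the manual class check; Scrapy's `response.xpath(...)` accepts the full expression shown in the commands table.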
File: settings.py (essentially the default file generated by `scrapy startproject`, with ITEM_PIPELINES enabled)

```python
# Scrapy settings for mySpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'mySpider'

SPIDER_MODULES = ['mySpider.spiders']
NEWSPIDER_MODULE = 'mySpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'mySpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'mySpider.middlewares.MySpiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'mySpider.middlewares.MySpiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'mySpider.pipelines.MySpiderPipeline': 300,
}
```
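In ITEM_PIPELINES, each key is a pipeline's import path and each value is an integer priority (conventionally in the 0–1000 range); Scrapy runs the enabled pipelines in ascending order of that number. A small sketch of the resulting order (the Dedupe and Export pipeline paths are hypothetical, added only to illustrate the ordering):

```python
# Sketch: how Scrapy orders pipelines by their ITEM_PIPELINES value
# (lower numbers run first). DedupePipeline/ExportPipeline are hypothetical.
ITEM_PIPELINES = {
    'mySpider.pipelines.MySpiderPipeline': 300,
    'mySpider.pipelines.DedupePipeline': 100,
    'mySpider.pipelines.ExportPipeline': 800,
}

execution_order = [path for path, prio in
                   sorted(ITEM_PIPELINES.items(), key=lambda kv: kv[1])]
print(execution_order)  # Dedupe (100) first, then MySpider (300), then Export (800)
```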
```python
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```

File: pipelines.py

```python
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import time

# class mySpiderPipeline:
#     def process_item(self, item, spider):
#         return item


class MySpiderPipeline(object):
    def process_item(self, item, spider):
        now = time.strftime('%Y-%m-%d', time.localtime())
        fileName = 'gzrb' + now + '.txt'
        for it in item['newName']:
            with open(fileName, encoding='utf-8', mode='a') as fp:
                # fp.write(item['newName'][0].encode('utf8') + '\n\n')
                fp.write(it + '\n\n')
        return item
```

[The original article showed a screenshot of the crawl results here.]

5. Scrapy Extensions

XPath:

CSS:
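The pipeline's core behaviour — build a date-stamped filename and append each headline followed by a blank line — can be sketched without Scrapy, treating the item as a plain dict (the temporary-directory handling is added so the sketch cleans up after itself):

```python
# Stdlib-only sketch of MySpiderPipeline.process_item: date-stamped filename,
# one headline per paragraph, append mode. `item` is a plain-dict stand-in.
import os
import tempfile
import time

def write_items(item, directory):
    now = time.strftime('%Y-%m-%d', time.localtime())
    file_name = os.path.join(directory, 'gzrb' + now + '.txt')
    # Open once instead of once per headline, as the article's loop does;
    # append mode makes the result identical either way.
    with open(file_name, encoding='utf-8', mode='a') as fp:
        for it in item['newName']:
            fp.write(it + '\n\n')
    return file_name

with tempfile.TemporaryDirectory() as d:
    path = write_items({'newName': ['标题一', '标题二']}, d)
    with open(path, encoding='utf-8') as fp:
        content = fp.read()
    print(content)  # each headline followed by a blank line
```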
