Abstract
The number of services on the Internet is growing rapidly, and the information that websites provide about these services varies widely, making it difficult for users to find suitable services quickly and accurately. To address this problem, the WSDL URLs of services and the related service information need to be captured and stored.
This thesis implements a Web crawler that analyzes and captures Web service information. The main research and development work comprises: 1) analyzing the structure of service websites: before crawling the services on a website, one must manually determine which pages of the site should be crawled and which information on those pages should be extracted; 2) page information extraction: based on that manual analysis, the HtmlParser toolkit is used to extract the target information from each page; 3) downloading WSDL documents: once a service's WSDL URL has been captured, HtmlClient is used to download the WSDL document; 4) loading service information into the database: the service information obtained during page extraction is stored in the database.
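Step 2 above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes WSDL links follow the common `?wsdl` / `.wsdl` URL convention and uses a plain regular expression in place of the HtmlParser toolkit; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WsdlLinkExtractor {
    // Matches href attributes whose target ends in ".wsdl" or "?wsdl",
    // the usual conventions for links to a service's WSDL document.
    // (Simplified stand-in for walking the parse tree with HtmlParser.)
    private static final Pattern WSDL_HREF = Pattern.compile(
            "href=[\"']([^\"']+(?:\\.wsdl|\\?wsdl))[\"']",
            Pattern.CASE_INSENSITIVE);

    // Extracts candidate WSDL URLs from the raw HTML of a crawled page.
    public static List<String> extractWsdlUrls(String html) {
        List<String> urls = new ArrayList<>();
        Matcher m = WSDL_HREF.matcher(html);
        while (m.find()) {
            urls.add(m.group(1));
        }
        return urls;
    }

    public static void main(String[] args) {
        String page = "<a href=\"http://example.com/svc?wsdl\">svc</a>";
        // Prints each extracted WSDL URL on its own line.
        for (String url : extractWsdlUrls(page)) {
            System.out.println(url);
        }
    }
}
```

In the real crawler each extracted URL would then be handed to the download step (3) and the surrounding page text to the database-loading step (4).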
By crawling Web services in this way, users can access them through a unified portal instead of searching the Internet blindly, which provides a more convenient platform for the use and development of services.
Keywords: crawler; Web service; WSDL
ABSTRACT
The number of services on the Internet is increasing rapidly, and the information that websites provide about services varies widely, which makes it difficult for users to find the right services quickly and correctly. To solve this problem, the service's WSDL URL and the related service information need to be captured and stored.
This thesis completed a Web crawler for analyzing and capturing Web service information. The main research and development contents are: 1) Analyzing the service website structure: before capturing the services on a website, the pages of the site that need to be crawled, and the information on those pages that needs to be extracted, must be analyzed manually; 2) Page information extraction: according to that manual analysis, the HtmlParser toolkit is used to extract the information from each page; 3) Downloading the WSDL document: after capturing the service's WSDL URL, HtmlClient is used to download the WSDL document; 4) Loading service information into the database: the service information obtained during page extraction is stored in the database.
After crawling the Web services, users can access them through a unified portal rather than blindly searching on the Internet, which provides a more convenient platform environment for the use and development of services.
Key words: Crawler; Web service; WSDL
Contents
Chapter 1 Introduction
Background and significance of the problem
Research goals and objectives
Organization of the thesis
Chapter 2 Related Work and Requirements Analysis
Related work
Functional description
Runtime environment
Chapter 3 Detailed Design of the Web-Service-Oriented Crawler
Overall architecture
Database design
Package design
Process design
Chapter 4 Implementation of the Web-Service-Oriented Focused Crawler
4.