Trafilatura is a Python package and command-line tool designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are web crawling, downloads, scraping, and extraction of main texts, metadata and comments. It aims at staying handy and modular: no database is required, and the output can be converted to various commonly used formats.

Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the noise caused by recurring elements (headers, footers, links/blogroll etc.) and second by including information such as author and date in order to make sense of the data. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be robust and reasonably fast, as it runs in production on millions of documents.

This tool can be useful for quantitative research in corpus linguistics, natural language processing, computational social science and beyond: it is relevant to anyone interested in data science, information extraction, text mining, and scraping-intensive use cases like search engine optimization, business analytics or information security.

Features:
- Seamless and parallel processing, online and offline:
  - URLs, HTML files or parsed HTML trees usable as input
  - Efficient and polite processing of download queues
  - Conversion of previously downloaded files
- Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
- URL management (blacklists, filtering and de-duplication)
- Extraction of key elements:
  - Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
  - Metadata (title, author, date, site name, categories and tags)
  - Formatting and structural elements: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting
- Output formats: CSV (with metadata, tab-separated values), XML (with metadata, text formatting and page structure) and TEI-XML

Evaluation: for detailed results see the benchmark and evaluation script. To reproduce the tests, clone the repository, install all necessary packages and run the evaluation script with the data provided in the tests directory.
- Most efficient open-source library in ScrapingHub's article extraction benchmark
- Best overall tool according to Gaël Lejeune & Adrien Barbaresi, Bien choisir son outil d'extraction de contenu à partir du Web (2020, PDF, French)

The primary installation method is with a Python package manager:

$ pip install trafilatura

Basic usage on the command line:

$ trafilatura -u ""    # outputs main content and comments as plain text

For more information, please refer to the usage documentation and tutorials.