From de9d6642ad2a57830f559fce22e36e3d68c5c70f Mon Sep 17 00:00:00 2001
From: Terry Truong
Date: Sat, 1 Oct 2022 21:06:06 +1000
Subject: Add scripts for Wikipedia dump extraction

---
 backend/hist_data/enwiki/README.md | 60 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)
 create mode 100644 backend/hist_data/enwiki/README.md

diff --git a/backend/hist_data/enwiki/README.md b/backend/hist_data/enwiki/README.md
new file mode 100644
index 0000000..e50c7e2
--- /dev/null
+++ b/backend/hist_data/enwiki/README.md
@@ -0,0 +1,60 @@
+This directory holds files obtained/derived from [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).
+
+# Downloaded Files
+- `enwiki-20220501-pages-articles-multistream.xml.bz2`
+  Contains text content and metadata for pages in enwiki.
+  Obtained via the Wikimedia dumps site, <https://dumps.wikimedia.org/>.
+  Some file content and format information was available from the dump docs
+  on Meta-Wiki, <https://meta.wikimedia.org/wiki/Data_dumps>.
+- `enwiki-20220501-pages-articles-multistream-index.txt.bz2`
+  Obtained as above. Holds lines of the form `offset:pageId:title`,
+  providing, for each page, an offset into the dump file of the compressed
+  chunk of up to 100 pages that includes it (the sketch below shows one way
+  to use these offsets).
+
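+For illustration, a minimal sketch of using one index entry to decompress the
+chunk that holds a page (not part of the scripts here; `read_chunk` is a
+hypothetical helper):
+
+```python
+import bz2
+
+DUMP = 'enwiki-20220501-pages-articles-multistream.xml.bz2'
+
+def read_chunk(offset, next_offset):
+    """Decompress the independent bz2 stream at [offset, next_offset)."""
+    with open(DUMP, 'rb') as f:
+        f.seek(offset)
+        compressed = f.read(next_offset - offset)
+    # Yields a series of <page> XML elements (up to 100 pages)
+    return bz2.decompress(compressed).decode('utf-8')
+```
+
+Here `next_offset` is the offset of the following chunk; the index itself only
+stores each chunk's start.
+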
+# Dump-Index Files
+- `gen_dump_index_db.py`
+  Creates a database version of the enwiki dump-index file.
+- `dump_index.db`
+ Generated by `gen_dump_index_db.py`.
+ Tables:
+  - `offsets`: `id INT PRIMARY KEY, title TEXT UNIQUE, offset INT, next_offset INT`
+
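+As a sketch, this table pairs with `read_chunk` above (schema as listed; the
+helper below is illustrative, not part of the scripts):
+
+```python
+import sqlite3
+
+def chunk_bounds(title):
+    """Return (offset, next_offset) for the chunk holding a page, or None."""
+    con = sqlite3.connect('dump_index.db')
+    try:
+        return con.execute(
+            'SELECT offset, next_offset FROM offsets WHERE title = ?',
+            (title,)).fetchone()
+    finally:
+        con.close()
+```
+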
+# Description Files
+- `gen_desc_data.py`
+  Reads through pages in the dump file and adds short-description info to a
+  database.
+- `desc_data.db`
+ Generated by `gen_desc_data.py`.
+ Tables:
+  - `pages`: `id INT PRIMARY KEY, title TEXT UNIQUE`
+  - `redirects`: `id INT PRIMARY KEY, target TEXT`
+  - `descs`: `id INT PRIMARY KEY, desc TEXT`
+
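+For illustration, a title lookup against these tables might look like this
+(it assumes `redirects.target` holds a page title and that one hop suffices;
+both are assumptions, not guarantees of the schema):
+
+```python
+import sqlite3
+
+con = sqlite3.connect('desc_data.db')
+
+def get_desc(title):
+    """Find a page's short description, following at most one redirect."""
+    row = con.execute('SELECT id FROM pages WHERE title = ?', (title,)).fetchone()
+    if row is None:
+        return None
+    redirect = con.execute(
+        'SELECT target FROM redirects WHERE id = ?', (row[0],)).fetchone()
+    if redirect is not None:
+        row = con.execute(
+            'SELECT id FROM pages WHERE title = ?', (redirect[0],)).fetchone()
+        if row is None:
+            return None
+    # "desc" is quoted because it is an SQL keyword
+    found = con.execute(
+        'SELECT "desc" FROM descs WHERE id = ?', (row[0],)).fetchone()
+    return found[0] if found else None
+```
+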
+# Image Files
+- `gen_img_data.py`
+  Used to find infobox image names for page IDs, and store them in a database.
+- `download_img_license_info.py`
+  Used to download licensing metadata for image names, via Wikipedia's online
+  API, and store it in a database (see the sketch after this section).
+- `img_data.db`
+  Holds metadata about infobox images for a set of page IDs.
+  Generated using `gen_img_data.py` and `download_img_license_info.py`.
+ Tables:
+  - `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT`
+    `img_name` may be NULL, which means 'none found', and is used to avoid
+    re-processing page IDs.
+  - `imgs`:
+    `id INT PRIMARY KEY, name TEXT UNIQUE, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT`
+  Might lack entries for some `img_name` values in `page_imgs`, where
+  licensing info was unavailable.
+- `download_imgs.py`
+  Used to download image files into `imgs/`.
+
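+The request that `download_img_license_info.py` makes is not spelled out
+above; for illustration, the matching fields can be pulled from the MediaWiki
+`imageinfo` API roughly as follows (a sketch; error handling omitted):
+
+```python
+import json
+import urllib.parse
+import urllib.request
+
+API = 'https://en.wikipedia.org/w/api.php'
+
+def fetch_license_info(img_name):
+    """Fetch licensing metadata for one image via the MediaWiki API."""
+    params = urllib.parse.urlencode({
+        'action': 'query',
+        'titles': 'File:' + img_name,
+        'prop': 'imageinfo',
+        'iiprop': 'extmetadata|url',
+        'format': 'json',
+    })
+    req = urllib.request.Request(API + '?' + params,
+                                 headers={'User-Agent': 'img-data-example/0.1'})
+    with urllib.request.urlopen(req) as resp:
+        pages = json.load(resp)['query']['pages']
+    info = next(iter(pages.values()))['imageinfo'][0]
+    meta = info['extmetadata']
+
+    def field(key):  # extmetadata wraps each field in {'value': ..., ...}
+        entry = meta.get(key)
+        return entry['value'] if entry else None
+
+    return {
+        'license': field('LicenseShortName'),
+        'artist': field('Artist'),
+        'credit': field('Credit'),
+        'restrictions': field('Restrictions'),
+        'url': info.get('url'),
+    }
+```
+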
+# Page View Files
+- `pageviews/pageviews-*-user.bz2`
+  Each holds Wikimedia article page-view data for one month.
+  Obtained via <https://dumps.wikimedia.org/other/pageview_complete/>.
+  Some format info was available from
+  <https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/Traffic/Pageviews>.
+- `gen_pageview_data.py`
+  Reads `pageviews/*` and `dump_index.db`, and creates a database holding
+  average monthly pageview counts (see the sketch at the end of this file).
+- `pageview_data.db`
+ Generated using `gen_pageview_data.py`.
+ Tables:
+  - `views`: `title TEXT PRIMARY KEY, id INT UNIQUE, views INT`
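+
+The exact line format parsed by `gen_pageview_data.py` is not pinned down
+above; as a rough sketch of the averaging step, assuming whitespace-separated
+lines (the field positions and the `en.wikipedia` filter here are assumptions;
+check the format docs linked above):
+
+```python
+import bz2
+import glob
+from collections import defaultdict
+
+# Assumed field positions within a line -- verify against the format docs
+WIKI, TITLE, COUNT = 0, 1, 4
+
+month_files = glob.glob('pageviews/pageviews-*-user.bz2')
+totals = defaultdict(int)
+for path in month_files:
+    with bz2.open(path, 'rt', encoding='utf-8', errors='replace') as f:
+        for line in f:
+            fields = line.rstrip('\n').split(' ')
+            if len(fields) <= COUNT or fields[WIKI] != 'en.wikipedia':
+                continue
+            totals[fields[TITLE]] += int(fields[COUNT])
+
+# Average monthly views per title
+averages = {title: total / len(month_files) for title, total in totals.items()}
+```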