This directory holds files obtained from, or derived from, [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).

# Downloaded Files

- `enwiki-20220501-pages-articles-multistream.xml.bz2`
Contains text content and metadata for pages in enwiki. Obtained via (the site suggests downloading from a mirror). Some file content and format information was available from .
- `enwiki-20220501-pages-articles-multistream-index.txt.bz2`
Obtained in the same way as above. Holds lines of the form `offset:pageId:title`, providing, for each page, an offset into the dump file of the 100-page chunk that includes it.
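
For reference, a minimal sketch of reading the index file (this is not one of the scripts listed here). Titles can themselves contain `:`, so only the first two colons on a line act as separators.

```python
import bz2

# Iterate over (offset, page_id, title) triples from the downloaded index file.
def iter_index(path="enwiki-20220501-pages-articles-multistream-index.txt.bz2"):
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            # Format: offset:pageId:title; split on the first two colons only,
            # since titles may contain ':' themselves.
            offset, page_id, title = line.rstrip("\n").split(":", 2)
            yield int(offset), int(page_id), title

if __name__ == "__main__":
    for i, entry in enumerate(iter_index()):
        print(*entry)
        if i >= 4:  # just show the first few lines
            break
```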

# Dump-Index Files

- `gen_dump_index_db.py`
Creates a database version of the enwiki dump-index file (see the sketch below).
- `dump_index.db`
Generated by `gen_dump_index_db.py`.
Tables:
- `offsets`: `title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT`
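
A minimal sketch of the kind of processing `gen_dump_index_db.py` does, assuming the `offsets` schema above. Here `next_offset` is taken to be the offset of the following chunk (NULL for the last one), so a reader knows how many bytes to decompress. The sketch holds the whole index in memory; the real script likely streams or batches instead.

```python
import bz2
import sqlite3

INDEX = "enwiki-20220501-pages-articles-multistream-index.txt.bz2"

def build_dump_index_db(db_path="dump_index.db"):
    rows = []      # (title, page_id, offset) for every page
    offsets = []   # distinct chunk offsets, in file order
    with bz2.open(INDEX, "rt", encoding="utf-8") as f:
        for line in f:
            off_s, pid_s, title = line.rstrip("\n").split(":", 2)
            off = int(off_s)
            if not offsets or off != offsets[-1]:
                offsets.append(off)
            rows.append((title, int(pid_s), off))
    # Map each chunk offset to the offset of the next chunk (None for the last).
    next_offset = {o: (offsets[i + 1] if i + 1 < len(offsets) else None)
                   for i, o in enumerate(offsets)}
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS offsets "
                "(title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT)")
    con.executemany("INSERT OR REPLACE INTO offsets VALUES (?, ?, ?, ?)",
                    ((t, pid, o, next_offset[o]) for t, pid, o in rows))
    con.commit()
    con.close()

if __name__ == "__main__":
    build_dump_index_db()
```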

# Description Files

- `gen_desc_data.py`
Reads through the pages in the dump file and adds short-description info to a database (see the sketch below).
- `desc_data.db`
Generated by `gen_desc_data.py`.
Tables:
- `pages`: `id INT PRIMARY KEY, title TEXT UNIQUE`
- `redirects`: `id INT PRIMARY KEY, target TEXT`
- `descs`: `id INT PRIMARY KEY, desc TEXT`
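
A rough sketch of the kind of pass `gen_desc_data.py` makes over the dump to populate the three tables above: stream the XML, record each page's id and title, its redirect target if it is a redirect, and otherwise any `{{Short description|...}}` template found in the wikitext. The real script's parsing details may differ.

```python
import bz2
import re
import sqlite3
import xml.etree.ElementTree as ET

DUMP = "enwiki-20220501-pages-articles-multistream.xml.bz2"
SHORT_DESC = re.compile(r"\{\{\s*[Ss]hort description\s*\|([^|}]*)")

def local(tag):
    # Strip the "{namespace}" prefix that tags in the dump carry.
    return tag.rsplit("}", 1)[-1]

def build_desc_db(db_path="desc_data.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS pages (id INT PRIMARY KEY, title TEXT UNIQUE)")
    con.execute("CREATE TABLE IF NOT EXISTS redirects (id INT PRIMARY KEY, target TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS descs (id INT PRIMARY KEY, desc TEXT)")
    with bz2.open(DUMP, "rb") as f:
        for _, elem in ET.iterparse(f):
            if local(elem.tag) != "page":
                continue
            title = page_id = redirect = text = None
            for child in elem.iter():
                name = local(child.tag)
                if name == "title":
                    title = child.text
                elif name == "id" and page_id is None:
                    page_id = int(child.text)   # the first <id> is the page id
                elif name == "redirect":
                    redirect = child.get("title")
                elif name == "text":
                    text = child.text or ""
            con.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", (page_id, title))
            if redirect is not None:
                con.execute("INSERT OR REPLACE INTO redirects VALUES (?, ?)", (page_id, redirect))
            elif text:
                m = SHORT_DESC.search(text)
                if m:
                    con.execute("INSERT OR REPLACE INTO descs VALUES (?, ?)",
                                (page_id, m.group(1).strip()))
            elem.clear()   # keep the streamed tree small
    con.commit()
    con.close()

if __name__ == "__main__":
    build_desc_db()
```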

# Image Files

- `gen_img_data.py`
Finds infobox image names for page IDs and stores them in a database (a rough sketch appears at the end of this section).
- `download_img_license_info.py`
Downloads licensing metadata for image names via Wikipedia's online API and stores it in a database (see the API sketch at the end of this section).
- `img_data.db`
Holds metadata about infobox images for a set of page IDs. Generated using `gen_img_data.py` and `download_img_license_info.py`.
Tables:
- `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT`
`img_name` may be null, meaning 'none found'; this is used to avoid re-processing page IDs.
- `imgs`: `name TEXT PRIMARY KEY, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT`
May lack entries for some `img_name` values in `page_imgs`, where licensing info was unavailable.
- `download_imgs.py`
Downloads the image files into `imgs/`.
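
A very rough heuristic for what `gen_img_data.py` looks for: the value of an infobox `image` parameter in a page's wikitext. Real infoboxes use many parameter names and markups, so the actual script is likely more involved; this only illustrates the idea.

```python
import re

# Match lines like "| image = Foo.jpg" or "|image=[[File:Foo.jpg|thumb]]".
IMAGE_PARAM = re.compile(
    r"^\s*\|\s*image\s*=\s*(?:\[\[)?(?:File:|Image:)?([^|\]\n]+)",
    re.IGNORECASE | re.MULTILINE)

def infobox_image_name(wikitext):
    m = IMAGE_PARAM.search(wikitext)
    return m.group(1).strip() if m else None

print(infobox_image_name("{{Infobox person\n| name = Example\n| image = Example photo.jpg\n}}"))
```

And a sketch of the kind of API request `download_img_license_info.py` makes: the MediaWiki API's `imageinfo` prop with `extmetadata|url` returns fields that map onto the `imgs` columns above. The real script's exact parameters, rate limiting, and error handling may differ.

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def fetch_license_info(img_name):
    params = {
        "action": "query",
        "format": "json",
        "titles": "File:" + img_name,
        "prop": "imageinfo",
        "iiprop": "extmetadata|url",
    }
    url = API + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "img-license-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    page = next(iter(data["query"]["pages"].values()))
    info = (page.get("imageinfo") or [{}])[0]
    meta = info.get("extmetadata", {})

    def value(key):
        return meta.get(key, {}).get("value")

    return {
        "license": value("LicenseShortName"),
        "artist": value("Artist"),
        "credit": value("Credit"),
        "restrictions": value("Restrictions"),
        "url": info.get("url"),
    }

if __name__ == "__main__":
    print(fetch_license_info("Example.jpg"))
```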

# Page View Files

- `pageviews/pageviews-*-user.bz2`
Each holds Wikimedia article pageview data for one month. Obtained via . Some format info was available from .
- `gen_pageview_data.py`
Reads `pageviews/*` and `dump_index.db`, and creates a database holding average monthly pageview counts (see the sketch below).
- `pageview_data.db`
Generated using `gen_pageview_data.py`.
Tables:
- `views`: `title TEXT PRIMARY KEY, id INT, views INT`
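
A rough sketch of the aggregation `gen_pageview_data.py` performs. The line layout assumed below (space-separated fields, wiki code first, article title second, a monthly count in the fifth field) is only an assumption about the pageview files' format and should be checked against the actual data. Titles are resolved to page IDs through `dump_index.db`, and totals are averaged over the number of months read.

```python
import bz2
import glob
import sqlite3
from collections import defaultdict

def build_pageview_db(db_path="pageview_data.db", index_db="dump_index.db"):
    files = sorted(glob.glob("pageviews/pageviews-*-user.bz2"))
    totals = defaultdict(int)
    for path in files:
        with bz2.open(path, "rt", encoding="utf-8", errors="replace") as f:
            for line in f:
                parts = line.split(" ")
                # Assumed layout: wiki_code title page_id agent monthly_count ...
                if len(parts) < 5 or parts[0] != "en.wikipedia":
                    continue
                title = parts[1].replace("_", " ")  # index titles use spaces
                try:
                    totals[title] += int(parts[4])
                except ValueError:
                    continue
    months = max(len(files), 1)
    idx = sqlite3.connect(index_db)
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS views (title TEXT PRIMARY KEY, id INT, views INT)")
    for title, total in totals.items():
        row = idx.execute("SELECT id FROM offsets WHERE title = ?", (title,)).fetchone()
        if row is None:
            continue  # not a page in the dump index
        con.execute("INSERT OR REPLACE INTO views VALUES (?, ?, ?)",
                    (title, row[0], total // months))
    con.commit()
    con.close()
    idx.close()

if __name__ == "__main__":
    build_pageview_db()
```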

# Other Files

- `lookup_page.py`
Running `lookup_page.py <title>` looks in the dump for a page with the given title and prints its contents to stdout. Uses `dump_index.db`.
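
A minimal sketch of how such a lookup can work with `dump_index.db` and the multistream dump: fetch the page's chunk offset, decompress just that one bz2 stream, and scan the decompressed XML for the matching `<page>` element. The real script's argument handling and matching may differ, and titles containing XML-special characters (`&`, `<`, ...) would need escaping.

```python
import bz2
import sqlite3
import sys

DUMP = "enwiki-20220501-pages-articles-multistream.xml.bz2"

def lookup(title, db_path="dump_index.db"):
    con = sqlite3.connect(db_path)
    row = con.execute("SELECT offset, next_offset FROM offsets WHERE title = ?",
                      (title,)).fetchone()
    con.close()
    if row is None:
        return None
    offset, next_offset = row
    with open(DUMP, "rb") as f:
        f.seek(offset)
        # Assumes next_offset is NULL for the final chunk; read(-1) then reads to EOF.
        length = (next_offset - offset) if next_offset is not None else -1
        compressed = f.read(length)
    # Each chunk is an independent bz2 stream holding ~100 <page> elements.
    xml = bz2.BZ2Decompressor().decompress(compressed).decode("utf-8")
    needle = "<title>" + title + "</title>"
    for fragment in xml.split("</page>"):
        if needle in fragment:
            return fragment + "</page>"
    return None

if __name__ == "__main__":
    page = lookup(sys.argv[1])
    if page is None:
        sys.exit("page not found")
    print(page)
```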