| author | Terry Truong <terry06890@gmail.com> | 2022-09-11 14:55:42 +1000 |
|---|---|---|
| committer | Terry Truong <terry06890@gmail.com> | 2022-09-11 15:04:14 +1000 |
| commit | 5de5fb93e50fe9006221b30ac4a66f1be0db82e7 | |
| tree | 2567c25c902dbb40d44419805cebb38171df47fa /backend/tol_data/enwiki/README.md | |
| parent | daccbbd9c73a5292ea9d6746560d7009e5aa666d | |
Add backend unit tests
- Add unit testing code in backend/tests/
- Change to snake_case for script/file/directory names
- Use os.path.join() instead of '/'
- Refactor script code into function defs and a main-guard
- Make global vars all-caps
Some fixes:
- For getting descriptions, some wiki redirects weren't properly resolved
- Linked images were sub-optimally propagated
- Generation of reduced trees assumed a wiki-id association implied a description
- Tilo.py had potential null dereferences from not always using a reduced node set
- EOL image downloading didn't properly wait for all threads to end when finishing
Diffstat (limited to 'backend/tol_data/enwiki/README.md')
| -rw-r--r-- | backend/tol_data/enwiki/README.md | 63 |
1 file changed, 63 insertions, 0 deletions
diff --git a/backend/tol_data/enwiki/README.md b/backend/tol_data/enwiki/README.md
new file mode 100644
index 0000000..ba1de33
--- /dev/null
+++ b/backend/tol_data/enwiki/README.md
@@ -0,0 +1,63 @@
+This directory holds files obtained/derived from [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).
+
+# Downloaded Files
+- `enwiki-20220501-pages-articles-multistream.xml.bz2` <br>
+  Contains text content and metadata for pages in enwiki.
+  Obtained via <https://dumps.wikimedia.org/backup-index.html> (the site suggests downloading from a mirror).
+  Some file content and format information was available from
+  <https://meta.wikimedia.org/wiki/Data_dumps/What%27s_available_for_download>.
+- `enwiki-20220501-pages-articles-multistream-index.txt.bz2` <br>
+  Obtained as above. Holds lines of the form `offset:pageId:title`,
+  providing, for each page, an offset into the dump file of a chunk of
+  100 pages that includes it.
+
+# Dump-Index Files
+- `gen_dump_index_db.py` <br>
+  Creates a database version of the enwiki dump-index file.
+- `dumpIndex.db` <br>
+  Generated by `gen_dump_index_db.py`. <br>
+  Tables: <br>
+  - `offsets`: `title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT`
+
+# Description Database Files
+- `gen_desc_data.py` <br>
+  Reads through pages in the dump file, and adds short-description info to a database.
+- `desc_data.db` <br>
+  Generated by `gen_desc_data.py`. <br>
+  Tables: <br>
+  - `pages`: `id INT PRIMARY KEY, title TEXT UNIQUE`
+  - `redirects`: `id INT PRIMARY KEY, target TEXT`
+  - `descs`: `id INT PRIMARY KEY, desc TEXT`
+
+# Image Database Files
+- `gen_img_data.py` <br>
+  Used to find infobox image names for page IDs, storing them in a database.
+- `download_img_license_info.py` <br>
+  Used to download licensing metadata for image names, via Wikipedia's online API, storing it in a database.
+- `img_data.db` <br>
+  Holds metadata about infobox images for a set of page IDs.
+  Generated using `gen_img_data.py` and `download_img_license_info.py`. <br>
+  Tables: <br>
+  - `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT` <br>
+    `img_name` may be null, which means 'none found', and is used to avoid re-processing page IDs.
+  - `imgs`: `name TEXT PRIMARY KEY, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT` <br>
+    Might lack matches for some `img_name` values in `page_imgs`, due to licensing info being unavailable.
+- `download_imgs.py` <br>
+  Used to download image files into `imgs/`.
+
+# Page View Files
+- `pageviews/pageviews-*-user.bz2` <br>
+  Each holds Wikimedia article pageview data for one month.
+  Obtained via <https://dumps.wikimedia.org/other/pageview_complete/monthly/>.
+  Some format info was available from <https://dumps.wikimedia.org/other/pageview_complete/readme.html>.
+- `gen_pageview_data.py` <br>
+  Reads `pageviews/*`, and creates a database holding average monthly pageview counts.
+- `pageview_data.db` <br>
+  Generated using `gen_pageview_data.py`. <br>
+  Tables: <br>
+  - `views`: `title TEXT PRIMARY KEY, id INT, views INT`
+
+# Other Files
+- `lookup_page.py` <br>
+  Running `lookup_page.py title1` searches the dump for a page with the given title,
+  and prints its contents to stdout. Uses `dumpIndex.db`.
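
As an illustration of how the `offsets` table and the multistream dump fit together, here is a minimal Python sketch of a page lookup in the spirit of `lookup_page.py`. This is not the repo's code: it assumes only the `dumpIndex.db` schema listed in the README above, and the function name and naive title matching are illustrative.

```python
import bz2
import sqlite3

DUMP_FILE = 'enwiki-20220501-pages-articles-multistream.xml.bz2'
INDEX_DB = 'dumpIndex.db'

def read_page_xml(title):
    """Extract one page's XML from the multistream dump.

    Each bz2 stream in the dump holds a chunk of ~100 pages; 'offset' and
    'next_offset' bound the stream that contains the given title.
    """
    con = sqlite3.connect(INDEX_DB)
    row = con.execute(
        'SELECT offset, next_offset FROM offsets WHERE title = ?', (title,)
    ).fetchone()
    con.close()
    if row is None:
        return None
    offset, next_offset = row
    with open(DUMP_FILE, 'rb') as f:
        f.seek(offset)
        # next_offset may be absent for the dump's final chunk
        raw = f.read() if next_offset is None else f.read(next_offset - offset)
    chunk = bz2.decompress(raw).decode('utf-8')
    # The chunk is a series of <page>...</page> elements; scan for ours
    for fragment in chunk.split('</page>'):
        if '<title>' + title + '</title>' in fragment:
            return fragment + '</page>'
    return None
```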
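The commit's redirect fix suggests the lookup pattern for `desc_data.db`: map a title to an ID via `pages`, follow `redirects` until reaching a non-redirect, then read `descs`. A hedged sketch, assuming only the schema above; the function name and hop cap are illustrative.

```python
import sqlite3

def get_short_desc(title, db_file='desc_data.db', max_hops=10):
    """Look up a page's short description, resolving redirect chains.

    The hop cap guards against redirect cycles in the data.
    """
    con = sqlite3.connect(db_file)
    try:
        for _ in range(max_hops):
            row = con.execute(
                'SELECT id FROM pages WHERE title = ?', (title,)).fetchone()
            if row is None:
                return None
            page_id = row[0]
            redirect = con.execute(
                'SELECT target FROM redirects WHERE id = ?', (page_id,)).fetchone()
            if redirect is None:
                # Not a redirect: fetch the description, if one was recorded
                # ("desc" is quoted since it collides with an SQL keyword)
                row = con.execute(
                    'SELECT "desc" FROM descs WHERE id = ?', (page_id,)).fetchone()
                return row[0] if row else None
            title = redirect[0]  # follow the redirect and retry
        return None  # gave up: likely a redirect cycle
    finally:
        con.close()
```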
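Since `imgs` can lack entries for some names in `page_imgs` (when licensing info was unavailable), a query like the following, assuming the `img_data.db` schema above, would list the affected pages:

```python
import sqlite3

con = sqlite3.connect('img_data.db')
# Pages whose infobox image was found but has no licensing metadata yet
rows = con.execute(
    'SELECT page_id, img_name FROM page_imgs'
    ' WHERE img_name IS NOT NULL'
    ' AND img_name NOT IN (SELECT name FROM imgs)'
).fetchall()
con.close()
for page_id, img_name in rows:
    print(page_id, img_name)
```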
