This directory holds files obtained/derived from [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).

# Downloaded Files
- enwiki-20220501-pages-articles-multistream.xml.bz2 <br>
 Contains text content and metadata for pages in enwiki.
 Obtained via <https://dumps.wikimedia.org/backup-index.html> (the site suggests downloading from a mirror).
 Some file content and format information is available at
 <https://meta.wikimedia.org/wiki/Data_dumps/What%27s_available_for_download>.
- enwiki-20220501-pages-articles-multistream-index.txt.bz2 <br>
 Obtained as above. Holds lines of the form offset1:pageId1:title1,
 providing, for each page, the byte offset into the dump file of the
 compressed 100-page chunk that contains it (see the parsing sketch after this list).
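
A minimal sketch of parsing that index format (the name `iter_index` is illustrative, not part of this repository):

```python
import bz2

def iter_index(path="enwiki-20220501-pages-articles-multistream-index.txt.bz2"):
    """Yield (chunk_offset, page_id, title) tuples from the dump index."""
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            # Titles may themselves contain ':', so split at most twice.
            offset, page_id, title = line.rstrip("\n").split(":", 2)
            yield int(offset), int(page_id), title
```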

# Dump-Index Files
- genDumpIndexDb.py <br>
 Creates a database version of the enwiki-dump index file (a construction sketch follows this list).
- dumpIndex.db <br>
 Generated by genDumpIndexDb.py. <br>
 Tables: <br>
 - `offsets`: `title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT`
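
A hypothetical reconstruction of that build step (the actual script may differ). Here `next_offset` is taken to be the start of the following 100-page chunk, with -1 marking the final one, so a reader knows how many compressed bytes to pull from the dump:

```python
import bz2
import sqlite3

rows = []  # (title, page_id, chunk_offset)
with bz2.open("enwiki-20220501-pages-articles-multistream-index.txt.bz2",
              "rt", encoding="utf-8") as f:
    for line in f:
        offset, page_id, title = line.rstrip("\n").split(":", 2)
        rows.append((title, int(page_id), int(offset)))

# Map each chunk offset to the start of the next chunk (-1 for the last).
chunk_starts = sorted({offset for _, _, offset in rows})
next_of = dict(zip(chunk_starts, chunk_starts[1:]))

db = sqlite3.connect("dumpIndex.db")
db.execute("CREATE TABLE IF NOT EXISTS offsets (title TEXT PRIMARY KEY,"
           " id INT UNIQUE, offset INT, next_offset INT)")
db.executemany("INSERT OR IGNORE INTO offsets VALUES (?, ?, ?, ?)",
               ((t, i, o, next_of.get(o, -1)) for t, i, o in rows))
db.commit()
db.close()
```

For brevity this holds the whole index in memory; the real index runs to millions of lines, so the actual script presumably streams it in batches.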

# Description Database Files
- genDescData.py <br>
 Reads through pages in the dump file, and adds short-description info to a database (an extraction sketch follows this list).
- descData.db <br>
 Generated by genDescData.py. <br>
 Tables: <br>
 - `pages`: `id INT PRIMARY KEY, title TEXT UNIQUE`
 - `redirects`: `id INT PRIMARY KEY, target TEXT`
 - `descs`: `id INT PRIMARY KEY, desc TEXT`
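
One plausible reading of that schema, as a sketch (hypothetical reconstruction; it assumes descriptions come from {{Short description|...}} templates and redirect targets from #REDIRECT lines, with `db` being an open sqlite3 connection):

```python
import re

SHORTDESC = re.compile(r"\{\{\s*[Ss]hort description\s*\|\s*([^}|]*)")
REDIRECT = re.compile(r"#REDIRECT\s*\[\[([^\]#|]+)", re.IGNORECASE)

def record_page(db, page_id, title, wikitext):
    """Insert one dump page into the three descData.db tables."""
    text = wikitext or ""
    db.execute("INSERT OR IGNORE INTO pages VALUES (?, ?)", (page_id, title))
    m = REDIRECT.match(text.lstrip())
    if m:  # redirect pages carry a target instead of a description
        db.execute("INSERT OR IGNORE INTO redirects VALUES (?, ?)",
                   (page_id, m.group(1).strip()))
        return
    m = SHORTDESC.search(text)
    if m:
        db.execute("INSERT OR IGNORE INTO descs VALUES (?, ?)",
                   (page_id, m.group(1).strip()))
```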

# Image Database Files
- genImgData.py <br>
 Used to find infobox image names for page IDs, storing them in a database.
- downloadImgLicenseInfo.py <br>
 Used to download licensing metadata for image names, via Wikipedia's online API, storing it in a database (an API sketch follows this list).
- imgData.db <br>
 Used to hold metadata about infobox images for a set of page IDs.
 Generated using genImgData.py and downloadImgLicenseInfo.py. <br>
 Tables: <br>
 - `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT` <br>
 `img_name` may be null, which means 'none found', and is used to avoid re-processing page IDs.
 - `imgs`: `name TEXT PRIMARY KEY, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT` <br>
 Might lack some matches for `img_name` in `page_imgs`, since licensing info is not always available.
- downloadImgs.py <br>
 Used to download image files into imgs/.
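
A sketch of the kind of request downloadImgLicenseInfo.py likely makes (hypothetical reconstruction, using the third-party requests package; the extmetadata keys are standard MediaWiki imageinfo fields, chosen here to line up with the `imgs` columns):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_license_info(img_name):
    """Return an imgs-table row for img_name, or None if no info exists."""
    params = {
        "action": "query", "format": "json",
        "titles": f"File:{img_name}",
        "prop": "imageinfo",
        "iiprop": "extmetadata|url",
    }
    headers = {"User-Agent": "tolData-script/0.1"}  # API etiquette
    data = requests.get(API, params=params, headers=headers).json()
    page = next(iter(data["query"]["pages"].values()))
    info = page.get("imageinfo")
    if not info:  # licensing info unavailable; leave unmatched in imgs
        return None
    meta = info[0].get("extmetadata", {})

    def value(key):
        return meta.get(key, {}).get("value")

    return (img_name, value("LicenseShortName"), value("Artist"),
            value("Credit"), value("Restrictions"), info[0].get("url"))
```

The API also accepts multiple titles joined with `|` per request, which the actual script would likely use for batching.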

# Page View Files
- pageviews/pageviews-*-user.bz2 <br>
 Each holds Wikimedia article page view data for one month.
 Obtained via <https://dumps.wikimedia.org/other/pageview_complete/monthly/>.
 Some format info is available at <https://dumps.wikimedia.org/other/pageview_complete/readme.html>.
- genPageviewData.py <br>
 Reads pageviews/*, and creates a database holding average monthly pageview counts (an aggregation sketch follows this list).
- pageviewData.db <br>
 Generated using genPageviewData.py. <br>
 Tables: <br>
 - `views`: `title TEXT PRIMARY KEY, id INT, views INT`
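
A sketch of that aggregation (hypothetical reconstruction; it assumes a whitespace-separated layout of wiki code, article title, page ID, access site, and monthly view count, so the column indices may need adjusting to the actual files):

```python
import bz2
import glob
import sqlite3
from collections import defaultdict

totals, ids = defaultdict(int), {}
paths = sorted(glob.glob("pageviews/pageviews-*-user.bz2"))
for path in paths:
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            parts = line.split(" ")
            if parts[0] != "en.wikipedia":  # skip other wikis
                continue
            # parts: wiki code, title, page ID, access site, monthly count, ...
            totals[parts[1]] += int(parts[4])
            ids[parts[1]] = parts[2]

db = sqlite3.connect("pageviewData.db")
db.execute("CREATE TABLE IF NOT EXISTS views"
           " (title TEXT PRIMARY KEY, id INT, views INT)")
db.executemany("INSERT OR REPLACE INTO views VALUES (?, ?, ?)",
               ((t, ids[t], total // len(paths))
                for t, total in totals.items()))
db.commit()
db.close()
```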

# Other Files
- lookupPage.py <br>
 Running `lookupPage.py title1` looks in the dump for a page with the given title,
 and prints its contents to stdout. Uses dumpIndex.db (a lookup sketch follows).
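
A sketch of how that lookup can work (hypothetical reconstruction): each 100-page chunk in the multistream dump is an independent bz2 stream, so the script can seek straight to `offset`, decompress only `next_offset - offset` bytes, and scan just those pages.

```python
import bz2
import sqlite3
import sys

DUMP = "enwiki-20220501-pages-articles-multistream.xml.bz2"

def lookup(title):
    db = sqlite3.connect("dumpIndex.db")
    row = db.execute("SELECT offset, next_offset FROM offsets WHERE title = ?",
                     (title,)).fetchone()
    if row is None:
        sys.exit(f"no page titled {title!r}")
    offset, next_offset = row
    with open(DUMP, "rb") as f:
        f.seek(offset)
        # next_offset of -1 means the final chunk: read to end of file.
        data = f.read(next_offset - offset) if next_offset > 0 else f.read()
    xml = bz2.decompress(data).decode("utf-8")
    # Each chunk is a bare sequence of <page> elements; find ours by title.
    # (Titles containing XML special characters would need escaping here.)
    for fragment in xml.split("</page>"):
        if f"<title>{title}</title>" in fragment:
            print(fragment + "</page>")
            return
    sys.exit(f"page {title!r} not found in its chunk")

if __name__ == "__main__":
    lookup(sys.argv[1])
```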