diff --git a/backend/hist_data/enwiki/README.md b/backend/hist_data/enwiki/README.md
new file mode 100644
index 0000000..e50c7e2
--- /dev/null
+++ b/backend/hist_data/enwiki/README.md
@@ -0,0 +1,60 @@
+This directory holds files obtained/derived from [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).
+
+# Downloaded Files
+- `enwiki-20220501-pages-articles-multistream.xml.bz2` <br>
+ Contains text content and metadata for pages in enwiki.
+ Obtained via <https://dumps.wikimedia.org/backup-index.html>.
+  Some information about the dump's content and format is available at
+  <https://meta.wikimedia.org/wiki/Data_dumps/What%27s_available_for_download>.
+- `enwiki-20220501-pages-articles-multistream-index.txt.bz2` <br>
+  Obtained from the same place as above. Holds lines of the form
+  `offset:pageId:title`, giving, for each page, the byte offset into the dump
+  file of the compressed chunk of up to 100 pages that contains it
+  (a parsing sketch follows this list).
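+
+As a minimal usage sketch (not one of this directory's scripts; the helper name
+is hypothetical, and the `split(":", 2)` handling of titles that themselves
+contain `:` is an assumption), the index file can be scanned for a page's
+offset like this:
+
+```python
+import bz2
+
+def find_offset(index_path, wanted_title):
+    """Scan the compressed index file for a title; return (offset, page_id) or None."""
+    with bz2.open(index_path, mode="rt", encoding="utf-8") as f:
+        for line in f:
+            # Split only twice: titles may contain ':' themselves.
+            offset, page_id, title = line.rstrip("\n").split(":", 2)
+            if title == wanted_title:
+                return int(offset), int(page_id)
+    return None
+```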
+
+# Dump-Index Files
+- `gen_dump_index_db.py` <br>
+ Creates a database version of the enwiki-dump index file.
+- `dump_index.db` <br>
+  Generated by `gen_dump_index_db.py` (see the usage sketch after this list). <br>
+ Tables: <br>
+ - `offsets`: `id INT PRIMARY KEY, title TEXT UNIQUE, offset INT, next_offset INT`
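+
+A minimal sketch of extracting the chunk that contains a given title, assuming
+the database is SQLite (as the `.db` extension and the Python scripts here
+suggest) and that `next_offset` is the byte offset of the following chunk:
+
+```python
+import bz2
+import sqlite3
+
+DUMP = "enwiki-20220501-pages-articles-multistream.xml.bz2"
+
+def read_chunk(title, db_path="dump_index.db"):
+    """Return the decompressed XML of the ~100-page chunk containing `title`, or None."""
+    con = sqlite3.connect(db_path)
+    row = con.execute(
+        "SELECT offset, next_offset FROM offsets WHERE title = ?", (title,)
+    ).fetchone()
+    con.close()
+    if row is None:
+        return None
+    offset, next_offset = row
+    with open(DUMP, "rb") as f:
+        f.seek(offset)
+        data = f.read(next_offset - offset)  # one complete bz2 stream
+    return bz2.decompress(data).decode("utf-8")
+```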
+
+# Description Files
+- `gen_desc_data.py` <br>
+  Reads through pages in the dump file and adds short-description info to a database.
+- `desc_data.db` <br>
+  Generated by `gen_desc_data.py` (see the lookup sketch after this list). <br>
+  Tables: <br>
+  - `pages`: `id INT PRIMARY KEY, title TEXT UNIQUE`
+  - `redirects`: `id INT PRIMARY KEY, target TEXT`
+  - `descs`: `id INT PRIMARY KEY, desc TEXT`
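+
+A minimal lookup sketch, assuming SQLite and that `redirects.target` holds the
+target page's title (as the schema suggests); `desc` is quoted because it is an
+SQL keyword:
+
+```python
+import sqlite3
+
+def get_desc(title, db_path="desc_data.db"):
+    """Return the short description for a title, following one redirect hop."""
+    con = sqlite3.connect(db_path)
+    try:
+        row = con.execute("SELECT id FROM pages WHERE title = ?", (title,)).fetchone()
+        if row is None:
+            return None
+        page_id = row[0]
+        # If the page is a redirect, resolve it to the target page's id.
+        redirect = con.execute(
+            "SELECT target FROM redirects WHERE id = ?", (page_id,)
+        ).fetchone()
+        if redirect is not None:
+            row = con.execute(
+                "SELECT id FROM pages WHERE title = ?", (redirect[0],)
+            ).fetchone()
+            if row is None:
+                return None
+            page_id = row[0]
+        row = con.execute('SELECT "desc" FROM descs WHERE id = ?', (page_id,)).fetchone()
+        return row[0] if row else None
+    finally:
+        con.close()
+```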
+
+# Image Files
+- `gen_img_data.py` <br>
+  Used to find infobox image names for page IDs and store them in a database.
+- `download_img_license_info.py` <br>
+  Used to download licensing metadata for image names via Wikipedia's online API and store it in a database.
+- `img_data.db` <br>
+  Holds metadata about infobox images for a set of page IDs.
+  Generated using `gen_img_data.py` and `download_img_license_info.py`
+  (see the query sketch after this list). <br>
+  Tables: <br>
+  - `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT` <br>
+    `img_name` may be NULL, meaning 'none found'; the row is kept to avoid re-processing that page ID.
+  - `imgs`:
+    `id INT PRIMARY KEY, name TEXT UNIQUE, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT`
+    <br>
+    May lack entries for some `img_name` values in `page_imgs`, where licensing info was unavailable.
+- `download_imgs.py` <br>
+  Used to download image files into `imgs/`.
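+
+A minimal query sketch (assuming SQLite) for fetching a page's infobox-image
+metadata by joining the two tables above:
+
+```python
+import sqlite3
+
+def get_img_info(page_id, db_path="img_data.db"):
+    """Return (name, license, artist, credit, url) for a page's infobox image, or None."""
+    con = sqlite3.connect(db_path)
+    row = con.execute(
+        """SELECT i.name, i.license, i.artist, i.credit, i.url
+           FROM page_imgs AS p JOIN imgs AS i ON p.img_name = i.name
+           WHERE p.page_id = ?""",
+        (page_id,),
+    ).fetchone()
+    con.close()
+    return row
+```
+A NULL `img_name`, or a missing `imgs` entry, simply yields no row here.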
+
+# Page View Files
+- `pageviews/pageviews-*-user.bz2` <br>
+  Each holds Wikimedia article page-view data for one month.
+  Obtained via <https://dumps.wikimedia.org/other/pageview_complete/monthly/>.
+  Some format information is available at <https://dumps.wikimedia.org/other/pageview_complete/readme.html>.
+- `gen_pageview_data.py` <br>
+  Reads `pageviews/*` and `dump_index.db`, and creates a database holding average monthly page-view counts.
+- `pageview_data.db` <br>
+  Generated using `gen_pageview_data.py` (see the query sketch below). <br>
+ Tables: <br>
+ - `views`: `title TEXT PRIMARY KEY, id INT UNIQUE, views INT`
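+
+A minimal query sketch (assuming SQLite):
+
+```python
+import sqlite3
+
+def get_avg_views(title, db_path="pageview_data.db"):
+    """Return the average monthly page-view count for a title, or None."""
+    con = sqlite3.connect(db_path)
+    row = con.execute("SELECT views FROM views WHERE title = ?", (title,)).fetchone()
+    con.close()
+    return row[0] if row else None
+```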