author    Terry Truong <terry06890@gmail.com>  2022-07-11 01:54:08 +1000
committer Terry Truong <terry06890@gmail.com>  2022-07-11 01:54:08 +1000
commit    5fe71ea7b9d9a5d2dc6e8e5ce5b9193629eed74d (patch)
tree      3b8b9d7299540a812ec93e224f8fc71249a98860 /backend/tolData/enwiki/README.md
parent    a8f80a02b88055cfcb45664ce3a3d24c2b2da98c (diff)
Make backend dev server script serve the image files

Previously, image files in backend/data/img were moved to, or symlinked from, public/. This needed to be changed before each build; otherwise, vite would end up copying gigabytes of images.
Diffstat (limited to 'backend/tolData/enwiki/README.md')
-rw-r--r--  backend/tolData/enwiki/README.md  52
1 file changed, 52 insertions, 0 deletions
diff --git a/backend/tolData/enwiki/README.md b/backend/tolData/enwiki/README.md
new file mode 100644
index 0000000..90d16c7
--- /dev/null
+++ b/backend/tolData/enwiki/README.md
@@ -0,0 +1,52 @@
+This directory holds files obtained from, or generated using, [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).
+
+# Downloaded Files
+- enwiki-20220501-pages-articles-multistream.xml.bz2 <br>
+  Obtained via <https://dumps.wikimedia.org/backup-index.html> (the site suggests downloading from a mirror).
+  Contains text content and metadata for pages in enwiki.
+  Information about the dump's content and format is available at
+  <https://meta.wikimedia.org/wiki/Data_dumps/What%27s_available_for_download>.
+- enwiki-20220501-pages-articles-multistream-index.txt.bz2 <br>
+  Obtained as above. Holds lines of the form `offset:pageId:title`,
+  giving, for each page, the byte offset into the dump file of the
+  compressed chunk of 100 pages that contains it (parsing sketched below).
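+
+  For illustration, a minimal sketch of reading this index (the file name
+  matches the dump above; `read_index` is a hypothetical helper, not a
+  script in this directory):
+
+  ```python
+  import bz2
+
+  # Yield (offset, page_id, title) triples from the multistream index.
+  # Titles can themselves contain ':', so split at most twice.
+  def read_index(path='enwiki-20220501-pages-articles-multistream-index.txt.bz2'):
+      with bz2.open(path, mode='rt', encoding='utf-8') as f:
+          for line in f:
+              offset, page_id, title = line.rstrip('\n').split(':', maxsplit=2)
+              yield int(offset), int(page_id), title
+  ```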
+
+# Generated Dump-Index Files
+- genDumpIndexDb.py <br>
+  Creates an SQLite-database version of the enwiki dump-index file (sketched after this list).
+- dumpIndex.db <br>
+ Generated by genDumpIndexDb.py. <br>
+ Tables: <br>
+ - `offsets`: `title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT`
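+
+A hedged sketch of how genDumpIndexDb.py might populate this table (the real
+script may differ; this version loads the whole index into memory for
+simplicity, and storing -1 as the last chunk's `next_offset` is an assumption):
+
+```python
+import bz2, sqlite3
+
+# Read (offset, page_id, title) triples from the index file.
+rows = []
+with bz2.open('enwiki-20220501-pages-articles-multistream-index.txt.bz2',
+              mode='rt', encoding='utf-8') as f:
+    for line in f:
+        offset, page_id, title = line.rstrip('\n').split(':', maxsplit=2)
+        rows.append((int(offset), int(page_id), title))
+
+# Map each chunk offset to the next chunk's offset, so a reader knows
+# where the containing bz2 stream ends (-1 marks the last chunk).
+offsets = sorted({off for off, _, _ in rows})
+next_of = dict(zip(offsets, offsets[1:] + [-1]))
+
+db = sqlite3.connect('dumpIndex.db')
+db.execute('CREATE TABLE offsets ('
+           'title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT)')
+db.executemany('INSERT OR IGNORE INTO offsets VALUES (?, ?, ?, ?)',
+               ((title, pid, off, next_of[off]) for off, pid, title in rows))
+db.commit()
+db.close()
+```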
+
+# Description Database Files
+- genDescData.py <br>
+  Reads through pages in the dump file and adds short-description info to a database.
+- descData.db <br>
+  Generated by genDescData.py (an example query appears after this list). <br>
+ Tables: <br>
+ - `pages`: `id INT PRIMARY KEY, title TEXT UNIQUE`
+ - `redirects`: `id INT PRIMARY KEY, target TEXT`
+ - `descs`: `id INT PRIMARY KEY, desc TEXT`
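+
+As a usage illustration, a sketch that resolves a title to its short
+description, following one level of redirect (`get_desc` is hypothetical;
+only the table and column names come from the schema above):
+
+```python
+import sqlite3
+
+def get_desc(db: sqlite3.Connection, title: str):
+    # Resolve the title to a page ID.
+    row = db.execute('SELECT id FROM pages WHERE title = ?', (title,)).fetchone()
+    if row is None:
+        return None
+    page_id = row[0]
+    # If the page is a redirect, resolve the target title instead.
+    redir = db.execute('SELECT target FROM redirects WHERE id = ?', (page_id,)).fetchone()
+    if redir is not None:
+        row = db.execute('SELECT id FROM pages WHERE title = ?', (redir[0],)).fetchone()
+        if row is None:
+            return None
+        page_id = row[0]
+    # 'desc' is an SQL keyword, so quote the column name.
+    row = db.execute('SELECT "desc" FROM descs WHERE id = ?', (page_id,)).fetchone()
+    return row[0] if row else None
+
+db = sqlite3.connect('descData.db')
+print(get_desc(db, 'Lion'))
+```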
+
+# Image Database Files
+- genImgData.py <br>
+  Used to find infobox image names for page IDs, storing them in a database.
+- downloadImgLicenseInfo.py <br>
+  Used to download licensing metadata for image names, via Wikipedia's online API, storing it in a database (see the sketch after this list).
+- imgData.db <br>
+  Holds metadata about infobox images for a set of page IDs.
+  Generated using genImgData.py and downloadImgLicenseInfo.py. <br>
+  Tables: <br>
+  - `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT` <br>
+    `img_name` may be null, meaning 'none found'; this is used to avoid re-processing page IDs.
+  - `imgs`: `name TEXT PRIMARY KEY, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT` <br>
+    Might lack matches for some `img_name` values in `page_imgs`, where licensing info was unavailable.
+- downloadImgs.py <br>
+ Used to download image files into imgs/.
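+
+A hedged sketch of the license-metadata fetch (the real downloadImgLicenseInfo.py
+likely batches titles and handles errors; the endpoint and extmetadata field
+names come from the public MediaWiki API, the rest is assumed):
+
+```python
+import json, urllib.parse, urllib.request
+
+API = 'https://en.wikipedia.org/w/api.php'
+
+def fetch_license_info(img_name: str) -> dict:
+    params = urllib.parse.urlencode({
+        'action': 'query', 'format': 'json',
+        'titles': f'File:{img_name}',
+        'prop': 'imageinfo',
+        'iiprop': 'extmetadata|url',
+    })
+    req = urllib.request.Request(f'{API}?{params}',
+                                 headers={'User-Agent': 'tolDataBot/0.1 (example)'})
+    with urllib.request.urlopen(req) as resp:
+        data = json.load(resp)
+    page = next(iter(data['query']['pages'].values()))
+    info = page.get('imageinfo', [{}])[0]
+    meta = info.get('extmetadata', {})
+    def field(name):
+        entry = meta.get(name)
+        return entry['value'] if entry else None
+    # These extmetadata fields map onto the 'imgs' table columns.
+    return {
+        'license': field('LicenseShortName'),
+        'artist': field('Artist'),
+        'credit': field('Credit'),
+        'restrictions': field('Restrictions'),
+        'url': info.get('url'),
+    }
+```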
+
+# Other Files
+- lookupPage.py <br>
+  Running `lookupPage.py title1` looks in the dump for a page with the given
+  title and prints its contents to stdout. Uses dumpIndex.db (technique
+  sketched below).
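+
+  A minimal sketch of the technique: each 100-page chunk of the multistream
+  dump is an independent bz2 stream, so it can be decompressed on its own,
+  starting at the offset recorded in dumpIndex.db (assumes the -1
+  `next_offset` convention sketched earlier; the real script may differ):
+
+  ```python
+  import bz2, sqlite3, sys
+
+  title = sys.argv[1]
+  db = sqlite3.connect('dumpIndex.db')
+  row = db.execute('SELECT offset, next_offset FROM offsets WHERE title = ?',
+                   (title,)).fetchone()
+  if row is None:
+      sys.exit(f'Page not found: {title}')
+  offset, next_offset = row
+
+  # Decompress just the bz2 stream that contains the page.
+  with open('enwiki-20220501-pages-articles-multistream.xml.bz2', 'rb') as f:
+      f.seek(offset)
+      data = f.read(next_offset - offset) if next_offset != -1 else f.read()
+  xml = bz2.decompress(data).decode('utf-8')
+
+  # Print the <page> element whose <title> matches.
+  idx = xml.find(f'<title>{title}</title>')
+  if idx == -1:
+      sys.exit(f'Page not found in chunk: {title}')
+  start = xml.rfind('<page>', 0, idx)
+  end = xml.find('</page>', idx) + len('</page>')
+  print(xml[start:end])
+  ```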
+