path: root/backend/hist_data/README.md
author    Terry Truong <terry06890@gmail.com>  2022-10-02 12:23:19 +1100
committer Terry Truong <terry06890@gmail.com>  2022-10-02 12:23:19 +1100
commit    d70b96295d768aa0c80bf66639ad7a56bdef92a8 (patch)
tree      3f960ad83e4158fff1c0931d838033392a3391ec /backend/hist_data/README.md
parent    1b4fc8667714ef4ce9f326bd14f795fc2417ecb9 (diff)
Add gen_pop_data.py
Diffstat (limited to 'backend/hist_data/README.md')
-rw-r--r--  backend/hist_data/README.md | 25
1 file changed, 18 insertions, 7 deletions
diff --git a/backend/hist_data/README.md b/backend/hist_data/README.md
index c55549e..5b64462 100644
--- a/backend/hist_data/README.md
+++ b/backend/hist_data/README.md
@@ -18,27 +18,38 @@ This directory holds files used to generate the history database data.db.
- If 3, same as 1, but 'end' and 'end_upper' are 'preferably Gregorian'.
For example, Galileo Galilei's birth date appears 'preferably Julian', but his death date does not.
- If 0, they denote a number of years CE (if positive) or BCE (if negative).
+- `pop`: <br>
+ Format: `id INT PRIMARY KEY, pop INT` <br>
+ Associates each event with a popularity measure (currently an average monthly view count).
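
To make the `fmt == 0` case above concrete, here is a small sketch (the helper name is invented; the real scripts may render dates differently):

```python
def render_year(value: int) -> str:
    # fmt == 0: 'start'/'end' values are years CE (positive) or BCE (negative)
    # (hypothetical helper; not part of the actual scripts)
    return f'{value} CE' if value > 0 else f'{abs(value)} BCE'

print(render_year(1564))  # Galileo Galilei's birth year
print(render_year(-500))
```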
# Generating the Database
+## Environment
+Some of the scripts use third-party packages:
+- `jdcal`: For date conversion.
+- `indexed_bzip2`: For parallelised bzip2 processing.
+- `mwxml`, `mwparserfromhell`: For parsing Wikipedia dumps.
+- `requests`: For downloading data.
+
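+The list above could be captured in a `requirements.txt` (version pins omitted; add them to taste):
+
+```text
+jdcal
+indexed_bzip2
+mwxml
+mwparserfromhell
+requests
+```
+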
## Generate Event Data
1. Obtain a Wikidata JSON dump in wikidata/, as specified in its README.
1. Run `gen_events_data.py`, which creates `data.db`, and adds the `events` table.
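
As a rough sketch of the resulting table, modelling only the date columns documented above (`start`, `end`, `end_upper`, `fmt`) plus an assumed integer `id` key, with invented row values:

```python
import sqlite3

# In-memory stand-in for data.db; gen_events_data.py creates the real table,
# whose full schema may differ from this sketch.
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE events (id INT PRIMARY KEY, start INT, end INT, end_upper INT, fmt INT)')
db.execute('INSERT INTO events VALUES (1, -500, -400, -390, 0)')  # fmt 0: years BCE/CE

(n,) = db.execute('SELECT COUNT(*) FROM events WHERE fmt = 0').fetchone()
print(n)
```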
-## Generate Description Data
-1. Obtain an enwiki dump in enwiki/, as specified in the README.
-1. In enwiki/, run `gen_dump_index.db.py`, which generates a database for indexing the dump.
-1. In enwiki/, run `gen_desc_data.py`, which extracts page descriptions into a database.
-1. Run
-
## Generate Popularity Data
1. Obtain 'page view files' in enwiki/, as specified in its README.
-1. Run
+1. Run `gen_pop_data.py`, which adds the `pop` table, using data in enwiki/ and the `events` table.
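
Since `pop` is described as an average monthly view count, the aggregation presumably looks something like this sketch (month labels and counts are invented, and the real script may aggregate differently):

```python
from statistics import mean

# Hypothetical monthly view counts for one page, as might be tallied
# from the page view files in enwiki/.
monthly_views = {'2022-01': 1200, '2022-02': 900, '2022-03': 1500}

# One plausible reading of 'average monthly view count': a plain mean,
# truncated to an int for the INT column.
pop = int(mean(monthly_views.values()))
print(pop)
```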
## Generate Image Data
1. In enwiki/, run `gen_img_data.py`, which looks at pages in the dump that match entries in `events`,
   looks for infobox image names, and stores them in an image database.
+ Uses popularity data in enwiki/ to find the top N events in each event category.
1. In enwiki/, run `download_img_license_info.py`, which downloads licensing info for found
images, and adds them to the image database.
1. In enwiki/, run `download_imgs.py`, which downloads images into enwiki/imgs/.
+1. Run
+
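+
+The 'top N events in each event category' selection mentioned above can be sketched with the standard library (event names, categories, and popularity figures are invented):
+
+```python
+from collections import defaultdict
+import heapq
+
+# (name, category, popularity) rows, as might be joined from `events` and `pop`
+events = [
+    ('battle of x', 'war', 900),
+    ('battle of y', 'war', 400),
+    ('treaty of z', 'war', 700),
+    ('painter a', 'art', 300),
+    ('painter b', 'art', 800),
+]
+
+N = 2
+by_cat = defaultdict(list)
+for name, cat, pop in events:
+    by_cat[cat].append((pop, name))
+
+# Keep the N most popular rows per category
+top = {cat: heapq.nlargest(N, rows) for cat, rows in by_cat.items()}
+print(top['war'])
+```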
+## Generate Description Data
+1. Obtain an enwiki dump in enwiki/, as specified in the README.
+1. In enwiki/, run `gen_dump_index.db.py`, which generates a database for indexing the dump.
+1. In enwiki/, run `gen_desc_data.py`, which extracts page descriptions into a database.
1. Run
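
The dump-index idea behind `gen_dump_index.db.py` is to record where each page starts so later passes can seek straight to it instead of rescanning the whole dump. A toy illustration, with an in-memory "dump" and invented table/column names:

```python
import io
import sqlite3

# Tiny stand-in for a dump file; real dumps are bzip2-compressed XML.
dump = io.BytesIO(b'<page>Alpha</page><page>Beta</page>')

# Index database: page title -> byte offset of the page's start.
idx = sqlite3.connect(':memory:')
idx.execute('CREATE TABLE offsets (title TEXT PRIMARY KEY, offset INT)')
idx.executemany('INSERT INTO offsets VALUES (?, ?)',
                [('Alpha', 0), ('Beta', 18)])

# Jump straight to the 'Beta' page without scanning from the start
(off,) = idx.execute("SELECT offset FROM offsets WHERE title = 'Beta'").fetchone()
dump.seek(off)
print(dump.read(17))
```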