From 30851ce8a6bf60cba48de372e7c923167cc17d8a Mon Sep 17 00:00:00 2001
From: Terry Truong
Date: Sun, 2 Oct 2022 21:18:13 +1100
Subject: Add gen_desc_data.py

Add unit test, update README
---
 backend/hist_data/README.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

(limited to 'backend/hist_data/README.md')

diff --git a/backend/hist_data/README.md b/backend/hist_data/README.md
index c5cf66f..32836e2 100644
--- a/backend/hist_data/README.md
+++ b/backend/hist_data/README.md
@@ -5,7 +5,7 @@ This directory holds files used to generate the history database data.db.
   Format: `id INT PRIMARY KEY, title TEXT UNIQUE, start INT, start_upper INT, end INT, end_upper INT, fmt INT, ctg TEXT`
-  Each row has a Wikidata ID, Wikipedia title, start and end dates, and an event category.
+  Each row has an ID, Wikipedia title, start and end dates, and an event category.
   `start*` and `end*` specify start and end dates. `start_upper`, `end`, and `end_upper` are optional.
   If `start_upper` is present, it and `start` denote an uncertain range of start times.
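+  For illustration, a minimal sketch (not one of this directory's scripts, and
+  assuming data.db is SQLite) of reading events with uncertain start dates via
+  Python's built-in `sqlite3`:
+  ```python
+  import sqlite3
+
+  con = sqlite3.connect('data.db')
+  # start..start_upper denotes an uncertain range of start times.
+  for title, start, start_upper in con.execute(
+          'SELECT title, start, start_upper FROM events'
+          ' WHERE start_upper IS NOT NULL'):
+      print(f'{title}: started between {start} and {start_upper}')
+  con.close()
+  ```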
@@ -27,15 +27,18 @@ This directory holds files used to generate the history database data.db.
 - `event_imgs`:
   Format: `id INT PRIMARY KEY, img_id INT`
   Associates events with images
+- `descs`
+  Format: `title TEXT PRIMARY KEY, desc TEXT`
+  Associates an event's enwiki title with a short description.
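+  As a sketch under the same assumptions as above (the title here is a
+  hypothetical example), a description lookup might be:
+  ```python
+  import sqlite3
+
+  con = sqlite3.connect('data.db')
+  row = con.execute('SELECT desc FROM descs WHERE title = ?',
+                    ('Apollo 11',)).fetchone()  # hypothetical title
+  print(row[0] if row else 'no description found')
+  con.close()
+  ```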

 # Generating the Database
 ## Environment
 Some of the scripts use third-party packages:
 - `jdcal`: For date conversion
-- `indexed_bzip2`: For parallelised bzip2 processing.
-- `mwxml`, `mwparserfromhell`: For parsing Wikipedia dumps.
-- `requests`: For downloading data.
+- `indexed_bzip2`: For parallelised bzip2 processing
+- `mwxml`, `mwparserfromhell`: For parsing Wikipedia dumps
+- `requests`: For downloading data
 ## Generate Event Data
 1. Obtain a Wikidata JSON dump in wikidata/, as specified in its README.
@@ -59,4 +62,5 @@ Some of the scripts use third-party packages:
 1. Obtain an enwiki dump in enwiki/, as specified in the README.
 1. In enwiki/, run `gen_dump_index.db.py`, which generates a database for indexing the dump.
 1. In enwiki/, run `gen_desc_data.py`, which extracts page descriptions into a database.
-1. Run
+1. Run `gen_desc_data.py`, which adds the `descs` table using the data in enwiki/
+   and the `events` and `images` tables (it only adds descriptions for events with images).
--
cgit v1.2.3
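The final step only adds descriptions for events that have an image. Purely as an illustration of that filter (assuming data.db is SQLite, joining through `event_imgs`, and using a hypothetical enwiki/desc_data.db whose `descs` table holds the extracted page descriptions; the actual script may differ):

```python
import sqlite3

con = sqlite3.connect('data.db')
# Attach the (hypothetical) description database produced in enwiki/.
con.execute("ATTACH DATABASE 'enwiki/desc_data.db' AS dd")
con.execute('CREATE TABLE IF NOT EXISTS descs (title TEXT PRIMARY KEY, desc TEXT)')
# Only copy descriptions for events that have an associated image.
con.execute(
    'INSERT OR IGNORE INTO descs (title, desc)'
    ' SELECT e.title, d.desc'
    ' FROM events e'
    ' JOIN event_imgs ei ON ei.id = e.id'
    ' JOIN dd.descs d ON d.title = e.title')
con.commit()
con.close()
```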