path: root/backend/hist_data/enwiki/README.md
This directory holds files obtained/derived from [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).

# Downloaded Files
-   `enwiki-20220501-pages-articles-multistream.xml.bz2` <br>
    Contains text content and metadata for pages in enwiki.
    Obtained via <https://dumps.wikimedia.org/backup-index.html>.
    Some file content and format information was available from
        <https://meta.wikimedia.org/wiki/Data_dumps/What%27s_available_for_download>.
-   `enwiki-20220501-pages-articles-multistream-index.txt.bz2` <br>
    Obtained as above. Each line has the form `offset:pageId:title`,
    giving, for each page, the byte offset into the dump file of the
    bz2 stream (a chunk of 100 pages) that contains it.
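
The index-line format above can be parsed with a short helper. A minimal sketch (function name and return shape are my own; titles can themselves contain `:`, so the line is split at most twice):

```python
import bz2
from collections import defaultdict

def read_index(path):
    """Parse a multistream index file into {offset: [(page_id, title), ...]}.

    Pages sharing an offset live in the same 100-page bz2 chunk.
    """
    chunks = defaultdict(list)
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            # Split only twice: the title may contain ':' itself.
            offset, page_id, title = line.rstrip("\n").split(":", 2)
            chunks[int(offset)].append((int(page_id), title))
    return chunks
```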

# Dump-Index Files
-   `gen_dump_index_db.py` <br>
    Creates a database version of the enwiki dump's index file.
-   `dump_index.db` <br>
    Generated by `gen_dump_index_db.py`. <br>
    Tables: <br>
    -   `offsets`: `id INT PRIMARY KEY, title TEXT UNIQUE, offset INT, next_offset INT`
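
The `offset`/`next_offset` pair bounds one standalone bz2 stream inside the multistream dump, so a single page's chunk can be decompressed without touching the rest of the file. A minimal sketch against the schema above (function names are my own):

```python
import bz2
import sqlite3

def chunk_bounds(db_path, title):
    """Byte range of the 100-page chunk containing `title`.

    Returns (offset, next_offset), or None if the title is not indexed.
    """
    with sqlite3.connect(db_path) as db:
        return db.execute(
            "SELECT offset, next_offset FROM offsets WHERE title = ?",
            (title,)).fetchone()

def read_chunk(dump_path, offset, next_offset):
    """Decompress one multistream chunk, itself a complete bz2 stream."""
    with open(dump_path, "rb") as f:
        f.seek(offset)
        data = f.read(next_offset - offset)
    return bz2.decompress(data).decode("utf-8")
```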

# Page View Files
-   `pageviews/pageviews-*-user.bz2` <br>
    Each holds Wikimedia article page-view data for one month.
    Obtained via <https://dumps.wikimedia.org/other/pageview_complete/monthly/>.
    Some format info was available from <https://dumps.wikimedia.org/other/pageview_complete/readme.html>.
-   `gen_pageview_data.py` <br>
    Reads `pageviews/*` and `dump_index.db`, and creates a database holding average monthly page-view counts.
-   `pageview_data.db` <br>
    Generated using `gen_pageview_data.py`. <br>
    Tables: <br>
    -   `views`: `title TEXT PRIMARY KEY, id INT UNIQUE, views INT`
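
A lookup against the `views` schema above might look like this (function name is my own; sample data in the usage is invented):

```python
import sqlite3

def monthly_views(db_path, title):
    """Fetch the stored average monthly view count for a page title.

    Returns None when the title is absent from the database.
    """
    with sqlite3.connect(db_path) as db:
        row = db.execute(
            "SELECT views FROM views WHERE title = ?", (title,)).fetchone()
    return row[0] if row else None
```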

# Image Files
-   `gen_img_data.py` <br>
    Finds infobox image names for page IDs, and stores them in a database.
-   `download_img_license_info.py` <br>
    Downloads licensing metadata for image names, via Wikipedia's online API, and stores it in a database.
-   `img_data.db` <br>
    Holds metadata about infobox images for a set of page IDs.
    Generated using `gen_img_data.py` and `download_img_license_info.py`. <br>
    Tables: <br>
    -   `page_imgs`: `page_id INT PRIMARY KEY, title TEXT UNIQUE, img_name TEXT` <br>
        `img_name` may be NULL, meaning 'none found'; such rows are kept to avoid re-processing page IDs.
    -   `imgs`:
            `id INT PRIMARY KEY, name TEXT UNIQUE, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT`
            <br>
        Might lack matches for some `img_name` values in `page_imgs`, when licensing info was unavailable.
-   `download_imgs.py` <br>
    Downloads image files into `imgs/`.
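
Because `img_name` can be NULL and `imgs` can lack a matching row, a lookup should tolerate both gaps. A minimal sketch against the two tables above (function name and sample data are my own; a LEFT JOIN keeps the image name even when its license row is missing):

```python
import sqlite3

def img_info(db_path, page_id):
    """Return (img_name, license, url) for a page's infobox image.

    Returns None if no image was found for the page; license and url
    may be None when licensing info could not be downloaded.
    """
    with sqlite3.connect(db_path) as db:
        return db.execute(
            "SELECT p.img_name, i.license, i.url FROM page_imgs AS p"
            " LEFT JOIN imgs AS i ON i.name = p.img_name"
            " WHERE p.page_id = ? AND p.img_name IS NOT NULL",
            (page_id,)).fetchone()
```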

# Description Files
-   `gen_desc_data.py` <br>
    Reads through pages in the dump file, and adds short-description info to a database.
-   `desc_data.db` <br>
    Generated by `gen_desc_data.py`. <br>
    Tables: <br>
    -   `pages`:     `id INT PRIMARY KEY, title TEXT UNIQUE`
    -   `redirects`: `id INT PRIMARY KEY, target TEXT`
    -   `descs`:     `id INT PRIMARY KEY, desc TEXT`
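
Since `redirects` maps a page ID to a target title, finding a short description may take several hops. A minimal sketch against the three tables above (function name, hop limit, and sample data are my own; `desc` is quoted because it is an SQL keyword):

```python
import sqlite3

def short_desc(db_path, title, max_hops=5):
    """Look up a page's short description, following redirects.

    Follows at most `max_hops` redirects to guard against cycles;
    returns None if the title is unknown or has no description.
    """
    with sqlite3.connect(db_path) as db:
        for _ in range(max_hops):
            row = db.execute(
                "SELECT id FROM pages WHERE title = ?", (title,)).fetchone()
            if row is None:
                return None
            page_id = row[0]
            target = db.execute(
                "SELECT target FROM redirects WHERE id = ?",
                (page_id,)).fetchone()
            if target is None:
                break  # not a redirect: this page may hold the description
            title = target[0]
        row = db.execute(
            'SELECT "desc" FROM descs WHERE id = ?', (page_id,)).fetchone()
        return row[0] if row else None
```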