This directory holds files obtained/derived from [English Wikipedia](https://en.wikipedia.org/wiki/Main_Page).

# Downloaded Files
-   enwiki-20220501-pages-articles-multistream.xml.bz2 <br>
    Contains text content and metadata for pages in enwiki.
    Obtained via <https://dumps.wikimedia.org/backup-index.html> (the site suggests downloading from a mirror).
    Some file content and format information was available from
        <https://meta.wikimedia.org/wiki/Data_dumps/What%27s_available_for_download>.
-   enwiki-20220501-pages-articles-multistream-index.txt.bz2 <br>
    Obtained as above. Holds lines of the form `offset:pageId:title`,
    giving, for each page, the byte offset into the dump file of the
    100-page chunk that contains it (see the sketch after this list).
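
A minimal sketch of how these offsets can be used, assuming Python's standard
`bz2` module: each chunk is a standalone bz2 stream, so seeking to a page's
offset and reading up to the next chunk's offset yields one decompressible
chunk of XML. The offsets passed below are hypothetical.

```python
import bz2

DUMP = "enwiki-20220501-pages-articles-multistream.xml.bz2"

def read_chunk(offset: int, next_offset: int) -> str:
    """Decompress the standalone bz2 stream (one ~100-page chunk) at `offset`."""
    with open(DUMP, "rb") as f:
        f.seek(offset)
        data = f.read(next_offset - offset)  # bytes of one complete bz2 stream
    return bz2.decompress(data).decode("utf-8")

# Hypothetical offsets taken from two consecutive index entries:
print(read_chunk(565, 632461)[:200])  # a run of <page>...</page> elements
```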

# Dump-Index Files
-   genDumpIndexDb.py <br>
    Creates a database version of the enwiki dump-index file (see the sketch after this list).
-   dumpIndex.db <br>
    Generated by genDumpIndexDb.py. <br>
    Tables: <br>
    -   `offsets`: `title TEXT PRIMARY KEY, id INT UNIQUE, offset INT, next_offset INT`
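
A hedged sketch of how genDumpIndexDb.py might populate the `offsets` schema
above; the real script's details (batching, the last chunk's `next_offset`)
may differ. Titles can themselves contain `:`, hence the two-split.

```python
import bz2, sqlite3

INDEX = "enwiki-20220501-pages-articles-multistream-index.txt.bz2"

con = sqlite3.connect("dumpIndex.db")
con.execute("CREATE TABLE IF NOT EXISTS offsets (title TEXT PRIMARY KEY, "
            "id INT UNIQUE, offset INT, next_offset INT)")

rows = []
with bz2.open(INDEX, "rt", encoding="utf-8") as f:
    for line in f:
        offset, page_id, title = line.rstrip("\n").split(":", 2)
        rows.append((title, int(page_id), int(offset)))

# Map each chunk's offset to the next chunk's offset (None for the last chunk).
starts = sorted({offset for _, _, offset in rows})
next_start = dict(zip(starts, starts[1:]))
con.executemany("INSERT OR IGNORE INTO offsets VALUES (?, ?, ?, ?)",
                [(t, i, o, next_start.get(o)) for t, i, o in rows])
con.commit()
```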

# Description Database Files
-   genDescData.py <br>
    Reads through pages in the dump file and adds short-description info to a database (see the sketch after this list).
-   descData.db <br>
    Generated by genDescData.py. <br>
    Tables: <br>
    -   `pages`:     `id INT PRIMARY KEY, title TEXT UNIQUE`
    -   `redirects`: `id INT PRIMARY KEY, target TEXT`
    -   `descs`:     `id INT PRIMARY KEY, desc TEXT`
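
The per-page logic might look like the following sketch, which assumes the
short description comes from enwiki's `{{Short description|...}}` template
and that redirect pages start with `#REDIRECT [[target]]`; the real script's
parsing rules may be more involved.

```python
import re, sqlite3

DESC_RE = re.compile(r"\{\{[Ss]hort description\|([^}|]*)")
REDIRECT_RE = re.compile(r"#REDIRECT\s*\[\[([^\]]+)\]\]", re.IGNORECASE)

def record_page(con: sqlite3.Connection, page_id: int, title: str, text: str):
    """Classify one dump page into the pages/redirects/descs tables."""
    con.execute("INSERT OR IGNORE INTO pages VALUES (?, ?)", (page_id, title))
    if (m := REDIRECT_RE.match(text)):
        con.execute("INSERT OR IGNORE INTO redirects VALUES (?, ?)",
                    (page_id, m.group(1)))
    elif (m := DESC_RE.search(text)):
        con.execute("INSERT OR IGNORE INTO descs VALUES (?, ?)",
                    (page_id, m.group(1).strip()))
```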

# Image Database Files
-   genImgData.py <br>
    Used to find infobox image names for page IDs, storing them in a database.
-   downloadImgLicenseInfo.py <br>
    Used to download licensing metadata for image names via Wikipedia's online API, storing it in a database (see the sketch after this list).
-   imgData.db <br>
    Holds metadata about infobox images for a set of page IDs.
    Generated using genImgData.py and downloadImgLicenseInfo.py. <br>
    Tables: <br>
    -   `page_imgs`: `page_id INT PRIMARY KEY, img_name TEXT` <br>
        `img_name` may be null, meaning 'none found'; this is used to avoid re-processing page IDs.
    -   `imgs`: `name TEXT PRIMARY KEY, license TEXT, artist TEXT, credit TEXT, restrictions TEXT, url TEXT` <br>
        May lack matches for some `img_name` values in `page_imgs`, where licensing info was unavailable.
-   downloadImgs.py <br>
    Used to download image files into imgs/.
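
The licensing download can be done with a standard MediaWiki `imageinfo`
query. A sketch, assuming the `extmetadata` keys shown below map onto the
`imgs` columns (the real script's batching and error handling are omitted):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_license_info(img_name: str) -> dict | None:
    """Return one row for the `imgs` table, or None if no info is available."""
    params = {"action": "query", "format": "json",
              "titles": f"File:{img_name}",
              "prop": "imageinfo", "iiprop": "url|extmetadata"}
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    if "imageinfo" not in page:  # licensing info unavailable
        return None
    info = page["imageinfo"][0]
    meta = info.get("extmetadata", {})
    field = lambda key: meta.get(key, {}).get("value")
    return {"name": img_name, "license": field("LicenseShortName"),
            "artist": field("Artist"), "credit": field("Credit"),
            "restrictions": field("Restrictions"), "url": info.get("url")}
```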

# Page View Files
-   pageviews/pageviews-*-user.bz2 <br>
    Each holds Wikimedia article pageview data for one month.
    Obtained via <https://dumps.wikimedia.org/other/pageview_complete/monthly/>.
    Some format info was available from <https://dumps.wikimedia.org/other/pageview_complete/readme.html>.
-   genPageviewData.py <br>
    Reads pageviews/* and creates a database holding average monthly pageview counts (see the sketch after this list).
-   pageviewData.db <br>
    Generated using genPageviewData.py. <br>
    Tables: <br>
    -   `views`: `title TEXT PRIMARY KEY, id INT, views INT`
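
A sketch of the aggregation, assuming the space-separated layout described
in the readme linked above (wiki code, title, page ID, access method,
monthly total, per-day counts) and that per-title rows for different access
methods should be summed:

```python
import bz2, glob, sqlite3
from collections import defaultdict

totals = defaultdict(int)  # title -> views summed over all months
ids = {}                   # title -> page ID, where one is given
paths = sorted(glob.glob("pageviews/pageviews-*-user.bz2"))

for path in paths:
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            fields = line.split(" ")
            if fields[0] != "en.wikipedia":  # keep enwiki articles only
                continue
            title, page_id, count = fields[1], fields[2], int(fields[4])
            totals[title] += count
            if page_id != "null":
                ids[title] = int(page_id)

con = sqlite3.connect("pageviewData.db")
con.execute("CREATE TABLE IF NOT EXISTS views "
            "(title TEXT PRIMARY KEY, id INT, views INT)")
con.executemany("INSERT OR REPLACE INTO views VALUES (?, ?, ?)",
                [(t, ids.get(t), v // len(paths)) for t, v in totals.items()])
con.commit()
```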

# Other Files
-   lookupPage.py <br>
    Running `lookupPage.py title1` looks in the dump for the page with the given title
    and prints its contents to stdout. Uses dumpIndex.db; a sketch of its core logic follows.
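
This sketch reuses `read_chunk()` from the multistream example above and a
simple regex scan of the decompressed chunk; the actual script may differ.

```python
import re, sqlite3, sys

title = sys.argv[1]
con = sqlite3.connect("dumpIndex.db")
row = con.execute("SELECT offset, next_offset FROM offsets WHERE title = ?",
                  (title,)).fetchone()
if row is None:
    sys.exit(f"page not found: {title}")

chunk = read_chunk(row[0], row[1])  # seek + bz2-decompress, as sketched above
for page in re.findall(r"<page>.*?</page>", chunk, re.DOTALL):
    if f"<title>{title}</title>" in page:
        print(page)
        break
```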