Quick-And-Dirty Editor
You want to import some data into a database. To do so, you need to manipulate some of the text for consistency, turning text like this:

and this:

and this:

into this:

A perfect quick-and-dirty project for Python and Eric. You decide that this will be a console-only application – no need for classes or a fancy GUI just to clean up some text files.

Background
It all started innocently enough. You just wanted to see how Covid-19 infections were going in some places near and dear to you, like the UK, where you have relatives. The web charts you'd looked at weren't flexible enough to display what you wanted to see. The sites offering them credited the Center for Systems Science and Engineering at Johns Hopkins University (JHU CSSE) as the data source. It turned out that JHU was making the data publicly accessible on GitHub.

Checking it out, you found what looked like .csv files. "Great - I'll just download some of those, open them up in LibreOffice Calc, and knock together some charts of my own."

It wasn't quite that simple. There are global reports and more detailed ones limited to the United States. Your interest was global. There are two types of data files: time series and daily reports.

For the time series, JHU updates three omnibus tables each day: one each for confirmed cases, deaths and recoveries. Each table contains location names as rows and dates as columns. The number of cases appears at their intersection. A new date column is added each day. New location rows are added intermittently.

The daily reports contain locations in rows, a Last_Updated column and columns for confirmed, deaths, recovered and active cases, plus, at various times, other data like latitude and longitude.

The report names are in the form "mm-dd-yyyy.csv", but when you downloaded a few, they turned out to be in HTML. Looking at the data in your browser, you found you could get the data for a page out as bare text by using the page's Raw button. File, Save in the browser then generated a basic csv file. A bit of a hassle, but manageable. You were only looking for a few dates.

You downloaded some data, wrangled out your charts and were content.

Time marched on. So did Covid. You kept going back for more. It got tedious. You were less content.

Now, you're a bit of a data person and have some basic SQL skills. You knew you could stand up an open-source relational database management system (RDBMS) like Firebird and maybe work with it using some kind of GUI tool. “Great,” you thought, “all I have to do is import the data into the RDBMS. Then I can slice, dice and chart to my heart’s content.”

If only life were that simple. You'd been working with the time series reports, but found they didn't fit a database construct comfortably. For example, the reports add a new column for each date. In database terms, that's an ALTER TABLE, which is an uncommon way to add new data - more usually it's done by adding a row using INSERT. There were other issues. You decided to work with the daily reports instead, combining them into one big text file to import into the database.

But the daily reports had some problems too. Some of the location names were inconsistent: in the early days “Mainland China” was used, later it was just “China”. “South Korea” became “Korea, South” then “South Korea” again. As the virus spread, coverage expanded to include more countries, provinces/states and US counties. The date format flipped back and forth a few times between m/d/yyyy and yyyy-mm-dd. A unique text key for each location was eventually added. A table of numeric universal IDs (UIDs) for each location was also created, but the UIDs are not used in the main data files, only the combined key text.

It also turns out that the reports are issued daily but the data isn’t necessarily new each day. Some jurisdictions report much less frequently, so including their data for each new date would create duplication.

It became a classic Extract, Transform, Load process.

ETL Plan
Extract fixed location data (Latitude, Longitude, etc.) into separate tables for US_Counties, Provinces_States and Countries_Regions, with a Combined_Key string for each. Normalize inconsistent location names along the way, e.g.:
 * Mainland China -> China
 * Korea, South -> South Korea
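The name fix-ups above boil down to a lookup table. A minimal sketch, assuming a simple dict of known variants (the `NAME_FIXES` name and its entries beyond the two shown are illustrative):

```python
# Hypothetical mapping of inconsistent location names to their
# current forms; extend it as more variants turn up in the data.
NAME_FIXES = {
    "Mainland China": "China",
    "Korea, South": "South Korea",
}

def normalize_name(name):
    """Return the canonical location name, stripped of stray whitespace."""
    name = name.strip()
    return NAME_FIXES.get(name, name)
```

Names not in the table pass through unchanged, so the function is safe to apply to every row.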

Extract date from Last_Updated column and restructure to ISO date standard (yyyy-mm-dd).
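Because the date format flipped between m/d/yyyy and yyyy-mm-dd over time, the extraction has to try several candidate formats. A sketch using the standard library's `datetime.strptime`; the exact list of formats in `DATE_FORMATS` is an assumption based on the variations described above:

```python
from datetime import datetime

# Candidate timestamp formats seen in the Last_Updated column
# (an assumption; add formats here if a file fails to parse).
DATE_FORMATS = (
    "%m/%d/%Y %H:%M",
    "%m/%d/%y %H:%M",
    "%Y-%m-%d %H:%M:%S",
    "%Y-%m-%dT%H:%M:%S",
)

def iso_date(last_updated):
    """Extract the date part of a Last_Updated value as yyyy-mm-dd."""
    value = last_updated.strip()
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError("unrecognised Last_Updated value: " + value)
```

Raising on an unrecognised value, rather than guessing, surfaces any new format the moment it appears in a file.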

Add Combined_Key column where missing and populate with concatenation of US_County, Province/State and Country/Region.
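The concatenation needs to skip the parts that are empty for a given row (most locations have no US county, many have no province/state). A minimal sketch; the function and parameter names are illustrative:

```python
def combined_key(county, province_state, country_region):
    """Join the non-empty location parts with ", " to form a key,
    matching the shape of the keys the later files already carry."""
    parts = (county, province_state, country_region)
    return ", ".join(p.strip() for p in parts if p and p.strip())
```

So a country-only row yields just the country name, while a fully qualified US row yields all three parts.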

Drop unnecessary fields.

Write the lines to an output file using the date as the filename.

Once all the files have been processed, combine them all into one big Python set (to eliminate duplicates) and save to a file. Finally, import the data into the database.
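The combine-and-deduplicate step can be sketched with a set, as described above. File and directory names here are illustrative, not from the original project:

```python
from pathlib import Path

def combine(out_dir, combined_path):
    """Merge all per-date .csv files in out_dir into one file,
    using a set to drop duplicate lines, then write them sorted."""
    rows = set()
    for path in sorted(Path(out_dir).glob("*.csv")):
        with open(path, newline="") as f:
            rows.update(line.rstrip("\n") for line in f if line.strip())
    with open(combined_path, "w") as f:
        f.write("\n".join(sorted(rows)) + "\n")
```

Sorting the set before writing makes the output deterministic, which helps when diffing successive runs; one caveat of a plain set is that line order across days is not preserved.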

Eric – New Project
Start Eric and select Project, New. The Project Properties window will be displayed. It’s mostly self-explanatory. This is a quick-and-dirty text-based project, so the Project Type is Console.