
Monday, July 19, 2021

Time Travel with py datatable 1.0

The R package data.table has become a tool of choice for working with big tabular data thanks to its versatility and performance. Its Python counterpart py datatable follows its R cousin in performance and steadily catches up in functionality. A notable omission - temporal data types - was addressed in version 1.0 by means of two new types:

  • datatable.Type.date32 to represent and store a particular calendar date without a time component, and
  • datatable.Type.time64 to store a specific moment in time (i.e. a date with a time component)

and the datatable.time family of functions: https://datatable.readthedocs.io/en/latest/api/time.html

Let's have a brief overview of how to use them.

datatable.Type.date32

This type represents a calendar date without a time component. Internally it stores the date as a 32-bit signed integer counting the number of days since (positive) or before (negative) the epoch (1970-01-01). Thus, the type covers dates within a range of approximately ±5.8 million years, which places the oldest storable date in the Late Miocene Epoch and the largest one in the year 5,879,610 of the 58,797th century, completely unknown even to science fiction:
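
As a quick sanity check, here is a minimal sketch (assuming datatable 1.0+ is installed) of how this storage model translates into the range above:

import datatable as dt
from datetime import date

# a column built from Python date objects gets the date32 type automatically
DT = dt.Frame(d=[date(1969, 12, 31), date(1970, 1, 1), date(2021, 7, 19)])
print(DT.types)  # [Type.date32]

# each value is a signed 32-bit day offset from 1970-01-01,
# so the range spans roughly 2**31 days in each direction:
print((2**31) / 365.25)  # ~5.88 million years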

There are various ways to initialize and/or create a date32 column inside datatable:
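
For instance, a sketch of three common ways (frame and column names are illustrative; assuming datatable 1.0+):

import datatable as dt
from datatable import f, update
from datetime import date

# 1) from Python date objects -- the column becomes date32 automatically
DT1 = dt.Frame(d=[date(2021, 1, 1), date(2021, 7, 19)])

# 2) from ISO 8601 strings, cast with as_type()
DT2 = dt.Frame(s=["2021-01-01", "2021-07-19"])
DT2[:, update(d=dt.as_type(f.s, dt.Type.date32))]

# 3) from integer components via the ymd() constructor
DT3 = dt.Frame(y=[2021, 2021], m=[1, 7], dd=[1, 19])
DT3[:, update(d=dt.time.ymd(f.y, f.m, f.dd))]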

Remember to use the ISO 8601 format when representing dates as strings; otherwise parsing fails silently:
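
For example (a sketch; note the non-ISO string quietly becomes NA rather than raising an error):

import datatable as dt
from datatable import f

DT = dt.Frame(s=["2021-07-19", "07/19/2021"])
print(DT[:, dt.as_type(f.s, dt.Type.date32)])
# the ISO 8601 value parses to a date; the non-ISO row becomes NA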

If a frame already contains dates as strings, then a combination of the constructor function datatable.time.ymd() (to create the date32 value), the cast function datatable.as_type() (to convert str to int) and the string slicer datatable.str.slice() (to extract the date components) suffices to parse the strings into corresponding date32 values entirely within the datatable API:
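
A sketch of this approach, assuming a string column s in ISO 8601 format (the exact str.slice() argument form may vary slightly between versions):

import datatable as dt
from datatable import f, update

DT = dt.Frame(s=["2021-01-01", "2021-07-19"])
DT[:, update(d=dt.time.ymd(
    dt.as_type(dt.str.slice(f.s, 0, 4), dt.Type.int32),     # year
    dt.as_type(dt.str.slice(f.s, 5, 7), dt.Type.int32),     # month
    dt.as_type(dt.str.slice(f.s, 8, 10), dt.Type.int32)))]  # day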

datatable.Type.time64

This type represents a specific moment in time and is stored internally as a 64-bit integer containing the number of nanoseconds since the epoch (1970-01-01) in UTC:
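
For example, a minimal sketch (again assuming datatable 1.0+):

import datatable as dt
from datetime import datetime

# a column built from Python datetime objects gets the time64 type
DT = dt.Frame(t=[datetime(1970, 1, 1, 0, 0, 0),
                 datetime(2021, 7, 19, 12, 30, 0)])
print(DT.types)  # [Type.time64]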

Similarly, a time64 column can be created in the same fashion as the date32 type above, for example:
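
For instance, a sketch casting ISO 8601 strings with a time component to time64:

import datatable as dt
from datatable import f, update

DT = dt.Frame(s=["2021-07-19T12:30:00", "2021-07-19T18:45:10"])
DT[:, update(t=dt.as_type(f.s, dt.Type.time64))]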

Again, time strings should follow the ISO 8601 format as well. To create a time64 value from its components, use the constructor function datatable.time.ymdt():
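
A sketch building time64 values from integer components (column names are illustrative):

import datatable as dt
from datatable import f, update

DT = dt.Frame(y=[2021], mo=[7], d=[19], h=[12], mi=[30], s=[0])
DT[:, update(t=dt.time.ymdt(f.y, f.mo, f.d, f.h, f.mi, f.s))]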

datatable.time.* Functions

To use the date32 and time64 types effectively, datatable includes special functions as part of the datatable.time family:

  • constructors ymd() and ymdt() and
  • date and time part functions: day(), day_of_week(), hour(), minute(), month(), nanosecond(), second(), year()

Using the constructors was showcased already, and the part functions come in handy when filtering data etc., e.g.:
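
For instance, a sketch filtering rows down to July 2021 using the year() and month() part functions:

import datatable as dt
from datatable import f
from datetime import date

DT = dt.Frame(d=[date(2021, 6, 30), date(2021, 7, 1), date(2021, 7, 19)])
# keep only the rows that fall in July 2021
DT_july = DT[(dt.time.year(f.d) == 2021) & (dt.time.month(f.d) == 7), :]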



Wednesday, January 6, 2021

IMDb datasets: 3 centuries of movie rankings visualized

The Question

I am a sucker for IMDb ratings, so don't judge me. They are my priors before watching almost anything on a screen (home screen, that is). But between movies (feature films), TV movies, and TV (mini) series, IMDb ratings are highly inconsistent. For example, the series The Boys has a rating of 8.7, and so does the movie Goodfellas by Martin Scorsese. Does it make sense that The Boys ranks as high as the #16 rated movie title in the whole IMDb database (among those with at least 25,000 user votes)? Or, in other words, how much of an apples vs. oranges comparison are those ratings?

 

Rating Distributions

To start, I downloaded the IMDb datasets (here). Let's show the distributions of title ratings depending on type: movie (i.e. feature film), TV movie, TV mini series, and TV series, split between fiction and documentaries:

Title ratings drift towards higher values depending on their type (shown on the right): movie, TV movie, TV mini series, and TV series. So indeed, ratings of movies and TV series come from different distributions representing different things, like apples and oranges. But how different are they? (We will focus on fiction titles only from this point on.)

 

Percentiles

If a title has the all-time best rating then it's no doubt worth giving a try (let's say among titles with at least 1,000 votes - the number of votes is a rather important consideration, but we let it slide here and may come back to votes later). Why? Because 100% of other titles are rated below or at best the same, which indicates exceptional qualities. In statistics such a rating has a name: the 100th percentile. Following the same logic, the 99th percentile represents a rating above 99% of all titles in the database (again, not forgetting the minimum threshold on the number of votes).

Based on the above, we can assign IMDb titles to groups based on the highest percentile they reach: the 99th percentile suggests that the title is among the very best, the 95th - excellent, the 90th - very good, the 75th - good, the 50th - average, and the 25th - bad. Feel free to assign and name percentiles differently in your analysis, but we stick with this convention for this post. The last piece of the puzzle is taking percentiles not across the whole IMDb set but for each title type separately, and comparing them:
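
The actual analysis lives in the R notebook linked at the end of this post; purely as an illustration, here is a minimal pandas sketch of the same computation (the merged input file name is hypothetical; the column names follow IMDb's dataset files):

import pandas as pd

# hypothetical merge of title.basics and title.ratings:
# columns titleType, averageRating, numVotes
ratings = pd.read_csv("imdb_titles_with_ratings.csv")
popular = ratings[ratings["numVotes"] >= 1000]

# rating thresholds per title type at each percentile of interest
percentiles = (popular.groupby("titleType")["averageRating"]
                      .quantile([0.25, 0.50, 0.75, 0.90, 0.95, 0.99])
                      .unstack())
print(percentiles)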

Going back to our example, 8.7 in TV series places The Boys firmly in "Excellent" (95th percentile), while Goodfellas at 8.7 sits at the top of "Very Best" (99th percentile) in movies - a noticeable difference between the two.

The difference becomes even more meaningful when looking at the lower tiers "Very Good" (90th percentile) and below: while a rating of 7.6 suffices for a movie (e.g. Love Actually) to place in "Very Good", a TV series must achieve a rating of 8.4 to qualify for the same 90th percentile. In fact, a TV series with a 7.6 rating (like Grey's Anatomy) places just above the "Average" 50th percentile. Furthermore, a rating of 8 would place a movie firmly in the top 5%, while the same 8 for a TV series barely cracks the top 25%.

 

Percentiles Extra

Comparing and analyzing ratings between title types is helped by organizing and visualizing the same percentile data in a few different ways:

  • Overlapping bar charts by title types:


  • Line chart by title types:


  • Line chart by percentiles:



What About Documentaries?

The title percentiles above excluded documentaries. To compare ratings between fiction and documentary titles, the following visual computes and dissects rating percentiles between fiction and documentaries by title type:


For whatever reason IMDb users rate documentaries more generously than their fiction counterparts across all title types.

 

Historical Perspective Mixed with Film Trivia

The oldest film on IMDb is Passage de Venus, made in 1874, rated 6.9 with 1,282 votes (as of January 2020), and filed under title type short and genre Documentary. In chronological order it is followed by 2 titles in 1878 (the short animation Le singe musicien and the short documentary Sallie Gardner at a Gallop), 1 in 1881 (the short documentary Athlete Swinging a Pick), 1 in 1883 (the short documentary Buffalo Running), and 1 in 1885 (the short animation L'homme machine). Starting with 1887, which cranked out 45 titles in total, there are no more gap years, but that production feast would be surpassed only in 1894 with 97 titles. The first movie title (and the only one that year), Reproduction of the Corbett and Fitzsimmons Fight, was filmed in 1897 under the Documentary, News, and Sport genres. Lastly, the first year when the total number of titles exceeded its own numerical value is 1952, with 2,059 shorts, movies, etc. under its belt. Did I just say lastly? One more factoid if you'll excuse me: movie production in 2020 (35,109 titles in total) dropped us exactly 10 years back, to when 35,062 titles were produced in 2010, while the absolute record belongs to 2017 with 51,231 films in total.

What about visualizing film production over time?

 



Final Thoughts

The IMDb dataset turned out to be richer and deeper than I expected, and I just scratched the surface. There is plenty to play with - genres, runtimes, adult movies (yes, probably for compliance IMDb flags each title as adult or not), and, of course, ratings. IMDb uses an adjusted (weighted) rating formula (based on averages and the number of user votes) in its rankings (see Weighted Average Ratings), so the title averageRating we looked at can't be taken at face value after all.


Source Code

The IMDb R notebook with all the data prep and visualizations for this post can be found here: https://github.com/grigory93/r-notebooks/blob/master/IMDB-movie-ratings.Rmd

Information courtesy of IMDb (http://www.imdb.com).
Used with permission.

Monday, April 13, 2020

H2O.ai Academic Program for Professors and Students: Part 2 - Creating Your First (Time Series) Experiment

Part 1 of this blog series discussed how to:
  1. apply for a free academic license for H2O.ai's automated machine learning (AutoML) platform Driverless AI,
  2. spin up a VM with the budget-oriented cloud provider Paperspace that can host Driverless AI,
  3. install Driverless AI on the VM, including a configuration that utilizes the powerful GPUs available on Paperspace.
In part 2 we'll show how to:
  1. upgrade Driverless AI on Linux VM
  2. organize modeling workflow in Driverless AI
  3. manipulate and load datasets
  4. perform automated data exploration
  5. create models to forecast time series
  6. analyze time series model created with Driverless AI

Back to Paperspace VM

At the end of part 1 we had a fully functional instance of Driverless AI, but
  • it was likely stopped by Paperspace due to inactivity, and
  • new version 1.8.5 of H2O Driverless AI has been released since.
So we begin by starting the Paperspace VM and upgrading the H2O software to the latest version; please adjust the steps below to your specific circumstances and a possibly newer version of Driverless AI.

1. Starting VM in Paperspace

When you log back in to Paperspace and go to the console (under Core -> Compute), it will show the VM you created in the "Off" state. Next, press anywhere on the machine box:

Next screen displays machine terminal view including button to start VM:


After pressing the Start machine button, wait for the terminal window to appear, indicating that the VM restarted successfully:

2. Upgrading Driverless AI

Locate the original email from Paperspace from when you created the VM in part 1; it contains the ssh command and password (unless you changed it since). You can use either the web-based terminal window shown above or a terminal application like the macOS Terminal to ssh (I prefer the latter as it allows easy copy and paste on macOS):

At this point we can upgrade to release 1.8.5.1 (the latest version at the time of this writing). To locate the installer, point your browser to https://www.h2o.ai/download/ and click on the button for the latest stable version of Driverless AI (1.8 LTS at this time):


This brings you to the download page for version 1.8.5.1 (or later). Make sure it displays the Linux (X86) tab (the first tab) and copy the location of the installer file by right-clicking on the Download link corresponding to the DEB Ubuntu 16.04/Ubuntu 18.04 option:

Go back to the terminal window, enter the wget command, and paste the file location:
wget https://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/dai/rel-1.8.5-64/x86_64-centos7/dai_1.8.5.1_amd64.deb

After wget finishes the download, perform the upgrade of Driverless AI with these commands (don't forget to change the file name to your version; given how frequently H2O does releases, you will likely be installing a newer version):
sudo systemctl stop dai
sudo dpkg -i dai_1.8.5.1_amd64.deb
sudo nvidia-smi -pm 1
sudo systemctl daemon-reload
sudo systemctl start dai
After executing the commands above, the terminal screen should look similar to this:

Test that the upgrade took place and Driverless AI is running by pointing your browser to the public IP address of your VM and port 12345:



Using the same credentials as in part 1, step 24 (h2oai/h2oai), log in to Driverless AI and go to Resources -> System Info to observe that the parameters of your system are in accordance with the Paperspace spec:

Because the disk size I used is rather small, 57% of the disk is already used. To free up space, I recommend removing the installer file(s) used to install Driverless AI:
rm *.deb

This freed up over 10G of space in my case.
 

3. Troubleshooting 

Sometimes Driverless AI doesn't start successfully, which manifests as the browser being unable to establish a connection. In that case, enable persistence mode for the Nvidia GPUs and restart Driverless AI with these commands:
sudo nvidia-smi -pm 1
sudo systemctl stop dai

sudo systemctl start dai
Ultimately, the goal is to run nvidia-smi -pm 1 each time the system starts. One way to accomplish this is with the cron utility, by adding a task that executes each time the system restarts:
  1. run sudo crontab -e to edit the crontab file containing cron tasks for root
  2. if using it for the first time, crontab prompts you to pick an editor
  3. add as the first line (after the comments):
    @reboot nvidia-smi -pm 1

Dataset

4. Preparing Data

We prepared data to demonstrate how to experiment and analyze models with Driverless AI. Departing from trivial examples like Titanic and other ML "Hello, World!" datasets, a COVID-19 theme made sense. But it won't be modeling the exponential growth curve of COVID-19 cases, which - while important and extremely powerful - is already found on the H2O blog in Modeling Currently Infected Case by COVID-19 Using H2O Driverless AI by Marios Michailidis. To complement this analysis, let's look into forecasting demand for certain product groups. We prepared data with the package gtrendsR, utilizing Google Trends as a proxy for demand for popular products during the COVID-19 crisis:


The dataset contains daily Google Trends (search interest) for products represented by keywords (serving as a proxy for real demand) in the United States, Canada, and Great Britain (majority English-speaking) from 2019-12-28 through 2020-04-04 (before breaking it up, which is discussed next):

Lastly, two datasets are created: one for training, containing data since 2020-01-01 except for the last week, and one for testing (2020-03-31 through 2020-04-06), exactly 7 days long. The reason we allocate one week of test data is that our model will forecast the next 7 days of demand, and having test data spanning exactly the same time period is ideal.

In later installments of this series we will show how to create a dataset with Google Trends inside Driverless AI using data recipes.

Getting Started with Driverless AI


5. Navigating Driverless AI

Several unofficial rules of Driverless AI will guide us throughout this post, starting with how to organize the flow of activities:
An unofficial rule #1:

Always try following the same flow of actions as the order of the navigation tabs on top, from left to right:

The tabs that lead you throughout Driverless AI workflow are:
  1. Datasets: displays and manages datasets ingested into Driverless AI system (action: ingesting and preparing data);
  2. AutoViz: displays and manages list of automated visualization dashboards (one per dataset, action: automated exploratory data analysis);
  3. Experiments: displays and manages list of Driverless AI experiments (multiple experiments per dataset, action: creating machine learning models);
  4. Diagnostics: displays and manages the list of model diagnostic dashboards (multiple diagnostics per experiment and dataset possible, action: analyzing model performance);
  5. MLI: displays and manages the list of Machine Learning Interpretability dashboards (usually one explanation dashboard per experiment, action: explaining and interpreting models);
  6. Deployments: displays and manages list of model deployments (multiple deployments per experiment possible depending on environment, action: deploying models).
To better understand the flow and relationships between Driverless AI artifacts, review the following diagram:
Thus, a complete Driverless AI workflow consists of ingesting a dataset, exploring it with AutoViz, creating a model (experiment) trained on a dataset, diagnosing a model, exploring a model in MLI, and finally deploying it.

6. Loading Data

In Driverless AI, go to the Datasets tab and click on Add Datasets, then pick the Upload File option:

The browser will open a file picker where you can choose one or multiple files at once:

This will trigger the upload, auto-parsing, and saving of the Google Trends data files from your local machine to Driverless AI storage:

Alternatively, since both files are also available from the public S3 bucket here: https://s3.console.aws.amazon.com/s3/buckets/h2o-public-test-data/smalldata/timeSeries/?region=us-east-1, you can import them using the Amazon S3 option by entering the S3 url: s3://h2o-public-test-data/smalldata/timeSeries/

Driverless AI will warn you but still let you create duplicate datasets if you confirm your intent. Besides wasting disk space, duplicates may introduce confusion down the road, so choose either uploading from your machine or importing from S3, but not both (feel free to try different options though - you can always delete extra datasets after all).

7. Dataset Details

An unofficial rule #2:
Check the dataset after auto-parsing to confirm that the data imported as expected, e.g. column names, data types, missing values, etc.
The main reason for checking after Driverless AI is not that its auto-parsing functionality is lacking, but much simpler: there is always ambiguity in data that may result in multiple acceptable formats and/or data types. For example, a categorical column represented with only numeric characters usually gets parsed as a numeric data type. Dataset details let the user both review and correct data type decisions made by auto-parsing: in the Datasets tab click on product_demand_train.csv to see the available actions: Details, Visualize, Split, Predict, Rename, Download, and Delete:

After choosing Details, Driverless AI displays the Dataset Details view:


This view contains summary statistics and a distribution plot for each column. It also offers ways to alter data types and data formats to correct auto-parsing, as prescribed by rule #2. One example: backtracking from a numeric type to string (or categorical) for values containing only numeric characters.

8. Analyzing Data with AutoViz

An unofficial rule #3:
Never build a model without visualizing data in AutoViz first.
For advanced exploration, choose the next option in the Dataset menu - Visualize:


Driverless AI will take you to the Visualizations tab:

Click on product_demand_train.csv to display the AutoViz dashboard that contains different types of advanced visualizations selected for the dataset:

So what just happened? By choosing Visualize we triggered a fully automated process that runs statistical tests, unsupervised models, and anomaly detection, then selects interesting observations leaving out trivial ones, and finally compiles them into a visual dashboard to present the results. This workflow has a name in Driverless AI - AutoViz - and is characterized by:
  1. univariate analysis of dataset features: outliers, skewness, spiky distributions, gaps in distributions;
  2. multivariate analysis on dataset: correlations (including between numeric and categorical features), varying boxplots, heteroscedastic boxplots, biplots, multivariate outliers, k-means clustering, 1-NN, SVD and more;
  3. qualifying which results to include; for example, correlated scatterplots include pairs of features with a squared Pearson's r greater than 0.95;
  4. aggregating the data to display larger points: "the bigger the point is, the bigger number of exemplars (aggregated points) the plot covers".
For details, please read the Visualizing Datasets chapter in the docs. We leave AutoViz with an illustration of the k-means clustering analysis that found 23 clusters (and no multivariate outliers) in the training dataset, displayed as a Parallel Coordinates Plot:

9. Starting Experiment

To begin the AutoML workflow, go to the Datasets tab and click on product_demand_train.csv, then choose Predict:

Driverless AI will switch to a supervised machine learning experiment, prompting you to select a target:
 
Select hits first and observe that the other options get filled automatically:


What just happened? Driverless AI determined that:
  1. because the target variable hits is an integer and contains over 100 unique values, the problem type is regression;
  2. set RMSE as optimization metric;
  3. detected ~10K rows and 4 features in the training data;
  4. set the default settings: accuracy to 7, time to 2, and interpretability to 8. The higher the accuracy (1-10), the more effort is invested to reach better results. The higher the time (1-10), the longer the experiment spends searching for the best transformations and hyperparameters. The higher the interpretability (1-10), the less complex the models and transformations used;
  5. finally, high level plan for experiment pipeline shows:
    • all training data will be used (no sampling);
    • algorithms to try: Decision Tree, LightGBM, and XGBoost;
    • models and validation schema used during feature evolution phase: 3-fold cross-validation;
    • final model and validation schema to train it on: 6 model ensemble trained on 3-fold cross-validation;
    • feature evolution genetic algorithm phase configuration in terms of the number of individuals and generations (iterations): 8 individuals, 48 iterations;
    • early stopping for feature evolution: 5 iterations of no improvement;
    • any constraints on features: monotonicity constraint and pre-pruning of features based on permutation importance are enabled;
    • number of models to train for:
      • target transform tuning: 36, 
      • model and feature tuning: 192, 
      • feature evolution: 288, and
      • final model: 6;
    • estimated runtime in minutes (usually a very crude estimate);
    • the model will auto-finish after 1 day and auto-abort after 7 days.
At this point, if the experiment setup is complete, you can safely start it by pressing the Launch Experiment button. This brings up
An unofficial rule #4:
When creating a model for the first time, always use the default (or "lower") settings in Driverless AI.
The word lower is quoted because the interpretability setting moves model complexity in the opposite direction - from 10 ("lowest") to 1 ("highest"); you can think of it as
 complexity = 11 - interpretability
to have all 3 settings move consistently.

If the data were indeed i.i.d., then the regression setup above would be enough to start experimenting per the last rule.

10. Time Series Model Setup

But the Google Trends data calls for a time series model, and Driverless AI supports it with its Time Series Lag-Based Recipe, so we continue with the experiment setup:

Extra steps to setup time series model:
  1. set the dataset product_demand_test.csv as Test. This could (better yet, should - see rule #5 below) have been done for regression or other types of models as well, but in the case of the lag-based time series recipe it has an additional important role: indicating how far into the future we want model predictions to reach (the forecast horizon below).
  2. set the column date as the Time Column, which effectively makes the experiment a time series one and triggers the display of Time Series Settings on the right side of the screen.
  3. set the columns geo and keyword as Time Groups Columns (TGC) to identify the multiple time series by state (geo) and keyword in the data. This is arguably the most powerful feature of the Driverless AI approach, as it allows a single model to forecast on multiple time series, having access to both individual time series and aggregated data.
  4. the time column is automatically parsed to determine the time dimension, interval, and periodicity, including a proposed forecast horizon based on the time span of the test set (see 1.).
  5. set the scorer to MAE as one of the standard metrics for time series model performance.
  6. observe the new values of the settings - 8/4/8 - and feel free to change them to "lower" values if you like (remember rule #4).
  7. finally, review a few notable changes in the experiment pipeline:
    • the validation schema switched to 4 time-based validation splits (time-based splits are always necessary when a time column is defined, even if the time series lag-based recipe is disabled in Expert Settings);
    • LightGBM and XGBoost are the only models used;
    • new lag-based transformations were added: Lags, EwmaLags, LagsAggregates, and LagsInteraction;
    • a greater number of models will be created due to the increase in the accuracy and time settings.
While working through the experiment setup we relied on a few more conventions.
An unofficial rule #5:
Always strive to assign a test dataset to an experiment.
Test data (or holdout) is never used during training of the modeling pipeline, which means the final model is the same with or without it (except for Kaggle mode in Expert Settings, disabled by default). Driverless AI computes test predictions at the very end to provide an estimate of the generalization (or out-of-sample) error.
An unofficial rule #6:
When creating a model for the first time, avoid using Expert Settings unless absolutely necessary for the experiment setup.
Seldom is an option in Expert Settings necessary for model setup. One example is when the data is non-i.i.d. (time dependent), so a time column is set, but the time series lag-based recipe doesn't apply and should be disabled in Expert Settings. The Google Trends dataset is a multiple time series problem that Driverless AI can comfortably handle without advanced customizations to start. That doesn't mean that certain tweaks in Expert Settings - especially inside its Time Series tab - won't come in handy later.
An unofficial rule #7:
Do not leave TGC set to AUTO; rather, set the column or columns identifying multiple time series (i.e. TGC) explicitly.
While Driverless AI is certainly capable of recognizing TGC automatically (the columns that group data into multiple time series), by setting TGC explicitly you eliminate the slightest chance of uncertainty.

11. Time Series Model

Launch the experiment by pressing the namesake button, then wait for a couple of minutes while reading and clearing experiment notifications (notifications are always available for review via the Notifications link above the CPU/Memory timeline) so you can observe the current state of the experiment workflow:

When the experiment completes, Driverless AI displays the final model:

The completed experiment view consists of:
  1. Experiment setup;
  2. Evolution pipeline displaying models generated during feature and model tuning and selection; 
  3. Top variables in terms of feature transformations selected during evolution;
  4. Experiment summary;
  5. Available actions:
    • Deploy to a cloud or locally
    • Interpret the model (MLI)
    • Diagnose the model
    • Score on another dataset
    • Transform on another dataset
    • Download predictions (training or test)
    • Download Python scoring pipeline
    • Download MOJO scoring pipeline
    • Visualize scoring pipeline
    • Download summary and logs
    • Download Autoreport (Auto Documentation)

12. Time Series Model Analysis

Each action deserves its own post, so we focus on model analysis with MLI. Because the experiment used the lag-based time series recipe, Driverless AI will engage a special flavor of MLI for time series. Press the Interpret This Model button to see Driverless AI start processing and computing predictions, their errors, Shapley values, and more:

MLI for time series comes in handy to visualize each time series per group and compare predictions against actuals. When model interpretation completes, it displays the MLI view, including:
  • MAE Time Series plot: errors across the validation (holdout) and forecast horizon periods, averaged over all time series;
  • Test metrics for the top 5 and bottom 5 groups (time series identified by their TGC values);
  • Actual vs. predicted plot across the holdout and forecast horizon, plus actuals for the training period, for any time series chosen by entering its TGC values as shown:

13. Shapley Values

MLI for time series is a powerful diagnostics tool for analyzing multiple time series models. But it also goes beyond diagnostics: it includes Shapley values that explain the key factors (features) contributing to each prediction. For example, enter the TGC values US,milk to display its time series and click on the peak value in the forecast horizon period as shown below:

Driverless AI displays a Shapley values bar chart of feature contributions for the prediction on that date. For example, as shown above, on the 4th of April the biggest impact came from the feature representing the 7-day lag (TargetLag:date:geo:keyword.5). You can continue changing dates to see how contributions shift and features become more or less impactful across the forecast horizon timeline. Remember that this analysis is specific to the time series for location US with keyword milk, so switching to a different time series may produce similar or drastically different results for the same Driverless AI model.


14. What's Next?

The model we just created can be considered a baseline model. The next step would be iterating over experiments to achieve a higher score by means of:
  1. increasing accuracy setting;
  2. increasing time setting;
  3. lowering interpretability setting (increasing complexity);
  4. adjusting Expert Settings.
An unofficial rule #8:
When iterating, apply "atomic" changes to the next experiment so that any increase or decrease in model performance can be attributed to the single factor associated with that "atomic" change.
For example, if you increase accuracy and/or time, then keep all other parameters intact. Likewise, if you adjust a certain parameter in Expert Settings, then keep the rest of the parameters the same. Then, when a model gets better (or worse, or stays the same), only that "atomic" change can be responsible for the effect. Embrace the change if performance increased, or discard it if not.

What constitutes an "atomic" change? The easy answer is a change to a single setting or parameter, but it could also be a set of related parameters that work together - typical examples are the accuracy and time settings, or switching algorithms on/off in Expert Settings.

Happy experimenting!