
Tuesday, December 20, 2016

Correlation Primer with Aster and R

Calculating correlations is often a starting point before more advanced analytical steps take place. Big data (long data) presents computational challenges of both scale and distribution, which may be aggravated by the presence of a large number of features (wide data). And the challenges do not stop there, as complex relationships call for analyzing correlations across subsets and groups.

Such a mix of long and wide data becomes more common in the age of internet-of-things, sensor, and machine data, with non-human data sources dominating analytical use cases.
Thus, when computing correlations on big data the following capabilities matter:
  • scale on large distributed data sets (long data)
  • scale on wide distributed data sets (wide data / large number of features)
  • flexibility on wide data sets (ability to permute features: Cartesian combinations, one-to-many, etc.)
  • correlations on subsets and groups.
R supports correlations out of the box with the stats function cor, but it meets few of the capabilities above. The Teradata Aster big data analytical platform offers both the scalability and the functionality listed, and thanks to the Aster R (TeradataAsterR) package it is all available without leaving the R environment.
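For reference, this is all base R's cor offers on a small in-memory data frame (made-up data); everything must fit in memory, with no support for groups or feature permutations:

# base R: full pairwise correlation matrix, in memory only
set.seed(42)
df = data.frame(x = rnorm(100), y = rnorm(100), z = rnorm(100))
cor(df, method = "pearson")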

With the Aster and R integration there are multiple ways of computing correlations on data sets. Before sending you to the link for the detailed discussion, I summarized the approaches discussed there by the capabilities above:


Method / Solution             | Variable (Columns) Permutations | Calculating for Groups | SQL-MR | In-database R
------------------------------|---------------------------------|------------------------|--------|--------------
Aster R ta.cor                | N                               | N                      | Y      | N
Aster R in-database ta.tapply | N                               | Y                      | N      | Y
toaster computeCorrelations   | Y                               | Y                      | Y      | N
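For a taste of the last method, here is a minimal sketch using toaster's computeCorrelations; the ODBC DSN, table, column names, and argument details are assumptions here:

library(toaster)
library(RODBC)

conn = odbcConnect("AsterDSN")   # hypothetical DSN

# pairwise correlations over the listed columns, computed in-database
corrs = computeCorrelations(conn, "sensor_readings",
                            include = c("temperature", "pressure", "humidity"))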

Please visit my latest RPubs post for a detailed discussion and comparison of these methods.

Tuesday, May 31, 2016

Running similar but independent jobs in parallel on Aster with R

It is no surprise that Teradata Aster runs each SQL, SQL-MR, and SQL-GR command in parallel across the nodes of a cluster with distributed data. But when faced with the task of running many similar but independent jobs, one has to do extra work to parallelize them in Aster. When running a SQL script, each command has to wait for the previous one to finish. This makes sense when commands form a pipeline, with the results of each job passed down to the next one. But what if the jobs are independent and each produces its own result? For example, cross-validation of linear regression or other models divides into independent jobs, each working on its respective partition (of K total, in the case of K-fold cross-validation). These jobs could run in parallel in Aster with a little help from R. This post illustrates how to run K linear regression models in parallel in Aster as part of the K-fold cross-validation procedure.


The Problem

Cross-validation is an important technique in machine learning that receives its own chapters in textbooks (e.g. see Chapter 7 here). In our examples we implement a K-fold cross-validation method to demonstrate how to run parallel jobs in Aster with R. The implementation given is neither exhaustive nor exemplary, as it introduces a certain bias (based on the month of the year) into the models. But the approach can definitely lead to a general solution for cross-validation and other problems involving the execution of many similar but independent tasks on the Aster platform.

Furthermore, the examples are concerned only with the step of K-fold cross-validation that creates K models on overlapping but different partitions of the training data set. We will show how to construct K independent linear regression models in parallel on Aster, each for one of the K partitions of the table (not the same as table partitioning in Aster).


Data and R Packages

We will use the Dallas Open Data data set available from here (including Aster load scripts).
To simplify the examples we will also use the R package toaster for Aster and several other packages, all available from CRAN:
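(The exact list below is an assumption, based on the packages used throughout this post.)

install.packages(c("toaster", "RODBC", "foreach", "doParallel"))

library(toaster)
library(RODBC)
library(foreach)
library(doParallel)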



Data set, Model and K Folds

Dallas Open Data has information on building permits across the city of Dallas from January 2012 through May 2014, stored in the table dallasbuildingpermits. We can quickly analyze this table from R with toaster and see its numerical columns:
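(A sketch using toaster's table summary helpers; the DSN name is hypothetical.)

conn = odbcConnect("AsterDSN")   # hypothetical DSN

# collect column-level statistics, then keep only the numeric columns
tableInfo = getTableSummary(conn, "dallasbuildingpermits")
getNumericColumns(tableInfo)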


which results in:
[1] "area" "value" "lon" "lat"
These 4 fields will make up our simple linear model to determine the value of construction using its area and location. And now the same in R terms:
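# the model in R formula notation
value ~ area + lon + lat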


This problem is not beyond R's memory limits, but our goal is to execute the linear regression in Aster. We enlist toaster's computeLm function, which returns an R lm object:
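(A sketch, reusing the connection opened earlier.)

# fit the model in-database; the result behaves like a regular lm object
fit = computeLm(conn, "dallasbuildingpermits", value ~ area + lon + lat)
summary(fit)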

Lastly, we need to define the folds (partitions) on the table in order to build a linear regression model on each of them. Usually this step performs an equal and random division into partitions. Doing that with R and Aster is actually not extremely difficult, but it would take us beyond the scope of the main topic. For this reason alone we propose a quick-and-dirty method of dividing the building permits into 12 partitions (K=12) using the issue date's month value (in SQL):
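(A sketch of the idea; the issue date column name issue_date is an assumption.)

SELECT extract(month FROM issue_date) AS fold, *
FROM dallasbuildingpermits;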

Again, do not replicate this method in a real cross-validation task; use it as a template or a prototype only.
To make each fold's complement (used to train the 12 models later) we simply exclude each month's data, e.g. selecting the complement of fold 6 in its entirety (in SQL):
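(Same column-name assumption as above.)

SELECT *
FROM dallasbuildingpermits
WHERE extract(month FROM issue_date) <> 6;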


Computing Cross-Validation Models in Aster with R

Before we get to parallel execution with R, we show how to script Aster cross-validation of linear regression in R. To begin, we use a standard R for loop and computeLm with the argument where, which limits the data to the required fold just like in the SQL example above:
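(A sketch under the same column-name assumption.)

fit.folds = list()
for (i in 1:12) {
  # train on the complement of fold i by excluding month i
  fit.folds[[i]] = computeLm(conn, "dallasbuildingpermits",
                             value ~ area + lon + lat,
                             where = paste("extract(month from issue_date) <>", i))
}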


This results in the list fit.folds containing the 12 linear regression models, one for each fold.
Next, we replace the for loop with the specialized foreach function designed for parallel execution in R. There is no parallel execution yet, but all the structure necessary for the transition to parallel processing is in place:
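(The same sketch rewritten with foreach.)

library(foreach)

# %do% runs sequentially; results are combined into a list by default
fit.folds = foreach(i = 1:12) %do% {
  computeLm(conn, "dallasbuildingpermits", value ~ area + lon + lat,
            where = paste("extract(month from issue_date) <>", i))
}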

foreach performs the same iterations from 1 to 12 as the for loop and, by default, combines the results into a list.


Parallel Computing in Aster with R

Finally, we are ready to enable parallel execution in R. For more details on using the package doParallel see here, but the following suffices to enable a parallel backend in R on Windows:
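(The number of workers below is an arbitrary choice.)

library(doParallel)

cl = makeCluster(4)     # spawn 4 worker processes
registerDoParallel(cl)  # register them as the foreach parallel backend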



After that, foreach with the operator %dopar% automatically recognizes the parallel backend cl and runs its iterations in parallel:
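(A sketch under the same assumptions; note the per-worker connection handling explained below.)

fit.folds = foreach(i = 1:12, .packages = c("toaster", "RODBC")) %dopar% {
  # the original connection cannot be shared across workers:
  # reconnect using its saved configuration
  parConn = odbcReConnect(conn)
  fit = computeLm(parConn, "dallasbuildingpermits", value ~ area + lon + lat,
                  where = paste("extract(month from issue_date) <>", i))
  odbcClose(parConn)
  fit
}

stopCluster(cl)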



Comparing with foreach %do% earlier, notice the extra handling of the ODBC connection inside foreach %dopar%. This is necessary because the same database connection cannot be shared between parallel processes (or threads, depending on the backend implementation). Effectively, on each iteration we reconnect to Aster with a brand new connection by reusing the original connection's configuration in the function odbcReConnect.


Elapsed Time

Lastly, let's see if the whole thing was worth the effort. The chart below contains elapsed times (in seconds) for all 3 types of loops: the for loop in R, foreach %do% (sequential), and foreach %dopar% (parallel):


Sunday, April 24, 2016

Map of the Windows Fonts Registered with R

If you have already found the package extrafont then you probably know how to load and use Windows fonts in R visualizations. But just in case, everything needed to get started with extrafont is found here, and using fonts on Windows for on-screen or bitmap output is summarized below:
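(A minimal sketch; font_import only needs to run once and may take a while.)

library(extrafont)

font_import()              # one-time scan of the installed Windows fonts
loadfonts(device = "win")  # register the fonts with the Windows graphics device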


One thing to add is a summary of all the Windows fonts registered in R. This will come in handy when designing new visualizations and deciding which font, or combination of fonts and their faces, to use. The code below produces a table where rows are fonts and columns are faces, with the font name printed using both the font and the face (if available) in each table cell:
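(A sketch approximating that idea with ggplot2 and extrafont's fonttable(); the exact layout of the original is not preserved.)

library(extrafont)
library(ggplot2)

ft = fonttable()

# derive the ggplot2 fontface from the Bold/Italic flags
ft$face = with(ft, ifelse(Bold & Italic, "bold.italic",
                   ifelse(Bold, "bold",
                   ifelse(Italic, "italic", "plain"))))

# rows are font families, columns are faces, each cell drawn in its own font
ggplot(ft, aes(x = face, y = FamilyName)) +
  geom_text(aes(label = FamilyName, family = FamilyName, fontface = face),
            size = 3) +
  theme_minimal()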


The resulting table is this handy visual:



You can download this image or produce your own with the code above.

Saturday, April 16, 2016

Creating and Tweaking Bubble Chart with ggplot2

This article will take us step by step through incremental changes to produce a bubble chart using ggplot2 that looks like this:


Data and Setup

We'll encounter the plot above once again at the very end, after explaining each step with code changes and observing the intermediate plots. Without getting into the details of what it means (the curious reader can find out here), the dataset behind it is defined as:
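(A hypothetical reconstruction: the attribute names come from the text below, but the values and product labels are made up.)

data = data.frame(product = c("Aster R", "toaster"),
                  Aster_experience = c(8, 4),
                  R_experience = c(4, 9),
                  coverage = c(2, 10),
                  stringsAsFactors = FALSE)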


It contains 2 data points and 4 attributes: three numerical (Aster_experience, R_experience, and coverage) and one categorical (product). Remember that the data won't change a bit while the plot progression unfolds.

As-Is Scatterplot

The starting plot is a simple scatterplot using Aster_experience and R_experience as the x and y coordinates (line 3), coverage as the point size, and product as the point color (line 4). This type of scatterplot has a special name, the bubble chart:
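(A sketch consistent with the line references above.)

library(ggplot2)

ggplot(data, aes(x = Aster_experience, y = R_experience)) +
  geom_point(aes(size = coverage, color = product))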




Fixing Point Sizes

An immediate fix is to make the smaller point big enough to see with the help of the scale_size function and its range argument (line 3) (strangely enough, the sibling function scale_size_area doesn't have such an argument), which specifies the minimum and maximum size of the plotting symbol after transformation1:
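(The range endpoints below are arbitrary.)

ggplot(data, aes(x = Aster_experience, y = R_experience)) +
  geom_point(aes(size = coverage, color = product)) +
  scale_size(range = c(5, 20))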



Magic Quadrant: adding lines and customizing axes

The next refinement aims at the magic quadrant concept, which fits this data well. In this case it's "R Experience" vs. "Aster Experience" and whether there is more or less of each. Achieving this effect involves fake axes using geom_hline and geom_vline (line 3), and customizing the actual axes using the scale (lines 5-6) and theme functions (lines 8-12):
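(A sketch matching the line references; the limits, intercepts, and theme details are assumptions.)

ggplot(data, aes(x = Aster_experience, y = R_experience)) +
  geom_point(aes(size = coverage, color = product)) +
  geom_hline(yintercept = 5) + geom_vline(xintercept = 5) +
  scale_size(range = c(5, 20)) +
  scale_x_continuous(name = "Aster Experience", limits = c(0, 10), breaks = NULL) +
  scale_y_continuous(name = "R Experience", limits = c(0, 10), breaks = NULL) +
  coord_fixed() +
  theme(axis.title.x = element_text(hjust = 1),
        axis.title.y = element_text(hjust = 1),
        panel.background = element_blank(),
        panel.grid = element_blank(),
        axis.ticks = element_blank())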




Adding Text and Color to Points 

As is typical for bubble charts, the points get both colored and labeled, which also makes the color bar legend obsolete. We use geom_text to label the points (line 5) and scale_color_manual to assign new colors and remove the color bar legend (line 11):
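(Continuing the sketch; the two colors are placeholders.)

ggplot(data, aes(x = Aster_experience, y = R_experience)) +
  geom_point(aes(size = coverage, color = product)) +
  geom_hline(yintercept = 5) + geom_vline(xintercept = 5) +
  scale_size(range = c(5, 20)) +
  geom_text(aes(label = product), color = "white", size = 4) +
  scale_x_continuous(name = "Aster Experience", limits = c(0, 10), breaks = NULL) +
  scale_y_continuous(name = "R Experience", limits = c(0, 10), breaks = NULL) +
  theme(panel.background = element_blank(),
        panel.grid = element_blank(),
        axis.ticks = element_blank()) +
  scale_color_manual(values = c("darkorange", "steelblue"), guide = "none")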



Customizing Legend

This step happened to tackle the most advanced problem encountered while working on the plot. The guide legend for size above looks rather awkward; ideally, it matches the two points we have in both color and size. It turned out (and rightly so) that the function scale_size is responsible for its appearance (line 8). In particular, the argument breaks overrides the number of legend key positions, and controlling the appearance of the legend, including its colors, is performed with guide_legend and override.aes:
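(A sketch of the fix: breaks pinned to the two coverage values, and the legend keys recolored to match the points.)

ggplot(data, aes(x = Aster_experience, y = R_experience)) +
  geom_point(aes(size = coverage, color = product)) +
  geom_hline(yintercept = 5) + geom_vline(xintercept = 5) +
  geom_text(aes(label = product), color = "white", size = 4) +
  scale_x_continuous(name = "Aster Experience", limits = c(0, 10), breaks = NULL) +
  scale_y_continuous(name = "R Experience", limits = c(0, 10), breaks = NULL) +
  scale_color_manual(values = c("darkorange", "steelblue"), guide = "none") +
  scale_size(range = c(5, 20), breaks = c(2, 10),
             guide = guide_legend(override.aes =
                                    list(color = c("darkorange", "steelblue"))))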



Finishing Touch with Custom Theme

We finish cleaning up the plot using the package ggthemes and its theme_tufte function (line 10):
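(The final sketch, with theme_tufte landing on line 10.)

library(ggthemes)

ggplot(data, aes(x = Aster_experience, y = R_experience)) +
  geom_point(aes(size = coverage, color = product)) +
  geom_hline(yintercept = 5) + geom_vline(xintercept = 5) +
  geom_text(aes(label = product), color = "white", size = 4) +
  scale_color_manual(values = c("darkorange", "steelblue"), guide = "none") +
  scale_size(range = c(5, 20), breaks = c(2, 10),
             guide = guide_legend(override.aes = list(color = c("darkorange", "steelblue")))) +
  theme_tufte()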




As promised, we finished exactly where we started.


1 Scale size (area or radius).

Saturday, January 30, 2016

R Graph Objects: igraph vs. network

While working on new graph functions for my package toaster I had to pick from the R packages that represent graphs. The choice was between the network and graph objects from the packages network and igraph respectively, the two most prominent packages for creating and manipulating graphs and networks in R.

Interchangeability of network and graph objects


One can always use them interchangeably with little effort using the package intergraph. Its sole purpose is to provide "coercion routines for network data objects". Simply use its asNetwork and asIgraph functions to convert from one network representation to the other:

library(igraph)
library(network)
library(intergraph)
 
# igraph (edges.mat is the two-column edge matrix built in the miniCRAN section below)
pkg.igraph = graph_from_edgelist(edges.mat, directed = TRUE)
pkg.network.from.igraph = asNetwork(pkg.igraph)
all.equal(length(get.edgelist(pkg.igraph)), length(as.matrix(pkg.network.from.igraph, "edgelist")))
 
# network
pkg.network = network(edges.mat)
pkg.igraph.from.network = asIgraph(pkg.network)
all.equal(length(as.matrix(pkg.network, "edgelist")), length(get.edgelist(pkg.igraph.from.network)))

For more on using intergraph functions see this tutorial.

Package dependencies with miniCRAN


To assess the relative importance of the packages network and igraph we will use the package miniCRAN. Its access to CRAN packages' metadata, including dependencies via "Depends", "Imports", and "Suggests", provides the necessary information about package relationships. The built-in makeDepGraph function recursively retrieves these dependencies and builds the corresponding graph:

library(miniCRAN)
 
cranInfo = pkgAvail()
 
plot(makeDepGraph(c("network"), availPkgs = cranInfo))
plot(makeDepGraph(c("igraph"), availPkgs = cranInfo))


Unfortunately, these dependency graphs show how network and igraph depend on other CRAN packages, while the goal is to evaluate the relationships the other way around: how much other CRAN packages depend on the two.

This will require some assembly, as we construct a network of packages manually, with edges being directed relationships (one of "Depends", "Imports", or "Suggests") as defined in the DESCRIPTION files of all packages. The following code builds this igraph object (we chose igraph for its functions utilized later):

library(plyr)
 
cranInfoDF = as.data.frame(cranInfo, stringsAsFactors = FALSE)
 
edges = ddply(cranInfoDF, .(Package), function(x) {
  # split all implied (depends, imports, and suggests) packages and then concat into single array
  l = unlist(sapply(x[c('Depends','Imports','Suggests')], strsplit, split="(,|, |,\n|\n,| ,| , )"))
 
  # remove version info and empty fields that became NA
  l = gsub("^([^ \n(]+).*$", "\\1", l[!is.na(l)])
 
  # take care of empty arrays
  if (is.null(l) || length(l) == 0) 
    NULL
  else
    data.frame(Package = x['Package'], Implies = l, stringsAsFactors = FALSE)
} )
 
edges.mat = as.matrix(edges[, c("Package", "Implies")])  # two-column (from, to) edge matrix
pkg.graph = graph_from_edgelist(edges.mat, directed = TRUE)

The resulting network pkg.graph contains all CRAN packages and their relationships. Let's extract and compare the neighborhoods for the two packages we are interested in:

# build subgraphs for each package
subgraphs = make_ego_graph(pkg.graph, order=1, nodes=c("igraph","network"), mode = "in")
g.igraph = subgraphs[[1]]
g.network = subgraphs[[2]]
 
# plotting subgraphs
V(g.igraph)$color = ifelse(V(g.igraph)$name == "igraph", "orange", "lightblue")
plot(g.igraph, main="Packages pointing to igraph")
V(g.network)$color = ifelse(V(g.network)$name == "network", "orange", "lightblue")
plot(g.network, main="Packages pointing to network")



The igraph neighborhood is a much more densely populated subgraph than the network neighborhood, so its importance and acceptance must be higher.

Package Centrality Scores


The package igraph can produce various centrality measures on the nodes of a graph. In particular, PageRank centrality and eigenvector centrality scores are principal indicators of the importance of a node in a given graph. We finish this exercise by using centrality scores to validate our initial conclusion that the igraph package is more accepted and utilized across the CRAN ecosystem than the network package:

# PageRank
pkg.pagerank = page.rank(pkg.graph, directed = TRUE)
 
# Eigenvector Centrality
pkg.ev = evcent(pkg.graph, directed = TRUE)
 
toplot = rbind(data.frame(centrality="pagerank", type = c('igraph','network'), 
                          value = pkg.pagerank$vector[c('igraph','network')]),
               data.frame(centrality="eigenvector", type = c('igraph','network'), 
                          value = pkg.ev$vector[c('igraph','network')]))
 
library(ggplot2)
library(ggthemes)
ggplot(toplot) +
  geom_bar(aes(type, value, fill=type), stat="identity") +
  facet_wrap(~centrality, ncol = 2)



Conclusion


Both packages igraph and network are widely used across the CRAN ecosystem. Due to its versatility and rich set of functions, igraph leads in acceptance and importance. But as far as graph objects are concerned, it remains a matter of your requirements whether to prefer one package's objects or the other's in R.