Data Mining Algorithms In R/Frequent Pattern Mining/The FP-Growth Algorithm

In data mining, the task of finding frequent patterns in large databases is very important and has been studied extensively in the past few years. Unfortunately, this task is computationally expensive, especially when a large number of patterns exist.

The FP-Growth Algorithm, proposed by Han et al., is an efficient and scalable method for mining the complete set of frequent patterns by pattern fragment growth, using an extended prefix-tree structure for storing compressed and crucial information about frequent patterns, named the frequent-pattern tree (FP-tree). In their study, Han et al. showed that their method outperforms other popular methods for mining frequent patterns, e.g. the Apriori Algorithm and TreeProjection. Later works showed that FP-Growth also performs better than other methods, including Eclat and Relim. The popularity and efficiency of the FP-Growth Algorithm have motivated many studies that propose variations to improve its performance.

This chapter describes the algorithm and some of its variations, and discusses features of the R language and strategies to implement the algorithm for use in R. Finally, a brief conclusion and future works are proposed.

The algorithm
The FP-Growth Algorithm is an alternative way to find frequent itemsets without using candidate generation, thus improving performance. To do so, it uses a divide-and-conquer strategy. The core of this method is the usage of a special data structure named frequent-pattern tree (FP-tree), which retains the itemset association information.

In simple words, the algorithm works as follows: first, it compresses the input database, creating an FP-tree instance to represent frequent items. After this first step, it divides the compressed database into a set of conditional databases, each one associated with one frequent pattern. Finally, each such database is mined separately. Using this strategy, FP-Growth reduces the search costs by looking for short patterns recursively and then concatenating them into long frequent patterns, offering good selectivity.

In large databases, it may not be possible to hold the FP-tree in main memory. A strategy to cope with this problem is to first partition the database into a set of smaller databases (called projected databases), and then construct an FP-tree from each of these smaller databases.

The next subsections describe the FP-tree structure and the FP-Growth Algorithm; finally, an example is presented to make these concepts easier to understand.

FP-Tree structure
The frequent-pattern tree (FP-tree) is a compact structure that stores quantitative information about frequent patterns in a database.

Han et al. define the FP-tree as the tree structure described below:


 * 1) One root labeled as “null”, with a set of item-prefix subtrees as children, and a frequent-item-header table (presented on the left side of Figure 1);
 * 2) Each node in the item-prefix subtrees consists of three fields:
 * Item-name: registers which item is represented by the node;
 * Count: the number of transactions represented by the portion of the path reaching the node;
 * Node-link: links to the next node in the FP-tree carrying the same item-name, or null if there is none.


 * 3) Each entry in the frequent-item-header table consists of two fields:
 * Item-name: the same as in the node;
 * Head of node-link: a pointer to the first node in the FP-tree carrying the item-name.

Additionally, the frequent-item-header table can store the support count for each item. Figure 1 below shows an example of an FP-tree.

Figure 1: An example of an FP-tree.

The original algorithm to construct the FP-tree, as defined by Han et al., is presented below in Algorithm 1.

Algorithm 1: FP-tree construction


 * Input: A transaction database DB and a minimum support threshold ξ.


 * Output: FP-tree, the frequent-pattern tree of DB.


 * Method: The FP-tree is constructed as follows.
 * Scan the transaction database DB once. Collect F, the set of frequent items, and the support of each frequent item. Sort F in support-descending order as FList, the list of frequent items.
 * Create the root of an FP-tree, T, and label it as “null”. For each transaction Trans in DB do the following:
 * Select the frequent items in Trans and sort them according to the order of FList. Let the sorted frequent-item list in Trans be [ p | P], where p is the first element and P is the remaining list. Call insert tree([ p | P], T ).
 * The function insert tree([ p | P], T ) is performed as follows. If T has a child N such that N.item-name = p.item-name, then increment N’s count by 1; else create a new node N, with its count initialized to 1, its parent link linked to T, and its node-link linked to the nodes with the same item-name via the node-link structure. If P is nonempty, call insert tree(P, N) recursively.

By using this algorithm, the FP-tree is constructed in two scans of the database. The first scan collects and sorts the set of frequent items, and the second constructs the FP-tree.
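As an illustration of Algorithm 1, a minimal R sketch of the two-scan construction is given below. The function and variable names are my own, transactions are assumed to be character vectors without duplicate items, and the node-links and the header table are omitted for brevity:

```r
# Minimal, illustrative sketch of FP-tree construction (Algorithm 1).
# transactions: a list of character vectors; min_sup: an absolute support count.
build_fptree <- function(transactions, min_sup) {
  # Scan 1: collect frequent items and sort them in support-descending order (FList)
  counts <- table(unlist(transactions))
  flist  <- names(sort(counts[counts >= min_sup], decreasing = TRUE))
  # The root node, labeled "null"; children are kept in a named list
  root <- list(item = "null", count = 0, children = list())
  insert <- function(node, items) {
    if (length(items) == 0) return(node)
    p <- items[1]
    if (is.null(node$children[[p]]))  # create a new child node if needed
      node$children[[p]] <- list(item = p, count = 0, children = list())
    node$children[[p]]$count <- node$children[[p]]$count + 1
    node$children[[p]] <- insert(node$children[[p]], items[-1])
    node
  }
  # Scan 2: insert each transaction's frequent items, sorted by FList order
  for (t in transactions) {
    root <- insert(root, flist[flist %in% t])
  }
  root
}

tree <- build_fptree(list(c("B", "E"), c("B", "A"), c("B", "E", "A")), 2)
```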

FP-Growth Algorithm
After constructing the FP-tree, it's possible to mine it to find the complete set of frequent patterns. To accomplish this job, Han et al. present a group of lemmas and properties, and thereafter describe the FP-Growth Algorithm, as presented below in Algorithm 2.

Algorithm 2: FP-Growth


 * Input: A database DB, represented by the FP-tree constructed according to Algorithm 1, and a minimum support threshold ξ.


 * Output: The complete set of frequent patterns.


 * Method: call FP-growth(FP-tree, null).


 * Procedure FP-growth(Tree, α) {
 * (01) if Tree contains a single prefix path then { // Mining single prefix-path FP-tree
 * (02)    let P be the single prefix-path part of Tree;
 * (03)    let Q be the multipath part with the top branching node replaced by a null root;
 * (04)    for each combination (denoted as β) of the nodes in the path P do
 * (05)       generate pattern β ∪ α with support = minimum support of nodes in β;
 * (06)    let freq pattern set(P) be the set of patterns so generated;
 * }
 * (07) else let Q be Tree;
 * (08) for each item ai in Q do { // Mining multipath FP-tree
 * (09)    generate pattern β = ai ∪ α with support = ai.support;
 * (10)    construct β’s conditional pattern-base and then β’s conditional FP-tree Treeβ;
 * (11)    if Treeβ ≠ Ø then
 * (12)       call FP-growth(Treeβ, β);
 * (13)    let freq pattern set(Q) be the set of patterns so generated;
 * }
 * (14) return(freq pattern set(P) ∪ freq pattern set(Q) ∪ (freq pattern set(P) × freq pattern set(Q)))
 * }

When the FP-tree contains a single prefix-path, the complete set of frequent patterns can be generated in three parts: the single prefix-path P, the multipath Q, and their combinations (lines 01 to 03 and 14). The resulting patterns for a single prefix path are the enumerations of its subpaths that have the minimum support (lines 04 to 06). Thereafter, the multipath Q is defined (line 03 or 07) and the resulting patterns from it are processed (lines 08 to 13). Finally, in line 14 the combined results are returned as the frequent patterns found.

An example
This section presents a simple example to illustrate how the previous algorithm works. The original version of this example can be found in the referenced paper.

Consider the transactions below, with the minimum support set to 3:

To build the FP-tree, the supports of the frequent items are first calculated and sorted in decreasing order, resulting in the following list: { B(6), E(5), A(4), C(4), D(4) }. Thereafter, the FP-tree is iteratively constructed for each transaction, using the sorted list of items, as shown in Figure 2.

Figure 2: Constructing the FP-Tree iteratively.

As presented in Figure 3, the initial call to FP-Growth uses the FP-tree obtained from Algorithm 1, presented in Figure 2 (f), to process the projected trees in recursive calls, obtaining the frequent patterns from the transactions presented before.

Using a depth-first strategy, the projected trees are determined for items D, C, A, E and B, respectively. First the projected tree for D is recursively processed, projecting trees for DA, DE and DB. In a similar manner the remaining items are processed. At the end of the process, the set of frequent itemsets is: { DAE, DAEB, DAB, DEB, DA, DE, DB, CE, CEB, CB, AE, AEB, AB, EB }.

Figure 3: Projected trees and frequent patterns found by the recursive calls to the FP-Growth Algorithm.

FP-Growth Algorithm Variations
As mentioned before, the popularity and efficiency of the FP-Growth Algorithm have motivated many studies that propose variations to improve its performance. In this section some of them are briefly described.

DynFP-Growth Algorithm
The DynFP-Growth algorithm focuses on improving the construction of the FP-tree, based on two observed problems:


 * 1) The resulting FP-tree is not unique for the same “logical” database;
 * 2) The process needs two complete scans of the database.

To solve the first problem, Gyorödi et al. propose the usage of a support descending order together with a lexicographic order, ensuring in this way the uniqueness of the resulting FP-tree for different “logically equivalent” databases. To solve the second problem, they propose devising a dynamic FP-tree reordering algorithm, and employing this algorithm whenever a “promotion” to a higher order of at least one item is detected.

An important feature in this approach is that it's not necessary to rebuild the FP-Tree when the actual database is updated. It's only needed to execute the algorithm again taking into consideration the new transactions and the stored FP-Tree.

Another adaptation proposed, because of the dynamic reordering process, is a modification of the original structures: replacing the single linked list with a doubly linked list for linking the tree nodes to the header, and adding a master-table to the same header. See the original paper for more details.

FP-Bonsai Algorithm
The FP-Bonsai algorithm improves the FP-Growth performance by reducing (pruning) the FP-tree using the ExAnte data-reduction technique. The pruned FP-tree is called an FP-Bonsai. See the original paper for more details.

AFOPT Algorithm
Investigating the performance of the FP-Growth algorithm, Liu et al. proposed the AFOPT algorithm. This algorithm aims at improving the FP-Growth performance from four perspectives:


 * Item Search Order: when the search space is divided, all items are sorted in some order. The number of conditional databases constructed can differ greatly depending on the item search order;
 * Conditional Database Representation: the traversal and construction cost of a conditional database heavily depends on its representation;
 * Conditional Database Construction Strategy: physically constructing every conditional database can be expensive, affecting the mining cost of each individual conditional database;
 * Tree Traversal Strategy: the traversal cost of a tree is minimal using top-down traversal strategy.

See the original paper for more details.

NONORDFP Algorithm
The Nonordfp algorithm was motivated by the running time and the memory required by the FP-Growth algorithm. The main theoretical difference is the core data structure (the FP-tree), which is more compact and does not need to be rebuilt for each conditional step. The authors introduced a compact, memory-efficient representation of an FP-tree using a Trie data structure, with a memory layout that allows faster traversal, faster allocation, and, optionally, projection. See the original paper for more details.

FP-Growth* Algorithm
This algorithm was proposed by Grahne et al., and is based on their conclusions about the usage of CPU time when computing frequent itemsets with FP-Growth. They observed that 80% of the CPU time was used for traversing FP-trees. Therefore, they used an array-based data structure combined with the FP-tree data structure to reduce the traversal time, and incorporated several optimization techniques. See the original paper for more details.

PPV, PrePost, and FIN Algorithm
These three algorithms were proposed by Deng et al., and are based on three novel data structures, called Node-list, N-list, and Nodeset respectively, for facilitating the mining of frequent itemsets. They are based on an FP-tree in which each node is encoded with its pre-order and post-order traversal numbers. Compared with Node-lists, N-lists and Nodesets are more efficient, which makes the efficiency of PrePost and FIN higher than that of PPV. See the original papers for more details.

Data Visualization in R
Normally, the data used to mine frequent itemsets are stored in text files. The first step to visualize the data is to load it into a data frame (an object that represents tabular data in R).

The function read.table could be used in the following way:
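For instance, assuming the transactions are stored in a text file named boolean.data (the file name is illustrative and matches the example discussed below), read.table could be used as follows:

```r
# Load a whitespace-separated text file into a data frame.
# header = FALSE assumes the file has no column names in its first line.
data <- read.table("boolean.data", header = FALSE)
data          # typing the variable name prints the whole data frame
summary(data) # prints a per-column summary of the data
```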

Another function in R to load data is called scan. See the R Data Import/Export Manual for details.

The visualization of the data can be done in two ways:


 * Using the variable name (var), to list the data in a tabular presentation.
 * Using summary(var), to list a summary of the data.

Example:

In the example above, the data in “boolean.data”, which contains a simple binary database, was loaded into the data-frame variable data. Typing the name of the variable on the command line prints its content, and typing the summary command prints the frequency of occurrence of each item. The summary function works differently depending on the type of data in the variable; see the R documentation for more details.

The functions presented previously can be useful, but for frequent itemset datasets there is a specific package, called arules, which is better suited to visualize the data.

Using arules, several functions are made available:


 * read.transactions: used to load the database file into a variable.
 * inspect: used to list the transactions.
 * length: returns the number of transactions.
 * image: plots an image with all transactions in a matrix format.
 * itemFrequencyPlot: calculates the frequency of each item and plots it in a bar graphic.

Example:
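A possible session using these functions is sketched below. The file name data.txt and the basket format are illustrative assumptions; adjust format and sep to match the actual file:

```r
library(arules)
# Read transactions from a text file; format = "basket" means each line
# lists the items of one transaction, separated by sep.
data <- read.transactions("data.txt", format = "basket", sep = " ")
data                                    # prints only the numbers of transactions and items
inspect(data)                           # lists every transaction
length(data)                            # number of transactions
image(data)                             # matrix plot of transactions x items
itemFrequencyPlot(data, support = 0.1)  # bar plot of item frequencies
```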

In this example we can see the difference in the usage of the variable name on the command line: for transactions, only the numbers of rows (transactions) and columns (items) are printed. The results of image(data) and itemFrequencyPlot(data, support = 0.1) are presented in Figures 4 and 5 below.

Figure 5: Result of the itemFrequencyPlot(data, support = 0.1) call.

Implementation in R
The R environment provides several facilities for data manipulation, calculation and graphical display that are very useful for data analysis and mining. It can be used both as a statistical library and as a programming language.

As a statistical library, it provides a set of functions to summarize data, matrix facilities, probability distributions, statistical models and graphical procedures.

As a programming language, it provides a set of functions, commands and methods to instantiate and manage values of different types of objects (including lists, vectors and matrices), to interact with the user (input and output from the console), to control execution (conditional and loop statements), to create functions, to call external resources and to create packages.

This chapter does not aim to present details about R resources; it focuses on the challenges of implementing an algorithm using R, or adapting one to be used in R. However, to give a better understanding of the power of R, some basic examples are presented in Appendix A.

To implement an algorithm using R, it would normally be necessary to create complex objects to represent the data structures to be processed, as well as complex functions to process these data structures. In the specific case of implementing the FP-Growth algorithm, it could be very hard to represent and process an FP-tree using only R resources. Moreover, for performance reasons, it could be interesting to implement the algorithm in another language and integrate it with R. Other reasons for using other languages are better memory management and the reuse of existing packages.

Two ways to integrate R with other languages are available and are briefly presented below: creating a package, and making an external call using interface functions. Next, the FP-Growth implementation used in this work is presented, along with the efforts to integrate it with R. Both approaches require installing Rtools.

Creating a Package
A package is a mechanism for loading optional code, possibly implemented in other languages, into R. The R distribution itself includes about 25 packages, and some extra packages used in this WikiBook can be listed:


 * arules
 * arulesNBMiner
 * arulesSequences
 * cluster

To create a package it's necessary to follow some specifications. The sources of an R package consist of a directory structure described below:


 * Root: the root directory, containing a DESCRIPTION file and some optional files (INDEX, NAMESPACE, configure, cleanup, LICENCE, COPYING and NEWS).
 * R: contains only R code files that can be executed by the R command source(filename) to create the R objects used by users. Alternatively, this directory can have a file sysdata.rda, which holds a saved image of R objects created in an R console session.
 * data: intended to hold data files, either to be made available via lazy-loading or for loading using the function data. These data files can be of three different types: plain R code (.r or .R), tables (.tab, .txt, or .csv) or saved data from the R console (.RData or .rda). Table files may additionally be compressed.
 * demo: contains scripts in plain R code (for running using the function demo) that demonstrate some of the functionality of the package.
 * exec: may contain additional executables the package needs, typically scripts for interpreters such as the shell, Perl, or Tcl.
 * inst: its content will be copied to the installation directory after the package is built, and its makefile can create files to be installed. May contain all information files intended to be viewed by end users.
 * man: should contain only documentation files for the objects in the package (using a specific R documentation format). An empty man directory causes an installation error.
 * po: used for files related to internationalization, in other words, to translate error and warning messages.
 * src: contains the sources, headers, makevars and makefiles. The supported languages are C, C++, FORTRAN 77, Fortran 9x, Objective C and Objective C++. It is not possible to mix all these languages in a single package, but mixing C and FORTRAN 77, or C and C++, is usually successful. However, there are ways to use code from other packages.
 * tests: used for additional package-specific test code.

Once a source package is created, it must be installed from the command line in the OS console:
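Assuming the package sources are in a directory named mypackage (the name is illustrative), the installation command would be:

```shell
R CMD INSTALL mypackage
```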

Alternatively, packages can be downloaded and installed from within R, using the command line in the R console:
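For example, to download and install the arules package from a CRAN mirror:

```r
install.packages("arules")
```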

See the Installation and Administration manual, for details.

After installed, the package needs to be loaded to be used, using the command line in the R console:
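For example, to load the arules package:

```r
library(arules)
```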

Making external call using interface functions
Making an external call using interface functions is a simple way to use an external implementation without complying with all the rules, described before, for creating an R package.

First, the code needs to include the R.h header file that comes with the R installation.

To compile the source code, the R compile command needs to be used at the OS command line:
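Assuming the C source is in a file named hello.c (the name is illustrative), the compile command would be:

```shell
R CMD SHLIB hello.c
```

This produces hello.so on Unix-like systems, or hello.dll on Windows.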

Compiled code to be used in R needs to be loaded as a shared object on Unix-like OSes, or as a DLL on Windows. It can be loaded or unloaded with the following commands in the R console:
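Continuing the illustrative hello.c example, the load and unload calls would be:

```r
dyn.load("hello.so")   # use "hello.dll" on Windows
dyn.unload("hello.so")
```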

After the load, the external code can be called using some of these functions:
 * .C
 * .Call
 * .Fortran
 * .External

Two simple examples are presented below, using .C function:

Example 1: Hello World
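A minimal sketch of such an example (the function and file names are illustrative). In real R extensions, Rprintf from R.h is preferred for console output; plain printf is used here only to keep the sketch self-contained:

```c
/* hello.c - compiled with "R CMD SHLIB hello.c" and loaded with dyn.load().
   Functions called via .C() must return void and take pointer arguments. */
#include <stdio.h>

void hello(void) {
    printf("Hello World from C!\n");
}
```

From the R console it would be called as: .C("hello")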

Example 2: Calling C with an integer vector 
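A possible sketch of the second example (names are illustrative). The .C() interface passes every argument as a pointer, so the vector length is passed as a one-element integer as well:

```c
/* double_vec.c - doubles each element of an integer vector in place.
   .C() copies the (possibly modified) arguments back into the result list,
   so the doubled vector is visible from R after the call. */
void double_vec(int *n, int *x) {
    for (int i = 0; i < *n; i++)
        x[i] = 2 * x[i];
}
```

From the R console it could be called as: .C("double_vec", n = as.integer(3), x = as.integer(c(1, 2, 3)))$x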

The FP-Growth Implementation
The FP-Growth implementation used in this work was written by Christian Borgelt, a principal researcher at the European Centre for Soft Computing. He also implemented the code used in the arules package for the Eclat and Apriori algorithms. The source code can be downloaded from his personal site.

As described by Borgelt, there are two implementation variants of the core operation of computing a projection of an FP-tree. In addition, projected FP-trees are optionally pruned by removing items that have become infrequent (using the FP-Bonsai approach).

The source code is divided into three main folders (packages):


 * fpgrowth: contains the main file that implements the algorithm and manages the FP-Tree;
 * tract: manages item sets, transactions and their reports;
 * util: facilities to be used in fpgrowth and tract.

The syntax to call this implementation, from the OS command line, is:
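The general shape of the call is sketched below; the exact option letters should be checked against Borgelt's documentation:

```shell
fpgrowth [options] infile [outfile]
```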

There are options to choose the limits of items per set, the minimum support, the evaluation measure, the input and output formats, and so on.

A simple call to FP-Growth, using the test1.tab example file (which comes with the source code) as the input file, test1.out as the output file, and a minimum support of 30%, could be made as follows:
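Assuming Borgelt's option syntax, where -s sets the minimum support as a percentage, such a call could look like:

```shell
fpgrowth -s30 test1.tab test1.out
```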

The presented result shows some copyright information and some execution data, such as the numbers of items and transactions and the number of frequent sets (21 in this example). The content of the input and output files is presented below.

The input file content:

The output file content:

Calling FP-Growth from R
As observed before, creating a package imposes several rules, requiring a standard directory structure and content, in order to make external source code available. The alternative presented before is to create a shared object, or a DLL, to be called using specific R functions (.C, .Call, and so on).

Adapting an existing code base into a package can be a hard and time-consuming job. An interesting approach is to iteratively create and adapt a shared object, or DLL, validating it with tests and improving the adaptation over some iterations; when a satisfactory result has been achieved, work on a package version can start.

The intended iterations to make the C implementation available in R are:


 * 1. Create a simple command-line call, without parameters, making only two changes in the original source (the fpgrowth.c file):
 * Rename the main function to FP-Growth, with the same signature;
 * Create a function to be called from R, building the parameters from a configuration file (containing only a string with the same syntax as the command line) and breaking it into an array to be used as the argument array of the FP-Growth function;
 * 2. Compile the code with the R compile command, including the R.h header file, and call it from R;
 * 3. Implement the input parameters from the R call, eliminating the usage of a configuration file, including the change from an input file name to data frames in R;
 * 4. Prepare the output as an R data frame to be returned to R;
 * 5. Create the R package.

The first iteration could be done easily, without any surprise.

Unfortunately, the second iteration, which also sounded easy, proved in practice to be very hard. The R compile command does not work with makefiles, so the original code could not be compiled with it. After some experiments, the strategy was changed: first build a library from the adapted code, without the function created to be called from R, and then create a new source file containing this function and making use of the compiled library. Next, calling the new code, compiled as a DLL, from R raised execution errors. After considerable time spent debugging, it was detected that some of the compile configurations used to create the library were wrong. To solve this problem, tests were made with an executable version run from the OS command line until all execution errors were fixed. However, once these errors were solved, another unexpected behavior was found: calling the version compiled with the R command from the R console raised an “incompatible cygwin version” error when loading the DLL. Several experiments, changing the compilation parameters, trying different versions of cygwin, and so on, were made without success (these tests were made only under Windows OS). So, having no success in the second iteration, the next steps were compromised.

The main expected challenge in the third and fourth iterations is interfacing the R data types and structures with their correspondents in the C language: the input dataset and other input parameters need to be converted and used internally, and the output dataset needs to be created and returned to R. An alternative is to adapt all the code to use the data structures as received; however, this sounds more complex.

The fifth iteration sounds like bureaucratic work: once the code has been entirely adapted and validated, creating the additional directories and required content should be an easy task.

Conclusion and Future Works
In this chapter an efficient and scalable algorithm to mine frequent patterns in databases was presented: FP-Growth. This algorithm uses a useful data structure, the FP-tree, to store information about frequent patterns, and an implementation of the algorithm was presented. Additionally, some features of the R language were described, along with the efforts to adapt the algorithm's source code to be used in R. We could observe that this adaptation is a hard job that cannot be done in a short time; unfortunately, there was not enough time to conclude it.

As future work, it would be interesting to better understand the implementation of external resources in R, to complete the job proposed in this work, and afterwards to compare the results with those of other algorithms for mining frequent itemsets available in R.

Appendix A: Examples of R statements
Some basic examples of R statements are presented below.

Getting help about functions
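A possible example:

```r
help(read.table)  # opens the documentation for read.table
?read.table       # shorthand for help(read.table)
```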

Creating an object
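A possible example (the value is illustrative):

```r
x <- 5  # assigns the value 5 to the object x
```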

Numeric expressions
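A possible example:

```r
(3 + 2^2) * 10 / 4  # evaluates to 17.5
```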

Printing an object value
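A possible example:

```r
x <- 5
x         # typing the name prints the value
print(x)  # explicit printing, needed inside functions and loops
```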

Creating a vector
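A possible example:

```r
v <- c(1, 2, 3)  # c() combines values into a vector
```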

Vector operations
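A possible example:

```r
v <- c(1, 2, 3)
v * 2   # element-wise multiplication: 2 4 6
sum(v)  # 6
```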

Categhoric data
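A possible example (the values are illustrative):

```r
answers <- factor(c("yes", "no", "yes"))  # a factor stores categorical data
levels(answers)                           # "no" "yes"
```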

Sequences
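A possible example:

```r
1:5                 # 1 2 3 4 5
seq(1, 10, by = 2)  # 1 3 5 7 9
```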

Matrices
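A possible example:

```r
m <- matrix(1:6, nrow = 2)  # filled column by column: 2 rows, 3 columns
t(m)                        # transpose of m
```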

Lists
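A possible example (the values are illustrative):

```r
l <- list(name = "Ana", scores = c(9, 10))  # a list can mix types
l$name                                      # access an element by name
```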

Data Frames (represents database tables)
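A possible example (the values are illustrative):

```r
df <- data.frame(id = 1:3, item = c("A", "B", "C"))
df$item  # access a column by name
```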

Conditional statement
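A possible example:

```r
x <- 5
if (x > 0) "positive" else "non-positive"
```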

Case statement
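A possible example:

```r
switch("b", a = 1, b = 2, c = 3)  # returns 2
```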

Loop statements
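A possible example:

```r
for (i in 1:3) print(i)            # for loop
i <- 1
while (i <= 3) {                   # while loop
  print(i)
  i <- i + 1
}
```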

Creating and calling functions
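A possible example:

```r
square <- function(x) x^2
square(4)  # 16
```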