As described in How to beat the bookie: Value Betting, I want to use Value Betting to beat the bookie. To identify value, I have to be able to calculate the probability of a specific sports event (e.g. a home win for Team A) as accurately as possible. Therefore, I want to develop, test, simulate and run different predictive models. As a DWH architect I know that a good system architecture supports such a development process a lot. That's why I formed the concept of the TripleA DWH – the Advanced Agile Analytical Data Warehouse.
Data Vault meets Exasol
The whole concept is a best-of-breed approach: Data Vault 2.0 is used as the base architecture and modeling method and is combined with the analytical database Exasol. For the development and processing of the predictive models I use the statistical programming language R. This combination offers some benefits, which I have summed up with the 3 A's – Advanced, Agile, Analytical.
Following the base Data Vault 2.0 architecture, my architecture consists of 4 layers:
- Stage Layer
- Raw Data Layer
- Analytical Layer
- Information Layer
The Stage Layer is the entry point of your data. The data of each data source you want to use is first stored at this layer, so that the data is available inside the database and can be pushed from one layer to the next.
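A minimal sketch of what this can look like in Exasol, assuming a source like football-data.co.uk (the table name, the shortened column list and the file name are my examples): the raw CSV is loaded 1:1 into a stage table, without any integration logic.

```sql
-- Stage table: plain 1:1 copy of the source file, no integration logic yet
CREATE TABLE stage.football_data (
    div        VARCHAR(10),
    match_date VARCHAR(10),   -- kept as text at this layer, typed later
    home_team  VARCHAR(100),
    away_team  VARCHAR(100),
    fthg       DECIMAL(3),    -- full time home goals
    ftag       DECIMAL(3),    -- full time away goals
    b365h      DECIMAL(6,2),  -- Bet365 home win odds
    b365d      DECIMAL(6,2),
    b365a      DECIMAL(6,2)
);

-- Exasol bulk load of a local CSV file (column list shortened for the sketch)
IMPORT INTO stage.football_data
FROM LOCAL CSV FILE 'E0.csv'
COLUMN SEPARATOR = ','
SKIP = 1;
```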
At the second layer, the “Raw Data Layer”, the data is integrated and persisted in a Data Vault model. The data types of the different data sources and some other basic rules are checked here. Moving the data from layer 1 to layer 2 is implemented with ELT processes.
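Such an ELT step is just an INSERT...SELECT inside the database. As a sketch (the hub table and its columns are my examples, shortened to home teams only; a real load would insert the away teams as well):

```sql
-- ELT from Stage to Raw Data Layer: only new business keys enter the hub
INSERT INTO raw.hub_team (team_hk, team_name, load_date, record_source)
SELECT DISTINCT
    HASH_MD5(UPPER(TRIM(s.home_team))),   -- DV 2.0 hash key from the business key
    s.home_team,
    CURRENT_TIMESTAMP,
    'football-data.co.uk'
FROM stage.football_data s
WHERE NOT EXISTS (
    SELECT 1 FROM raw.hub_team h
    WHERE h.team_name = s.home_team
);
```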
At layer 3, the “Analytical Layer”, all the statistical magic happens. There you can calculate new performance figures and execute predictive models, both of which extend your existing raw data model with new context.
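For example (table and view names are assumptions carried over from the sketches above), a simple performance figure such as a team's running average of home goals can be computed with plain SQL and stored as additional context:

```sql
-- Computed satellite: a performance figure derived from the raw data.
-- Note: the running average here includes the current match; a real
-- prediction feature would exclude it to avoid leaking the result.
INSERT INTO analytics.sat_team_form (team_hk, match_date, avg_home_goals, load_date)
SELECT
    HASH_MD5(UPPER(TRIM(home_team))),
    match_date,
    AVG(fthg) OVER (PARTITION BY home_team ORDER BY match_date),
    CURRENT_TIMESTAMP
FROM raw.v_matches;   -- v_matches: an assumed view that flattens the Data Vault model
```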
Based on all the information created at the “Analytical Layer”, you can create all the reports you need in the “Information Layer”. These reports can be predefined queries, which list all possible bets you should play, as well as an adjusted data model to analyse your data with a data visualization product like Tableau.
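A value-bet report can then be a single query over predicted probabilities and bookie odds (the two tables are assumptions for this sketch): a bet has value if probability × odds > 1.

```sql
-- Information Layer: list all bets with positive expected value
SELECT m.match_date,
       m.home_team,
       m.away_team,
       p.prob_home_win,
       m.b365h                        AS bookie_odds,
       p.prob_home_win * m.b365h      AS expected_value
FROM info.upcoming_matches m
JOIN info.predictions p ON p.match_hk = m.match_hk
WHERE p.prob_home_win * m.b365h > 1   -- the value betting condition
ORDER BY expected_value DESC;
```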
At layers 2 and 3 the data modeling methods of Data Vault 2.0 are used to model the data. Data Vault is a relatively new data modeling technique, which is used more and more for Data Warehouses. It offers the potential for truly agile development inside an analytical system.
Hint: I will describe data modeling with Data Vault in more detail in another post, where I will write about how I implemented http://www.football-data.co.uk as the first data source of my analytical system.
In comparison to other data modeling methods (e.g. 3NF), Data Vault provides some clear advantages.
Pattern-based loading:
A Data Vault model consists of only 3 different types of tables: Hubs (objects), Links (relations) and Satellites (context). Every table of a given type has the same structure and can therefore be loaded in the same way. That's why building the ELT processes, which load the source data into the Raw Data Layer, is very fast and easy.
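A minimal sketch of the three table types (the column names follow common DV 2.0 conventions, not a fixed standard):

```sql
-- Hub: the business object, identified by its business key
CREATE TABLE raw.hub_team (
    team_hk       CHAR(32) PRIMARY KEY,  -- hash key
    team_name     VARCHAR(100),          -- business key
    load_date     TIMESTAMP,
    record_source VARCHAR(50)
);

-- Link: a relation between hubs, e.g. a match between two teams
CREATE TABLE raw.link_match (
    match_hk      CHAR(32) PRIMARY KEY,
    home_team_hk  CHAR(32),
    away_team_hk  CHAR(32),
    load_date     TIMESTAMP,
    record_source VARCHAR(50)
);

-- Satellite: descriptive context attached to a hub or link
CREATE TABLE raw.sat_match_result (
    match_hk      CHAR(32),
    load_date     TIMESTAMP,
    fthg          DECIMAL(3),
    ftag          DECIMAL(3),
    record_source VARCHAR(50),
    PRIMARY KEY (match_hk, load_date)
);
```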
Zero impact approach:
Data Vault follows a strict zero impact approach. That means that existing data structures and processes do not need to be changed when new data is added. This is extremely beneficial when testing different performance figures at the Analytical Layer. Because of the separation of objects (Hubs), relations (Links) and context (Satellites), you can easily extend your data model with new context / new performance figures without having to change anything in your existing data model.
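To illustrate (the Elo rating is just a hypothetical new figure): a new performance figure means a new satellite next to the existing tables, and nothing that already exists is altered.

```sql
-- New performance figure = new satellite; existing hubs, links,
-- satellites and their loading processes stay untouched
CREATE TABLE analytics.sat_team_elo (
    team_hk    CHAR(32),
    load_date  TIMESTAMP,
    elo_rating DECIMAL(7,2),
    PRIMARY KEY (team_hk, load_date)
);
```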
Of course, you want to test a predictive model before you add it to your standard data processing. The R SDK of Exasol offers me the possibility to write data directly to the database, so I can add the result of a prediction as new context to my existing Data Vault model. After that, I am able to use simple SQL statements to simulate my prediction against the historic odds of a bookie and determine whether a model gains profit or not. A successful model can then be added to the standard automated loading process.
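Such a simulation can be a single SQL statement. A sketch (sat_prediction is assumed to hold the model output written via the R SDK, sat_match_odds the historic odds): flat one-unit stakes on every home win the model flags as value, settled against the historic results.

```sql
-- Backtest: total profit of the model on historic matches, 1 unit per bet
SELECT COUNT(*)                                        AS bets_placed,
       SUM(CASE WHEN r.fthg > r.ftag THEN o.b365h - 1  -- won: odds minus stake
                ELSE -1                                -- lost: stake gone
           END)                                        AS total_profit
FROM analytics.sat_prediction p
JOIN raw.sat_match_result r ON r.match_hk = p.match_hk
JOIN raw.sat_match_odds   o ON o.match_hk = p.match_hk
WHERE p.prob_home_win * o.b365h > 1;   -- only bets the model calls value
```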
The main reason why I decided to use R as the programming language is very simple: the Exasol database offers the possibility to execute R code inside the database.
You can write so-called User Defined Functions (UDFs) in R in Exasol and then use these functions inside SQL statements. For advanced models like random forests, regressions and neural networks, the trained model can be stored in the file system of the database (EXABuckets) and loaded during SQL execution.
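A sketch of how such a UDF can look (the script name, feature columns, bucket path and model file are my assumptions; the model is e.g. a logistic regression trained offline and saved as an .rds file into an EXABucket):

```sql
CREATE OR REPLACE R SCALAR SCRIPT analytics.predict_home_win(
    avg_home_goals DOUBLE, elo_diff DOUBLE)
RETURNS DOUBLE AS
# Loaded once per UDF instance from the EXABucket file system
model <- readRDS("/buckets/bfsdefault/default/home_win_model.rds")
run <- function(ctx) {
    features <- data.frame(avg_home_goals = ctx$avg_home_goals,
                           elo_diff       = ctx$elo_diff)
    as.numeric(predict(model, newdata = features, type = "response"))
}
/

-- ...and then used like any other function inside a SQL statement
SELECT match_hk,
       analytics.predict_home_win(avg_home_goals, elo_diff) AS prob_home_win
FROM analytics.model_features;
```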
That’s just great!
Data Vault 2.0 provides one big innovation in comparison to Data Vault 1.0: hash keys are now used as the primary keys for the different tables. These hash keys are calculated from the business key of your object (e.g. the team name). This is a big advantage compared to the usage of sequences. You can now spread your data model over different technologies and, for example, combine a relational DBMS with a Hadoop cluster. So you can combine structured and unstructured data.
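In Exasol the hash key is just a function over the business key (the TRIM/UPPER normalization is a convention you fix once), so, unlike a sequence, the same key can be computed independently on any platform that offers MD5:

```sql
-- Same business key => same hash key, on every system, without lookups
SELECT HASH_MD5(UPPER(TRIM('Team A')))        AS team_hk,
       HASH_MD5(UPPER(TRIM('Team A')) || '|' ||
                UPPER(TRIM('Team B')))        AS match_hk;  -- composite key for a link
```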
I currently do not know whether I will need this. I am sure I will mostly work with structured data. But it is definitely an advantage of this architecture.
One use case could also be to use a Hadoop cluster as the staging layer. Then you don't need to think too much about the structure of your source data and can just pump it into your storage. After that, you can use the schema-on-read functionality of Hive to build the interface to your Data Vault model in the Raw Data Layer.
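A sketch of what that interface could look like in HiveQL (the HDFS path and file layout are assumptions): the files land in HDFS as-is, and an external table projects a schema onto them at read time.

```sql
-- Hive external table: schema-on-read over raw CSV files in HDFS
CREATE EXTERNAL TABLE stage_football_data (
    div        STRING,
    match_date STRING,
    home_team  STRING,
    away_team  STRING,
    fthg       INT,
    ftag       INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/stage/football_data/';
```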
This architecture is a great base to develop a predictive system for sports betting. In further posts I will explain how you can develop predictive models and how they are implemented within this architecture.
If you have further questions, feel free to leave a comment or contact me @Mo_Nbg.