Brian Frank Wed 20 Jul 2011
One of the pivotal features of SkySpark is our time series database, sometimes called a process historian or simply a historian. Many people are curious about what a time series database is and why we developed our own database technology. In this article I'm going to dive into some of the key design patterns used by the SkySpark historian and why it's so darn fast.
What is a Time Series Database
Anything that stores data keyed by timestamps could be called a time series database. In SkySpark we are specifically talking about sensor data, which is typically timestamped samples of an analog or digital sensor, setpoint, or command. For example, if we have a sensor sampling a zone temperature every 15min, then the data stream would look something like this (values are illustrative):
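2011-07-20T09:00  72.1°F
2011-07-20T09:15  72.4°F
2011-07-20T09:30  72.8°F
2011-07-20T09:45  73.0°F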
Characteristics of a Time Series Database
There are some characteristics of time series sensor data which are important to observe:
high resolution data can yield an immense volume of timestamp/value pairs. If we store minutely data we have 525,600 samples per year. 100,000 points with two years of minutely data is 105 billion samples! Although that isn't actually an extremely large system, a traditional database with 100 billion rows would be considered a really big database. So efficient storage of and access to these data volumes is a top priority
time series data is virtually always "append only". Once data is stored we rarely change it, but we are always adding newly acquired timestamp samples
time series data is always accessed by a time range: we want to query data by year, month, week, day, date range, etc
time series data queries must always be sorted: we almost always want to analyze or view data in temporal order
we often wish to work with "rolled up" aggregations of the data; for example, if working with years' worth of 15min energy data we might only care about total daily or monthly energy usage
if working with data which spans timezones, then proper handling of timezones is critical. Most analysis of time series data is done in the context of local time, for example comparing energy/equipment use against occupancy times (see the sketch after this list for both ideas in miniature)
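To make the rollup and timezone points concrete, here is a minimal sketch in plain Python (not SkySpark code; the interval, values, and timezone are made up for illustration) that rolls 15-minute kWh samples up to daily totals using the point's local timezone, so day boundaries fall at local midnight:

from collections import defaultdict
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

local = ZoneInfo("America/New_York")

# Two days of illustrative 15-minute kWh samples, timestamps stored in UTC.
# 2011-07-19 04:00 UTC is local midnight in New York (EDT, UTC-4).
start = datetime(2011, 7, 19, 4, 0, tzinfo=timezone.utc)
samples = [(start + timedelta(minutes=15 * i), 1.2) for i in range(96 * 2)]

# Bucket by *local* date so a "day" runs midnight-to-midnight local time,
# not UTC -- this is why timezone handling matters for rollups.
daily = defaultdict(float)
for ts, kwh in samples:
    daily[ts.astimezone(local).date()] += kwh

for day, total in sorted(daily.items()):
    print(day, round(total, 1))  # 2011-07-19 115.2, 2011-07-20 115.2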
Problems with Relational Databases
A common technique is to store time series data in an off-the-shelf RDBMS like MySQL, MS-SQL, etc. But the problem with these general purpose databases is that they make trade-offs which are poorly suited to the characteristics described above. They are designed for random access updates rather than a stream of append-only data. And without an explicit index on the tables, an RDBMS makes no guarantees about indexing a time range or reading the data in sorted order.
Consider what happens under the covers when you query a relational database for a single week of data in a table which stores several years of data. Without an index the entire data set has to be read off disk, the predicate tested against each timestamp to see if it matches the filter, and then the data has to be sorted in memory.
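As a rough sketch (a toy in-memory stand-in, not any particular database's internals), the unindexed query degenerates to something like:

# What an unindexed week query amounts to: scan every row,
# test the predicate, then sort the survivors in memory.
def query_week(rows, week_start, week_end):
    matches = [(ts, val) for ts, val in rows           # full table scan
               if week_start <= ts < week_end]         # predicate per row
    matches.sort(key=lambda r: r[0])                   # in-memory sort
    return matches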
If you set up indexing on your tables, then the size of your data on disk explodes! Most relational databases use a general purpose b-tree structure to index a table. If you have a table with billions of rows, the overhead of maintaining the index quickly adds up. For this type of data the index can exceed the disk size of the data itself.
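Some back-of-the-envelope arithmetic shows why (the per-entry sizes here are assumptions for illustration, not measurements of any specific database):

# Assumed sizes: ~12 B per raw sample vs a b-tree entry of an
# 8 B timestamp key + 8 B row pointer, ignoring page overhead.
rows = 100_000 * 2 * 525_600            # 100k points, two years of minutely data
data_bytes  = rows * 12
index_bytes = rows * (8 + 8)
print(data_bytes  / 2**40)              # ~1.1 TiB of raw samples
print(index_bytes / 2**40)              # ~1.5 TiB of index -- bigger than the data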
The other major problem with an RDBMS is that it is designed to query the data in one process, then move all the data over a network connection to another process for computation. If you want to compute monthly roll-ups for years' worth of data, that can be a huge amount of data to pass over the network before computation can even begin.
So let's look at the techniques SkySpark uses to optimize this problem:
Storage
In SkySpark we store data to disk so that it is always indexed by timestamp and always sorted by timestamp. The way we store data means we can index it by timestamp with virtually no additional indexing overhead. Furthermore, because samples hit disk in temporal order, we can apply a number of compression techniques. A typical RDBMS without any indexing is probably going to require at least 12 bytes to store a timestamp and a 32-bit floating point value. If you add overhead for indexing then this size might double or triple. In SkySpark this same data is stored with an average of 12 bits (1.5 bytes). We've seen customers switch from a relational database to SkySpark and achieve an order of magnitude savings in disk space. Efficient storage of each timestamp/value sample really matters when we are talking about billions of samples.
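The post doesn't describe the on-disk format, but this kind of savings is achievable with standard tricks for sorted streams. Here is a minimal sketch of one such trick, delta encoding of regularly spaced timestamps (illustrative only; this is not SkySpark's actual format):

import struct

def encode(samples, interval_secs=900):
    """Delta-encode sorted (unix_ts, float) samples: a timestamp that
    lands exactly one interval after its predecessor costs a single
    flag byte instead of a full 8-byte timestamp."""
    out = bytearray()
    prev = None
    for ts, val in samples:
        if prev is not None and ts - prev == interval_secs:
            out += b"\x01"                          # flag: expected delta
        else:
            out += b"\x00" + struct.pack("<q", ts)  # flag + full timestamp
        out += struct.pack("<f", val)               # 32-bit float value
        prev = ts
    return bytes(out)

# Regular 15-minute data costs 5 bytes/sample here vs 12 uncompressed;
# real formats go much further with value deltas and bit packing.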
Read Performance
When you query a specific time range in SkySpark we know a) where to begin reading from disk and b) that we are reading sorted data straight from disk. And because the data is compressed we are often reading fewer bytes from disk. Since disk access is orders of magnitude slower than RAM access, the number of blocks read from disk before you can start crunching your data is the #1 factor in database performance. In SkySpark we don't spend any time reading index information from disk and we don't have to perform any sorting in memory.
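Conceptually (a sketch of the idea, not the actual engine), a range query can binary-search to the first relevant block and then stream forward, with no index pages to read and no sort afterwards:

import bisect

def read_range(blocks, start, end):
    """blocks: (first_ts, samples) chunks sorted by time, as stored on disk.
    Seek to the first block that could contain `start`, then stream."""
    first_keys = [b[0] for b in blocks]
    i = max(bisect.bisect_right(first_keys, start) - 1, 0)
    for _, samples in blocks[i:]:
        for ts, val in samples:
            if ts >= end:
                return
            if ts >= start:
                yield ts, val        # already in sorted order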
So how fast can we read data? Even on my run-of-the-mill machine and disk drive I consistently benchmark queries at 800,000 samples/sec. Crunching a year's worth of 15-minute data takes only 30ms!
Write Performance
SkySpark takes advantage of the fact that most writes to a time series database are appends at the end of the time stream. We constantly collect new sensor data and might be writing new data for 100,000 points every minute. SkySpark optimizes for this case and avoids the costly journaling, log files, and b-tree updates required by a general purpose RDBMS.
However, since the data must be stored sorted on disk, there can be a performance hit when writing data whose timestamps interleave with data already on disk (sometimes you will see a warning when this happens).
Under the covers we use Fantom's actor concurrency model to effectively utilize all the CPU cores to pre-sort, coalesce, and sanitize the data. The overall result is that SkySpark's historian can typically write data to disk far faster than a general purpose database.
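The post doesn't detail the actor pipeline, but the shape of the idea is standard: buffer incoming samples per point, pre-sort the batch in memory, then flush it as one sequential append. A rough sketch, with Python threads standing in for Fantom actors:

import threading

class PointWriter:
    """Per-point write buffer: samples arrive in whatever order the
    collectors deliver them; flush() sorts the batch and appends it
    to the point's file sequentially -- no b-tree to update."""
    def __init__(self, path):
        self.path = path
        self.pending = []
        self.lock = threading.Lock()

    def write(self, ts, val):
        with self.lock:
            self.pending.append((ts, val))

    def flush(self):
        with self.lock:
            batch, self.pending = self.pending, []
        batch.sort()                        # pre-sort before it hits disk
        with open(self.path, "a") as f:     # one sequential append
            for ts, val in batch:
                f.write(f"{ts},{val}\n")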
So how fast can we write? On my run-of-the-mill machine, benchmarking yields write rates of 250,000 samples/sec. If working with 15min data, that equates to writing seven years of data every second.
Colocation of Computation
The architecture of SkySpark is designed around the idea that the database engine and computation engine must be colocated in the same OS process. The key observation is that often we wish to crunch large volumes of time series data to compute small answers. Examples: find the total energy consumption for each month, find the day where we had our peak demand in 2010, find periods of time where a sensor exceeded a given limit.
In a relational database, SQL is used to query the data. But SQL is not a general purpose language for analyzing and transforming data. So one must use SQL to query the raw data, move it over the network to another process, and then crunch it in a programming environment like Java, C#, etc. The problem with this approach is that you might have to pull millions of rows over the network just to compute a single number. To work around the inevitable performance problems, developers try to pre-compute "rollup" versions of the data. But this is a stop-gap that doesn't work for ad hoc analytics, because there are an infinite number of ways we might want to crunch the raw data.
In SkySpark our database and the Axon computation engine are bundled together in the same server. We've designed Axon to be an expressive query language, but also a full-fledged general purpose programming language. Let's consider a simple example where we want to query the monthly energy consumption for the entire year of 2010 using raw 15-minute data:
readAll(kwh).hisRead(2010).hisRollup(sum, 1mo)
This expression is evaluated by Axon as follows: read all the records with the kwh tag, pipe that to the hisRead function to read all the raw 15-minute data for the entire year of 2010, and then pipe that to the hisRollup function to compute the monthly sum of the raw data. The entire calculation is executed as the data is read off disk. There is no buffering of raw data in memory, nor are we required to pass any data over a network connection. Pipelining the computation as we read off disk is a key enabler for our high performance analytics of time series data.
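The pipelining idea itself can be sketched with ordinary Python generators (illustrative only, not Axon's implementation): the rollup consumes samples one at a time as they stream off disk, holding just one running bucket total in memory:

def his_rollup(samples, bucket_of):
    """samples: a sorted (ts, val) stream. Yields (bucket, total) as each
    bucket completes; only one running total is ever held in memory."""
    cur, total = None, 0.0
    for ts, val in samples:
        b = bucket_of(ts)
        if b != cur:
            if cur is not None:
                yield cur, total
            cur, total = b, 0.0
        total += val
    if cur is not None:
        yield cur, total

# Monthly kWh sums over a year of 15-minute data:
#   for month, kwh in his_rollup(stream, lambda ts: (ts.year, ts.month)):
#       print(month, kwh)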
Conclusion
So hopefully that helps explain what a time series database is and why it is a critical foundation for the analytics of sensor data. But the historian is just one sub-system of the SkySpark platform. It is seamlessly integrated with the Folio tag database, the Axon scripting language, the Fresco user experience, and our ever growing library of extensions. Each of those is an exciting technology in its own right, and hopefully we'll cover them in future blog posts.