Significant improvements have been made in DB2 10 with regards to its compression capabilities. This allows you to reduce the storage cost of your data even further, by compressing data better and by compressing more types of data.
DB2's industry-leading row compression technology, which is available under the DB2 Storage Optimization feature, reaches higher compression ratios than ever. Storage space savings from compression typically translate into fewer physical I/O operations for reading the data in a compressed table, because the same number of rows is stored on fewer pages. Because compression allows more data rows to be packed into the same number of pages, buffer pool hit ratios increase. In many cases, the I/O savings and improved buffer pool utilization result in higher throughput and faster query execution times.
Starting in DB2 10, there are two types of row compression. Classic row compression refers to the compression technology that has been used for user table data since DB2 Version 9.1, and for XML and temporary data since DB2 Version 9.7. Adaptive row compression is a new compression mode, introduced in DB2 10, that you can apply to user table data. It is superior to classic row compression in that it generally achieves better compression and requires less database maintenance to keep the compression ratio near an optimal level.
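As a sketch of the DDL involved, the compression mode is selected with the COMPRESS clause of CREATE TABLE; the table names and columns below are hypothetical, chosen only to contrast the two modes:

```sql
-- Classic row compression (table-level dictionary only):
CREATE TABLE sales_classic (
  id     INTEGER NOT NULL,
  region VARCHAR(20),
  amount DECIMAL(10,2)
) COMPRESS YES STATIC;

-- Adaptive row compression (table-level plus page-level dictionaries):
CREATE TABLE sales_adaptive (
  id     INTEGER NOT NULL,
  region VARCHAR(20),
  amount DECIMAL(10,2)
) COMPRESS YES ADAPTIVE;
```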
Classic row compression uses a dictionary-based compression algorithm. There is one compression dictionary for each table object. The dictionary contains a mapping of patterns that occur frequently in rows throughout the whole table. This dictionary is referred to as the table-level compression dictionary.
Adaptive compression builds on classic row compression, and table-level compression dictionaries are still used. The table-level dictionary is complemented by page-level compression dictionaries, which contain entries for frequently occurring patterns within a single page. The table-level dictionary helps eliminate repeating patterns in a global scope, while page-level dictionaries take care of locally repeating patterns. Page-level dictionaries are maintained automatically. When a page becomes filled with data, the database manager builds a page-level compression dictionary for the data in that page. Over time, the database manager automatically determines when to rebuild the dictionary for pages where data patterns have changed significantly.
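Because page-level dictionaries are created and maintained automatically, enabling adaptive compression on an existing table is a one-time change. A sketch, assuming a hypothetical existing SALES table:

```sql
-- Switch an existing table to adaptive row compression:
ALTER TABLE sales COMPRESS YES ADAPTIVE;

-- Rows inserted or updated from now on are compressed adaptively.
-- A classic (offline) reorganization compresses the existing rows
-- and rebuilds the table-level dictionary as well:
REORG TABLE sales RESETDICTIONARY;
```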
The result of using adaptive compression is not only higher overall compression savings; the compression algorithm also adapts to changing data patterns, which keeps compression ratios high over time. This helps you reduce maintenance costs on your large data warehouses. In many cases, the compression ratio remains nearly optimal over time, thus reducing the cost that is associated with monitoring the compression ratios of tables and performing maintenance (classic, offline table reorganization) to improve storage utilization.
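One way to keep an eye on how well tables remain compressed is the PCTPAGESSAVED column of the SYSCAT.TABLES catalog view, which reflects the page savings recorded by the most recent RUNSTATS. A sketch, assuming a hypothetical APP schema:

```sql
-- Approximate storage savings per table, as recorded by RUNSTATS:
SELECT TABNAME, PCTPAGESSAVED
FROM   SYSCAT.TABLES
WHERE  TABSCHEMA = 'APP'
ORDER  BY PCTPAGESSAVED DESC;
```

With adaptive compression, the expectation is that these figures stay close to their initial values without scheduled reorganizations.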
In DB2 10, you can also compress your log archives. This allows you to reduce the storage cost associated with your inactive, administrative data, even if you archive this data to a location where the storage infrastructure does not provide native support for compression. When log files are moved from the active log path to the archive, the database manager compresses them. Upon retrieval from the archive, DB2 automatically decompresses compressed log files. This mechanism complements the backup compression that has been available in DB2 for a number of releases.
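Log archive compression is controlled per archive location through the LOGARCHCOMPR1 and LOGARCHCOMPR2 database configuration parameters. A sketch from the DB2 command line, where the database name and archive path are hypothetical:

```sql
-- Archive logs to disk and compress them as they are archived:
UPDATE DB CFG FOR mydb USING LOGARCHMETH1 DISK:/db2/archive/ LOGARCHCOMPR1 ON
```

Retrieval for rollforward or log replay needs no extra configuration; the database manager decompresses archived logs transparently.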