Find the best way to use SSDs as a front-end tier by designing a solution that considers device-specific characteristics and uses a combination of caching and tiering techniques. There are quite a few unknowns in designing such a solution, which we are trying to explore.
We are meeting every Thursday at 3pm PST (6pm EST), via 888-426-6840 (Participant Code: 45536712).
In this set of experiments I look at the two claims:
Type | Model | Avail | Cost |
SSD | Intel x25 MLC | 2 | $430 |
SAS | ST3450857SS | 4 | $325 |
SATA | ST31000340NS | 4 | $170 |
*Cost per device, according to the FAST paper
Type | Model | Avail | Cost |
SSD | Intel 320 Series MLC | 2 | $220 |
SAS | ST3300657SS | 4 | $220 |
SATA | ST31000524NS | 4 | $110 |
*Cost per device, according to Google
For this experiment I use an SSD+SATA configuration.
For this experiment I use an SSD+SATA configuration.
For this experiment I use an SSD+SAS+SATA configuration.
All data and plots shown below assume 1 MB extents (cache lines).
Workload | Length (days) | Total I/Os | Active Extents | Total Extents | % Extents Accessed |
server | 7 | 219828231 | 522919 | 1690624 | 30.930 |
data | 7 | 125587968 | 2368184 | 3809280 | 62.168 |
srccntl | 7 | 88442081 | 311331 | 925696 | 33.632 |
fiu-nas | 12 | 1316154000 | 9849142 | 20507840 | 48.026 |
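The statistics in the table above can be reproduced from a block trace with a short script. A minimal sketch in Python, assuming a simple CSV trace format (the actual MSR/FIU trace formats differ, and the field layout here is an assumption):

```python
EXTENT_SIZE = 1 << 20  # 1 MB extents (cache lines), matching the experiments above

def extent_stats(trace_path, total_extents):
    """Compute total I/Os, active extents, and % of extents accessed for one workload.

    Assumes one I/O per line formatted as "timestamp,offset_bytes,length_bytes,op";
    the real MSR/FIU traces use their own formats and need their own parsers.
    """
    active = set()
    total_ios = 0
    with open(trace_path) as f:
        for line in f:
            _, offset, length, _ = line.rstrip().split(",")
            offset, length = int(offset), int(length)
            total_ios += 1
            first = offset // EXTENT_SIZE
            last = (offset + length - 1) // EXTENT_SIZE
            active.update(range(first, last + 1))  # mark every extent the I/O touches
    return total_ios, len(active), 100.0 * len(active) / total_extents
```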
Hit rates assuming the SSD cache is provisioned at 5% and 1% of the HDD space. For example, for the server workload (1,690,624 extents of 1 MB, roughly 1.6 TB), 5% provisioning corresponds to about 84,500 extents, or roughly 83 GB of SSD cache.
SERVER
DATA
SRCCNTL
FIU NAS
For all experiments, a couple of different configurations are tried out:
I/O and extent distribution over time (top plot is I/Os, bottom is extents)
EDT
MQ
The first column uses 16 KB cache lines and the second column uses 4 KB.
I/O and extent distribution over time (top plot is I/Os, bottom is extents)
EDT
MQ
The first column uses 16 KB cache lines and the second column uses 4 KB.
Sequentially read 15 GB from a 60 GB file, in a loop, for 1 hour.
I/O and extent distribution over time (top plot is I/Os, bottom is extents)
EDT
Sequentially write 15 GB to a 60 GB file, in a loop, for 1 hour.
I/O and extent distribution over time (top plot is I/Os, bottom is extents)
EDT
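A minimal sketch of how these two sequential microbenchmarks could be driven; the file path is a placeholder, and a tool such as fio or dd would work just as well. Note that the page cache has to be bypassed or dropped for the numbers to reflect the devices rather than memory:

```python
import os
import time

FILE_PATH = "/mnt/test/seq_60g.dat"   # placeholder: a pre-created 60 GB test file
CHUNK = 1 << 20                        # 1 MB per read/write call
BYTES_PER_PASS = 15 * (1 << 30)        # 15 GB per pass
DURATION_SECS = 3600                   # loop for 1 hour

def sequential_read():
    """Sequentially read the first 15 GB of the file, in a loop, for 1 hour."""
    deadline = time.time() + DURATION_SECS
    while time.time() < deadline:
        with open(FILE_PATH, "rb") as f:
            done = 0
            while done < BYTES_PER_PASS and time.time() < deadline:
                if not f.read(CHUNK):
                    break
                done += CHUNK

def sequential_write():
    """Sequentially overwrite the first 15 GB of the file, in a loop, for 1 hour."""
    buf = b"\0" * CHUNK
    deadline = time.time() + DURATION_SECS
    while time.time() < deadline:
        with open(FILE_PATH, "r+b") as f:
            done = 0
            while done < BYTES_PER_PASS and time.time() < deadline:
                f.write(buf)
                done += CHUNK
            f.flush()
            os.fsync(f.fileno())       # push dirty pages out so the device sees the writes
```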
In particular, the paper from Reddy's group. We should also look at Ismail Ari's work from HP Labs; IIRC, he brought up this issue of modeling the cost of writes explicitly many years ago.
Try standard algorithms such as LRU, ARC, and MQ, and measure how they perform. We should first evaluate these algorithms as-is. Then we can modify them so that not every block is written to the SSD on a miss, but only the blocks we think should go in. This will include modeling the cost of writes and not performing them for every miss.
Currently I plan to evaluate two caching solutions. First, LRU, which represents a basic solution in which every page accessed is always brought into the cache if not already present. Second, a more complex approach, MQ, which was designed for storage and takes both frequency and recency into account.
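A minimal sketch of the LRU half of that plan; MQ would replace the single recency-ordered dictionary with multiple frequency-ordered queues plus a ghost queue. The trace parser (extents_from_trace) and the capacity value are placeholders, and the commented miss path marks where the write-admission idea above would plug in:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal trace-driven model of the LRU baseline: every miss fills the SSD cache."""

    def __init__(self, capacity_extents):
        self.capacity = capacity_extents
        self.cache = OrderedDict()           # extent id -> None, kept in recency order
        self.hits = 0
        self.misses = 0
        self.ssd_writes = 0                  # tracked so a write-cost model can be layered on

    def access(self, extent):
        if extent in self.cache:
            self.hits += 1
            self.cache.move_to_end(extent)   # refresh recency
        else:
            self.misses += 1
            # A cost-aware admission filter (the idea above of not writing every
            # block to the SSD) would go here, before the fill.
            self.ssd_writes += 1
            self.cache[extent] = None
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used extent

def simulate(extent_stream, capacity_extents):
    cache = LRUCache(capacity_extents)
    for extent in extent_stream:
        cache.access(extent)
    return cache.hits / max(1, cache.hits + cache.misses)

# e.g. hit_rate = simulate(extents_from_trace("server.csv"), 84531)  # ~5% of the server extents
```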
Current experiments are done using a 1 MB extent; try different sizes and measure their impact on performance. Here we want to try a basic caching style with no prefetching and compare that with prefetching a larger block on a miss and writing it to the SSD. I agree with Jody that 1 MB may be overkill. We should also consider the SSD erase block size here.
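A small sketch of the only piece that changes for that sweep, namely how byte-range I/Os map onto extents of different sizes; the candidate sizes here are just examples:

```python
def extent_ids(offset_bytes, length_bytes, extent_size):
    """Map a byte-range I/O onto the extent ids it touches for a given extent size."""
    first = offset_bytes // extent_size
    last = (offset_bytes + length_bytes - 1) // extent_size
    return range(first, last + 1)

# Candidate extent (cache line) sizes to sweep; the SSD erase block size is another
# natural candidate once we confirm it for the devices above.
CANDIDATE_SIZES = (4 << 10, 16 << 10, 64 << 10, 256 << 10, 1 << 20)

# Example: a 12 KB read at offset 10 KB touches extents 2-5 with 4 KB extents
# but only extent 0 with 1 MB extents.
for size in CANDIDATE_SIZES:
    print(size, list(extent_ids(10 << 10, 12 << 10, size)))
```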
One possibility is to use a trace to provision, just like in the EDT work, but now use the requirements of the optimal placement as the result of the provisioning.
To my knowledge, including a few Google searches, there is not much work on how to provision cache systems. But the general feeling is that the bigger the cache, the more benefit one should get, at an increased cost.
Compute the working set size for some interval with a given “adequate” cache line size, then take the maximum working set size across time as the size of the cache (a minimal sketch of this computation follows below). The key questions here are:
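Setting those questions aside for the moment, here is a minimal sketch of the computation itself, assuming a (timestamp, offset, length) trace iterator and an hourly interval; the trace format, interval length, and cache line size are all placeholder choices:

```python
from collections import defaultdict

def max_working_set(trace, extent_size, interval_secs):
    """Size the cache as the largest per-interval working set seen in the trace.

    `trace` yields (timestamp_secs, offset_bytes, length_bytes) tuples; the trace
    format, the interval length, and the cache line size are placeholder choices.
    """
    per_interval = defaultdict(set)                  # interval index -> extents touched
    for ts, offset, length in trace:
        bucket = int(ts // interval_secs)
        first = offset // extent_size
        last = (offset + length - 1) // extent_size
        per_interval[bucket].update(range(first, last + 1))
    return max((len(s) for s in per_interval.values()), default=0)   # cache size, in extents

# e.g. cache_extents = max_working_set(parse_trace("server.csv"), 1 << 20, 3600)
```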
The work from Reddy's group describes a method to load-balance the I/O load in a two-tier system by continuously migrating data among devices in an effort to improve performance based on I/O parallelism. In particular, they migrate data in three scenarios:
This is very useful and needed. It would be good if we could do some of the above-mentioned analysis in a simulator using different workloads and then implement the near-final design in a real system. This is just my suggestion; feel free to go with the implementation route if possible.
Data plots from individual MSR volumes: msr-volumes.zip.
Here I think write-through is pretty much necessary for cases where the SSDs are local to the server and a host failure could lead to data loss. If the SSDs are on the array and/or mirrored in some way, we can use write-back. For the first cut, I think we can stick to write-through.
The current system is designed to be write-back. The write-through mechanism needs to be coded; it will require some work, probably a couple of days of coding.
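A minimal sketch of the difference the two policies make on the write path; the ssd/hdd objects and method names are placeholders rather than the prototype's actual interfaces:

```python
class CacheWritePath:
    """Sketch of write-through vs. write-back handling in the SSD cache.

    `ssd` and `hdd` are placeholder objects exposing write(extent, data); the
    prototype's actual interfaces will differ.
    """

    def __init__(self, ssd, hdd, write_back=False):
        self.ssd, self.hdd = ssd, hdd
        self.write_back = write_back
        self.dirty = set()                    # extents newer on the SSD than on the HDD

    def write(self, extent, data):
        self.ssd.write(extent, data)
        if self.write_back:
            self.dirty.add(extent)            # HDD copy is stale until eviction or flush
        else:
            self.hdd.write(extent, data)      # write-through: the HDD always has the data,
                                              # so losing a local SSD loses nothing

    def evict(self, extent, data):
        if self.write_back and extent in self.dirty:
            self.hdd.write(extent, data)      # flush dirty data before dropping the extent
            self.dirty.discard(extent)
```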
Are two tiers (SSD+SATA) enough, or do we need three tiers (SSD+SAS+SATA)? If three tiers are needed, how do we design a caching solution?