==+== FAST '09 Paper Review Form
==-== Set the paper number and fill out lettered sections A through G.
==-== DO NOT CHANGE LINES THAT START WITH "==+=="!
==+== RAJU MUST REPLACE THIS LINE BEFORE UPLOADING
==+== Begin Review
==+== Paper #000000
==-== Replace '000000' with the actual paper number.
==+== Review Readiness
==-== Enter “Ready” here if the review is ready for others to see:
Ready
==+== A. Overall merit
==-== Enter a number from 1 to 5.
==-== Choices: 1. Reject
==-== 2. Weak reject
==-== 3. Weak accept
==-== 4. Accept
==-== 5. Strong accept
5
==+== B. Novelty
==-== Enter a number from 1 to 5.
==-== Choices: 1. Published before
==-== 2. Done before (not necessarily published)
==-== 3. Incremental improvement
==-== 4. New contribution
==-== 5. Surprisingly new contribution
5
==+== C. Longevity
==-== How important will this work be over time?
==-== Enter a number from 1 to 5.
==-== Choices: 1. Not important now or later
==-== 2. Low importance
==-== 3. Average importance
==-== 4. Important
==-== 5. Exciting
5
==+== D. Reviewer expertise
==-== Enter a number from 1 to 4.
==-== Choices: 1. No familiarity
==-== 2. Some familiarity
==-== 3. Knowledgeable
==-== 4. Expert
4
==+== E. Paper summary
The authors propose a technique that redirects the majority of user I/O requests away from a reconstructing RAID set to a surrogate RAID set during RAID reconstruction. Reconstruction times improve because reconstruction I/Os and user I/Os are directed to different RAID sets, which eliminates bandwidth and disk-arm contention between them.
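As I understand it, the core mechanism can be sketched roughly as follows (the class and field names here are my own illustration, not the authors'; a real controller would of course operate on disks, not in-memory maps):

```python
# Rough sketch of the redirection idea as I read it (illustrative
# naming, not the paper's): during reconstruction, user I/O is served
# by a surrogate RAID set so that the degraded set sees mostly
# reconstruction I/O.

class RedirectingController:
    def __init__(self):
        self.reconstructing = False
        self.redirection_table = {}   # block -> surrogate location
        self.degraded = {}            # stand-in for the reconstructing RAID set
        self.surrogate = {}           # stand-in for the surrogate RAID set

    def write(self, block, data):
        if self.reconstructing:
            # Redirect user writes to the surrogate set and record
            # the mapping so later reads find the data there.
            self.surrogate[block] = data
            self.redirection_table[block] = block
        else:
            self.degraded[block] = data

    def read(self, block):
        # Reads of redirected blocks are served from the surrogate,
        # keeping user I/O off the reconstructing disks.
        if block in self.redirection_table:
            return self.surrogate[block]
        return self.degraded.get(block)
```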
==+== F. Comments for author
* This paper presents a surprisingly new idea for a problem that is relatively well-explored. Validation of the idea is detailed and clearly demonstrates the substantial benefits of this fairly straightforward approach. Nicely done!
* Some comments that the authors can use for improvement:
- The architecture depicts what would typically represent a high-end storage box with multiple RAIDs managed internally by a single storage controller. Given the storage research community's appreciation of the cost-effectiveness of commodity storage solutions, it would be worthwhile to discuss how your proposal could be extended to address such deployments as well. A surrogate RAID could be shared by several commodity RAID-based SAN devices, and an operating-system block-redirection mechanism to this surrogate (iSCSI) store on failures would, in my opinion, still work.
- The writeback of data from the surrogate RAID system seems buggy as described: you delete the entry from the redirection table and then perform the write to the recovered RAID. If the write fails, or a read arrives in between, reads can return stale or corrupt data. The write should complete before the redirection entry is removed.
- Since the same surrogate RAID can be used by different degraded RAID sets, what additional information is stored by the surrogate space manager in such a case?
- How is consistency maintained when the log in D_table is deleted and the system crashes before the copy to the recovered RAID is made?
- Section 3.4: the RAID-1 space overhead should be greater than that of RAID-5, not the other way around.
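To make the writeback ordering concern concrete, here is a small sketch of the fix I have in mind (illustrative names, assuming a dict-backed redirection table, not the paper's actual code): the block is copied back to the recovered RAID first, and the redirection entry is dropped only after that write succeeds.

```python
# Sketch of the suggested write-back ordering (illustrative naming,
# not from the paper): persist the block on the recovered RAID *first*,
# and only then stop redirecting reads to the surrogate.

def write_back(block, redirection_table, surrogate, recovered_raid):
    data = surrogate[block]
    recovered_raid[block] = data      # 1. copy back to the recovered RAID
    # (A real controller would wait for the write to be acknowledged
    # and verify success before proceeding.)
    del redirection_table[block]      # 2. only now stop redirecting reads
    del surrogate[block]              # 3. reclaim the surrogate space
```

With this ordering, a crash or write failure between steps leaves the redirection entry intact, so reads keep hitting the valid surrogate copy instead of possibly stale data on the recovered RAID.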
* Typos, grammar, etc.:
- Space overhead "high" and "low" appear to be incorrectly switched in Table 1?
- In Table 4, in the average user response time during reconstruction, what do the columns "Normal" and "Degraded" mean? Are they misnomers?
- Section 3.3: needs not be reclaimed → need not be reclaimed
- Section 4.3: has redirected → redirects
- Section 4.4: both the two stripe → both the stripe
- General comment: Remove unnecessary commas.
- General comment: Break long sentences into two to improve readability.
==+== G. Comments for PC (hidden from authors)
==+== End Review