==+== FAST '09 Paper Review Form

==-== Set the paper number and fill out lettered sections A through G.

==-== DO NOT CHANGE LINES THAT START WITH "==+=="!

==+== RAJU MUST REPLACE THIS LINE BEFORE UPLOADING

==+== Begin Review

==+== Paper #18

==-== Replace '000000' with the actual paper number.

==+== Review Readiness

==-== Enter “Ready” here if the review is ready for others to see:

Ready

==+== A. Overall merit

==-== Enter a number from 1 to 5.

==-== Choices: 1. Reject

==-== 2. Weak reject

==-== 3. Weak accept

==-== 4. Accept

==-== 5. Strong accept

2

==+== B. Novelty

==-== Enter a number from 1 to 5.

==-== Choices: 1. Published before

==-== 2. Done before (not necessarily published)

==-== 3. Incremental improvement

==-== 4. New contribution

==-== 5. Surprisingly new contribution

2

==+== C. Longevity

==-== How important will this work be over time?

==-== Enter a number from 1 to 5.

==-== Choices: 1. Not important now or later

==-== 2. Low importance

==-== 3. Average importance

==-== 4. Important

==-== 5. Exciting

2

==+== D. Reviewer expertise

==-== Enter a number from 1 to 4.

==-== Choices: 1. No familiarity

==-== 2. Some familiarity

==-== 3. Knowledgeable

==-== 4. Expert

3

==+== E. Paper summary

The authors present an approach that uses virtualization to address the problem of maintaining and porting third-party file system implementations to new OS versions. They propose pre-packaged virtual machines (FSVAs) that run third-party file systems alongside a user OS running in its own VM. The two VMs communicate via the inter-VM communication mechanisms of the underlying VMM, enabled by an FS-agnostic proxy on the user-OS end and an FSVA proxy, specific to the OS and file system versions, on the FSVA end. Third-party FS developers thus develop only for their OS of choice. The FS-agnostic proxy, maintained by the OS distribution vendor, forwards VFS invocations in the user VM to the FSVA proxy inside the FSVA via inter-VM RPC and event-channel notifications.
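
For concreteness, my understanding of the forwarding path is roughly the sketch below. This is purely my own illustration, not the authors' code; all names in it (fsva_req_t, fsva_ring_t, evtchn_notify, FSVA_OP_LOOKUP) are hypothetical:

    /* Sketch: the user-OS proxy marshals a VFS lookup into a shared
     * ring and signals the FSVA via an event channel. Illustrative only. */
    #include <stdint.h>
    #include <string.h>

    #define FSVA_OP_LOOKUP  1
    #define FSVA_RING_SLOTS 64

    typedef struct {
        uint32_t op;        /* which VFS operation is requested */
        uint64_t dir_ino;   /* inode number of the parent directory */
        char     name[256]; /* path component being looked up */
    } fsva_req_t;

    typedef struct {
        fsva_req_t slots[FSVA_RING_SLOTS]; /* lives in a shared page */
        uint32_t   prod;                   /* producer index (user proxy) */
        uint32_t   cons;                   /* consumer index (FSVA proxy) */
    } fsva_ring_t;

    /* Stand-in for the VMM's event-channel notification primitive. */
    static void evtchn_notify(int port) { (void)port; }

    /* Marshal a lookup request and wake the FSVA proxy. */
    static void fsva_lookup(fsva_ring_t *ring, int port,
                            uint64_t dir_ino, const char *name)
    {
        fsva_req_t *req = &ring->slots[ring->prod % FSVA_RING_SLOTS];
        req->op      = FSVA_OP_LOOKUP;
        req->dir_ino = dir_ino;
        strncpy(req->name, name, sizeof(req->name) - 1);
        req->name[sizeof(req->name) - 1] = '\0';
        ring->prod++;        /* publish the request... */
        evtchn_notify(port); /* ...and notify the FSVA proxy */
    }

    static fsva_ring_t ring; /* would be a granted shared page in practice */

    int main(void)
    {
        fsva_lookup(&ring, /*port=*/1, /*dir_ino=*/2, "README");
        return 0;
    }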

==+== F. Comments for author

I first comment on the overall solution approach.

What the authors propose is essentially what NFS already provides implicitly. It is important to put this in perspective and consider what an administrator today would do to use a file system that is not supported by the target OS version. The solution is straightforward: mount the disk on a compatible OS and then export it via a network file system (say, NFS). There is no fundamental restriction on doing the same for two VMs instead of two physical hosts.
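
Concretely, the workaround amounts to one export entry and one mount. In this sketch, fshost, userhost, and /mnt/thirdparty are placeholder names:

    # On the OS that can mount the third-party FS, add to /etc/exports:
    /mnt/thirdparty  userhost(rw,sync,no_subtree_check)
    # ...and re-export:
    # exportfs -ra

    # On the user's OS (or VM), mount it over NFS:
    # mount -t nfs fshost:/mnt/thirdparty /mnt/fs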

In a revision, the authors should address the key question, “What are the fundamental differences between the FSVA approach and NFS?”, in light of the following arguments:

- One of the design goals of NFS was to make it easily portable across operating systems and file systems. NFS is also a virtualization-independent solution.

- Third-party developers and motivated administrators can already use NFS servers to implement and deploy third-party FSes.

- FSVAs will have to deal with exactly the same cross-version compatibility issues that the much more mature NFS project is currently dealing with.

- Given the immense amount of research on optimizing inter-VM communication in the virtualization and networking communities, tuning NFS for use between VMs has already been largely addressed.

Given the existence of NFS and its many variants, the main contribution of this work is hard to identify. At the very least, the authors need to provide a more in-depth discussion of why an extension of NFS is not sufficient. One example would be to use NFS over XenSocket, an optimized inter-VM communication channel for Xen environments. Several other optimizations have been proposed for both native and hosted virtual machine monitors.

Thus, FSVA as proposed by the authors seems like a reinvention of existing ideas, and this primarily motivates my weak reject rating for this paper.

To the authors' credit, some of the individual optimization techniques they propose are novel: for instance, improving accountability with the unified buffer cache, and synchronizing VM migration. Having an independent FSVA for each user OS also makes good sense.

With the split-proxy architecture the authors propose, there is a secondary set of problems with the high-level solution approach that seems hard to avoid. Some of these currently plague NFS implementations as well, but since NFS is based on well-established network communication abstractions, NFS fares better overall:

1. Hypervisor technology changes substantially from one vendor to another, and the number of hypervisor vendors is likely to grow (the hypervisor market is still young, with new solutions appearing every few months), each with its own mechanisms. A user's choice of VM technology can therefore be quite diverse. Furthermore, successive versions of the same virtualization solution may break the backward compatibility on which user-proxy/FSVA-proxy communication depends. It seems that we might be moving the problem from one place to another rather than solving it: one of the problems now becomes maintaining the communication code between the user and FSVA proxies across all these virtualization solutions and their evolving versions. The impact of these variables needs to be explored further. The key question is: how does the amount of new work created (now borne by the FS developers and the OS vendors as well) compare to the existing burden of third-party file system maintenance?

2. Given that FSVA proxies are tied to the semantics of the base OS, an FSVA proxy must be created for each OS version that third-party FS developers choose to develop against. If each third-party implementation has multiple independent development branches, each with its own preferred OS version, the cost of developing and maintaining FSVA proxies can become prohibitive as well. One question left unresolved for me after reading the paper was “Who develops the FSVA proxies?” The answer may be connected to the maintenance question raised above.

3. Given that storage stack implementations are already quite large and buggy, it may be very difficult to convince OS vendors to maintain another fairly complex branch comprising an additional few thousand lines of code. They would also need to port the user proxies to each VMM as it evolves.

4. What if VFS semantics change across OS versions? Does this break backward compatibility for FSVA proxies built against the previous (now deprecated) interface definition?

5. Since a virtualized environment becomes a prerequisite for running third-party file systems under this approach, what should a user who wishes to run an OS natively and use a third-party FS do?

More information would be appreciated on the following questions:

1. How do the policy and semantic changes (exemplified on page 3) affect the development of FSVA proxies and user proxies?

2. Pg 6: What is the underlying uncertainty when you state “The FSVA will *likely* retain an inode …”?

3. Pg 7: Elaborate on what page mappings are changed, and how, when you state “The user and FSVA proxies transparently fix the page mappings …”.

Evaluation comments:

1. Since one of the overheads of the proposed approach is metadata duplication, part of the evaluation must focus on benchmarks that stress metadata use. Postmark has a variety of parameters that can be used to control the amount of metadata generated during the benchmark (see the example configuration after this list).

2. In 5.2, some discussion is needed of how the FSVA proxies were implemented.

3. The % runtime overhead for the OpenSSH compilation may be biased by the presence of other system bottlenecks (e.g., the CPU). I suggest using I/O-intensive benchmarks that can clearly expose any performance overheads of the approach (e.g., Postmark).

4. In 5.6, what is the significance of running your custom benchmark against a “root NFS filesystem” rather than a local file system?

5. In 5.6, a timeline of the migration operation (similar in spirit to, but perhaps more detailed than, Fig. 3) would really help the reader identify the relative overheads of the various stages of migration.
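
To expand on item 1: a Postmark configuration along the following lines would keep the workload metadata-bound (many small files and a large number of create/delete transactions). This is a sketch from memory, and /mnt/fsva-test is a placeholder path; please double-check the parameters against your Postmark version:

    set location /mnt/fsva-test
    set number 50000
    set size 512 4096
    set subdirectories 100
    set transactions 100000
    run
    quit

If I recall correctly, Postmark accepts such a script as a command-line argument (postmark <configfile>), which makes runs easy to reproduce.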

Typos etc.:

The paper is well written. There are minor writing mistakes:

1. On page 5, change “not the customers' pace” → “not at the customers' pace”.

2. On page 5, change “between the component” → “between the components”.

3. On page 10, change “copy two VMs' memory” → “copying two VMs' memory”.

4. On page 2, change “even if the VFS interface were” → “even if the VFS interfaces were”.

5. On page 6, I believe you mean “1-to-1” and not “1-to-n”.

==+== G. Comments for PC (hidden from authors)

==+== End Review