Set up the key-value store benchmark (YCSB) to test NBW on the file system path (a driver sketch appears after this list).
Fix the CPU scheduler bug that may be contributing to the performance degradation observed in some of the benchmarks.
Set up the consolidated benchmark using QEMU on a Linux host. Two possible options:
Use NBW at the host level and virtualize guest machines without KVM to create memory pressure at the hypervisor level.
Use a vanilla host, virtualize guest machines using KVM, and let NBW work at the guest level.
We plan to establish the baseline by statically partitioning physical memory among the VMs so that each working set fits in memory comfortably.
We then incrementally increase the number of VMs (i.e., workloads) until they begin competing for memory, and measure the execution-time improvements of NBW over the vanilla system.
The result of the benchmark is a vector of execution times, one dimension per workload in the consolidated system; we obtain one such vector for vanilla and one for NBW (see the measurement sketch after this list).
Depending on time and on the results of the key-value store benchmark, explore other file system benchmarks such as databases or Filebench (possibly using MongoDB as the back end).
At very low priority, rerun the SPEC SFS benchmark to see whether the inclusion of the read-from-patches implementation yields any improvement.
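A minimal driver sketch for the YCSB item, assuming a YCSB checkout under /opt/ycsb and the RocksDB binding as the key-value store; the install path, binding, workload, and record counts are all placeholders to be adjusted for the actual store deployed on the NBW file system.

    import subprocess

    YCSB_DIR = "/opt/ycsb"            # assumed install location (placeholder)
    DB = "rocksdb"                    # assumed binding; any YCSB key-value binding works
    WORKLOAD = "workloads/workloada"  # stock 50/50 read/update mix

    def ycsb(phase):
        """Run one YCSB phase ('load' or 'run') and return its stdout."""
        cmd = [f"{YCSB_DIR}/bin/ycsb", phase, DB,
               "-P", f"{YCSB_DIR}/{WORKLOAD}",
               "-p", "recordcount=1000000",
               "-p", "operationcount=1000000"]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        ycsb("load")                           # populate the store; these writes exercise NBW
        for line in ycsb("run").splitlines():
            if line.startswith("[OVERALL]"):   # YCSB summary metrics, e.g. RunTime(ms)
                print(line)

Running the load and run phases back to back on a file system with and without NBW enabled gives a direct comparison from the [OVERALL] runtime and throughput lines.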
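For the consolidated measurement, a sketch under stated assumptions: each guest boots from a self-contained workload image (the ws*.qcow2 names are hypothetical) that runs its workload at boot and powers off on completion, so guest wall-clock time equals workload execution time. Memory sizes and QEMU flags are illustrative; drop -enable-kvm for the no-KVM option above. The vanilla and NBW vectors come from two separate invocations after rebooting the host (or the guests, for the guest-level option) into the corresponding kernel.

    import subprocess, time
    from concurrent.futures import ThreadPoolExecutor

    def run_guest(image, mem_mb):
        """Boot one guest and return its wall-clock execution time in seconds."""
        cmd = ["qemu-system-x86_64", "-enable-kvm",   # omit -enable-kvm for option 1
               "-m", str(mem_mb), "-nographic", "-no-reboot",
               "-drive", f"file={image},format=qcow2"]
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        return time.monotonic() - start

    def run_consolidated(images, total_mem_mb):
        """Launch all guests concurrently with physical memory statically
        partitioned among them; competition appears as the number of VMs
        grows and each share shrinks. Returns the execution-time vector."""
        share = total_mem_mb // len(images)
        with ThreadPoolExecutor(max_workers=len(images)) as pool:
            return list(pool.map(lambda img: run_guest(img, share), images))

    if __name__ == "__main__":
        images = ["ws1.qcow2", "ws2.qcow2", "ws3.qcow2"]   # placeholder workload images
        print(run_consolidated(images, total_mem_mb=8192)) # one vector per system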
Future ToDos
Measure the time to switch from user mode to kernel mode and back.
Test the system at varying NCQ queue depths (see the fio sweep sketch after this list).
Explore creation of persistent patches.
Implement NFS support.
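For the NCQ item, one way to drive the queue-depth sweep is fio plus the sysfs queue_depth knob. This is a sketch only: the device name is a placeholder (the job writes to the raw device and destroys its contents), the knob applies to SATA/SCSI devices and requires root, and the job parameters are illustrative.

    import subprocess
    from pathlib import Path

    DEV = "sdb"  # placeholder test device -- fio overwrites it directly
    NCQ_KNOB = Path(f"/sys/block/{DEV}/device/queue_depth")  # device-side NCQ depth

    def run_fio(iodepth):
        """One direct-I/O random-write job at the given software queue depth."""
        cmd = ["fio", "--name=nbw-ncq", f"--filename=/dev/{DEV}",
               "--ioengine=libaio", "--direct=1", "--rw=randwrite",
               "--bs=4k", "--size=1g", f"--iodepth={iodepth}",
               "--output-format=terse"]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    for depth in (1, 2, 4, 8, 16, 31):     # 31 is a typical SATA NCQ maximum
        NCQ_KNOB.write_text(str(depth))    # cap NCQ depth at the device (needs root)
        print(f"depth={depth}: {run_fio(depth).strip()}")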
Meetings
01/08/15: Non-blocking write curves for paper
12/16/14: Handling of journaling modes; Responding to shepherd; Simulation of cache misses for partial page writes (Hector)
12/09/14: FAST'15 Review Comments (Audio lost)
11/25/14: Intro feedback and trace-replayed brainstorming
10/16/13: FS-specific metadata is cached via page cache using device mapping!
10/11/13: Two options to explore for FS metadata blocking: (1) lazy page fetching and (2) work-queue-based threaded page fetching
10/09/13: Sync metadata access seems to block NBW; discussion of next steps
09/25/13: Seems like non-blocking writes are blocking!
09/23/13: Results examination for lazy, planning for filebench runs
09/20/13: Results examination, development, and paper writing tasks
09/18/13: Inspection of NFS client kernel-code
09/16/13: Results for SPECSfs and Filebench, updates on NFS client
09/13/13: Updates on filebench (fileserver) and SPECSfs
09/11/13: New workload plan – jdbc, jdbm, gdbm (Java), etc.
09/06/13: New evaluation plan
08/28/13: Next steps: SPECSfs analysis, I/O scheduler, other FS benchmarks
08/21/13: Updates on SPECSfs behavior and controlling runtime
08/19/13: Updates/discussion on NFS client – does it block on write?
08/14/13: Implementation update for CPU scheduler and some initial direction of NFS implementation; Results update (Aldo)
08/09/13: NFS server - async config, many updates from Daniel on development - NetApp project
07/31/13: Memory ordering discussion + updates from Aldo
07/24/13: I/O prioritization vs. Lazy file system implementation + discussion of next set of configurations to run
07/17/13: Review of current status (Aldo, Daniel, Luis) and thoughts on syscall tracing
07/10/13: Discussion with Luis about FAST paper and Filesystem oriented submission
07/03/13: Discussion with Luis on implementation/documentation next steps
04/29/13: New FIO data points to NCQ being effective – need to test with larger file sizes in FIO
04/23/13: NCQ results indicate fake-ncq – Aldo will confirm with more experiments; new results with Lazy (10K max patches) are promising
04/18/13: Three directions next: reliability/reproducibility of data, NCQ, and Jesus' scheduler completion and integration with Lazy
04/11/13: Two primary strengths of NBW not capitalized on in the current implementation
03/28/13: Strategy discussion
03/26/13: New data – higher queue depths not proportional to fraction of NBW requests (request merging the answer?)
03/21/13: Potential source of failure identified (as single-stepping)
03/19/13: Failure issues with DaCapo
01/31/13: Initial set of results with new CPU scheduler; next steps fix cpu scheduler optimization for mmap faulting processes; correlating counters with performance; I/O scheduler optimization; memory corruption issue
01/17/13: Updates and status of CPU scheduler implementation from Jesus
01/15/13: Memory sensitivity and benchmarks for evaluation
06/07/11: 32 vs. 64 bit porting of memcheck.c – next steps: porting to 64 bit, random single-step checks, radix/range/red-black tree based patch implementation
05/31/11: More generic w/ generic_fault_handler; thoughts on consistency and syscall implementation
05/23/11: Updates on patch implementation
05/17/11: Single stepping vs. disassembly for write size
05/10/11: Initial thoughts on handling syscalls
05/03/11: Initial kernel design
04/19/11: Addressing Hotstorage review comments
03/11/11: Result review and paper-writing plans
03/09/11: (evening Skype call) Review
03/09/11: Deconstructing results, motivation discussion, and changes to graph data