CS 736
Spring 2008
Towards Automatic File Systems
By
Kanchan Damle
Course Instructor
Prof. Remzi Arpaci-Dusseau
AGENDA
• Lots of Questions
• Implementation
– Basic File System Design
– FUSE
• Some Measures
– Performance of the File System
• Conclusions
• Questions?
The usual questions for any new idea!
• What?
– User: specifies what the file system should look like
– Implementer: it will be done!
• Why?
– The user knows their needs best
– Flexibility
– Optimal resource usage
• How?
– Let the user specify
– Just redirect the system calls
Implementation Details
• Say the user wants a "Simple FS"
(I picked another "S": SFS)
– 1 superblock
- no redundancy, unlike FFS
– Inode and data bitmaps
- none of the free-list problems
– Limited to 100 files
– User is not sure how they will access the data
Other Details
• Max number of open files: 25
• Max file size: 15 KB
• Some reserved blocks
• Maximum filename length: 16
• Maximum pathname length: 256
• Root exists! (Inode #2)
• Persistent storage on a file:
– Raw device
– File on an existing file system
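
A minimal sketch of how these parameters could be captured as on-disk constants and a superblock structure; the names and fields below are illustrative assumptions, not the actual SFS code.

/* Hypothetical SFS on-disk parameters -- names and layout are
 * illustrative assumptions, not the actual project code. */
#include <stdint.h>

#define SFS_MAX_FILES         100          /* limit to 100 files       */
#define SFS_MAX_OPEN_FILES     25          /* max number of open files */
#define SFS_MAX_FILE_SIZE  (15 * 1024)     /* 15 KB max file size      */
#define SFS_MAX_NAME_LEN       16          /* max filename length      */
#define SFS_MAX_PATH_LEN      256          /* max pathname length      */
#define SFS_ROOT_INODE          2          /* root always exists       */

/* One superblock, no FFS-style redundancy. */
struct sfs_superblock {
    uint32_t magic;        /* identifies an SFS image         */
    uint32_t num_inodes;   /* fixed at SFS_MAX_FILES          */
    uint32_t inode_bitmap; /* sector of the inode bitmap      */
    uint32_t data_bitmap;  /* sector of the data-block bitmap */
    uint32_t inode_start;  /* first sector holding inodes     */
    uint32_t data_start;   /* first data-block sector         */
};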
Implementation Details
METADATA STRUCTURES
[Diagram: on-disk layout of the metadata structures]
• Sectors 0-4: superblock, inode bitmap, and data-block bitmap
• Sectors 5-39: inodes
• Sectors 40 onwards: data blocks
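
With this layout, locating an inode on disk is simple arithmetic. A sketch assuming 512-byte sectors and 128-byte inodes (both sizes are assumptions; the real ones may differ):

/* Hypothetical inode lookup arithmetic for the layout above.
 * Sector size and inode size are illustrative assumptions. */
#define SECTOR_SIZE        512
#define INODE_SIZE         128                     /* assumed */
#define INODES_PER_SECTOR (SECTOR_SIZE / INODE_SIZE)
#define INODE_START_SECTOR   5                     /* inodes live in 5-39 */

/* Sector holding inode number `ino`. */
static inline unsigned inode_sector(unsigned ino)
{
    return INODE_START_SECTOR + ino / INODES_PER_SECTOR;
}

/* Byte offset of that inode within its sector. */
static inline unsigned inode_offset(unsigned ino)
{
    return (ino % INODES_PER_SECTOR) * INODE_SIZE;
}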
Implementation Details
Why FUSE?
• The user-level FS implementation is done
• But how can system-level calls be directed to the implemented FS?
• Why bother? Uniformity: it is a hassle to remember new calls
• Solution: indirection!
• FUSE can help
FUSE unveiled!
• What is FUSE?
• A framework: redirects FS calls to the (user-level) Simple FS API
• Has two modules + a mount utility
– User library: interfaces with the Simple FS API
– Kernel module: binds to the VFS, calls the FUSE library
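
A minimal sketch of how the user-library side plugs in: the file system implements callbacks, registers them in a fuse_operations table, and hands control to fuse_main. The sfs_* handlers here are hypothetical stand-ins for the Simple FS API.

/* Minimal FUSE (2.x high-level API) registration sketch; the sfs_*
 * callbacks are hypothetical stand-ins for the Simple FS API. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static int sfs_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {   /* root exists (inode #2) */
        st->st_mode = S_IFDIR | 0755;
        return 0;
    }
    return -ENOENT;                 /* real code consults the inodes */
}

static struct fuse_operations sfs_ops = {
    .getattr = sfs_getattr,
    /* .open, .read, .write, .unlink, .readdir, ... */
};

int main(int argc, char *argv[])
{
    /* FUSE mounts the FS and dispatches VFS requests to sfs_ops. */
    return fuse_main(argc, argv, &sfs_ops, NULL);
}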
Basic Call Flow..
Interface Details
[Diagram: call flow between user space (Simple FS, FUSE library) and kernel space (kernel module, VFS)]
[0] FUSE library issues a blocking read call on the kernel module
[1] A request arrives from the VFS at the kernel module
[2] Kernel module unblocks the waiting thread and sends it the request
[3] FUSE library invokes the respective Simple FS module
[4] The module returns with the requested data
[5] Data is transferred to the kernel module
[6] Kernel module transfers the data to the VFS
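
Conceptually, the user-space half of this flow is one loop: block on a read of the FUSE device, dispatch the request to the FS, write the reply back. A rough sketch with a made-up request format (the real FUSE library hides this loop and uses its own wire protocol):

/* Conceptual sketch of the user-space dispatch loop that fuse_main
 * hides; the request/reply structs are hypothetical, not FUSE's
 * actual wire format. */
#include <unistd.h>

struct req  { int opcode; char payload[4096]; };  /* which FS call */
struct resp { int error;  char data[4096]; };

/* Hypothetical dispatcher into the Simple FS API. */
static void dispatch_to_simple_fs(const struct req *rq, struct resp *rp)
{
    (void)rq;            /* real code switches on rq->opcode */
    rp->error = 0;
}

void serve(int fuse_dev_fd)
{
    struct req  rq;
    struct resp rp;

    for (;;) {
        /* [0] block until the kernel module hands over a request */
        if (read(fuse_dev_fd, &rq, sizeof(rq)) <= 0)   /* [1],[2] */
            break;
        dispatch_to_simple_fs(&rq, &rp);               /* [3],[4] */
        write(fuse_dev_fd, &rp, sizeof(rp));           /* [5],[6] */
    }
}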
Internal Details
• VFS system call map
– unlink() -> sys_unlink
• FUSE kernel call transfer
– sys_unlink -> fuse_read_device
• Request processing
– User-level FS call invoked
• Blocking calls: queues used
– Pending, processing, etc.
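
A rough sketch of how those queues might be kept; the state names mirror the pending/processing lists mentioned above, but the structures are illustrative assumptions, not FUSE's actual kernel code.

/* Hypothetical request-queue bookkeeping for blocking calls;
 * loosely modeled on FUSE's pending and processing lists. */
enum req_state { REQ_PENDING, REQ_PROCESSING, REQ_FINISHED };

struct fs_request {
    int                unique;  /* matches replies to requests */
    enum req_state     state;
    struct fs_request *next;    /* linked-list queue */
};

struct req_queues {
    struct fs_request *pending;    /* awaiting a daemon read()  */
    struct fs_request *processing; /* read by the daemon, reply */
                                   /* not yet written back      */
};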
Cost-Benefit …
• Benefits:
– Flexibility
– Not tied to any particular OS
– Avoids kernel recompilation
– Non-privileged mounts possible
– Simplicity: coding in user space is simpler
• Cost:
– More overhead: more switching between user and kernel mode
Performance Measures .. Simple FS
[Chart: disk initializing time (ms) and sync time (ms) over 10 runs; y-axis: time in msec, 0-250]
Performance Measures .. Simple FS
[Chart: File System API call latencies over 12 runs; y-axis: time in usec, 0-700; series: Get Attr, File Create, Open, Write, Seek, Bigger Write, Read, Close, Unlink, Read Dir]
Conclusions
• Good performance
– Buffered writes mean fewer disk writes
• No worry about positioning time
– Uniform access time
• Flexibility
– The user gets what they require
• System calls are redirected, hence easy to use
– No odd new names to remember; the interface stays uniform
• But:
– Periodic FS_Sync is required
– What happens if the backing file is deleted?
– Not yet secure for sensitive data
Lessons Learnt
• Reading does not imply doing.
• Simple things can be bigger hurdles than expected!
• Planning the work helps
• Give it “time” and “thought”
• Believe that you can do it!
Thank you…
QUESTIONS?