The Virtual File System Abstraction

The Virtual File System (VFS) is a software layer in the kernel that provides a uniform interface for all filesystem operations, regardless of the underlying filesystem implementation. When a program calls open(), read(), write(), or close(), it goes through the VFS, which dispatches the call to the appropriate filesystem-specific code. The application never needs to know whether the file resides on an ext4 partition, an NFS network share, a procfs pseudo-filesystem, or a tmpfs RAM disk.

Why VFS Exists

Without VFS, every application would need filesystem-specific code to handle different storage backends. Imagine a text editor that needs separate code paths for reading files on ext4 vs. XFS vs. NFS. The VFS eliminates this by defining a set of abstract operations (function pointers in C) that every filesystem must implement. The kernel calls these operations through the VFS layer, and the filesystem driver handles the details.

VFS Objects

The Linux VFS defines four primary objects:

  superblock -- Represents a mounted filesystem instance. Contains metadata such as the block size, the maximum file size, and a pointer to the root inode. Each mount produces one superblock.
  inode -- Represents a file on disk (metadata plus block pointers). The VFS inode is an in-memory copy of the on-disk inode, enriched with VFS-specific fields.
  dentry (directory entry) -- Maps a filename to an inode. The kernel caches dentries in the dentry cache (dcache) for fast path lookups.
  file -- Represents an open file: the combination of a dentry (which file) and a current position (offset). Multiple file objects can point to the same dentry.

File Descriptor Table

Each process has a file descriptor table -- an array of pointers to file objects. When a process calls open(), the kernel creates a file object and returns the array index (the file descriptor number: 0, 1, 2, ...). The first three are typically stdin (0), stdout (1), and stderr (2). The file descriptor is the handle the process uses for all subsequent operations (read(fd, ...), write(fd, ...)).

System-Wide Open File Table

The kernel also maintains a system-wide open file table (also called the file table). Each entry in this table contains the current file offset, the access mode (read/write), and a pointer to the underlying dentry/inode. Multiple processes can have file descriptors pointing to the same entry (e.g., after fork()), meaning they share the file offset.

How a File Operation Flows

When a process calls read(fd, buf, 4096):

  1. The kernel looks up fd in the process's file descriptor table to find the file object.
  2. The file object has a pointer to the dentry, which points to the inode.
  3. The file object carries a file_operations table (function pointers) that the filesystem driver installed, from the inode's i_fop, when the file was opened; the inode likewise carries its own inode_operations.
  4. The VFS calls the filesystem-specific read implementation through that table (in the Linux kernel, ext4_file_read_iter() for ext4 or nfs_file_read() for NFS).
  5. The filesystem driver reads the data (from disk, network, or memory) and copies it into the user's buffer.

Mount Points

Different filesystems are mounted at directories in the single unified directory tree. The VFS maintains a mount table that records which filesystem is mounted where. When path resolution crosses a mount point, the VFS transparently switches to the new filesystem's superblock and root dentry. This is how /proc (procfs), /sys (sysfs), /tmp (tmpfs), and /mnt/nfs (NFS) all appear in one seamless tree despite being entirely different filesystem types.

One read() Call, Many Backends

Real-World Example

Consider three files opened by the same process:

int fd1 = open("/home/user/data.csv", O_RDONLY);    // ext4 on SSD
int fd2 = open("/proc/self/status", O_RDONLY);      // procfs (kernel-generated)
int fd3 = open("/mnt/share/report.pdf", O_RDONLY);  // NFS (network)

All three use the exact same read() syscall:

read(fd1, buf, 4096);  // VFS dispatches to ext4's read path -> reads from the SSD
read(fd2, buf, 4096);  // VFS dispatches to procfs's read path -> generates text from kernel structs
read(fd3, buf, 4096);  // VFS dispatches to NFS's read path -> sends an RPC to the NFS server

The application code is identical. The VFS performs the dispatch based on the filesystem type recorded in the superblock of each mount point.

Path resolution example: When opening /mnt/share/report.pdf:

  1. Start at the root inode of the root filesystem (ext4).
  2. Look up "mnt" in root's dentry -> find inode for /mnt (ext4).
  3. Look up "share" in /mnt -> a mount point is detected, so the VFS switches to the NFS superblock and its root dentry.
  4. Look up "report.pdf" in the NFS root dentry -> find inode for the file (NFS inode).
  5. Return a file descriptor backed by NFS file_operations.

This seamless switching is the core value of VFS. Adding a new filesystem to Linux means implementing the VFS interfaces (super_operations, inode_operations, file_operations, dentry_operations), and the new filesystem immediately works with every existing application.

VFS Architecture: Uniform Interface Over Multiple Filesystems

[Diagram: applications A, B, and C call open()/read()/write()/close() through the system call interface into the VFS (superblock, inode, dentry, and file objects; dentry cache and inode cache), which dispatches to ext4 (journaling FS, SSD /dev/sda), XFS (high-performance FS, HDD /dev/sdb), NFS (network FS), procfs (kernel info), and tmpfs (RAM-backed). Mount table: / (ext4), /data (XFS), /mnt (NFS), /proc, /tmp.]