Diffstat (limited to 'Documentation/filesystems')
 Documentation/filesystems/ext4/directory.rst             |  63
 Documentation/filesystems/fscrypt.rst                    |   2
 Documentation/filesystems/iomap/operations.rst           |  50
 Documentation/filesystems/porting.rst                    |  15
 Documentation/filesystems/ramfs-rootfs-initramfs.rst     |  12
 Documentation/filesystems/resctrl.rst                    | 134
 Documentation/filesystems/xfs/xfs-online-fsck-design.rst |   2
 7 files changed, 208 insertions(+), 70 deletions(-)
diff --git a/Documentation/filesystems/ext4/directory.rst b/Documentation/filesystems/ext4/directory.rst
index 6eece8e31df8..9b003a4d453f 100644
--- a/Documentation/filesystems/ext4/directory.rst
+++ b/Documentation/filesystems/ext4/directory.rst
@@ -183,10 +183,10 @@ in the place where the name normally goes. The structure is
    - det_checksum
    - Directory leaf block checksum.
 
-The leaf directory block checksum is calculated against the FS UUID, the
-directory's inode number, the directory's inode generation number, and
-the entire directory entry block up to (but not including) the fake
-directory entry.
+The leaf directory block checksum is calculated against the FS UUID (or
+the checksum seed, if that feature is enabled for the fs), the directory's
+inode number, the directory's inode generation number, and the entire
+directory entry block up to (but not including) the fake directory entry.
 
 Hash Tree Directories
 ~~~~~~~~~~~~~~~~~~~~~
@@ -196,12 +196,12 @@ new feature was added to ext3 to provide a faster (but peculiar)
 balanced tree keyed off a hash of the directory entry name. If the
 EXT4_INDEX_FL (0x1000) flag is set in the inode, this directory uses a
 hashed btree (htree) to organize and find directory entries. For
-backwards read-only compatibility with ext2, this tree is actually
-hidden inside the directory file, masquerading as “empty” directory data
-blocks! It was stated previously that the end of the linear directory
-entry table was signified with an entry pointing to inode 0; this is
-(ab)used to fool the old linear-scan algorithm into thinking that the
-rest of the directory block is empty so that it moves on.
+backwards read-only compatibility with ext2, interior tree nodes are actually
+hidden inside the directory file, masquerading as “empty” directory entries
+spanning the whole block. It was stated previously that directory entries
+with the inode set to 0 are treated as unused entries; this is (ab)used to
+fool the old linear-scan algorithm into skipping over those blocks containing
+the interior tree node data.
 
 The root of the tree always lives in the first data block of the
 directory. By ext2 custom, the '.' and '..' entries must appear at the
@@ -209,24 +209,24 @@ beginning of this first block, so they are put here as two ``struct
 ext4_dir_entry_2`` s and not stored in the tree. The rest of the root
 node contains metadata about the tree and finally a hash->block map to
 find nodes that are lower in the htree. If
-``dx_root.info.indirect_levels`` is non-zero then the htree has two
-levels; the data block pointed to by the root node's map is an interior
-node, which is indexed by a minor hash. Interior nodes in this tree
-contains a zeroed out ``struct ext4_dir_entry_2`` followed by a
-minor_hash->block map to find leafe nodes. Leaf nodes contain a linear
-array of all ``struct ext4_dir_entry_2``; all of these entries
-(presumably) hash to the same value. If there is an overflow, the
-entries simply overflow into the next leaf node, and the
-least-significant bit of the hash (in the interior node map) that gets
-us to this next leaf node is set.
-
-To traverse the directory as a htree, the code calculates the hash of
-the desired file name and uses it to find the corresponding block
-number. If the tree is flat, the block is a linear array of directory
-entries that can be searched; otherwise, the minor hash of the file name
-is computed and used against this second block to find the corresponding
-third block number. That third block number will be a linear array of
-directory entries.
+``dx_root.info.indirect_levels`` is non-zero then the htree has that many
+levels and the blocks pointed to by the root node's map are interior nodes.
+These interior nodes have a zeroed out ``struct ext4_dir_entry_2`` followed by
+a hash->block map to find nodes of the next level. Leaf nodes look like
+classic linear directory blocks, but all of their entries have a hash value
+equal to or greater than the indicated hash of the parent node.
+
+The actual hash value for an entry name is only 31 bits; the least-significant
+bit is set to 0. However, if there is a hash collision between directory
+entries, the least-significant bit may get set to 1 on interior nodes in the
+case where these two (or more) hash-colliding entries do not fit into one leaf
+node and must be split across multiple nodes.
+
+To look up a name in such an htree, the code calculates the hash of the
+desired file name and uses it to find the leaf node with the range of hash
+values the calculated hash falls into (in other words, a lookup works
+basically the same as it would in a B-Tree keyed by the hash value), possibly
+also scanning the leaf nodes that follow (in tree order) in case of hash
+collisions.
 
 To traverse the directory as a linear array (such as the old code does),
 the code simply reads every data block in the directory. The blocks used
@@ -319,7 +319,8 @@ of a data block:
    * - 0x24
      - __le32
      - block
-     - The block number (within the directory file) that goes with hash=0.
+     - The block number (within the directory file) that leads to the left-most
+       leaf node, i.e. the leaf containing entries with the lowest hash values.
    * - 0x28
      - struct dx_entry
      - entries[0]
@@ -442,7 +443,7 @@ The dx_tail structure is 8 bytes long and looks like this:
    * - 0x0
      - u32
      - dt_reserved
-     - Zero.
+     - Unused (but still part of the checksum, curiously).
    * - 0x4
      - __le32
      - dt_checksum
@@ -450,4 +451,4 @@
 The checksum is calculated against the FS UUID, the htree index header
 (dx_root or dx_node), all of the htree indices (dx_entry) that are in
-use, and the tail block (dx_tail).
+use, and the tail block (dx_tail) with the dt_checksum initially set to 0.
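
The lookup step the new text describes, finding the leaf whose hash range
covers the computed hash, amounts to a bounded binary search over the
hash->block map. A minimal user-space sketch of just that step follows; it is
deliberately simplified and is not the kernel's implementation (the real
logic is dx_probe() in fs/ext4/namei.c)::

    /* Simplified dx_entry map: each entry gives the lowest hash reachable
     * through "block"; entries are sorted by hash, entries[0] covers hash 0. */
    #include <stdint.h>
    #include <stdio.h>

    struct dx_entry {
            uint32_t hash;
            uint32_t block;
    };

    /* Return the block whose hash range contains "want": the last entry
     * with entry.hash <= want. */
    static uint32_t dx_find_block(const struct dx_entry *entries, int count,
                                  uint32_t want)
    {
            int lo = 1, hi = count - 1, at = 0;

            while (lo <= hi) {
                    int mid = lo + (hi - lo) / 2;

                    if (entries[mid].hash <= want) {
                            at = mid;
                            lo = mid + 1;
                    } else {
                            hi = mid - 1;
                    }
            }
            return entries[at].block;
    }

    int main(void)
    {
            struct dx_entry map[] = { { 0x00000000, 12 }, { 0x80000000, 37 } };

            printf("0x7fffffff -> block %u\n", dx_find_block(map, 2, 0x7fffffff));
            printf("0x80000002 -> block %u\n", dx_find_block(map, 2, 0x80000002));
            return 0;
    }

With non-zero ``indirect_levels`` the same search simply repeats at each
interior node until a leaf is reached.
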
diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 696a5844bfa3..70af896822e1 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -450,9 +450,7 @@ API, but the filenames mode still does.
   - CONFIG_CRYPTO_HCTR2
   - Recommended:
     - arm64: CONFIG_CRYPTO_AES_ARM64_CE_BLK
-    - arm64: CONFIG_CRYPTO_POLYVAL_ARM64_CE
     - x86: CONFIG_CRYPTO_AES_NI_INTEL
-    - x86: CONFIG_CRYPTO_POLYVAL_CLMUL_NI
 
 - Adiantum
   - Mandatory:
diff --git a/Documentation/filesystems/iomap/operations.rst b/Documentation/filesystems/iomap/operations.rst
index 387fd9cc72ca..da982ca7e413 100644
--- a/Documentation/filesystems/iomap/operations.rst
+++ b/Documentation/filesystems/iomap/operations.rst
@@ -135,6 +135,27 @@ These ``struct kiocb`` flags are significant for buffered I/O with iomap:
 
  * ``IOCB_DONTCACHE``: Turns on ``IOMAP_DONTCACHE``.
 
+``struct iomap_read_ops``
+-------------------------
+
+.. code-block:: c
+
+ struct iomap_read_ops {
+     int (*read_folio_range)(const struct iomap_iter *iter,
+             struct iomap_read_folio_ctx *ctx, size_t len);
+     void (*submit_read)(struct iomap_read_folio_ctx *ctx);
+ };
+
+iomap calls these functions:
+
+ - ``read_folio_range``: Called to read in the range. This must be provided
+   by the caller. If this succeeds, iomap_finish_folio_read() must be called
+   after the range is read in, regardless of whether the read succeeded or
+   failed.
+
+ - ``submit_read``: Submit any pending read requests. This function is
+   optional.
+
 Internal per-Folio State
 ------------------------
@@ -182,6 +203,28 @@ The ``flags`` argument to ``->iomap_begin`` will be set to zero.
 The pagecache takes whatever locks it needs before calling the
 filesystem.
 
+Both ``iomap_readahead`` and ``iomap_read_folio`` pass in a ``struct
+iomap_read_folio_ctx``:
+
+.. code-block:: c
+
+ struct iomap_read_folio_ctx {
+     const struct iomap_read_ops *ops;
+     struct folio *cur_folio;
+     struct readahead_control *rac;
+     void *read_ctx;
+ };
+
+``iomap_readahead`` must set:
+ * ``ops->read_folio_range()`` and ``rac``
+
+``iomap_read_folio`` must set:
+ * ``ops->read_folio_range()`` and ``cur_folio``
+
+``ops->submit_read()`` and ``read_ctx`` are optional. ``read_ctx`` is used to
+pass in any custom data the caller needs accessible in the ops callbacks for
+fulfilling reads.
+
 Buffered Writes
 ---------------
@@ -317,6 +360,9 @@ The fields are as follows:
    delalloc reservations to avoid having delalloc reservations for
    clean pagecache. This function must be supplied by the filesystem.
+   If this succeeds, iomap_finish_folio_write() must be called once writeback
+   completes for the range, regardless of whether the writeback succeeded or
+   failed.
 
  - ``writeback_submit``: Submit the previous built writeback context.
    Block based file systems should use the iomap_ioend_writeback_submit
@@ -444,10 +490,6 @@ These ``struct kiocb`` flags are significant for direct I/O with iomap:
    Only meaningful for asynchronous I/O, and only if the entire I/O can
    be issued as a single ``struct bio``.
 
- * ``IOCB_DIO_CALLER_COMP``: Try to run I/O completion from the caller's
-   process context.
-   See ``linux/fs.h`` for more details.
-
 Filesystems should call ``iomap_dio_rw`` from ``->read_iter`` and
 ``->write_iter``, and set ``FMODE_CAN_ODIRECT`` in the ``->open``
 function for the file.
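
To make the contract above concrete, here is a hedged sketch of how a
filesystem might wire up these ops, based only on the description in this
document; ``my_fs_read_block_range()`` and ``my_fs_submit_batched_reads()``
are hypothetical placeholders for a filesystem's real I/O paths, not existing
kernel functions::

    /* Hypothetical glue, shaped after the documented iomap_read_ops. */
    static int my_read_folio_range(const struct iomap_iter *iter,
                                   struct iomap_read_folio_ctx *ctx, size_t len)
    {
            /*
             * Start I/O for "len" bytes of ctx->cur_folio.  If we return
             * success, the completion handler must call
             * iomap_finish_folio_read(), whether the I/O succeeded or not.
             */
            return my_fs_read_block_range(iter, ctx, len);
    }

    static void my_submit_read(struct iomap_read_folio_ctx *ctx)
    {
            /* Optional: flush any reads batched up in ctx->read_ctx. */
            my_fs_submit_batched_reads(ctx->read_ctx);
    }

    static const struct iomap_read_ops my_read_ops = {
            .read_folio_range = my_read_folio_range,
            .submit_read      = my_submit_read,
    };

The same completion discipline applies on the write side: once
``writeback_range`` succeeds, iomap_finish_folio_write() must run at
writeback completion whether the I/O worked or not.
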
diff --git a/Documentation/filesystems/porting.rst b/Documentation/filesystems/porting.rst
index 7233b04668fc..d33429294252 100644
--- a/Documentation/filesystems/porting.rst
+++ b/Documentation/filesystems/porting.rst
@@ -211,7 +211,7 @@ test and set for you. e.g.::
 
 	inode = iget_locked(sb, ino);
-	if (inode->i_state & I_NEW) {
+	if (inode_state_read_once(inode) & I_NEW) {
 		err = read_inode_from_disk(inode);
 		if (err < 0) {
 			iget_failed(inode);
@@ -1309,3 +1309,16 @@ a different length, use
 
 	vfs_parse_fs_qstr(fc, key, &QSTR_LEN(value, len))
 
 instead.
+
+---
+
+**mandatory**
+
+vfs_mkdir() now returns a dentry - the one returned by ->mkdir(). If
+that dentry is different from the dentry passed in, including if it is
+an IS_ERR() dentry pointer, the original dentry is dput().
+
+When vfs_mkdir() returns an error, and so both dputs() the original
+dentry and doesn't provide a replacement, it also unlocks the parent.
+Consequently the return value from vfs_mkdir() can be passed to
+end_creating() and the parent will be unlocked precisely when necessary.
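
A sketch of the calling pattern this note implies; the details are inferred
from the text above rather than from the source, so treat fs/namei.c as the
authoritative reference::

	dentry = vfs_mkdir(idmap, dir, dentry, mode);	/* now returns a dentry */
	err = PTR_ERR_OR_ZERO(dentry);
	/*
	 * Correct for both outcomes: on error vfs_mkdir() has already
	 * dput() the original dentry and unlocked the parent, and
	 * end_creating() unlocks only when that is still needed.
	 */
	end_creating(dentry);
	return err;
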
diff --git a/Documentation/filesystems/ramfs-rootfs-initramfs.rst b/Documentation/filesystems/ramfs-rootfs-initramfs.rst
index fa4f81099cb4..a9d271e171c3 100644
--- a/Documentation/filesystems/ramfs-rootfs-initramfs.rst
+++ b/Documentation/filesystems/ramfs-rootfs-initramfs.rst
@@ -290,11 +290,11 @@ Why cpio rather than tar?
 
 This decision was made back in December, 2001. The discussion started here:
 
-  http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1538.html
+- https://lore.kernel.org/lkml/a03cke$640$1@cesium.transmeta.com/
 
 And spawned a second thread (specifically on tar vs cpio), starting here:
 
-  http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1587.html
+- https://lore.kernel.org/lkml/3C25A06D.7030408@zytor.com/
 
 The quick and dirty summary version (which is no substitute for reading
 the above threads) is:
@@ -310,7 +310,7 @@ the above threads) is:
    either way about the archive format, and there are alternative tools,
    such as:
 
-     http://freecode.com/projects/afio
+     https://linux.die.net/man/1/afio
 
 2) The cpio archive format chosen by the kernel is simpler and cleaner
    (and thus easier to create and parse) than any of the (literally dozens of)
@@ -331,12 +331,12 @@ the above threads) is:
 5) Al Viro made the decision (quote: "tar is ugly as hell and not going
    to be supported on the kernel side"):
 
-     http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1540.html
+   - https://lore.kernel.org/lkml/Pine.GSO.4.21.0112222109050.21702-100000@weyl.math.psu.edu/
 
    explained his reasoning:
 
-     - http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1550.html
-     - http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1638.html
+   - https://lore.kernel.org/lkml/Pine.GSO.4.21.0112222240530.21702-100000@weyl.math.psu.edu/
+   - https://lore.kernel.org/lkml/Pine.GSO.4.21.0112230849550.23300-100000@weyl.math.psu.edu/
 
    and, most importantly, designed and implemented the initramfs code.
diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
index b7f35b07876a..8c8ce678148a 100644
--- a/Documentation/filesystems/resctrl.rst
+++ b/Documentation/filesystems/resctrl.rst
@@ -17,17 +17,18 @@ AMD refers to this feature as AMD Platform Quality of Service(AMD QoS).
 
 This feature is enabled by the CONFIG_X86_CPU_RESCTRL and the x86
 /proc/cpuinfo flag bits:
 
-=============================================== ================================
-RDT (Resource Director Technology) Allocation   "rdt_a"
-CAT (Cache Allocation Technology)               "cat_l3", "cat_l2"
-CDP (Code and Data Prioritization)              "cdp_l3", "cdp_l2"
-CQM (Cache QoS Monitoring)                      "cqm_llc", "cqm_occup_llc"
-MBM (Memory Bandwidth Monitoring)               "cqm_mbm_total", "cqm_mbm_local"
-MBA (Memory Bandwidth Allocation)               "mba"
-SMBA (Slow Memory Bandwidth Allocation)         ""
-BMEC (Bandwidth Monitoring Event Configuration) ""
-ABMC (Assignable Bandwidth Monitoring Counters) ""
-=============================================== ================================
+=============================================================== ================================
+RDT (Resource Director Technology) Allocation                   "rdt_a"
+CAT (Cache Allocation Technology)                               "cat_l3", "cat_l2"
+CDP (Code and Data Prioritization)                              "cdp_l3", "cdp_l2"
+CQM (Cache QoS Monitoring)                                      "cqm_llc", "cqm_occup_llc"
+MBM (Memory Bandwidth Monitoring)                               "cqm_mbm_total", "cqm_mbm_local"
+MBA (Memory Bandwidth Allocation)                               "mba"
+SMBA (Slow Memory Bandwidth Allocation)                         ""
+BMEC (Bandwidth Monitoring Event Configuration)                 ""
+ABMC (Assignable Bandwidth Monitoring Counters)                 ""
+SDCIAE (Smart Data Cache Injection Allocation Enforcement)      ""
+=============================================================== ================================
 
 Historically, new features were made visible by default in /proc/cpuinfo.
 This resulted in the feature flags becoming hard to parse by humans. Adding a new
@@ -72,6 +73,11 @@ The 'info' directory contains information about the enabled resources.
 Each resource has its own subdirectory. The subdirectory names reflect the
 resource names.
 
+Most of the files in the resource's subdirectory are read-only, and
+describe properties of the resource. Resources that support global
+configuration options also include writable files that can be used
+to modify those settings.
+
 Each subdirectory contains the following files with respect to
 allocation:
@@ -90,12 +96,19 @@ related to allocation:
 	must be set when writing a mask.
 
 "shareable_bits":
-	Bitmask of shareable resource with other executing
-	entities (e.g. I/O). User can use this when
-	setting up exclusive cache partitions. Note that
-	some platforms support devices that have their
-	own settings for cache use which can over-ride
-	these bits.
+	Bitmask of shareable resource with other executing entities
+	(e.g. I/O). Applies to all instances of this resource. User
+	can use this when setting up exclusive cache partitions.
+	Note that some platforms support devices that have their
+	own settings for cache use which can over-ride these bits.
+
+	When "io_alloc" is enabled, a portion of each cache instance can
+	be configured for shared use between hardware and software.
+	"bit_usage" should be used to see which portions of each cache
+	instance are configured for hardware use via the "io_alloc" feature,
+	because every cache instance can have its "io_alloc" bitmask
+	configured independently via "io_alloc_cbm".
 
 "bit_usage":
 	Annotated capacity bitmasks showing how all
 	instances of the resource are used. The legend is:
@@ -109,16 +122,16 @@ related to allocation:
 	"H":
 	      Corresponding region is used by hardware only
 	      but available for software use. If a resource
-	      has bits set in "shareable_bits" but not all
-	      of these bits appear in the resource groups'
-	      schematas then the bits appearing in
-	      "shareable_bits" but no resource group will
-	      be marked as "H".
+	      has bits set in "shareable_bits" or "io_alloc_cbm"
+	      but not all of these bits appear in the resource
+	      groups' schemata then the bits appearing in
+	      "shareable_bits" or "io_alloc_cbm" but no
+	      resource group will be marked as "H".
 	"X":
 	      Corresponding region is available for sharing and
-	      used by hardware and software. These are the
-	      bits that appear in "shareable_bits" as
-	      well as a resource group's allocation.
+	      used by hardware and software. These are the bits
+	      that appear in "shareable_bits" or "io_alloc_cbm"
+	      as well as a resource group's allocation.
 	"S":
 	      Corresponding region is used by software and
 	      available for sharing.
@@ -136,6 +149,77 @@ related to allocation:
 		"1":
 		      Non-contiguous 1s value in CBM is supported.
 
+"io_alloc":
+	"io_alloc" enables system software to configure the portion of
+	the cache allocated for I/O traffic. This file may only exist if
+	the system supports this feature on some of its cache resources.
+
+	"disabled":
+		Resource supports "io_alloc" but the feature is disabled.
+		Portions of cache used for allocation of I/O traffic cannot
+		be configured.
+	"enabled":
+		Portions of cache used for allocation of I/O traffic
+		can be configured using "io_alloc_cbm".
+	"not supported":
+		Support not available for this resource.
+
+	The feature can be modified by writing to the interface, for example:
+
+	To enable::
+
+		# echo 1 > /sys/fs/resctrl/info/L3/io_alloc
+
+	To disable::
+
+		# echo 0 > /sys/fs/resctrl/info/L3/io_alloc
+
+	The underlying implementation may reduce resources available to
+	general (CPU) cache allocation. See architecture specific notes
+	below. Depending on usage requirements, the feature can be enabled
+	or disabled.
+
+	On AMD systems, the io_alloc feature is supported by the L3 Smart
+	Data Cache Injection Allocation Enforcement (SDCIAE). The CLOSID for
+	io_alloc is the highest CLOSID supported by the resource. When
+	io_alloc is enabled, the highest CLOSID is dedicated to io_alloc and
+	no longer available for general (CPU) cache allocation. When CDP is
+	enabled, io_alloc routes I/O traffic using the highest CLOSID allocated
+	for the instruction cache (CDP_CODE), making this CLOSID no longer
+	available for general (CPU) cache allocation for both the CDP_CODE
+	and CDP_DATA resources.
+
+"io_alloc_cbm":
+	Capacity bitmasks that describe the portions of cache instances to
+	which I/O traffic from supported I/O devices is routed when "io_alloc"
+	is enabled.
+
+	CBMs are displayed in the following format:
+
+		<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
+
+	Example::
+
+		# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
+		0=ffff;1=ffff
+
+	CBMs can be configured by writing to the interface.
+
+	Example::
+
+		# echo 1=ff > /sys/fs/resctrl/info/L3/io_alloc_cbm
+		# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
+		0=ffff;1=00ff
+
+		# echo "0=ff;1=f" > /sys/fs/resctrl/info/L3/io_alloc_cbm
+		# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
+		0=00ff;1=000f
+
+	When CDP is enabled, "io_alloc_cbm" associated with the CDP_DATA and
+	CDP_CODE resources may reflect the same values. For example, values
+	read from and written to /sys/fs/resctrl/info/L3DATA/io_alloc_cbm may
+	be reflected by /sys/fs/resctrl/info/L3CODE/io_alloc_cbm and vice
+	versa.
+
 Memory bandwidth(MB) subdirectory contains the following files with respect
 to allocation:
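
The ``<cache_id>=<cbm>`` list format used by "io_alloc_cbm" (and resctrl
schemata generally) is simple to consume programmatically. A minimal
user-space sketch, assuming only the format shown in the examples above::

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[] = "0=ffff;1=00ff";   /* as read from io_alloc_cbm */
            char *save = NULL;

            for (char *tok = strtok_r(line, ";", &save); tok;
                 tok = strtok_r(NULL, ";", &save)) {
                    unsigned int id;
                    unsigned long cbm;

                    /* Each token is "<decimal cache id>=<hex bitmask>". */
                    if (sscanf(tok, "%u=%lx", &id, &cbm) == 2)
                            printf("cache %u: cbm 0x%lx\n", id, cbm);
            }
            return 0;
    }

Writing configurations back uses the same format, as the echo examples
above show.
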
diff --git a/Documentation/filesystems/xfs/xfs-online-fsck-design.rst b/Documentation/filesystems/xfs/xfs-online-fsck-design.rst
index 8cbcd3c26434..55e727b5f12e 100644
--- a/Documentation/filesystems/xfs/xfs-online-fsck-design.rst
+++ b/Documentation/filesystems/xfs/xfs-online-fsck-design.rst
@@ -249,7 +249,7 @@ sharing and lock acquisition rules as the regular filesystem.
 This means that scrub cannot take *any* shortcuts to save time, because
 doing so could lead to concurrency problems.
 In other words, online fsck is not a complete replacement for offline fsck,
-and a complete run of online fsck may take longer than online fsck.
+and a complete run of online fsck may take longer than offline fsck.
 However, both of these limitations are acceptable tradeoffs to satisfy the
 different motivations of online fsck, which are to **minimize system
 downtime** and to **increase predictability of operation**.
