path: root/builtin
Age | Commit message | Author
25 hours | Merge branch 'kn/maintenance-is-needed' | Junio C Hamano
"git maintenance" command learned "is-needed" subcommand to tell if it is necessary to perform various maintenance tasks. * kn/maintenance-is-needed: maintenance: add 'is-needed' subcommand maintenance: add checking logic in `pack_refs_condition()` refs: add a `optimize_required` field to `struct ref_storage_be` reftable/stack: add function to check if optimization is required reftable/stack: return stack segments directly
3 days | Merge branch 'ps/ref-peeled-tags' | Junio C Hamano
Some ref backend storage can hold not just the object name of an annotated tag, but the object name of the object the tag points at. The code to handle this information has been streamlined.

* ps/ref-peeled-tags:
  t7004: do not chdir around in the main process
  ref-filter: fix stale parsed objects
  ref-filter: parse objects on demand
  ref-filter: detect broken tags when dereferencing them
  refs: don't store peeled object IDs for invalid tags
  object: add flag to `peel_object()` to verify object type
  refs: drop infrastructure to peel via iterators
  refs: drop `current_ref_iter` hack
  builtin/show-ref: convert to use `reference_get_peeled_oid()`
  ref-filter: propagate peeled object ID
  upload-pack: convert to use `reference_get_peeled_oid()`
  refs: expose peeled object ID via the iterator
  refs: refactor reference status flags
  refs: fully reset `struct ref_iterator::ref` on iteration
  refs: introduce `.ref` field for the base iterator
  refs: introduce wrapper struct for `each_ref_fn`
3 days | Merge branch 'ps/packed-git-in-object-store' | Junio C Hamano
The list of packfiles used in a running Git process is moved from the packed_git structure into the packfile store.

* ps/packed-git-in-object-store:
  packfile: track packs via the MRU list exclusively
  packfile: always add packfiles to MRU when adding a pack
  packfile: move list of packs into the packfile store
  builtin/pack-objects: simplify logic to find kept or nonlocal objects
  packfile: fix approximation of object counts
  http: refactor subsystem to use `packfile_list`s
  packfile: move the MRU list into the packfile store
  packfile: use a `strmap` to store packs by name
10 days | Merge branch 'tc/last-modified-active-paths-optimization' | Junio C Hamano
"git last-modified" was optimized by narrowing the set of paths to follow as it dug deeper in the history. * tc/last-modified-active-paths-optimization: last-modified: implement faster algorithm
12 days | maintenance: add 'is-needed' subcommand | Karthik Nayak
The 'git-maintenance(1)' command provides tooling to run maintenance tasks over Git repositories. The 'run' subcommand, as the name suggests, runs the maintenance tasks. When used with the '--auto' flag, it uses heuristics to determine if the required thresholds are met for running said maintenance tasks.

There is, however, a lack of insight into these heuristics, since the checks are tied to the execution.

Add a new 'is-needed' subcommand to 'git-maintenance(1)' which allows users to check whether maintenance is needed without actually performing it. Ideally it should be used with the '--auto' flag, which lets users check whether the required thresholds are met. The subcommand also supports the '--task' flag, which can be used to check specific maintenance tasks.

While adding the respective tests in 't/t7900-maintenance.sh', remove a duplicate of the test 'worktree-prune task with --auto honors maintenance.worktree-prune.auto'.

Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
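A minimal usage sketch of the flags described above ('pack-refs' is just one example task; the exact output and exit-code semantics are not shown here):

  $ git maintenance is-needed --auto
  $ git maintenance is-needed --task=pack-refs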
12 days | maintenance: add checking logic in `pack_refs_condition()` | Karthik Nayak
The 'git-maintenance(1)' command supports an '--auto' flag. Usage of the flag ensures maintenance tasks are run only if certain thresholds are met. The heuristic is defined at the task level, wherein each task defines an 'auto_condition' which states whether the task should be run.

The 'pack-refs' task is hard-coded to return 1 because:

  1. There was never a way to check if the reference backend needs to be optimized without actually performing the optimization.
  2. We can pass the '--auto' flag to 'git-pack-refs(1)', which would optimize based on heuristics.

The previous commit added a `refs_optimize_required()` function, which can be used to check if a reference backend requires optimization. Use this within `pack_refs_condition()`. This allows us to add a 'git maintenance is-needed' subcommand which can notify the user if maintenance is needed without actually performing the optimization. Without this change, the reference backend would always state that optimization is needed.

Since we import 'revision.h', we need to remove the definition for 'SEEN' which is duplicated in the included header.

Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-11-06 | Merge branch 'cc/fast-import-export-i18n-cleanup' | Junio C Hamano
Messages from fast-import/export are now marked for i18n.

* cc/fast-import-export-i18n-cleanup:
  gpg-interface: mark a string for translation
  fast-import: mark strings for translation
  fast-export: mark strings for translation
  gpg-interface: use left shift to define GPG_VERIFY_*
  gpg-interface: simplify ssh fingerprint parsing
2025-11-05 | Merge branch 'rz/t0450-bisect-doc-update' | Junio C Hamano
The help text and manual page of "git bisect" command have been made consistent with each other.

* rz/t0450-bisect-doc-update:
  bisect: update usage and docs to match each other
2025-11-04 | Merge branch 'jt/repo-structure' | Junio C Hamano
"git repo structure", a new command. * jt/repo-structure: builtin/repo: add progress meter for structure stats builtin/repo: add keyvalue and nul format for structure stats builtin/repo: add object counts in structure output builtin/repo: introduce structure subcommand ref-filter: export ref_kind_from_refname() ref-filter: allow NULL filter pattern builtin/repo: rename repo_info() to cmd_repo_info()
2025-11-04 | Merge branch 'ps/ref-peeled-tags' into kn/maintenance-is-needed | Junio C Hamano
* ps/ref-peeled-tags: (23 commits)
  t7004: do not chdir around in the main process
  ref-filter: fix stale parsed objects
  ref-filter: parse objects on demand
  ref-filter: detect broken tags when dereferencing them
  refs: don't store peeled object IDs for invalid tags
  object: add flag to `peel_object()` to verify object type
  refs: drop infrastructure to peel via iterators
  refs: drop `current_ref_iter` hack
  builtin/show-ref: convert to use `reference_get_peeled_oid()`
  ref-filter: propagate peeled object ID
  upload-pack: convert to use `reference_get_peeled_oid()`
  refs: expose peeled object ID via the iterator
  refs: refactor reference status flags
  refs: fully reset `struct ref_iterator::ref` on iteration
  refs: introduce `.ref` field for the base iterator
  refs: introduce wrapper struct for `each_ref_fn`
  builtin/repo: add progress meter for structure stats
  builtin/repo: add keyvalue and nul format for structure stats
  builtin/repo: add object counts in structure output
  builtin/repo: introduce structure subcommand
  ...
2025-11-04 | builtin/show-ref: convert to use `reference_get_peeled_oid()` | Patrick Steinhardt
The git-show-ref(1) command has multiple different modes:

  - It knows to show all references matching a pattern.
  - It knows to list all references that are an exact match to whatever the user has provided.
  - It knows to check for reference existence.

The first two commands use mostly the same infrastructure to print the references via `show_one()`. But while the former mode uses a proper iterator and thus has a `struct reference` available in its context, the latter calls `refs_read_ref()` and thus doesn't. Consequently, we cannot easily use `reference_get_peeled_oid()` to print the peeled value.

Adapt the code so that we manually construct a `struct reference` when verifying refs. We wouldn't ever have the peeled value available anyway as we're not using an iterator here, so we can simply plug in the values we _do_ have.

With this change we now have a `struct reference` available at both callsites of `show_one()` and can thus pass it, which allows us to use `reference_get_peeled_oid()` instead of `peel_iterated_oid()`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-11-04 | ref-filter: propagate peeled object ID | Patrick Steinhardt
When queueing a reference in the "ref-filter" subsystem we end up creating a new ref array item that contains the reference's info. One bit of info that we always discard though is the peeled object ID, and because of that we are forced to use `peel_iterated_oid()`.

Refactor the code to propagate the peeled object ID via the ref array, if available. This allows us to manually peel tags without having to go through the object database.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-11-04 | refs: expose peeled object ID via the iterator | Patrick Steinhardt
Both the "files" and "reftable" backend are able to store peeled values for tags in the respective formats. This allows for a more efficient lookup of the target object of such a tag without having to manually peel via the object database. The infrastructure to access these peeled object IDs is somewhat funky though. When iterating through objects, we store a pointer reference to the current iterator in a global variable. The callbacks invoked by that iterator are then expected to call `peel_iterated_oid()`, which checks whether the globally-stored iterator's current reference refers to the one handed into that function. If so, we ask the iterator to peel the object, otherwise we manually peel the object via the object database. Depending on global state like this is somewhat weird and also quite fragile. Introduce a new `struct reference::peeled_oid` field that can be populated by the reference backends. This field can be accessed via a new function `reference_get_peeled_oid()` that either uses that value, if set, or alternatively peels via the ODB. With this change we don't have to rely on global state anymore, but make the peeled object ID available to the callback functions directly. Adjust trivial callers that already have a `struct reference` available. Remaining callers will be adjusted in subsequent commits. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-11-04 | refs: introduce wrapper struct for `each_ref_fn` | Patrick Steinhardt
The `each_ref_fn` callback function type is used across our code base for several different functions that iterate through references. There's a bunch of callbacks implementing this type, which makes any changes to the callback signature extremely noisy. An example of the required churn is e8207717f1 (refs: add referent to each_ref_fn, 2024-08-09): adding a single argument required us to change 48 files.

It was already proposed back then [1] that we might want to introduce a wrapper structure to alleviate the pain going forward. While this of course requires the same kind of global refactoring as just introducing a new parameter, it at least allows us to more easily change the callback type afterwards by just extending the wrapper structure.

One counterargument to this refactoring is that it makes the structure more opaque. While it is obvious which callsites need to be fixed up when we change the function type, it's not obvious anymore once we use a structure. That being said, we only have a handful of sites that actually need to populate this wrapper structure: our ref backends, "refs/iterator.c", as well as very few sites that invoke the iterator callback functions directly.

Introduce this wrapper structure so that we can adapt the iterator interfaces more readily.

[1]: <ZmarVcF5JjsZx0dl@tanuki>

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-11-03 | last-modified: implement faster algorithm | Toon Claes
The current implementation of git-last-modified(1) works by doing a revision walk, and inspecting the diff at each level of that walk to annotate entries remaining in the hashmap of paths. In other words, if the diff at some level touches a path which has not yet been associated with a commit, then that commit becomes associated with the path.

While a perfectly reasonable implementation, it can perform poorly in either one of two scenarios:

  1. There are many entries of interest, in which case there is simply a lot of work to do.

  2. Or, there are (even a few) entries which have not been updated in a long time, and so we must walk through a lot of history in order to find a commit that touches that path.

This patch rewrites the last-modified implementation to address the second point. The idea behind the algorithm is to propagate a set of 'active' paths (a path is 'active' if it does not yet belong to a commit) up to parents and do a truncated revision walk. The walk is truncated because it does not produce a revision for every change in the original pathspec, but rather only for active paths.

More specifically, consider a priority queue of commits sorted by generation number. First, enqueue the set of boundary commits with all paths in the original spec marked as interesting. Then, while the queue is not empty, do the following:

  1. Pop an element, say, 'c', off of the queue, making sure that 'c' isn't reachable by anything in the '--not' set.

  2. For each parent 'p' (with index 'parent_i') of 'c', do the following:

     a. Compute the diff between 'c' and 'p'.
     b. Pass any active paths that are TREESAME from 'c' to 'p'.
     c. If 'p' has any active paths, push it onto the queue.

  3. Any path that remains active on 'c' is associated to that commit.

This ends up being equivalent to doing something like 'git log -1 -- $path' for each path simultaneously. But, it allows us to go much faster than the original implementation by limiting the number of diffs we compute, since we can avoid parts of history that would have been considered by the revision walk in the original implementation, but are known to be uninteresting to us because we have already marked all paths in that area to be inactive.

To avoid computing many first-parent diffs, add another trick on top of this and check if all paths active in 'c' are DEFINITELY NOT in c's Bloom filter. Since the commit-graph only stores first-parent diffs in the Bloom filters, we can only apply this trick to first-parent diffs.

Comparing the performance of this new algorithm shows about a 2.5x improvement on git.git:

  Benchmark 1: master no bloom
    Time (mean ± σ):     2.868 s ± 0.023 s    [User: 2.811 s, System: 0.051 s]
    Range (min … max):   2.847 s … 2.926 s    10 runs

  Benchmark 2: master with bloom
    Time (mean ± σ):     949.9 ms ± 15.2 ms    [User: 907.6 ms, System: 39.5 ms]
    Range (min … max):   933.3 ms … 971.2 ms    10 runs

  Benchmark 3: HEAD no bloom
    Time (mean ± σ):     782.0 ms ± 6.3 ms    [User: 740.7 ms, System: 39.2 ms]
    Range (min … max):   776.4 ms … 798.2 ms    10 runs

  Benchmark 4: HEAD with bloom
    Time (mean ± σ):     307.1 ms ± 1.7 ms    [User: 276.4 ms, System: 29.9 ms]
    Range (min … max):   303.7 ms … 309.5 ms    10 runs

  Summary
    HEAD with bloom ran
      2.55 ± 0.02 times faster than HEAD no bloom
      3.09 ± 0.05 times faster than master with bloom
      9.34 ± 0.09 times faster than master no bloom

In short, the existing implementation is comparably fast *with* Bloom filters as the new implementation is *without* Bloom filters. So, most repositories should get a dramatic speed-up by just deploying this (even without computing Bloom filters), and all repositories should get faster still when computing Bloom filters.

When comparing a more extreme example of `git last-modified -- COPYING t`, the difference is even bigger, about five times:

  Benchmark 1: master
    Time (mean ± σ):     4.372 s ± 0.057 s    [User: 4.286 s, System: 0.062 s]
    Range (min … max):   4.308 s … 4.509 s    10 runs

  Benchmark 2: HEAD
    Time (mean ± σ):     826.3 ms ± 22.3 ms    [User: 784.1 ms, System: 39.2 ms]
    Range (min … max):   810.6 ms … 881.2 ms    10 runs

  Summary
    HEAD ran
      5.29 ± 0.16 times faster than master

As an added benefit, results are more consistent now. For example, the implementation in 'master' gives:

  $ git log --max-count=1 --format=%H -- pkt-line.h
  15df15fe07ef66b51302bb77e393f3c5502629de
  $ git last-modified -- pkt-line.h
  15df15fe07ef66b51302bb77e393f3c5502629de pkt-line.h
  $ git last-modified | grep pkt-line.h
  5b49c1af03e600c286f63d9d9c9fb01403230b9f pkt-line.h

With the changes in this patch, the results of git-last-modified(1) always match those of `git log --max-count=1`.

One thing to note, though: the results might be output in a different order than before. This is not considered to be an issue because the output order is nowhere documented as guaranteed.

Based-on-patches-by: Derrick Stolee <stolee@gmail.com>
Based-on-patches-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Toon Claes <toon@iotcl.com>
Acked-by: Taylor Blau <me@ttaylorr.com>
[jc: tweaked use of xcalloc() to unbreak coccicheck]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-11-03 | Merge branch 'ps/maintenance-geometric' | Junio C Hamano
"git maintenance" command learns the "geometric" strategy where it avoids doing maintenance tasks that rebuilds everything from scratch. * ps/maintenance-geometric: t7900: fix a flaky test due to git-repack always regenerating MIDX builtin/maintenance: introduce "geometric" strategy builtin/maintenance: make "gc" strategy accessible builtin/maintenance: extend "maintenance.strategy" to manual maintenance builtin/maintenance: run maintenance tasks depending on type builtin/maintenance: improve readability of strategies builtin/maintenance: don't silently ignore invalid strategy builtin/maintenance: make the geometric factor configurable builtin/maintenance: introduce "geometric-repack" task builtin/gc: make `too_many_loose_objects()` reusable without GC config builtin/gc: remove global `repack` variable
2025-10-30 | Merge branch 'rz/bisect-help-unknown' | Junio C Hamano
"git bisect" command did not react correctly to "git bisect help" and "git bisect unknown", which has been corrected. * rz/bisect-help-unknown: bisect: fix handling of `help` and invalid subcommands
2025-10-30 | Merge branch 'ps/remove-packfile-store-get-packs' | Junio C Hamano
Two slightly different ways to get at "all the packfiles" in the API have been cleaned up.

* ps/remove-packfile-store-get-packs:
  packfile: rename `packfile_store_get_all_packs()`
  packfile: introduce macro to iterate through packs
  packfile: drop `packfile_store_get_packs()`
  builtin/grep: simplify how we preload packs
  builtin/gc: convert to use `packfile_store_get_all_packs()`
  object-name: convert to use `packfile_store_get_all_packs()`
2025-10-30 | Merge branch 'ey/commit-graph-changed-paths-config' | Junio C Hamano
A new configuration variable commitGraph.changedPaths allows turning "--changed-paths" on by default for "git commit-graph".

* ey/commit-graph-changed-paths-config:
  commit-graph: add new config for changed-paths & recommend it in scalar
2025-10-30 | packfile: track packs via the MRU list exclusively | Patrick Steinhardt
We track packfiles via two different lists:

  - `struct packfile_store::packs` is a list that sorts local packs first. In addition, these packs are sorted so that younger packs are sorted towards the front.

  - `struct packfile_store::mru` is a list that sorts packs so that most-recently used packs are at the front.

The reasoning behind the ordering in the `packs` list is that younger objects stored in the local object store tend to be accessed more frequently, and that is certainly true for some cases. But there are going to be lots of cases where that isn't true. Especially when traversing history it is likely that one needs to access many older objects, and due to our housekeeping it is very likely that almost all of those older objects will be contained in one large pack that is oldest.

So whether or not the ordering makes sense really depends on the use case at hand. A flexible approach like our MRU list addresses that need, as it will sort packs towards the front that are accessed all the time. Intuitively, this approach is thus able to satisfy more use cases more efficiently.

This reasoning casts some doubt on whether or not it really makes sense to track packs via two different lists. It causes confusion, and it is not clear whether there are use cases where the `packs` list really is such an obvious choice. Merge these two lists into one most-recently-used list.

Note that there is one important edge case: `for_each_packed_object()` uses the MRU list to iterate through packs, and then it lists each object in those packs. This would have the effect that we now sort the current pack towards the front, thus modifying the list of packfiles we are iterating over, with the consequence that we'll see an infinite loop. This edge case is worked around by introducing a new field that allows us to skip updating the MRU.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-30 | packfile: move list of packs into the packfile store | Patrick Steinhardt
Move the list of packs into the packfile store. This follows the same logic as in a previous commit, where we moved the most-recently-used list of packs, as well.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-30 | builtin/pack-objects: simplify logic to find kept or nonlocal objects | Patrick Steinhardt
The function `has_sha1_pack_kept_or_nonlocal()` takes an object ID and then searches through packed objects to figure out whether the object exists in a kept or non-local pack. As a performance optimization we remember the packfile that contains a given object ID so that the next call to the function first checks that same packfile again.

The way this is written is rather hard to follow though, as the caching mechanism is intertwined with the loop that iterates through the packs. Consequently, we need to do some gymnastics to re-start the iteration if the cached pack does not contain the object.

Refactor this so that we check the cached packfile at the beginning. We don't have to re-verify whether the packfile meets the properties as we have already verified those when storing the pack in `last_found` in the first place. So all we need to do is to use `find_pack_entry_one()` to check whether the pack contains the object ID, and to skip the cached pack in the loop so that we don't search it twice.

Furthermore, stop using the `(void *)1` sentinel value and instead use a simple `NULL` pointer to indicate that we don't have a last-found pack yet.

This refactoring significantly simplifies the logic and makes it much easier to follow.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-30 | packfile: move the MRU list into the packfile store | Patrick Steinhardt
Packfiles have two lists associated with them:

  - A list that keeps track of packfiles in the order that they were added to a packfile store.

  - A list that keeps track of packfiles in most-recently-used order so that packfiles that are more likely to contain a specific object are ordered towards the front.

Both of these lists are hosted by `struct packed_git` itself, so to identify all packfiles in a repository you simply need to grab the first packfile and then iterate the `->next` pointers or the MRU list.

This pattern has the problem that all packfiles are part of the same list, regardless of whether or not they belong to the same object source. With the upcoming pluggable object database effort this needs to change: packfiles should be contained by a single object source, and reading an object from any such packfile should use that source to look up the object. Consequently, we need to break up the global lists of packfiles into per-object-source lists.

A first step towards this goal is to move those lists out of `struct packed_git` and into the packfile store. While the packfile store is currently sitting on the `struct object_database` level, the intent is to push it down one level into the `struct odb_source` in a subsequent patch series.

Introduce a new `struct packfile_list` that is used to manage lists of packfiles and use it to store the list of most-recently-used packfiles in `struct packfile_store`. For now, the new list type is only used in a single spot, but we'll expand its usage in subsequent patches.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-30 | fast-import: mark strings for translation | Christian Couder
Some error or warning messages in "builtin/fast-import.c" are marked for translation, but many are not. To be more consistent and provide a better experience to people using a translated version, let's mark all the remaining error or warning messages for translation. While at it, let's make the following small changes: - replace "GIT" or "git" in a few error messages to just "Git", - replace "Expected from command, got %s" to "expected 'from' command, got '%s'", which makes it clearer that "from" is a command and should not be translated, - downcase error and warning messages that start with an uppercase, - fix test cases in "t9300-fast-import.sh" that broke because an error or warning message was downcased, - split error and warning messages that are too long, - adjust the indentation of some arguments of the error functions. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-30 | fast-export: mark strings for translation | Christian Couder
Some error or warning messages in "builtin/fast-export.c" are marked for translation, but many are not. To be more consistent and provide a better experience to people using a translated version, let's mark all the remaining error or warning messages for translation. While at it: - improve how some arguments to some error functions are indented, - remove "Error:" at the start of an error message, - downcase error and warning messages that start with an uppercase. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-29 | Merge branch 'tb/incremental-midx-part-3.1' | Junio C Hamano
Clean-up "git repack" machinery to prepare for incremental update of midx files. * tb/incremental-midx-part-3.1: (49 commits) builtin/repack.c: clean up unused `#include`s repack: move `write_cruft_pack()` out of the builtin repack: move `write_filtered_pack()` out of the builtin repack: move `pack_kept_objects` to `struct pack_objects_args` repack: move `finish_pack_objects_cmd()` out of the builtin builtin/repack.c: pass `write_pack_opts` to `finish_pack_objects_cmd()` repack: extract `write_pack_opts_is_local()` repack: move `find_pack_prefix()` out of the builtin builtin/repack.c: use `write_pack_opts` within `write_cruft_pack()` builtin/repack.c: introduce `struct write_pack_opts` repack: 'write_midx_included_packs' API from the builtin builtin/repack.c: inline packs within `write_midx_included_packs()` builtin/repack.c: pass `repack_write_midx_opts` to `midx_included_packs` builtin/repack.c: inline `remove_redundant_bitmaps()` builtin/repack.c: reorder `remove_redundant_bitmaps()` repack: keep track of MIDX pack names using existing_packs builtin/repack.c: use a string_list for 'midx_pack_names' builtin/repack.c: extract opts struct for 'write_midx_included_packs()' builtin/repack.c: remove ref snapshotting from builtin repack: remove pack_geometry API from the builtin ...
2025-10-28 | bisect: update usage and docs to match each other | Ruoyu Zhong
Update the usage string of `git bisect` and documentation to match each other. While at it, also:

  1. Move the synopsis of `git bisect` subcommands to the synopsis section, so that the test `t0450-txt-doc-vs-help.sh` can pass.

  2. Document the `git bisect next` subcommand, which exists in the code but is missing from the documentation.

See also: [1].

[1]: https://lore.kernel.org/git/3DA38465-7636-4EEF-B074-53E4628F5355@gmail.com/

Suggested-by: Ben Knoble <ben.knoble@gmail.com>
Signed-off-by: Ruoyu Zhong <zhongruoyu@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-28 | Merge branch 'cc/fast-import-strip-signed-tags' | Junio C Hamano
"git fast-import" is taught to handle signed tags, just like it recently learned to handle signed commits, in different ways. * cc/fast-import-strip-signed-tags: fast-import: add '--signed-tags=<mode>' option fast-export: handle all kinds of tag signatures t9350: properly count annotated tags lib-gpg: allow tests with GPGSM or GPGSSH prereq first doc: git-tag: stop focusing on GPG signed tags
2025-10-28 | Merge branch 'ds/sparse-checkout-clean' | Junio C Hamano
"git sparse-checkout" subcommand learned a new "clean" action to prune otherwise unused working-tree files that are outside the areas of interest. * ds/sparse-checkout-clean: sparse-index: improve advice message instructions t: expand tests around sparse merges and clean sparse-index: point users to new 'clean' action sparse-checkout: add --verbose option to 'clean' dir: add generic "walk all files" helper sparse-checkout: match some 'clean' behavior sparse-checkout: add basics of 'clean' command sparse-checkout: remove use of the_repository
2025-10-28 | Merge branch 'ps/remove-packfile-store-get-packs' into ps/packed-git-in-object-store | Junio C Hamano

* ps/remove-packfile-store-get-packs: (55 commits)
  packfile: rename `packfile_store_get_all_packs()`
  packfile: introduce macro to iterate through packs
  packfile: drop `packfile_store_get_packs()`
  builtin/grep: simplify how we preload packs
  builtin/gc: convert to use `packfile_store_get_all_packs()`
  object-name: convert to use `packfile_store_get_all_packs()`
  builtin/repack.c: clean up unused `#include`s
  repack: move `write_cruft_pack()` out of the builtin
  repack: move `write_filtered_pack()` out of the builtin
  repack: move `pack_kept_objects` to `struct pack_objects_args`
  repack: move `finish_pack_objects_cmd()` out of the builtin
  builtin/repack.c: pass `write_pack_opts` to `finish_pack_objects_cmd()`
  repack: extract `write_pack_opts_is_local()`
  repack: move `find_pack_prefix()` out of the builtin
  builtin/repack.c: use `write_pack_opts` within `write_cruft_pack()`
  builtin/repack.c: introduce `struct write_pack_opts`
  repack: 'write_midx_included_packs' API from the builtin
  builtin/repack.c: inline packs within `write_midx_included_packs()`
  builtin/repack.c: pass `repack_write_midx_opts` to `midx_included_packs`
  builtin/repack.c: inline `remove_redundant_bitmaps()`
  ...
2025-10-24builtin/maintenance: introduce "geometric" strategyPatrick Steinhardt
We have two different repacking strategies in Git:

  - The "gc" strategy uses git-gc(1).

  - The "incremental" strategy uses multi-pack indices and `git multi-pack-index repack` to merge together smaller packfiles as determined by a specific batch size.

The former strategy is our old and trusted default, whereas the latter has historically been used for our scheduled maintenance. But both strategies have their shortcomings:

  - The "gc" strategy performs regular all-into-one repacks. Furthermore it is rather inflexible, as it is not easily possible for a user to enable or disable specific subtasks.

  - The "incremental" strategy is not a full replacement for the "gc" strategy as it doesn't know to prune stale data.

So today, we don't have a strategy that is well-suited for large repos while being a full replacement for the "gc" strategy. Introduce a new "geometric" strategy that aims to fill this gap. This strategy invokes all the usual cleanup tasks that git-gc(1) does like pruning reflogs and rerere caches as well as stale worktrees. But where it differs from both the "gc" and "incremental" strategy is that it uses our geometric repacking infrastructure exposed by git-repack(1) to repack packfiles. The advantage of geometric repacking is that we only need to perform an all-into-one repack when the object count in a repo has grown significantly.

One downside of this strategy is that pruning of unreferenced objects is not going to happen regularly anymore. Every geometric repack knows to soak up all loose objects regardless of their reachability, and merging two or more packs doesn't consider reachability, either. Consequently, the number of unreachable objects will grow over time.

This is remedied by doing an all-into-one repack instead of a geometric repack whenever we determine that the geometric repack would end up merging all packfiles anyway. This all-into-one repack then performs our usual reachability checks and writes unreachable objects into a cruft pack. As cruft packs won't ever be merged during geometric repacks we can thus phase out these objects over time.

Of course, this still means that we retain unreachable objects for far longer than with the "gc" strategy. But the maintenance strategy is intended especially for large repositories, where the basic assumption is that the set of unreachable objects will be significantly dwarfed by the number of reachable objects. If this assumption is ever proven to be too disadvantageous we could for example introduce a time-based strategy: if the largest packfile has not been touched for longer than $T, we perform an all-into-one repack. But for now, such a mechanism is deferred into the future as it is not clear yet whether it is needed in the first place.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
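A minimal configuration sketch, assuming the strategy is selected through the "maintenance.strategy" knob described in the commits below:

  $ git config maintenance.strategy geometric
  $ git maintenance run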
2025-10-24builtin/maintenance: make "gc" strategy accessiblePatrick Steinhardt
While the user can pick the "incremental" maintenance strategy, it is not possible to explicitly use the "gc" strategy. This has two downsides:

  - It is impossible to use the default "gc" strategy for a specific repository when the strategy was globally set to a different strategy.

  - It is not possible to use git-gc(1) for scheduled maintenance.

Address these issues by making the "gc" strategy configurable. Furthermore, extend the strategy so that git-gc(1) runs for both manual and scheduled maintenance.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/maintenance: extend "maintenance.strategy" to manual maintenance | Patrick Steinhardt
The "maintenance.strategy" configuration allows users to configure how Git is supposed to perform repository maintenance. The idea is that we provide a set of high-level strategies that may be useful in different contexts, like for example when handling a large monorepo. Furthermore, the strategy can be tweaked by the user by overriding specific tasks. In its current form though, the strategy only applies to scheduled maintenance. This creates something of a gap, as scheduled and manual maintenance will now use _different_ strategies as the latter would continue to use git-gc(1) by default. This makes the strategies way less useful than they could be on the one hand. But even more importantly, the two different strategies might clash with one another, where one of the strategies performs maintenance in such a way that it discards benefits from the other strategy. So ideally, it should be possible to pick one strategy that then applies globally to all the different ways that we perform maintenance. This doesn't necessarily mean that the strategy always does the _same_ thing for every maintenance type. But it means that the strategy can configure the different types to work in tandem with each other. Change the meaning of "maintenance.strategy" accordingly so that the strategy is applied to both types, manual and scheduled. As preceding commits have introduced logic to run maintenance tasks depending on this type we can tweak strategies so that they perform those tasks depending on the context. Note that this raises the question of backwards compatibility: when the user has configured the "incremental" strategy we would have ignored that strategy beforehand. Instead, repository maintenance would have continued to use git-gc(1) by default. But luckily, we can match that behaviour by: - Keeping all current tasks of the incremental strategy as `MAINTENANCE_TYPE_SCHEDULED`. This ensures that those tasks will not run during manual maintenance. - Configuring the "gc" task so that it is invoked during manual maintenance. Like this, the user shouldn't observe any difference in behaviour. Signed-off-by: Patrick Steinhardt <ps@pks.im> Acked-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/maintenance: run maintenance tasks depending on type | Patrick Steinhardt
We basically have three different ways to execute repository maintenance:

  1. Manual maintenance via `git maintenance run`.
  2. Automatic maintenance via `git maintenance run --auto`.
  3. Scheduled maintenance via `git maintenance run --schedule=`.

At the moment, maintenance strategies only have an effect for the last type of maintenance. This is about to change in subsequent commits, but to do so we need to be able to skip some tasks depending on how exactly maintenance was invoked.

Introduce a new maintenance type that discerns between manual (1 & 2) and scheduled (3) maintenance. Convert the `enabled` field into a bitset so that it becomes possible to specify which tasks exactly should run in a specific context. The types picked for existing strategies match the status quo:

  - The default strategy is only ever executed as part of a manual maintenance run. It is not possible to use it for scheduled maintenance.

  - The incremental strategy is only ever executed as part of a scheduled maintenance run. It is not possible to use it for manual maintenance.

The strategies will be tweaked in subsequent commits to make use of this new infrastructure.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/maintenance: improve readability of strategies | Patrick Steinhardt
Our maintenance strategies are essentially a large array of structures, where each of the tasks can be enabled and scheduled individually. With the current layout though all the configuration sits on the same nesting layer, which makes it a bit hard to discern which initialized fields belong to what task.

Improve readability of the individual tasks by using nested designated initializers instead.

Suggested-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/maintenance: don't silently ignore invalid strategy | Patrick Steinhardt
When parsing maintenance strategies we completely ignore the user-configured value in case it is unknown to us. This makes it basically undiscoverable to the user that scheduled maintenance is devolving into a no-op.

Change this to instead die when seeing an unknown maintenance strategy. While at it, pull out the parsing logic into a separate function so that we can reuse it in a subsequent commit.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/maintenance: make the geometric factor configurable | Patrick Steinhardt
The geometric repacking task uses a factor of two for its geometric sequence, meaning that each next pack must contain at least twice as many objects as the next-smaller one. In some cases it may be helpful to configure this factor though to reduce the number of packfile merges even further, e.g. in very big repositories.

But while git-repack(1) itself supports doing this, the maintenance task does not give us a way to tune it. Introduce a new "maintenance.geometric-repack.splitFactor" configuration to plug this gap.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
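A configuration sketch; the value '4' is merely an illustrative choice of a larger split factor:

  $ git config maintenance.geometric-repack.splitFactor 4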
2025-10-24builtin/maintenance: introduce "geometric-repack" taskPatrick Steinhardt
Introduce a new "geometric-repack" task. This task uses our geometric repack infrastructure as provided by git-repack(1) itself, which is a strategy that especially hosting providers tend to use to amortize the costs of repacking objects. There is one issue though with geometric repacks, namely that they unconditionally pack all loose objects, regardless of whether or not they are reachable. This is done because it means that we can completely skip the reachability step, which significantly speeds up the operation. But it has the big downside that we are unable to expire objects over time. To address this issue we thus use a split strategy in this new task: whenever a geometric repack would merge together all packs, we instead do an all-into-one repack. By default, these all-into-one repacks have cruft packs enabled, so unreachable objects would now be written into their own pack. Consequently, they won't be soaked up during geometric repacking anymore and can be expired with the next full repack, assuming that their expiry date has surpassed. Signed-off-by: Patrick Steinhardt <ps@pks.im> Acked-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/gc: make `too_many_loose_objects()` reusable without GC config | Patrick Steinhardt
To decide whether or not a repository needs to be repacked we estimate the number of loose objects. If the number exceeds a certain threshold we perform the repack, otherwise we don't. This is done via `too_many_loose_objects()`, which takes as parameter the `struct gc_config`. This configuration is only used to determine the threshold.

In a subsequent commit we'll add another caller of this function that wants to pass a different limit than the one stored in that structure. Refactor the function accordingly so that we only take the limit as parameter instead of the whole structure.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-24 | builtin/gc: remove global `repack` variable | Patrick Steinhardt
The global `repack` variable is used to store all command line arguments that we eventually want to pass to git-repack(1). It is being appended to from multiple different functions, which makes it hard to follow the logic. Besides being hard to follow, it also makes it unnecessarily hard to reuse this infrastructure in new code.

Refactor the code so that we store this variable on the stack and pass a pointer to it around as needed. This is done so that we can reuse `add_repack_all_options()` in a subsequent commit.

The refactoring itself is straight-forward. One function that deserves attention though is `need_to_gc()`: this function determines whether or not we need to execute garbage collection for `git gc --auto`, but also for `git maintenance run --auto`. But besides figuring out whether we have to perform GC, the function also sets up the `repack` arguments.

For `git gc --auto` it's trivial to adapt, as we already have the on-stack variable at our fingertips. But for the maintenance condition it's less obvious what to do. As it turns out, we can just use another temporary variable there that we then immediately discard. If we need to perform GC we execute a child git-gc(1) process to repack objects for us, and that process will have to recompute the arguments anyway.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-22 | Merge branch 'bc/sha1-256-interop-01' | Junio C Hamano
The beginning of SHA1-SHA256 interoperability work.

* bc/sha1-256-interop-01:
  t1010: use BROKEN_OBJECTS prerequisite
  t: allow specifying compatibility hash
  fsck: consider gpgsig headers expected in tags
  rev-parse: allow printing compatibility hash
  docs: add documentation for loose objects
  docs: improve ambiguous areas of pack format documentation
  docs: reflect actual double signature for tags
  docs: update offset order for pack index v3
  docs: update pack index v3 format
2025-10-22 | bisect: fix handling of `help` and invalid subcommands | Ruoyu Zhong
As documented in git-bisect(1), `git bisect help` should display usage information. However, since the migration of `git bisect` to a full builtin command in 73fce29427 (Turn `git bisect` into a full built-in, 2022-11-10), this behavior was broken. Running `git bisect help` would, instead of showing usage, either fail silently if already in a bisect session, or otherwise trigger an interactive autostart prompt asking "Do you want me to do it for you [Y/n]?". Similarly, since df63421be9 (bisect--helper: handle states directly, 2022-11-10), running invalid subcommands like `git bisect foobar` also led to the same behavior.

This occurred because `help` and other unrecognized subcommands were being unconditionally passed to `bisect_state`, which then called `bisect_autostart`, triggering the interactive prompt.

Fix this by:

  1. Adding explicit handling for the `help` subcommand to show usage;
  2. Validating that unrecognized commands are actually valid state commands before calling `bisect_state`;
  3. Showing an error with usage for truly invalid commands.

This ensures that `git bisect help` displays the usage as documented, and invalid commands fail cleanly without entering interactive mode. Alternate terms are still handled correctly through `check_and_set_terms`.

Signed-off-by: Ruoyu Zhong <zhongruoyu@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
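With the fix, both invocations below stay non-interactive (a sketch; the exact wording of the usage output is not reproduced here):

  $ git bisect help      # prints the usage summary
  $ git bisect foobar    # errors out with usage, no autostart prompt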
2025-10-22 | commit-graph: add new config for changed-paths & recommend it in scalar | Emily Yang
The changed-path Bloom filters feature has proven stable and reliable over several years of use, delivering significant performance improvement for file history computation in large monorepos.

Currently a user can opt in to writing the changed-path Bloom filters using the "--changed-paths" option to "git commit-graph write". The filters will be persisted until the user drops the filters using the "--no-changed-paths" option. For this functionality, refer to 0087a87ba8 (commit-graph: persist existence of changed-paths, 2020-07-01).

Large monorepos using Git's background maintenance to build and update commit-graph files could use an easy switch to enable this feature without a foreground computation. In this commit, we're proposing a new config option "commitGraph.changedPaths":

  * If "true", "git commit-graph write" will write Bloom filters, equivalent to passing "--changed-paths";

  * If "false" or "unset", Bloom filters will be written during "git commit-graph write" only if the filters already exist in the current commit-graph file. This matches the default behaviour of "git commit-graph write" without any "--[no-]changed-paths" option. Note "false" can disable a previous "true" config value but doesn't imply "--no-changed-paths".

This config will always respect the precedence of the command line option "--[no-]changed-paths". We also set this new config as optional recommended config in scalar to turn on this feature for large repos.

Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Emily Yang <emilyyang.git@gmail.com>
Acked-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
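A configuration sketch showing how the new option maps onto the existing command line:

  $ git config commitGraph.changedPaths true
  $ git commit-graph write    # behaves as if --changed-paths were given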
2025-10-22 | Merge branch 'jt/repo-structure' into ps/ref-peeled-tags | Junio C Hamano
* jt/repo-structure:
  builtin/repo: add progress meter for structure stats
  builtin/repo: add keyvalue and nul format for structure stats
  builtin/repo: add object counts in structure output
  builtin/repo: introduce structure subcommand
  ref-filter: export ref_kind_from_refname()
  ref-filter: allow NULL filter pattern
  builtin/repo: rename repo_info() to cmd_repo_info()
2025-10-22 | Merge branch 'tb/incremental-midx-part-3.1' into ps/ref-peeled-tags | Junio C Hamano
* tb/incremental-midx-part-3.1: (49 commits)
  builtin/repack.c: clean up unused `#include`s
  repack: move `write_cruft_pack()` out of the builtin
  repack: move `write_filtered_pack()` out of the builtin
  repack: move `pack_kept_objects` to `struct pack_objects_args`
  repack: move `finish_pack_objects_cmd()` out of the builtin
  builtin/repack.c: pass `write_pack_opts` to `finish_pack_objects_cmd()`
  repack: extract `write_pack_opts_is_local()`
  repack: move `find_pack_prefix()` out of the builtin
  builtin/repack.c: use `write_pack_opts` within `write_cruft_pack()`
  builtin/repack.c: introduce `struct write_pack_opts`
  repack: 'write_midx_included_packs' API from the builtin
  builtin/repack.c: inline packs within `write_midx_included_packs()`
  builtin/repack.c: pass `repack_write_midx_opts` to `midx_included_packs`
  builtin/repack.c: inline `remove_redundant_bitmaps()`
  builtin/repack.c: reorder `remove_redundant_bitmaps()`
  repack: keep track of MIDX pack names using existing_packs
  builtin/repack.c: use a string_list for 'midx_pack_names'
  builtin/repack.c: extract opts struct for 'write_midx_included_packs()'
  builtin/repack.c: remove ref snapshotting from builtin
  repack: remove pack_geometry API from the builtin
  ...
2025-10-21 | builtin/repo: add progress meter for structure stats | Justin Tobler
When using the structure subcommand for git-repo(1), evaluating a repository may take some time depending on its shape. Add a progress meter to provide feedback to the user about what is happening.

The progress meter is enabled by default when the command is executed from a tty. It can also be explicitly enabled/disabled via the --[no-]progress option.

Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-21 | builtin/repo: add keyvalue and nul format for structure stats | Justin Tobler
All repository structure stats are output in a human-friendly table form. This format is not suitable for machine parsing. Add a --format option that supports three output modes: `table`, `keyvalue`, and `nul`.

The `table` mode is the default format and prints the same table output as before.

With the `keyvalue` mode, each line of output contains a key-value pair of a repository stat. The '=' character is used to delimit between keys and values.

The `nul` mode is similar to `keyvalue`, but key-values are delimited by a NUL character instead of a newline. Also, instead of a '=' character to delimit between keys and values, a newline character is used. This allows stat values to support special characters without having to cquote them.

These two new modes provide output that is more machine-friendly.

Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
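A usage sketch of the three modes; the actual stat keys emitted are whatever the command reports and are not shown here:

  $ git repo structure                      # default 'table' output
  $ git repo structure --format=keyvalue    # one 'key=value' pair per line
  $ git repo structure --format=nul         # NUL-delimited entries, newline between key and value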
2025-10-21 | builtin/repo: add object counts in structure output | Justin Tobler
The number of objects in a repository can provide insight regarding its shape. To surface this information, use the path-walk API to count the number of reachable objects in the repository by object type. All regular references are used to determine the reachable set of objects.

The object counts are appended to the same table containing the reference information.

Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-21 | builtin/repo: introduce structure subcommand | Justin Tobler
The structure of a repository's history can have huge impacts on the performance and health of the repository itself. Currently, Git lacks a means to surface repository metrics regarding its structure/shape via a single command. Acquiring this information requires users to be familiar with the relevant data points and the various Git commands required to surface them. To fill this gap, supplemental tools such as git-sizer(1) have been developed.

To allow users to more readily identify repository structure related information, introduce the "structure" subcommand in git-repo(1). The goal of this subcommand is to eventually provide similar functionality to git-sizer(1), but natively in Git.

The initial version of this command only iterates through all references in the repository and tracks the count of branches, tags, remote refs, and other reference types. The corresponding information is displayed in a human-friendly table formatted in a very similar manner to git-sizer(1). The width of each table column is adjusted automatically to satisfy the requirements of the widest row contained.

Subsequent commits will surface additional relevant data points to output and also provide other more machine-friendly output formats.

Based-on-patch-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-21 | builtin/repo: rename repo_info() to cmd_repo_info() | Justin Tobler
Subcommand functions are often prefixed with `cmd_` to denote that they are an entrypoint. Rename repo_info() to cmd_repo_info() accordingly.

Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>