|
We test xmkstemp() in our helper by just calling:
xmkstemp(xstrdup(argv[1]));
This leaks both the copied string as well as the descriptor returned by
the function. In practice this isn't a big deal, since we immediately
exit the program, but:
1. LSan will complain about the memory leak. The only reason we did
not notice this in our leak-checking builds is that both of the
callers in the test suite (both in t0070) pass a broken template
(and expect failure). So the function calls die() before we can
actually leak.
But it's an accident waiting to happen if anybody adds a call which
succeeds.
2. Coverity complains about the descriptor leak. There's a long list
of uninteresting or false positives in Coverity's results, but
since we're here we might as well fix it, too.
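The fix amounts to holding on to both resources and releasing them; a
minimal standalone sketch of the idea (using plain mkstemp()/strdup()
here rather than git's die-on-error xmkstemp()/xstrdup() wrappers):
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            char *template;
            int fd;

            if (argc < 2)
                    return 1;
            template = strdup(argv[1]);
            fd = mkstemp(template);
            if (fd < 0) {
                    free(template);     /* don't leak the copy on failure */
                    return 1;
            }
            close(fd);                  /* release the descriptor ... */
            free(template);             /* ... and the copied template */
            return 0;
    }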
I didn't bother adding a new test that triggers the leak. It's not even
in real production code, but just in the test-helper itself.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the same spirit as 9faf3963b6 (t: introduce compatibility options to
clar-based tests, 2024-12-13), we should ignore --no-chain-lint passed
to our clar tests, since it may appear in GIT_TEST_OPTS to be used with
other tests.
This is particularly important on Windows CI, where --no-chain-lint is
added to the test options by default, and the meson build will pass all
options to the unit tests. The only reason our meson Windows CI job does
not run into this currently is that it is not respecting GIT_TEST_OPTS
at all! So ignoring this option is a prerequisite to fixing that
situation.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
ASan has an option to enable strict string checking, where any pointer
passed to a function that expects a NUL-terminated string will be
checked for that NUL termination. This can sometimes produce false
positives. E.g., it is not wrong to pass a buffer with { '1', '2', '\n' }
into strtoul(). Even though the buffer is not NUL-terminated, strtoul()
will stop at the newline.
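Spelled out as a tiny standalone example (purely illustrative): with
ASan's strict string checking enabled, this is flagged even though the
read never runs past the newline:
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            char buf[] = { '1', '2', '\n' };     /* no trailing NUL */
            unsigned long v = strtoul(buf, NULL, 10);

            /* strtoul() stops at the non-digit '\n', so this is safe in
             * practice and prints 12, but a strict checker still
             * complains that buf is not NUL-terminated. */
            printf("%lu\n", v);
            return 0;
    }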
But in trying it out, it identified two problematic spots while running
our test suite (both have since been adjusted):
1. The strtol() parsing in cache-tree.c was a real potential problem,
which would have been very hard to find otherwise (since it
required constructing a very specific broken index file).
2. The uses of string functions in fsck_ident() were false positives,
because we knew that there was always a trailing newline which
would stop the functions from reading off the end of the buffer.
But the reasoning behind that is somewhat fragile, and silencing
those complaints made the code easier to reason about.
So even though this did not find any earth-shattering bugs, and even had
a few false positives, I'm sufficiently convinced that its complaints
are more helpful than hurtful. Let's turn it on by default (since the
test suite now runs cleanly with it) and see if it ever turns up any
other instances.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
At GitHub, we had a repository that was triggering
git: merge-ort.c:3032: process_renames: Assertion `newinfo && !newinfo->merged.clean` failed.
during git replay.
This sounds similar to the somewhat recent f6ecb603ff8a (merge-ort: fix
directory rename on top of source of other rename/delete, 2025-08-06),
but the cause is different. Unlike that case, there are no
rename-to-self situations involved here; also new to this case, the
assertion can only be triggered during a replay operation on the 2nd or
later commit being replayed, never on the first merge in the sequence.
To trigger, the repository needs:
* an upstream which:
* renames a file to a different directory, e.g.
old/file -> new/file
* leaves other files remaining in the original directory (so that
e.g. "old/" still exists upstream even though file has been
removed from it and placed elsewhere)
* a topic branch being rebased where:
* a commit in the sequence:
* modifies old/file
* a subsequent commit in the sequence being replayed:
* does NOT touch *anything* under new/
* does NOT touch old/file
* DOES modify other paths under old/
* does NOT have any relevant renames that we need to detect
_anywhere_ else in the tree (meaning this interacts
interestingly with both directory renames and cached renames)
In such a case, the assertion will trigger. The fix turns out to be
surprisingly simple. I have a very vague recollection that I actually
considered whether to add such an if-check years ago when I added the
very similar one for oldinfo in 1b6b902d95a5 (merge-ort:
process_renames() now needs more defensiveness, 2021-01-19), but I think
I couldn't figure out a way to trigger it, and without knowing how to
trigger it I wasn't sure that simply skipping it was correct. Waiting
did give me a chance to put more thorough tests and checks into place
for the rename-to-self cases a few months back (cases I might not have
found as easily otherwise). Anyway, put the check in place now and add
a test that
demonstrates the fix.
Note that this bug, as demonstrated by the conditions listed above,
runs at the intersection of relevant renames, trivial directory
resolutions, and cached renames. All three of those optimizations are
ones that unfortunately make the code (and testcases!) a bit more
complex, and threading all three makes it a bit more so. However, the
testcase isn't crazy enough that I'd expect no one to ever hit it in
practice, and I was confused why we didn't see it before. After some
digging, I discovered that merge.directoryRenames=false is a workaround
for this bug, and GitHub used that setting until recently (it was a
"temporary" match-what-libgit2-does piece of code that lasted years
longer than intended). Since the conditions I gave above for triggering
this bug rule out the possibility of there being directory renames, one
might assume that it shouldn't matter whether you try to detect such
renames if there aren't any. However, due to commit a16e8efe5c2b
(merge-ort: fix merge.directoryRenames=false, 2025-03-13), the heavy
hammer used there means that merge.directoryRenames=false ALSO turns off
rename caching, which is critical to triggering the bug. This becomes
a bit more than an aside since...
Re-reading that old commit, a16e8efe5c2b (merge-ort: fix
merge.directoryRenames=false, 2025-03-13), it appears that the solution
to this latest bug might have served as at least a partial alternative
to that old fix. And it may have been an improved alternative (or at
least helped implement one), since it may be able to avoid the
heavy-handed disabling of the rename cache. That might be an
interesting thing to investigate in the future, but it is not critical
for the current fix. However, since I spent time digging it all up, at
least leave a small comment tweak as a breadcrumb to help some future
reader (myself or others) who wants to dig further connect the dots a
little quicker.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A comment at the top of t6429 mentions why the test doesn't exercise git
rebase or git cherry-pick. However, it claims that it uses `test-tool
fast-rebase`. That was true when the comment was written, but commit
f920b0289ba3 (replay: introduce new builtin, 2023-11-24) changed it to
use git replay without updating this comment.
We could potentially just strike this second comment, since git replay
is a bona fide built-in, but perhaps the explanation about why it focuses
on git replay is still useful. Update the comment to make it accurate
again.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The diff algorithm used in 'git-blame(1)' is hard-coded to 'myers',
with no way to change it aside from the `--minimal` option.
There has been long-standing interest in changing the default diff
algorithm to "histogram", and Git 3.0 was floated as a possible occasion
for taking some steps towards that:
https://lore.kernel.org/git/xmqqed873vgn.fsf@gitster.g/
As a preparation for this move, it is worth making sure that the diff
algorithm is configurable where useful.
Make it configurable in the `git-blame(1)` command by introducing the
`--diff-algorithm` option and by honoring the `diff.algorithm` config
variable. Keep Myers diff as the default.
Signed-off-by: Antonin Delpeuch <antonin@delpeuch.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The output table from "git repo structure" is misaligned when displaying
multi-byte UTF-8 characters (e.g. CJK glyphs), for example:
| 仓库结构 | 值 |
| -------------- | ---- |
| * 引用 | |
| * 计数 | 67 |
The previous implementation used simple width formatting with printf()
which didn't properly handle multi-byte UTF-8 characters, causing
misaligned table columns when displaying repository structure
information.
This change modifies the stats_table_print_structure function to use
strbuf_utf8_align() instead of basic printf width specifiers. This
ensures proper column alignment regardless of the character encoding of
the content being displayed.
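The root cause is that byte count and display width differ for such
strings; a small standalone illustration (using POSIX wcswidth() here
rather than git's utf8_strwidth()):
    #include <locale.h>
    #include <stdio.h>
    #include <string.h>
    #include <wchar.h>

    int main(void)
    {
            const char *label = "引用";   /* 6 bytes in UTF-8, 4 columns wide */
            wchar_t wbuf[16];

            setlocale(LC_ALL, "");
            if (mbstowcs(wbuf, label, 16) == (size_t)-1)
                    return 1;
            /* printf("%-10s", label) pads by bytes (6), not by columns (4),
             * so CJK cells come out narrower on screen than intended. */
            printf("bytes=%zu columns=%d\n", strlen(label), wcswidth(wbuf, 16));
            return 0;
    }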
Also add test cases for strbuf_utf8_align(), a function newly used in
"builtin/repo.c".
Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The file "builtin/repo.c" uses utf8_strwidth() to calculate the display
width of UTF-8 characters in a table, but the resulting output is still
misaligned. Add test cases for both utf8_strwidth and utf8_strnwidth to
verify that they correctly compute the display width for UTF-8
characters.
Also update the build configuration in Makefile and meson.build to
include the new test suite in the build process.
Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In add_to_cache(), we treat any directory as a submodule, and complain
if we can't resolve its HEAD. This call to resolve_gitlink_ref() was
added by f937bc2f86 (add: error appropriately on repository with no
commits, 2019-04-09), with the goal of improving the error message for
empty repositories.
But we already resolve the submodule HEAD in index_path(), which is
where we find the actual oid we're going to use. Resolving it again here
introduces some downsides:
1. It's more work, since we have to open up the submodule repository's
files twice.
2. There are call paths that get to index_path() without going through
add_to_cache(). For instance, we'd want a similar informative
message if "git diff empty" finds that it can't resolve the
submodule's HEAD. (In theory we can also get there through
update-index, but AFAICT it refuses to consider directories as
submodules at all, and just complains about them).
3. The resolution in index_path() catches more errors that we don't
handle here. In particular, it will validate that the object format
for the submodule matches that of the superproject. This isn't a
bug, since our call in add_to_cache() throws away the oid it gets
without looking at it. But it certainly caused confusion for me
when looking at where the object-format check should go.
So instead of resolving the submodule HEAD in add_to_cache(), let's just
teach the call in index_path() to actually produce an error message
(which it already does for other cases). That's probably what f937bc2f86
should have done in the first place, and it gives us a single point of
resolution when adding a submodule to the index.
The resulting output is slightly more verbose, as we propagate the error
up the call stack, but I think that's OK (and again, matches many other
errors we get when indexing fails).
I've left the text of the error message as-is, though it is perhaps
overly specific. There are many reasons that resolving the submodule
HEAD might fail, though outside of corruption or system errors it is
probably most likely that the submodule HEAD is simply on an unborn
branch.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The design of the hash algorithm transition plan is that objects stored
must be entirely in one algorithm since we lack any way to indicate a
mix of algorithms. This also includes submodules, but we have
traditionally not enforced this, which leads to various problems when
trying to clone or check out the submodule from the remote.
Since this cannot work in the general case, restrict adding a submodule
of a different algorithm to the index. Add tests for git add and git
submodule add that these are rejected.
Note that we cannot check this in git fsck because the malformed
submodule is stored in the tree as an object ID which is either
truncated (when a SHA-256 submodule is added to a SHA-1 repository) or
padded with zeros (when a SHA-1 submodule is added to a SHA-256
repository). We cannot detect even the latter case because someone
could have a real submodule whose object ID actually ends in 24 zeros,
which would make it a false positive.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Teach "git diff" to highlight "\ No newline at end of file" message
as a whitespace error when incomplete-line whitespace error class is
in effect. Thanks to the previous refactoring of complete rewrite
code path, we can do this at a single place.
Unlike whitespace errors in the payload where we need to annotate in
line, possibly using colors, the line that has whitespace problems,
we have a dedicated line already that can serve as the error
message, so paint it as a whitespace error message.
Also teach "git diff --check" to notice incomplete lines as
whitespace errors and report when incomplete-line whitespace error
class is in effect.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The final line of a file that lacks the terminating newline at its
end is called an incomplete line. In general they are frowned upon
for many reasons (imagine concatenating two files with "cat A B" and
what happens when A ends in an incomplete line, for example), and
text-oriented tools often mishandle such a line.
Implement checks in "git apply" for incomplete lines, which are off
by default for backward compatibility's sake, so that "git apply
--whitespace={fix,warn,error}" can notice, warn against, and fix
them.
As one of the new tests shows, if you modify contents on an
incomplete line in the original and leave the resulting line
incomplete, it is still considered a whitespace error, the reasoning
being that "you'd better fix it while at it if you are making a
change on an incomplete line anyway", which may be controversial.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git last-modified" was optimized by narrowing the set of paths to
follow as it dug deeper in the history.
* tc/last-modified-active-paths-optimization:
last-modified: implement faster algorithm
|
|
Given a set of attribute macros like:
[attr]a1 a2
[attr]a2 a3
...
[attr]a300000 -text
file a1
expanding the attributes for "file" requires expanding "a1" to "a2",
"a2" to "a3", and so on until hitting a non-macro expansion ("-text", in
this case). We implement this via recursion: fill_one() calls
macroexpand_one(), which then recurses back to fill_one(). As a result,
very deep macro chains like the one above can run out of stack space and
cause us to segfault.
The stack space required per level is fairly small; I needed on the order
of 200,000 entries to get a segfault on Linux. So it's unlikely anybody
would hit this accidentally, leaving only malicious inputs. There you
can easily construct a repo which will segfault on clone (we look at
attributes during the checkout step, but you'd see the same trying to do
other operations, like diff in a bare repo). It's mostly harmless, since
anybody constructing such a repo is only preventing victims from cloning
their evil garbage, but it could be a nuisance for hosting sites.
One option to prevent this is to limit the depth of recursion we'll
allow. This is conceptually easy to implement, but it raises other
questions: what should the limit be, and do we need a configuration knob
for it?
The recursion here is simple enough that we can avoid those questions by
just converting it to iteration instead. Rather than iterate over the
states of a match_attr in fill_one(), we'll put them all in a queue, and
the expansion of each can add to the queue rather than recursing. Note
that this is a LIFO queue in order to keep the same depth-first order we
did with the recursive implementation. I've avoided using the word
"stack" in the code because the term is already heavily used to refer to
the stack of .gitattributes files that matches the tree structure of the
repository.
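The shape of the change, reduced to a toy (illustrative only; these are
not the actual attr.c structures or functions):
    #include <stdio.h>
    #include <string.h>

    #define MAX_WORK 1024

    /* Stand-in for "look up a macro definition"; NULL means not a macro. */
    static const char *expand_once(const char *name)
    {
            if (!strcmp(name, "a1"))
                    return "a2";
            if (!strcmp(name, "a2"))
                    return "-text";
            return NULL;
    }

    int main(void)
    {
            const char *work[MAX_WORK];
            int top = 0;

            work[top++] = "a1";
            while (top > 0) {
                    /* LIFO pop keeps the old depth-first expansion order */
                    const char *cur = work[--top];
                    const char *next = expand_once(cur);

                    printf("apply %s\n", cur);
                    if (next && top < MAX_WORK)
                            work[top++] = next;  /* queue instead of recursing */
            }
            return 0;
    }
However deep the macro chain goes, we now consume work-list entries
instead of call-stack frames.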
The test uses a limited stack size so we can trigger the problem with a
much smaller input than the one shown above. The value here (3000) is
enough to trigger the issue on my x86_64 Linux machine.
Reported-by: Ben Stav <benstav@miggo.io>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Detecting renames and copies improves diff's output. This effort is
wasted if we don't show any output. Disable detection in that case.
This actually fixes the error code when using the options --cached,
--find-copies-harder, --no-ext-diff and --quiet together:
run_diff_index() indirectly calls diff-lib.c::show_modified(), which
queues even non-modified entries using diff_change() because we need
them for copy detection. diff_change() sets flags.has_changes, though,
which causes diff_can_quit_early() to declare we're done after seeing
only the very first entry -- way too soon.
Using --cached, --find-copies-harder and --quiet together without
--no-ext-diff was not affected even before, as it causes the flag
flags.diff_from_contents to be set, which disables the optimization
in a different way.
Reported-by: D. Ben Knoble <ben.knoble@gmail.com>
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The 'git-maintenance(1)' command provides tooling to run maintenance
tasks over Git repositories. The 'run' subcommand, as the name suggests,
runs the maintenance tasks. When used with the '--auto' flag, it uses
heuristics to determine if the required thresholds are met for running
said maintenance tasks.
There is, however, no way to get insight into these heuristics on their
own; the checks are tied to the execution.
Add a new 'is-needed' subcommand to 'git-maintenance(1)', which allows
users to check whether running maintenance is needed without actually
performing it. Ideally it should be used with the '--auto' flag, which
makes the check honor the required thresholds. The subcommand also
supports the '--task' flag, which can be used to check specific
maintenance tasks.
While adding the respective tests in 't/t7900-maintenance.sh', remove a
duplicate of the test: 'worktree-prune task with --auto honors
maintenance.worktree-prune.auto'.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The reftable backend performs auto-compaction as part of its regular
flow, which is required to keep the number of tables in a stack at bay.
This allows the stack to stay optimized.
Compaction can also be triggered voluntarily by the user via the 'git
pack-refs' or the 'git refs optimize' command. However, currently there
is no way for the user to check if optimization is required without
actually performing it.
Extract the heuristics logic from 'reftable_stack_auto_compact()' into
an internal function 'update_segment_if_compaction_required()'. Then
use this to add and expose `reftable_stack_compaction_required()`,
which allows users to check whether the reftable backend can be
optimized.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Messages from fast-import/export are now marked for i18n.
* cc/fast-import-export-i18n-cleanup:
gpg-interface: mark a string for translation
fast-import: mark strings for translation
fast-export: mark strings for translation
gpg-interface: use left shift to define GPG_VERIFY_*
gpg-interface: simplify ssh fingerprint parsing
|
|
The help text and manual page of the "git bisect" command have been
made consistent with each other.
* rz/t0450-bisect-doc-update:
bisect: update usage and docs to match each other
|
|
Add a configuration variable to control the default behavior of git replay
for updating references. This allows users who prefer the traditional
pipeline output to set it once in their config instead of passing
--ref-action=print with every command.
The config variable uses string values that mirror the behavior modes:
* replay.refAction = update (default): atomic ref updates
* replay.refAction = print: output commands for pipeline
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Elijah Newren <newren@gmail.com>
Helped-by: Christian Couder <christian.couder@gmail.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The git replay command currently outputs update commands that can be
piped to update-ref to achieve a rebase, e.g.
git replay --onto main topic1..topic2 | git update-ref --stdin
This separation had advantages for three special cases:
* it made testing easy (when state isn't modified from one step to
the next, you don't need to make temporary branches or have undo
commands, or try to track the changes)
* it provided a natural can-it-rebase-cleanly (and what would it
rebase to) capability without automatically updating refs, similar
to a --dry-run
* it provided a natural low-level tool for the suite of hash-object,
mktree, commit-tree, mktag, merge-tree, and update-ref, allowing
users to have another building block for experimentation and making
new tools
However, it should be noted that all three of these are somewhat
special cases; users, whether on the client or server side, would
almost certainly find it more ergonomic to simply have the updating
of refs be the default.
For server-side operations in particular, the pipeline architecture
creates process coordination overhead. Server implementations that need
to perform rebases atomically must maintain additional code to:
1. Spawn and manage a pipeline between git-replay and git-update-ref
2. Coordinate stdout/stderr streams across the pipe boundary
3. Handle partial failure states if the pipeline breaks mid-execution
4. Parse and validate the update-ref command output
Change the default behavior to update refs directly, and atomically (at
least to the extent supported by the refs backend in use). This
eliminates the process coordination overhead for the common case.
For users needing the traditional pipeline workflow, add a new
--ref-action=<mode> option that preserves the original behavior:
git replay --ref-action=print --onto main topic1..topic2 | git update-ref --stdin
The mode can be:
* update (default): Update refs directly using an atomic transaction
* print: Output update-ref commands for pipeline use
Test suite changes:
All existing tests that expected command output now use
--ref-action=print to preserve their original behavior. This keeps
the tests valid while allowing them to verify that the pipeline workflow
still works correctly.
New tests were added to verify:
- Default atomic behavior (no output, refs updated directly)
- Bare repository support (server-side use case)
- Equivalence between traditional pipeline and atomic updates
- Real atomicity using a lock file to verify all-or-nothing guarantee
- Test isolation using test_when_finished to clean up state
- Reflog messages include replay mode and target
A following commit will add a replay.refAction configuration
option for users who prefer the traditional pipeline output as their
default behavior.
Helped-by: Elijah Newren <newren@gmail.com>
Helped-by: Patrick Steinhardt <ps@pks.im>
Helped-by: Christian Couder <christian.couder@gmail.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The patterns used in the .gitignore files use backslash in the way
documented for fnmatch(3); document as such to reduce confusion.
* jk/doc-backslash-in-exclude:
doc: document backslash in gitignore patterns
|
|
Leakfix.
* jk/test-delete-gpgsig-leakfix:
test-tool: fix leak in delete-gpgsig command
|
|
Test fix.
* eb/t1016-hash-transition-fix:
t1016-compatObjectFormat: really freeze time for reproduciblity
|
|
Tests did not set up GNUPGHOME correctly, which has been fixed, but the
fix exposes some flaky tests in t1016 that need to be addressed before
this topic can move forward.
* tz/test-prepare-gnupghome:
t/lib-gpg: call prepare_gnupghome() in GPG2 prereq
t/lib-gpg: add prepare_gnupghome() to create GNUPGHOME dir
|
|
"git repo structure", a new command.
* jt/repo-structure:
builtin/repo: add progress meter for structure stats
builtin/repo: add keyvalue and nul format for structure stats
builtin/repo: add object counts in structure output
builtin/repo: introduce structure subcommand
ref-filter: export ref_kind_from_refname()
ref-filter: allow NULL filter pattern
builtin/repo: rename repo_info() to cmd_repo_info()
|
|
* kn/refs-optim-cleanup:
t/pack-refs-tests: move the 'test_done' to callees
refs: rename 'pack_refs_opts' to 'refs_optimize_opts'
refs: move to using the '.optimize' functions
|
|
* ps/ref-peeled-tags: (23 commits)
t7004: do not chdir around in the main process
ref-filter: fix stale parsed objects
ref-filter: parse objects on demand
ref-filter: detect broken tags when dereferencing them
refs: don't store peeled object IDs for invalid tags
object: add flag to `peel_object()` to verify object type
refs: drop infrastructure to peel via iterators
refs: drop `current_ref_iter` hack
builtin/show-ref: convert to use `reference_get_peeled_oid()`
ref-filter: propagate peeled object ID
upload-pack: convert to use `reference_get_peeled_oid()`
refs: expose peeled object ID via the iterator
refs: refactor reference status flags
refs: fully reset `struct ref_iterator::ref` on iteration
refs: introduce `.ref` field for the base iterator
refs: introduce wrapper struct for `each_ref_fn`
builtin/repo: add progress meter for structure stats
builtin/repo: add keyvalue and nul format for structure stats
builtin/repo: add object counts in structure output
builtin/repo: introduce structure subcommand
...
|
|
In ac0bad0af4 (t0601: refactor tests to be shareable, 2025-09-19), we
refactored 't/t0601-reffiles-pack-refs.sh' to move all of the tests to
't/pack-refs-tests.sh', which became a common test suite also used by
't/t1463-refs-optimize.sh'.
This also moved the 'test_done' directive to 't/pack-refs-tests.sh',
which inhibits additional tests from being added to either test script.
Let's move the directive back out to both test scripts, so that we can
add script-specific tests to them. Besides, test flow control shouldn't
be part of a file that can be embedded in other test scripts.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Move down to the no-contains subdirectory inside a subshell, just like
the previous step that created and used it does.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 054f5f457e (ref-filter: parse objects on demand, 2025-10-23) we
started to skip parsing objects when we don't need to access their
values in the first place. This was done by introducing a new
member `struct expand_data::maybe_object` that gets populated on demand
via `get_or_parse_object()`.
This has led to a regression though where the object now gets reused
because we don't reset it properly. The `oi` structure is declared in
global scope, and there is no single place where we reset it before
invoking `get_object()`. The consequence is that the `maybe_object`
member doesn't get reset across calls, so subsequent calls will end up
reusing the same object.
This is only an issue for a subset of retrieved values, as not all of
the infrastructure ends up calling `get_or_parse_object()`. So the
effect is limited, which is probably why the issue wasn't detected
earlier.
Fix the issue by resetting `maybe_object` in `get_object()`.
Reported-by: Junio C Hamano <gitster@pobox.com>
Based-on-patch-by: Jeff King <peff@peff.net>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Users can ask git-for-each-ref(1) to peel tags and return information
about the tagged object by adding an asterisk to the format, for
example "%(*objectname)". If so, git-for-each-ref(1) peels that object
to the first non-tag object and then returns its values.
As mentioned in preceding commits, it can happen that the tagged object
type and the claimed object type differ, effectively resulting in a
corrupt tag. git-for-each-ref(1) would notice this mismatch, print an
error and then bail out when trying to peel the tag.
But we only notice this corruption in some very specific edge cases!
While we have a test in "t/for-each-ref-tests.sh" that verifies the
above scenario, this test is specifically crafted to detect the issue at
hand. Namely, we create two tags:
- One tag points to a specific object with the correct type.
- The other tag points to the *same* object with a different type.
The fact that both tags point to the same object is important here:
`peel_object()` wouldn't notice the corruption if the tagged objects
were different.
The root cause is that `peel_object()` calls `lookup_${type}()`
eventually, where the type is the same type declared in the tag object.
Consequently, when we have two tags pointing to the same object but with
different declared types we'll call two different lookup functions. The
first lookup will store the object with an unverified type A, whereas
the second lookup will try to look up the object with a different
unverified type B. And it is only now that we notice the discrepancy in
object types, even though type A could've already been the wrong type.
Fix the issue by verifying the object type in `populate_value()`. With
this change we'll also notice type mismatches when only dereferencing a
tag once.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Both the "files" and "reftable" backend store peeled object IDs for
references that point to tags:
- The "files" backend stores the value when packing refs, where each
peeled object ID is prefixed with "^".
- The "reftable" backend stores the value whenever writing a new
reference that points to a tag via a special ref record type.
Both of these backends use `peel_object()` to find the peeled object ID.
But as explained in the preceding commit, that function does not detect
the case where the tag's tagged object and its claimed type mismatch.
The consequence of storing these bogus peeled object IDs is that we're
less likely to detect such corruption in other parts of Git.
git-for-each-ref(1) for example does not notice anymore that the tag is
broken when using "--format=%(*objectname)" to dereference tags.
One could claim that this is good, because it still allows us to mostly
use the tag as intended. But the biggest problem here is that we now
have different behaviour for such a broken tag depending on whether or
not we have its peeled value in the refdb.
Fix the issue by verifying the object type when peeling the object. If
that verification fails we simply skip storing the peeled value in
either of the reference formats.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When peeling a tag to a non-tag object we repeatedly call
`parse_object()` on the tagged object until we find the first object
that isn't a tag. While this feels sensible at first, there is a big
catch here: `parse_object()` doesn't actually verify the type of the
tagged object.
The relevant code path here eventually ends up in `parse_tag_buffer()`.
Here, we parse the various fields of the tag, including the "type". Once
we've figured out the type and the tagged object ID, we call one of the
`lookup_${type}()` functions for whatever type we have found. There are
two possible outcomes in the successful case:
1. The object is already part of our cached objects. In that case we
double-check whether the type we're trying to look up matches the
type that was cached.
2. The object is _not_ part of our cached objects. In that case, we
simply create a new object with the expected type, but we don't
parse that object.
In the first case we might notice type mismatches, but only in the case
where our cache has the object with the correct type. In the second
case, we'll blindly assume that the type is correct and then go with it.
We'll only notice that the type might be wrong when we try to parse the
object at a later point.
Now arguably, we could change `parse_tag_buffer()` to verify the tagged
object's type for us. But that would have the effect that such a tag
cannot be parsed at all anymore, and we have a small bunch of tests for
exactly this case that assert we still can open such tags. So this
change does not feel like something we can retroactively tighten, even
though one shouldn't ever hit such corrupted tags.
Instead, add a new `flags` parameter to `peel_object()` that allows the
caller to opt in to strict object verification. This will be wired up at
a subset of callsites over the next few commits.
Note that this change also inlines `deref_tag_noverify()`. There have
only been two callsites of that function: the one we're changing and one
in our test helpers. The latter callsite can trivially use `deref_tag()`
instead, so by inlining the function we avoid having to pass down the
flag.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `each_ref_fn` callback function type is used across our code base
for several different functions that iterate through references. There's
a bunch of callbacks implementing this type, which makes any changes to
the callback signature extremely noisy. An example of the required churn
is e8207717f1 (refs: add referent to each_ref_fn, 2024-08-09): adding a
single argument required us to change 48 files.
It was already proposed back then [1] that we might want to introduce a
wrapper structure to alleviate the pain going forward. While this of
course requires the same kind of global refactoring as just introducing
a new parameter, it at least allows us to more change the callback type
afterwards by just extending the wrapper structure.
One counterargument to this refactoring is that it makes the structure
more opaque. While it is obvious which callsites need to be fixed up
when we change the function type, it's not obvious anymore once we use
a structure. That being said, we only have a handful of sites that
actually need to populate this wrapper structure: our ref backends,
"refs/iterator.c" as well as very few sites that invoke the iterator
callback functions directly.
Introduce this wrapper structure so that we can adapt the iterator
interfaces more readily.
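The idea in miniature (illustrative names only, not the actual refs
API): once the per-reference data lives in one struct, adding a field
no longer changes every callback signature.
    #include <stdio.h>

    struct ref_info {
            const char *refname;
            const char *referent;   /* later additions only extend this struct */
    };

    typedef int (*each_ref_cb)(const struct ref_info *info, void *cb_data);

    static int print_ref(const struct ref_info *info, void *cb_data)
    {
            (void)cb_data;          /* unused in this toy */
            if (info->referent)
                    printf("%s -> %s\n", info->refname, info->referent);
            else
                    printf("%s\n", info->refname);
            return 0;
    }

    int main(void)
    {
            struct ref_info head = { "HEAD", "refs/heads/main" };
            struct ref_info branch = { "refs/heads/main", NULL };
            each_ref_cb cb = print_ref;

            cb(&head, NULL);
            cb(&branch, NULL);
            return 0;
    }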
[1]: <ZmarVcF5JjsZx0dl@tanuki>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The current implementation of git-last-modified(1) works by doing a
revision walk, and inspecting the diff at each level of that walk to
annotate entries remaining in the hashmap of paths. In other words, if
the diff at some level touches a path which has not yet been associated
with a commit, then that commit becomes associated with the path.
While a perfectly reasonable implementation, it can perform poorly in
either one of two scenarios:
1. There are many entries of interest, in which case there is simply
a lot of work to do.
2. Or, there are (even a few) entries which have not been updated in a
long time, and so we must walk through a lot of history in order to
find a commit that touches that path.
This patch rewrites the last-modified implementation to address the
second point. The idea behind the algorithm is to propagate a set of
'active' paths (a path is 'active' if it does not yet belong to a
commit) up to parents and do a truncated revision walk.
The walk is truncated because it does not produce a revision for every
change in the original pathspec, but rather only for active paths.
More specifically, consider a priority queue of commits sorted by
generation number. First, enqueue the set of boundary commits with all
paths in the original spec marked as interesting.
Then, while the queue is not empty, do the following:
1. Pop an element, say, 'c', off of the queue, making sure that 'c'
isn't reachable by anything in the '--not' set.
2. For each parent 'p' (with index 'parent_i') of 'c', do the
following:
a. Compute the diff between 'c' and 'p'.
b. Pass any active paths that are TREESAME from 'c' to 'p'.
c. If 'p' has any active paths, push it onto the queue.
3. Any path that remains active on 'c' is associated to that commit.
This ends up being equivalent to doing something like 'git log -1 --
$path' for each path simultaneously. But, it allows us to go much faster
than the original implementation by limiting the number of diffs we
compute, since we can avoid parts of history that would have been
considered by the revision walk in the original implementation, but are
known to be uninteresting to us because we have already marked all paths
in that area to be inactive.
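For a commit with a single parent, steps 2b and 3 boil down to
partitioning the active set; a toy standalone sketch (stand-in names,
not the actual implementation):
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in: in git this would be a tree diff between the commit
     * and its parent, limited to the still-active paths. */
    static int diff_touches_path(const char *commit, const char *path)
    {
            return !strcmp(commit, "C2") && !strcmp(path, "pkt-line.h");
    }

    int main(void)
    {
            const char *active[] = { "pkt-line.h", "COPYING" };
            const char *parent_active[2];
            size_t nr = 2, parent_nr = 0, remaining = 0, i;
            const char *commit = "C2";

            for (i = 0; i < nr; i++) {
                    if (!diff_touches_path(commit, active[i]))
                            /* TREESAME: the parent keeps looking for it */
                            parent_active[parent_nr++] = active[i];
                    else
                            /* changed here: attribute the path to this commit */
                            active[remaining++] = active[i];
            }
            for (i = 0; i < remaining; i++)
                    printf("%s\t%s\n", commit, active[i]);
            printf("%zu path(s) passed on to the parent\n", parent_nr);
            return 0;
    }
The parent is only pushed onto the priority queue if it received at
least one active path, which is what truncates the walk.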
To avoid computing many first-parent diffs, add another trick on top of
this and check if all paths active in 'c' are DEFINITELY NOT in c's
Bloom filter. Since the commit-graph only stores first-parent diffs in
the Bloom filters, we can only apply this trick to first-parent diffs.
Comparing the performance of this new algorithm shows about a 2.5x
improvement on git.git:
Benchmark 1: master no bloom
Time (mean ± σ): 2.868 s ± 0.023 s [User: 2.811 s, System: 0.051 s]
Range (min … max): 2.847 s … 2.926 s 10 runs
Benchmark 2: master with bloom
Time (mean ± σ): 949.9 ms ± 15.2 ms [User: 907.6 ms, System: 39.5 ms]
Range (min … max): 933.3 ms … 971.2 ms 10 runs
Benchmark 3: HEAD no bloom
Time (mean ± σ): 782.0 ms ± 6.3 ms [User: 740.7 ms, System: 39.2 ms]
Range (min … max): 776.4 ms … 798.2 ms 10 runs
Benchmark 4: HEAD with bloom
Time (mean ± σ): 307.1 ms ± 1.7 ms [User: 276.4 ms, System: 29.9 ms]
Range (min … max): 303.7 ms … 309.5 ms 10 runs
Summary
HEAD with bloom ran
2.55 ± 0.02 times faster than HEAD no bloom
3.09 ± 0.05 times faster than master with bloom
9.34 ± 0.09 times faster than master no bloom
In short, the existing implementation is about as fast *with* Bloom
filters as the new implementation is *without* Bloom filters. So, most
repositories should get a dramatic speed-up by just deploying this (even
without computing Bloom filters), and all repositories should get faster
still when computing Bloom filters.
When comparing a more extreme example, `git last-modified -- COPYING t`,
the difference grows to more than 5x:
Benchmark 1: master
Time (mean ± σ): 4.372 s ± 0.057 s [User: 4.286 s, System: 0.062 s]
Range (min … max): 4.308 s … 4.509 s 10 runs
Benchmark 2: HEAD
Time (mean ± σ): 826.3 ms ± 22.3 ms [User: 784.1 ms, System: 39.2 ms]
Range (min … max): 810.6 ms … 881.2 ms 10 runs
Summary
HEAD ran
5.29 ± 0.16 times faster than master
As an added benefit, results are more consistent now. For example, the
implementation in 'master' gives:
$ git log --max-count=1 --format=%H -- pkt-line.h
15df15fe07ef66b51302bb77e393f3c5502629de
$ git last-modified -- pkt-line.h
15df15fe07ef66b51302bb77e393f3c5502629de pkt-line.h
$ git last-modified | grep pkt-line.h
5b49c1af03e600c286f63d9d9c9fb01403230b9f pkt-line.h
With the changes in this patch, the results of git-last-modified(1)
always match those of `git log --max-count=1`.
One thing to note, though: the results might be output in a different
order than before. This is not considered to be an issue because the
order is not documented to be guaranteed anywhere.
Based-on-patches-by: Derrick Stolee <stolee@gmail.com>
Based-on-patches-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Toon Claes <toon@iotcl.com>
Acked-by: Taylor Blau <me@ttaylorr.com>
[jc: tweaked use of xcalloc() to unbreak coccicheck]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The code to walk the revision graph to compute the merge base has been
optimized.
* rs/merge-base-optim:
commit-reach: avoid commit_list_insert_by_date()
|
|
Finishing touches to fixes for the recent regression in "git diff -w
--quiet" and anything that needs to internally generate a patch to
see if it turns empty.
* jk/diff-patch-dry-run-cleanup:
diff: simplify run_external_diff() quiet logic
diff: drop dry-run redirection to /dev/null
diff: replace diff_options.dry_run flag with NULL file
diff: drop save/restore of color_moved in dry-run mode
diff: send external diff output to diff_options.file
|
|
"git maintenance" command learns the "geometric" strategy where it
avoids doing maintenance tasks that rebuild everything from
scratch.
* ps/maintenance-geometric:
t7900: fix a flaky test due to git-repack always regenerating MIDX
builtin/maintenance: introduce "geometric" strategy
builtin/maintenance: make "gc" strategy accessible
builtin/maintenance: extend "maintenance.strategy" to manual maintenance
builtin/maintenance: run maintenance tasks depending on type
builtin/maintenance: improve readability of strategies
builtin/maintenance: don't silently ignore invalid strategy
builtin/maintenance: make the geometric factor configurable
builtin/maintenance: introduce "geometric-repack" task
builtin/gc: make `too_many_loose_objects()` reusable without GC config
builtin/gc: remove global `repack` variable
|
|
The wildmatch code had a corner-case bug that mistakenly made
"foo**/bar" match "foobar", which has been corrected.
* jk/match-pathname-fix:
match_pathname(): give fnmatch one char of prefix context
match_pathname(): reorder prefix-match check
|
|
The 'q'(uit) command in "git add -p" has been improved to quit
without doing any meaningless work before leaving, and giving EOF
(typically control-D) to the prompt is made to behave the same way.
* rs/add-patch-quit:
add-patch: quit on EOF
add-patch: quit without skipping undecided hunks
|
|
Regression fixes for a topic that has already been merged.
* ly/diff-name-only-with-diff-from-content:
diff: stop output garbled message in dry run mode
|
|
Two slightly different ways to get at "all the packfiles" in the API
have been cleaned up.
* ps/remove-packfile-store-get-packs:
packfile: rename `packfile_store_get_all_packs()`
packfile: introduce macro to iterate through packs
packfile: drop `packfile_store_get_packs()`
builtin/grep: simplify how we preload packs
builtin/gc: convert to use `packfile_store_get_all_packs()`
object-name: convert to use `packfile_store_get_all_packs()`
|
|
"Symlink symref" has been added to the list of things that will
disappear at Git 3.0 boundary.
* ps/symlink-symref-deprecation:
refs/files: deprecate writing symrefs as symbolic links
|
|
A new configuration variable commitGraph.changedPaths allows turning
"--changed-paths" on by default for "git commit-graph".
* ey/commit-graph-changed-paths-config:
commit-graph: add new config for changed-paths & recommend it in scalar
|
|
Some error or warning messages in "builtin/fast-import.c" are marked
for translation, but many are not.
To be more consistent and provide a better experience to people using a
translated version, let's mark all the remaining error or warning
messages for translation.
While at it, let's make the following small changes:
- replace "GIT" or "git" in a few error messages to just "Git",
- replace "Expected from command, got %s" to "expected 'from'
command, got '%s'", which makes it clearer that "from" is a command
and should not be translated,
- downcase error and warning messages that start with an uppercase,
- fix test cases in "t9300-fast-import.sh" that broke because an
error or warning message was downcased,
- split error and warning messages that are too long,
- adjust the indentation of some arguments of the error functions.
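As an illustration of the marking itself (the _() marker is git's
gettext wrapper; a stand-in definition is included so the snippet
compiles on its own):
    #include <stdio.h>

    /* stand-in for git's gettext marker from gettext.h */
    #define _(msgid) (msgid)

    static void report(const char *cmd)
    {
            /* before: fprintf(stderr, "Expected from command, got %s\n", cmd); */
            fprintf(stderr, _("expected 'from' command, got '%s'\n"), cmd);
    }

    int main(void)
    {
            report("blob");
            return 0;
    }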
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We read the input into a strbuf, so we must free it. Without this, t1016
complains in SANITIZE=leak mode.
The bug was introduced in 7673ecd2dc (t1016-compatObjectFormat: add
tests to verify the conversion between objects, 2023-10-01). But nobody
seems to have noticed, probably because CI did not run these tests until
the fix in 6cd8369ef3 (t/lib-gpg: call prepare_gnupghome() in GPG2
prereq, 2024-07-03).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Because gitignore patterns are passed to fnmatch, the handling of
backslashes is the same as it is there: a backslash can be used to
escape metacharacters. We do reference fnmatch(3) for more details, but it may
be friendlier to point out this implication explicitly (especially for
people who want to know about backslash handling and search the
documentation for that word). There are also two cases that I've seen
some other backslash-escaping systems handle differently, so let's
describe those:
1. A backslash before any character treats that character literally,
even if it's not otherwise a meta-character. As opposed to
including the backslash itself (like "foo\bar" in shell expands to
"foo\bar") or forbidding it ("foo\zar" is required to produce a
diagnostic in C).
2. A backslash at the end of the string is an invalid pattern (and not
a literal backslash).
This second one in particular was a point of confusion between our
implementation and the one in JGit. Our wildmatch behavior matches what
POSIX specifies for fnmatch, so the code and documentation are in line.
But let's add a test to cover this case. Note that the behavior here
differs between wildmatch itself (which is what gitignore will use) and
pathspec matching (which will only turn to wildmatch if a literal match
fails). So we match "foo\" to "foo\" in pathspecs, but not via
gitignore.
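Both points can be checked directly against fnmatch(3), which our
wildmatch behavior follows here (a small standalone check):
    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
            /* fnmatch() returns 0 on a match, non-zero otherwise. */

            /* 1. "\z" escapes the (otherwise non-special) 'z', so the
             *    pattern matches a literal "z"; the backslash itself is
             *    not part of the match. */
            printf("%d\n", fnmatch("foo\\zar", "foozar", 0));   /* 0: match */

            /* 2. A trailing backslash makes the pattern invalid rather
             *    than matching a literal backslash. */
            printf("%d\n", fnmatch("foo\\", "foo\\", 0));       /* non-zero */
            return 0;
    }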
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The strategy in t1016-compatObjectFormat is to build two trees with
identical commits, one tree encoded in sha1 and the other encoded in
sha256, and to use the compatibility code to test and see if the two
trees are identical.
GPG signatures include the current time as part of the signature.
To make gpg deterministic I forced the use of gpg --faked-system-time.
Unfortunately I did not look closely enough.
By default gpg still allows time to move forward with --faked-system-time.
So in those rare instances when the system is heavily loaded and gpg runs
slower than usual, signatures over the exact same data differ
due to timestamps with a minuscule difference.
Reading through the gpg documentation with a close eye, time can be
frozen by including an exclamation point at the end of the argument to
--faked-system-time.
Add the exclamation point so gpg really runs with a fixed notion of time,
resulting in the exact same data having identical gpg signatures.
That is enough that I can run "t1016-compatObjectFormat.sh --stress"
and I don't see any failures.
It is possible a future change to gpg will make replay protection more
robust and not provide a way to allow two separate runs of gpg to
produce exactly the same signature for exactly the same data. If that
happens a deeper comparison of the two repositories will need to be
performed. A comparison that simply verifies the signatures and
compares the data for equality. For now that is a lot of work
for no gain so I am just documenting the possibility.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Update the usage string of `git bisect` and documentation to match each
other. While at it, also:
1. Move the synopsis of `git bisect` subcommands to the synopsis
section, so that the test `t0450-txt-doc-vs-help.sh` can pass.
2. Document the `git bisect next` subcommand, which exists in the code
but is missing from the documentation.
See also: [1].
[1]: https://lore.kernel.org/git/3DA38465-7636-4EEF-B074-53E4628F5355@gmail.com/
Suggested-by: Ben Knoble <ben.knoble@gmail.com>
Signed-off-by: Ruoyu Zhong <zhongruoyu@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|