diff --git a/Documentation/RelNotes/2.54.0.adoc b/Documentation/RelNotes/2.54.0.adoc
index 7c8c2fa466e0ec..207db16b80d7a7 100644
--- a/Documentation/RelNotes/2.54.0.adoc
+++ b/Documentation/RelNotes/2.54.0.adoc
@@ -36,6 +36,14 @@ UI, Workflows & Features
  * Extend the alias configuration syntax to allow aliases using
    characters outside ASCII alphanumeric (plus '-').
 
+ * A signature on a commit that was GPG-signed a long time ago ought
+   to still be valid after the key that was used to sign it has
+   expired, but we showed it in alarming red.
+
+ * "git subtree split --prefix=P " now checks the prefix P
+   against the tree of the given commit (which can be quite different
+   from the current working tree).
+
 
 Performance, Internal Implementation, Development Support etc.
 --------------------------------------------------------------
@@ -86,6 +94,19 @@ Performance, Internal Implementation, Development Support etc.
  * Some tests assumed "iconv" is available without honoring ICONV
    prerequisite, which has been corrected.
 
+ * Revamp object enumeration API around odb.
+
+ * Additional tests were introduced to exercise the interaction of
+   netrc authentication with authentication failure on the HTTP transport.
+
+ * A couple of bugs in the use of flag bits around the odb API have
+   been corrected, and the flag bits reordered.
+
+ * Plumb the gitk/git-gui build and install procedure into
+   meson-based builds.
+
+ * The code to accept shallow "git push" has been optimized.
+
 
 Fixes since v2.53
 -----------------
@@ -155,6 +176,9 @@ Fixes since v2.53
  * "git format-patch --from=" did not honor the command line option
    when writing out the cover letter, which has been corrected.
 
+ * Update the build procedure for mergetool documentation in meson-based builds.
+   (merge 58e4eeeeb5 pw/meson-doc-mergetool later to maint).
+
  * Other code cleanup, docfix, build fix, etc.
    (merge d79fff4a11 jk/remote-tracking-ref-leakfix later to maint).
    (merge 7a747f972d dd/t5403-modernise later to maint).
@@ -180,3 +204,6 @@ Fixes since v2.53
    (merge 5133837392 ps/ci-gitlab-msvc-updates later to maint).
    (merge 143e84958c db/doc-fetch-jobs-auto later to maint).
    (merge 0678e01f02 ap/use-test-seq-f-more later to maint).
+   (merge 96286f14b0 ty/symlinks-use-unsigned-for-bitset later to maint).
+   (merge b10e0cb1f3 kh/doc-am-xref later to maint).
+   (merge ed84bc1c0d kh/doc-patch-id-4 later to maint).
diff --git a/Documentation/config/am.adoc b/Documentation/config/am.adoc
index 5bcad2efb11639..e9561e12d7467e 100644
--- a/Documentation/config/am.adoc
+++ b/Documentation/config/am.adoc
@@ -1,14 +1,20 @@
 am.keepcr::
-	If true, git-am will call git-mailsplit for patches in mbox format
-	with parameter `--keep-cr`. In this case git-mailsplit will
+	If true, linkgit:git-am[1] will call linkgit:git-mailsplit[1]
+	for patches in mbox format with parameter `--keep-cr`. In this
+	case linkgit:git-mailsplit[1] will
 	not remove `\r` from lines ending with `\r\n`. Can be overridden
 	by giving `--no-keep-cr` from the command line.
-	See linkgit:git-am[1], linkgit:git-mailsplit[1].
 
 am.threeWay::
-	By default, `git am` will fail if the patch does not apply cleanly. When
-	set to true, this setting tells `git am` to fall back on 3-way merge if
-	the patch records the identity of blobs it is supposed to apply to and
-	we have those blobs available locally (equivalent to giving the `--3way`
-	option from the command line). Defaults to `false`.
-	See linkgit:git-am[1].
+	By default, linkgit:git-am[1] will fail if the patch does not
+	apply cleanly. When set to true, this setting tells
+	linkgit:git-am[1] to fall back on 3-way merge if the patch
+	records the identity of blobs it is supposed to apply to and we
+	have those blobs available locally (equivalent to giving the
+	`--3way` option from the command line). Defaults to `false`.
+
+am.messageId::
+	Add a `Message-ID` trailer based on the email header to the
+	commit when using linkgit:git-am[1] (see
+	linkgit:git-interpret-trailers[1]).
See also the `--message-id`
+	and `--no-message-id` options.
diff --git a/Documentation/git-am.adoc b/Documentation/git-am.adoc
index 972398d4575c85..384e0cd7f9b5d5 100644
--- a/Documentation/git-am.adoc
+++ b/Documentation/git-am.adoc
@@ -9,7 +9,7 @@ git-am - Apply a series of patches from a mailbox
 SYNOPSIS
 --------
 [verse]
-'git am' [--signoff] [--keep] [--[no-]keep-cr] [--[no-]utf8] [--no-verify]
+'git am' [--signoff] [--keep] [--[no-]keep-cr] [--[no-]utf8] [--[no-]verify]
	 [--[no-]3way] [--interactive] [--committer-date-is-author-date]
	 [--ignore-date] [--ignore-space-change | --ignore-whitespace]
	 [--whitespace=] [-C] [-p] [--directory=]
@@ -37,20 +37,21 @@ OPTIONS
 -s::
 --signoff::
-	Add a `Signed-off-by` trailer to the commit message, using
-	the committer identity of yourself.
-	See the signoff option in linkgit:git-commit[1] for more information.
+	Add a `Signed-off-by` trailer to the commit message (see
+	linkgit:git-interpret-trailers[1]), using the committer identity
+	of yourself. See the signoff option in linkgit:git-commit[1]
+	for more information.
 
 -k::
 --keep::
-	Pass `-k` flag to 'git mailinfo' (see linkgit:git-mailinfo[1]).
+	Pass `-k` flag to linkgit:git-mailinfo[1].
 
 --keep-non-patch::
-	Pass `-b` flag to 'git mailinfo' (see linkgit:git-mailinfo[1]).
+	Pass `-b` flag to linkgit:git-mailinfo[1].
 
 --keep-cr::
 --no-keep-cr::
-	With `--keep-cr`, call 'git mailsplit' (see linkgit:git-mailsplit[1])
+	With `--keep-cr`, call linkgit:git-mailsplit[1]
	with the same option, to prevent it from stripping CR at the end
	of lines. `am.keepcr` configuration variable can be used to specify
	the default behaviour. `--no-keep-cr` is useful to override `am.keepcr`.
@@ -65,7 +66,7 @@ OPTIONS
	Ignore scissors lines (see linkgit:git-mailinfo[1]).
 
 --quoted-cr=::
-	This flag will be passed down to 'git mailinfo' (see linkgit:git-mailinfo[1]).
+	This flag will be passed down to linkgit:git-mailinfo[1].
--empty=(drop|keep|stop)::
	How to handle an e-mail message lacking a patch:
@@ -83,10 +84,11 @@ OPTIONS
 
 -m::
 --message-id::
-	Pass the `-m` flag to 'git mailinfo' (see linkgit:git-mailinfo[1]),
-	so that the Message-ID header is added to the commit message.
-	The `am.messageid` configuration variable can be used to specify
-	the default behaviour.
+	Pass the `-m` flag to linkgit:git-mailinfo[1], so that the
+	`Message-ID` header is added as a trailer (see
+	linkgit:git-interpret-trailers[1]). The `am.messageid`
+	configuration variable can be used to specify the default
+	behaviour.
 
 --no-message-id::
	Do not add the Message-ID header to the commit message.
@@ -98,7 +100,7 @@ OPTIONS
 
 -u::
 --utf8::
-	Pass `-u` flag to 'git mailinfo' (see linkgit:git-mailinfo[1]).
+	Pass `-u` flag to linkgit:git-mailinfo[1].
	The proposed commit log message taken from the e-mail
	is re-coded into UTF-8 encoding (configuration variable
	`i18n.commitEncoding` can be used to specify the project's
@@ -108,8 +110,7 @@ This was optional in prior versions of git, but now it is the default.
 You can use `--no-utf8` to override this.
 
 --no-utf8::
-	Pass `-n` flag to 'git mailinfo' (see
-	linkgit:git-mailinfo[1]).
+	Pass `-n` flag to linkgit:git-mailinfo[1].
 
 -3::
 --3way::
@@ -132,9 +133,8 @@ include::rerere-options.adoc[]
 --exclude=::
 --include=::
 --reject::
-	These flags are passed to the 'git apply' (see linkgit:git-apply[1])
-	program that applies
-	the patch.
+	These flags are passed to the linkgit:git-apply[1] program that
+	applies the patch.
 +
 Valid for the `--whitespace` option are: `nowarn`, `warn`, `fix`,
 `error`, and `error-all`.
@@ -150,11 +150,14 @@
 --interactive::
	Run interactively.
 
+--verify::
 -n::
 --no-verify::
-	By default, the pre-applypatch and applypatch-msg hooks are run.
-	When any of `--no-verify` or `-n` is given, these are bypassed.
-	See also linkgit:githooks[5].
+	Run the `pre-applypatch` and `applypatch-msg` hooks.
This is the
+	default. Skip these hooks with `-n` or `--no-verify`. See also
+	linkgit:githooks[5].
++
+Note that `post-applypatch` cannot be skipped.
 
 --committer-date-is-author-date::
	By default the command records the date from the e-mail
@@ -205,7 +208,8 @@ applying.
	to the screen before exiting. This overrides the standard message
	informing you to use `--continue` or `--skip` to handle the
	failure. This is solely
-	for internal use between 'git rebase' and 'git am'.
+	for internal use between linkgit:git-rebase[1] and
+	linkgit:git-am[1].
 
 --abort::
	Restore the original branch and abort the patching operation.
@@ -223,7 +227,7 @@ applying.
	failure again.
 
 --show-current-patch[=(diff|raw)]::
-	Show the message at which `git am` has stopped due to
+	Show the message at which linkgit:git-am[1] has stopped due to
	conflicts. If `raw` is specified, show the raw contents of the
	e-mail message; if `diff`, show the diff portion only.
	Defaults to `raw`.
@@ -259,7 +263,7 @@ include::format-patch-end-of-commit-message.adoc[]
 This means that the contents of the commit message can inadvertently
 interrupt the processing (see the <> section below).
 
-When initially invoking `git am`, you give it the names of the mailboxes
+When initially invoking linkgit:git-am[1], you give it the names of the mailboxes
 to process. Upon seeing the first patch that does not apply, it
 aborts in the middle. You can recover from this in one of two ways:
@@ -277,8 +281,8 @@ names.
 
 Before any patches are applied, ORIG_HEAD is set to the tip of the
 current branch. This is useful if you have problems with multiple
-commits, like running 'git am' on the wrong branch or an error in the
-commits that is more easily fixed by changing the mailbox (e.g.
+commits, like running linkgit:git-am[1] on the wrong branch or an error
+in the commits that is more easily fixed by changing the mailbox (e.g.
 errors in the "From:" lines).
[[caveats]]
@@ -294,6 +298,8 @@ This command can run `applypatch-msg`, `pre-applypatch`,
 and `post-applypatch` hooks. See linkgit:githooks[5] for more
 information.
 
+See the `--verify`/`-n`/`--no-verify` options.
+
 CONFIGURATION
 -------------
diff --git a/Documentation/git-patch-id.adoc b/Documentation/git-patch-id.adoc
index 013e1a6190681b..05859990c8e1fd 100644
--- a/Documentation/git-patch-id.adoc
+++ b/Documentation/git-patch-id.adoc
@@ -3,7 +3,7 @@ git-patch-id(1)
 NAME
 ----
-git-patch-id - Compute unique ID for a patch
+git-patch-id - Compute unique IDs for patches
 
 SYNOPSIS
 --------
@@ -12,7 +12,7 @@ git patch-id [--stable | --unstable | --verbatim]
 
 DESCRIPTION
 -----------
-Read a patch from the standard input and compute the patch ID for it.
+Read patches from standard input and compute the patch IDs.
 
 A "patch ID" is nothing but a sum of SHA-1 of the file diffs associated with a
 patch, with line numbers ignored. As such, it's "reasonably stable", but at
@@ -25,7 +25,8 @@ When dealing with `git diff-tree --patch` output, it takes advantage of
 the fact that the patch is prefixed with the object name of the commit,
 and outputs two 40-byte hexadecimal strings. The first string is the
 patch ID, and the second string is the commit ID.
-This can be used to make a mapping from patch ID to commit ID.
+This can be used to make a mapping from patch ID to commit ID for a
+set or range of commits.
 
 OPTIONS
 -------
@@ -67,6 +68,50 @@ This is the default if `patchid.stable` is set to `true`.
 +
 This is the default.
 
+EXAMPLES
+--------
+
+linkgit:git-cherry[1] shows what commits from a branch have patch ID
+equivalent commits in some upstream branch. But it only tells you
+whether such a commit exists or not. What if you wanted to know the
+relevant commits in the upstream?
We can use this command to make a
+mapping between your branch and the upstream branch:
+
+----
+#!/bin/sh
+
+upstream="$1"
+branch="$2"
+test -z "$branch" && branch=HEAD
+limit="$3"
+if test -n "$limit"
+then
+	tail_opts="$limit".."$upstream"
+else
+	since=$(git log --format=%aI "$upstream".."$branch" | tail -1)
+	tail_opts=--since="$since"' '"$upstream"
+fi
+for_branch=$(mktemp)
+for_upstream=$(mktemp)
+
+git rev-list --no-merges "$upstream".."$branch" |
+	git diff-tree --patch --stdin |
+	git patch-id --stable | sort >"$for_branch"
+git rev-list --no-merges $tail_opts |
+	git diff-tree --patch --stdin |
+	git patch-id --stable | sort >"$for_upstream"
+join -a1 "$for_branch" "$for_upstream" | cut -d' ' -f2,3
+rm "$for_branch"
+rm "$for_upstream"
+----
+
+Now the first column shows the commit from your branch and the second
+column shows the patch ID equivalent commit, if it exists.
+
+SEE ALSO
+--------
+linkgit:git-cherry[1]
+
 GIT
 ---
 Part of the linkgit:git[1] suite
diff --git a/Documentation/meson.build b/Documentation/meson.build
index fd2e8cc02d689f..d6365b888bbed3 100644
--- a/Documentation/meson.build
+++ b/Documentation/meson.build
@@ -354,13 +354,10 @@ foreach mode : [ 'diff', 'merge' ]
       command: [
         shell,
         '@INPUT@',
-        '..',
+        meson.project_source_root(),
         mode,
         '@OUTPUT@'
       ],
-      env: [
-        'MERGE_TOOLS_DIR=' + meson.project_source_root() / 'mergetools',
-      ],
       input: 'generate-mergetool-list.sh',
       output: 'mergetools-' + mode + '.adoc',
     )
diff --git a/builtin/backfill.c b/builtin/backfill.c
index e80fc1b694df61..d8cb3b0eba799d 100644
--- a/builtin/backfill.c
+++ b/builtin/backfill.c
@@ -67,8 +67,7 @@ static int fill_missing_blobs(const char *path UNUSED,
		return 0;
 
	for (size_t i = 0; i < list->nr; i++) {
-		if (!odb_has_object(ctx->repo->objects, &list->oid[i],
-				    OBJECT_INFO_FOR_PREFETCH))
+		if (!odb_has_object(ctx->repo->objects, &list->oid[i], 0))
			oid_array_append(&ctx->current_batch, &list->oid[i]);
	}
 
diff --git a/builtin/cat-file.c b/builtin/cat-file.c index
df8e87a81f5eee..53ffe80c79eeac 100644 --- a/builtin/cat-file.c +++ b/builtin/cat-file.c @@ -806,11 +806,14 @@ struct for_each_object_payload { void *payload; }; -static int batch_one_object_loose(const struct object_id *oid, - const char *path UNUSED, - void *_payload) +static int batch_one_object_oi(const struct object_id *oid, + struct object_info *oi, + void *_payload) { struct for_each_object_payload *payload = _payload; + if (oi && oi->whence == OI_PACKED) + return payload->callback(oid, oi->u.packed.pack, oi->u.packed.offset, + payload->payload); return payload->callback(oid, NULL, 0, payload->payload); } @@ -846,8 +849,21 @@ static void batch_each_object(struct batch_options *opt, .payload = _payload, }; struct bitmap_index *bitmap = NULL; + struct odb_source *source; - for_each_loose_object(the_repository->objects, batch_one_object_loose, &payload, 0); + /* + * TODO: we still need to tap into implementation details of the object + * database sources. Ideally, we should extend `odb_for_each_object()` + * to handle object filters itself so that we can move the filtering + * logic into the individual sources. 
+ */ + odb_prepare_alternates(the_repository->objects); + for (source = the_repository->objects->sources; source; source = source->next) { + int ret = odb_source_loose_for_each_object(source, NULL, batch_one_object_oi, + &payload, flags); + if (ret) + break; + } if (opt->objects_filter.choice != LOFC_DISABLED && (bitmap = prepare_bitmap_git(the_repository)) && @@ -863,8 +879,14 @@ static void batch_each_object(struct batch_options *opt, &payload, flags); } } else { - for_each_packed_object(the_repository, batch_one_object_packed, - &payload, flags); + struct object_info oi = { 0 }; + + for (source = the_repository->objects->sources; source; source = source->next) { + int ret = packfile_store_for_each_object(source->packfiles, &oi, + batch_one_object_oi, &payload, flags); + if (ret) + break; + } } free_bitmap_index(bitmap); @@ -924,7 +946,7 @@ static int batch_objects(struct batch_options *opt) cb.seen = &seen; batch_each_object(opt, batch_unordered_object, - FOR_EACH_OBJECT_PACK_ORDER, &cb); + ODB_FOR_EACH_OBJECT_PACK_ORDER, &cb); oidset_clear(&seen); } else { diff --git a/builtin/fsck.c b/builtin/fsck.c index 0512f78a87fe1d..384d47ee7731d2 100644 --- a/builtin/fsck.c +++ b/builtin/fsck.c @@ -162,7 +162,8 @@ static int mark_object(struct object *obj, enum object_type type, return 0; if (!(obj->flags & HAS_OBJ)) { - if (parent && !odb_has_object(the_repository->objects, &obj->oid, 1)) { + if (parent && !odb_has_object(the_repository->objects, &obj->oid, + HAS_OBJECT_RECHECK_PACKED)) { printf_ln(_("broken link from %7s %s\n" " to %7s %s"), printable_type(&parent->oid, parent->type), @@ -219,15 +220,17 @@ static int mark_used(struct object *obj, enum object_type type UNUSED, return 0; } -static void mark_unreachable_referents(const struct object_id *oid) +static int mark_unreachable_referents(const struct object_id *oid, + struct object_info *oi UNUSED, + void *data UNUSED) { struct fsck_options options = FSCK_OPTIONS_DEFAULT; struct object *obj = 
lookup_object(the_repository, oid); if (!obj || !(obj->flags & HAS_OBJ)) - return; /* not part of our original set */ + return 0; /* not part of our original set */ if (obj->flags & REACHABLE) - return; /* reachable objects already traversed */ + return 0; /* reachable objects already traversed */ /* * Avoid passing OBJ_NONE to fsck_walk, which will parse the object @@ -244,22 +247,7 @@ static void mark_unreachable_referents(const struct object_id *oid) fsck_walk(obj, NULL, &options); if (obj->type == OBJ_TREE) free_tree_buffer((struct tree *)obj); -} -static int mark_loose_unreachable_referents(const struct object_id *oid, - const char *path UNUSED, - void *data UNUSED) -{ - mark_unreachable_referents(oid); - return 0; -} - -static int mark_packed_unreachable_referents(const struct object_id *oid, - struct packed_git *pack UNUSED, - uint32_t pos UNUSED, - void *data UNUSED) -{ - mark_unreachable_referents(oid); return 0; } @@ -395,12 +383,8 @@ static void check_connectivity(void) * and ignore any that weren't present in our earlier * traversal. */ - for_each_loose_object(the_repository->objects, - mark_loose_unreachable_referents, NULL, 0); - for_each_packed_object(the_repository, - mark_packed_unreachable_referents, - NULL, - 0); + odb_for_each_object(the_repository->objects, NULL, + mark_unreachable_referents, NULL, 0); } /* Look up all the requirements, warn about missing objects.. 
*/ @@ -900,26 +884,12 @@ static void fsck_index(struct index_state *istate, const char *index_path, fsck_resolve_undo(istate, index_path); } -static void mark_object_for_connectivity(const struct object_id *oid) +static int mark_object_for_connectivity(const struct object_id *oid, + struct object_info *oi UNUSED, + void *cb_data UNUSED) { struct object *obj = lookup_unknown_object(the_repository, oid); obj->flags |= HAS_OBJ; -} - -static int mark_loose_for_connectivity(const struct object_id *oid, - const char *path UNUSED, - void *data UNUSED) -{ - mark_object_for_connectivity(oid); - return 0; -} - -static int mark_packed_for_connectivity(const struct object_id *oid, - struct packed_git *pack UNUSED, - uint32_t pos UNUSED, - void *data UNUSED) -{ - mark_object_for_connectivity(oid); return 0; } @@ -1068,10 +1038,8 @@ int cmd_fsck(int argc, odb_reprepare(the_repository->objects); if (connectivity_only) { - for_each_loose_object(the_repository->objects, - mark_loose_for_connectivity, NULL, 0); - for_each_packed_object(the_repository, - mark_packed_for_connectivity, NULL, 0); + odb_for_each_object(the_repository->objects, NULL, + mark_object_for_connectivity, NULL, 0); } else { odb_prepare_alternates(the_repository->objects); for (source = the_repository->objects->sources; source; source = source->next) diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c index 433d77cf278320..c1ee4d5ed7f675 100644 --- a/builtin/pack-objects.c +++ b/builtin/pack-objects.c @@ -3912,7 +3912,7 @@ static void read_packs_list_from_stdin(struct rev_info *revs) for_each_object_in_pack(p, add_object_entry_from_pack, revs, - FOR_EACH_OBJECT_PACK_ORDER); + ODB_FOR_EACH_OBJECT_PACK_ORDER); } strbuf_release(&buf); @@ -4325,25 +4325,12 @@ static void show_edge(struct commit *commit) } static int add_object_in_unpacked_pack(const struct object_id *oid, - struct packed_git *pack, - uint32_t pos, + struct object_info *oi, void *data UNUSED) { if (cruft) { - off_t offset; - time_t mtime; - - 
if (pack->is_cruft) { - if (load_pack_mtimes(pack) < 0) - die(_("could not load cruft pack .mtimes")); - mtime = nth_packed_mtime(pack, pos); - } else { - mtime = pack->mtime; - } - offset = nth_packed_object_offset(pack, pos); - - add_cruft_object_entry(oid, OBJ_NONE, pack, offset, - NULL, mtime); + add_cruft_object_entry(oid, OBJ_NONE, oi->u.packed.pack, + oi->u.packed.offset, NULL, *oi->mtimep); } else { add_object_entry(oid, OBJ_NONE, "", 0); } @@ -4352,14 +4339,25 @@ static int add_object_in_unpacked_pack(const struct object_id *oid, static void add_objects_in_unpacked_packs(void) { - if (for_each_packed_object(to_pack.repo, - add_object_in_unpacked_pack, - NULL, - FOR_EACH_OBJECT_PACK_ORDER | - FOR_EACH_OBJECT_LOCAL_ONLY | - FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS | - FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS)) - die(_("cannot open pack index")); + struct odb_source *source; + time_t mtime; + struct object_info oi = { + .mtimep = &mtime, + }; + + odb_prepare_alternates(to_pack.repo->objects); + for (source = to_pack.repo->objects->sources; source; source = source->next) { + if (!source->local) + continue; + + if (packfile_store_for_each_object(source->packfiles, &oi, + add_object_in_unpacked_pack, NULL, + ODB_FOR_EACH_OBJECT_PACK_ORDER | + ODB_FOR_EACH_OBJECT_LOCAL_ONLY | + ODB_FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS | + ODB_FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS)) + die(_("cannot open pack index")); + } } static int add_loose_object(const struct object_id *oid, const char *path, diff --git a/commit-graph.c b/commit-graph.c index 1fcceb3920ef3f..d250a729b1a099 100644 --- a/commit-graph.c +++ b/commit-graph.c @@ -1479,30 +1479,38 @@ static int write_graph_chunk_bloom_data(struct hashfile *f, return 0; } +static int add_packed_commits_oi(const struct object_id *oid, + struct object_info *oi, + void *data) +{ + struct write_commit_graph_context *ctx = (struct write_commit_graph_context*)data; + + if (ctx->progress) + display_progress(ctx->progress, 
++ctx->progress_done); + + if (*oi->typep != OBJ_COMMIT) + return 0; + + oid_array_append(&ctx->oids, oid); + set_commit_pos(ctx->r, oid); + + return 0; +} + static int add_packed_commits(const struct object_id *oid, struct packed_git *pack, uint32_t pos, void *data) { - struct write_commit_graph_context *ctx = (struct write_commit_graph_context*)data; enum object_type type; off_t offset = nth_packed_object_offset(pack, pos); struct object_info oi = OBJECT_INFO_INIT; - if (ctx->progress) - display_progress(ctx->progress, ++ctx->progress_done); - oi.typep = &type; if (packed_object_info(pack, offset, &oi) < 0) die(_("unable to get type of object %s"), oid_to_hex(oid)); - if (type != OBJ_COMMIT) - return 0; - - oid_array_append(&ctx->oids, oid); - set_commit_pos(ctx->r, oid); - - return 0; + return add_packed_commits_oi(oid, &oi, data); } static void add_missing_parents(struct write_commit_graph_context *ctx, struct commit *commit) @@ -1927,7 +1935,7 @@ static int fill_oids_from_packs(struct write_commit_graph_context *ctx, goto cleanup; } for_each_object_in_pack(p, add_packed_commits, ctx, - FOR_EACH_OBJECT_PACK_ORDER); + ODB_FOR_EACH_OBJECT_PACK_ORDER); close_pack(p); free(p); } @@ -1959,13 +1967,23 @@ static int fill_oids_from_commits(struct write_commit_graph_context *ctx, static void fill_oids_from_all_packs(struct write_commit_graph_context *ctx) { + struct odb_source *source; + enum object_type type; + struct object_info oi = { + .typep = &type, + }; + if (ctx->report_progress) ctx->progress = start_delayed_progress( ctx->r, _("Finding commits for commit graph among packed objects"), ctx->approx_nr_objects); - for_each_packed_object(ctx->r, add_packed_commits, ctx, - FOR_EACH_OBJECT_PACK_ORDER); + + odb_prepare_alternates(ctx->r->objects); + for (source = ctx->r->objects->sources; source; source = source->next) + packfile_store_for_each_object(source->packfiles, &oi, add_packed_commits_oi, + ctx, ODB_FOR_EACH_OBJECT_PACK_ORDER); + if (ctx->progress_done < 
ctx->approx_nr_objects) display_progress(ctx->progress, ctx->approx_nr_objects); stop_progress(&ctx->progress); diff --git a/commit.c b/commit.c index d16ae7334510e0..0ffdd6679eec80 100644 --- a/commit.c +++ b/commit.c @@ -42,13 +42,35 @@ const char *commit_type = "commit"; struct commit *lookup_commit_reference_gently(struct repository *r, const struct object_id *oid, int quiet) { - struct object *obj = deref_tag(r, - parse_object(r, oid), - NULL, 0); + const struct object_id *maybe_peeled; + struct object_id peeled_oid; + struct commit *commit; + enum object_type type; - if (!obj) + switch (peel_object_ext(r, oid, &peeled_oid, 0, &type)) { + case PEEL_NON_TAG: + maybe_peeled = oid; + break; + case PEEL_PEELED: + maybe_peeled = &peeled_oid; + break; + default: return NULL; - return object_as_type(obj, OBJ_COMMIT, quiet); + } + + if (type != OBJ_COMMIT) { + if (!quiet) + error(_("object %s is a %s, not a %s"), + oid_to_hex(oid), type_name(type), + type_name(OBJ_COMMIT)); + return NULL; + } + + commit = lookup_commit(r, maybe_peeled); + if (!commit || repo_parse_commit_gently(r, commit, quiet) < 0) + return NULL; + + return commit; } struct commit *lookup_commit_reference(struct repository *r, const struct object_id *oid) diff --git a/commit.h b/commit.h index 1635de418b59e0..f2f39e1a89ef62 100644 --- a/commit.h +++ b/commit.h @@ -103,16 +103,26 @@ static inline int repo_parse_commit(struct repository *r, struct commit *item) return repo_parse_commit_gently(r, item, 0); } +void unparse_commit(struct repository *r, const struct object_id *oid); + static inline int repo_parse_commit_no_graph(struct repository *r, struct commit *commit) { + /* + * When the commit has been parsed but its tree wasn't populated then + * this is an indicator that it has been parsed via the commit-graph. + * We cannot read the tree via the commit-graph, as we're explicitly + * told not to use it. We thus have to first un-parse the object so + * that we can re-parse it without the graph. 
+ */ + if (commit->object.parsed && !commit->maybe_tree) + unparse_commit(r, &commit->object.oid); + return repo_parse_commit_internal(r, commit, 0, 0); } void parse_commit_or_die(struct commit *item); -void unparse_commit(struct repository *r, const struct object_id *oid); - struct buffer_slab; struct buffer_slab *allocate_commit_buffer_slab(void); void free_commit_buffer_slab(struct buffer_slab *bs); diff --git a/contrib/coccinelle/commit.cocci b/contrib/coccinelle/commit.cocci index c5284604c566bc..42725161e9f660 100644 --- a/contrib/coccinelle/commit.cocci +++ b/contrib/coccinelle/commit.cocci @@ -26,7 +26,7 @@ expression s; // repo_get_commit_tree() on the LHS. @@ identifier f != { repo_get_commit_tree, get_commit_tree_in_graph_one, - load_tree_for_commit, set_commit_tree }; + load_tree_for_commit, set_commit_tree, repo_parse_commit_no_graph }; expression c; @@ f(...) {<... diff --git a/contrib/subtree/git-subtree.sh b/contrib/subtree/git-subtree.sh index 17106d1a721519..d7f9121f2f8ee9 100755 --- a/contrib/subtree/git-subtree.sh +++ b/contrib/subtree/git-subtree.sh @@ -257,6 +257,9 @@ main () { test -e "$arg_prefix" && die "fatal: prefix '$arg_prefix' already exists." ;; + split) + # checked later against the commit, not the working tree + ;; *) test -e "$arg_prefix" || die "fatal: '$arg_prefix' does not exist; use 'git subtree add'" @@ -966,6 +969,12 @@ cmd_split () { else die "fatal: you must provide exactly one revision, and optionally a repository. Got: '$*'" fi + + # Now validate prefix against the commit, not the working tree + if ! 
git cat-file -e "$rev:$dir" 2>/dev/null + then + die "fatal: '$dir' does not exist; use 'git subtree add'" + fi repository="" if test "$#" = 2 then diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh index e7040718f221f0..8024703cad90e8 100755 --- a/contrib/subtree/t/t7900-subtree.sh +++ b/contrib/subtree/t/t7900-subtree.sh @@ -368,6 +368,28 @@ test_expect_success 'split requires path given by option --prefix must exist' ' ) ' +test_expect_success 'split works when prefix exists in commit but not in working tree' ' + subtree_test_create_repo "$test_count" && + ( + cd "$test_count" && + + # create subtree + mkdir pkg && + echo ok >pkg/file && + git add pkg && + git commit -m "add pkg" && + good=$(git rev-parse HEAD) && + + # remove it from working tree in later commit + git rm -r pkg && + git commit -m "remove pkg" && + + # must still be able to split using the old commit + git subtree split --prefix=pkg "$good" >out && + test -s out + ) +' + test_expect_success 'split rejects flags for add' ' subtree_test_create_repo "$test_count" && subtree_test_create_repo "$test_count/sub proj" && diff --git a/gpg-interface.c b/gpg-interface.c index 87fb6605fba174..7e6a1520bd1535 100644 --- a/gpg-interface.c +++ b/gpg-interface.c @@ -382,7 +382,8 @@ static int verify_gpg_signed_buffer(struct signature_check *sigc, delete_tempfile(&temp); - ret |= !strstr(gpg_stdout.buf, "\n[GNUPG:] GOODSIG "); + ret |= !strstr(gpg_stdout.buf, "\n[GNUPG:] GOODSIG ") && + !strstr(gpg_stdout.buf, "\n[GNUPG:] EXPKEYSIG "); sigc->output = strbuf_detach(&gpg_stderr, NULL); sigc->gpg_status = strbuf_detach(&gpg_stdout, NULL); @@ -680,7 +681,7 @@ int check_signature(struct signature_check *sigc, if (status && !sigc->output) return !!status; - status |= sigc->result != 'G'; + status |= sigc->result != 'G' && sigc->result != 'Y'; status |= sigc->trust_level < configured_min_trust_level; return !!status; diff --git a/meson.build b/meson.build index 
762e2d0fc073df..6f155beafa4922 100644 --- a/meson.build +++ b/meson.build @@ -239,7 +239,9 @@ git = find_program('git', dirs: program_path, native: true, required: false) sed = find_program('sed', dirs: program_path, native: true) shell = find_program('sh', dirs: program_path, native: true) tar = find_program('tar', dirs: program_path, native: true) +tclsh = find_program('tclsh', required: get_option('git_gui'), native: false) time = find_program('time', dirs: program_path, required: get_option('benchmarks')) +wish = find_program('wish', required: get_option('git_gui').enabled() or get_option('gitk').enabled(), native: false) # Detect the target shell that is used by Git at runtime. Note that we prefer # "/bin/sh" over a PATH-based lookup, which provides a working shell on most @@ -2254,6 +2256,16 @@ configure_file( configuration: build_options_config, ) +gitk_option = get_option('gitk').disable_auto_if(not wish.found()) +if gitk_option.allowed() + subproject('gitk') +endif + +git_gui_option = get_option('git_gui').disable_auto_if(not tclsh.found() or not wish.found()) +if git_gui_option.allowed() + subproject('git-gui') +endif + # Development environments can be used via `meson devenv -C `. This # allows you to execute test scripts directly with the built Git version and # puts the built version of Git in your PATH. 
@@ -2280,6 +2292,8 @@ summary({ 'curl': curl, 'expat': expat, 'gettext': intl, + 'gitk': gitk_option.allowed(), + 'git-gui': git_gui_option.allowed(), 'gitweb': gitweb_option.allowed(), 'iconv': iconv, 'pcre2': pcre2, diff --git a/meson_options.txt b/meson_options.txt index e0be260ae1bce8..659cbb218f46e0 100644 --- a/meson_options.txt +++ b/meson_options.txt @@ -43,6 +43,10 @@ option('expat', type: 'feature', value: 'enabled', description: 'Build helpers used to push to remotes with the HTTP transport.') option('gettext', type: 'feature', value: 'auto', description: 'Build translation files.') +option('gitk', type: 'feature', value: 'auto', + description: 'Build the Gitk graphical repository browser. Requires Tcl/Tk.') +option('git_gui', type: 'feature', value: 'auto', + description: 'Build the git-gui graphical user interface for Git. Requires Tcl/Tk.') option('gitweb', type: 'feature', value: 'auto', description: 'Build Git web interface. Requires Perl.') option('iconv', type: 'feature', value: 'auto', diff --git a/object-file.c b/object-file.c index 1b62996ef0a96f..b8265021f988eb 100644 --- a/object-file.c +++ b/object-file.c @@ -165,30 +165,13 @@ int stream_object_signature(struct repository *r, const struct object_id *oid) } /* - * Find "oid" as a loose object in given source. - * Returns 0 on success, negative on failure. + * Find "oid" as a loose object in given source, open the object and return its + * file descriptor. Returns the file descriptor on success, negative on failure. * * The "path" out-parameter will give the path of the object we found (if any). * Note that it may point to static storage and is only valid until another * call to stat_loose_object(). 
*/ -static int stat_loose_object(struct odb_source_loose *loose, - const struct object_id *oid, - struct stat *st, const char **path) -{ - static struct strbuf buf = STRBUF_INIT; - - *path = odb_loose_path(loose->source, &buf, oid); - if (!lstat(*path, st)) - return 0; - - return -1; -} - -/* - * Like stat_loose_object(), but actually open the object and return the - * descriptor. See the caveats on the "path" parameter above. - */ static int open_loose_object(struct odb_source_loose *loose, const struct object_id *oid, const char **path) { @@ -412,19 +395,21 @@ static int parse_loose_header(const char *hdr, struct object_info *oi) return 0; } -int odb_source_loose_read_object_info(struct odb_source *source, +static int read_object_info_from_path(struct odb_source *source, + const char *path, const struct object_id *oid, - struct object_info *oi, int flags) + struct object_info *oi, + enum object_info_flags flags) { int ret; int fd; unsigned long mapsize; - const char *path; void *map = NULL; git_zstream stream, *stream_to_end = NULL; char hdr[MAX_HEADER_LEN]; unsigned long size_scratch; enum object_type type_scratch; + struct stat st; /* * If we don't care about type or size, then we don't @@ -437,24 +422,28 @@ int odb_source_loose_read_object_info(struct odb_source *source, if (!oi || (!oi->typep && !oi->sizep && !oi->contentp)) { struct stat st; - if ((!oi || !oi->disk_sizep) && (flags & OBJECT_INFO_QUICK)) { + if ((!oi || (!oi->disk_sizep && !oi->mtimep)) && (flags & OBJECT_INFO_QUICK)) { ret = quick_has_loose(source->loose, oid) ? 
0 : -1; goto out; } - if (stat_loose_object(source->loose, oid, &st, &path) < 0) { + if (lstat(path, &st) < 0) { ret = -1; goto out; } - if (oi && oi->disk_sizep) - *oi->disk_sizep = st.st_size; + if (oi) { + if (oi->disk_sizep) + *oi->disk_sizep = st.st_size; + if (oi->mtimep) + *oi->mtimep = st.st_mtime; + } ret = 0; goto out; } - fd = open_loose_object(source->loose, oid, &path); + fd = git_open(path); if (fd < 0) { if (errno != ENOENT) error_errno(_("unable to open loose object %s"), oid_to_hex(oid)); @@ -462,7 +451,21 @@ int odb_source_loose_read_object_info(struct odb_source *source, goto out; } - map = map_fd(fd, path, &mapsize); + if (fstat(fd, &st)) { + close(fd); + ret = -1; + goto out; + } + + mapsize = xsize_t(st.st_size); + if (!mapsize) { + close(fd); + ret = error(_("object file %s is empty"), path); + goto out; + } + + map = xmmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, fd, 0); + close(fd); if (!map) { ret = -1; goto out; @@ -470,6 +473,8 @@ int odb_source_loose_read_object_info(struct odb_source *source, if (oi->disk_sizep) *oi->disk_sizep = mapsize; + if (oi->mtimep) + *oi->mtimep = st.st_mtime; stream_to_end = &stream; @@ -533,6 +538,16 @@ int odb_source_loose_read_object_info(struct odb_source *source, return ret; } +int odb_source_loose_read_object_info(struct odb_source *source, + const struct object_id *oid, + struct object_info *oi, + enum object_info_flags flags) +{ + static struct strbuf buf = STRBUF_INIT; + odb_loose_path(source, &buf, oid); + return read_object_info_from_path(source, buf.buf, oid, oi, flags); +} + static void hash_object_body(const struct git_hash_algo *algo, struct git_hash_ctx *c, const void *buf, unsigned long len, struct object_id *oid, @@ -719,7 +734,8 @@ struct odb_transaction_files { static void prepare_loose_object_transaction(struct odb_transaction *base) { - struct odb_transaction_files *transaction = (struct odb_transaction_files *)base; + struct odb_transaction_files *transaction = + container_of_or_null(base, 
struct odb_transaction_files, base); /* * We lazily create the temporary object directory @@ -738,7 +754,8 @@ static void prepare_loose_object_transaction(struct odb_transaction *base) static void fsync_loose_object_transaction(struct odb_transaction *base, int fd, const char *filename) { - struct odb_transaction_files *transaction = (struct odb_transaction_files *)base; + struct odb_transaction_files *transaction = + container_of_or_null(base, struct odb_transaction_files, base); /* * If we have an active ODB transaction, we issue a call that @@ -1634,11 +1651,14 @@ int index_fd(struct index_state *istate, struct object_id *oid, type, path, flags); } else { struct object_database *odb = the_repository->objects; + struct odb_transaction_files *files_transaction; struct odb_transaction *transaction; transaction = odb_transaction_begin(odb); - ret = index_blob_packfile_transaction((struct odb_transaction_files *)odb->transaction, - oid, fd, + files_transaction = container_of(odb->transaction, + struct odb_transaction_files, + base); + ret = index_blob_packfile_transaction(files_transaction, oid, fd, xsize_t(st->st_size), path, flags); odb_transaction_commit(transaction); @@ -1792,24 +1812,52 @@ int for_each_loose_file_in_source(struct odb_source *source, return r; } -int for_each_loose_object(struct object_database *odb, - each_loose_object_fn cb, void *data, - enum for_each_object_flags flags) -{ +struct for_each_object_wrapper_data { struct odb_source *source; + const struct object_info *request; + odb_for_each_object_cb cb; + void *cb_data; +}; - odb_prepare_alternates(odb); - for (source = odb->sources; source; source = source->next) { - int r = for_each_loose_file_in_source(source, cb, NULL, - NULL, data); - if (r) - return r; +static int for_each_object_wrapper_cb(const struct object_id *oid, + const char *path, + void *cb_data) +{ + struct for_each_object_wrapper_data *data = cb_data; - if (flags & FOR_EACH_OBJECT_LOCAL_ONLY) - break; + if (data->request) { + 
struct object_info oi = *data->request; + + if (read_object_info_from_path(data->source, path, oid, &oi, 0) < 0) + return -1; + + return data->cb(oid, &oi, data->cb_data); + } else { + return data->cb(oid, NULL, data->cb_data); } +} - return 0; +int odb_source_loose_for_each_object(struct odb_source *source, + const struct object_info *request, + odb_for_each_object_cb cb, + void *cb_data, + unsigned flags) +{ + struct for_each_object_wrapper_data data = { + .source = source, + .request = request, + .cb = cb, + .cb_data = cb_data, + }; + + /* There are no loose promisor objects, so we can return immediately. */ + if ((flags & ODB_FOR_EACH_OBJECT_PROMISOR_ONLY)) + return 0; + if ((flags & ODB_FOR_EACH_OBJECT_LOCAL_ONLY) && !source->local) + return 0; + + return for_each_loose_file_in_source(source, for_each_object_wrapper_cb, + NULL, NULL, &data); } static int append_loose_object(const struct object_id *oid, @@ -1992,7 +2040,8 @@ int read_loose_object(struct repository *repo, static void odb_transaction_files_commit(struct odb_transaction *base) { - struct odb_transaction_files *transaction = (struct odb_transaction_files *)base; + struct odb_transaction_files *transaction = + container_of(base, struct odb_transaction_files, base); flush_loose_object_transaction(transaction); flush_packfile_transaction(transaction); @@ -2047,7 +2096,8 @@ struct odb_loose_read_stream { static ssize_t read_istream_loose(struct odb_read_stream *_st, char *buf, size_t sz) { - struct odb_loose_read_stream *st = (struct odb_loose_read_stream *)_st; + struct odb_loose_read_stream *st = + container_of(_st, struct odb_loose_read_stream, base); size_t total_read = 0; switch (st->z_state) { @@ -2093,7 +2143,9 @@ static ssize_t read_istream_loose(struct odb_read_stream *_st, char *buf, size_t static int close_istream_loose(struct odb_read_stream *_st) { - struct odb_loose_read_stream *st = (struct odb_loose_read_stream *)_st; + struct odb_loose_read_stream *st = + container_of(_st, struct 
odb_loose_read_stream, base); + if (st->z_state == ODB_LOOSE_READ_STREAM_INUSE) git_inflate_end(&st->z); munmap(st->mapped, st->mapsize); diff --git a/object-file.h b/object-file.h index a62d0de39459b1..47fad6663ba5fd 100644 --- a/object-file.h +++ b/object-file.h @@ -47,7 +47,8 @@ void odb_source_loose_reprepare(struct odb_source *source); int odb_source_loose_read_object_info(struct odb_source *source, const struct object_id *oid, - struct object_info *oi, int flags); + struct object_info *oi, + enum object_info_flags flags); int odb_source_loose_read_object_stream(struct odb_read_stream **out, struct odb_source *source, @@ -126,16 +127,17 @@ int for_each_loose_file_in_source(struct odb_source *source, void *data); /* - * Iterate over all accessible loose objects without respect to - * reachability. By default, this includes both local and alternate objects. - * The order in which objects are visited is unspecified. - * - * Any flags specific to packs are ignored. + * Iterate through all loose objects in the given object database source and + * invoke the callback function for each of them. If an object info request is + * given, then the object info will be read for every individual object and + * passed to the callback as if `odb_source_loose_read_object_info()` was + * called for the object. 
*/ -int for_each_loose_object(struct object_database *odb, - each_loose_object_fn, void *, - enum for_each_object_flags flags); - +int odb_source_loose_for_each_object(struct odb_source *source, + const struct object_info *request, + odb_for_each_object_cb cb, + void *cb_data, + unsigned flags); /** * format_object_header() is a thin wrapper around s xsnprintf() that diff --git a/object.c b/object.c index 4669b8d65e74ab..99b6df3780475e 100644 --- a/object.c +++ b/object.c @@ -207,10 +207,11 @@ struct object *lookup_object_by_type(struct repository *r, } } -enum peel_status peel_object(struct repository *r, - const struct object_id *name, - struct object_id *oid, - unsigned flags) +enum peel_status peel_object_ext(struct repository *r, + const struct object_id *name, + struct object_id *oid, + unsigned flags, + enum object_type *typep) { struct object *o = lookup_unknown_object(r, name); @@ -220,8 +221,10 @@ enum peel_status peel_object(struct repository *r, return PEEL_INVALID; } - if (o->type != OBJ_TAG) + if (o->type != OBJ_TAG) { + *typep = o->type; return PEEL_NON_TAG; + } while (o && o->type == OBJ_TAG) { o = parse_object(r, &o->oid); @@ -241,9 +244,19 @@ enum peel_status peel_object(struct repository *r, return PEEL_INVALID; oidcpy(oid, &o->oid); + *typep = o->type; return PEEL_PEELED; } +enum peel_status peel_object(struct repository *r, + const struct object_id *name, + struct object_id *oid, + unsigned flags) +{ + enum object_type dummy; + return peel_object_ext(r, name, oid, flags, &dummy); +} + struct object *parse_object_buffer(struct repository *r, const struct object_id *oid, enum object_type type, unsigned long size, void *buffer, int *eaten_p) { struct object *obj; diff --git a/object.h b/object.h index dfe7a1f0ea29da..d814647ebe6c18 100644 --- a/object.h +++ b/object.h @@ -309,6 +309,11 @@ enum peel_status peel_object(struct repository *r, const struct object_id *name, struct object_id *oid, unsigned flags); +enum peel_status peel_object_ext(struct 
repository *r, + const struct object_id *name, + struct object_id *oid, + unsigned flags, + enum object_type *typep); struct object_list *object_list_insert(struct object *item, struct object_list **list_p); diff --git a/odb.c b/odb.c index 1679cc0465aa68..776de5356c92a0 100644 --- a/odb.c +++ b/odb.c @@ -702,6 +702,8 @@ static int do_oid_object_info_extended(struct object_database *odb, oidclr(oi->delta_base_oid, odb->repo->hash_algo); if (oi->contentp) *oi->contentp = xmemdupz(co->buf, co->size); + if (oi->mtimep) + *oi->mtimep = 0; oi->whence = OI_CACHED; } return 0; @@ -842,7 +844,7 @@ static int oid_object_info_convert(struct repository *r, int odb_read_object_info_extended(struct object_database *odb, const struct object_id *oid, struct object_info *oi, - unsigned flags) + enum object_info_flags flags) { int ret; @@ -964,7 +966,7 @@ void *odb_read_object_peeled(struct object_database *odb, } int odb_has_object(struct object_database *odb, const struct object_id *oid, - unsigned flags) + enum has_object_flags flags) { unsigned object_info_flags = 0; @@ -995,6 +997,35 @@ int odb_freshen_object(struct object_database *odb, return 0; } +int odb_for_each_object(struct object_database *odb, + const struct object_info *request, + odb_for_each_object_cb cb, + void *cb_data, + unsigned flags) +{ + int ret; + + odb_prepare_alternates(odb); + for (struct odb_source *source = odb->sources; source; source = source->next) { + if (flags & ODB_FOR_EACH_OBJECT_LOCAL_ONLY && !source->local) + continue; + + if (!(flags & ODB_FOR_EACH_OBJECT_PROMISOR_ONLY)) { + ret = odb_source_loose_for_each_object(source, request, + cb, cb_data, flags); + if (ret) + return ret; + } + + ret = packfile_store_for_each_object(source->packfiles, request, + cb, cb_data, flags); + if (ret) + return ret; + } + + return 0; +} + void odb_assert_oid_type(struct object_database *odb, const struct object_id *oid, enum object_type expect) { diff --git a/odb.h b/odb.h index 83d3a378059802..68b8ec22893bb2 
100644 --- a/odb.h +++ b/odb.h @@ -335,6 +335,19 @@ struct object_info { struct object_id *delta_base_oid; void **contentp; + /* + * The time at which the looked-up object was last modified. + * + * Note: the mtime may be ambiguous in case the object exists multiple + * times in the object database. It is thus _not_ recommended to use + * this field outside of contexts where you would read every instance + * of the object, for example with `odb_for_each_object()`. As it + * is impossible to say at the ODB level what the intent of the caller + * is (e.g. whether to find the oldest or newest object), it is the + * responsibility of the caller to disambiguate the mtimes. + */ + time_t *mtimep; + /* Response */ enum { OI_CACHED, @@ -369,23 +382,29 @@ struct object_info { */ #define OBJECT_INFO_INIT { 0 } -/* Invoke lookup_replace_object() on the given hash */ -#define OBJECT_INFO_LOOKUP_REPLACE 1 -/* Do not retry packed storage after checking packed and loose storage */ -#define OBJECT_INFO_QUICK 8 -/* - * Do not attempt to fetch the object if missing (even if fetch_is_missing is - * nonzero). - */ -#define OBJECT_INFO_SKIP_FETCH_OBJECT 16 -/* - * This is meant for bulk prefetching of missing blobs in a partial - * clone. Implies OBJECT_INFO_SKIP_FETCH_OBJECT and OBJECT_INFO_QUICK - */ -#define OBJECT_INFO_FOR_PREFETCH (OBJECT_INFO_SKIP_FETCH_OBJECT | OBJECT_INFO_QUICK) +/* Flags that can be passed to `odb_read_object_info_extended()`. */ +enum object_info_flags { + /* Invoke lookup_replace_object() on the given hash. */ + OBJECT_INFO_LOOKUP_REPLACE = (1 << 0), -/* Die if object corruption (not just an object being missing) was detected. */ -#define OBJECT_INFO_DIE_IF_CORRUPT 32 + /* Do not reprepare object sources when the first lookup has failed. */ + OBJECT_INFO_QUICK = (1 << 1), + + /* + * Do not attempt to fetch the object if missing (even if fetch_is_missing is + * nonzero.
+ */ + OBJECT_INFO_SKIP_FETCH_OBJECT = (1 << 2), + + /* Die if object corruption (not just an object being missing) was detected. */ + OBJECT_INFO_DIE_IF_CORRUPT = (1 << 3), + + /* + * This is meant for bulk prefetching of missing blobs in a partial + * clone. Implies OBJECT_INFO_SKIP_FETCH_OBJECT and OBJECT_INFO_QUICK. + */ + OBJECT_INFO_FOR_PREFETCH = (OBJECT_INFO_SKIP_FETCH_OBJECT | OBJECT_INFO_QUICK), +}; /* * Read object info from the object database and populate the `object_info` @@ -394,7 +413,7 @@ struct object_info { int odb_read_object_info_extended(struct object_database *odb, const struct object_id *oid, struct object_info *oi, - unsigned flags); + enum object_info_flags flags); /* * Read a subset of object info for the given object ID. Returns an `enum @@ -406,7 +425,7 @@ int odb_read_object_info(struct object_database *odb, const struct object_id *oid, unsigned long *sizep); -enum { +enum has_object_flags { /* Retry packed storage after checking packed and loose storage */ HAS_OBJECT_RECHECK_PACKED = (1 << 0), /* Allow fetching the object in case the repository has a promisor remote. */ @@ -419,7 +438,7 @@ enum { */ int odb_has_object(struct object_database *odb, const struct object_id *oid, - unsigned flags); + enum has_object_flags flags); int odb_freshen_object(struct object_database *odb, const struct object_id *oid); @@ -459,26 +478,59 @@ static inline void obj_read_unlock(void) if(obj_read_use_lock) pthread_mutex_unlock(&obj_read_mutex); } + /* Flags for for_each_*_object(). */ -enum for_each_object_flags { +enum odb_for_each_object_flags { /* Iterate only over local objects, not alternates. */ - FOR_EACH_OBJECT_LOCAL_ONLY = (1<<0), + ODB_FOR_EACH_OBJECT_LOCAL_ONLY = (1<<0), /* Only iterate over packs obtained from the promisor remote. 
*/ - FOR_EACH_OBJECT_PROMISOR_ONLY = (1<<1), + ODB_FOR_EACH_OBJECT_PROMISOR_ONLY = (1<<1), /* * Visit objects within a pack in packfile order rather than .idx order */ - FOR_EACH_OBJECT_PACK_ORDER = (1<<2), + ODB_FOR_EACH_OBJECT_PACK_ORDER = (1<<2), /* Only iterate over packs that are not marked as kept in-core. */ - FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS = (1<<3), + ODB_FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS = (1<<3), /* Only iterate over packs that do not have .keep files. */ - FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS = (1<<4), + ODB_FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS = (1<<4), }; +/* + * A callback function that can be used to iterate through objects. If given, + * the optional `oi` parameter will be populated the same as if you had called + * `odb_read_object_info()`. + * + * Returning a non-zero error code will cause iteration to abort. The error + * code will be propagated. + */ +typedef int (*odb_for_each_object_cb)(const struct object_id *oid, + struct object_info *oi, + void *cb_data); + +/* + * Iterate through all objects contained in the object database. Note that + * objects may be iterated over multiple times when they are stored in + * multiple backends or in multiple sources. + * If an object info request is given, then the object info will be read and + * passed to the callback as if `odb_read_object_info()` was called for the + * object. + * + * Returning a non-zero error code from the callback function will cause + * iteration to abort. The error code will be propagated. + * + * Returns 0 on success, a negative error code in case a failure occurred, or + * an arbitrary non-zero error code returned by the callback itself.
+ */ +int odb_for_each_object(struct object_database *odb, + const struct object_info *request, + odb_for_each_object_cb cb, + void *cb_data, + unsigned flags); + enum { /* * By default, `odb_write_object()` does not actually write anything diff --git a/packfile.c b/packfile.c index 402c3b5dc73131..ce837f852ad1f6 100644 --- a/packfile.c +++ b/packfile.c @@ -1578,13 +1578,14 @@ static void add_delta_base_cache(struct packed_git *p, off_t base_offset, hashmap_add(&delta_base_cache, &ent->ent); } -int packed_object_info(struct packed_git *p, - off_t obj_offset, struct object_info *oi) +static int packed_object_info_with_index_pos(struct packed_git *p, off_t obj_offset, + uint32_t *maybe_index_pos, struct object_info *oi) { struct pack_window *w_curs = NULL; unsigned long size; off_t curpos = obj_offset; enum object_type type = OBJ_NONE; + uint32_t pack_pos; int ret; /* @@ -1619,16 +1620,35 @@ int packed_object_info(struct packed_git *p, } } - if (oi->disk_sizep) { - uint32_t pos; - if (offset_to_pack_pos(p, obj_offset, &pos) < 0) { + if (oi->disk_sizep || (oi->mtimep && p->is_cruft)) { + if (offset_to_pack_pos(p, obj_offset, &pack_pos) < 0) { error("could not find object at offset %"PRIuMAX" " "in pack %s", (uintmax_t)obj_offset, p->pack_name); ret = -1; goto out; } + } + + if (oi->disk_sizep) + *oi->disk_sizep = pack_pos_to_offset(p, pack_pos + 1) - obj_offset; + + if (oi->mtimep) { + if (p->is_cruft) { + uint32_t index_pos; + + if (load_pack_mtimes(p) < 0) + die(_("could not load .mtimes for cruft pack '%s'"), + pack_basename(p)); - *oi->disk_sizep = pack_pos_to_offset(p, pos + 1) - obj_offset; + if (maybe_index_pos) + index_pos = *maybe_index_pos; + else + index_pos = pack_pos_to_index(p, pack_pos); + + *oi->mtimep = nth_packed_mtime(p, index_pos); + } else { + *oi->mtimep = p->mtime; + } } if (oi->typep) { @@ -1681,6 +1701,12 @@ int packed_object_info(struct packed_git *p, return ret; } +int packed_object_info(struct packed_git *p, off_t obj_offset, + struct 
object_info *oi) +{ + return packed_object_info_with_index_pos(p, obj_offset, NULL, oi); +} + static void *unpack_compressed_entry(struct packed_git *p, struct pack_window **w_curs, off_t curpos, @@ -2149,7 +2175,7 @@ int packfile_store_freshen_object(struct packfile_store *store, int packfile_store_read_object_info(struct packfile_store *store, const struct object_id *oid, struct object_info *oi, - unsigned flags UNUSED) + enum object_info_flags flags UNUSED) { struct pack_entry e; int ret; @@ -2259,12 +2285,12 @@ int has_object_kept_pack(struct repository *r, const struct object_id *oid, int for_each_object_in_pack(struct packed_git *p, each_packed_object_fn cb, void *data, - enum for_each_object_flags flags) + unsigned flags) { uint32_t i; int r = 0; - if (flags & FOR_EACH_OBJECT_PACK_ORDER) { + if (flags & ODB_FOR_EACH_OBJECT_PACK_ORDER) { if (load_pack_revindex(p->repo, p)) return -1; } @@ -2285,7 +2311,7 @@ int for_each_object_in_pack(struct packed_git *p, * - in pack-order, it is pack position, which we must * convert to an index position in order to get the oid. 
*/ - if (flags & FOR_EACH_OBJECT_PACK_ORDER) + if (flags & ODB_FOR_EACH_OBJECT_PACK_ORDER) index_pos = pack_pos_to_index(p, i); else index_pos = i; @@ -2301,75 +2327,114 @@ int for_each_object_in_pack(struct packed_git *p, return r; } -int for_each_packed_object(struct repository *repo, each_packed_object_fn cb, - void *data, enum for_each_object_flags flags) +struct packfile_store_for_each_object_wrapper_data { + struct packfile_store *store; + const struct object_info *request; + odb_for_each_object_cb cb; + void *cb_data; +}; + +static int packfile_store_for_each_object_wrapper(const struct object_id *oid, + struct packed_git *pack, + uint32_t index_pos, + void *cb_data) { - struct odb_source *source; - int r = 0; - int pack_errors = 0; + struct packfile_store_for_each_object_wrapper_data *data = cb_data; - odb_prepare_alternates(repo->objects); + if (data->request) { + off_t offset = nth_packed_object_offset(pack, index_pos); + struct object_info oi = *data->request; - for (source = repo->objects->sources; source; source = source->next) { - struct packfile_list_entry *e; + if (packed_object_info_with_index_pos(pack, offset, + &index_pos, &oi) < 0) { + mark_bad_packed_object(pack, oid); + return -1; + } - source->packfiles->skip_mru_updates = true; + return data->cb(oid, &oi, data->cb_data); + } else { + return data->cb(oid, NULL, data->cb_data); + } +} - for (e = packfile_store_get_packs(source->packfiles); e; e = e->next) { - struct packed_git *p = e->pack; +int packfile_store_for_each_object(struct packfile_store *store, + const struct object_info *request, + odb_for_each_object_cb cb, + void *cb_data, + unsigned flags) +{ + struct packfile_store_for_each_object_wrapper_data data = { + .store = store, + .request = request, + .cb = cb, + .cb_data = cb_data, + }; + struct packfile_list_entry *e; + int pack_errors = 0, ret; - if ((flags & FOR_EACH_OBJECT_LOCAL_ONLY) && !p->pack_local) - continue; - if ((flags & FOR_EACH_OBJECT_PROMISOR_ONLY) && - 
!p->pack_promisor) - continue; - if ((flags & FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS) && - p->pack_keep_in_core) - continue; - if ((flags & FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS) && - p->pack_keep) - continue; - if (open_pack_index(p)) { - pack_errors = 1; - continue; - } + store->skip_mru_updates = true; - r = for_each_object_in_pack(p, cb, data, flags); - if (r) - break; - } + for (e = packfile_store_get_packs(store); e; e = e->next) { + struct packed_git *p = e->pack; - source->packfiles->skip_mru_updates = false; + if ((flags & ODB_FOR_EACH_OBJECT_LOCAL_ONLY) && !p->pack_local) + continue; + if ((flags & ODB_FOR_EACH_OBJECT_PROMISOR_ONLY) && + !p->pack_promisor) + continue; + if ((flags & ODB_FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS) && + p->pack_keep_in_core) + continue; + if ((flags & ODB_FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS) && + p->pack_keep) + continue; + if (open_pack_index(p)) { + pack_errors = 1; + continue; + } - if (r) - break; + ret = for_each_object_in_pack(p, packfile_store_for_each_object_wrapper, + &data, flags); + if (ret) + goto out; } - return r ? 
r : pack_errors; + ret = 0; + +out: + store->skip_mru_updates = false; + + if (!ret && pack_errors) + ret = -1; + return ret; } +struct add_promisor_object_data { + struct repository *repo; + struct oidset *set; +}; + static int add_promisor_object(const struct object_id *oid, - struct packed_git *pack, - uint32_t pos UNUSED, - void *set_) + struct object_info *oi UNUSED, + void *cb_data) { - struct oidset *set = set_; + struct add_promisor_object_data *data = cb_data; struct object *obj; int we_parsed_object; - obj = lookup_object(pack->repo, oid); + obj = lookup_object(data->repo, oid); if (obj && obj->parsed) { we_parsed_object = 0; } else { we_parsed_object = 1; - obj = parse_object_with_flags(pack->repo, oid, + obj = parse_object_with_flags(data->repo, oid, PARSE_OBJECT_SKIP_HASH_CHECK); } if (!obj) return 1; - oidset_insert(set, oid); + oidset_insert(data->set, oid); /* * If this is a tree, commit, or tag, the objects it refers @@ -2387,19 +2452,19 @@ static int add_promisor_object(const struct object_id *oid, */ return 0; while (tree_entry_gently(&desc, &entry)) - oidset_insert(set, &entry.oid); + oidset_insert(data->set, &entry.oid); if (we_parsed_object) free_tree_buffer(tree); } else if (obj->type == OBJ_COMMIT) { struct commit *commit = (struct commit *) obj; struct commit_list *parents = commit->parents; - oidset_insert(set, get_commit_tree_oid(commit)); + oidset_insert(data->set, get_commit_tree_oid(commit)); for (; parents; parents = parents->next) - oidset_insert(set, &parents->item->object.oid); + oidset_insert(data->set, &parents->item->object.oid); } else if (obj->type == OBJ_TAG) { struct tag *tag = (struct tag *) obj; - oidset_insert(set, get_tagged_oid(tag)); + oidset_insert(data->set, get_tagged_oid(tag)); } return 0; } @@ -2411,10 +2476,13 @@ int is_promisor_object(struct repository *r, const struct object_id *oid) if (!promisor_objects_prepared) { if (repo_has_promisor_remote(r)) { - for_each_packed_object(r, add_promisor_object, - 
&promisor_objects, - FOR_EACH_OBJECT_PROMISOR_ONLY | - FOR_EACH_OBJECT_PACK_ORDER); + struct add_promisor_object_data data = { + .repo = r, + .set = &promisor_objects, + }; + + odb_for_each_object(r->objects, NULL, add_promisor_object, &data, + ODB_FOR_EACH_OBJECT_PROMISOR_ONLY | ODB_FOR_EACH_OBJECT_PACK_ORDER); } promisor_objects_prepared = 1; } diff --git a/packfile.h b/packfile.h index acc5c55ad57754..224142fd346715 100644 --- a/packfile.h +++ b/packfile.h @@ -247,7 +247,7 @@ int packfile_store_read_object_stream(struct odb_read_stream **out, int packfile_store_read_object_info(struct packfile_store *store, const struct object_id *oid, struct object_info *oi, - unsigned flags); + enum object_info_flags flags); /* * Open the packfile and add it to the store if it isn't yet known. Returns @@ -339,9 +339,22 @@ typedef int each_packed_object_fn(const struct object_id *oid, void *data); int for_each_object_in_pack(struct packed_git *p, each_packed_object_fn, void *data, - enum for_each_object_flags flags); -int for_each_packed_object(struct repository *repo, each_packed_object_fn cb, - void *data, enum for_each_object_flags flags); + unsigned flags); + +/* + * Iterate through all packed objects in the given packfile store and invoke + * the callback function for each of them. If an object info request is given, + * then the object info will be read for every individual object and passed to + * the callback as if `packfile_store_read_object_info()` was called for the + * object. + * + * The flags parameter is a combination of `odb_for_each_object_flags`. 
+ */ +int packfile_store_for_each_object(struct packfile_store *store, + const struct object_info *request, + odb_for_each_object_cb cb, + void *cb_data, + unsigned flags); /* A hook to report invalid files in pack directory */ #define PACKDIR_FILE_PACK 1 diff --git a/reachable.c b/reachable.c index 4b532039d5f84f..101cfc272715fb 100644 --- a/reachable.c +++ b/reachable.c @@ -191,30 +191,27 @@ static int obj_is_recent(const struct object_id *oid, timestamp_t mtime, return oidset_contains(&data->extra_recent_oids, oid); } -static void add_recent_object(const struct object_id *oid, - struct packed_git *pack, - off_t offset, - timestamp_t mtime, - struct recent_data *data) +static int want_recent_object(struct recent_data *data, + const struct object_id *oid) { - struct object *obj; - enum object_type type; + if (data->ignore_in_core_kept_packs && + has_object_kept_pack(data->revs->repo, oid, KEPT_PACK_IN_CORE)) + return 0; + return 1; +} - if (!obj_is_recent(oid, mtime, data)) - return; +static int add_recent_object(const struct object_id *oid, + struct object_info *oi, + void *cb_data) +{ + struct recent_data *data = cb_data; + struct object *obj; - /* - * We do not want to call parse_object here, because - * inflating blobs and trees could be very expensive. - * However, we do need to know the correct type for - * later processing, and the revision machinery expects - * commits and tags to have been parsed. 
- */ - type = odb_read_object_info(the_repository->objects, oid, NULL); - if (type < 0) - die("unable to get object info for %s", oid_to_hex(oid)); + if (!want_recent_object(data, oid) || + !obj_is_recent(oid, *oi->mtimep, data)) + return 0; - switch (type) { + switch (*oi->typep) { case OBJ_TAG: case OBJ_COMMIT: obj = parse_object_or_die(the_repository, oid, NULL); @@ -227,77 +224,22 @@ static void add_recent_object(const struct object_id *oid, break; default: die("unknown object type for %s: %s", - oid_to_hex(oid), type_name(type)); + oid_to_hex(oid), type_name(*oi->typep)); } if (!obj) die("unable to lookup %s", oid_to_hex(oid)); - - add_pending_object(data->revs, obj, ""); - if (data->cb) - data->cb(obj, pack, offset, mtime); -} - -static int want_recent_object(struct recent_data *data, - const struct object_id *oid) -{ - if (data->ignore_in_core_kept_packs && - has_object_kept_pack(data->revs->repo, oid, KEPT_PACK_IN_CORE)) + if (obj->flags & SEEN) return 0; - return 1; -} -static int add_recent_loose(const struct object_id *oid, - const char *path, void *data) -{ - struct stat st; - struct object *obj; - - if (!want_recent_object(data, oid)) - return 0; - - obj = lookup_object(the_repository, oid); - - if (obj && obj->flags & SEEN) - return 0; - - if (stat(path, &st) < 0) { - /* - * It's OK if an object went away during our iteration; this - * could be due to a simultaneous repack. But anything else - * we should abort, since we might then fail to mark objects - * which should not be pruned. 
- */ - if (errno == ENOENT) - return 0; - return error_errno("unable to stat %s", oid_to_hex(oid)); + add_pending_object(data->revs, obj, ""); + if (data->cb) { + if (oi->whence == OI_PACKED) + data->cb(obj, oi->u.packed.pack, oi->u.packed.offset, *oi->mtimep); + else + data->cb(obj, NULL, 0, *oi->mtimep); } - add_recent_object(oid, NULL, 0, st.st_mtime, data); - return 0; -} - -static int add_recent_packed(const struct object_id *oid, - struct packed_git *p, - uint32_t pos, - void *data) -{ - struct object *obj; - timestamp_t mtime = p->mtime; - - if (!want_recent_object(data, oid)) - return 0; - - obj = lookup_object(the_repository, oid); - - if (obj && obj->flags & SEEN) - return 0; - if (p->is_cruft) { - if (load_pack_mtimes(p) < 0) - die(_("could not load cruft pack .mtimes")); - mtime = nth_packed_mtime(p, pos); - } - add_recent_object(oid, p, nth_packed_object_offset(p, pos), mtime, data); return 0; } @@ -307,7 +249,13 @@ int add_unseen_recent_objects_to_traversal(struct rev_info *revs, int ignore_in_core_kept_packs) { struct recent_data data; - enum for_each_object_flags flags; + unsigned flags; + enum object_type type; + time_t mtime; + struct object_info oi = { + .mtimep = &mtime, + .typep = &type, + }; int r; data.revs = revs; @@ -318,16 +266,13 @@ int add_unseen_recent_objects_to_traversal(struct rev_info *revs, oidset_init(&data.extra_recent_oids, 0); data.extra_recent_oids_loaded = 0; - r = for_each_loose_object(the_repository->objects, add_recent_loose, &data, - FOR_EACH_OBJECT_LOCAL_ONLY); - if (r) - goto done; - - flags = FOR_EACH_OBJECT_LOCAL_ONLY | FOR_EACH_OBJECT_PACK_ORDER; + flags = ODB_FOR_EACH_OBJECT_LOCAL_ONLY | ODB_FOR_EACH_OBJECT_PACK_ORDER; if (ignore_in_core_kept_packs) - flags |= FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS; + flags |= ODB_FOR_EACH_OBJECT_SKIP_IN_CORE_KEPT_PACKS; - r = for_each_packed_object(revs->repo, add_recent_packed, &data, flags); + r = odb_for_each_object(revs->repo->objects, &oi, add_recent_object, &data, flags); + 
if (r) + goto done; done: oidset_clear(&data.extra_recent_oids); diff --git a/repack-promisor.c b/repack-promisor.c index 73af57bce31376..90318ce15093f5 100644 --- a/repack-promisor.c +++ b/repack-promisor.c @@ -17,8 +17,8 @@ struct write_oid_context { * necessary. */ static int write_oid(const struct object_id *oid, - struct packed_git *pack UNUSED, - uint32_t pos UNUSED, void *data) + struct object_info *oi UNUSED, + void *data) { struct write_oid_context *ctx = data; struct child_process *cmd = ctx->cmd; @@ -98,8 +98,8 @@ void repack_promisor_objects(struct repository *repo, */ ctx.cmd = &cmd; ctx.algop = repo->hash_algo; - for_each_packed_object(repo, write_oid, &ctx, - FOR_EACH_OBJECT_PROMISOR_ONLY); + odb_for_each_object(repo->objects, NULL, write_oid, &ctx, + ODB_FOR_EACH_OBJECT_PROMISOR_ONLY); if (cmd.in == -1) { /* No packed objects; cmd was never started */ diff --git a/revision.c b/revision.c index 047ff7e45831c9..ddddbe4993d7d4 100644 --- a/revision.c +++ b/revision.c @@ -3632,8 +3632,7 @@ void reset_revision_walk(void) } static int mark_uninteresting(const struct object_id *oid, - struct packed_git *pack UNUSED, - uint32_t pos UNUSED, + struct object_info *oi UNUSED, void *cb) { struct rev_info *revs = cb; @@ -3942,10 +3941,9 @@ int prepare_revision_walk(struct rev_info *revs) (revs->limited && limiting_can_increase_treesame(revs))) revs->treesame.name = "treesame"; - if (revs->exclude_promisor_objects) { - for_each_packed_object(revs->repo, mark_uninteresting, revs, - FOR_EACH_OBJECT_PROMISOR_ONLY); - } + if (revs->exclude_promisor_objects) + odb_for_each_object(revs->repo->objects, NULL, mark_uninteresting, + revs, ODB_FOR_EACH_OBJECT_PROMISOR_ONLY); if (!revs->reflog_info) prepare_to_use_bloom_filter(revs); diff --git a/subprojects/git-gui b/subprojects/git-gui new file mode 120000 index 00000000000000..c6d917088b39bc --- /dev/null +++ b/subprojects/git-gui @@ -0,0 +1 @@ +../git-gui \ No newline at end of file diff --git a/subprojects/gitk 
b/subprojects/gitk new file mode 120000 index 00000000000000..b66ad18ae5f423 --- /dev/null +++ b/subprojects/gitk @@ -0,0 +1 @@ +../gitk-git \ No newline at end of file diff --git a/symlinks.c b/symlinks.c index 9cc090d42c089f..9e01ab3bc86c5b 100644 --- a/symlinks.c +++ b/symlinks.c @@ -74,11 +74,12 @@ static inline void reset_lstat_cache(struct cache_def *cache) */ static int lstat_cache_matchlen(struct cache_def *cache, const char *name, int len, - int *ret_flags, int track_flags, + unsigned int *ret_flags, unsigned int track_flags, int prefix_len_stat_func) { int match_len, last_slash, last_slash_dir, previous_slash; - int save_flags, ret, saved_errno = 0; + unsigned int save_flags; + int ret, saved_errno = 0; struct stat st; if (cache->track_flags != track_flags || @@ -192,10 +193,10 @@ static int lstat_cache_matchlen(struct cache_def *cache, return match_len; } -static int lstat_cache(struct cache_def *cache, const char *name, int len, - int track_flags, int prefix_len_stat_func) +static unsigned int lstat_cache(struct cache_def *cache, const char *name, int len, + unsigned int track_flags, int prefix_len_stat_func) { - int flags; + unsigned int flags; (void)lstat_cache_matchlen(cache, name, len, &flags, track_flags, prefix_len_stat_func); return flags; @@ -234,7 +235,7 @@ int check_leading_path(const char *name, int len, int warn_on_lstat_err) static int threaded_check_leading_path(struct cache_def *cache, const char *name, int len, int warn_on_lstat_err) { - int flags; + unsigned int flags; int match_len = lstat_cache_matchlen(cache, name, len, &flags, FL_SYMLINK|FL_NOENT|FL_DIR, USE_ONLY_LSTAT); int saved_errno = errno; diff --git a/symlinks.h b/symlinks.h index 7ae3d5b85695fb..25bf04f54fbee0 100644 --- a/symlinks.h +++ b/symlinks.h @@ -5,8 +5,8 @@ struct cache_def { struct strbuf path; - int flags; - int track_flags; + unsigned int flags; + unsigned int track_flags; int prefix_len_stat_func; }; #define CACHE_DEF_INIT { \ diff --git a/t/lib-httpd.sh 
b/t/lib-httpd.sh index 5091db949b7f99..5f42c311c2f2f5 100644 --- a/t/lib-httpd.sh +++ b/t/lib-httpd.sh @@ -319,13 +319,22 @@ setup_askpass_helper() { ' } -set_askpass() { +set_askpass () { >"$TRASH_DIRECTORY/askpass-query" && echo "$1" >"$TRASH_DIRECTORY/askpass-user" && echo "$2" >"$TRASH_DIRECTORY/askpass-pass" } -expect_askpass() { +set_netrc () { + # $HOME=$TRASH_DIRECTORY + echo "machine $1 login $2 password $3" >"$TRASH_DIRECTORY/.netrc" +} + +clear_netrc () { + rm -f "$TRASH_DIRECTORY/.netrc" +} + +expect_askpass () { dest=$HTTPD_DEST${3+/$3} { diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index e631ab0eb5ef05..6b8c50a51a3b72 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -238,6 +238,10 @@ SSLEngine On AuthName "git-auth" AuthUserFile passwd Require valid-user + + # return 403 for authenticated user: forbidden-user@host + RewriteCond "%{REMOTE_USER}" "^forbidden-user@host" + RewriteRule ^ - [F] diff --git a/t/lib-httpd/passwd b/t/lib-httpd/passwd index d9c122f3482891..3bab7b64236b11 100644 --- a/t/lib-httpd/passwd +++ b/t/lib-httpd/passwd @@ -1 +1,2 @@ user@host:$apr1$LGPmCZWj$9vxEwj5Z5GzQLBMxp3mCx1 +forbidden-user@host:$apr1$LGPmCZWj$9vxEwj5Z5GzQLBMxp3mCx1 diff --git a/t/t5550-http-fetch-dumb.sh b/t/t5550-http-fetch-dumb.sh index 05c34db780428a..9d0a7f5c4b625f 100755 --- a/t/t5550-http-fetch-dumb.sh +++ b/t/t5550-http-fetch-dumb.sh @@ -102,6 +102,31 @@ test_expect_success 'cloning password-protected repository can fail' ' expect_askpass both wrong ' +test_expect_success 'using credentials from netrc to clone successfully' ' + test_when_finished clear_netrc && + set_askpass wrong && + set_netrc 127.0.0.1 user@host pass@host && + git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-netrc && + expect_askpass none +' + +test_expect_success 'netrc unauthorized credentials (prompt after 401)' ' + test_when_finished clear_netrc && + set_askpass wrong && + set_netrc 127.0.0.1 user@host pass@wrong && + test_must_fail git clone 
"$HTTPD_URL/auth/dumb/repo.git" clone-auth-netrc-401 && + expect_askpass both wrong +' + +test_expect_success 'netrc authorized but forbidden credentials (fail on 403)' ' + test_when_finished clear_netrc && + set_askpass wrong && + set_netrc 127.0.0.1 forbidden-user@host pass@host && + test_must_fail git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-netrc-403 2>err && + expect_askpass none && + grep "The requested URL returned error: 403" err +' + test_expect_success 'http auth can use user/pass in URL' ' set_askpass wrong && git clone "$HTTPD_URL_USER_PASS/auth/dumb/repo.git" clone-auth-none &&
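For context on the `set_netrc`/`clear_netrc` helpers this patch adds to `t/lib-httpd.sh`: they reduce to writing and removing a single-`machine` `.netrc` file under the trash directory, which the test harness also uses as `$HOME` so that curl's netrc lookup finds it. A minimal standalone sketch of that behavior, outside the test suite (the `mktemp` trash directory here is a stand-in, not the harness's real `$TRASH_DIRECTORY`):

```shell
# Standalone sketch of the set_netrc/clear_netrc helpers added above.
# TRASH_DIRECTORY stands in for the test trash dir that doubles as $HOME.
TRASH_DIRECTORY=$(mktemp -d)

set_netrc () {
	# one "machine <host> login <user> password <pass>" line is all
	# curl needs to resolve credentials for a single host
	echo "machine $1 login $2 password $3" >"$TRASH_DIRECTORY/.netrc"
}

clear_netrc () {
	rm -f "$TRASH_DIRECTORY/.netrc"
}

set_netrc 127.0.0.1 user@host pass@host
cat "$TRASH_DIRECTORY/.netrc"
clear_netrc
test ! -e "$TRASH_DIRECTORY/.netrc" && echo removed
```

The three t5550 tests above then exercise the interesting combinations: netrc credentials that succeed (no askpass prompt), credentials that get a 401 (git falls back to prompting), and credentials that authenticate but hit the 403 rewrite rule for `forbidden-user@host` (hard failure, no prompt).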