Split monolithic pipeline into PR, Merge, and Release workflows#642
Conversation
```diff
  - name: Install cargo-nextest
-   run: cargo install cargo-nextest --locked
+   run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
```
Any reason to change from using cargo install? Downloading a binary directly from a URL always looks more suspicious.
cargo install downloads the source code and compiles it from source, which takes several minutes. The new way downloads the pre-compiled binary from the official website, which takes seconds.
.github/workflows/merge.yaml
Outdated
```yaml
- name: Create and push the x86_64 docker image to beta ecr repo
  run: |
    tar -c -C build-x86_64/LambdaAdapterLayerX86/extensions . | docker import --platform linux/amd64 - 477159140107.dkr.ecr.ap-northeast-1.amazonaws.com/awsguru/aws-lambda-adapter:latest-x86_64
```
I see that it was already like this before, but I'm just curious. Do you know why for the beta images we do

`tar -c -C <dir> | docker import ...`

but for the prod images we do

`printf 'FROM scratch\nADD <dir>' | docker build ...`

Conceptually, both commands look like they do the same thing, but I don't know if there's a reason for them to be different (we don't need to change it, I'm just curious about the difference if there's any).
docker import takes a tar stream and creates a flat image with no layers, no metadata, no CMD/ENTRYPOINT — just a filesystem snapshot. It's fast but the resulting image has no Dockerfile history or build cache.
docker build with FROM scratch creates a proper OCI image with a Dockerfile layer, build metadata, and supports --platform and --provenance flags. It's what we'd use when we need the image to be publishable or inspectable.
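To make the contrast concrete, here is a hedged side-by-side sketch of the two approaches. The registry name and the `ADD` destination path are illustrative placeholders, not the actual values from this repo; only the general shape of each command is the point.

```shell
# Both approaches start from the same extracted layer directory.
# Registry/tag and paths below are illustrative, not the real repo values.

# Approach 1: docker import — consumes a raw tar stream and produces a flat,
# single-layer image with no Dockerfile history, CMD, or ENTRYPOINT.
tar -c -C build-x86_64/LambdaAdapterLayerX86/extensions . \
  | docker import --platform linux/amd64 - example-registry/aws-lambda-adapter:latest-x86_64

# Approach 2: docker build FROM scratch — reads a Dockerfile from stdin (-f -)
# and produces a proper OCI image with build metadata, usable with BuildKit
# flags such as --provenance.
printf 'FROM scratch\nADD extensions/ /opt/extensions/\n' \
  | docker build --platform linux/amd64 \
      -t example-registry/aws-lambda-adapter:latest-x86_64 \
      -f - build-x86_64/LambdaAdapterLayerX86
```

Both end up with the same filesystem contents; the difference is only in the image metadata each produces. (No runnable test here, since both commands require a Docker daemon.)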
So the right approach would probably be to use docker build for beta too? It's weird that we're doing something different for the two cases.
- PR: test + build validation only (no deploys)
- Merge: test, build, beta deploy, e2e tests
- Release: test, build, gamma, prod, public ECR publish

Additional improvements:
- Replace sccache with Swatinem/rust-cache
- Use prebuilt cargo-nextest binary
- Trim unnecessary musl targets from test jobs
- Consolidate load-matrix jobs in release workflow
- Bump Python from 3.8 to 3.13
- Normalize download-artifact to @v4
- Remove unused env vars
- Fix paths-ignore globs
- Replace docker import with docker build
Force-pushed from 0f6bf1c to b1c8117
Summary

Splits the single `pipeline.yaml` into three focused workflows: PR, Merge, and Release.

Improvements
- `Swatinem/rust-cache@v2` for reliable caching
- Prebuilt `cargo-nextest` binary instead of compiling from source
- `download-artifact` normalized to `@v4`
- Fixed `paths-ignore` globs (`docs/**` instead of `docs`)