Migration to s6-overlay v3 and update of the base image to Ubuntu 20.04 #94
📝 Walkthrough: Migrate multiple images and service orchestration to s6-overlay v3, add s6-based service scripts (xfce, xvfb, x11vnc, jenkins-agent), update Dockerfiles and build scripts (including removing ONEC creds from some builds), bump the client base to Ubuntu 20.04, and reorganize configs under rootfs-v3.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant S6 as s6 /init
    participant Xvfb as Xvfb
    participant XFCE as startxfce4
    participant x11vnc as x11vnc
    Note over S6: s6-rc starts longrun services
    S6->>Xvfb: exec /usr/bin/Xvfb $DISPLAY -screen 0 ${DISPLAY_WIDTH}x${DISPLAY_HEIGHT}x24
    S6->>XFCE: exec /usr/bin/startxfce4
    S6->>x11vnc: exec x11vnc -display ${DISPLAY} ...
    Xvfb-->>XFCE: provide virtual framebuffer
    XFCE-->>x11vnc: render desktop to DISPLAY
    x11vnc-->>S6: accept VNC connections
```

```mermaid
sequenceDiagram
    participant S6 as s6 /init
    participant Wget as downloader (wget)
    participant Java as java -jar agent.jar
    participant Jenkins as Jenkins master
    S6->>Wget: wget $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL -O agent.jar
    Wget-->>S6: agent.jar saved
    S6->>Java: exec java -jar agent.jar -jnlpUrl $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL -secret $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET -noReconnect -workDir /home/jenkins
    Java-->>Jenkins: register/connect
```
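The second diagram corresponds to a single s6-rc longrun `run` script. The following is only an illustrative sketch of such a script, not the repository's actual file; the variable names follow the diagram, and `/home/jenkins` is assumed to be the agent's working directory.

```sh
#!/bin/sh
# Illustrative sketch of an s6-rc longrun "run" script for the Jenkins agent.
set -eu

cd /home/jenkins

# Fetch the agent jar from the Jenkins master
wget "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL}" -O agent.jar

# exec replaces this shell so s6 supervises the java process directly
exec java -jar agent.jar \
    -jnlpUrl "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL}" \
    -secret "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET}" \
    -noReconnect \
    -workDir /home/jenkins
```

Because s6 restarts a longrun service when it exits, a transient download failure in such a script would cause a restart loop, which is what the review comments on this file address.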
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 3
🧹 Nitpick comments (4)
client/Dockerfile (2)
47-64: Consider combining RUN commands to minimize layers. These two RUN blocks (gnupg/wget installation and bionic repo setup) could be combined into a single RUN command to reduce image layers, as per coding guidelines.
♻️ Proposed combined RUN block
```diff
-# Установка gnupg и wget
-RUN set -e \
-    && apt-get update \
-    && apt-get install -y --no-install-recommends \
-        gnupg \
-        wget \
-        ca-certificates \
-    && mkdir -p -m 0755 /etc/apt/keyrings \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-
-# Добавление репо bionic для libenchant1c2a
-RUN set -e \
-    && wget -qO- 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3B4FE6ACC0B21F32' | \
-        gpg --dearmor -o /etc/apt/keyrings/ubuntu-archive-bionic.gpg \
-    && echo "deb [signed-by=/etc/apt/keyrings/ubuntu-archive-bionic.gpg arch=amd64] \
-        http://archive.ubuntu.com/ubuntu bionic main universe" | \
-        tee /etc/apt/sources.list.d/bionic-archive.list > /dev/null
+# Установка gnupg, wget и добавление репо bionic для libenchant1c2a
+RUN set -e \
+    && apt-get update \
+    && apt-get install -y --no-install-recommends \
+        gnupg \
+        wget \
+        ca-certificates \
+    && mkdir -p -m 0755 /etc/apt/keyrings \
+    && wget -qO- 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3B4FE6ACC0B21F32' | \
+        gpg --dearmor -o /etc/apt/keyrings/ubuntu-archive-bionic.gpg \
+    && echo "deb [signed-by=/etc/apt/keyrings/ubuntu-archive-bionic.gpg arch=amd64] \
+        http://archive.ubuntu.com/ubuntu bionic main universe" | \
+        tee /etc/apt/sources.list.d/bionic-archive.list > /dev/null \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
```

As per coding guidelines: "Combine RUN commands in Dockerfiles to minimize layers".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/Dockerfile` around lines 47 - 64, Combine the two separate RUN steps into a single RUN instruction in the Dockerfile so the apt-get update/install of gnupg, wget and ca-certificates and the subsequent creation of /etc/apt/keyrings plus the wget/gpg dearmor and adding the bionic repo happen in one layer; update the RUN that currently performs apt-get update/install (the first RUN block) to also create /etc/apt/keyrings, run the wget and gpg --dearmor command that writes /etc/apt/keyrings/ubuntu-archive-bionic.gpg, and echo/tee the bionic sources.list entry (the commands currently in the second RUN block), ensuring you keep set -e, apt-get clean, and rm -rf /var/lib/apt/lists/* in the combined sequence to preserve error handling and cleanup.
112-129: Same layer optimization opportunity in the final stage. Similar to the base stage, these RUN blocks could be combined for consistency and layer reduction.
♻️ Proposed combined RUN block
```diff
-# Установка gnupg и wget
-RUN set -e \
-    && apt-get update \
-    && apt-get install -y --no-install-recommends \
-        gnupg \
-        wget \
-        ca-certificates \
-    && mkdir -p -m 0755 /etc/apt/keyrings \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-
-# Добавление репо bionic для libwebkitgtk-3.0-0
-RUN set -e \
-    && wget -qO- 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3B4FE6ACC0B21F32' | \
-        gpg --dearmor -o /etc/apt/keyrings/ubuntu-archive-bionic.gpg \
-    && echo "deb [signed-by=/etc/apt/keyrings/ubuntu-archive-bionic.gpg arch=amd64] \
-        http://archive.ubuntu.com/ubuntu bionic main universe" | \
-        tee /etc/apt/sources.list.d/bionic-archive.list > /dev/null
+# Установка gnupg, wget и добавление репо bionic для libwebkitgtk-3.0-0
+RUN set -e \
+    && apt-get update \
+    && apt-get install -y --no-install-recommends \
+        gnupg \
+        wget \
+        ca-certificates \
+    && mkdir -p -m 0755 /etc/apt/keyrings \
+    && wget -qO- 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3B4FE6ACC0B21F32' | \
+        gpg --dearmor -o /etc/apt/keyrings/ubuntu-archive-bionic.gpg \
+    && echo "deb [signed-by=/etc/apt/keyrings/ubuntu-archive-bionic.gpg arch=amd64] \
+        http://archive.ubuntu.com/ubuntu bionic main universe" | \
+        tee /etc/apt/sources.list.d/bionic-archive.list > /dev/null \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/Dockerfile` around lines 112 - 129, Combine the two separate RUN blocks in the final Dockerfile stage into a single RUN to reduce image layers and mirror the optimization used in the base stage: perform apt-get update and apt-get install (gnupg, wget, ca-certificates), create /etc/apt/keyrings, then fetch and dearmor the Ubuntu archive key with wget | gpg --dearmor to /etc/apt/keyrings/ubuntu-archive-bionic.gpg and add the bionic source via echo | tee, and finally run apt-get clean and rm -rf /var/lib/apt/lists/* in the same RUN; update the existing RUN blocks that reference gnupg, wget, ca-certificates, /etc/apt/keyrings, gpg --dearmor and /etc/apt/sources.list.d/bionic-archive.list to be a single chained RUN using set -e && ... so ordering and permissions are preserved.

client-vnc/Dockerfile (2)
7-7: Running as `root` is required for s6-overlay, but consider documenting this. Trivy flags this because the last `USER` directive is `root`. However, s6-overlay's `/init` entrypoint requires root privileges to manage services and typically handles privilege dropping internally via service configuration. If your `rootfs-v3` service definitions run processes as a non-root user (e.g., `usr1cv8`), this is acceptable. Consider adding a comment explaining why root is needed:

```dockerfile
# s6-overlay /init requires root; services drop privileges via s6-rc.d config
USER root
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client-vnc/Dockerfile` at line 7, Add an explanatory comment above the USER root directive in the Dockerfile clarifying that s6-overlay's /init requires root to manage services and that service processes (e.g., under rootfs-v3) drop privileges via s6 configuration (for example user usr1cv8), so the final USER root is intentional and safe; ensure the comment references s6-overlay and /init so future reviewers and scanners understand why root is required.
34-40: Combine the tar extraction commands into a single RUN layer to minimize image layers. Per coding guidelines, combining RUN commands minimizes image layers. The two `ADD` + `RUN tar` sequences can be consolidated into a single RUN command:

♻️ Proposed consolidation

```diff
 # Install s6-overlay v3
 ARG S6_OVERLAY_VERSION=3.2.2.0
 ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
-RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
 ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz /tmp
-RUN tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz \
-    && rm -f /tmp/s6-overlay-*.tar.xz
+RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz \
+    && tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz \
+    && rm -f /tmp/s6-overlay-*.tar.xz
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client-vnc/Dockerfile` around lines 34 - 40, The Dockerfile creates extra image layers by using two separate ADD + RUN tar sequences for s6-overlay; combine them into a single RUN that uses ARG S6_OVERLAY_VERSION to fetch both s6-overlay-noarch.tar.xz and s6-overlay-x86_64.tar.xz, extract them with tar -C / -Jxpf (for both files) and then remove the /tmp/*.tar.xz artifacts in the same RUN so only one layer is produced and temporary files are cleaned up; update the block that references ARG S6_OVERLAY_VERSION, the two ADD lines and the two RUN tar invocations to a single RUN sequence that downloads/extracts/removes both archives.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@build-base-swarm-jenkins-agent.sh`:
- Around line 22-31: The docker build invocation uses an undefined variable
no_cache_arg causing the --no-cache option to be ignored; fix by making the
variable names consistent: either set no_cache_arg where last_arg is currently
defined or replace $no_cache_arg in the docker build command with the
already-defined $last_arg (or consolidate the logic so a single variable like
last_arg/no_cache_arg is assigned based on the NO_CACHE condition). Update the
assignment logic that currently defines last_arg (and any NO_CACHE detection)
and ensure the docker build line references that same variable (no_cache_arg or
last_arg) so the --no-cache flag is passed when intended.
In
`@swarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/jenkins-agent/run`:
- Around line 3-4: Add strict shell error handling and validation before
downloading and executing the agent: enable fail-fast with set -euo pipefail (or
at least set -e and set -u), ensure DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL is
non-empty before calling wget, verify wget -O agent.jar succeeds and that
agent.jar exists and is non-empty, and only then exec java -jar agent.jar; on
any failure print a clear error message to stderr and exit non‑zero so s6 can
stop restarting blindly.
In
`@swarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/jenkins-agent/type`:
- Line 1: The service file declares a longrun service named "jenkins-agent" but
it is not added to any started s6 bundle, so the agent never gets started; fix
by adding "jenkins-agent" to a started bundle (for example create or update a
bundle's contents list such as user/contents.d/jenkins-agent) so the s6 init
will include and start the service, ensuring the service name matches
"jenkins-agent" and the bundle type is a started bundle (not just declared).
---
Nitpick comments:
In `@client-vnc/Dockerfile`:
- Line 7: Add an explanatory comment above the USER root directive in the
Dockerfile clarifying that s6-overlay's /init requires root to manage services
and that service processes (e.g., under rootfs-v3) drop privileges via s6
configuration (for example user usr1cv8), so the final USER root is intentional
and safe; ensure the comment references s6-overlay and /init so future reviewers
and scanners understand why root is required.
- Around line 34-40: The Dockerfile creates extra image layers by using two
separate ADD + RUN tar sequences for s6-overlay; combine them into a single RUN
that uses ARG S6_OVERLAY_VERSION to fetch both s6-overlay-noarch.tar.xz and
s6-overlay-x86_64.tar.xz, extract them with tar -C / -Jxpf (for both files) and
then remove the /tmp/*.tar.xz artifacts in the same RUN so only one layer is
produced and temporary files are cleaned up; update the block that references
ARG S6_OVERLAY_VERSION, the two ADD lines and the two RUN tar invocations to a
single RUN sequence that downloads/extracts/removes both archives.
In `@client/Dockerfile`:
- Around line 47-64: Combine the two separate RUN steps into a single RUN
instruction in the Dockerfile so the apt-get update/install of gnupg, wget and
ca-certificates and the subsequent creation of /etc/apt/keyrings plus the
wget/gpg dearmor and adding the bionic repo happen in one layer; update the RUN
that currently performs apt-get update/install (the first RUN block) to also
create /etc/apt/keyrings, run the wget and gpg --dearmor command that writes
/etc/apt/keyrings/ubuntu-archive-bionic.gpg, and echo/tee the bionic
sources.list entry (the commands currently in the second RUN block), ensuring
you keep set -e, apt-get clean, and rm -rf /var/lib/apt/lists/* in the combined
sequence to preserve error handling and cleanup.
- Around line 112-129: Combine the two separate RUN blocks in the final
Dockerfile stage into a single RUN to reduce image layers and mirror the
optimization used in the base stage: perform apt-get update and apt-get install
(gnupg, wget, ca-certificates), create /etc/apt/keyrings, then fetch and dearmor
the Ubuntu archive key with wget | gpg --dearmor to
/etc/apt/keyrings/ubuntu-archive-bionic.gpg and add the bionic source via echo |
tee, and finally run apt-get clean and rm -rf /var/lib/apt/lists/* in the same
RUN; update the existing RUN blocks that reference gnupg, wget, ca-certificates,
/etc/apt/keyrings, gpg --dearmor and /etc/apt/sources.list.d/bionic-archive.list
to be a single chained RUN using set -e && ... so ordering and permissions are
preserved.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e1d0d526-e5da-4edc-9b04-e9b25fa057f6
📒 Files selected for processing (28)
- .gitattributes
- build-base-k8s-jenkins-agent.sh
- build-base-swarm-jenkins-agent.sh
- client-vnc/Dockerfile
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/user/contents.d/x11vnc
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/user/contents.d/xfce
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/user/contents.d/xvfb
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/x11vnc/dependencies.d/xfce
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/x11vnc/dependencies.d/xvfb
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/x11vnc/run
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/x11vnc/type
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/xfce/dependencies.d/xvfb
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/xfce/run
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/xfce/type
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/xvfb/dependencies.d/base
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/xvfb/run
- client-vnc/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/xvfb/type
- client/Dockerfile
- client/configs/current/conf/conf.cfg
- configs/client-vnc/rootfs/etc/fix-attrs.d/01-usr1c8-ibases
- configs/client-vnc/rootfs/etc/services.d/onec/run
- configs/client-vnc/rootfs/etc/services.d/xfce/run
- configs/client-vnc/rootfs/etc/services.d/xvfb/run
- swarm-jenkins-agent/Dockerfile
- swarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/jenkins-agent/run
- swarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/jenkins-agent/type
- swarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/user/jenkins-agent
- swarm-jenkins-agent/docker-entrypoint.sh
💤 Files with no reviewable changes (6)
- configs/client-vnc/rootfs/etc/fix-attrs.d/01-usr1c8-ibases
- configs/client-vnc/rootfs/etc/services.d/xfce/run
- configs/client-vnc/rootfs/etc/services.d/xvfb/run
- swarm-jenkins-agent/docker-entrypoint.sh
- configs/client-vnc/rootfs/etc/services.d/onec/run
- build-base-k8s-jenkins-agent.sh
🧹 Nitpick comments (1)
build-base-swarm-jenkins-agent.sh (1)
32-41: Consider validating required environment variables. The build uses `$ONEC_USERNAME`, `$ONEC_PASSWORD`, and `$ONEC_VERSION` without validation. If any are unset or empty, the build may fail with unclear errors or produce a broken image.

💡 Suggested validation before the onec-client build

```diff
+# Проверка обязательных переменных окружения для сборки onec-client
+if [[ -z "$ONEC_VERSION" ]]; then
+    echo "Error: ONEC_VERSION is not set"
+    exit 1
+fi
+
+if [[ -z "$ONEC_USERNAME" ]] || [[ -z "$ONEC_PASSWORD" ]]; then
+    echo "Error: ONEC_USERNAME and ONEC_PASSWORD must be set for client build"
+    exit 1
+fi
+
 docker build \
     --build-arg ONEC_USERNAME=$ONEC_USERNAME \
```

As per coding guidelines: "Validate required environment variables in build scripts"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@build-base-swarm-jenkins-agent.sh` around lines 32 - 41, Add explicit validation of required environment variables before the docker build invocation: check that ONEC_USERNAME, ONEC_PASSWORD, and ONEC_VERSION are set and non-empty (and optionally ONEC_VERSION format) and exit with a clear error message if any are missing; locate the validation immediately above the docker build block where the variables are referenced (the docker build command that uses --build-arg ONEC_USERNAME/ONEC_PASSWORD/ONEC_VERSION and tags the image with $ONEC_VERSION) so the script fails fast with a helpful message rather than producing unclear build errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@build-base-swarm-jenkins-agent.sh`:
- Around line 32-41: Add explicit validation of required environment variables
before the docker build invocation: check that ONEC_USERNAME, ONEC_PASSWORD, and
ONEC_VERSION are set and non-empty (and optionally ONEC_VERSION format) and exit
with a clear error message if any are missing; locate the validation immediately
above the docker build block where the variables are referenced (the docker
build command that uses --build-arg ONEC_USERNAME/ONEC_PASSWORD/ONEC_VERSION and
tags the image with $ONEC_VERSION) so the script fails fast with a helpful
message rather than producing unclear build errors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dfa5d32d-54a3-43af-8aa4-9c49818f47ff
📒 Files selected for processing (1)
build-base-swarm-jenkins-agent.sh
Actionable comments posted: 1
🧹 Nitpick comments (3)
client-vnc/Dockerfile (1)
7-7: Consider security implications of running as root. Trivy flags `USER root` as the last USER directive. While this may be necessary for VNC/X11 services that require elevated privileges, consider whether a non-root user could be used at runtime with appropriate capabilities. If root is genuinely required for the VNC stack, you can suppress this warning by documenting the rationale or adding a `# trivy:ignore:DS-0002` comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client-vnc/Dockerfile` at line 7, The Dockerfile currently ends with the USER root directive which triggers security warnings; either create and switch to a non-root user and use that user at runtime (create a user/group, chown needed files, and replace USER root with the new username) or, if root is required for the VNC/X11 stack, add an explicit rationale comment next to the USER root line and suppress Trivy with a `# trivy:ignore:DS-0002` comment; update the Dockerfile's USER directive and any file ownership/permission changes and ensure any startup scripts/ENTRYPOINTs (refer to the Dockerfile USER directive and related startup scripts) work with the chosen user.

s6-overlay/Dockerfile (2)
20-21: Architecture is hardcoded to x86_64. This limits portability to only x86_64 systems. Consider parameterizing the architecture or detecting it dynamically for multi-arch support.
💡 Example using TARGETARCH for multi-arch support
```diff
 ARG S6_OVERLAY_VERSION=3.2.2.0
+ARG TARGETARCH
+
+# Map Docker TARGETARCH to s6-overlay naming
+# amd64 -> x86_64, arm64 -> aarch64
 ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
 RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
-ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz /tmp
-RUN tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz \
+ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-${TARGETARCH:-x86_64}.tar.xz /tmp
+RUN tar -C / -Jxpf /tmp/s6-overlay-${TARGETARCH:-x86_64}.tar.xz \
     && rm -f /tmp/s6-overlay-*.tar.xz
```

Note: s6-overlay uses different naming (x86_64 vs amd64), so a mapping script may be needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s6-overlay/Dockerfile` around lines 20 - 21, The Dockerfile hardcodes the s6 overlay artifact name as s6-overlay-x86_64.tar.xz which breaks multi-arch builds; change to parameterize/detect architecture by introducing an ARG (e.g., TARGETARCH) and derive a build-time variable (e.g., S6_ARCH or mapping logic) that maps Docker's TARGETARCH to the s6 naming (amd64 -> x86_64, arm64 -> aarch64, etc.), then use that variable in the ADD URL and RUN tar invocation instead of the literal s6-overlay-x86_64.tar.xz (refer to S6_OVERLAY_VERSION and the existing ADD/RUN lines to locate where to replace the artifact name).
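The naming mismatch called out in the note can be handled with a tiny mapping computed in a build script and passed in as a build arg. `s6_arch` is a hypothetical helper name; only the amd64 and arm64 mappings are taken from the comment above, and anything else is rejected rather than guessed.

```shell
#!/bin/sh
# Map Docker's TARGETARCH values to s6-overlay artifact names.
# s6_arch is a hypothetical helper; amd64/arm64 mappings follow the
# review comment, other architectures would need checking against
# the s6-overlay release assets.
s6_arch() {
    case "$1" in
        amd64) echo "x86_64" ;;
        arm64) echo "aarch64" ;;
        *)     echo "error: no s6-overlay mapping for TARGETARCH=$1" >&2
               return 1 ;;
    esac
}

s6_arch amd64   # prints x86_64
s6_arch arm64   # prints aarch64
```

The result could then be passed to docker build, e.g. `--build-arg S6_ARCH=$(s6_arch "$TARGETARCH")`, and substituted into the ADD URL instead of the literal x86_64.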
11-22: Combine RUN commands to minimize layers. Per coding guidelines, RUN commands should be combined. Additionally, consider verifying downloaded tarball checksums for security.
♻️ Proposed refactor to combine commands and add checksum verification
```diff
-RUN apt-get update \
-    && apt-mark hold iptables \
-    && env DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
-        xz-utils
-
 # Install s6-overlay v3
 ARG S6_OVERLAY_VERSION=3.2.2.0
+
+RUN apt-get update \
+    && apt-mark hold iptables \
+    && env DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
+        xz-utils \
+        ca-certificates \
+        curl \
+    && curl -fsSL https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz -o /tmp/s6-overlay-noarch.tar.xz \
+    && curl -fsSL https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz.sha256 -o /tmp/s6-overlay-noarch.tar.xz.sha256 \
+    && cd /tmp && sha256sum -c s6-overlay-noarch.tar.xz.sha256 \
+    && tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz \
+    && curl -fsSL https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz -o /tmp/s6-overlay-x86_64.tar.xz \
+    && curl -fsSL https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz.sha256 -o /tmp/s6-overlay-x86_64.tar.xz.sha256 \
+    && cd /tmp && sha256sum -c s6-overlay-x86_64.tar.xz.sha256 \
+    && tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz \
+    && rm -f /tmp/s6-overlay-* \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
-
-ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
-RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
-ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz /tmp
-RUN tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz \
-    && rm -f /tmp/s6-overlay-*.tar.xz
```

As per coding guidelines: "Combine RUN commands in Dockerfiles to minimize layers."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s6-overlay/Dockerfile` around lines 11 - 22, Combine the separate RUN/ADD/tar steps into fewer RUN layers: perform apt-get update, apt-mark hold, install xz-utils, download both s6-overlay tar.xz files (using ARG S6_OVERLAY_VERSION) into /tmp, verify each tarball against a published checksum (e.g., fetch corresponding .sha256 or .sha256sum and run sha256sum -c), extract both with tar -C / -Jxpf, and remove /tmp/s6-overlay-*.tar.xz in the same RUN to ensure cleanup; reference the existing ARG S6_OVERLAY_VERSION, the ADD/ downloaded files (/tmp/s6-overlay-noarch.tar.xz and /tmp/s6-overlay-x86_64.tar.xz), and the tar -C / -Jxpf commands when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@build-edt.sh`:
- Around line 34-43: The docker build command includes an undefined variable
$no_cache_arg which should be removed; update the docker build invocation (the
line containing $no_cache_arg) to omit that variable and rely on the existing
$last_arg behavior instead, ensuring the command still includes the build args
(--pull, --build-arg ..., -t
${DOCKER_REGISTRY_URL:+"$DOCKER_REGISTRY_URL/"}oscript-downloader:latest, -f
oscript/Dockerfile, $last_arg) and no longer references no_cache_arg.
---
Nitpick comments:
In `@client-vnc/Dockerfile`:
- Line 7: The Dockerfile currently ends with the USER root directive which
triggers security warnings; either create and switch to a non-root user and use
that user at runtime (create a user/group, chown needed files, and replace USER
root with the new username) or, if root is required for the VNC/X11 stack, add
an explicit rationale comment next to the USER root line and suppress Trivy with
a `# trivy:ignore:DS-0002` comment; update the Dockerfile's USER directive and
any file ownership/permission changes and ensure any startup scripts/ENTRYPOINTs
(refer to the Dockerfile USER directive and related startup scripts) work with
the chosen user.
In `@s6-overlay/Dockerfile`:
- Around line 20-21: The Dockerfile hardcodes the s6 overlay artifact name as
s6-overlay-x86_64.tar.xz which breaks multi-arch builds; change to
parameterize/detect architecture by introducing an ARG (e.g., TARGETARCH) and
derive a build-time variable (e.g., S6_ARCH or mapping logic) that maps Docker's
TARGETARCH to the s6 naming (amd64 -> x86_64, arm64 -> aarch64, etc.), then use
that variable in the ADD URL and RUN tar invocation instead of the literal
s6-overlay-x86_64.tar.xz (refer to S6_OVERLAY_VERSION and the existing ADD/RUN
lines to locate where to replace the artifact name).
- Around line 11-22: Combine the separate RUN/ADD/tar steps into fewer RUN
layers: perform apt-get update, apt-mark hold, install xz-utils, download both
s6-overlay tar.xz files (using ARG S6_OVERLAY_VERSION) into /tmp, verify each
tarball against a published checksum (e.g., fetch corresponding .sha256 or
.sha256sum and run sha256sum -c), extract both with tar -C / -Jxpf, and remove
/tmp/s6-overlay-*.tar.xz in the same RUN to ensure cleanup; reference the
existing ARG S6_OVERLAY_VERSION, the ADD/ downloaded files
(/tmp/s6-overlay-noarch.tar.xz and /tmp/s6-overlay-x86_64.tar.xz), and the tar
-C / -Jxpf commands when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c5217951-1311-4461-b206-487c6441f4c9
📒 Files selected for processing (8)
build-base-swarm-jenkins-agent.shbuild-edt-swarm-agent.shbuild-edt.shclient-vnc/Dockerfiles6-overlay/Dockerfileswarm-jenkins-agent/Dockerfileswarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/jenkins-agent/runswarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/user/contents.d/jenkins-agent
✅ Files skipped from review due to trivial changes (1)
- swarm-jenkins-agent/Dockerfile
🚧 Files skipped from review as they are similar to previous changes (2)
- swarm-jenkins-agent/configs/rootfs-v3/etc/s6-overlay/s6-rc.d/jenkins-agent/run
- build-base-swarm-jenkins-agent.sh
I dug into it. The failing line is `oscript ${dbgsFindScriptPath} ${config.v8version} > ${dbgsPathResult}` from here. I had debugging enabled: `"logosConfig": "logger.rootLogger=DEBUG"`. The command `oscript '${dbgsFindScriptPath}' '${config.v8version}'` printed a pile of logs with the result as the last line, and all those logs were being redirected into the file and for some reason failed. Whether coverage collection worked before with debugging enabled, I don't know; I only enabled debugging to investigate the quirks of the upgrade. With debugging turned off, this script completed without problems and the pipeline went on working.
Here's my issue: BDD tests started failing in bulk with an error. I haven't investigated the problem inside the container in detail yet. It may be related to the fact that I'm trying to run the tests on 8.5. @ovcharenko-di do scenario tests on VA work for you on these images?
@Stepa86 I haven't run the scenario tests yet, but today I debugged the form-opening tests on ADD in GitLab, and everything works. The mechanism for connecting test clients there is similar. I'll now try running the scenario tests on Jenkins, but on 8.3.24. I'll write back with the results.
That was a problem on my side. A data migration was required, and I hadn't enabled it in the pipeline settings. BDD tests fail like this when the test client doesn't start because of a pending migration requirement (I figured that out from the screenshots).
```sh
set -eu

: "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL:?Required env var not set}"
: "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL:?Required env var not set}"
```
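For readers unfamiliar with the `${VAR:?message}` parameter expansion used above: when the variable is unset or empty, the shell writes the message to stderr and aborts, so s6 sees a failed service start. A small self-contained demonstration (DEMO_URL is an invented variable name):

```shell
#!/bin/sh
# Demonstrates the ${VAR:?message} guard used in the run script.
unset DEMO_URL

# The guard runs in a subshell so this demo itself survives the failure.
( : "${DEMO_URL:?Required env var not set}" ) 2>/dev/null \
    && echo "guard passed" || echo "guard tripped"

DEMO_URL="http://example.org/agent.jar"
( : "${DEMO_URL:?Required env var not set}" ) \
    && echo "guard passed" || echo "guard tripped"
```

This prints "guard tripped" followed by "guard passed".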
This script never got going for me on the old plugin. It complained about an unfilled parameter and nothing else ran.
Which parameter exactly does it complain about?
If it complains about JENKINS_URL, check whether the Jenkins Url parameter is filled in in your cloud settings.
Mine seems to be filled in; the value should be passed into the DOCKER_SWARM_PLUGIN_JENKINS_URL variable, and my script below should assign that value to JENKINS_URL. For some reason that didn't work for me, so for the old plugin I set JENKINS_URL as an environment variable in the Docker Agent template.
Try doing it as described above; maybe I'm the one doing something wrong.
It complains: "./run: 10: DOCKER_SWARM_PLUGIN_JENKINS_URL: parameter not set"
Here is what gets passed to Docker in my case:
"Env": [
"DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET=18e60fdf355555555555555555552af97771e454824354a",
"DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL=http://192.168.31.147:8080/jnlpJars/agent.jar",
"DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL=http://192.168.31.147:8080/computer/agt%2D%5FDPT%5Fdevelop%5F10%2D66/jenkins-agent.jnlp",
"DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME=agt-_DPT_develop_10-66",
"PATH=/opt/Coverage41C-2.7.3/bin:/root/.local/share/ovm/current/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=ru_RU.UTF-8",
"LANGUAGE=ru_RU:ru",
"LC_ALL=ru_RU.UTF-8",
"S6_KEEP_ENV=1",
"DISPLAY=:0",
"DISPLAY_WIDTH=1440",
"DISPLAY_HEIGHT=900",
"OSCRIPTBIN=/root/.local/share/ovm/current/bin",
"EDT_LOCATION=/opt/1C/1CE/"
]Настройки УРЛ вроде заданы
@ovcharenko-di it would be great to support passing DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL when it is not filled in
@nixel2007 with the old docker-swarm-plugin this variable should already be filled in, and swarm-agents-cloud-plugin fills it anyway
Then I've lost this thread. What exactly doesn't work on the old plugin?
@nixel2007 I rewrote the run file so that the agent connects to Jenkins not via jnlp, which is deprecated, but via the JENKINS_URL and JENKINS_AGENT_NAME variables.
But only the new plugin can handle those variables. The old plugin works with variables prefixed DOCKER_SWARM_PLUGIN_, so for run-file compatibility with it I implemented setting JENKINS_URL from DOCKER_SWARM_PLUGIN_JENKINS_URL in the script, which the old plugin should in theory fill in (see here). But for some reason it doesn't fill it in.
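The compatibility shim described here can be sketched as a small fallback function. `resolve_jenkins_url` is a hypothetical name used only for illustration; the variable names are the ones from this thread.

```shell
#!/bin/sh
# Prefer the new plugin's JENKINS_URL; fall back to the old
# docker-swarm-plugin's DOCKER_SWARM_PLUGIN_JENKINS_URL.
# resolve_jenkins_url is a hypothetical helper for illustration.
resolve_jenkins_url() {
    # $1: JENKINS_URL, $2: DOCKER_SWARM_PLUGIN_JENKINS_URL
    url="${1:-${2:-}}"
    if [ -z "$url" ]; then
        echo "error: no Jenkins URL provided by either plugin" >&2
        return 1
    fi
    echo "$url"
}

# Old plugin: only the prefixed variable is set
resolve_jenkins_url "" "http://192.168.31.147:8080"   # prints http://192.168.31.147:8080

# Neither set: fail fast instead of letting s6 restart a broken agent
resolve_jenkins_url "" "" 2>/dev/null || echo "no URL, would exit non-zero"
```

Failing fast when neither variable is set gives a clear error instead of an agent that spins on an empty URL.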
Made it possible to add your own packages to the test-utils layer via one more environment variable.
…-docker into feature/s6-overlay-v3
85737d0 into firstBitMarksistskaya:feature/first-bit


Differences from #90:
- I split `./configs` across the other directories. That seems more logical: everything related to `client-vnc` now lives in `./client-vnc`, including its configs.
- `client-vnc` is built on top of `client`.
- `onec` is not a long-lived (longrun) service, because it is not a service but an application. It should start correctly via CMD after /init (s6-overlay supports this scenario).

UPD 03.04.2026: the problem described below is no longer relevant.
However, there is one problem. If a pipeline is run in GitLab on the client-vnc image or one of its "descendants", the `ibcmd infobase config import` command imports an empty configuration, even though the configuration files are present locally. That is, an "empty" cf is then saved from the infobase, although all the commands complete successfully. During the import, the tech log records an event:

However, if the entrypoint is disabled directly in the GitLab job, the import works correctly. Moreover, when running the image manually via `docker run --rm -it <sha> /bin/bash` and executing all the same commands directly in the console, the problem is not observed. It is absent both with an empty ENTRYPOINT and with /init. So the problem occurs only when two conditions hold at the same time:

It does not depend on the platform version; I tried it on 8.3.21 and 8.3.27.
Summary by CodeRabbit
New Features
Infrastructure Updates
Refactor
Bug Fixes