
Conversation

@sebastienvermeille (Collaborator) commented Jun 2, 2025

Changes

graph TD
    A[Push to main branch] --> B[Checkout Code]
    B --> C[Set Git Commit Timestamp]
    C --> D[Set up QEMU]
    D --> E[Set up Docker Buildx]
    E --> F[Login to DockerHub]
    F --> G[Build and Push Multi-Arch Docker Image]

    G -->|linux/amd64| H([Create Image: nspanelmanager/nspanelmanager:latest])
    G -->|linux/386| H
    G -->|linux/arm64| H
    G -->|linux/arm/v7| H
    G -->|qemu-arm| H
    G -->|qemu-aarch64| H

Note:
I added SOURCE_DATE_EPOCH support (https://reproducible-builds.org/docs/source-date-epoch/). It is interesting that we can expose the build time via an environment variable, and since it is a standard we could later reuse it in the "help" UI when publishing bug reports etc. (see the workflow sketch below)
For now I added a TODO and everything is tagged :test, so that we can safely verify it works as expected (we still have to confirm that the multi-architecture images are actually produced and behave correctly).

This way a tag is published each time a commit lands on the main branch, and it will work transparently on other architecture platforms.
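
Roughly, a sketch of what this looks like as a GitHub Actions workflow (action versions, secret names and the build-args wiring below are illustrative, not necessarily what the diff uses; the step names and platforms follow the graph above):

    name: Build and push multi-arch Docker image
    on:
      push:
        branches:
          - main
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - name: Checkout code
            uses: actions/checkout@v4
          # Expose the commit timestamp as SOURCE_DATE_EPOCH for reproducible builds
          - name: Set Git commit timestamp
            run: echo "SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)" >> "$GITHUB_ENV"
          - name: Set up QEMU
            uses: docker/setup-qemu-action@v3
          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v3
          - name: Login to DockerHub
            uses: docker/login-action@v3
            with:
              username: ${{ secrets.DOCKERHUB_USERNAME }}
              password: ${{ secrets.DOCKERHUB_TOKEN }}
          - name: Build and push multi-arch Docker image
            uses: docker/build-push-action@v6
            with:
              context: ./docker
              platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7
              push: true
              tags: nspanelmanager/nspanelmanager:latest
              build-args: |
                SOURCE_DATE_EPOCH=${{ env.SOURCE_DATE_EPOCH }}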
@tpanajott (Collaborator) left a comment


Sorry for the delay. We are missing one important step in the workflow: replacing the version number placeholders in the files, as in the already existing workflow:

- run: |
          cp docker/web/nspanelmanager/web/templates/footer_template.html docker/web/nspanelmanager/web/templates/footer.html
          sed -i 's/%version%/${{ github.ref_name }}/g' docker/web/nspanelmanager/web/templates/footer.html
          cp docs/tex/manual.pdf docker/web/nspanelmanager/manual.pdf

with:
  context: ./docker # Path to your Dockerfile
  # note: platforms list: https://github.com/docker/setup-qemu-action
  platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,qemu-arm,qemu-aarch64
Collaborator

Are these run in parallel or serially?

Collaborator Author

I wanted to go the simple way first, so it is currently sequential.

We can change it later using a build matrix (the docs also mention that variant, but it is slightly more complicated to set up); see the sketch below.

Let me know if you want it in parallel already, I can give it a shot too.
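
For reference, a rough sketch of the matrix variant (job layout, secret names and the push-by-digest output follow the multi-platform example in the docker/build-push-action docs; none of this is in the current PR):

    jobs:
      build:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            platform: [linux/amd64, linux/386, linux/arm64, linux/arm/v7]
        steps:
          - uses: actions/checkout@v4
          - uses: docker/setup-qemu-action@v3
          - uses: docker/setup-buildx-action@v3
          - uses: docker/login-action@v3
            with:
              username: ${{ secrets.DOCKERHUB_USERNAME }}
              password: ${{ secrets.DOCKERHUB_TOKEN }}
          # Each matrix job builds and pushes one platform by digest; a follow-up
          # job then merges the digests into a single multi-arch manifest with
          # `docker buildx imagetools create` (omitted here for brevity).
          - uses: docker/build-push-action@v6
            with:
              context: ./docker
              platforms: ${{ matrix.platform }}
              outputs: type=image,name=nspanelmanager/nspanelmanager,push-by-digest=true,name-canonical=true,push=true

Each platform then gets its own runner instead of the builds being emulated one after the other.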

Collaborator

The current free tier we are using has a limit of 6 hours per job, so if we don't build in parallel it will fail. Also, we are currently building with the Home Assistant builder, and what it considers armhf (I'm guessing that's the qemu-arm thing here) takes about 6-7 hours to build. So that specific image currently has to be built on my PC and then uploaded.

This is a known issue that I have to fix. Unfortunately it requires a bit of work, and I don't feel we should do that work before we publish the current beta as stable. Basically, we need to remove ZMQ and rework the communication between the web interface and the MQTTManager component.

@sebastienvermeille (Collaborator, Author)

Sorry for the delay. We are missing one important step in the workflow: replacing the version number placeholders in the files, as in the already existing workflow: [...]

Okay, thank you for the notice, I will integrate that.

@tpanajott (Collaborator)

Also, one thing that's missing here, which I forgot to mention (but is probably not something we should implement just now), is the final step: updating the docker/config.yaml and docker-beta/config.yaml files to point to the latest release once the publish has finished.
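
If we do add it later, a minimal sketch of such a step could look like this (it assumes the config files carry a top-level version: key, which I haven't checked):

    - name: Point add-on configs at the new release
      run: |
        # Replace the version field in both add-on configs (key name assumed)
        sed -i "s|^version:.*|version: \"${{ github.ref_name }}\"|" docker/config.yaml docker-beta/config.yaml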

push:
  branches:
    - main
    - ci/refactore # TODO: remove it
Collaborator

It seems the build is failing because sed uses / to separate the different parts of the command. It fails on sed -i "s/%version%/$VERSION/g" docker/web/***/web/templates/footer.html when trying to update the version shown in the footer: because the branch is "ci/refactore", the command expands to sed -i "s/%version%/ci/refactore/g" docker/web/***/web/templates/footer.html and sed tries to parse "refactore" as options to the s command. My proposal is to change the command to sed -i "s|%version%|$VERSION|g" docker/web/***/web/templates/footer.html (i.e. use pipes instead of / as separators; a pipe is much less likely to appear in a version string than a /). See the example below.
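
To illustrate the failure mode and the proposed fix on a throwaway footer.html (the file name is just a stand-in for the real path):

    # VERSION contains a slash when the ref is a branch such as ci/refactore
    VERSION="ci/refactore"

    # Broken: the / inside $VERSION closes the s/// expression early, so sed
    # tries to read the leftover "refactore/g" as flags and errors out with
    # "unknown option to `s'"
    sed -i "s/%version%/$VERSION/g" footer.html

    # Fixed: use | as the delimiter; a | is far less likely than a / to show
    # up in a version string
    sed -i "s|%version%|$VERSION|g" footer.html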

Update as per my review: change / to | in the sed command that replaces the version in the footer.
@tpanajott (Collaborator)

Hmm, we are now getting the error below, but I'm not quite sure why. This is not something I've ever seen before.

#74 [linux/386 stage-1 14/15] RUN pip install -r requirements.txt # Install python packages
#74 20.80   Installing build dependencies: finished with status 'done'
#74 20.81   Getting requirements to build wheel: started
#74 23.26   Getting requirements to build wheel: finished with status 'error'
#74 23.26   error: subprocess-exited-with-error
#74 23.26   
#74 23.26   × Getting requirements to build wheel did not run successfully.
#74 23.26   │ exit code: 1
#74 23.26   ╰─> [47 lines of output]
#74 23.26       Compiling src/gevent/resolver/cares.pyx because it changed.
#74 23.26       [1/1] Cythonizing src/gevent/resolver/cares.pyx
#74 23.26       performance hint: src/gevent/libev/corecext.pyx:1357:0: Exception check on '_syserr_cb' will always require the GIL to be acquired.
#74 23.26       Possible solutions:
#74 23.26           1. Declare '_syserr_cb' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
#74 23.26           2. Use an 'int' return type on '_syserr_cb' to allow an error code to be returned.
#74 23.26       
#74 23.26       Error compiling Cython file:
#74 23.26       ------------------------------------------------------------
#74 23.26       ...
#74 23.26       cdef tuple integer_types
#74 23.26       
#74 23.26       if sys.version_info[0] >= 3:
#74 23.26           integer_types = int,
#74 23.26       else:
#74 23.26           integer_types = (int, long)
#74 23.26                                 ^
#74 23.26       ------------------------------------------------------------
#74 23.26       
#74 23.26       src/gevent/libev/corecext.pyx:69:26: undeclared name not builtin: long
#74 23.26       Compiling src/gevent/libev/corecext.pyx because it changed.
#74 23.26       [1/1] Cythonizing src/gevent/libev/corecext.pyx
#74 23.26       Traceback (most recent call last):
#74 23.26         File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
#74 23.26           main()
#74 23.26         File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
#74 23.26           json_out['return_val'] = hook(**hook_input['kwargs'])
#74 23.26                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#74 23.26         File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
#74 23.26           return hook(config_settings)
#74 23.26                  ^^^^^^^^^^^^^^^^^^^^^
#74 23.26         File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
#74 23.26           return self._get_build_requires(config_settings, requirements=[])
#74 23.26                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#74 23.26         File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 301, in _get_build_requires
#74 23.26           self.run_setup()
#74 23.26         File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 317, in run_setup
#74 23.26           exec(code, locals())
#74 23.26         File "<string>", line 54, in <module>
#74 23.26         File "/tmp/pip-install-_mz2o1rz/gevent_b68f469440c14a27b8b8fe401569bd86/_setuputils.py", line 249, in cythonize1
#74 23.26           new_ext = cythonize(
#74 23.26                     ^^^^^^^^^^
#74 23.26         File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
#74 23.26           cythonize_one(*args)
#74 23.26         File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1298, in cythonize_one
#74 23.26           raise CompileError(None, pyx_file)
#74 23.26       Cython.Compiler.Errors.CompileError: src/gevent/libev/corecext.pyx
#74 23.26       [end of output]
#74 23.26   
#74 23.26   note: This error originates from a subprocess, and is likely not a problem with pip.
#74 23.27 error: subprocess-exited-with-error

@tpanajott (Collaborator)

I think I figured it out, though I'm not quite sure how to test it. The error above (src/gevent/libev/corecext.pyx:69:26: undeclared name not builtin: long) comes from the fact that when building for i386/x86 the data type long is simply not declared in Python. The package trying to use long is gevent, which is only pulled in as a dependency of conan, which in itself is only used when building the MQTTManager binary during the first stage of the Dockerfile.

Previously we've used the Home Assistant builder tool to build the images. When using that, it passes along the BUILDPLATFORM argument, which can be used together with the --platform argument in the FROM instruction of the Dockerfile to specify that, even if an image is being built for i386/x86 (in this case; it could be arm or whatever), this particular stage should simply run on the native architecture of the host system (almost always x86_64), where the data type long is declared. Doing this also gives a massive performance uplift, as emulating ARM on an x86_64 processor is painfully slow. So if we can keep doing as before and cross-compile the MQTTManager on the native architecture, we will save a lot of time and solve this issue. A sketch of the idea is below.
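
A minimal sketch of that idea with buildx (stage names, base images, paths and the build script are made up for illustration; only the --platform=$BUILDPLATFORM / TARGETARCH mechanism itself is the point):

    # BUILDPLATFORM is set automatically by buildx to the host platform, so this
    # stage always runs natively (usually linux/amd64) instead of under QEMU,
    # even when the final image targets linux/386, arm/v7, etc.
    FROM --platform=$BUILDPLATFORM debian:bookworm AS mqttmanager-builder
    ARG TARGETARCH
    WORKDIR /src
    COPY MQTTManager/ .
    # Cross-compile the MQTTManager binary for the requested target architecture
    # (toolchain setup and the actual build script are omitted / hypothetical)
    RUN ./build.sh --target-arch "$TARGETARCH"

    # The final stage is built for the target platform as usual and just copies
    # in the natively cross-compiled binary
    FROM debian:bookworm-slim
    COPY --from=mqttmanager-builder /src/out/MQTTManager /usr/local/bin/MQTTManager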
