---
sidebar_position: 5
---

# Developer's guide

Here you can find how to test a Virtual Kubelet implementation against the
main pod use cases we intend to support.

## Requirements

- [Docker engine](https://docs.docker.com/engine/install/)
- [Dagger CLI v0.13.x](https://docs.dagger.io/install/)

## What's in the Dagger module

- E2e integration tests: a reproducible test environment (self-contained in
  the Dagger runtime). Run the very same tests executed by GitHub Actions to
  validate any PR.
- A development setup tool: optionally, you can use the k8s cluster of your
  choice to run and install the different interLink components via this
  module.

:warning: By default the Docker plugin is the one under test; refer to it
first when making any change.

## Usage

The whole test suite is based on applying the k8s manifests inside a folder
that must be passed at runtime. In `./ci/manifests` of this repo you can find
the manifests executed by default by the GitHub Actions.

That means you can test your code **before** any commit, discovering in
advance if anything is breaking.

### Run e2e tests

The easiest way is to simply run `make test` from the root folder of
interlink. But if you need to debug, or to understand the test utility or a
plugin in more depth, you should follow these instructions.

#### Edit manifests with your images

- `service-account.yaml` is the default set of permissions needed by the
  virtual kubelet. Do not touch it unless you know what you are doing.
- `virtual-kubelet-config.yaml` is the configuration mounted into the
  **virtual kubelet** component to determine its behaviour.
- `virtual-kubelet.yaml` is the one that you should touch if you are pointing
  to different interLink endpoints or if you want to change the **virtual
  kubelet** image to be tested.
- `interlink-config.yaml` is the configuration mounted into the **interLink
  API** component to determine its behaviour.
- `interlink.yaml` is the one that you should touch if you are pointing to
  different plugin endpoints or if you want to change the **interLink API**
  image to be tested.
- `plugin-config.yaml` is the configuration for the **interLink plugin**
  component, which you MUST start manually on your host.
  - We do have a solution to start it inside the Dagger environment, but it
    is not documented yet.
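
For instance, pointing `virtual-kubelet.yaml` at a locally built image can be
done with a quick `sed` over a copy of the manifest. The sketch below works on
a minimal excerpt, and both image names are placeholders rather than the
repository's real defaults:

```shell
# Sketch on a minimal manifest excerpt; both image names below are
# placeholders, not the real defaults shipped in ci/manifests.
mkdir -p /tmp/manifests
cat > /tmp/manifests/virtual-kubelet.yaml <<'EOF'
    containers:
      - name: vk
        image: registry.example/virtual-kubelet:latest
EOF

# Point the manifest at your own image before running the tests against it.
sed -i 's|image: .*|image: docker.io/myuser/virtual-kubelet:dev|' \
  /tmp/manifests/virtual-kubelet.yaml
grep 'image:' /tmp/manifests/virtual-kubelet.yaml
```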

#### Start the local Docker plugin service

For a simple demonstration, you can use the plugin that we actually use in
our GitHub Actions:

```bash
wget https://github.com/interlink-hq/interlink-docker-plugin/releases/download/0.0.24-no-gpu/docker-plugin_Linux_x86_64 -O docker-plugin \
  && chmod +x docker-plugin \
  && docker ps \
  && export INTERLINKCONFIGPATH=$PWD/ci/manifests/plugin-config.yaml \
  && ./docker-plugin
```

#### Run the tests

Then, in another terminal session, you are ready to execute the e2e tests
with Dagger.

First of all, in `ci/manifests/vktest_config.yaml` you will find the pytest
configuration file. Please see the
[test documentation](https://github.com/interlink-hq/vk-test-set/tree/main)
to understand how to tweak it.
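
As a rough illustration of such a tweak, you could patch a copy of the config
before a run. The keys below are invented placeholders, not the verified
vk-test-set schema; check the linked documentation for the real field names:

```shell
# Hypothetical sketch: the keys below are placeholders, NOT the verified
# vk-test-set schema; consult the vk-test-set docs for the real fields.
cat > /tmp/vktest_config.yaml <<'EOF'
target_nodes:          # placeholder key
  - virtual-kubelet
timeout_multiplier: 1  # placeholder key
EOF

# Example tweak: double the timeouts for a slow development machine.
sed -i 's/^timeout_multiplier: 1/timeout_multiplier: 2/' /tmp/vktest_config.yaml
grep timeout_multiplier /tmp/vktest_config.yaml
```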

The following instructions build the Docker images of the virtual kubelet and
interLink API server components at runtime and publish them to the
repositories passed as `virtual-kubelet-ref` and `interlink-ref` (in this
example, the Docker Hub repository of the dciangot user). The pipeline
consists of a chain of Dagger tasks: building the core images
(`build-images`), creating the Kubernetes environment configured with the
core components (`new-interlink`), installing the plugin of choice indicated
in the `manifest` folder (`load-plugin`), and eventually executing the tests
(`test`).

To run the default tests, move to the `ci` folder and execute the Dagger
pipeline with:

```bash
dagger call \
  --name my-tests \
  build-images \
  new-interlink \
  --plugin-endpoint tcp://localhost:4000 \
  test stdout
```
In case of success, the output should print something like the following:

```text
cachedir: .pytest_cache
rootdir: /opt/vk-test-set
configfile: pyproject.toml
collecting ... collected 12 items / 1 deselected / 11 selected

vktestset/basic_test.py::test_namespace_exists[default] PASSED [ 9%]
vktestset/basic_test.py::test_namespace_exists[kube-system] PASSED [ 18%]
vktestset/basic_test.py::test_namespace_exists[interlink] PASSED [ 27%]
vktestset/basic_test.py::test_node_exists[virtual-kubelet] PASSED [ 36%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-000-hello-world.yaml] PASSED [ 45%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-010-simple-python.yaml] PASSED [ 54%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-020-python-env.yaml] PASSED [ 63%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-030-simple-shared-volume.yaml] PASSED [ 72%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-040-config-volumes.yaml] PASSED [ 81%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-050-limits.yaml] PASSED [ 90%]
vktestset/basic_test.py::test_manifest[virtual-kubelet-060-init-container.yaml] PASSED [100%]

====================== 11 passed, 1 deselected in 41.71s =======================
```

#### Debug with interactive session

In case something goes wrong, you can spawn a session inside the final step
of the pipeline to debug things:

```bash
dagger call \
  --name my-tests \
  build-images \
  new-interlink \
  --plugin-endpoint tcp://localhost:4000 \
  run terminal
```

After a few minutes, this command should drop you into a bash session, where
you can run the following:

```bash
bash
source .venv/bin/activate
export KUBECONFIG=/.kube/config

## check connectivity with the k8s cluster
kubectl get pod -A

## re-run the tests
pytest -vk 'not rclone'
```
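
The `-k 'not rclone'` expression is a standard pytest name filter: it
deselects every test whose name contains `rclone`. A self-contained
illustration (the dummy test names below are made up for the example, and
`pytest` must be installed):

```shell
# Two dummy tests: the -k filter keeps test_basic and deselects the
# rclone-named one. Test names here are invented for illustration only.
mkdir -p /tmp/kdemo && cd /tmp/kdemo
cat > test_demo.py <<'EOF'
def test_basic():
    assert True

def test_rclone_volume():
    assert True
EOF
pytest -q -k 'not rclone'
```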

#### Debug from kubectl on your host

You can get the Kubernetes service running with:

```bash
dagger call \
  --name my-tests \
  build-images \
  new-interlink \
  --plugin-endpoint tcp://localhost:4000 \
  kube up
```

and then, from another session, you can get the kubeconfig with:

```bash
dagger call \
  --name my-tests \
  config export --path ./kubeconfig.yaml
```

### Deploy on existing K8s cluster

TBD

<!-- -->
<!-- You might want to hijack the test machinery in order to have it instantiate the test environment on your own Kubernetes cluster (e.g. to debug and develop plugins in an efficient way). We are introducing options for this purpose, and they are expected to be extended even more in the future. -->
<!-- -->
<!-- If you have a Kubernetes cluster that is **publicly accessible**, you can pass your kubeconfig to the Dagger pipeline and use that instead of the internal one, which is "one-shot" for the tests only. -->
<!-- -->
<!-- ```bash -->
<!-- ``` -->
<!-- -->
<!-- If you have a *local* cluster (e.g. via MiniKube), you need to forward the local port of the Kubernetes API server (look inside the kubeconfig file) inside the Dagger runtime with the following: -->
<!-- -->
<!-- ```bash -->
<!-- ``` -->

### Develop Virtual Kubelet code

:warning: Coming soon

### Develop interLink API code

:warning: Coming soon

### Develop your plugin

:warning: Coming soon

## SSL Certificate Management

### CSR Integration for Virtual Kubelet

As of this version, Virtual Kubelet supports proper SSL certificate management using Kubernetes Certificate Signing Requests (CSRs) instead of self-signed certificates. This resolves compatibility issues with `kubectl logs` and other Kubernetes clients.

#### Key Changes

- **CSR-based certificates**: Virtual Kubelet now requests certificates from the Kubernetes cluster CA using the standard `kubernetes.io/kubelet-serving` signer
- **Automatic fallback**: if CSR creation fails, the system falls back to self-signed certificates with a warning
- **Improved compatibility**: `kubectl logs` no longer requires the `--insecure-skip-tls-verify-backend` flag

#### Technical Details

The implementation uses:

- **Signer**: `kubernetes.io/kubelet-serving` (the standard kubelet serving certificate signer)
- **Certificate store**: the `/tmp/certs` directory with a `virtual-kubelet` prefix
- **Subject**: `system:node:<node-name>` with the `system:nodes` organization
- **IP SANs**: the node IP address, for proper certificate validation

#### Testing Certificate Integration

To verify CSR-based certificate functionality:

1. **Check CSR creation**:
   ```bash
   kubectl get csr
   ```

2. **Test kubectl logs without the insecure flag**:
   ```bash
   kubectl logs <pod-name-on-virtual-kubelet-node>
   ```

3. **Monitor Virtual Kubelet logs** for certificate retrieval messages:
   ```bash
   kubectl logs -n interlink virtual-kubelet-<node-name>
   ```

#### ⚠️ IMPORTANT: CSR Manual Approval Required

:exclamation: **CRITICAL**: CSRs (Certificate Signing Requests) must be manually approved by a cluster administrator, otherwise **log access will not work**. Without CSR approval, `kubectl logs` and other log-related operations will fail.

**Required steps for enabling log functionality:**

1. **Check for pending CSRs**:
   ```bash
   kubectl get csr
   ```

2. **Approve the CSR** (replace `csr-xxxxx` with the actual CSR name):
   ```bash
   kubectl certificate approve csr-xxxxx
   ```

3. **Verify logs are accessible**:
   ```bash
   kubectl logs <pod-name-on-virtual-kubelet-node>
   ```

#### Troubleshooting

- **CSR approval**: ensure your cluster has automatic CSR approval configured, or manually approve CSRs
- **RBAC permissions**: Virtual Kubelet needs permission to create CSRs in the `certificates.k8s.io` API group
- **Fallback behavior**: check the logs for warnings about falling back to self-signed certificates

For clusters without proper CSR support, the system maintains backward compatibility by automatically using self-signed certificates with appropriate warnings.