11 changes: 5 additions & 6 deletions modules/nodes-containers-port-forwarding-about.adoc
Original file line number Diff line number Diff line change
@@ -6,20 +6,19 @@
[id="nodes-containers-port-forwarding-about_{context}"]
= Understanding port forwarding

You can use the CLI to forward one or more local ports to a pod. This allows you
to listen on a given or random port locally, and have data forwarded to and from
given ports in the pod.
[role="_abstract"]
You can use the {oc-first} to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod.

Support for port forwarding is built into the CLI:
You can use a command similar to the following to forward one or more local ports to a pod.

[source,terminal]
----
$ oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]
----
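As an illustration of the `[<local_port>:]<remote_port>` syntax, the following shell sketch mimics how a single port spec resolves. The `parse_port_spec` helper is hypothetical and for illustration only; `oc` performs this parsing internally.

```shell
# Hypothetical helper: resolve a port spec of the form
# [<local_port>:]<remote_port>. With no colon, the same port is used
# locally and in the pod; an empty local port means a random free port.
parse_port_spec() {
  spec=$1
  case "$spec" in
    :*)  echo "local=random remote=${spec#*:}" ;;
    *:*) echo "local=${spec%%:*} remote=${spec#*:}" ;;
    *)   echo "local=$spec remote=$spec" ;;
  esac
}

parse_port_spec 5000       # -> local=5000 remote=5000
parse_port_spec 8888:5000  # -> local=8888 remote=5000
parse_port_spec :5000      # -> local=random remote=5000
```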

The CLI listens on each local port specified by the user, forwarding using the protocol described below.
The {oc-first} listens on each local port specified by the user, forwarding using the protocol described below.

Ports may be specified using the following formats:
You can specify ports by using the following formats:

[horizontal]
`5000`:: The client listens on port 5000 locally and forwards to 5000 in the pod.
22 changes: 13 additions & 9 deletions modules/nodes-containers-port-forwarding-protocol.adoc
@@ -6,21 +6,25 @@
[id="nodes-containers-port-forwarding-protocol_{context}"]
= Protocol for initiating port forwarding from a client

Clients initiate port forwarding to a pod by issuing a request to the
Kubernetes API server:
[role="_abstract"]
A client resource in your cluster can initiate port forwarding to a pod by issuing a request to the Kubernetes API server.

Use a request in the following format:

[source,terminal]
----
/proxy/nodes/<node_name>/portForward/<namespace>/<pod>
----
where:

In the above URL:

- `<node_name>` is the FQDN of the node.
- `<namespace>` is the namespace of the target pod.
- `<pod>` is the name of the target pod.

For example:
--
`<node_name>`:: Specifies the FQDN of the node.
`<namespace>`:: Specifies the namespace of the target pod.
`<pod>`:: Specifies the name of the target pod.
--

.Example request
[source,terminal]
----
/proxy/nodes/node123.openshift.com/portForward/myns/mypod
----
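As a sketch, the request path above can be assembled from its components. The `build_port_forward_path` helper below is illustrative only; it is not part of `oc` or the API server.

```shell
# Illustrative helper: assemble the port-forward request path
# from its node, namespace, and pod components.
build_port_forward_path() {
  node=$1 namespace=$2 pod=$3
  printf '/proxy/nodes/%s/portForward/%s/%s\n' "$node" "$namespace" "$pod"
}

build_port_forward_path node123.openshift.com myns mypod
# -> /proxy/nodes/node123.openshift.com/portForward/myns/mypod
```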
23 changes: 11 additions & 12 deletions modules/nodes-containers-port-forwarding-using.adoc
@@ -6,20 +6,19 @@
[id="nodes-containers-port-forwarding-using_{context}"]
= Using port forwarding

You can use the CLI to port-forward one or more local ports to a pod.
[role="_abstract"]
You can use the {oc-first} to port-forward one or more local ports to a pod.

.Procedure

Use the following command to listen on the specified port in a pod:

* Use a command similar to the following to listen on the specified port in a pod:
+
[source,terminal]
----
$ oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]
----

For example:

* Use the following command to listen on ports `5000` and `6000` locally and forward data to and from ports `5000` and `6000` in the pod:
+
For example, use the following command to listen on ports `5000` and `6000` locally and forward data to and from ports `5000` and `6000` in the pod:
+
[source,terminal]
----
@@ -34,8 +33,8 @@ Forwarding from [::1]:5000 -> 5000
Forwarding from 127.0.0.1:6000 -> 6000
Forwarding from [::1]:6000 -> 6000
----

* Use the following command to listen on port `8888` locally and forward to `5000` in the pod:
+
For example, use the following command to listen on port `8888` locally and forward to `5000` in the pod:
+
[source,terminal]
----
@@ -48,8 +47,8 @@ $ oc port-forward <pod> 8888:5000
Forwarding from 127.0.0.1:8888 -> 5000
Forwarding from [::1]:8888 -> 5000
----

* Use the following command to listen on a free port locally and forward to `5000` in the pod:
+
For example, use the following command to listen on a free port locally and forward to `5000` in the pod:
+
[source,terminal]
----
@@ -63,7 +62,7 @@ Forwarding from 127.0.0.1:42390 -> 5000
Forwarding from [::1]:42390 -> 5000
----
+
Or:
Alternatively, use the following command to listen on a free port locally and forward to `5000` in the pod:
+
[source,terminal]
----
15 changes: 8 additions & 7 deletions modules/nodes-containers-remote-commands-about.adoc
@@ -6,30 +6,31 @@
[id="nodes-containers-remote-commands-about_{context}"]
= Executing remote commands in containers

Support for remote container command execution is built into the CLI.
[role="_abstract"]
You can use the {oc-first} to execute remote commands in {product-title} containers. By running commands in a container, you can troubleshoot issues, inspect logs, run scripts, and perform other tasks.

.Procedure

To run a command in a container:

* Use a command similar to the following to run a command in a container:
+
[source,terminal]
----
$ oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]
----

+
For example:

+
[source,terminal]
----
$ oc exec mypod -- date
----

+
.Example output
[source,terminal]
----
Thu Apr 9 02:21:53 UTC 2015
----

+
[IMPORTANT]
====
link:https://access.redhat.com/errata/RHSA-2015:1650[For security purposes], the
34 changes: 18 additions & 16 deletions modules/nodes-containers-remote-commands-protocol.adoc
@@ -6,35 +6,37 @@
[id="nodes-containers-remote-commands-protocol_{context}"]
= Protocol for initiating a remote command from a client

Clients initiate the execution of a remote command in a container by issuing a
request to the Kubernetes API server:
[role="_abstract"]
A client resource in your cluster can initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server.

A typical request to the Kubernetes API server uses the following format:

[source,terminal]
----
/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>
----
where:

In the above URL:

- `<node_name>` is the FQDN of the node.
- `<namespace>` is the project of the target pod.
- `<pod>` is the name of the target pod.
- `<container>` is the name of the target container.
- `<command>` is the desired command to be executed.

For example:
--
`<node_name>`:: Specifies the FQDN of the node.
`<namespace>`:: Specifies the project of the target pod.
`<pod>`:: Specifies the name of the target pod.
`<container>`:: Specifies the name of the target container.
`<command>`:: Specifies the desired command to be executed.
--

.Example request
[source,terminal]
----
/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date
----

Additionally, the client can add parameters to the request to indicate if:
Additionally, the client can add parameters to the request to indicate any of the following conditions:

- the client should send input to the remote container's command (stdin).
- the client's terminal is a TTY.
- the remote container's command should send output from stdout to the client.
- the remote container's command should send output from stderr to the client.
* The client should send input to the remote container's command (stdin).
* The client's terminal is a TTY.
* The remote container's command should send output from stdout to the client.
* The remote container's command should send output from stderr to the client.
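These stream-control options appear as query parameters on the request. The following sketch assembles an `exec` path with such parameters; the parameter names (`stdin`, `stdout`, `stderr`, `tty`) follow the Kubernetes exec subresource convention and should be treated as illustrative here, as is the `build_exec_path` helper itself.

```shell
# Illustrative helper: assemble an exec request path, including
# stream-control query parameters (names assumed from the Kubernetes
# exec subresource convention).
build_exec_path() {
  node=$1 namespace=$2 pod=$3 container=$4 command=$5
  printf '/proxy/nodes/%s/exec/%s/%s/%s?command=%s&stdout=true&stderr=true&tty=false\n' \
    "$node" "$namespace" "$pod" "$container" "$command"
}

build_exec_path node123.openshift.com myns mypod mycontainer date
```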

After sending an `exec` request to the API server, the client upgrades the
connection to one that supports multiplexed streams; the current implementation
38 changes: 22 additions & 16 deletions modules/nodes-containers-start-pod-safe-sysctls.adoc
@@ -6,7 +6,8 @@
[id="nodes-starting-pod-safe-sysctls_{context}"]
= Starting a pod with safe sysctls

You can set sysctls on pods using the pod's `securityContext`. The `securityContext` applies to all containers in the same pod.
[role="_abstract"]
You can modify kernel parameters for all containers in a pod by adding the sysctls parameter to the `securityContext` parameter in a pod spec.

Safe sysctls are allowed by default.

@@ -22,7 +23,8 @@ This example uses the pod `securityContext` to set the following safe sysctls:
To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects.
====

Use this procedure to start a pod with the configured sysctl settings.
The following procedure shows how to start a pod with the configured sysctl settings.

[NOTE]
====
In most cases you modify an existing pod definition and add the `securityContext` spec.
@@ -46,14 +48,14 @@ spec:
image: centos
command: ["bin/bash", "-c", "sleep INF"]
securityContext:
runAsUser: 2000 <1>
runAsGroup: 3000 <2>
allowPrivilegeEscalation: false <3>
capabilities: <4>
runAsUser: 2000
runAsGroup: 3000
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
securityContext:
runAsNonRoot: true <5>
seccompProfile: <6>
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
sysctls:
- name: kernel.shm_rmid_forced
@@ -65,21 +67,25 @@ spec:
- name: net.ipv4.ping_group_range
value: "0 200000000"
----
<1> `runAsUser` controls which user ID the container is run with.
<2> `runAsGroup` controls which primary group ID the containers is run with.
<3> `allowPrivilegeEscalation` determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the `no_new_privs` flag gets set on the container process.
<4> `capabilities` permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod.
<5> `runAsNonRoot: true` requires that the container will run with a user with any UID other than 0.
<6> `RuntimeDefault` enables the default seccomp profile for a pod or container workload.
where:

`spec.containers.securityContext.runAsUser`:: Specifies which user ID the container is run with.
`spec.containers.securityContext.runAsGroup`:: Specifies which primary group ID the container is run with.
`spec.containers.securityContext.allowPrivilegeEscalation`:: Specifies whether a pod can request privilege escalation. The default is `true`. This boolean directly controls whether the `no_new_privs` flag gets set on the container process.
`spec.containers.securityContext.capabilities`:: Specifies the Linux capabilities that permit privileged actions without granting full root access. This policy drops all capabilities from the pod.
`spec.securityContext.runAsNonRoot: true`:: Specifies that the container must run as a user with a UID other than 0.
`spec.securityContext.seccompProfile.type: RuntimeDefault`:: Specifies that the default seccomp profile is enabled for a pod or container workload.

. Create the pod by running the following command:
+
[source,terminal]
----
$ oc apply -f sysctl_pod.yaml
----
+
. Verify that the pod is created by running the following command:

.Verification

. Check that the pod is created by running the following command:
+
[source,terminal]
----
11 changes: 7 additions & 4 deletions modules/nodes-containers-sysctls-about.adoc
@@ -6,17 +6,20 @@
[id="nodes-containers-sysctls-about_{context}"]
= About sysctls

In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available from the `_/proc/sys/_` virtual process file system. The parameters cover various subsystems, such as:
[role="_abstract"]
The Linux sysctl interface allows you to modify kernel parameters at runtime to manage subsystems such as networking, virtual memory, and MDADM. By accessing the sysctl interface, you can view and adjust system configurations without rebooting the operating system.

You can modify the following subsystems by using sysctls:

- kernel (common prefix: `_kernel._`)
- networking (common prefix: `_net._`)
- virtual memory (common prefix: `_vm._`)
- MDADM (common prefix: `_dev._`)

More subsystems are described in link:https://www.kernel.org/doc/Documentation/sysctl/README[Kernel documentation].
To get a list of all parameters, run:
Refer to the link:https://www.kernel.org/doc/Documentation/sysctl/README[Kernel.org documentation] for more information on the subsystems you can manage.
You can get a list of all parameters by running the following command:

[source,terminal]
----
$ sudo sysctl -a
----
----
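Each sysctl name maps to a path under `/proc/sys/` by replacing the dots with slashes, which is one way to read a single parameter without the `sysctl` command. The snippet below is a minimal sketch of that mapping:

```shell
# Map a sysctl name to its /proc/sys path: dots become slashes.
name="kernel.shm_rmid_forced"
path="/proc/sys/$(echo "$name" | tr . /)"
echo "$path"
# -> /proc/sys/kernel/shm_rmid_forced
# On a Linux host, `cat "$path"` then reads the current value.
```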
7 changes: 5 additions & 2 deletions modules/nodes-containers-sysctls-setting.adoc
@@ -6,7 +6,10 @@
[id="nodes-containers-starting-pod-with-unsafe-sysctls_{context}"]
= Starting a pod with unsafe sysctls

A pod with unsafe sysctls fails to launch on any node unless the cluster administrator explicitly enables unsafe sysctls for that node. As with node-level sysctls, use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes.
[role="_abstract"]
You can run a pod that is configured to use unsafe sysctls on a node where a cluster administrator explicitly enabled unsafe sysctls. You might use unsafe sysctls for situations such as high performance or real-time application tuning.

You can use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes.

The following example uses the pod `securityContext` to set a safe sysctl `kernel.shm_rmid_forced` and two unsafe sysctls, `net.core.somaxconn` and `kernel.msgmax`. There is no distinction between _safe_ and _unsafe_ sysctls in the specification.

@@ -51,7 +54,7 @@ spec:
value: "65536"
----

. Create the pod using the following command:
. Create the pod by using the following command:
+
[source,terminal]
----