Puppet Modules issues
https://code.immerda.ch/groups/immerda/puppet-modules/-/issues

---

https://code.immerda.ch/immerda/puppet-modules/podman/-/issues/15 — Monitor pod liveness (mh, 2022-12-31)

At the moment systemd monitors the PID of the last container in the pod. When this container exits, the unit is restarted. As long as this container stays alive, the pod is seen as running, even when any other container fails.
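One direction this could take, sketched below: instead of watching a single PID, poll the state of every container in the pod and fail as soon as any of them is not running. The `check_pod` helper and the sample container names are assumptions for illustration, not existing code in this module:

```shell
#!/bin/sh
# Sketch: a pod liveness probe that fails if ANY container in the pod
# is not running. In practice the input would come from something like:
#   podman ps -a --pod --filter "pod=$POD" --format '{{.Names}} {{.State}}'
# check_pod reads that "name state" output on stdin and returns non-zero
# as soon as one line is not in state "running".
check_pod() {
    ! grep -qv ' running$'
}

# Examples against captured output (container names are made up):
printf 'pause running\napp running\nsocat running\n' | check_pod && echo healthy
printf 'pause running\napp running\nsocat exited\n'  | check_pod || echo unhealthy
```

Such a script could be wired into the unit as an `ExecStartPost`/timer-driven check, but whether systemd should restart the whole pod on failure is exactly the open question of this issue.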
Two issues with that:
1. We might not restart if, for example, the socat container fails, which provides connectivity to the pod.
2. We might restart although the overall pod is healthy and just one container fails.
What would be the expected behavior?
* Monitor all containers and fail if one exits?
* What else?
* Should we consider other health checks?

---

https://code.immerda.ch/immerda/puppet-modules/podman/-/issues/13 — CI broken (o@ungehorsam.ch, 2022-09-12)

https://code.immerda.ch/immerda/puppet-modules/podman/-/jobs/42458

---

https://code.immerda.ch/immerda/puppet-modules/podman/-/issues/11 — Add initContainers (strix, 2021-10-09)

initContainers are supported by Podman:
* https://github.com/containers/podman/pull/11011
* https://docs.podman.io/en/latest/markdown/podman-create.1.html#init-ctr-type-pods-only

They are created with `podman create --init-ctr=<always|once>` and should then be started with `podman start`. The format is the same as for `spec.containers`, just under `spec.initContainers`:
```yaml
spec:
  initContainers:
  - name: mycontainer
    image: alpine:latest
    ...
```
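For reference, the CLI form this would map to might look like the following; the pod and container names below are hypothetical, and `--init-ctr` requires a Podman version that includes the PR linked above:

```shell
# Sketch: create a pod whose init container runs once before the
# regular containers start (all names here are made up).
podman pod create --name mypod
podman create --pod mypod --init-ctr=once --name myinit \
    docker.io/library/alpine:latest sh -c 'echo initializing'
podman create --pod mypod --name mycontainer \
    docker.io/library/alpine:latest sleep infinity
podman pod start mypod
```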
We should also decide whether we want to support both the `always` and the `once` value. Kubernetes only knows the `always` type; podman has both.

---

https://code.immerda.ch/immerda/puppet-modules/php/-/issues/1 — Fix snuffleupagus config (mh, 2021-08-07)

I think there is an error in the current snuffleupagus config. We copy&pasted the rules from the readme, and there was a typo (https://github.com/jvoisin/snuffleupagus/issues/312). Since release 0.7.1 this throws a warning (https://github.com/jvoisin/snuffleupagus/pull/367).
I hope this fixes the warnings we currently see on some PHP setups. I couldn't test the changes on my local machine, so some testing still needs to be done.
Changing that breaks lots of sites (!2), so we need to check further; but at least we can remove the rules for now, since they are useless and just generate logs.

---

https://code.immerda.ch/immerda/puppet-modules/podman/-/issues/7 — Ports and selinux (mahogony, 2021-01-23)

I struggled to start a pod with a container that binds to port 8181. It failed to bind the port and never started.
> type=AVC msg=audit(1578260727.872:871): avc: denied { name_bind } for pid=5450 comm="ticker" src=8181 scontext=system_u:system_r:httpd_container_rw_content.process:s0:c32,c925 tcontext=system_u:object_r:intermapper_port_t:s0 tclass=tcp_socket permissive=0
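To see why 8181 is rejected, one can inspect the SELinux port labels; one possible workaround (whether it is the right policy decision is a separate question) is to relabel the port explicitly. A sketch, assuming root and the `semanage` tool from policycoreutils:

```shell
# Which SELinux type owns port 8181?
# (Per the AVC record above it is intermapper_port_t.)
semanage port -l | grep -w 8181

# One option: add 8181 to a type the container policy is allowed to
# bind, e.g. http_port_t. Choosing the right type is a policy decision.
semanage port -a -t http_port_t -p tcp 8181
```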
Currently the SELinux policy (http_container_rw_content.sli) allows containers to bind to 80, 8080 and all unreserved ports (for the list of reserved ports see `semanage port -l`). But it is very unclear which ports are unreserved. So why do we limit the ports a container can bind to at all?
For containers with a published port, that policy makes sense (e.g. wkd-svr): there a port is actually published to the network. But for containers in a pod with socat it makes much less sense. Such a container binds the port only inside the pod network, so it cannot conflict with a port already in use on the host.
Should we use a different SELinux policy for containers which publish a port than for those which only bind inside a pod network? Or should we just document this behavior and give some advice on which port range a container can bind to without problems?