author    Franck Cuny <franck@fcuny.net>  2024-12-06 17:59:29 -0800
committer Franck Cuny <franck@fcuny.net>  2024-12-06 17:59:29 -0800
commit    96fbcb37c718dc18ee814f45fecc2c51cea4e909 (patch)
tree      fa3fc78a41ea9c249df6564b6e0dd5488945af9f
parent    simplify (diff)
download  fcuny.net-96fbcb37c718dc18ee814f45fecc2c51cea4e909.tar.gz
add container security summit notes
-rw-r--r--  content/container-security-summit-2019.md  83
-rw-r--r--  content/container-security-summit-2020.md  56
2 files changed, 139 insertions, 0 deletions
diff --git a/content/container-security-summit-2019.md b/content/container-security-summit-2019.md
new file mode 100644
index 0000000..3ec8149
--- /dev/null
+++ b/content/container-security-summit-2019.md
@@ -0,0 +1,83 @@
++++
+title = "Container Security Summit 2019"
+date = 2019-02-20
+[taxonomies]
+tags = ["conference", "containers"]
++++
+
+This was the 4th edition of the summit.
+
+- [Program](https://cloudplatformonline.com/2019-NA-Container-Security-Summit-Agenda.html)
+- [slides](https://cloudplatformonline.com/2019-NA-Container-Security-Summit-Agenda.html)
+- [another summary](https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-four-takeaways-from-container-community-summit-2019)
+
+There were a number of talks and panels. Santhosh and Chris P. were there too, and they might have a different perspective.
+
+- There was some conversation about Root-less containers
+ - Running root-less containers is not there yet (it’s possible to do it, but it’s not a great experience).
+  - The challenge is getting the runc daemon to not run as root
+ - If you can escape the container it's game over
+ - But it seems to be a goal for this year
+  - Once you start mucking around with /proc you’re going to cry
+  - Root-less builds for containers, however, are here, and that's a good thing.
+  - We talked a little bit about reproducible builds.
+  - Debian and some other distros / groups are putting a lot of effort here
+- Someone shared some recommendations for setting up a k8s cluster (a sketch of the first one is at the end of these notes)
+  - Don’t let Pods access the node’s IAM role via the metadata endpoint
+ - This can be done via `networkPolicy`
+ - Disable auto-mount for SA tokens
+ - Prevent creation of privileged pods
+ - Prevent kubelets from accessing secrets for pods on other nodes
+- `ebpf` is the buzzword of the year
+ - Stop using `iptables` and only use `ebpf`
+- GKE on prem is clearly not for us (we knew it)
+ - We talked with a Google engineer working on the product
+  - You need to run vSphere, which increases the cost
+ - This is likely a temporary solution
+ - We would still have to deal with hardware
+- During one session we talked about isolating workloads
+  - We will want separate clusters for the various environments (dev / staging / prod)
+    - This will make our lives easier when upgrading them
+  - Someone from Amazon (Bob Wise, previously head of SIG Scalability) recommended a namespace per service
+ - They act as quota boundaries
+- Google is working on tooling to manage namespaces across clusters
+ - Unclear about timeline
+- Google is also working on tooling to manage clusters
+ - But unclear (to me) if it's for GKE, on prem, or both
+- Talked about the CIS benchmarks for Docker and Kubernetes
+  - The interesting part here (IMO) was the template they use to make recommendations. This is something we should look at for our RFC process when it comes to operational work.
+ - I’ll try to find that somewhere (hopefully we will get the slides)
+- Auditing is a challenge because there are very few recommendations for hosted Kubernetes
+ - There’s a benchmark for Docker and k8s
+ - A robust CD pipeline is required
+ - That’s where organizations should invest
+    - Stop patching; just rebuild and deploy
+ - You want to get it done fast
+- The average lifetime of a container is less than two weeks
+- Conversations about managing security issues
+  - They shared the postmortem for the first high-profile CVE for Kubernetes
+  - Someone from Red Hat talked about the one for runc
+  - There's a desire to standardize the way these types of issues are handled
+  - The Red Hat person thinks the way they managed the runc one was not great (it leaked too early)
+ - There's a list for vendors to communicate and share these issues
+- Talked about runc issue
+ - Containers are hard
+  - They mean different things to different people
+  - We make a lot of assumptions and this breaks a lot of stuff
+- Kubernetes secrets are not great (but no details why)
+  - Concerning: no one was running Kubernetes on prem, just someone with a PoC, and his comment was “it sucks”
+- Some projects mentioned
+  - in-toto
+  - BuildKit
+  - umoci
+ - Sysdig
+- Some talk about service meshes (mostly Istio)
+  - Getting mTLS right is hard; use a service mesh to get it right
+- The VM's API endpoint can be accessed from containers
+  - Google is looking at ways to make this go away
+  - It's too much of a risk (someone showed how to exploit this on AWS)
+- There was a panel with a few auditing companies; I did not take anything away from it
+ - Container security is hard and very few people understand it
+  - I don’t remember what the context was, but someone mentioned this bug as an example of why containers / isolation are hard
+- There are apparently some conversations about introducing a new Tenant object
+  - I have not been able to find this in tickets / mailing lists so far; I might need to reach out to Google for this
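+
+As a follow-up on the cluster-hardening recommendations above, here is a minimal sketch of the first one: an egress `networkPolicy` that allows all outbound traffic except the instance metadata address (169.254.169.254), built with the upstream Kubernetes Go types and printed as YAML. The policy name and namespace are mine, not something shown at the summit.
+
+```go
+// Sketch: deny pod egress to the cloud metadata endpoint so pods cannot
+// read the node's IAM credentials. Name and namespace are hypothetical.
+package main
+
+import (
+	"fmt"
+
+	networkingv1 "k8s.io/api/networking/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"sigs.k8s.io/yaml"
+)
+
+func main() {
+	policy := networkingv1.NetworkPolicy{
+		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
+		ObjectMeta: metav1.ObjectMeta{Name: "deny-metadata", Namespace: "default"},
+		Spec: networkingv1.NetworkPolicySpec{
+			// Empty selector: the policy applies to every pod in the namespace.
+			PodSelector: metav1.LabelSelector{},
+			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeEgress},
+			Egress: []networkingv1.NetworkPolicyEgressRule{{
+				To: []networkingv1.NetworkPolicyPeer{{
+					IPBlock: &networkingv1.IPBlock{
+						// Allow everything except the metadata endpoint.
+						CIDR:   "0.0.0.0/0",
+						Except: []string{"169.254.169.254/32"},
+					},
+				}},
+			}},
+		},
+	}
+
+	out, _ := yaml.Marshal(policy)
+	fmt.Print(string(out))
+}
+```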
diff --git a/content/container-security-summit-2020.md b/content/container-security-summit-2020.md
new file mode 100644
index 0000000..8bd6bd5
--- /dev/null
+++ b/content/container-security-summit-2020.md
@@ -0,0 +1,56 @@
++++
+title = "Container Security Summit 2020"
+date = 2020-02-12
+[taxonomies]
+tags = ["conference", "containers"]
++++
+
+This is the second time I've been to this event, organized by Google at their Seattle office (the one in Fremont).
+
+As with last year, the content was pretty uneven. The first talk, by Kelsey, was interesting. One of the main concerns we have is around the supply chain: where are our dependencies coming from? We pull random libraries from all over the place, and no one reads the code or tries to see if there are vulnerabilities. The same is true, by the way, for the firmware, BIOS, etc. that we have in the hardware.
+
+The second talk went completely over my head; it was really not interesting. I'm going to guess that Sky (the company that was presenting) is a big Google Cloud customer and was asked to do that presentation.
+
+We had a few more short talks, but nothing really great. One of the presentations was by an Australian bank (Up), showing how they get a Slack notification when someone logs into a container. I hate this trend of sending everything to Slack.
+
+After lunch there were a few more talks; again, nothing really interesting. There's a bunch of people in this community who get a lot of hype but are not great presenters, or don't really have anything interesting to present.
+
+The "un-conference" part was more interesting. There was two sessions that interested me: supply chain and PSPs. I went to the PSP one, and again, a couple of people suck all the air in the room and it's a dialogue, not a group conversation. The goal was to talk about PSP vs. OPA, but really we talked more about the challenges of PSPs and of moving out of them. The current consensus is to says that we need 3 PSPs: default, restrictive, permissive. Then all implementations (PSPs, OPA, etc) should support them, and they should offer more or less the same security level. Another thing considered is to let the CD pipeline take care of that. EKS / GKE have a challenge with a possible migration: how to move their customers, and to what.
+
+Overall, I think we are doing the right things in terms of security: we have PSPs, we have some controllers to enforce policies, etc. We are also looking at automatically upgrading containers using workflows (having a robust CI/CD pipeline is key here).
+
+# Some notes to follow up on / read
+
+- twitcher / host network / follow up on that
+- <https://github.com/cruise-automation/k-rail>
+  - better error messages for failures
+  - it's not a replacement for PSPs?
+- <https://cloud.google.com/binary-authorization>
+- [falco](https://github.com/falcosecurity/falco)
+
+conversation about isolation:
+- <https://katacontainers.io/>
+  - could Kata be a use case for colocation of storage?
+- <https://github.com/google/gvisor>
+
+talk about BeyondProd (Brandon Baker)
+- <https://cloud.google.com/security/beyondprod/>
+- Binary Authorization for Borg
+- security infra design white paper
+- questions:
+  - latency for requests? Kerberos is not optimized, ALTS is
+ - <https://cloud.google.com/security/encryption-in-transit/application-layer-transport-security>
+
+panels:
+- small adoption of OPA
+
+kubernetes audit logging:
+- <https://kubernetes.io/docs/tasks/debug-application-cluster/audit/>
+- <https://github.com/google/docker-explorer>
+- <https://github.com/google/turbinia>
+- <https://github.com/google/timesketch>
+- plaso (?)
+- <https://github.com/google/grr>