Kubernetes is a software platform for automating the deployment, scaling, and management of containerized applications. It has become the de facto standard for container orchestration, with over 300,000 Kubernetes deployments in production. This scale of automated deployment, once hard to achieve with raw infrastructure-as-a-service (IaaS) alone, has made Kubernetes one of the most exciting technologies in the industry right now.

But one of the biggest challenges Kubernetes users face today is how to govern their Kubernetes deployments to keep them secure. Authentication and authorization (for example, LDAP-backed authentication behind a proxy) control who can talk to the API server, but they say nothing about what the workloads themselves are allowed to do. A misconfigured or compromised pod running as root, or with access to the host's network, can undermine the entire cluster.

If you're looking for a more mature way to secure your Kubernetes cluster, you'll need a policy mechanism that constrains what pods are allowed to do. Several tools exist to accomplish this; the one that was built into Kubernetes itself is called Pod Security Policies (PSPs). (Note that PSPs were deprecated in Kubernetes 1.21 and removed in 1.25 in favor of Pod Security admission, but the concepts carry over directly.)

In this post, we'll look at the history and current state of Kubernetes PSPs and a demo of a simple deployment.

What Are Kubernetes PSPs?

In a nutshell, PSPs are cluster-level resources that govern the security-sensitive aspects of a pod's specification. As you can imagine, this can be extremely useful. For example, a Kubernetes admin can forbid privileged containers, block access to the host network and host PID/IPC namespaces, restrict which volume types pods may mount, require containers to run as a non-root user, and control which Linux capabilities may be added or must be dropped.

As their name implies, Kubernetes PSPs operate at the pod level: they are enforced by the PodSecurityPolicy admission controller, which intercepts every request to create a pod and validates it against the policies the requester is authorized to use. PSPs are managed with the standard kubectl CLI like any other Kubernetes resource.

How Do Kubernetes PSPs Work?

Let's start by taking a look at how Kubernetes PSPs work.

When the PodSecurityPolicy admission controller is enabled, the PSPs you define are stored as cluster-scoped objects in the Kubernetes API (backed by etcd), alongside the rest of the cluster's state.

In this article, we will look at how Kubernetes enforces PSPs automatically. Three things must be in place: the PodSecurityPolicy admission plugin must be enabled on the API server, at least one policy must exist, and the user or service account creating a pod must be authorized (via RBAC, using the "use" verb) to use a policy that allows the pod. If any of the three is missing, pod creation is rejected.
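To illustrate the RBAC piece, here is a sketch of a ClusterRole and ClusterRoleBinding that authorize every service account in one namespace to use a policy. The policy name "restricted", the namespace "demo", and the object names are placeholders for this example:

```shell
# Write an RBAC manifest granting the "use" verb on a PSP named "restricted".
# The policy name "restricted" and the namespace "demo" are placeholders.
cat > psp-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted-demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:demo
EOF

# Sanity-check that the manifest grants the "use" verb.
grep -q '"use"' psp-rbac.yaml && echo "manifest written"
```

You would apply it with kubectl apply -f psp-rbac.yaml; pods created by any service account in the demo namespace would then be validated against the restricted policy.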

PSPs are not enforced until the admission controller is turned on. This is controlled by the kube-apiserver (not the kubelet): you add PodSecurityPolicy to the API server's list of enabled admission plugins.

For example, on a cluster where you control the API server's flags, you could start it with:

kube-apiserver --enable-admission-plugins=NodeRestriction,PodSecurityPolicy <other flags>

A word of caution: enable the admission controller only after you have defined policies and authorized their use. With the plugin on and no usable policies, no pods can be created at all.

From there, every request to create a pod is checked against the PSPs the requesting user or service account is authorized to use. If an authorized policy allows the pod (possibly after the policy's default values are applied), the pod is admitted; otherwise the API server rejects the request outright.
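For instance, a pod that requests privileged mode would be rejected under any policy that sets privileged: false. A minimal sketch (the pod name and image are arbitrary choices for illustration):

```shell
# A pod spec requesting privileged mode -- a restrictive PSP would reject it
# at admission time.
cat > privileged-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo
spec:
  containers:
  - name: main
    image: nginx:1.25
    securityContext:
      privileged: true   # the setting a PSP with privileged: false forbids
EOF

# Creating it would fail at admission, e.g.:
#   kubectl apply -f privileged-pod.yaml
#   Error ... unable to validate against any pod security policy ...
grep -q 'privileged: true' privileged-pod.yaml && echo "spec written"
```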


Once the admission controller is enabled, the policies themselves are just YAML. Note that a PSP is cluster-scoped, not namespaced: you define it once, then use RBAC to decide who may use it. For example, here's how you could define a restrictive Pod Security Policy:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
  readOnlyRootFilesystem: false

Each field constrains one aspect of the pod spec. The boolean fields (e.g., privileged, hostNetwork) simply allow or forbid a setting, while the rule-based fields (e.g., runAsUser, fsGroup) either accept anything (RunAsAny) or restrict values to specific users or ranges. For the full list of fields, see the official Pod Security Policy documentation.

One important thing to note: updating a PSP affects only pods admitted after the change. Pods that are already running were validated at creation time and are not re-checked, so tightening a policy will not evict existing workloads. You can see which policy admitted a given pod by inspecting its kubernetes.io/psp annotation:

kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'

How can you run a Kubernetes orchestration platform on AWS? The managed option is Amazon EKS (Elastic Kubernetes Service). Don't confuse it with Amazon ECS, which is AWS's own, non-Kubernetes container orchestrator. You can learn more in our guide to creating a Kubernetes cluster on AWS.

What happens if part of your cluster fails, or a misconfiguration knocks your control plane offline? With Amazon EKS, the control plane is managed for you: it runs across multiple Availability Zones, and unhealthy control-plane instances are detected and replaced automatically. The worker nodes remain yours to manage, either directly on EC2 or through EKS managed node groups.

Running Kubernetes on AWS still leaves the exposure decisions to you, and two deserve special attention. First, by default the EKS API endpoint is reachable from the public internet; for most clusters you should restrict it to known CIDR ranges or make it private entirely. Second, worker nodes should live in private subnets, with security groups limiting traffic to only what the cluster actually needs. As a general rule, treat every network path into the cluster as something that must be explicitly justified.
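As a sketch, here is what such a cluster could look like with eksctl, a popular CLI for creating EKS clusters. The cluster name, region, instance type, and CIDR below are all placeholders, not recommendations:

```shell
# An eksctl cluster config with a restricted public API endpoint and
# private worker nodes. All names, the region, and the CIDR are placeholders.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
  publicAccessCIDRs:
  - 203.0.113.0/24          # your admin network, not 0.0.0.0/0
managedNodeGroups:
- name: workers
  instanceType: m5.large
  desiredCapacity: 3
  privateNetworking: true   # nodes get no public IPs
EOF

# You would create the cluster with:  eksctl create cluster -f cluster.yaml
grep -q 'privateNetworking: true' cluster.yaml && echo "config written"
```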


By implementing AWS security groups, you can add network-level protection to your Kubernetes cluster without introducing another layer of software. Security groups act as stateful firewalls around the control plane and worker nodes, preventing the cluster from being fully exposed to the public internet and making it far harder to reach from the outside.
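As a hedged sketch of what that might look like with the AWS CLI (the group name, VPC ID, and CIDR below are placeholders you would replace with your own values):

```shell
# Build a small script that locks API-server access (TCP 443) down to a
# single CIDR range. VPC_ID and ADMIN_CIDR are placeholders.
cat > sg-setup.sh <<'EOF'
#!/bin/sh
set -e
VPC_ID="vpc-0abc123def456"      # placeholder: your cluster's VPC
ADMIN_CIDR="203.0.113.0/24"     # placeholder: your admin/VPN range

# Create a security group for cluster API access.
SG_ID=$(aws ec2 create-security-group \
  --group-name k8s-api-access \
  --description "Restrict Kubernetes API access" \
  --vpc-id "$VPC_ID" \
  --query GroupId --output text)

# Allow inbound HTTPS (the Kubernetes API port) only from the admin range.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 443 \
  --cidr "$ADMIN_CIDR"
EOF
chmod +x sg-setup.sh
echo "script written"
```

Attach the resulting group to your cluster's API endpoint or node network interfaces; anything not explicitly allowed inbound is dropped.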


Get Started With Kubernetes on AWS Today

Your company can start benefiting from deploying a Kubernetes cluster today, and Amazon EKS makes Kubernetes deployment and scaling on AWS straightforward and cost-effective.

To gain the skills you will need to configure solutions with Kubernetes, consider the Caltech Post Graduate Program in DevOps. You may also look into the Caltech Cloud Computing Bootcamp (in the Americas) or Caltech Post Graduate Program in Cloud Computing (in other locations) to dig deeper into AWS and its suite of tools, including EKS.

About the Author

Matthew David

Matt is a Digital Leader at Accenture. He is passionate about solving today's problems so organizations can run more efficiently, using digital tools to improve tomorrow, and moving organizations toward new ways of working that will shape the future.
