What?

Openstack Magnum is the Container Infrastructure service, which gives you the possibility to create container clusters like docker swarm and kubernetes as native resources in Openstack. For more information about the service and the official documentation, read the user guide and the Magnum Wiki page.

How?

Table of Contents

How to use Openstack Magnum

To use Magnum you need to have the openstack magnum clients installed. You have two alternatives:

  • If you want to use the openstack CLI on your local system, install python3-openstackclient and python3-magnumclient from the repositories suiting your Operating System. Make sure you install the version corresponding to the Openstack version we are running in production (listed on this Wiki's frontpage).
  • The clients are pre-installed on NTNUs logon servers. We've made sure we have the correct version (smile)

When the client is ready to be used, you can start creating kubernetes clusters.
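
To verify that the client works, you can for example list the public cluster templates (remember to source your openstack credentials first):

Code Block
$ openstack coe cluster template list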

Cluster Templates

To get you started, there are a few public Cluster Templates available in our cloud, each of which installs a specific version of kubernetes or docker swarm. These templates are verified working by us.

Name                       OS                     Master flavor   Node flavor
kubernetes-template        Fedora AtomicHost 29   m1.small        m1.small
kubernetes-template-ha     Fedora AtomicHost 29   m1.small        m1.small
docker-swarm-template      Fedora AtomicHost 29   m1.small        m1.small
docker-swarm-template-ha   Fedora AtomicHost 29   m1.small        m1.small
kubernetes-vX.X.X          Fedora CoreOS 35       gxN.2c4r        gxN.2c4r
kubernetes-vX.X.X-ha       Fedora CoreOS 35       gxN.4c8r        gxN.2c4r


For more information, all templates can be listed and inspected with the following commands:

Code Block
$ openstack coe cluster template list
# And then, to view the details of a template:
$ openstack coe cluster template show <id|name>

If the provided templates do not suit your needs, you can create your own private templates. Please consult the documentation to find which parameters are needed for the different Container Orchestration Engines.


We do, however, recommend starting a cluster from one of the public templates first, so that you know how it looks.
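
As a hedged sketch, creating a private kubernetes template could look like this. The image, network and flavor names below are placeholders, and the required parameters differ per COE, so consult the documentation first:

Code Block
$ openstack coe cluster template create my-k8s-template \
    --coe kubernetes \
    --image <a Fedora CoreOS image> \
    --external-network <public network> \
    --master-flavor gxN.2c4r \
    --flavor gxN.2c4r \
    --docker-volume-size 20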

Create a cluster

The different container orchestration engines have different parameters. For an extensive and complete list of parameters, check the documentation.

Examples

The guide here shows you the minimum needed to create a cluster.

When creating a cluster from a template, you can override a few parameters:

Info

Do NOT select flavors with fewer resources than the defaults in our templates. The k8s masters need a certain amount of RAM to function.


Parameter               Comment
--docker-volume-size    Size of the cinder volume housing docker images and volumes. Defaults to 20GB for our public templates.
--master-flavor         Instance flavor of the VMs running the master nodes. Defaults to gxN.2c4r for public templates.
--flavor                Instance flavor of the VMs running worker nodes. Defaults to gxN.2c4r for public templates.
--labels                Override default labels for the given COE. Consult the documentation for valid labels for each COE.

Note that the labels set in the public templates are there for a reason (wink)

Also note that --labels does not merge by default, so if you want to add labels, add --merge-labels as well.
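
For example, a sketch of adding an extra label on top of the template's defaults. The label shown here is just an illustration; check the documentation for labels that apply to your COE:

Code Block
$ openstack coe cluster create <clustername> --cluster-template kubernetes-vX.X.X \
    --keypair <your keypair> --labels auto_scaling_enabled=true --merge-labels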

Docker Swarm

This will create a Docker Swarm cluster with one master node and one worker node:

Code Block
openstack coe cluster create <clustername> --cluster-template docker-swarm-mode --master-count 1 --node-count 1 --keypair <your keypair>

Kubernetes



Creating a small kubernetes cluster can typically be done like so. This will create a cluster with one master node and one minion:

Code Block
$ openstack coe cluster create <clustername> --cluster-template kubernetes-vX.X.X --master-count 1 --node-count 1 --keypair <your keypair>
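
Cluster creation takes several minutes. You can follow the progress like so:

Code Block
$ openstack coe cluster list
# And for more details on a single cluster:
$ openstack coe cluster show <clustername>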


Using the cluster

Docker Swarm

You can interact with your docker swarm with the docker CLI. This must of course be installed on your computer locally first, or you can use the NTNU login-servers.

Openstack magnum can help create the client configuration for you. In practice you create a directory where the openstack-client can write the config like so:

Note

Remember to source some openstack-credentials before using the "openstack" commands.


Code Block
$ mkdir -p ~/clusters/docker-swarm
$ $(openstack coe cluster config <your-cluster> --output-certs --dir ~/clusters/docker-swarm)

This will generate a client certificate with a corresponding key, download the CA certificate and export some environment variables needed by the docker client in order to communicate with your cluster.
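
The exported variables are the standard docker TLS settings. As an illustration only (the exact values will differ for your cluster), they look something like this:

Code Block
export DOCKER_HOST=tcp://<cluster-api-address>:2376
export DOCKER_CERT_PATH=/home/demo/clusters/docker-swarm
export DOCKER_TLS_VERIFY=True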

Example, to list the nodes of your docker swarm:

Code Block
$ docker node ls
ID                            HOSTNAME                                        STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
twmj3a98bo6y1swu9zpz4dmig     sverm-r5rpscxldtrc-node-0.novalocal             Ready               Active                                  1.13.1
us5q2whrjdb1rwiqzegc8xi12     sverm-r5rpscxldtrc-node-1.novalocal             Ready               Active                                  1.13.1
6liccpscem1xo6x2a7kbcvtuz *   sverm-r5rpscxldtrc-primary-master-0.novalocal   Ready               Active              Leader              1.13.1
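
With the docker CLI wired up, you can deploy services as on any swarm. A minimal sketch, where the service name and image are just examples:

Code Block
$ docker service create --name web --replicas 2 --publish 8080:80 nginx
$ docker service ls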

Kubernetes

You can interact with your kubernetes cluster with kubectl. Install it first if you don't already have it (see the Install-guide).

Or you can use the NTNU login-servers (login.stud.ntnu.no for students or login.ansatt.ntnu.no for employees).
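
To confirm that kubectl is available, you can check the client version:

Code Block
$ kubectl version --client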

Source your cluster config:

Code Block
$ mkdir -p ~/clusters/kubernetes-cluster
$ $(openstack coe cluster config <your-cluster> --dir ~/clusters/kubernetes-cluster)

That should just work, and you can run kubectl commands as you please.

The openstack command prints out a helpful hint on how to tell kubectl which config-file to use. This will help you select the correct cluster if you have config-files for multiple clusters.

Using the kubectl configuration from the openstack command

You need to point the environment-variable "KUBECONFIG" to the kubernetes configuration-file you want to use. To do this you simply run the output from the "openstack coe cluster config" command as a command:

Code Block
$ export KUBECONFIG=/home/demo/clusters/kubernetes-cluster/config
Example, to check that the nodes are up and running:

Code Block
$ kubectl get nodes
NAME                            STATUS   ROLES    AGE    VERSION
mycluster-o56ashbsrqqa-master-0   Ready    master   131m   v1.15.12
mycluster-o56ashbsrqqa-minion-0   Ready    <none>   131m   v1.15.12
mycluster-o56ashbsrqqa-minion-1   Ready    <none>   131m   v1.15.12

At this point your cluster is running, and you can use it as a regular kubernetes cluster.
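
As a quick illustration (the deployment name and image are just examples), you can deploy nginx and watch the pods come up:

Code Block
$ kubectl create deployment web --image=nginx
$ kubectl get pods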

Using your openstack credentials to auth with kubectl

Our public kubernetes template does not have the keystone auth module enabled by default, but if you choose to enable it via label overrides, it becomes possible to authenticate against your newly created cluster with the environment variables from your openrc file.


The defaults don't really allow you to do much, and you will have to set up RBAC policies yourself, to your liking.
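
As a hypothetical sketch, granting a keystone user edit rights in the default namespace could look like this (the user name is a placeholder):

Code Block
$ kubectl create rolebinding <user>-edit --clusterrole=edit --user=<keystone-user> --namespace=default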

Scaling cluster nodes

To change the number of nodes in your cluster, you can do the following:

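For example, with the resize command (also shown under Troubleshooting below), where <N> is the desired total number of worker nodes:

Code Block
$ openstack coe cluster resize <your-cluster> <N>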

If you want to decrease the number of nodes, what happens depends on your chosen COE. If you're running k8s, magnum will try to find a node with no running containers and delete it. If no empty nodes are found, magnum will warn you and delete a node at random. With docker swarm, magnum has no logic to discover an empty node, and will just delete nodes at random.

Troubleshooting

What has worked

  • Check the cluster for what is wrong, usually quota
  • Scale down with resize
  • Fix the quota
  • Scale up again
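
As commands, that flow looks roughly like this sketch (the N values are placeholders):

Code Block
$ openstack coe cluster show <your-cluster>
# Inspect the status and status_reason fields for the cause, then scale down:
$ openstack coe cluster resize <your-cluster> <smaller N>
# Fix the quota in question, then scale up again:
$ openstack coe cluster resize <your-cluster> <desired N>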

Debugging an unhealthy cluster

To check the status of a cluster:

Code Block
$ heat stack-list -n
# Get the id of the cluster
$ openstack stack failure list <id of the cluster>
# OR
$ openstack coe cluster list
# Get the id of the cluster, NB, it's shorter than the heat ID
$ openstack coe cluster show <cluster ID>

Debugging a part of the cluster

Use heat to find the IDs of the cluster and its resources:

Code Block
$ heat stack-list -n
<snip output>
$ heat resource-list <id from list above>
<snip output>

Run a check of the cluster

Code Block
$ openstack stack check <ID from heat stack-list -n>

Scaling down a cluster when the status is unhealthy

When scaling the cluster up beyond the quota limit, the openstack coe cluster update command doesn't work, but resize does:

Code Block
$ openstack coe cluster resize <your-cluster> <N>

Upgrading

TBA