Warning

UNDER CONSTRUCTION - SERVICE IS NOT AVAILABLE YET

What?

OpenStack Magnum is the Container Infrastructure service, which gives you the possibility to create container clusters, like Docker Swarm and Kubernetes clusters, as native resources in OpenStack. For more information about the service and its official documentation, read the user guide and the Magnum Wiki page.


How to use Openstack Magnum

To use Magnum you need to have the OpenStack and Magnum clients installed. You have two alternatives:

  • If you want to use the openstack CLI on your local system, install python3-openstackclient and python3-magnumclient from the repositories suiting your operating system. Make sure you install the version corresponding to the OpenStack version we are running in production (listed on this Wiki's front page).
  • The clients are pre-installed on NTNU's logon servers. We've made sure they are the correct version.
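As a sketch, on a Debian or Ubuntu system the clients can typically be installed like this (package availability and names may differ on other distributions, and matching the production OpenStack release is up to you):

```shell
# Distribution packages (assumption: available in your release)
sudo apt install python3-openstackclient python3-magnumclient

# Alternatively, install from PyPI into a virtualenv
python3 -m venv ~/openstack-cli
~/openstack-cli/bin/pip install python-openstackclient python-magnumclient
```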

When the client is ready to be used, you can start creating kubernetes clusters.

Cluster Templates

To get you started, there are a few public cluster templates available in our cloud, each installing a specific version of Kubernetes. These templates are verified working by us.

Name                      OS                    Master flavor  Node flavor
kubernetes-vX.X.X         Fedora CoreOS 35      gxN.2c4r       gxN.2c4r
kubernetes-vX.X.X-ha      Fedora CoreOS 35      gxN.4c8r       gxN.2c4r
docker-swarm-template     Fedora AtomicHost 29  m1.small       m1.small
docker-swarm-template-ha  Fedora AtomicHost 29  m1.small       m1.small


For more information, all templates can be listed and inspected with the following commands:

Code Block
# List all available templates
$ openstack coe cluster template list
# Show the details of a specific template
$ openstack coe cluster template show <id|name>

If the provided templates do not suit your needs, you can create your own private templates. Please consult the documentation to find which parameters are needed for the different Container Orchestration Engines.


We do, however, recommend starting a cluster from one of the public templates first, so that you know how a working cluster looks.

Create a cluster

The different container orchestration engines have different parameters. For an extensive and complete list of parameters, check the documentation. The guide here shows you the minimum needed to create a cluster.

For each template, you can override a few parameters when you create your cluster:

Info

Do NOT select flavors with fewer resources than the defaults in our templates. The k8s masters need a certain amount of RAM to function.


Parameter             Comment
--docker-volume-size  Size of the Cinder volume housing Docker images and volumes. Defaults to 20 GB for our public templates.
--master-flavor       Instance flavor of the VMs running the master nodes. Defaults to gxN.2c4r for public templates.
--flavor              Instance flavor of the VMs running the worker nodes. Defaults to gxN.2c4r for public templates.
--labels              Overrides the default labels for the given COE. Consult the documentation for valid labels for each COE.

Note that the labels set in the public templates are there for a reason. Also note that --labels does not merge by default, so if you want to add labels, add --merge-labels as well.
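Putting the overrides together, a cluster create with a bigger Docker volume and an extra label could look like the sketch below. The label shown, auto_healing_enabled, is only an illustration; consult the documentation for labels valid for your COE.

```shell
# Create a cluster, overriding volume size and adding one label
# on top of the template's defaults (hence --merge-labels)
openstack coe cluster create <clustername> \
  --cluster-template <template name> \
  --keypair <your keypair> \
  --master-count 1 --node-count 2 \
  --docker-volume-size 50 \
  --merge-labels --labels auto_healing_enabled=true
```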

Docker Swarm

This will create a Docker Swarm cluster with one master node and one worker node:

Code Block
openstack coe cluster create <clustername> --cluster-template docker-swarm-template --master-count 1 --node-count 1 --keypair <your keypair>

Kubernetes



Creating a small kubernetes cluster can typically be done with a command like the one below, which creates a cluster with one master node and one worker node:

Code Block
$ openstack coe cluster create <clustername> --cluster-template kubernetes-v1.xx.xx --master-count 1 --node-count 1 --keypair <your keypair>

Use a cluster

Docker Swarm

TBA

Kubernetes

Using the kubernetes cluster

You can interact with your kubernetes cluster with kubectl. When using kubectl you have at least two options:

  • Install it on your local PC, if you don't already have it. Use the install guide.

Preparing your kubectl configuration

OpenStack Magnum can help create configuration files for kubectl. In practice, you create a directory where the openstack client can write the kubectl config, like so:

Note

Remember to source your OpenStack credentials before using the "openstack" commands.



Code Block
$ mkdir -p ~/clusters/kubernetes-cluster
$ openstack coe cluster config <your-cluster> --dir ~/clusters/kubernetes-cluster
export KUBECONFIG=/home/demo/clusters/kubernetes-cluster/config

The openstack command prints out a helpful hint on how to tell kubectl which config-file to use. This will help you select the correct cluster if you have config-files for multiple clusters.
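If you keep config files for several clusters, selecting one is just a matter of re-pointing the variable. A minimal sketch, assuming the hypothetical directory layout used above:

```shell
# Point kubectl at one particular cluster's config file
export KUBECONFIG=$HOME/clusters/kubernetes-cluster/config
echo "$KUBECONFIG"
```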

Using the kubectl configuration from the openstack command

You need to point the environment-variable "KUBECONFIG" to the kubernetes configuration-file you need to use. To do this you basically run the output from the "openstack coe cluster config" command as a command:

Code Block
$ export KUBECONFIG=/home/demo/clusters/kubernetes-cluster/config

That should just work, and you can run kubectl commands as you please:

Code Block
$ kubectl get nodes
NAME                              STATUS   ROLES    AGE    VERSION
mycluster-o56ashbsrqqa-master-0   Ready    master   131m   v1.15.12
mycluster-o56ashbsrqqa-minion-0   Ready    <none>   131m   v1.15.12
mycluster-o56ashbsrqqa-minion-1   Ready    <none>   131m   v1.15.12

At this point your cluster is running, and you can use it as a regular kubernetes cluster.
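As a quick smoke test (hypothetical names; assumes your KUBECONFIG points at the new cluster), you can deploy something small and remove it again:

```shell
# Deploy a test nginx and expose it inside the cluster
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80
kubectl get pods

# Clean up afterwards
kubectl delete service hello
kubectl delete deployment hello
```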

Using your openstack credentials to auth with kubectl

Our public kubernetes templates do not have the keystone auth module enabled by default. But if you choose to enable it via label overrides, it becomes possible to interact with your newly created cluster via the environment variables from your openrc file, by configuring kubectl to use openstack auth.

The defaults don't really allow you to do much, and you will have to set up RBAC policies yourself, to your liking.
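As an illustration of such a policy (the user name is hypothetical, and cluster-admin is very permissive, so adjust to taste), a role binding for a keystone user could be created like this:

```shell
# Grant a (hypothetical) keystone user full access to the cluster
kubectl create clusterrolebinding demo-user-admin \
  --clusterrole=cluster-admin \
  --user=demo-user
```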

Scaling cluster nodes

To change the number of nodes in your cluster, you can do the following:

Code Block
$ openstack coe cluster update <your-cluster> replace node_count=<N>

Increasing the node_count will (obviously) add worker nodes to your cluster. This is nice if you are running out of resources.

If you want to decrease the number of nodes, what happens depends on your chosen COE. If you're running k8s, Magnum will try to find a node with no running containers and delete it. If no empty nodes are found, Magnum will warn you and delete a node at random.
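If you want control over which node is removed when scaling down, newer Magnum clients let you name it explicitly with resize. A sketch (the node name is hypothetical; check that your client version supports the flag):

```shell
# Scale down to 2 worker nodes, removing a specific node
openstack coe cluster resize <your-cluster> 2 \
  --nodes-to-remove mycluster-o56ashbsrqqa-minion-1
```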

Troubleshooting

What has been working

  • Check the cluster for what is wrong (it is usually quota)
  • Scale down with resize
  • Fix the quota
  • Scale up

Debugging an unhealthy cluster

To check the status of a cluster:

Code Block
$ heat stack-list -n
# Get the ID of the cluster's stack
$ openstack stack failures list <stack ID>
# OR
$ openstack coe cluster list
# Get the ID of the cluster; NB: it is shorter than the heat stack ID
$ openstack coe cluster show <cluster ID>
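When scripting, individual fields can be pulled out directly with the openstack client's -c/-f output options (a sketch; field names as exposed by Magnum):

```shell
# Print only the status and health_status fields for the cluster
openstack coe cluster show <cluster ID> -c status -c health_status -f value
```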

Debugging a part of the cluster

Use heat to find the IDs of the cluster's resources:

Code Block
$ heat stack-list -n
<snip output>
$ heat resource-list <id from list above>
<snip output>

Run a check of the cluster

Code Block
$ openstack stack check <ID from heat stack-list -n>

Scaling down a cluster when its status is unhealthy

When scaling the cluster up beyond the quota limit, the openstack coe cluster update command doesn't work, but resize does:

Code Block
$ openstack coe cluster resize <your-cluster> <N>


Upgrading

TBA