OpenStack Magnum is the Container Infrastructure Management service, which lets you create Kubernetes clusters as native resources in OpenStack. For more information about the service and official documentation, read the user guide and the Magnum wiki page.
How to use Openstack Magnum
...
```shell
$ openstack coe cluster create <clustername> \
    --cluster-template kubernetes-v1.xx.xx \
    --master-count 1 \
    --node-count 1 \
    --keypair <your keypair>
```
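The template name above is a placeholder; you can list the templates actually available in your project, and then watch the cluster status until it reaches CREATE_COMPLETE:

```shell
# List the cluster templates available in your project
$ openstack coe cluster template list

# Watch the status of the new cluster until creation finishes
$ openstack coe cluster list
$ openstack coe cluster show <clustername>
```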
Using the kubernetes cluster
You can interact with your kubernetes cluster with kubectl.
When using kubectl you have at least two options:
- Install it on your local PC if you don't already have it; use the Install-guide.
- Use the NTNU login-servers (login.stud.ntnu.no for students or login.ansatt.ntnu.no for employees), as they already have kubectl installed.
Preparing your kubectl configuration
OpenStack Magnum can generate kubectl configuration files for you. In practice, you create a directory where the openstack client can write the kubectl config, like so:
...
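A minimal sketch of this step, assuming a cluster named `mycluster` and a config directory under your home directory (both names are placeholders):

```shell
# Create a directory to hold the kubectl config for this cluster
$ mkdir -p ~/clusters/mycluster

# Let Magnum write the kubectl config file into that directory
$ openstack coe cluster config mycluster --dir ~/clusters/mycluster
```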
The openstack command prints a helpful hint on how to tell kubectl which config file to use. This helps you select the correct cluster if you have config files for multiple clusters.
Using the kubectl configuration from the openstack command
You need to point the environment variable KUBECONFIG at the Kubernetes configuration file you want to use. To do this, run the output from the "openstack coe cluster config" command as a command:
...
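For example, assuming the config was written to ~/clusters/mycluster (a placeholder path), you can either run the export line the command printed, or evaluate the command's output directly:

```shell
# Run the export line that "openstack coe cluster config" printed, e.g.:
$ export KUBECONFIG=~/clusters/mycluster/config

# Or evaluate the command output directly
$ eval $(openstack coe cluster config mycluster --dir ~/clusters/mycluster)
```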
At this point your cluster is running, and you can use it as a regular kubernetes cluster.
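A quick smoke test that kubectl is talking to the new cluster:

```shell
# List the nodes Magnum created
$ kubectl get nodes

# List everything running in the cluster
$ kubectl get pods --all-namespaces
```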
Using your OpenStack credentials to authenticate with kubectl
Our public Kubernetes template does not have the Keystone auth module enabled by default. But if you enable it via label overrides, you can interact with your newly created cluster using the environment variables from your openrc file. If you want that, configure kubectl to use OpenStack auth as follows:
...
The defaults don't allow you to do much, and you will have to set up RBAC policies yourself, to your liking.
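As a sketch, assuming Keystone auth is enabled and your Keystone user is named `myuser` (a placeholder), you could grant that user read-only access with a binding to the built-in "view" ClusterRole:

```shell
# Bind the built-in "view" ClusterRole to your Keystone user
$ kubectl create clusterrolebinding myuser-view \
    --clusterrole=view \
    --user=myuser
```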
Scaling cluster nodes
To change the number of nodes in your cluster, you can do the following:
...
If you decrease the number of nodes, what happens depends on your chosen COE. With Kubernetes, Magnum will try to find a node with no running containers and delete it. If no empty node is found, Magnum will warn you and delete a node at random.
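You can steer the scale-down yourself: `openstack coe cluster resize` accepts a `--nodes-to-remove` option, so you can pick the node to delete instead of letting Magnum choose. The cluster name and server ID below are placeholders:

```shell
# Shrink the cluster to 2 nodes, removing a specific node
$ openstack coe cluster resize mycluster 2 --nodes-to-remove <server ID>
```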
Troubleshooting
What has worked
- Check cluster for what is wrong, usually quota
- Scale down with resize
- Fix quota
- Scale up
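The steps above can be sketched as a command sequence. The cluster name and node counts are placeholders; fixing the quota itself is done in the dashboard or by your cloud admins:

```shell
# 1. Check the cluster for what is wrong -- quota failures usually show up here
$ openstack coe cluster show mycluster

# 2. Scale down with resize to get back under the quota
$ openstack coe cluster resize mycluster 1

# 3. Fix the quota (e.g. request an increase from your cloud admins)

# 4. Scale up again
$ openstack coe cluster resize mycluster 3
```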
Debugging an unhealthy cluster
To check the status of a cluster:
```shell
# Get the ID of the cluster
$ heat stack-list -n
$ openstack stack failures list <id of the cluster>

# OR

# Get the ID of the cluster; NB: it is shorter than the heat ID
$ openstack coe cluster list
$ openstack coe cluster show <cluster ID>
```
Debugging a part of the cluster
Use heat to find the IDs of the cluster's resources
```shell
$ heat stack-list -n
<snip output>
$ heat resource-list <id from list above>
<snip output>
```
Run a check of the cluster
```shell
$ openstack stack check <ID from heat stack-list -n>
```
Scaling down cluster when status is unhealthy
When the cluster has been scaled beyond the quota limit, the "openstack coe cluster update" command doesn't work, but "openstack coe cluster resize" does:
...