
UNDER CONSTRUCTION - SERVICE IS NOT AVAILABLE YET

What?

OpenStack Magnum is the Container Infrastructure service, which lets you create container clusters such as Docker Swarm and Kubernetes as native resources in OpenStack. For more information about the service, and the official documentation, read the user guide and the Magnum Wiki page.

How?

First...

If you want to use the openstack CLI on your local system, use the latest version of python-magnumclient from pip3. Some popular distros have an old version in their repositories that is known to not work properly. Be sure that your version has this commit included. On NTNU's login servers you need not worry; we've made sure we have the correct version (smile)
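If you need to install the client yourself, a minimal sketch with pip3 (the --user flag, which keeps the install in your home directory, is just one way to do it):

$ pip3 install --user --upgrade python-magnumclient
$ pip3 show python-magnumclient  # verify which version you got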

Cluster Templates

To get you started, there are a few public Cluster Templates available in our cloud. We have verified that these work.

Name                      OS                    Master flavor  Node flavor
kubernetes-template       Fedora AtomicHost 29  m1.small       m1.small
kubernetes-template-ha    Fedora AtomicHost 29  m1.small       m1.small
docker-swarm-template     Fedora AtomicHost 29  m1.small       m1.small
docker-swarm-template-ha  Fedora AtomicHost 29  m1.small       m1.small

For more information, all templates can be listed with

$ openstack coe cluster template list
# And then, to view the details of one:
$ openstack coe cluster template show <id|name>

Private templates can be created by users. Please consult the documentation to find which parameters are needed for the different Container Orchestration Engines.
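As an illustration, a private Kubernetes template could look like the sketch below. The image, network, and flavor names are assumptions and must match resources that actually exist in your project:

$ openstack coe cluster template create my-private-template \
    --coe kubernetes \
    --image <a-fedora-atomic-image> \
    --external-network <your-external-network> \
    --keypair <your keypair> \
    --flavor m1.small \
    --master-flavor m1.small \
    --docker-volume-size 20 \
    --network-driver flannel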

We know that Fedora AtomicHost 29 is deprecated and EOL. Support for Fedora CoreOS was added in OpenStack Train, but we are currently running OpenStack Stein (one version older). We are planning updates.


Create a cluster

The different container orchestration engines have different parameters. For an extensive and complete list, check the documentation.

Examples

For each template, you can override a few parameters when you create your cluster:

Parameter             Comment
--docker-volume-size  Size of the Cinder volume housing Docker images and
                      volumes. Defaults to 20 GB for our public templates.
--master-flavor       Instance flavor of the VMs running the master node.
                      Defaults to m1.small for the public templates.
--flavor              Instance flavor of the VMs running the worker nodes.
                      Defaults to m1.small for the public templates.
--labels              Override the default labels for the given COE. Consult
                      the documentation for valid labels for each COE. Note
                      that the labels set in the public templates are there
                      for a reason (wink) Also note that --labels does not
                      merge: if you want to add labels, include the labels
                      set in the template as well (see the example below).
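Since --labels replaces rather than merges, a safe pattern is to read the template's labels first and repeat them. A minimal sketch, assuming the public kubernetes-template; the extra label shown (auto_healing_enabled=true) is only a hypothetical example, not a recommendation:

$ # Inspect the labels set in the template:
$ openstack coe cluster template show kubernetes-template -c labels
$ # Repeat those labels and append your own, comma-separated:
$ openstack coe cluster create <clustername> \
    --cluster-template kubernetes-template \
    --keypair <your keypair> \
    --labels <labels-from-template>,auto_healing_enabled=true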



Docker Swarm

This will create a Docker Swarm cluster with one master node and one worker node:

$ openstack coe cluster create <clustername> --cluster-template docker-swarm-template --master-count 1 --node-count 1 --keypair <your keypair>
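Creation takes a few minutes. You can follow the progress with:

$ openstack coe cluster list
$ openstack coe cluster show <clustername> -c status -c status_reason

The cluster is ready when the status reaches CREATE_COMPLETE.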

Kubernetes

This will create a Kubernetes cluster with one master node and one minion:

$ openstack coe cluster create <clustername> --cluster-template kubernetes-template --master-count 1 --node-count 1 --keypair <your keypair>


Use a cluster

Docker Swarm

You can interact with your Docker Swarm using the docker CLI. This must of course be installed locally on your computer first. (something about login servers?)

$ mkdir -p ~/clusters/docker-swarm
$ $(openstack coe cluster config <your-cluster> --output-certs --dir ~/clusters/docker-swarm)

This will generate a client certificate with a corresponding key, download the CA certificate, and export some environment variables needed for the docker client to communicate with your cluster.

For example, to list the nodes of your Docker Swarm:

$ docker node ls
ID                            HOSTNAME                                        STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
twmj3a98bo6y1swu9zpz4dmig     sverm-r5rpscxldtrc-node-0.novalocal             Ready               Active                                  1.13.1
us5q2whrjdb1rwiqzegc8xi12     sverm-r5rpscxldtrc-node-1.novalocal             Ready               Active                                  1.13.1
6liccpscem1xo6x2a7kbcvtuz *   sverm-r5rpscxldtrc-primary-master-0.novalocal   Ready               Active              Leader              1.13.1
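Once those environment variables are set, any docker command talks to your swarm. As a hypothetical example, deploying a small replicated service (the name web and the nginx image are placeholders):

$ docker service create --name web --replicas 2 --publish 8080:80 nginx
$ docker service ls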


Kubernetes

You can interact with your Kubernetes cluster with kubectl. Install it first if you don't already have it: Install-guide (something about login servers?)

Source your cluster config:

$ mkdir -p ~/clusters/kubernetes-cluster
$ $(openstack coe cluster config <your-cluster> --dir ~/clusters/kubernetes-cluster)

That should just work, and you can run kubectl commands as you please.

For example, to check that all nodes are up:

$ kubectl get nodes
NAME                            STATUS   ROLES    AGE    VERSION
mycluster-o56ashbsrqqa-master-0   Ready    master   131m   v1.15.12
mycluster-o56ashbsrqqa-minion-0   Ready    <none>   131m   v1.15.12
mycluster-o56ashbsrqqa-minion-1   Ready    <none>   131m   v1.15.12
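From here you can deploy workloads as on any other Kubernetes cluster. A minimal sketch (the deployment name hello and the nginx image are placeholders):

$ kubectl create deployment hello --image=nginx
$ kubectl get pods
$ kubectl expose deployment hello --port=80 --type=NodePort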

Our public Kubernetes template does not have the Keystone auth module enabled by default. But if you choose to enable it via label overrides, you can interact with your newly created cluster using the credentials from your openrc file. If you want that, configure kubectl to use OpenStack auth as follows:

$ kubectl config set-credentials openstackuser --auth-provider=openstack
$ kubectl config set-context --cluster=<yourclustername> --user=openstackuser openstackuser@<yourclustername>
$ kubectl config use-context openstackuser@<yourclustername>

The defaults don't really allow you to do much, and you will have to set up RBAC policies yourself, to your liking.
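As a hypothetical starting point, you could grant a Keystone user read-only access using the built-in view cluster role (the binding name and user below are placeholders):

$ kubectl create clusterrolebinding openstackuser-view \
    --clusterrole=view --user=<your-keystone-user>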

Scaling cluster nodes

To change the number of nodes in your cluster, you can do the following:

$ openstack coe cluster update <your-cluster> replace node_count=<N>

Increasing the node_count will (obviously) add worker nodes to your cluster. This is nice if you are running out of resources.

If you want to decrease the number of nodes, what happens depends on your chosen COE. If you're running k8s, Magnum will try to find a node with no running containers and delete it. If no empty nodes are found, Magnum will warn you and delete a node at random. With Docker Swarm, Magnum has no logic to discover an empty node, and will just delete nodes at random.
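For example, growing a hypothetical two-node Kubernetes cluster to three workers:

$ openstack coe cluster update <your-cluster> replace node_count=3
$ # When the status reaches UPDATE_COMPLETE, the new node appears in the COE:
$ kubectl get nodes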

Upgrading

TBA
