Updating network configuration on the Overcloud after a deployment

By default, subsequent changes made to the network configuration templates (bonding options, MTU, bond type, etc.) are not applied to existing nodes when the overcloud stack is updated.
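A hedged sketch of the commonly used workaround: the tripleo-heat-templates parameter NetworkDeploymentActions defaults to running the network configuration only on CREATE, so extending it to UPDATE makes the templates re-apply on stack updates. Passed as an extra environment file on the next deploy run:

```yaml
# Environment file (e.g. -e network-update.yaml on the next
# `openstack overcloud deploy` run); NetworkDeploymentActions
# defaults to ['CREATE'], so net config normally runs only once.
parameter_defaults:
  NetworkDeploymentActions: ['CREATE','UPDATE']
```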


TripleO Container steps

Similar to baremetal, containers are brought up in a stepwise manner. The
current architecture supports bringing up baremetal services alongside
containers. Therefore, baremetal steps may be required depending on the
service, and they are always executed before the corresponding container step.

The list below represents the correlation between the baremetal and the
container steps. These steps are executed sequentially:


View the list of images on the undercloud’s docker-distribution registry

To view the list of images on the undercloud’s docker-distribution registry, use the following command:

(undercloud) $ curl | jq .repositories[]

To view a list of tags for a specific image, use the following command:

(undercloud) $ curl -s | jq .tags
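The registry URLs did not survive the excerpt; as a hedged sketch (the address and port are assumptions, 8787 being the port the undercloud registry commonly listens on), the calls target the Docker v2 API. The jq filters are demonstrated below against a canned catalog response so the snippet runs without a registry:

```shell
# The real calls would look roughly like (address assumed):
#   curl -s http://192.168.24.1:8787/v2/_catalog | jq '.repositories[]'
#   curl -s http://192.168.24.1:8787/v2/rhosp13/openstack-nova-api/tags/list | jq '.tags'
# Demonstrated against a canned v2 catalog response:
repos=$(echo '{"repositories":["rhosp13/openstack-nova-api","rhosp13/openstack-keystone"]}' \
  | jq -r '.repositories[]')
echo "$repos"
```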


fake_pxe as pm_type in RHOSP13 (TripleO + OpenStack Queens)

So, in RHOSP13 fake_pxe is deprecated and will be replaced in RHOSP14 by manual-management; the problem is that RHOSP13 sits right in the middle of that migration, so there is no clean way to use fake_pxe in RHOSP13.
Another change is in the installation of the undercloud: the option enabled_drivers is now DEPRECATED and replaced by enabled_hardware_types.

Now then, in order to be able to use fake_pxe as a pm_type, first install the undercloud without the enabled_drivers option; use only enabled_hardware_types and add manual-management at the end, like this:
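The fragment the excerpt points at would sit in undercloud.conf; a hedged sketch (the driver list besides manual-management is only an example set):

```ini
# undercloud.conf sketch; ipmi/redfish are example hardware types
[DEFAULT]
# enabled_drivers = ...    # DEPRECATED: leave this unset
enabled_hardware_types = ipmi,redfish,manual-management
```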


How to disable Cloud-Init in an EL Cloud Image

So this one is pretty simple. However, I found a lot of misinformation along the way, so I figured I would jot down the proper (and simplest) process here.

Symptoms: a RHEL (or variant) VM that takes a very long time to boot. On the VM console, you can see the following output while the VM boot process is stalled and waiting for a timeout. Note that the message below has nothing to do with cloud-init, but it's the output that I have most often seen on the console while waiting for a VM to boot.

[106.325574] random: crng init done
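The simple process the post alludes to boils down to one marker file: cloud-init skips all of its stages when /etc/cloud/cloud-init.disabled exists. A sketch, demonstrated here against a scratch directory standing in for the image's mounted root filesystem:

```shell
# On the real system (or mounted image) this is just:
#   touch /etc/cloud/cloud-init.disabled
# cloud-init's ds-identify checks for this marker and disables every stage.
root=$(mktemp -d)          # stand-in for the image's root filesystem
mkdir -p "$root/etc/cloud"
touch "$root/etc/cloud/cloud-init.disabled"
ls "$root/etc/cloud"
```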

Change password to users on qcow2 disk or images

Sometimes you need to change the password of a user in a qcow2 image, to test locally or if you are using an infrastructure without cloud-init; regardless of the user, the procedure is the same.

Depending on the system, the package names can vary a little; I'm using Fedora 27, where I have installed
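A sketch of the idea, assuming the libguestfs tools are available (the image path and user below are placeholders, not from the post); the hash generation is the runnable part:

```shell
# Generate a SHA-512 crypt hash suitable for pasting into /etc/shadow:
hash=$(openssl passwd -6 'NewPass123')
echo "$hash"
# With libguestfs-tools installed, the password can then be set offline, e.g.:
#   virt-customize -a /path/to/image.qcow2 --password fedora:password:NewPass123
```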

Ceph recovery backfilling affecting production instances

In any kind of distributed system you have to choose between consistency, availability, and partition tolerance. The CAP theorem states that in the presence of a network partition, one has to choose between consistency and availability; by default (default configurations) Ceph provides consistency and partition tolerance. Just keep in mind that Ceph has many config options: ~860 in hammer, ~1100 in jewel.
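Among those many options, the ones usually tuned so backfill does not starve client I/O are the recovery throttles. A hedged ceph.conf sketch (the values are illustrative; on a running cluster they can also be injected with `ceph tell osd.* injectargs`):

```ini
[osd]
osd_max_backfills = 1            # concurrent backfills per OSD (default is higher)
osd_recovery_max_active = 1      # concurrent recovery ops per OSD
osd_recovery_op_priority = 1     # deprioritize recovery vs client ops
```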

Get total provisioned size from cinder volumes

Quick way to get the total amount of provisioned space from cinder

alvaro@skyline.local: ~
$ cinder list --all-tenants
MySQL-like output :)

So to parse the output and add all the values in the Size column, use the following piped commands.

alvaro@skyline.local: ~
$ .
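The piped command itself did not survive the excerpt; as a hedged reconstruction of the idea, awk can pick the Size column out of the MySQL-like table and sum it. Demonstrated against a canned table (the column position is an assumption about this client version):

```shell
# The real pipeline would be roughly:
#   cinder list --all-tenants | awk -F'|' 'NF>2 && $5+0==$5 {s+=$5} END {print s " GB"}'
# Demonstrated against a canned MySQL-like table:
total=$(printf '%s\n' \
  '+----+--------+------+------+' \
  '| ID | Status | Name | Size |' \
  '+----+--------+------+------+' \
  '| a1 | in-use | vol1 |   10 |' \
  '| b2 | in-use | vol2 |   25 |' \
  '+----+--------+------+------+' \
  | awk -F'|' 'NF>2 && $5+0==$5 {s+=$5} END {print s}')
echo "$total"   # 35
```

The `$5+0==$5` guard keeps only rows whose fifth pipe-delimited field is numeric, which skips the header and the `+---+` separators.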

Cloning a Ceph client auth key

I don't recall any reason to do this other than using the same user and auth key to authenticate in different Ceph clusters, like in a multi-backend solution, or just because things get messy when you are not using a default configuration.

Sometimes things get easier when we use the same user and auth key on both clusters for services to connect to, so let's see some background commands for managing users, keys, and permissions:

Create new user and auth token (cinder client example):
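The excerpt cuts off before the command itself; as a hedged sketch, creating the cinder client on the source cluster would look like `ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes'` (the pool name is an example). The resulting keyring, which is what gets cloned into the second cluster with `ceph auth import -i <file>`, looks like:

```ini
[client.cinder]
    key = AQ...==                       # placeholder, not a real key
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes"
```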

Export instance from OpenStack with Ceph/rbd backend.

Suppose you want to migrate an instance between different infrastructures, or you want to hand over an instance's information to a client, so you need to recover (export) the instance's volume information.

Step 1: Get the instance UUID.

