
Sunday, June 12, 2022

Converting the Image Format Using qemu-img

You can import an image file in VHD, VMDK, QCOW2, RAW, VHDX, QCOW, VDI, QED, ZVHD, or ZVHD2 format to HUAWEI CLOUD. Image files in other formats must be converted before being imported. You can use the open-source tool qemu-img to convert image file formats.

Key points

  • qemu-img supports the mutual conversion of image formats VHD, VMDK, QCOW2, RAW, VHDX, QCOW, VDI, and QED.
  • ZVHD and ZVHD2 are self-developed image file formats and cannot be identified by qemu-img. To convert image files to either of these two formats, use the qemu-img-hw tool.
  • When you run the command to convert VHD image files, use vpc in place of vhd as the format name; otherwise, qemu-img cannot identify the image format (qemu-img refers to the VHD format as vpc). An example is shown below.
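
For instance, a minimal sketch of a VHD-to-QCOW2 conversion (the file names here are just placeholders) passes vpc as the source format:

$ qemu-img convert -p -f vpc -O qcow2 image.vhd image.qcow2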

I'm using Fedora 35 and I've already installed the package:

$ sudo dnf provides qemu-img
Last metadata expiration check: 0:53:05 ago on Sun 12 Jun 2022 09:49:21 PM CDT.
qemu-img-2:6.1.0-5.fc35.x86_64 : QEMU command line tool for manipulating disk images
Repo        : fedora
Matched from:
Provide    : qemu-img = 2:6.1.0-5.fc35

qemu-img-2:6.1.0-14.fc35.x86_64 : QEMU command line tool for manipulating disk images
Repo        : @System
Matched from:
Provide    : qemu-img = 2:6.1.0-14.fc35

qemu-img-2:6.1.0-14.fc35.x86_64 : QEMU command line tool for manipulating disk images
Repo        : updates
Matched from:
Provide    : qemu-img = 2:6.1.0-14.fc35

Checking the package version.

$ qemu-img -V
qemu-img version 6.1.0 (qemu-6.1.0-14.fc35)
Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

Converting the image.

$ export VMDK='wiki.vmdk'
$ export QCOW2='wiki.qcow2'
$ qemu-img convert -p -f vmdk -O qcow2 ${VMDK} ${QCOW2}
    (100.00/100%)

Getting the image information.

$ qemu-img info ${QCOW2}
image: wiki.qcow2
file format: qcow2
virtual size: 30 GiB (32212254720 bytes)
disk size: 15.8 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false

And now, enjoy: you can continue customizing the image or use it directly with QEMU.
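
As a quick sanity check, here is a minimal sketch for booting the converted image with QEMU (the memory size and virtio choices are assumptions, not requirements):

$ qemu-system-x86_64 -enable-kvm -m 2048 -drive file=${QCOW2},format=qcow2,if=virtio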

Thursday, October 14, 2021

Resetting a Windows guest’s Administrator password with guestfish

DISCLAIMER: This is not my post; it is only a copy. In case the original gets deleted or whatever, keeping it on my personal blog makes it easier for me to find. You can find the original at the link at the end of the post.

I recently found myself with a Windows guest for which I didn’t have the Administrator password or any way of getting it. Nevertheless, I needed to make configuration changes to it. As I had no need to recover the old password, I was looking for a way to simply replace the Administrator password with one of my own choosing.

I came across this excellent post on the topic at 4sysops.com. Option 4, the Sticky Keys trick, worked for me and is exceptionally simple to do with guestfish in Fedora. Windows has a feature called Sticky Keys, part of its suite of accessibility features. As such, it’s available before login and critical to this method. In short, pressing a specific sequence of keys will invoke the Sticky Keys program. 

We will use Guestfish to temporarily replace that program with a command shell, use the command shell to change the Administrator password, log in, and then put everything back how it was. N.B. As pointed out in the above post, Windows uses your password to encrypt various bits of data, including the Windows Vault and passwords stored in IE. Changing the Administrator password using this mechanism will make that data permanently inaccessible. 

First, we assume we have local access to the disk image from our Fedora box and that libguestfs is installed. Also, note that this is an offline process, so the guest must be shut down at this point. Attempting to do this while the guest runs will result in data corruption.

# guestfish -i guest.img
Welcome to guestfish, the libguestfs filesystem interactive shell for editing virtual machine filesystems.
Type: 'help' for a list of commands
      'man' to read the manual
      'quit' to quit the shell
> mv /Windows/System32/sethc.exe /Windows/System32/sethc.exe.bak
> cp /Windows/System32/cmd.exe /Windows/System32/sethc.exe
> exit

You may find that the capitalization of the paths is different in your guest, but Guestfish’s tab completion should help you sort this out quite quickly. Start your guest again. When the login screen appears, press the SHIFT key 5 times. Instead of Sticky Keys, a command shell will be displayed.
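
There you can set a new Administrator password; a sketch following the original post (choose your own password):

C:\Windows\system32> net user Administrator <NewPassword>

Once you have logged in and finished your changes, shut the guest down again and put Sticky Keys back the way it was:

# guestfish -i guest.img
> mv /Windows/System32/sethc.exe.bak /Windows/System32/sethc.exe
> exit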

You can find the original post (written for Windows 2008) here.

Thursday, April 22, 2021

Can't initialize iptables table filter and nat: Permission denied

The best solution is to change the container image so it has an updated iptables version, but if you can't do that, follow the steps below.

Environment

  • Red Hat OpenShift Container Platform 4.6+

Issue

Executing an iptables command in an application container fails with the following errors.


[root@pod]# iptables -L
iptables v1.8.4 (legacy): can't initialize iptables table `filter': Permission denied
Perhaps iptables or your kernel needs to be upgraded.

[root@pod]# iptables -L -t nat
iptables v1.8.4 (legacy): can't initialize iptables table `nat': Permission denied
Perhaps iptables or your kernel needs to be upgraded.

Resolution

Add the needed capabilities, and set the SELinux context to match the denied context shown in the audit logs, under pod.spec.containers[0].securityContext.

spec:
  containers:
  - securityContext:
      privileged: false
      capabilities:
        drop: ["all"]
        add: ["NET_ADMIN", "NET_RAW", "NET_BIND_SERVICE"]
      seLinuxOptions:
        user: "system_u"
        role: "system_r"
        type: "container_t"
        level: "s0:c981,c991"

Diagnostic Steps

  1. Find the worker node where the pod is running.
  2. Connect to the worker node.
  3. Tail the audit log.
  4. Start a bash session on the pod.
  5. Execute the iptables command.
  6. Wait for the iptables denial error in the audit log (see the sketch and sample output below).
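
A rough sketch of steps 1, 2, 4, and 5 using the oc CLI (the pod name app-pod and namespace app-ns are placeholders):

$ oc get pod app-pod -n app-ns -o wide   # step 1: the NODE column shows the worker
$ oc debug node/<worker-node>            # step 2: opens a shell on the worker
$ oc rsh -n app-ns app-pod               # step 4: opens a session on the pod
[root@pod]# iptables -L                  # step 5: triggers the AVC denial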

[root@worker]# tail -f /var/log/audit/audit.log
...[ SNIP ]...
type=AVC msg=audit(1618591176.860:2303): avc: denied { module_request } for pid=912615 comm="iptables" kmod="iptable_filter" scontext=system_u:system_r:container_t:s0:c981,c991 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=0
type=AVC msg=audit(1618591176.860:2304): avc: denied { module_request } for pid=912615 comm="iptables" kmod="iptable_filter" scontext=system_u:system_r:container_t:s0:c981,c991 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=0
...[ SNIP ]...

Monday, June 29, 2020

TripleO Container steps

Container steps

Similar to bare metal, containers are brought up in a stepwise manner. The current architecture supports bringing up baremetal services alongside containers. Therefore, baremetal steps may be required depending on the service, and they are always executed before the corresponding container step.

The list below represents the correlation between the baremetal and the container steps. These steps are executed sequentially:

  • Container config files are generated per hiera settings.

  • Host Prep

  • Load Balancer configuration baremetal

    • Step 1 external steps (execute Ansible on Undercloud)

    • Step 1 deployment steps (Ansible)

    • Common Deployment steps

      • Step 1 baremetal (Puppet)

      • Step 1 containers

  • Core Services (Database/Rabbit/NTP/etc.)

    • Step 2 external steps (execute Ansible on Undercloud)

    • Step 2 deployment steps (Ansible)

    • Common Deployment steps

      • Step 2 baremetal (Puppet)

      • Step 2 containers

  • Early OpenStack Service setup (Ringbuilder, etc.)

    • Step 3 external steps (execute Ansible on Undercloud)

    • Step 3 deployment steps (Ansible)

    • Common Deployment steps

      • Step 3 baremetal (Puppet)

      • Step 3 containers

  • General OpenStack Services

    • Step 4 external steps (execute Ansible on Undercloud)

    • Step 4 deployment steps (Ansible)

    • Common Deployment steps

      • Step 4 baremetal (Puppet)

      • Step 4 containers (Keystone initialization occurs here)

  • Service activation (Pacemaker)

    • Step 5 external steps (execute Ansible on Undercloud)

    • Step 5 deployment steps (Ansible)

    • Common Deployment steps

      • Step 5 baremetal (Puppet)

      • Step 5 containers

Sunday, June 28, 2020

View the list of images on the undercloud docker-distribution registry

To view the list of images on the undercloud docker-distribution registry use the following command:

(undercloud) $ curl -s http://192.168.24.1:8787/v2/_catalog | jq .repositories[]

To view a list of tags for a specific image, query the image's tags endpoint:

(undercloud) $ curl -s http://192.168.24.1:8787/v2/rhosp13/openstack-keystone/tags/list | jq .tags

To verify a tagged image, use the skopeo command:

(undercloud) $ skopeo inspect --tls-verify=false docker://192.168.24.1:8787/rhosp13/openstack-keystone:13.0-44

Saturday, June 27, 2020

Updating network configuration on the Overcloud after a deployment

By default, subsequent changes made to network configuration templates (bonding options, MTU, bond type, etc.) are not applied on existing nodes when the overcloud stack is updated.

To push an updated network configuration, add UPDATE to the list of actions set in the NetworkDeploymentActions parameter. (The default is ['CREATE']; to enable network configuration updates on stack update, it must be changed to ['CREATE','UPDATE'].)

  • Enable update of the network configuration for all roles by adding the following to parameter_defaults in an environment file:

    parameter_defaults:
      NetworkDeploymentActions: ['CREATE','UPDATE']
  • Limit the network configuration update to nodes of a specific role by using a role-specific parameter, {role.name}NetworkDeploymentActions. For example, to update the network configuration on the nodes in the Compute role, add the following to parameter_defaults in an environment file:

    parameter_defaults:
      ComputeNetworkDeploymentActions: ['CREATE','UPDATE']
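
Then include the environment file on the next overcloud stack update; a sketch, where network-update.yaml is a hypothetical file containing the parameter_defaults above:

(undercloud) $ openstack overcloud deploy --templates -e network-update.yaml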

Friday, June 26, 2020

OSD refusing to start with "ERROR: osd init failed: (1) Operation not permitted"

The main issue is that the OSD refuses to start with "ERROR: osd init failed: (1) Operation not permitted".

Log error:

2014-11-13 02:32:32.380964 7f977fd87780 1 journal _open /var/lib/ceph/osd/ceph-289/journal fd 21: 10736369664 bytes, block size 4096 bytes, directio = 1, aio = 1
2014-11-13 02:32:32.393814 7f977fd87780 1 journal _open /var/lib/ceph/osd/ceph-289/journal fd 21: 10736369664 bytes, block size 4096 bytes, directio = 1, aio = 1
2014-11-13 02:32:42.105930 7f977fd87780 1 journal close /var/lib/ceph/osd/ceph-289/journal
2014-11-13 02:32:42.112233 7f977fd87780 -1 ** ERROR: osd init failed: (1) Operation not permitted

Resolution:

  • It appears the OSD is having trouble authenticating with the monitor.
  • Verify that the keyring file is present and correct.
  • By default, it is located in /var/lib/ceph/osd/ceph-<ID>/keyring.
  • It should match the key returned by the following command:

# ceph auth get osd.<ID>
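
For example, for osd.289 from the log above, a sketch of re-syncing the local keyring with the key the monitor holds (back up the old keyring first; the OSD id here is taken from the log):

# ceph auth get osd.289
# ceph auth get osd.289 -o /var/lib/ceph/osd/ceph-289/keyring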