April 13, 2017

Deploying Openstack PoC on CentOS with linux bridge

April 13, 2017 10:00 PM

I recently needed to start "playing" with Openstack (working in an existing RDO setup), so I thought it would be a good idea to have my personal playground where I could deploy from scratch, then break and fix that setup at will.

At first sight, Openstack looks impressive and "over-engineered": it's complex and has zillions of modules to make it work. But when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can look strange, but I'll explain why.

First, write down your requirements, and only then look at the needed openstack components. For my personal playground, I just wanted a basic setup that would let me deploy VMs on demand, in the existing network, directly using a bridge, as I want the VMs to be integrated into the existing network/subnet.

So, looking at the list of openstack components, we just need:

  • keystone (needed for the identity service)
  • nova (hypervisor part)
  • neutron (handling the network part)
  • glance (to store the OS images that will be used to create the VMs)

Now that I have my requirements and the list of needed components, let's see how to set up my PoC. The RDO project has good docs for this, including the Quickstart guide. You can follow that guide, and as everything is packaged/built/tested and delivered through the CentOS mirror network, you can have a working RDO/openstack all-in-one setup in minutes.

The only issue is that it doesn't fit my need: it will set up unneeded components, and the network layout isn't the one I wanted either, as it will be based on openvswitch and other rules (multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around puppet modules, and it also supports a lot of options to configure your PoC.

Let's assume that I want a PoC based on openstack-newton, and that my machine has two NICs: eth0 for the mgmt network and eth1 for the VMs network. You don't need to configure the bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but adapt the packstack command line:

yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack

Let's fix eth1 to ensure that it's started but without any IP on it :

sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1

And now let's call packstack with the required options so that it will use a basic linux bridge (and so no openvswitch), and instruct it to use eth1 for that mapping:

packstack --allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n 

At this stage we have the openstack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations. We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now:

source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp --allocation-pool=start=192.168.123.1,end=192.168.123.4 --gateway=192.168.123.254 --dns-nameserver=192.168.123.254 othernet 192.168.123.0/24

Before importing images and creating instances, there is one thing left to do: instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside openstack. And also don't forget to let traffic (in/out) pass through the security group (see doc).

Just be sure to have enable_isolated_metadata = True in /etc/neutron/dhcp_agent.ini and then systemctl restart neutron-dhcp-agent: from that point, cloud metadata will be served from dhcp too.
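
For example, a minimal way to apply that change (crudini comes from EPEL; editing the file by hand works just as well):

crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
systemctl restart neutron-dhcp-agent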

From that point you can just follow the quickstart guide to create projects/users, import images, and create instances, and/or do all this from the CLI too.
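
As a sketch of the CLI route (image file, flavor and instance names are just examples to adapt; note that Newton no longer ships default flavors, so you may have to create one first):

glance image-create --name cirros --disk-format qcow2 --container-format bare --file cirros.qcow2 --visibility public
nova flavor-create m1.small auto 2048 20 2
NET_ID=$(neutron net-show othernet -f value -c id)
nova boot --flavor m1.small --image cirros --nic net-id=$NET_ID testvm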

One last remark with linuxbridge in an existing network: as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw that when I added other compute nodes in the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance and not from neutron. As a workaround, you can just ignore the MAC address range used by openstack, so that your VMs will always get their IP from the neutron dhcp. To do this, there are different options, depending on your local dhcpd instance:

  • for dnsmasq : dhcp-host=fa:16:3e:*:*:*,ignore (see doc)
  • for ISC dhcpd : "ignore booting" (see doc)

The default MAC address prefix for openstack VMs is indeed fa:16:3e:00:00:00 (see base_mac in /etc/neutron/neutron.conf, so that can be changed too).
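
To double-check what your deployment uses (the option is usually commented out at its default value):

grep -E '^#?base_mac' /etc/neutron/neutron.conf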

Those were some of my findings for my openstack PoC/playground. Now that I understand all this a little bit more, I'm currently working on some puppet integration for it, as there are official openstack puppet modules available on git.openstack.org that one can import to deploy/configure openstack (better than using packstack). But there are lots of "yaks to shave" to get to that point, so that's surely for another future blog post.

April 12, 2017

Remotely kicking a CentOS install through a lightweight 1Mb iso image

April 12, 2017 10:00 PM

As a sysadmin, you probably deploy your bare-metal nodes through kickstarts in combination with pxe/dhcp. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all? Suppose that you have a standalone node that you have to deploy, but there is no PXE/dhcp environment configured (yet).

The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS minimal iso image onto a usb stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were:

  • access to the ipmi interface of that server
  • the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan

One simple solution would have been to just "attach" the CentOS 7 iso as virtual media, boot the machine, and install from the "locally emulated" cd-rom drive. But that's not something I wanted to do, as the install would then be fed from my local iso image over my "slow" bandwidth. Instead, I wanted to directly use the Gbit link from that server to kick the install. So here is how you can do it with ipxe.iso. iPXE is really helpful for such things. The only "issue" was that I had to configure the nic first with a fixed IP (remember? no dhcpd yet).

So, download the ipxe.iso image, add it as "virtual media" (the transfer will be fast, as it's under 1Mb), and boot the server. Once it boots from the iso image, don't let ipxe run, but instead hit CTRL-B when you see ipxe starting. The reason is that we don't want to let it start the dhcp discover/offer/request/ack process, as we know it will not work.

You're then presented with the ipxe shell, so here we go (all parameters are obviously to be adapted, including the net adapter number):

set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x

ifopen net0
ifstat

From that point you should have network connectivity, so we can "just" chainload the CentOS pxe images and start the install :

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.lang=en_GB inst.keymap=be-latin1 inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x

Then you can just enjoy your CentOS install running entirely from the network, and so at "full steam"! You can also combine this directly with inst.ks= to have a fully automated setup. Worth knowing too that you can regenerate/build an updated/customized ipxe.iso with the ipxe build scripts directly. That's more or less what we used to also have a 1Mb universal installer for CentOS 6 and 7, see https://wiki.centos.org/HowTos/RemoteiPXE, but that one defaults to dhcp.
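
For the fully automated variant, the chain line just grows an inst.ks= pointer to your kickstart (the kickstart URL below is a hypothetical example to adapt):

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.ks=http://yourserver/ks/node.ks ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x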

Hope it helps

New CentOS Atomic Host with Updated Kubernetes, Etcd and Flannel

April 12, 2017 04:31 PM

An updated version of CentOS Atomic Host (tree version 7.20170405) is now available, including significant updates to kubernetes (version 1.5.2), etcd (version 3.1) and flannel (version 0.7).

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.15.4-2.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-11.el7.centos.x86_64
  • etcd-3.1.0-2.el7.x86_64
  • flannel-0.7.0-1.el7.x86_64
  • kernel-3.10.0-514.10.2.el7.x86_64
  • kubernetes-node-1.5.2-0.2.gitc55cf2b.el7.x86_64
  • ostree-2017.1-3.atomic.el7.x86_64
  • rpm-ostree-client-2017.1-6.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
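
If you need that, the NoCloud seed is just a small iso carrying your cloud-init data; a minimal sketch (genisoimage is in the CentOS base repos; user-data and meta-data are files you provide):

$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data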

Amazon Machine Images

Region Image ID
us-east-1 ami-a50d85b3
ap-south-1 ami-13f6857c
eu-west-2 ami-42233726
eu-west-1 ami-49063c2f
ap-northeast-2 ami-d1c81abf
ap-northeast-1 ami-7b1c3e1c
sa-east-1 ami-914f2dfd
ca-central-1 ami-2de75b49
ap-southeast-1 ami-53328c30
ap-southeast-2 ami-6d929c0e
eu-central-1 ami-dca270b3
us-east-2 ami-18bc987d
us-west-1 ami-b22a0fd2
us-west-2 ami-2e2bbb4e

SHA Sums

b337bc56a71b6b25237a5c0c06c9f48a33973b4e41c648288bcfaf5a494af98c  CentOS-Atomic-Host-7.1703-GenericCloud.qcow2
707db9907a850816fca7782da1dca3584fa0d8be821d0ee95525b688aaa0cc6d  CentOS-Atomic-Host-7.1703-GenericCloud.qcow2.gz
c4ef91cc801777e214106522f848f8b388fb92699d67ed4fe86cc942a361f7a2  CentOS-Atomic-Host-7.1703-GenericCloud.qcow2.xz
5e41a0306a8c1c212117c68eae10f0f59b25cb6c57dec9629bf3ac760bca54bc  CentOS-Atomic-Host-7.1703-Installer.iso
f509eb482a614d2eb047009aaa6c37c125b66cdd483e7015983cae5f72d9f041  CentOS-Atomic-Host-7.1703-Vagrant-Libvirt.box
2c0ba7dda2f4f249aa6c31cfcb36df1a17913b9d8786afb7b340a24b15b404f1  CentOS-Atomic-Host-7.1703-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

April 07, 2017

Updated CentOS Vagrant Images Available (v1703.01)

April 07, 2017 09:54 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 30 March 2017 and the following changes:

  • The VMware images now use the paravirtualized SCSI controller (the kernel module for the LSILogic controller has been deprecated upstream).
  • The VMware images now specify vmware_desktop, allowing them to work with both VMware Fusion and VMware Workstation.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn’t work with SMB sync due to Vagrant bug #8404
  7. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.8.1 from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.3 on OS X 10.11.6, with VirtualBox 5.1.18.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc
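
If gpg reports that the signing key is not in your keyring, you can import it first; on a CentOS host it is already shipped on disk:

$ gpg --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7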

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 82bbed14c34fdd8fd3cb617b0e8c0f154ebd4d1388f45de3335b2cdf791e5fed --provider libvirt --box-version 1703.01 centos/7

Unfortunately, this is not possible with vagrant box update.
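As a workaround, you can download a box directly from cloud.centos.org, check it against the signed checksum file, and add it manually (the box file name below is an example; check the directory listing for the exact name):

$ curl -O http://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1703_01.Libvirt.box
$ sha256sum -c <(grep Libvirt sha256sum.txt.asc)
$ vagrant box add --name centos/7 CentOS-7-x86_64-Vagrant-1703_01.Libvirt.box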

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

March 13, 2017

Updated CentOS Vagrant Images Available (v1702.01)

March 13, 2017 10:23 AM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 28 February 2017.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
  7. The metadata of the images for VMware is set to vmware_fusion. Please specify vmware_fusion as the provider when downloading the images, even if you’re using VMware Workstation.

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.0 on OS X 10.11.6, with VirtualBox 5.0.30.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 48745c0f2dd4fbee366d830e3e333b637528ad936dd66ed5911df2adc02f46d7 --provider libvirt --box-version 1702.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

March 08, 2017

[infra] scheduled major outage for several services

March 08, 2017 12:36 PM

As announced and confirmed on the centos-devel list, next week we’ll have a major outage impacting several services hosted in the same DC: due to some reorganization at the DC/cage level, we’ll have to shutdown/move/reconfigure a big part of our hosted infra for the following services:

  • https://cbs.centos.org (Koji)
  • https://accounts.centos.org (auth backend, and also https://id.centos.org, our idp in front of ACO)
  • https://ci.centos.org (jenkins-driven CI environment)
  • https://registry.centos.org (that one will be temporary migrated to a read-only registry, so that people already pointing to that node will continue to be able to pull images)

We’re working on a plan to minimize the downtime/reconfiguration part, but at first sight, due to the hardware move of the racks, recabling, etc, the announced downtime will probably be ~48h.

What does that mean? That during this maintenance window, nobody will be able to build/test packages, nor trigger CI jobs automatically (important). This hardware migration is scheduled for March 14th, starting at 13:00 UTC.

We’ll obviously try to restore those services as soon as possible, to minimize the impact on people building pkgs for SIGs.

If you have questions, feel free to discuss this in the #centos-devel channel on irc.freenode.net, or on the centos-devel mailing list.

February 15, 2017

New CentOS Atomic Host with Updated Docker, Kubernetes and Etcd

February 15, 2017 03:08 PM

An updated version of CentOS Atomic Host (tree version 7.20170209) is now available, including significant updates to docker (version 1.12.5), kubernetes (version 1.4) and etcd (version 3.0.15).

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.14.1-5.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.5-14.el7.centos.x86_64
  • etcd-3.0.15-1.el7.x86_64
  • flannel-0.5.5-2.el7.x86_64
  • kernel-3.10.0-514.6.1.el7.x86_64
  • kubernetes-node-1.4.0-0.1.git87d9d8d.el7.x86_64
  • ostree-2016.15-1.atomic.el7.x86_64
  • rpm-ostree-client-2016.13-1.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region Image ID
us-east-1 ami-10f53a06
us-west-2 ami-4d9b1c2d
us-west-1 ami-4ae1bd2a
eu-west-1 ami-1daa8c7b
eu-central-1 ami-e8c20987
ap-southeast-1 ami-a8388fcb
ap-northeast-1 ami-ba2b67dd
ap-southeast-2 ami-1f84857c
ap-northeast-2 ami-adbd6dc3
sa-east-1 ami-1f492e73

SHA Sums

6f8b91373c763cf96ffead6ca044ddf6eea5c0b102a239933c112a7f1089396e  CentOS-Atomic-Host-7.1701-GenericCloud.qcow2
380dcbdd4514f87f8915fee418cc965985c89a91b9182af622e36ffad26c9e04  CentOS-Atomic-Host-7.1701-GenericCloud.qcow2.gz
0bf3d5ec95d40cee94bc80e7c19206b3a260d2835fa43f1e99965bb8f99a777d  CentOS-Atomic-Host-7.1701-GenericCloud.qcow2.xz
bc55326e54832e3e08530e41cb738c4b293a7645c960a4c9be7f66024770a68c  CentOS-Atomic-Host-7.1701-Installer.iso
aaba6ca5e3b0a64abff843bff28eb82092e39fe82f120c76614822334ff22462  CentOS-Atomic-Host-7.1701-Vagrant-Libvirt.box
8d3c64895a40638cb8659186a0caabef9fc10ba944a130eda53f7d2109cfba35  CentOS-Atomic-Host-7.1701-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

January 31, 2017

Weekly scanning of container images with CentOS Container Pipeline

January 31, 2017 01:10 PM

As a part of the CentOS Container Pipeline project, we’ve been continually discussing, debating and working towards features that developers and sysadmins out there would like to have from a build pipeline. That is: besides just building the container images upon a push to some git repository, what else would add value for the devs and admins?

If you read the previous blog post that talked about CentOS Container Image scanners, you already know that we have scanners based on atomic scan. These scanners scan the container image post build and report the results as an email to the user. If you’re already using it, you might find the JSON content of the email a bit untidy. But rest assured, we’re working towards making it easier on the eyes. 🙂

As is the case with build pipelines similar to CentOS Container Pipeline, most container images are scanned only at the time of build. However, with CentOS Container Pipeline, we cannot afford such an architecture. Enterprises, academics, research institutes and various other large & small scale projects that use CentOS as their base platform for servers and for developing containerized applications often have stringent security rules which require them to update to the latest version of enterprise Linux packages. Besides security updates, new versions of packages often come bundled with new features!

So, we figured it would be helpful for the devs and admins to have a weekly update about the status of their container images. In simplest terms, weekly image scans present exactly the same output to the users as a post-build scan does, albeit on a weekly basis, instead of forgetting about the images after building them. Weekly scans are part of our Scheduled Scans story, wherein we want to offer users various time intervals at the end of which their container image gets scanned.

Based on the results of such a scheduled scan, a dev or an admin can decide if their image needs to be upgraded, or whether they are OK with its current state. So far, the only other way to do this is by running a container and checking the result of yum check-update.
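
That manual check is essentially a one-liner; for example, against the public CentOS base image (any image you have built works the same way):

$ docker run --rm registry.centos.org/centos/centos yum check-update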

To use this feature now, all you need to do is check out the first blog post about how to get started with CentOS Container Pipeline. Once you build images with CentOS Container Pipeline, those images are automatically scanned on a weekly basis and an email is sent out to the user for each of his/her image(s).

If you would like to have a feature included in CentOS Container Pipeline, come talk to us in the IRC channel #centos-devel on the Freenode server. Alternatively, you can also check out our GitHub repo and open an issue for discussion there. We are excited to hear about and understand the features that developers and sysadmins would find helpful!

January 16, 2017

Enabling SPF record for centos.org

January 16, 2017 11:00 PM

In the last weeks, I noticed that spam activity was back, including against the centos.org infra. One of the most used techniques was Email Spoofing (aka "forged from address"). That's how I discovered that we never implemented SPF for centos.org (while some of the Infra team members had it on their personal SMTP servers).

While SPF itself is "just" a TXT dns record in your zone, you have to think twice before implementing it. And publishing such a policy yourself doesn't mean that your SMTP servers are checking SPF either. There are PROS and CONS to SPF, so first read multiple sources/articles to understand how it will impact your server/domain when sending/receiving:
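
For reference, such a record can be a single zone file line; a minimal illustrative policy (with hypothetical values) that authorizes the domain's MX hosts plus one named relay looks like this:

example.org.  IN  TXT  "v=spf1 mx a:relay.example.org ~all"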

sending

The first thing to consider is how people having an alias can send their mails: either from behind their known MX borders (and included in your SPF), or through alternate SMTP servers relaying (after being authorized, of course) through servers listed in your SPF.

One thing to know with SPF is that it breaks plain forwarding and aliases. It's not about how you set up your own SPF record, but how the originator domain does it: for example, if you have joe@domain.com sending to joe@otherdomain.com, itself an alias to joe2@domain.com, that will break, as the MX for domain.com will see that a mail for domain.com was 'sent' from otherdomain.com and not from an IP listed in their SPF. There are workarounds for this though, aka remailing and SRS.

receiving

So you have an SPF record in place and so restrict from where you are sending mails? Great, but SPF only works if the other SMTP servers involved are checking for it, and so you should do the same! The fun part is that even if you have CentOS 7, and so Postfix 2.10, there is nothing by default that lets you verify SPF; as stated on this page:

Note: Postfix already ships with SPF support, in the form of a plug-in policy daemon. This is the preferred integration model, at least until SPF is mandated by standards. 

So for our postfix setup, we decided to use pypolicyd-spf: lightweight, easy, written in python. The needed packages are already available in EPEL, but we also rebuilt it on CBS. Once installed, configured and integrated with Postfix, you'll start (based on your .conf settings) blocking mail that arrives at your SMTP servers from IPs/servers not listed in the originator domain's SPF policy (if any).
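
The Postfix integration itself boils down to a policy service entry; a minimal sketch (the exact binary path and settings depend on the package, so check the documentation shipped with it):

/etc/postfix/master.cf:
policyd-spf  unix  -  n  n  -  0  spawn user=nobody argv=/usr/libexec/postfix/policyd-spf

/etc/postfix/main.cf:
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions = ..., reject_unauth_destination, check_policy_service unix:private/policyd-spf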

If you have issues with our current SPF policy on centos.org, feel free to reach us in #centos-devel on irc.freenode.net to discuss it.

January 13, 2017

create a new github.com repo from the cli

January 13, 2017 11:03 PM

I often get into a state where I’ve started some work, done some commits etc, and then realised I don’t have a place to push the code to. Getting it on github has involved getting the browser out, logging in to github, click click click {pain}. So, here is how you can create a new repo for your login name on github.com without moving away from the shell.


curl -H "X-GitHub-OTP: XXXXX" -u 'LoginName' https://api.github.com/user/repos -d '{"name":"Repo-To-Create"}'

You need to supply your OTP pass and replace XXXXX with it, and of course your own LoginName and finally the Repo-To-Create. Once this call runs, curl will ask for your password and you should see the github API dump a bunch of details (or tell you that it failed, in which case you need to check the call).

now the usual 'git remote add github git@github.com:LoginName/Repo-To-Create' and you are off.
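
Spelled out, with the first push (assuming your work sits on master, the usual default branch at the time):

git remote add github git@github.com:LoginName/Repo-To-Create.git
git push -u github master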

regards,

January 04, 2017

Music recording on CentOS 7 DAW

January 04, 2017 11:00 PM

There was something that had been on my (private) TODO list for quite some time: being able to record music, mix and output a single song from multiple recorded tracks. For that you need a Digital Audio Workstation (DAW).

I have several instruments at home (electric guitars, bass, digital piano and also drums) but due to lack of (free) time I never investigated the DAW part on Linux, and especially on CentOS. So having some "offline" days during the holidays helped me investigate that and set up a small DAW on a recycled machine. Let's consider the hardware and software parts.

Hardware support

I personally still own a Line6 TonePort UX2 interface which is now more than 10 years old, and that I used in the past on an iMac. The iMac still runs, but exclusively with CentOS 7 these days, and the TonePort was just collecting dust. When I tried to plug it in, it wasn't really detected, but mainly because of the kernel config, so I (gently) asked Toracat to enable the required kernel module in the centos-plus kernel, and with the centos-plus kernel the TonePort UX2 is seen as an external sound card. Good:

geonosis kernel: usb 3-2: new full-speed USB device number 2 using xhci_hcd
geonosis kernel: usb 3-2: New USB device found, idVendor=0e41, idProduct=4142
geonosis kernel: usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
geonosis kernel: usb 3-2: Product: TonePort UX2
geonosis kernel: usb 3-2: Manufacturer: Line 6
geonosis mtp-probe: checking bus 3, device 2: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-2"
geonosis mtp-probe: bus: 3, device: 2 was not an MTP device
geonosis kernel: line6usb: module is from the staging directory, the quality is unknown, you have been warned.
geonosis kernel: line6usb 3-2:1.0: Line6 TonePort UX2 found
geonosis kernel: line6usb: module is from the staging directory, the quality is unknown, you have been warned.
geonosis kernel: line6usb 3-2:1.0: Line6 TonePort UX2 now attached
geonosis kernel: line6usb 3-2:1.1: Line6 TonePort UX2 found
geonosis kernel: usbcore: registered new interface driver line6usb
geonosis kernel: usbcore: registered new interface driver snd_usb_toneport

I also recently offered myself a small gift to play with: a small Fender Mustang guitar amplifier. It's small enough to fit under the desk in my home office, has amp/effect emulation built in, plus a usb output to redirect the sound directly to the computer. (Easier for quick recording than setting up a microphone in front of my other Fender Custom Vibrolux Reverb tube amp, and my neighbors are also grateful for that decision.)

The good news is that it's directly recognized as another sound card, without any kernel module to activate/enable:

geonosis kernel: usb 3-1: new full-speed USB device number 3 using xhci_hcd
geonosis kernel: usb 3-1: New USB device found, idVendor=1ed8, idProduct=0014
geonosis kernel: usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
geonosis kernel: usb 3-1: Product: Mustang Amplifier
geonosis kernel: usb 3-1: Manufacturer: FMIC
geonosis kernel: usb 3-1: SerialNumber: 05D7FF373837594743075518
geonosis kernel: hid-generic 0003:1ED8:0014.0001: hiddev0,hidraw0: USB HID v1.10 Device [FMIC Mustang Amplifier] on usb-0000:00:14.0-1/input0
geonosis mtp-probe: checking bus 3, device 3: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-1"
geonosis mtp-probe: bus: 3, device: 3 was not an MTP device
geonosis kernel: usbcore: registered new interface driver snd-usb-audio

With those two additional sound cards detected, it looks now like this :

[arrfab@geonosis ~]$ cat /proc/asound/cards 
 0 [PCH            ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xd2530000 irq 32
 1 [TonePortUX2    ]: line6usb - TonePort UX2
                      Line6 TonePort UX2 at USB 3-2:1.0
 2 [Amplifier      ]: USB-Audio - Mustang Amplifier
                      FMIC Mustang Amplifier at usb-0000:00:14.0-1, full speed

Great, now let's have a look at the software part !

Software

There are multiple ways to quickly record any sound from a sound card on Linux, and Audacity is well known for this: it comes with several effects, and you can quickly import, edit, cut and paste (and more!) sounds (and even multiple tracks). But when it comes to music recording, especially if you also want to play with MIDI, you need a proper sequencer. It's really great to see that on Linux you have multiple alternatives, and one that seems to be very popular in the Free and Open Source world is Ardour. As nothing was built for CentOS 7, I decided to create a DAW-7 COPR repository that has everything I need (when combined with EPEL and/or Nux-Dextop).

So I (re)built (thanks to the upstream Fedora maintainers!) multiple packages in that copr repository, including (but not limited to):

  • Ardour 5.5 : sequencer
  • Qjackctl : frontend for needed jack-audio-connection-kit
  • Calf : very good effects/plugins for jack and so that can be used directly within ardour
  • LV2 : other effects/plugins
  • Guitarix : guitar/bass amp+effect simulator
  • LMMS : another sequencer, more oriented towards midi/loops than audio recording from external devices
  • Hydrogen : drum machine for when you can't record real drums but can program your own patterns
  • ... and much more ... :-)

After having tested multiple settings (there is a lot to learn around this), I found myself comfortable with this:

sudo su -c 'curl https://copr.fedorainfracloud.org/coprs/arrfab/DAW-7/repo/epel-7/arrfab-DAW-7-epel-7.repo > /etc/yum.repos.d/arrfab-daw.repo'
sudo yum install -y ardour5 calf lmms hydrogen qjackctl jack-audio-connection-kit jack_capture guitarix lv2-abGate lv2-calf-plugins lv2-drumgizmo lv2-drumkv1 lv2-fomp-plugins lv2-guitarix-plugins lv2-invada-plugins lv2-vocoder-plugins lv2-x42-plugins fluid-soundfont-gm fluid-soundfont-gs

One thing that you have to know (but read all the tutorials/documentation around this) is that your user needs to be part of the jackuser and audio groups to be able to use the needed Jack sound server. (Jack is something you also have to master, but once you understand it, it's just a virtual view of what you'd do with real cables plugged in/out of various hardware elements.)

sudo usermod --groups jackuser,audio --append $your_username
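
Group changes only apply to new login sessions, so log out/in and verify the membership took effect:

id $your_username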

One website I recommend reading is LibreMusicProduction, as it has tons of howtos and also video tutorials about Ardour and other settings. Something else worth mentioning if you just want drum loops: you can find some through Google, but I found some really good ones to start with (licensed in a way that lets you reuse them) on Looperman, Drumslive and Freesound.

Who said that CentOS 7 was only for servers in datacenters and the Cloud ? :-)

(Screenshot: CentOS 7 DAW)

Have fun on your CentOS 7 DAW.

December 16, 2016

Updated CentOS Vagrant Images Available (v1611.01)

December 16, 2016 03:17 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 15 December 2016, as well as the following user-visible changes:

  • the size of the boot partition has been increased to 1GB in centos/7, to conform with the new upstream recommendations
  • the centos/7 image is now based on CentOS Linux 7.3.1611

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 seems unable to assign an address to VirtualBox host-only interfaces.
  6. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.0 on OS X 10.11.6, with VirtualBox 5.0.30.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 74a95be409cef813881f5312dc1221e2559cdbf25f45d5234d81e91632f99cce --provider libvirt --box-version 1611.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

November 24, 2016

Zabbix, selinux and CentOS 7.3.1611

November 24, 2016 11:00 PM

If you're using CentOS, you probably noticed that we have a CR repository containing all the built packages for the next minor release, so that people can "opt-in" and already use those packages before they are released with the full installable tree and iso images.

Using those packages on a subset of your nodes can be interesting, as it permits you to catch some errors/issues/conflicts before the official release (and so before the symlink on the mirrors is changed to that new major.minor version).

For example, I tested some roles myself and found an issue with zabbix-agent refusing to start on a node fully updated/rebooted with CR pkgs (so what will become the 7.3.1611 release). The issue was due to selinux denying something that was allowed in the previous policy.

Here is what selinux had to say about it :

type=AVC msg=audit(1480001303.440:2626): avc:  denied  { setrlimit } for  pid=22682 comm="zabbix_agentd" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:zabbix_agent_t:s0 tclass=process

It's true that there was an update to the selinux policy: from selinux-policy-3.13.1-60.el7_2.9.noarch to selinux-policy-3.13.1-102.el7.noarch.

What's interesting is that I found a similar issue reported at the Zabbix side, but for zabbix-server (here it's the agent; the server is running fine): ZBX-10542.

Clearly something that was working before is now denied, so I created a bug report and hopefully a fix will come in an updated selinux-policy package. But I doubt that it will be available soon.

So in the meantime, what you have to do is:

  • either put zabbix_agent_t into permissive mode with semanage permissive -a zabbix_agent_t
  • or build and distribute a custom selinux policy in your infra (my preferred method)

For those interested, the following .te (type enforcement) file will allow you to build a custom .pp selinux policy file (that you can load with semodule):

module local-zabbix 1.0;

require {
    type zabbix_agent_t;
    class process setrlimit;
}

#============= zabbix_agent_t ==============
allow zabbix_agent_t self:process setrlimit;
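
To compile and load that custom module (checkmodule comes from the checkpolicy package, and semodule_package from policycoreutils):

checkmodule -M -m -o local-zabbix.mod local-zabbix.te
semodule_package -o local-zabbix.pp -m local-zabbix.mod
semodule -i local-zabbix.pp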

You can now use your configuration management platform to distribute that built .pp policy (you don't need to build it on every node). I'll not dive into details, but I wrote some slides around this (for Ansible and Puppet) for a talk I gave some time ago, so feel free to read those, especially the last slides (with examples)

November 15, 2016

Updated CentOS Vagrant Images Available (v1610.01)

November 15, 2016 05:23 PM

Official Vagrant images for CentOS Linux 6.8 and CentOS Linux 7.2.1511 for x86_64 are now available for download, featuring updated packages to 30 October 2016, as well as the following user-visible changes:

  • several optimisations to make the images smaller and faster:
    • do not install most firmware packages
    • do not install microcode_ctl
    • do not build a rescue initramfs (resulting in significantly faster kernel updates)
    • do not load the floppy module on centos/7 (this reduces boot time by ca. 5s)
  • [security]: do not allow regular users to use su to become root or vagrant – see issue #76
  • set the SELinux type of /etc/sudoers.d/vagrant to etc_t

Known Issues

  1. The centos/7 image is based on CentOS Linux 7.2.1511, since CentOS Linux 7.3 is not available yet.
  2. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible.

  3. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to your Vagrantfile.

  4. Please use Vagrant 1.8.6 (version 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610, while version 1.8.7 is unable to download or update boxes due to Vagrant bug #7969).
  5. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

If you are using CentOS Linux on the host, we recommend installing Vagrant from SCL and using the libvirt images. In general, the Vagrant packages provided by your Linux distribution are preferable, since they usually backport fixes for some upstream bugs. If you are using Vagrant on other operating systems, please use Vagrant 1.8.6 (see Known issues, item 4).
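A sketch of that SCL route (collection and package names as shipped by the SCLo SIG at the time; double-check them against the current repos):

$ sudo yum install centos-release-scl
$ sudo yum install sclo-vagrant1-vagrant
$ scl enable sclo-vagrant1 -- vagrant --version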

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum ce12f84646efab28b007bdf16f3134686a23fa052f809c4600919561274051da --provider libvirt --box-version 1610.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

Some of the optimisations in this release were inspired by the Vagrant images from Fedora Cloud and Debian Cloud.

We would also like to thank the following people (in alphabetical order):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

November 10, 2016

Introducing CentOS Container Image Scanners

November 10, 2016 06:52 PM

Over the past few months, we’ve been working on the CentOS Community Container Pipeline, which aims to help developers focus on what they love doing most – write awesome code – and to give sysadmins an insight into the image by providing metadata about it! The project code has been hosted on GitHub.com since its inception. The hosted service that runs off this code is available to the community at large, and delivers content to registry.centos.org.

What is CentOS Community Container Pipeline?

CentOS Community Container Pipeline enables developers and sysadmins to have container images built, tested and scanned on the CentOS Project’s infrastructure right after a developer pushes code to the git repository!

(Diagram: Container Pipeline Flow)

Once the developer pushes code to the git repo, Container Pipeline fetches the changes, and container images are built using OpenShift, which provides an enterprise distribution of the Kubernetes project. Once an image is built, it gets scanned using atomic scanners (more on this soon!). The result of these scanners is combined into a mail and sent to the author of the container image. Container images can also be tested using user-provided test scripts, to ensure that containers can be spun up from the image on platforms like CentOS Linux, CentOS Atomic Host and OpenShift.

Why scan images?

Building container images and spinning up containers is rather simple. Having more information, a.k.a. metadata, about a container image before running it in one’s production environment is of paramount value! Of course, the kind of information is what makes it of paramount or negligible value. That’s what we aim to provide with CentOS Community Container Pipeline.

Scanners in CentOS Community Container Pipeline

At this point we have two scanners operational: one that checks your CentOS Linux based container images for package updates, and another that verifies the installed RPM content. Both scanners are based on the atomic tool developed by the Project Atomic folks. We are working on rolling out more scanners in the near future!

Atomic Scanner

The scanners based on atomic are run automatically by the Pipeline after successful completion of the image building process. These scanners can be run stand-alone as well! That is, you can install a scanner on your CentOS Linux based system and run it against a container image built on a CentOS Linux base image. And it does this without bringing up or executing the container itself.

In the pipeline, upon completion of the scan process, the user is notified about issues with the image that need to be addressed. Addressing these issues instills more confidence in deploying the resulting container image to a production environment.

Besides scanning an image right after it is built, in the near future scanners will also run periodically and provide the developer with actionable information.

yum update scanner

This scanner provides the user with information about RPM packages that need to be updated in the container image. If you’re a developer, this information helps ensure you’re running the latest packages with bug and security fixes, to avoid surprises in production.

Example output:

$ atomic scan --scanner pipeline-scanner --rootfs /mnt registry.centos.org/centos/centos
...

Files associated with this scan are in /var/lib/atomic/pipeline-scanner/2016-11-10-10-30-46-609885.

The scanner ran successfully and has stored the scan data under the /var directory. Let’s see the output:

$ cat /var/lib/atomic/pipeline-scanner/2016-11-10-10-30-46-609885/_mnt/image_scan_results.json
{
    "Scanner": "pipeline-scanner", 
    "Successful": "true", 
    "Start Time": "2016-11-10-10-42-46-265018", 
    "Scan Results": {
        "Package Updates": [
            "bind-license.noarch", 
            "kmod.x86_64", 
            "kmod-libs.x86_64", 
            "kpartx.x86_64", 
            "openssl-libs.x86_64", 
            "python.x86_64", 
            "python-libs.x86_64", 
            "systemd.x86_64", 
            "systemd-libs.x86_64", 
            "tzdata.noarch"
        ], 
        "OS Release": "CentOS Linux 7 (Core)"
    }, 
    "Scan Type": "Image Scan", 
    "CVE Feed Last Updated": "NA", 
    "Finished Time": "2016-11-10-10-42-52-184442", 
    "UUID": "mnt"
}

The Package Updates key in above output lists packages that need to be updated in the scanned container image.

RPM verify scanner

As its name suggests RPM verify scanner verifies all installed files (libraries and binaries) via RPM packages in given container image. It reports any modified or tampered libraries and binaries in given container image. This is useful to ensure that given container image is not shipped with any tainted libraries or binaries.

Example output:

$ atomic scan --scanner rpm-verify docker.io/centos/postgresql
{
    "Scanner": "scanner-rpm-verify",
    "Successful": "true",
    "Start Time": "2016-11-10-19-49-06-740445",
    "Scan Results": {
        "rpmVa_issues": [
            {
                "config": false,
                "issue": "missing",
                "rpm": {Once the developer pushes code to git repo, Container Pipeline fetches the changes and container images are built using OpenShift which provides an enterprise version of Kubernetes project. Once the image is built, it gets scanned using atomic scanners (more on this soon!). Container images can also be tested using the user provided test scripts to ensure that container can be spinned off the image on platforms like CentOS Linux, CentOS Atomic Host and OpenShift.
                    "VENDOR": "CentOS",
                    "PACKAGER": "CentOS BuildSystem ",
                    "BUILDHOST": "worker1.bsys.centos.org",
                    "RPM": "glibc-2.17-55.el7_0.1.x86_64",
                    "SIGNATURE": "RSA/SHA256, Sat Aug 30 02:20:20 2014, Key ID 24c6a8a7f4a80eb5"
                },
                "filename": "/sbin/sln"
            },
            {
                "config": false,
                "issue": "........P",
                "rpm": {
                    "VENDOR": "CentOS",
                    "PACKAGER": "CentOS BuildSystem ",
                    "BUILDHOST": "worker1.bsys.centos.org",
                    "RPM": "iputils-20121221-6.el7.x86_64",
                    "SIGNATURE": "RSA/SHA256, Fri Jul  4 07:38:44 2014, Key ID 24c6a8a7f4a80eb5"
                },
                "filename": "/usr/sbin/clockdiff"
            }
        ]
    },
    "Scan Type": "RPM Verify scan for finding tampered files.",
    "CVE Feed Last Updated": "NA",
    "Finished Time": "2016-11-10-19-49-10-933952",
    "UUID": "da4ffaac638fada8723c6721721d99b0dfaba67d79c8507e881ee8327e17ecb"
}

Adding your container to the pipeline

It’s simple! Add an entry for your open source project under the index.d directory of the CentOS Container Index. You can already see a few files representing projects or individual developers under this directory. You also need a cccp.yml file in your project, containing information for the Container Pipeline to use. Refer to the respective GitHub repos for more information, or get in touch with us on the #centos-devel IRC channel on the FreeNode network.

Dharmit Shah and Navid Shaikh

November 07, 2016

Welcoming new members to the CentOS Container team

November 07, 2016 02:08 PM

Join me in warmly welcoming Dharmit Shah, Bama Charan Kundu and Navid Shaikh to the CentOS Container team.

They are primarily focused on delivering and curating the CentOS Container Pipeline (https://github.com/centos/container-pipeline-service ). In the coming weeks, keep an eye out for announcements from them in the CentOS Blog at https://seven.centos.org .

– KB

November 05, 2016

Vim 8 for CentOS Linux 7

November 05, 2016 03:40 AM

Matěj Cepl is curating a set of Vim 8 rpms for EL7 over at
https://copr.fedorainfracloud.org/coprs/mcepl/vim8/ – consider them testing grade, and I am sure he would appreciate feedback and issue reports.

Now go get the shiny new Vim 8.

$ rpm -q vim-enhanced
vim-enhanced-8.0.0054-1.0.8.el7.centos.x86_64

Enjoy! And don't forget to drop by and say thanks to Matěj over at https://matej.ceplovi.cz/blog/

October 27, 2016

Security contact for the CentOS Project

October 27, 2016 07:52 PM

If you find any security issue in a CentOS.org website or service, please let us know; the same goes for any issue in CentOS Linux as well as the SIG content on centos.org. The best way to get in touch is to email security@centos.org. If the content is sensitive, please use the corresponding gpg key to encrypt it: e.g. for a CentOS Linux 7 specific issue, please encrypt the content with the CentOS Linux 7 key; similarly, for any content specific to the Virt SIG, please use the CentOS SIG Virt key.

How can you verify the keys? The fingerprints are published over https at https://www.centos.org/keys/.

DNS data for the www.centos.org website is :
www.centos.org has address 85.12.30.226
www.centos.org has IPv6 address 2a01:788:a002:0:225:90ff:fe33:f34c

– KB

October 26, 2016

Adding a timeout for your CI jobs at ci.centos.org

October 26, 2016 09:10 PM

The typical workflow for most ci.centos.org ( cico ) jobs is :

* Call Duffy's API endpoint with node/get and grab some machines
* Setup the machines environment for the ci job to come
* Push content to nodes
* Run the tests
* Clear out / tear down
* Call Duffy's API end point with node/done to return the machines
* Report status via Jenkins

Machines handed out in this manner to the CI jobs are available for up to 6 hours at a time, at which point they are reaped back into the available pool for other jobs to consume. This also means that if the job gets stuck for any reason, it could be up to six hours before the developer/user gets any feedback about the tests failing.

The usual way to resolve this situation is to set up a timeout in the Jenkins job. That would allow Jenkins to watch the run and, on timeout, kill the job and report failure. However, if your job is set up with a single build step that also requests the machines and returns them when done, Jenkins killing the job means your machines won't get returned for up to 6 hours. Given that most projects are set up with a quota of 10 deployed machines, not returning them when done would mean your jobs get put into a queue that isn't clearing out in a rush.

One way to work around this would be to split the machine request and machine return functions into a pre-build and a post-build step, and then pass the session-id for the deployed machines between the build steps. That way, you could trap and report specific conditions. A variation on this would be to set up conditional build steps, and have them execute different functions as needed.

An easier and simpler workaround, however, is to just wrap the test commands in a /usr/bin/timeout call. timeout is delivered as a binary from the coreutils package on CentOS Linux 7 and is available on all machines, including the Jenkins worker instances. Take a look at https://github.com/almighty/almighty-jobs/blob/master/devtools-ci-index.yaml#L64 for a quick example of how this would work in a JJB template. This way we can time out on the job and yet still be able to return nodes, or handle any other content we need, in the same CI job script; a script that then does not have or need any Jenkins specific content, making it possible to run it from developer laptops, or as a child job on its own.

/usr/bin/timeout ( man 1 timeout ) also allows you to preserve the sub-command's exit status, if you need to track and report different statuses from your CI jobs. And of course, there are many other uses for /usr/bin/timeout as well!
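
As a concrete illustration, a minimal sketch of such a job script (the Duffy URL, key handling, file names and the test script are placeholders, not actual ci.centos.org values):

#!/bin/bash
# session-id saved earlier, when the nodes were requested via node/get
SSID=$(cat cico.ssid)

# run the tests, but never for more than 4 hours; --preserve-status keeps
# the exit code of run-tests.sh so we can still report it to Jenkins
/usr/bin/timeout --preserve-status 4h ./run-tests.sh "${SSID}"
rc=$?

# whatever happened above, give the machines back via Duffy's node/done
curl -s "${DUFFY_API}/Node/done?key=${DUFFY_KEY}&ssid=${SSID}"

exit ${rc}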

– KB

October 20, 2016

(ab)using Alias for Zabbix

October 20, 2016 10:00 PM

It's not a secret that we use Zabbix to monitor the CentOS.org infra. That's even a reason why we (re)build it for some other architectures, including aarch64, ppc64, ppc64le on CBS, and also armhfp.

There are really cool things in Zabbix, including Low-Level Discovery. With such discovery, you can create items/prototypes/triggers that will be applied "automagically" for each discovered network interface or mounted filesystem. For example, the default template (if you still use it) has such item prototypes, and also a graph for each discovered network interface, showing you the bandwidth usage on those interfaces.

But what happens if you suddenly want to, for example, create a calculated item on top of those? Well, the issue is that from one node to the other the interface name can be eth0, or sometimes eth1, and with CentOS 7 things started to move to the new naming scheme, so you can have something like enp4s0f0. I wanted to create a template that would fit them all, so I had a look at calculated items and thought "well, easy: let's have that calculated item use a user macro that defines the name of the interface we really want to gather stats from" ... but it seems I was wrong. Zabbix user macros can be used in multiple places, but not everywhere. (It seems that I wasn't the only one not understanding the doc coverage for this, but at least that bug report will have an effect on the doc to clarify it.)

It was while discussing this in #zabbix (on irc.freenode.net) that RichLV pointed me to something that could be interesting for my case: Alias. I must admit it was the first time I had heard of it, and I don't even know when it landed in Zabbix (or if I just overlooked it at first sight).

So cool: now I can just have our config mgmt push, for example, a /etc/zabbix/zabbix_agentd.d/interface-alias.conf file that looks like this, and reload zabbix-agent:

Alias=net.if.default.out:net.if.out[enp4s0f0]
Alias=net.if.default.in:net.if.in[enp4s0f0]

That means that now, whatever the interface name is (as puppet, in our case, will create that file for us), we'll be able to get values from the net.if.default.out and net.if.default.in keys, automatically. Cool!
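
A quick way to verify that the alias resolves, from the Zabbix server or a proxy (the hostname is just an example):

# should return the same value as net.if.out[<real interface>] on that node
zabbix_get -s node1.example.org -k net.if.default.out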

That also means that if you want to aggregate all this into a single key for a group of nodes (and so graph that too), you can do something always referencing those new keys (example for the total outgoing bandwidth of a group of hosts):

grpsum["Your group name","net.if.default.out",last,0]

And from that point you can also easily configure triggers and graphs. Now going back to work on some other calculated items for total bandwidth usage over a period of time, and triggers based on some max_bw_usage user macro.

October 01, 2016

CentOS-7 1609 Rolling ISOs Now Live

October 01, 2016 07:01 AM

Rolling ISOs

The CentOS Linux team produces rolling CentOS-7 isos, normally on a monthly basis.

The most recently completed version of those ISOs is 1609 (16 is for 2016, 09 is for September).

The team usually creates all our ISO and cloud images based on all updates through the 28th of the month in question, so 1609 means these ISOs contain all updates for CentOS-7 through September 28th, 2016.

These rolling ISOs have the same installer as the most recent CentOS-7 point release (currently 7.2.1511) so that they install on the same hardware as our original ISOs, while the packages installed are the latest updates.

This means that the actual kernel that boots up on the ISO is the 7.2.1511 default kernel (kernel-3.10.0-327.el7.x86_64.rpm), but that the kernel installed is the latest kernel package (kernel-3.10.0-327.36.1.el7.x86_64.rpm for the 1609 ISOs).

These normal Rolling ISOs can be downloaded from this LINK and here are the sha256sums:
CentOS-7-x86_64-DVD-1609-01.iso:
3948f7a31a8693b2df90dc31252551dcd5aa9d13f4910ad9ce28fcddc247d84f

CentOS-7-x86_64-Everything-1609-01.iso:
602383c2aa93f6d7df46bd47658dcbf9b9d567108dec56ba60ce46a2f51c6eb2

CentOS-7-x86_64-LiveGNOME-1609-01.iso:
f6ee8af6814bc58e2c8424db862a443649f3a57b5f85caf63704ab52d5bbac68

CentOS-7-x86_64-LiveKDE-1609-01.iso:
1349c70e815d46c49d6ea459de6fbc074f5131c803343db18d32987ee78fd303

CentOS-7-x86_64-Minimal-1609-01.iso:
54721e5e444a3191b18b0fabe1c35e12b65f93aa31732beb7899212d19cab69b
You can verify the sha256sum of your downloaded ISO following these instructions prior to install.
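
That verification boils down to comparing checksums; for example, for the Minimal ISO:

# compute the checksum of the downloaded ISO ...
sha256sum CentOS-7-x86_64-Minimal-1609-01.iso
# ... and compare it with the value published above:
# 54721e5e444a3191b18b0fabe1c35e12b65f93aa31732beb7899212d19cab69b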

The DVD ISO contains everything needed to do an install, but still fits on one 4.3 GB DVD. This is the most versatile install that fits on a single DVD, and if you are new to CentOS this is likely the installer you want. If you pick Minimum Install in this installer, you can do an install that is identical to the Minimal ISO. You can also do many different Workstation and Server installs from this ISO, including both GNOME and KDE.

The Everything ISO has all packages, even those not used by the installer. You usually do not need this ISO unless you have no access to the internet and want to later install things from this DVD that are not included by the graphical installer. Most users will not need this ISO; it is > 7 GB, but installs can be done from a USB key big enough to hold it (currently an 8 GB key).

The LiveGNOME ISO is a basic GNOME Workstation install, but no modification or personalization is allowed during the install. It is a much easier install to do, but any extra packages must be installed from the internet later.

The LiveKDE ISO is a basic KDE Workstation install. It also does not allow modification or personalization until after the install has finished.

The Minimal ISO is a very small and quick install that boots to the command console and has network connectivity and a firewall.  It is used by System Administrators for the minimal install that they can then add functionality to.  You need to know what you are doing to use this ISO.

Newer Hardware Support

As explained above, the normal rolling ISOs boot from the point release installer. Sometimes there is newer hardware that might not be supported by the point release installer, but could be supported with a newer kernel. This installer is much less tested and is only recommended if you cannot get one of the normal installers to work for you.

There are only 2 ISOs in this family; here are the links and sha256sums:
CentOS-7-x86_64-DVD-1609-99.iso:
90c7148ddccbb278d45d06805dee6599ec1acc585cafd02d56c6b8e32a238fa9 

CentOS-7-x86_64-Minimal-1609-99.iso:
1cfbbc73cc7a0eb17d7fe2fa5b1adf07492e340540603e8e1fd28b52e95f02e3

You can verify the ISO's sha256 sum using this LINK, and the descriptions above are the same for these two ISOs.


September 21, 2016

CentOS Infra public service dashboard

September 21, 2016 10:00 PM

As soon as you're running some IT services, there is one thing that you already know : you'll have downtimes, despite all your efforts to avoid those...

As the old joke says : "What's up ?" asked the Boss. "Hopefully everything !" answered the SysAdmin guy ....

You probably know that the CentOS infra is itself widespread, and subject to quick moves too. Recently we had to announce an important DC relocation that impacts some of our crucial, publicly facing services. That one falls into the "scheduled and known outages" category, and can be prepared for. We always announce such downtime through several channels, like sending a mail to the centos-announce and centos-devel (and in this case also the ci-users) mailing lists. But even when we announce it in advance, some people forget about it, or people using (sometimes "indirectly") the concerned service are surprised and then ask about it (usually in #centos or #centos-devel on irc.freenode.net).

In parallel to those "scheduled outages", we also have the worst kind: the unscheduled ones. For those, depending on the impact/criticality of the affected service, and also the estimated RTO, we may (or may not) send a mail to the concerned mailing lists.

So we decided to publish a very simple public dashboard for the CentOS Infra, covering only the publicly facing services, to give a quick overview of that part of the infra. It's now live and hosted at https://status.centos.org.

We use Zabbix to monitor our infra (which is why we build it for multiple arches, like x86_64, i386, ppc64, ppc64le, aarch64 and also armhfp), including through remote Zabbix proxies (because of our "distributed" network setup right now, with machines all around the world). For some of the services listed on status.centos.org we can "manually" announce a downtime/maintenance period, but Zabbix also updates that dashboard on its own. The simple way to link those together was to use Zabbix custom alertscripts; you can even customize those to send specific macros, and have the alertscript just parse them and then update the dashboard.
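
To give an idea, a custom alertscript is just an executable that Zabbix calls with three arguments (sendto, subject, message); here is a minimal sketch, where the message format and the dashboard endpoint are purely illustrative, not the actual status.centos.org implementation:

#!/bin/bash
# Zabbix invokes alertscripts as: <script> <sendto> <subject> <message>
SENDTO="$1"
SUBJECT="$2"
MESSAGE="$3"

# extract the macros we embedded in the action's message body (illustrative format)
SERVICE=$(awk '/^service:/ {print $2}' <<< "${MESSAGE}")
STATUS=$(awk '/^status:/ {print $2}' <<< "${MESSAGE}")

# push the update to the (placeholder) dashboard API
curl -s -X POST "https://status.example.org/api/update" \
     -d "service=${SERVICE}" -d "status=${STATUS}"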

We hope to enhance that dashboard in the future, but it's a good start, and I have to thank again Patrick Uiterwijk who wrote that tool for Fedora initially (and that we adapted to our needs).

August 15, 2016

CentOS at cPanel 2016

August 15, 2016 10:23 AM

The CentOS team will have a booth at the cPanel 2016 WEIRED Conference in Portland, Oregon, at the Hilton Portland & Executive Tower, October 3rd through 5th, 2016.

I (Johnny Hughes) will be there to discuss all things CentOS and we may have some guests at the booth from some of our Special Interest Groups and others from the CentOS Community.

If you are planning to be at the conference, please stop by and see us.

June 21, 2016

CentOS at 2016 Texas Linux Fest

June 21, 2016 06:25 PM

We will have a CentOS Booth at the 2016 Texas Linux Fest on July 8th and 9th in the Austin Texas Convention Center.

Please stop by the CentOS booth for some Swag and discussion.

We will also have several operational CentOS-7 Arm32 devices at the booth, including a Raspberry Pi 2, Raspberry Pi 3, CubieTruck (Cubieboard3) and CubieTruck Plus (Cubieboard5).  These devices showcase our AltArch Special Interest Group, which produces the ppc64, ppc64le, armhfp (Arm32), aarch64 (Arm64) and i686 (x86 32-bit) architectures of CentOS-7.

We will also be glad to discuss the new things happening within the project, including a number of operational Special Interest Groups (SIGs) that are producing add-on software for CentOS, including the Xen Hypervisor, OpenStack (via RDO), Storage (GlusterFS and Ceph), Software Collections, Cloud Images (AWS, Azure, Oracle, Vagrant Boxes, KVM), and Containers (Docker and Project Atomic).

So, if you have been using CentOS for the past 12 years, everything is happening just like it always has (a long-lived standard Linux distro with LTS), alongside all the new hypervisor, container and cloud capabilities.

May 02, 2016

Generating multiple certificates with Letsencrypt from a single instance

May 02, 2016 10:00 PM

Recently I was discussing TLS everywhere with some people, and we then started to discuss the Letsencrypt initiative. I had to admit that I had just tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle: while the most common use case is installing/running the letsencrypt client on your node so it directly configures it, that is something I didn't want to have to deal with. I still think that proper web server configuration has to happen through cfgmgmt, and not through another process (and the same goes for key/cert distribution, something for a different blog post maybe).

So if you're automatically (pushing|pulling) your web servers' configuration from $cfgmgmt, but want to use/deploy TLS certificates signed by letsencrypt, what can you do? Well, the good news is that you aren't forced to let the letsencrypt client touch your configuration at all: you can use the "certonly" option to just generate the private key locally, send the CSR and get the signed cert back (and the whole chain too). One thing to know about letsencrypt is that the validation/verification process isn't the one you see with most companies providing CA/signing capabilities: as there is no ID/paper verification (or anything else), the only validation for the domain/sub-domain you want a certificate for happens over an http request (basically creating a file with a challenge, then processing a request from their "ACME" server[s] to retrieve that file and validate its content).

So what are our options then? The letsencrypt documentation mentions several plugins, like manual (which requires you to copy the challenge file to the webserver yourself and then launch the validation process), standalone (which doesn't work if you already have an httpd/nginx process, as there would be a port conflict), or webroot (which works fine, as it just writes the file itself under /.well-known/ under the DocumentRoot).

The webroot plugin seems easy but, as said, we don't even want to install letsencrypt on the web server[s]. Even worse, suppose (and that's the case I had in mind) that you have multiple web nodes configured in a kind of CDN way: with the "manual" plugin you don't want to distribute that file to all the nodes for validation/verification, yet you'd have to do it on all of them (as you don't know in advance which one will be hit by the ACME server).

So what about something centralized (where you'd run the letsencrypt client locally) for all your certs (including some with SANs), in a transparent way? I thought about something like this:

Single Letsencrypt node

The idea would be to :

  • use a central node, let's call it central.domain.com (vm, docker container, make-your-choice-here), to run the letsencrypt client
  • have the ACME server transparently hit one of the web servers, without any changed/uploaded file
  • have the web server receiving the GET request for that file use the letsencrypt central node as a backend
  • the ACME server is happy, and so the signed certificates become available automatically on the centralized letsencrypt node.

The good news is that it's possible and even really easy to implement, through ProxyPass (for httpd/Apache web server) or proxy_pass (for nginx based setup)

For example, for the httpd vhost config for sub1.domain.com (three nodes in our example) we can just add this in the .conf file :

<Location "/.well-known/">
    ProxyPass "http://central.domain.com/.well-known/"
</Location>

So now, once that is in place everywhere, you can generate the cert for that domain on the central letsencrypt node (assuming that httpd is running on that node, reachable from the "frontend" nodes, and that /var/www/html is indeed the DocumentRoot (default) for httpd on that node):

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub1.domain.com

The same goes if you run nginx instead (let's assume this for sub2.domain.com and sub3.domain.com): you just have to add a snippet to your vhost .conf file (before the location / definition too):

location /.well-known/ {
        proxy_pass      http://central.domain.com/.well-known/ ; 
    }

And then on the central node do the same thing, but you can add multiple -d options for multiple SubjectAltNames in the same cert:

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub2.domain.com -d sub3.domain.com

Transparent, smart, and easy to do; you can even deploy this only when you need to renew, and then remove it to go back to the initial config files (if you don't want those ProxyPass directives active all the time).

One more thing to know: once you have proper TLS in place, it's usually better to transparently redirect all requests hitting your http server to the https version. Most people will do that (next example for httpd/apache) like this:

   RewriteEngine On
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

That's good, but when you renew the certificate you'll probably want to be sure that GET requests for /.well-known/* keep working over http (from the ACME server), so we can tune those rules a little (RewriteCond directives are cumulative, so the request will not be redirected if the URL starts with /.well-known):

   RewriteEngine On
   RewriteCond $1 !^.well-known
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Different syntax, but the same principle for nginx (also a snippet, not the full configuration file for that server/vhost):

location /.well-known/ {
        proxy_pass      http://central.domain.com/.well-known/ ;
}
location / {
        rewrite        ^ https://$server_name$request_uri? permanent;
}

I hope you'll have found this useful, especially if you don't want to deploy letsencrypt everywhere but still want to use it to generate your keys/certs locally. Once done, you can then distribute/push/pull those files (depending on your cfgmgmt). And don't forget to also implement proper monitoring for cert validity, and automation around that too (consider that your homework).
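
As a starting point for that homework, a minimal validity check (the path follows letsencrypt's default layout, and the 21-day threshold is arbitrary):

# exit non-zero (and complain) if the cert expires within the next 21 days
openssl x509 -checkend $(( 21 * 86400 )) -noout \
    -in /etc/letsencrypt/live/sub1.domain.com/cert.pem \
  || echo "certificate for sub1.domain.com expires soon, renew it"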

April 28, 2016

IPv6 connectivity status within the CentOS.org infra

April 28, 2016 10:00 PM

Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org and also msync.centos.org.

The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over (legacy) ipv4 and ipv6. Funny that I call ipv4 "legacy", as we have to admit it's still the default everywhere, even in 2016 with the available pools now exhausted.

While we already had some AAAA records for some of our public nodes (like www.centos.org, as an example), I started to "chase" proper native ipv6 connectivity for our nodes. That's where I had to get in touch with all our valuable sponsors. First thing to say is that we'd like to thank them all for their support of the CentOS Project over the years: it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship!

WRT ipv6 connectivity, that's where the results of my quest really differed: while some DCs support ipv6 natively, and even answer you within 5 minutes when asking for a /64 subnet to be allocated, some others aren't ipv6 ready yet. In the worst case the answer was "nothing ready and no plan for that"; sometimes the received answer was something like "it's on the roadmap for 2018/2019".

The good news is that ~30% of our nodes behind msync.centos.org now have ipv6 connectivity, so the next step is to test our various configurations (distributed by puppet) and then our GeoIP redirection (done at the PowerDNS level for such records, for which we'll then also add proper AAAA records).
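
Testing that part is straightforward from any dual-stack client, for example:

# does the name have an AAAA record ?
dig +short AAAA www.centos.org
# and is the service actually reachable over ipv6 ?
curl -6 -s -I https://www.centos.org/ | head -n 1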

Hopefully we'll have that tested and announced soon, also for the other public services that we provide to you.

Stay tuned for more info about ipv6 deployment within centos.org !

January 26, 2016

EPEL round table at FOSDEM 2016

January 26, 2016 06:57 PM

As a follow-up to last year’s literally-a-discussion-in-the-hallway about EPEL with a few dozen folks at FOSDEM 2015, we’re doing a round table discussion with some of the same people and similar topics this Sunday at FOSDEM, “Wither EPEL? Harvesting the next generation of software for the enterprise” in the distro devroom. As a treat, Stephen Smoogen will be moderating the panel; Smooge is not only a long-time Fedora and CentOS contributor, he is one of us who started EPEL a decade ago.

If you are an EPEL user (for whatever operating system), a packager, an upstream project member who wants to see your software in EPEL, a hardware enthusiast wanting to see builds for your favorite architecture, etc. … you are welcome to join us. We’ll have plenty of time for questions and issues from the audience.

The trick is that EPEL is useful or crucial for a number of the projects now releasing on top of CentOS via the special interest group process (SIGs provide their community newer software on the slow-and-steady CentOS Linux.) This means EPEL is essential for work happening inside of the CentOS Project, but it remains a third-party repository. Figuring out all of the details of working together across the Fedora and CentOS projects is important for both communities.

Hope to see you there!

December 14, 2015

Kernel 3.10.0-327 issue on AMD Neo processor

December 14, 2015 11:00 PM

As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including the kids' workstations) to that version, and also to the newer kernel. Usually that's a smooth operation, but sometimes backported features/new features, especially in the kernel, can lead to strange issues. That's what happened with my older ThinkPad Edge: it's a cheap/small ThinkPad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has an AMD Athlon(tm) II Neo K345 Dual-Core Processor. So basically not a lot of horsepower, but still something convenient just to read your mail, remotely connect through ssh, or browse the web. When rebooting on the newer kernel, it panics right away.

Two bug reports are open for this, one on the CentOS bug tracker, linked also to the upstream one. The current status is that there is no kernel update that will fix this, but there is an easy-to-implement workaround (sketched below):

  • boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot on previous kernel)
  • once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=" .." line , in the file /etc/default/grub
  • as root, run grub2-mkconfig -o /etc/grub2.cfg
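
For reference, a minimal sketch of the last two steps (run as root; the sed expression assumes the default single-line GRUB_CMDLINE_LINUX entry):

# append the parameter to the default kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 initcall_blacklist=clocksource_done_booting"/' /etc/default/grub
# regenerate the grub2 configuration
grub2-mkconfig -o /etc/grub2.cfg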

Hope it can help others too

November 30, 2015

Kernel IO wait and megaraid controller

November 30, 2015 11:00 PM

Last Friday, while working on something else (the "CentOS 7 userland" release for Armv7hl boards), I got notifications from our Zabbix monitoring instance complaining about web scenarios failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (checking the cpu iowait time). Usually you'd verify what happens in the Virtual Machine itself, but even connecting to the VM was difficult and slow. Once connected though, nothing looked strange and there was no real activity, not even on the disk (there are plenty of tools for this, but iotop is helpful to see which process is reading/writing to the disk in such a case), yet iowait was almost at 100%.

As said, it was happening suddenly for all Virtual Machines on the same hypervisor (a CentOS 6 x86_64 KVM host), and even the hypervisor itself was complaining (though less, in comparison with the VMs) about iowait. So obviously it wasn't something mis-tuned in the hypervisor/VMs, but something else. That rang a bell: if you have a raid controller and its battery, for example, has to be replaced, the controller can decide to stop all read/write caching, slowing down all IOs going to the disk.

At first sight there was no HDD issue, and the array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into the analysis.

That server has the following raid adapter :

03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 03)

That means that you need to use the MegaCLI tool for that.

A quick MegaCli64 -ShowSummary -a0 showed me that the underlying disks were indeed active, but my attention was caught by the fact that there was a "Patrol Read" operation in progress on a disk. I then discovered a useful page (bookmarked, as it's a gold mine) explaining the issue with the default settings of the "Patrol Read" operation. While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimal: (from that website) it "will take up to 30% of IO resources".

I decided to stop the currently running patrol read process with MegaCli64 -AdpPR -Stop -aALL, and I directly saw the Virtual Machines' (and hypervisor's) iowait going back to normal. Here is the Zabbix graph for one of the impacted VMs; it's easy to guess when I stopped the underlying "Patrol Read" process:

VM iowait

That "patrol read" operation is scheduled to run by default once a week (168h) so your real option is to either disable it completely (through MegaCli64 -AdpPR -Dsbl -aALL) or at least (adviced) change the IO impact (for example 5% : MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL)

Never underestimate the power of hardware settings (in the BIOS, or in this case the raid hardware controller).

Hope it can help others too

September 23, 2015

CentOS AltArch SIG status

September 23, 2015 10:00 PM

Recently I had (from the infra side) to start deploying KVM guests for the ppc64 and ppc64le arches, so that AltArch SIG contributors could start bootstrapping a CentOS 7 rebuild for those arches. I'll probably write a tech review about Power8, and the fact that you can just use libvirt/virt-install to quickly provision new VMs on PowerKVM, but I'll do that in a separate post.

In parallel to ppc64/ppc64le, armv7hl interests some community members, and activity around that arch is discussed on the dedicated mailing list. It's slowly coming along, and some users have already reported using it on some boards (but packages are still unsigned, and there are no updates packages -yet-).

Last (but not least) in this AltArch list is i686: Johnny built all the packages, and they are already publicly available on buildlogs.centos.org, each time in parallel to the x86_64 version. It seems that respinning the ISO for that arch, and some last tests, are the only things left to do.

If you're interested in participating in AltArch (and have a special interest in a specific arch/platform), feel free to discuss it on the centos-devel list!

