March 13, 2017

Updated CentOS Vagrant Images Available (v1702.01)

March 13, 2017 10:23 AM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated as of 28 February 2017.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems (see the example after this list).

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders (see the example after this list), or disable the synced directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610.
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking; see Vagrant bug #8166.
  6. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
  7. The metadata of the images for VMware is set to vmware_fusion. Please specify vmware_fusion as the provider when downloading the images, even if you’re using VMware Workstation.
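
As mentioned in items 1 and 2, both the vagrant-sshfs plugin and SMB give you shared folders without the Guest Additions; each needs just one line in the Vagrantfile (a minimal sketch; type "sshfs" requires the vagrant-sshfs plugin, and type "smb" requires a Windows host):

    config.vm.synced_folder ".", "/vagrant", type: "sshfs"
    config.vm.synced_folder ".", "/vagrant", type: "smb"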

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.0 on OS X 10.11.6, with VirtualBox 5.0.30.

Downloads

The official images can be downloaded from HashiCorp's Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc
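
If gpg complains that the public key is not found, import the CentOS 7 Official Signing Key first (the key URL below assumes the file layout of https://www.centos.org/keys/):

$ curl https://www.centos.org/keys/RPM-GPG-KEY-CentOS-7 | gpg --import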

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 48745c0f2dd4fbee366d830e3e333b637528ad936dd66ed5911df2adc02f46d7 --provider libvirt --box-version 1702.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

March 08, 2017

[infra] scheduled major outage for several services

March 08, 2017 12:36 PM

As announced and confirmed on the centos-devel list, next week we'll have a major outage impacting several services that are hosted in the same DC: due to some reorganization at the DC/cage level, we'll have to shutdown/move/reconfigure a big part of our hosted infra for the following services:

  • https://cbs.centos.org (Koji)
  • https://accounts.centos.org (auth backend, and also https://id.centos.org, our idp in front of ACO)
  • https://ci.centos.org (jenkins-driven CI environment)
  • https://registry.centos.org (that one will be temporarily migrated to a read-only registry, so that people already pointing to that node will still be able to pull images)

We're working on a plan to minimize the downtime/reconfiguration part, but at first sight, due to the hardware move of the racks, recabling, etc., the downtime will probably be around 48 hours.

What does that mean? During this maintenance window, nobody will be able to build/test packages, nor automatically trigger CI jobs (important). This hardware migration is scheduled for March 14th, starting at 13:00 UTC.

We'll obviously try to restore those services as soon as possible, to minimize the impact on people building packages for SIGs.

If you have questions, feel free to discuss this in the #centos-devel channel on irc.freenode.net, or on the centos-devel mailing list.

February 15, 2017

New CentOS Atomic Host with Updated Docker, Kubernetes and Etcd

February 15, 2017 03:08 PM

An updated version of CentOS Atomic Host (tree version 7.20170209) is now available, including significant updates to docker (version 1.12.5), kubernetes (version 1.4) and etcd (version 3.0.15).

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.14.1-5.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.5-14.el7.centos.x86_64
  • etcd-3.0.15-1.el7.x86_64
  • flannel-0.5.5-2.el7.x86_64
  • kernel-3.10.0-514.6.1.el7.x86_64
  • kubernetes-node-1.4.0-0.1.git87d9d8d.el7.x86_64
  • ostree-2016.15-1.atomic.el7.x86_64
  • rpm-ostree-client-2016.13-1.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade
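
Because the host is ostree-based, the previous deployment is kept around after an upgrade; if the new tree misbehaves, you can switch back with a single command (no reinstall needed):

$ sudo atomic host rollback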

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for the libvirt and VirtualBox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
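
Creating such a NoCloud seed is a one-liner: it is just a small ISO, with the volume label cidata, containing user-data and meta-data files that you write yourself (a minimal sketch; genisoimage ships in CentOS 7):

$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data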

Amazon Machine Images

Region          Image ID
us-east-1       ami-10f53a06
us-west-2       ami-4d9b1c2d
us-west-1       ami-4ae1bd2a
eu-west-1       ami-1daa8c7b
eu-central-1    ami-e8c20987
ap-southeast-1  ami-a8388fcb
ap-northeast-1  ami-ba2b67dd
ap-southeast-2  ami-1f84857c
ap-northeast-2  ami-adbd6dc3
sa-east-1       ami-1f492e73
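
For example, launching the us-east-1 image with the AWS CLI might look like the following (the instance type and key name are placeholders to adapt to your account):

$ aws ec2 run-instances --region us-east-1 --image-id ami-10f53a06 --instance-type t2.small --key-name mykey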

SHA Sums

6f8b91373c763cf96ffead6ca044ddf6eea5c0b102a239933c112a7f1089396e  CentOS-Atomic-Host-7.1701-GenericCloud.qcow2
380dcbdd4514f87f8915fee418cc965985c89a91b9182af622e36ffad26c9e04  CentOS-Atomic-Host-7.1701-GenericCloud.qcow2.gz
0bf3d5ec95d40cee94bc80e7c19206b3a260d2835fa43f1e99965bb8f99a777d  CentOS-Atomic-Host-7.1701-GenericCloud.qcow2.xz
bc55326e54832e3e08530e41cb738c4b293a7645c960a4c9be7f66024770a68c  CentOS-Atomic-Host-7.1701-Installer.iso
aaba6ca5e3b0a64abff843bff28eb82092e39fe82f120c76614822334ff22462  CentOS-Atomic-Host-7.1701-Vagrant-Libvirt.box
8d3c64895a40638cb8659186a0caabef9fc10ba944a130eda53f7d2109cfba35  CentOS-Atomic-Host-7.1701-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, or improve documentation, join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

January 31, 2017

Weekly scanning of container images with CentOS Container Pipeline

January 31, 2017 01:10 PM

As a part of the CentOS Container Pipeline project, we've been continually discussing, debating and working towards features that developers and sysadmins out there would like to have from a build pipeline. In other words, besides just building the container images upon a push to some git repository, what else would add value for the devs and admins?

If you read the previous blog post that talked about CentOS Container Image scanners, you already know that we have scanners based on atomic scan. These scanners scan the container image post build and report the results in an email to the user. If you're already using it, you might find the JSON content of the email a bit untidy. But, rest assured, we're working towards making it easier on the eyes. 🙂

As is the case with build pipelines similar to CentOS Container Pipeline, most container images are scanned only at build time. However, with CentOS Container Pipeline, we cannot afford such an architecture. Enterprises, academia, research institutes and various other large and small scale projects that use CentOS as their base platform for servers and for developing containerized applications often have stringent security rules which require them to update to the latest versions of enterprise Linux packages. Besides security updates, new versions of packages often bundle new features!

So, we figured it would be helpful for devs and admins to have a weekly update about the status of their container images. In the simplest terms, weekly image scans present exactly the same output to the user as a post-build scan does, except on a weekly basis, instead of the images being forgotten about after they are built. Weekly scans are part of our Scheduled Scans story, wherein we want to provide users with various time intervals at the end of which their container image gets scanned.

Based on the results of such a scheduled scan, a dev or an admin can decide if their image needs to be upgraded or if they are OK with its current state. So far, the only other way to do this is to run a container and check the result of yum check-update, as sketched below.
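
For example, a quick manual check could look like this (the image name is just illustrative; note that yum exits with status 100 when updates are available):

$ docker run --rm registry.centos.org/centos/centos yum check-update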

To use this feature now, all you need to do is check out the first blog post about how to get started with CentOS Container Pipeline. Once you build images with CentOS Container Pipeline, those images are automatically scanned on a weekly basis and an email is sent to the user for each of their images.

If you would like to have a feature included in CentOS Container Pipeline, come talk to us in the IRC channel #centos-devel on the Freenode server. Alternatively, you can check out our GitHub repo and open an issue for discussion there. We are excited to hear about features that developers and sysadmins would find helpful!

January 13, 2017

create a new github.com repo from the cli

January 13, 2017 11:03 PM

I often get into a state where I've started some work, done some commits etc. and then realised I don't have a place to push the code to. Getting it on GitHub has involved getting the browser out, logging in to GitHub, click click click {pain}. So, here is how you can create a new repo for your login name on github.com without moving away from the shell.


curl -H "X-GitHub-OTP: XXXXX" -u 'LoginName' https://api.github.com/user/repos -d '{"name":"Repo-To-Create"}'

You need to supply your OTP pass and replace XXXXX with it (the X-GitHub-OTP header is only needed if you have two-factor authentication enabled), and of course use your own LoginName and finally the Repo-To-Create. Once this call runs, curl will ask for your password, and the GitHub API should dump a bunch of details (or tell you that it failed, in which case you need to check the call).

Now the usual git remote add (spelled out below) and you are off.
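
Spelled out (assuming your current branch is master):

git remote add github git@github.com:LoginName/Repo-To-Create.git
git push -u github master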

regards,

December 16, 2016

Updated CentOS Vagrant Images Available (v1611.01)

December 16, 2016 03:17 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated as of 15 December 2016, as well as the following user-visible changes:

  • the size of the boot partition has been increased to 1GB in centos/7, to conform with the new upstream recommendations
  • the centos/7 image is now based on CentOS Linux 7.3.1611

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610.
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 seems unable to assign an address to VirtualBox host-only interfaces.
  6. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.0 on OS X 10.11.6, with VirtualBox 5.0.30.

Downloads

The official images can be downloaded from HashiCorp's Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 74a95be409cef813881f5312dc1221e2559cdbf25f45d5234d81e91632f99cce --provider libvirt --box-version 1611.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

November 15, 2016

Updated CentOS Vagrant Images Available (v1610.01)

November 15, 2016 05:23 PM

Official Vagrant images for CentOS Linux 6.8 and CentOS Linux 7.2.1511 for x86_64 are now available for download, featuring packages updated as of 30 October 2016, as well as the following user-visible changes:

  • several optimisations to make the images smaller and faster:
    • do not install most firmware packages
    • do not install microcode_ctl
    • do not build a rescue initramfs (resulting in significantly faster kernel updates)
    • do not load the floppy module on centos/7 (this reduces boot time by ca. 5s)
  • [security]: do not allow regular users to use su to become root or vagrant – see issue #76
  • set the SELinux type of /etc/sudoers.d/vagrant to etc_t

Known Issues

  1. The centos/7 image is based on CentOS Linux 7.2.1511, since CentOS Linux 7.3 is not available yet.
  2. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible.

  3. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to your Vagrantfile.

  4. Please use Vagrant 1.8.6 (version 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610, while version 1.8.7 is unable to download or update boxes due to Vagrant bug #7969).
  5. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools.

Downloads

The official images can be downloaded from HashiCorp's Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

If you are using CentOS Linux on the host, we recommend installing Vagrant from SCL and using the libvirt images. In general, the Vagrant packages provided by your Linux distribution are preferable, since they usually backport fixes for some upstream bugs. If you are using Vagrant on other operating systems, please use Vagrant 1.8.6 (see Known issues, item 4).

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum ce12f84646efab28b007bdf16f3134686a23fa052f809c4600919561274051da --provider libvirt --box-version 1610.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

Some of the optimisations in this release were inspired by the Vagrant images from Fedora Cloud and Debian Cloud.

We would also like to thank the following people (in alphabetical order):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

November 10, 2016

Introducing CentOS Container Image Scanners

November 10, 2016 06:52 PM

Over the past few months, we've been working on the CentOS Community Container Pipeline, which aims to help developers focus on what they love doing most – writing awesome code – and to give sysadmins insight into an image by providing metadata about it! The project code has been hosted on GitHub since its inception. The hosted service that runs off this code is available to the community at large, and delivers content to registry.centos.org.

What is CentOS Community Container Pipeline?

CentOS Community Container Pipeline enables developers and sysadmins to have container images built, tested and scanned on the CentOS Project's infrastructure right after a developer pushes code to the git repository!

Container Pipeline Flow

Once the developer pushes code to the git repo, Container Pipeline fetches the changes, and container images are built using OpenShift, which provides an enterprise distribution of the Kubernetes project. Once the image is built, it gets scanned using atomic scanners (more on this soon!). The results of these scanners are combined into a mail and sent to the author of the container image. Container images can also be tested using user-provided test scripts, to ensure that a container can be spun up from the image on platforms like CentOS Linux, CentOS Atomic Host and OpenShift.

Why scan images?

Building container images and spinning up containers is rather simple. Having more information, a.k.a. metadata, about a container image before running it in one's production environment is of paramount value! Of course, it is the kind of information that determines whether the value is paramount or negligible. That's what we aim to provide with CentOS Community Container Pipeline.

Scanners in CentOS Community Container Pipeline

At this point we have two scanners operational: one checks your CentOS Linux based container images for package updates, and the other verifies the files installed via RPM packages. Both scanners are based on the atomic tool developed by the Project Atomic folks. We are working on rolling out more scanners in the near future!

Atomic Scanner

The scanners based on atomic are run automatically by the pipeline after successful completion of the image build process. These scanners can be run stand-alone as well! That is, you can install the scanner on your CentOS Linux based systems and run it against a container image built on a CentOS Linux base image. And it does this without bringing up or executing the container itself.

In the pipeline, upon completion of the scan process, the user is notified about issues with the image that need to be addressed. Addressing these issues would instill more confidence in deploying the resulting container image in a production environment.

Besides scanning an image after it is built, in the near future scanners will also run periodically and provide the developer with actionable information.

yum update scanner

This scanner provides the user with information about RPM packages that need to be updated in the container image. If you're a developer, this information helps ensure you're running the latest packages, with bug and security fixes, to avoid surprises in production.

Example output:

$ atomic scan --scanner pipeline-scanner --rootfs /mnt registry.centos.org/centos/centos
...

Files associated with this scan are in /var/lib/atomic/pipeline-scanner/2016-11-10-10-30-46-609885.

The scanner ran successfully and stored the scan data under the /var directory. Let's see the output:

$ cat /var/lib/atomic/pipeline-scanner/2016-11-10-10-30-46-609885/_mnt/image_scan_results.json
{
    "Scanner": "pipeline-scanner", 
    "Successful": "true", 
    "Start Time": "2016-11-10-10-42-46-265018", 
    "Scan Results": {
        "Package Updates": [
            "bind-license.noarch", 
            "kmod.x86_64", 
            "kmod-libs.x86_64", 
            "kpartx.x86_64", 
            "openssl-libs.x86_64", 
            "python.x86_64", 
            "python-libs.x86_64", 
            "systemd.x86_64", 
            "systemd-libs.x86_64", 
            "tzdata.noarch"
        ], 
        "OS Release": "CentOS Linux 7 (Core)"
    }, 
    "Scan Type": "Image Scan", 
    "CVE Feed Last Updated": "NA", 
    "Finished Time": "2016-11-10-10-42-52-184442", 
    "UUID": "mnt"
}

The Package Updates key in the above output lists packages that need to be updated in the scanned container image.

RPM verify scanner

As its name suggests, the RPM verify scanner verifies all installed files (libraries and binaries) delivered via RPM packages in a given container image. It reports any modified or tampered-with libraries and binaries, which is useful to ensure that a container image does not ship with any tainted libraries or binaries.

Example output:

$ atomic scan --scanner rpm-verify docker.io/centos/postgresql
{
    "Scanner": "scanner-rpm-verify",
    "Successful": "true",
    "Start Time": "2016-11-10-19-49-06-740445",
    "Scan Results": {
        "rpmVa_issues": [
            {
                "config": false,
                "issue": "missing",
                "rpm": {
                    "VENDOR": "CentOS",
                    "PACKAGER": "CentOS BuildSystem ",
                    "BUILDHOST": "worker1.bsys.centos.org",
                    "RPM": "glibc-2.17-55.el7_0.1.x86_64",
                    "SIGNATURE": "RSA/SHA256, Sat Aug 30 02:20:20 2014, Key ID 24c6a8a7f4a80eb5"
                },
                "filename": "/sbin/sln"
            },
            {
                "config": false,
                "issue": "........P",
                "rpm": {
                    "VENDOR": "CentOS",
                    "PACKAGER": "CentOS BuildSystem ",
                    "BUILDHOST": "worker1.bsys.centos.org",
                    "RPM": "iputils-20121221-6.el7.x86_64",
                    "SIGNATURE": "RSA/SHA256, Fri Jul  4 07:38:44 2014, Key ID 24c6a8a7f4a80eb5"
                },
                "filename": "/usr/sbin/clockdiff"
            }
        ]
    },
    "Scan Type": "RPM Verify scan for finding tampered files.",
    "CVE Feed Last Updated": "NA",
    "Finished Time": "2016-11-10-19-49-10-933952",
    "UUID": "da4ffaac638fada8723c6721721d99b0dfaba67d79c8507e881ee8327e17ecb"
}

Adding your container to the pipeline

It's simple! Add an entry for your open-source project under the index.d directory of the CentOS Container Index. You can already see a few files representing projects or individual developers under this directory. You also need a cccp.yml file in your project, containing information for the Container Pipeline to use. You can refer to the respective GitHub repos for more information, or get in touch with us in the #centos-devel IRC channel on the Freenode network.
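
At its smallest, cccp.yml just names the build job for the pipeline; the single field below is reproduced from memory of the project documentation, so treat it as illustrative and check the container-pipeline-service repo for the authoritative schema:

job-id: my-container-app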

Dharmit Shah and Navid Shaikh

November 07, 2016

Welcoming new members to the CentOS Container team

November 07, 2016 02:08 PM

Join me in warmly welcoming Dharmit Shah, Bama Charan Kundu and Navid Shaikh to the CentOS Container team.

They are primarily focused on delivering and curating the CentOS Container Pipeline (https://github.com/centos/container-pipeline-service). In the coming weeks, keep an eye out for announcements from them on the CentOS Blog at https://seven.centos.org.

– KB

November 05, 2016

Vim 8 for CentOS Linux 7

November 05, 2016 03:40 AM

Matěj Cepl is curating a set of Vim 8 RPMs for EL7 over at https://copr.fedorainfracloud.org/coprs/mcepl/vim8/ – consider them testing grade, and I am sure he would appreciate feedback and issue reports.

Now go get the shiny new Vim 8.
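
To install from that COPR, drop the repo file in place and pull in the package (the repo-file URL follows COPR's usual naming pattern; double-check it on the COPR page above):

$ sudo curl -o /etc/yum.repos.d/mcepl-vim8.repo https://copr.fedorainfracloud.org/coprs/mcepl/vim8/repo/epel-7/mcepl-vim8-epel-7.repo
$ sudo yum install vim-enhanced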

$ rpm -q vim-enhanced
vim-enhanced-8.0.0054-1.0.8.el7.centos.x86_64

Enjoy! And don't forget to drop by and say thanks to Matěj over at https://matej.ceplovi.cz/blog/

October 27, 2016

Security contact for the CentOS Project

October 27, 2016 07:52 PM

If you find any security issue in a CentOS.org website or service, please let us know; the same goes for any issues in CentOS Linux as well as the SIG content on centos.org. The best way to get in touch is to email security@centos.org – and if the content is sensitive, please use the corresponding gpg key to encrypt it. E.g. for a CentOS Linux 7 specific issue, please encrypt the content with the CentOS Linux 7 key; similarly, for any content specific to the Virt SIG, please use the CentOS SIG Virt key.

How can you verify the keys? The fingerprints are published over https at https://www.centos.org/keys/.
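
For example, for a CentOS Linux 7 specific report, the steps could look like this (the key file name assumes the layout of the /keys/ page; compare the printed fingerprint against the published one before using the key):

curl -O https://www.centos.org/keys/RPM-GPG-KEY-CentOS-7
gpg --import RPM-GPG-KEY-CentOS-7
gpg --fingerprint security@centos.org
gpg --armor --encrypt -r security@centos.org report.txt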

DNS data for the www.centos.org website is:
www.centos.org has address 85.12.30.226
www.centos.org has IPv6 address 2a01:788:a002:0:225:90ff:fe33:f34c

– KB

October 26, 2016

Adding a timeout for your CI jobs at ci.centos.org

October 26, 2016 09:10 PM

The typical workflow for most ci.centos.org (cico) jobs is:

* Call Duffy's API endpoint with node/get and grab some machines
* Set up the machines' environment for the CI job to come
* Push content to nodes
* Run the tests
* Clear out / tear down
* Call Duffy's API end point with node/done to return the machines
* Report status via Jenkins

Machines handed out in this manner to the CI jobs are available for up to 6 hours at a time, at which point they are reaped back into the available pool for other jobs to consume. What this also means is that if, for any reason, the job gets stuck, it could be up to six hours before the developer/user gets any feedback about the tests failing.

The usual way to resolve this situation is to set up a timeout in the Jenkins job. That would allow Jenkins to watch the run and, on timeout, kill the job and report failure. However, if your job is set up with a single build step that also includes requesting the machines and returning them when done, Jenkins killing the job will mean your machines won't get returned for up to 6 hours. Given that most projects are set up with a quota of 10 deployed machines, not returning them when done would mean your jobs get put into a queue that isn't clearing out in a rush.

One way to work around this would be to split the machine-request and machine-return functions into a pre-build and post-build step, and then pass the session-id for the deployed machines between the build steps. That way, you could trap and report specific conditions. A variation of this would be to set up conditional build steps, and have them execute different functions as needed.

An easy and simple workaround, however, is to just wrap the test commands in a /usr/bin/timeout call. timeout is delivered as a binary from the coreutils package on CentOS Linux 7 and would be available on all machines, including the Jenkins worker instances. Take a look at https://github.com/almighty/almighty-jobs/blob/master/devtools-ci-index.yaml#L64 for a quick example of how this would work in a JJB template. This way we can time out on the job, and yet still be able to return nodes or handle any other content we need in the same CI job script. A script that then does not have or need any Jenkins-specific content, making it possible to run it from developer laptops or as a child job on its own.

/usr/bin/timeout (man 1 timeout) also allows you to preserve the sub-command's exit status, if you need to track and report different statuses from your CI jobs. And of course, there are many other uses for /usr/bin/timeout as well!
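
A minimal sketch of such a wrapper (the script name and the two-hour limit are placeholders):

/usr/bin/timeout --preserve-status 2h ./run-ci-tests.sh

Without --preserve-status, timeout exits with status 124 when the limit is hit, which is also an easy condition to test for in the job script.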

– KB

October 20, 2016

(ab)using Alias for Zabbix

October 20, 2016 10:00 PM

It's not a secret that we use Zabbix to monitor the CentOS.org infra. That's even a reason why we (re)build it for some other architectures, including aarch64, ppc64 and ppc64le on CBS, and also armhfp.

There are really cool things in Zabbix, including Low-Level Discovery. With such discovery, you can create item/trigger prototypes that will be applied "automagically" for each discovered network interface or mounted filesystem. For example, the default template (if you still use it) has such item prototypes, and also a graph for each discovered network interface, showing you the bandwidth usage on that interface.

But what happens if you suddenly want, for example, to create some calculated item on top of those? Well, the issue is that from one node to the other, the interface name can be eth0, or sometimes eth1, and with CentOS 7 things started to move to the new naming scheme, so you can have something like enp4s0f0. I wanted to create a template that would fit them all, so I had a look at calculated items and thought "well, easy: let's have that calculated item use a user macro that defines the name of the interface we really want to gather stats from" .. but it seems I was wrong. Zabbix user macros can be used in multiple places, but not everywhere. (It seems that I wasn't the only one not understanding the doc coverage for this, but at least that bug report will have an effect on the doc to clarify it.)

That's when I discussed this in #zabbix (on irc.freenode.net) that RichLV pointed me to something that could be interesting for my case : Alias. I must admit that it's the first time I was hearing about it, and I don't even know when it landed in Zabbix (or if I just overlooked it at first sight).

So cool: now I can just have our config mgmt push, for example, a /etc/zabbix/zabbix_agentd.d/interface-alias.conf file that looks like this, and then reload zabbix-agent:

Alias=net.if.default.out:net.if.out[enp4s0f0]
Alias=net.if.default.in:net.if.in[enp4s0f0]

That means that now, whatever the interface name is (as Puppet in our case creates that file for us), we'll be able to get values from the net.if.default.out and net.if.default.in keys automatically. Cool!
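
You can quickly confirm that the alias resolves on a node (assuming the zabbix-agent package is installed there) by asking the agent to evaluate the key locally:

zabbix_agentd -t net.if.default.out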

That also means that if you want to aggregate all this into a single key for a group of nodes (and so graph that too), you can always reference those new keys (example for the total outgoing bandwidth of a group of hosts):

grpsum["Your group name","net.if.default.out",last,0]

And from that point, you can easily also configure triggers and graphs. Now back to work on some other calculated items for total bandwidth usage over a period of time, and triggers based on some max_bw_usage user macro.

October 01, 2016

CentOS-7 1609 Rolling ISOs Now Live

October 01, 2016 07:01 AM

Rolling ISOs

The CentOS Linux team produces rolling CentOS-7 ISOs, normally on a monthly basis.

The most recently completed version of those ISOs is version 1609 (16 is for 2016, 09 is for September).

The team usually creates all our ISO and cloud images based on all updates through the 28th of the month in question, so 1609 means these ISOs contain all updates for CentOS-7 through September 28th, 2016.

These rolling ISOs have the same installer as the most recent CentOS-7 point release (currently 7.2.1511) so that they install on the same hardware as our original ISOs, while the packages installed are the latest updates.

This means that the actual kernel that boots up on the ISO is the 7.2.1511 default kernel (kernel-3.10.0-327.el7.x86_64.rpm), but the kernel installed is the latest kernel package (kernel-3.10.0-327.36.1.el7.x86_64.rpm for the 1609 ISOs).

These normal Rolling ISOs can be downloaded from this LINK and here are the sha256sums:

CentOS-7-x86_64-DVD-1609-01.iso:
3948f7a31a8693b2df90dc31252551dcd5aa9d13f4910ad9ce28fcddc247d84f

CentOS-7-x86_64-Everything-1609-01.iso:
602383c2aa93f6d7df46bd47658dcbf9b9d567108dec56ba60ce46a2f51c6eb2

CentOS-7-x86_64-LiveGNOME-1609-01.iso:
f6ee8af6814bc58e2c8424db862a443649f3a57b5f85caf63704ab52d5bbac68

CentOS-7-x86_64-LiveKDE-1609-01.iso:
1349c70e815d46c49d6ea459de6fbc074f5131c803343db18d32987ee78fd303

CentOS-7-x86_64-Minimal-1609-01.iso:
54721e5e444a3191b18b0fabe1c35e12b65f93aa31732beb7899212d19cab69b


You can verify the sha256sum of your downloaded ISO following these instructions prior to install.

The DVD ISO contains everything needed to do an install, but still fits on one 4.3 GB DVD. This is the most versatile install that will fit on a single DVD, and if you are new to CentOS this is likely the installer you want. If you pick Minimum Install in this installer, you can do an install that is identical to the Minimal ISO. You can also install many different Workstation and Server setups from this ISO, including both GNOME and KDE.

The Everything ISO has all packages, even those not used by the installer. You usually do not need this ISO unless you do not have access to the internet and want to install, later on, packages that are on this DVD but not included by the graphical installer. Most users will not need this ISO; it is > 7 GB, but installs can be done from a USB key that is big enough to hold it (currently an 8 GB key).

The LiveGNOME ISO is a Basic GNOME Workstation install, but no modification or personalization is allowed during the install. It is a much easier install to do, but any extra packages must be installed from the internet later.

The LiveKDE ISO is a Basic KDE Workstation install. It also does not allow modification or personalization until after the install has finished.

The Minimal ISO is a very small and quick install that boots to the command console and has network connectivity and a firewall. It is used by system administrators as the minimal install that they can then add functionality to. You need to know what you are doing to use this ISO.

Newer Hardware Support

As explained above, the normal rolling ISOs boot from the point release installer. Sometimes there is newer hardware that might not be supported by the point release installer, but could be supported with a newer kernel. This installer is much less tested and is only recommended if you cannot get one of the normal installers to work for you.

There are only 2 ISOs in this family, here are the links and sha256sums:
CentOS-7-x86_64-DVD-1609-99.iso:
90c7148ddccbb278d45d06805dee6599ec1acc585cafd02d56c6b8e32a238fa9 

CentOS-7-x86_64-Minimal-1609-99.iso:
1cfbbc73cc7a0eb17d7fe2fa5b1adf07492e340540603e8e1fd28b52e95f02e3

You can verify the ISO's sha256 sum using this LINK, and the descriptions above are the same for these two ISOs.


September 21, 2016

CentOS Infra public service dashboard

September 21, 2016 10:00 PM

As soon as you're running some IT services, there is one thing that you already know: you'll have downtimes, despite all your efforts to avoid them...

As the old joke says: "What's up?" asked the Boss. "Hopefully everything!" answered the SysAdmin guy...

You probably know that the CentOS infra is itself widespread, and subject to quick moves too. Recently we had to announce an important DC relocation that impacts some of our crucial, publicly facing services. That one falls in the "scheduled and known outages" category, and can be prepared for. We always announce such downtimes through several mediums, like sending a mail to the centos-announce and centos-devel (and in this case also the ci-users) mailing lists. But even when we announce it in advance, some people forget about it, or people using (sometimes "indirectly") the affected service are surprised and then ask about it (usually in #centos or #centos-devel on irc.freenode.net).

In parallel to those "scheduled outages", we also have the worst kind: the unscheduled ones. For those, depending on the impact/criticality of the affected service, and also the estimated RTO, we send a mail to the concerned mailing lists (or not).

So we decided to publish a very simple, public dashboard for the CentOS infra, covering only the publicly facing services, to give a quick overview of that part of the infra. It's now live and hosted at https://status.centos.org.

We use Zabbix to monitor our infra (so we build it for multiple arches, like x86_64, i386, ppc64, ppc64le, aarch64 and also armhfp), including through remote Zabbix proxies (because of our "distributed" network setup right now, with machines all around the world). For some of the services listed on status.centos.org, we can "manually" announce a downtime/maintenance period, but Zabbix also updates that dashboard on its own. The simple way to link those together was to use Zabbix custom alertscripts, and you can even customize those to send specific macros and have the alertscript parse them and then update the dashboard.

We hope to enhance that dashboard in the future, but it's a good start, and I have to thank again Patrick Uiterwijk, who wrote that tool for Fedora initially (and which we adapted to our needs).

August 15, 2016

CentOS at cPanel 2016

August 15, 2016 10:23 AM

The CentOS team will have a booth at the cPanel 2016 Conference in Portland, Oregon, at the Hilton Portland & Executive Tower, October 3rd through 5th, 2016.

I (Johnny Hughes) will be there to discuss all things CentOS and we may have some guests at the booth from some of our Special Interest Groups and others from the CentOS Community.

If you are planning to be at the conference, please stop by and see us.

June 21, 2016

CentOS at 2016 Texas Linux Fest

June 21, 2016 06:25 PM

We will have a CentOS Booth at the 2016 Texas Linux Fest on July 8th and 9th in the Austin Texas Convention Center.

Please stop by the CentOS booth for some Swag and discussion.

We will also have several operational CentOS-7 Arm32 devices at the booth, including a Raspberry Pi 2, Raspberry Pi 3, CubieTruck (Cubieboard3) and CubieTruck Plus (Cubieboard5). These devices showcase our AltArch Special Interest Group, which produces ppc64, ppc64le, armhfp (Arm32), aarch64 (Arm64), and i686 (x86 32) builds of CentOS-7.

We will also be glad to discuss the new things happening within the project, including a number of operational Special Interest Groups (SIGs) that are producing add-on software for CentOS, including the Xen hypervisor, OpenStack (via RDO), Storage (GlusterFS and Ceph), Software Collections, Cloud Images (AWS, Azure, Oracle, Vagrant boxes, KVM), and Containers (Docker and Project Atomic).

So, if you have been using CentOS for the past 12 years, all of that is happening just like it always has (a long-lived standard Linux distro with LTS), alongside all the new hypervisor, container and cloud capabilities.

May 02, 2016

Generating multiple certificates with Letsencrypt from a single instance

May 02, 2016 10:00 PM

Recently I was discussing TLS-everywhere with some people, and we then started to discuss the Let's Encrypt initiative. I had to admit that I had just tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle: while the most common use case is to install/run the letsencrypt client on your node to configure it directly, I have to admit that's something I didn't want to have to deal with. I still think that proper web server configuration has to happen through cfgmgmt, and not through another process (and the same for the key/cert distribution, something for a different blog post maybe).

So if you're automatically (pushing|pulling) your web server configuration from $cfgmgmt, but you want to use/deploy TLS certificates signed by letsencrypt, what can you do? Well, the good news is that you aren't forced to let the letsencrypt client touch your configuration at all: you can use the "certonly" option to just generate the private key locally, send the CSR, and get the signed cert back (and the whole chain too). One thing to know about letsencrypt is that the validation/verification process isn't the one you can see at most companies providing CA/signing capabilities: as there is no ID/paper verification (or anything else), the only validation for the domain/sub-domain you want to generate a certificate for happens over an http request (basically creating a file with a challenge, processing a request from their "ACME" server[s] to retrieve that file back, and validating its content).

So what are our options then? The letsencrypt documentation mentions several plugins, like manual (requires you to copy the file with the challenge answer to the webserver yourself, then launch the validation process), standalone (doesn't work if you already have an httpd/nginx process, as there will be a port conflict), or even webroot (works fine, as it will just write the file itself under /.well-known/ in the DocumentRoot).

The webroot one seems easy, but as said, we don't even want to install letsencrypt on the web server[s]. Even worse, suppose (and that's the case I had in mind) that you have multiple web nodes configured in a kind of CDN way: you don't want to distribute that file to all the nodes for validation/verification (when using the "manual" plugin), and you'd have to do it on all the nodes (as you don't know in advance which one will be verified by the ACME server).

So what about something centralized (where you'd run the letsencrypt client locally) for all your certs (including some with SANs), in a transparent way? I thus thought about something like this:

Single Letsencrypt node

The idea would be to :

  • use a central node: let's call it central.domain.com (VM, docker container, make-your-choice-here) to launch the letsencrypt client
  • have the ACME server transparently hitting one of the web servers, without any changed/uploaded file
  • have the server getting the GET request for that file use the central letsencrypt node as a backend
  • the ACME server is happy, and signed certificates automatically become available on the centralized letsencrypt node

The good news is that it's possible, and even really easy to implement, through ProxyPass (for the httpd/Apache web server) or proxy_pass (for nginx-based setups).

For example, for the httpd vhost config for sub1.domain.com (three nodes in our example) we can just add this to the .conf file:

<Location "/.well-known/">
    ProxyPass "http://central.domain.com/.well-known/"
</Location>

So now, once that is in place everywhere, you can generate the cert for that domain on the central letsencrypt node (assuming that httpd is running on that node, reachable from the "frontend" nodes, and that /var/www/html is indeed the (default) DocumentRoot for httpd on that node):

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub1.domain.com

Same if you run nginx instead (let's assume this for sub2.domain.com and sub3.domain.com): you just have to add a snippet to your vhost .conf file (before the / location definition too):

location /.well-known/ {
    proxy_pass http://central.domain.com/.well-known/;
}

And then on the central node, do the same thing, but you can add multiple -d options for multiple SubjectAltName entries in the same cert:

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub2.domain.com -d sub3.domain.com
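
Before launching the real validation, you can check that the proxy wiring works by dropping a test file on the central node and fetching it through one of the frontends (the file name is purely illustrative):

On central.domain.com:
echo test > /var/www/html/.well-known/test.txt

From anywhere:
curl http://sub1.domain.com/.well-known/test.txt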

Transparent, smart, easy to do, and even something you can deploy when you need to renew, then remove to get back to the initial config files (if you don't want those ProxyPass directives active all the time).

One more thing to know: once you have proper TLS in place, it's usually better to transparently redirect all requests hitting your http server to the https version. Most people will do that (next example for httpd/apache) like this:

   RewriteEngine On
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

It's good, but when you'll renew the certificate, you'll probably just want to be sure that the GET request for /.well-known/* will continue to work over http (from the ACME server) so we can tune a little bit those rules (RewriteCond are cumulatives so it will not be redirect if url starts with .well-known:

   RewriteEngine On
   RewriteCond $1 !^.well-known
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Different syntax, but the same principle for nginx (also a snippet, not the full configuration file for that server/vhost):

location /.well-known/ {
    proxy_pass http://central.domain.com/.well-known/;
}
location / {
    rewrite ^ https://$server_name$request_uri? permanent;
}

Hope you'll have found this useful, especially if you don't want to deploy letsencrypt everywhere but still want to use it to generate your keys/certs locally. Once done, you can then distribute/push/pull those files (depending on your cfgmgmt), and don't forget to also implement proper monitoring for cert validity, and automation around that too (consider that your homework).

April 28, 2016

IPv6 connectivity status within the CentOS.org infra

April 28, 2016 10:00 PM

Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org and also msync.centos.org.

The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view, we should ensure that everybody can get content over (legacy) ipv4 and ipv6. Funny that I call ipv4 "legacy", as we still have to admit that it's the default everywhere, even in 2016, with the available pools now exhausted.

While we already had some AAAA records for some of our public nodes (like www.centos.org, as an example), I started to "chase" after proper, native ipv6 connectivity for our nodes. That's where I had to get in contact with all our valuable sponsors. The first thing to say is that we'd like to thank them all for their support of the CentOS Project over the years: it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship!

WRT ipv6 connectivity, that's where the results of my quest really differed: while some DCs support ipv6 natively, and even answer you within 5 minutes when asking for a /64 subnet to be allocated, some others aren't ipv6-ready yet. In the worst case the answer was "nothing ready and no plan for that", and sometimes the received answer was something like "it's on the roadmap for 2018/2019".

The good news is that ~30% of our nodes behind msync.centos.org now have ipv6 connectivity, so the next step is to test our various configurations (distributed by puppet) and then also our GeoIP redirection (done at the PowerDNS level for such records, for which we'll then also add proper AAAA records).

Hopefully we'll have that tested and announced soon, also for the other public services that we provide to you.

Stay tuned for more info about ipv6 deployment within centos.org !

January 26, 2016

EPEL round table at FOSDEM 2016

January 26, 2016 06:57 PM

As a follow-up to last year’s literally-a-discussion-in-the-hallway about EPEL with a few dozen folks at FOSDEM 2015, we’re doing a round table discussion with some of the same people and similar topics this Sunday at FOSDEM, “Wither EPEL? Harvesting the next generation of software for the enterprise” in the distro devroom. As a treat, Stephen Smoogen will be moderating the panel; Smooge is not only a long-time Fedora and CentOS contributor, he is one of us who started EPEL a decade ago.

If you are an EPEL user (for whatever operating system), a packager, an upstream project member who wants to see your software in EPEL, a hardware enthusiast wanting to see builds for your favorite architecture, etc. … you are welcome to join us. We’ll have plenty of time for questions and issues from the audience.

The trick is that EPEL is useful or crucial for a number of the projects now releasing on top of CentOS via the special interest group process (SIGs provide their community newer software on the slow-and-steady CentOS Linux.) This means EPEL is essential for work happening inside of the CentOS Project, but it remains a third-party repository. Figuring out all of the details of working together across the Fedora and CentOS projects is important for both communities.

Hope to see you there!

December 14, 2015

Kernel 3.10.0-327 issue on AMD Neo processor

December 14, 2015 11:00 PM

As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including the kids' workstations) with that version, and also the newer kernel. Usually that's just a smooth operation, but sometimes backported or new features, especially in the kernel, can lead to some strange issues. That's what happened with my older ThinkPad Edge: that's a cheap/small ThinkPad that Lenovo made several years ago (circa 2011), and that I used a lot just when travelling, as it only has an AMD Athlon(tm) II Neo K345 Dual-Core processor. So basically not a lot of horsepower, but still something convenient just to read your mail, connect remotely through ssh, or browse the web. When rebooting into the newer kernel, it panics directly.

Two bug reports are open for this, one on the CentOS bug tracker, linked also to the upstream one. The current status is that there is no kernel update that will fix this, but there is an easy-to-implement workaround:

  • boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot into the previous kernel)
  • once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=".." line in the file /etc/default/grub (see the sketch below)
  • as root, run grub2-mkconfig -o /etc/grub2.conf
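
The resulting line in /etc/default/grub would look something like this (the other parameters shown are just typical defaults; keep whatever your file already has and only append the new one):

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet initcall_blacklist=clocksource_done_booting"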

Hope it can help others too.

November 30, 2015

Kernel IO wait and megaraid controller

November 30, 2015 11:00 PM

Last Friday, while working on something else (the "CentOS 7 userland" release for Armv7hl boards), I got notifications from our Zabbix monitoring instance complaining about web scenarios failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (checking the cpu iowait time). Usually you'd verify what is happening in the Virtual Machine itself, but even connecting to the VM was difficult and slow. Once connected, though, nothing strange: no real activity, not even on the disk (there are plenty of tools for this, but iotop is helpful to see which process is reading/writing to the disk in such a case), yet iowait was almost at 100%.

As said, it was happening suddenly for all Virtual Machines on the same hypervisor (a CentOS 6 x86_64 KVM host), and even the hypervisor itself was suddenly complaining about iowait too (though less, in comparison with the VMs). So obviously it wasn't something unoptimized at the hypervisor/VM level, but something else. That rang a bell: if you have a raid controller and its battery, for example, needs to be replaced, the controller can decide to stop all read/write caching, slowing down all IOs going to the disk.

At first sight, there was no HDD issue, and the array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into the analysis.

That server has the following raid adapter :

03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 03)

That means you need to use the MegaCLI tool to manage it.

A quick MegaCli64 -ShowSummary -a0 showed me that the underlying disks were indeed active, but my attention was caught by the fact that there was a "Patrol Read" operation in progress on a disk. I then discovered a useful page (bookmarked, as it's a gold mine) explaining the issue with the default settings of the "Patrol Read" operation. While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimized: (from that website) it "will take up to 30% of IO resources".

I decided to stop the currently running patrol read process with MegaCli64 -AdpPR -Stop -aALL, and I directly saw the Virtual Machines' (and hypervisor's) iowait going back to normal. Here is the Zabbix graph for one of the impacted VMs; it's easy to guess when I stopped the underlying "Patrol Read" process:

VM iowait

That "patrol read" operation is scheduled to run by default once a week (168h) so your real option is to either disable it completely (through MegaCli64 -AdpPR -Dsbl -aALL) or at least (adviced) change the IO impact (for example 5% : MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL)

Never underestimate the power of hardware settings (in the BIOS, or in this case in the raid hardware controller).

Hope it can help others too

September 23, 2015

CentOS AltArch SIG status

September 23, 2015 10:00 PM

Recently I had (from an Infra side) to start deploying KVM guests for the ppc64 and ppc64le arches, so that AltArch SIG contributors could start bootstrapping the CentOS 7 rebuild for those arches. I'll probably write a tech review about Power8 and the fact that you can just use libvirt/virt-install to quickly provision new VMs on PowerKVM, but I'll do that in a separate post.

In parallel to ppc64/ppc64le, armv7hl has interested some community members, and activity around that arch is discussed on the dedicated mailing list. It's slowly coming along, and some users have already reported using it on some boards (but packages are still unsigned and there are no updates packages -yet-).

Last (but not least) in this AltArch list is i686: Johnny rebuilt all the packages, and they are already publicly available on buildlogs.centos.org, each time in parallel to the x86_64 version. It seems that respinning the ISO for that arch and running the last tests are the only things left to do.

If you're interested in participating in AltArch (and have a special interest in a specific arch/platform), feel free to discuss it on the centos-devel list!

September 16, 2015

CentOS Dojo in Barcelona

September 16, 2015 10:00 PM

So, thanks to the folks from OpenNebula, we'll have another CentOS Dojo in Barcelona on Tuesday 20th October 2015. That event will be colocated with the OpenNebulaConf happening the days after the Dojo. If you're attending the OpenNebulaConf, or if you're just in the area and would like to attend the CentOS Dojo, feel free to register.

Regarding the Dojo content, I'll be giving a presentation about SELinux: a little bit of an intro about SELinux itself (still needed for some folks afraid of using it; don't know why, but we'll change that ...), how to run it on bare metal and in virtual machines, and there will be some slides for the mandatory container hype thing. We'll also cover managing SELinux booleans/contexts, etc. through your config management solution (we'll cover Puppet and Ansible, as those are the two I'm using on a daily basis), and also how to build and deploy custom SELinux policies with your config management solution.

On the other hand, if you're a CentOS user and would like to give a talk yourself during that Dojo, feel free to submit one! More information about the Dojo is on the dedicated wiki page.

See you there !

September 09, 2015

Ext4 limitation with GDT blocks number

September 09, 2015 10:00 PM

In the last days, I encountered a strange issue^Wlimitation with Ext4 that I wouldn't have thought of. I've used ext2/ext3/ext4 for quite some time, and so I've been used to resizing the filesystem "online" (while mounted). In the past you had to use ext2online for that; later it was integrated into resize2fs itself.

The logic is simple and always the same: extend your underlying block device (or add another one), then modify the LVM Volume Group (if needed), then the Logical Volume, and finally run resize2fs. So, something like:

# grow the logical volume first ...
lvextend -L +${added_size}G /dev/mapper/${name_of_your_logical_volume}
# ... then grow the (still mounted) filesystem online
resize2fs /dev/mapper/${name_of_your_logical_volume}

I don't know how many times I've done that, but this time resize2fs wasn't happy:

resize2fs: Operation not permitted While trying to add group #16384

I remembered having had an issue in the past because the journal size wasn't big enough, but that wasn't the case here.

FWIW, you can always verify your journal size with dumpe2fs /dev/mapper/${name_of_your_logical_volume} |grep "Journal Size"

Small note: if you need to increase the journal size, you have to do it "offline", as you have to remove the journal and then add it back with a bigger size (and that also takes time):

umount /$path_where_that_fs_is_mounted
# remove the existing journal (the filesystem must be unmounted and clean)
tune2fs -O ^has_journal /dev/mapper/${name_of_your_logical_volume}
# assuming we want to increase it to 128 MiB
tune2fs -j -J size=128 /dev/mapper/${name_of_your_logical_volume}
# don't forget to mount it back afterwards
mount /$path_where_that_fs_is_mounted

But in this case, as said, that wasn't the root cause: while resize2fs: Operation not permitted doesn't give much information, dmesg was more explicit:

EXT4-fs warning (device dm-2): ext4_group_add: No reserved GDT blocks, can't resize

The limitation is that when the initial Ext4 filesystem is created, the number of GDT blocks reserved for that filesystem only allows growing it by a factor of roughly 1024: a filesystem created at 2G can thus be grown online to around 2TiB, but no further.
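
You can verify how many GDT blocks were reserved at mkfs time by looking at the superblock; the "Reserved GDT blocks" field below is what dumpe2fs reports on my systems:

# -h prints only the superblock information
dumpe2fs -h /dev/mapper/${name_of_your_logical_volume} | grep -i "Reserved GDT"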

Ouch. The system (CentOS 6.7) I was working on had been provisioned in the past for a certain role, and that particular fs/mount point was set to 2G (installed like that through the kickstart setup). But the role eventually changed, and so the filesystem was extended/resized a few times, until I tried to extend it to more than 2TiB, which then caused resize2fs to complain ...

So two choices:

  • do it "offline" through umount, e2fsck, resize2fs, e2fsck, mount (but time-consuming)
  • if you still have plenty of space in the VG, just create another volume with a correct size, format it, rsync the content over, umount the old one and mount the new one (sketched below)
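
A minimal sketch of the second option (the VG name, LV name, sizes and mount points below are hypothetical placeholders):

# create and format a new, correctly-sized logical volume
lvcreate -L 3T -n newvol ${vg_name}
mkfs.ext4 /dev/${vg_name}/newvol
# copy the content over, then swap the mount points (and update /etc/fstab)
mount /dev/${vg_name}/newvol /mnt
rsync -avHAX /data/ /mnt/
umount /data /mnt
mount /dev/${vg_name}/newvol /data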

That means I learned something new (one learns something new every day!), and also that you need to keep that limitation in mind when using a kickstart that sets a fixed size for the filesystem (without the --grow option).

Hope that it can help

September 02, 2015

Implementing TLS for postfix

September 02, 2015 10:00 PM

As some initiatives (Let's Encrypt being one example) try to push TLS usage everywhere, we thought about doing the same for the CentOS.org infra. Obviously we already had some x509 certificates, but not for every httpd server serving content to CentOS users. So we decided to enforce TLS usage on those servers. But TLS can obviously be used for other things than a web server.

That's why we considered implementing something for our Postfix nodes. The interesting part is that it's really easy (depending of course on the security level one wants to reach). There are two parts in the postfix main.cf that can be configured:

  • outgoing mails (aka your server sends mail to other SMTPD servers)
  • incoming mails (aka remote clients/servers send mail to your postfix/smtpd server)

Let's start with the client/outgoing part: just adding these lines to your main.cf will configure Postfix to use TLS when possible, but fall back to cleartext if the remote server doesn't support TLS:

# TLS - client part
smtp_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_scache 

The interesting part is the smtp_tls_security_level option: as you see, we decided to set it to may. That's what the official Postfix TLS documentation calls "Opportunistic TLS": in a few words, it will try TLS (even with untrusted remote certs!) and only fall back to cleartext if no remote TLS support is available. That's the option we decided to use, as it doesn't break anything; and even if the remote server has a self-signed cert, it's still better to use TLS with a self-signed cert than clear text, right?

Once you have reloaded your postfix configuration, you'll directly see in your maillog that it starts trying TLS and delivering mail to servers configured for it:

Sep  3 07:50:37 mailsrv postfix/smtp[1936]: setting up TLS connection to ASPMX.L.GOOGLE.com[173.194.207.27]:25
Sep  3 07:50:37 mailsrv postfix/smtp[1936]: Trusted TLS connection established to ASPMX.L.GOOGLE.com[173.194.207.27]:25: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Sep  3 07:50:37 mailsrv postfix/smtp[1936]: DF584A00774: to=<>, orig_to=<>, relay=ASPMX.L.GOOGLE.com[173.194.207.27]:25, delay=1, delays=0/0.12/0.22/0.71, dsn=2.0.0, status=sent (250 2.0.0 OK 1441266639 79si29025652qku.67 - gsmtp)

Now let's have a look at the other part: making your server present the STARTTLS feature when remote servers/clients try to send you mail (still in postfix main.cf):

# TLS - server part
smtpd_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt
smtpd_tls_cert_file = /etc/pki/tls/certs/<%= postfix_myhostname %>-postfix.crt 
smtpd_tls_key_file = /etc/pki/tls/private/<%= postfix_myhostname %>.key
smtpd_tls_security_level = may
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_scache

Still easy, but here we also add our key/cert to the config. If you decide to use a cert signed by a trusted CA (like we do for the centos.org infra), be sure that the cert file is the concatenated/bundled version of both your cert and the CA chain cert. That's also documented in the Postfix TLS guide, and if you're already using Nginx, you know what I'm talking about, as you already have to do the same there.
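
Building that bundled cert is a simple concatenation (file names here are hypothetical, and order matters: your own cert first, then the CA chain):

# leaf certificate first, CA chain appended after it
cat mail.example.org.crt CAChain.crt > mail.example.org-postfix.crt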

If you've correctly configured your cert/keys and reloaded your postfix config, remote SMTPD servers will now also (if configured to do so) deliver mail to your server through TLS. Bonus point if you're using a cert signed by a trusted CA, as from the client side you'll see this:

Sep  2 16:17:22 hoth postfix/smtp[15329]: setting up TLS connection to mail.centos.org[72.26.200.203]:25
Sep  2 16:17:22 hoth postfix/smtp[15329]: Trusted TLS connection established to mail.centos.org[72.26.200.203]:25: TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)
Sep  2 16:17:23 hoth postfix/smtp[15329]: CC8351C00C9: to=<fake_one_for_blog_post@centos.org>, relay=mail.centos.org[72.26.200.203]:25, delay=1.6, delays=0.19/0.03/1.1/0.31, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as A7299A006E2)

The Trusted TLS connection established part shows that your smtpd server presents a correct cert (bundle) and that the remote server sending you mail trusts the CA used to sign that cert.
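
You can also test the server part yourself from any other machine, with a standard openssl invocation (nothing specific to this setup):

# ask for STARTTLS and verify the presented certificate chain
openssl s_client -starttls smtp -connect mail.centos.org:25 -CAfile /etc/pki/tls/certs/ca-bundle.crt
# then look for "Verify return code: 0 (ok)" in the output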

There are a lot of other TLS options that you can add for tuning/security reasons; they can all be seen through postconf | grep tls, but also in the Postfix postconf doc.

June 02, 2015

Update on CentOS GSoC 2015

June 02, 2015 03:34 PM

Here’s an update on the CentOS Project Google Summer of Code for 2015 posted on the CentOS Seven blog:

http://seven.centos.org/2015/06/centos-and-gsoc-2015-suddenly-come-seven-on-7/

This might be of interest to the Fedora Project community, so I’m pushing my own reference here to appear on the Fedora Planet. Much of the work happening in the CentOS GSoC effort may be useful as-is or as elements within Fedora work. (In at least one case, the RootFS build factory for Arm, the work is also happening partially in Fedora, so it’s a triple-win.)

May 20, 2015

CentOS 7 armv7hl build in progress

May 20, 2015 10:00 PM

As more and more people were showing interest in CentOS on the ARM platform, we thought it would be a good idea to start building CentOS 7 for it. Jim started with arm64/aarch64 and got an alpha build ready and installable.

On my end, I configured some armv7hl nodes, "donated" to the project by Scaleway. The first goal was to set up some Plague builders to distribute jobs across those nodes, which is now done. The next step was a "self-contained" buildroot, so that all other packages can be rebuilt against that buildroot alone: so first building gcc from CentOS 7 (latest release, better arm support), then glibc, etc, etc ... That buildroot is now done and is available here.

Now the fun has started (meaning that 4 armv7hl nodes are currently (re)building a bunch of SRPMS), and you can follow the status on the Arm-dev list if you're interested. Or even better, if you're willing to join the party, have a look at the build logs for packages that failed to rebuild. The first target is a "minimal" install working, so basically sshd/yum working. Then we'll try other things, like a GUI environment.

As plague-server requires mod_python (now deprecated), we don't have any web UI people can look at. But I created a "quick-and-dirty" script that gathers information from the mysql DB and outputs the build status here:

The other interesting step will be to produce .img files that work on some armv7hl boards. So, diving into uboot for the Odroid C1 (just as an example) ....

I'll also try to maintain a dedicated Wiki page for the arm32 status in the following days/weeks/etc ..

May 13, 2015

Firefox 38 and TLS less than 1.2

May 13, 2015 05:19 AM

Red Hat released the source code for Firefox 38. We have released (or will be releasing today) builds for CentOS-5, CentOS-6, and CentOS-7.

By default, it does not connect to https sites using TLS versions lower than 1.2. This means it will not connect to sites hosted on CentOS-5, for example .. and there are many others.

In any event, here is a wiki article that explains potential issues and workarounds:

http://wiki.centos.org/TipsAndTricks/Firefox38onCentOS

May 05, 2015

Hacking initrd.img for fun and profit

May 05, 2015 10:00 PM

During my presentation at Loadays 2015, I mentioned some tips and tricks around Anaconda and kickstart, and so how to deploy CentOS fully automated. I asked the audience where to store the kickstart file that anaconda would then use to install CentOS (the same works for RHEL/Fedora), and I got several answers, like "on the http server" or "on the ftp server", which is where most people will put their kickstart files. Some would generate those files "dynamically" (through $cfgmgmt; I use Ansible with a Jinja2 template for this) as a bonus point.

But it's not mandatory to host your kickstart file on a publicly available http/ftp/nfs server, and surely not when having to reinstall nodes that aren't in the same DC. Within the CentOS.org infra, I sometimes have to reinstall remote nodes ("donated" to the project) from CentOS 5 or 6 to 7. That's where injecting your ks file directly into the initrd.img really helps (yes: no network server needed). Just as an intro, here is how you can remotely trigger a CentOS install, without any medium/iso/pxe environment: you basically just need to download the pxeboot images (vmlinuz and initrd.img) and provide some default settings for Anaconda (the network config, and how to grab the stage2 image, so where the install tree is). On the machine to be reinstalled:

cd /boot/
wget http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/{vmlinuz,initrd.img}

Now you can generate the kickstart file for that node and send it to the remote node (with scp, etc ..). The next step, on that remote node, is to "inject" the kickstart directly into the initrd.img:

# assuming we are still in /boot, with the ks file copied there as ks.cfg
echo ks.cfg | cpio -c -o >> initrd.img

So now we have a kernel and an initrd.img containing the kickstart file. You could modify grub(2) to add a new menu entry, make it the default for the next reboot, and enjoy. But I usually prefer not to touch grub: that way, if something goes wrong, someone just has to reset the node remotely and it boots back into the untouched setup. So instead of modifying grub(2), I just use kexec to boot directly into the new kernel (without having to power cycle the node):

# can be changed to something else, if for example node is running another distro not using yum as package manager
yum install -y wget kexec-tools 
kexec -l vmlinuz --append='net.ifnames=0 biosdevname=0 ksdevice=eth0 inst.ks=file:/ks.cfg inst.lang=en_GB inst.keymap=be-latin1 ip=your.ip netmask=your.netmask gateway=your.gw dns=your.dns' --initrd=initrd.img && kexec -e

As you can see in the append line, I just tell anaconda/the kernel not to use the new nic naming scheme (the default in CentOS 7, and sometimes hard to guess in advance), assuming that eth0 is the nic to use (verify that carefully!), and the traditional ks= line now just points to /ks.cfg (initrd.img being /). The rest is self-explanatory.

The other cool stuff is that you can use the same "inject" technique for Virtual Machines installed through virt-install: it supports injecting files directly into the initrd.img, so it's even easier than for bare-metal nodes. You just have to use two parameters for virt-install (see the sketch after this list):

  • --initrd-inject=/path/to/your/ks.cfg
  • --extra-args "console=ttyS0 ks=file:/ks.cfg"
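
Here's a minimal end-to-end sketch (the VM name, resources and install tree URL are arbitrary examples, not taken from our infra):

# install a CentOS 7 guest, injecting the kickstart directly into its initrd.img
virt-install --name c7-ks-test --ram 2048 --vcpus 2 \
  --disk size=10 \
  --location http://mirror.centos.org/centos/7/os/x86_64/ \
  --initrd-inject=/path/to/your/ks.cfg \
  --extra-args "console=ttyS0 ks=file:/ks.cfg" \
  --graphics none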

Hope this helps

