April 17, 2018

YUM4/DNF for CentOS 7 updates

April 17, 2018 05:39 AM

I am pleased to announce some significant updates to our ConfigManagement Special Interest Group for YUM4. This provides YUM4, based on DNF technology, for testing on CentOS Linux 7/x86_64. These updates are based on feedback from our prior test release last October. The release includes signed packages, the core DNF plugins, and a version of RPM very similar to, and compatible with, the one in the upcoming CentOS 7.5 release.

This initiative is based on a partnership with the upstream YUM and DNF maintainers for the future of package management.  Our testing thus far indicates no major problems, but we would love to find out how it fits into your existing YUM 3 workflows. So please consider filling out the short survey - your feedback helps us all get better.

YUM 4 provides significant improvements such as fast dependency resolution and a stable, documented API. See the references below for detailed improvements. We have made every effort to preserve the existing end-user experience that is available with YUM 3. This is the primary reason for making YUM 4 available for testing now.

“What’s with the YUM4 name?”

We recognize that we need to enable users to test YUM4 (/usr/bin/yum4) within their existing workflows in order to fully understand compatibility, while retaining YUM version 3 (/usr/bin/yum) as the default. Yes, both can be used on the same system, switching back and forth. We do not recommend doing so, but it should work; the only known issue is that each version keeps its own separate transaction history. For that reason the rollback capability is not recommended, as neither version is aware of the other's history. Note that the YUM4 name is temporary, only for the period where versions 3 and 4 coexist.
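To make the separate-history caveat concrete, each tool only lists its own transactions (a quick, hedged illustration):

# yum history list     # shows only transactions performed with YUM 3
# yum4 history list    # shows only transactions performed with YUM 4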

“So, what all has changed?”

The documentation does a great job of explaining the differences in detail. In short, your existing experience using yum to install, remove, and update packages is identical. There are changes, however: some of the plugins and yum utilities are now consolidated into `dnf-plugins-core`, and some yum CLI options have changed and are either converted for you automatically or silently ignored when the behavior they enabled is now built in. Existing custom plugins written for YUM 3 will not work with YUM 4. Please see the DNF API Reference and the Changes in DNF hook API compared to YUM 3 links for further information.
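As one example of that consolidation (a hedged illustration, not taken from the linked documentation): the yumdownloader utility from yum-utils has its equivalent in the download plugin shipped with dnf-plugins-core:

# yumdownloader httpd        # YUM 3, provided by yum-utils
# yum4 download httpd        # YUM 4, provided by dnf-plugins-core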

“I found a bug, what should I do?”

Please report any bugs you find on Red Hat Bugzilla against the Fedora/dnf component (make sure to mention the versions involved and that you are using the package from CentOS).

And remember to submit feedback in the short survey to help us understand how it can be improved further.

“Three step install, get started right away”

# yum install centos-release-yum4
# yum install yum4
# yum4 install dnf-plugins-core

“I was already testing a previous version of YUM4.  How do I update?”

# yum4 update centos-release-yum4
# yum4 update yum4

 

Many thanks to the CentOS Project team for their assistance in making this happen!

April 10, 2018

Updated CentOS Vagrant Images Available (v1803.01)

April 10, 2018 07:08 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.4.1708 for x86_64 (based on the sources of RHEL 7.4). All included packages have been updated to 3rd April 2018.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems (see the example after this list).

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
  9. Some people reported "could not resolve host" errors when running the centos/7 image for VirtualBox on Windows hosts. Try adding the following line to your Vagrantfile:
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]
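If you'd rather use the vagrant-sshfs plugin mentioned in item 1, the synced folder type just changes accordingly (a minimal sketch, assuming the plugin is installed on the host):

config.vm.synced_folder ".", "/vagrant", type: "sshfs"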

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

Downloads

The official images can be downloaded from Vagrant Cloud. We provide images for Hyper-V, libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ export box_checksum="4440a10744855ec2819d726074958ad6cff56bb5a616f6a45b0a42d602aa1154"
$ vagrant box add --checksum-type sha256 --checksum $box_checksum --provider libvirt --box-version 1803.01 centos/7

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations;
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro;
  • Kirill Kalachev, for reporting and debugging the host name errors with VirtualBox on Windows hosts.

April 09, 2018

Seven.centos.org is dead .. long life to blog.centos.org !

April 09, 2018 06:03 AM

When we initially launched seven.centos.org, the idea was just to have a single blog instance that CentOS Dev and QA team members could use to give feedback and report status updates about the rebuild and testing of CentOS 7: it was an easy entry point for people wanting to know how far we were in the process, what to expect, etc. (and so provided more transparency than during the CentOS 6 rebuild era)... That was in 2014.

Then it continued to be used by contributors who wanted to give hints or talk about new CentOS 7 features but didn't have a personal blog (or whose personal blog wasn't aggregated through our http://planet.centos.org instance). As more and more people joined the CentOS SIGs, seven.centos.org was increasingly used as a central blogging platform for the CentOS ecosystem, and so was no longer really about the status of CentOS 7 itself (which was released in July 2014). We even linked authentication against our https://accounts.centos.org instance (deployed in the meantime) through OpenID.

So we thought it was time to rename it to blog.centos.org, to reflect that reality. All previous links/permalinks still work, but the default URL is now blog.centos.org.

Happy blogging !

April 06, 2018

CentOS Atomic Host 7.1803 Available for Download

April 06, 2018 01:34 AM

The CentOS Atomic SIG has released an updated version of CentOS Atomic Host (7.1803), a lean operating system designed to run Linux containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

This release rolls up all package minor updates that shipped through the month of March, including, most significantly, a move to docker version 1.13.

CentOS Atomic Host includes these core component versions:

  • atomic-1.22.1-1.gitd36c015.el7.centos.x86_64
  • cloud-init-0.7.9-9.el7.centos.6.x86_64
  • docker-1.13.1-53.git774336d.el7.centos.x86_64
  • etcd-3.2.15-1.el7.x86_64
  • flannel-0.7.1-2.el7.x86_64
  • kernel-3.10.0-693.21.1.el7.x86_64
  • kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64
  • ostree-2017.14-2.el7.x86_64
  • rpm-ostree-client-2017.11-1.atomic.el7.x86_64

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. For links to media, see the CentOS wiki.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

# atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

You'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

April 03, 2018

SuperComputing is #PoweredByCentOS

April 03, 2018 07:36 PM

Last week, a colleague and I had the opportunity to attend SuperComputing Asia in Singapore. The great thing about the various SuperComputing conferences is getting to see the amazing things people are doing with HPC (High Performance Computing) to make the world a better place. This was very much the case last week at SC-Asia.

We had the opportunity to interview three people who are using HPC to solve real world problems, and I wanted to share those interviews with you.

First, we spoke with Abhishek Saha, an engineering student at the National University of Singapore. He's working with the Hydroinformatics Institute of Singapore to simulate water run-off across the entire island in order to propose solutions for flooding.

Next, we spoke with Nick Zang, a research fellow at Nanyang Technological University. He's investigating jet engine noise and ways of reducing it.

Finally, we spoke with Yap Jia Qing, the Founder & CEO of Nurture.AI, an organization dedicated to encouraging AI researchers to publish their findings along with open source implementations, in order to reduce the burden of reproducing, and then building on, that research. This, in turn, greatly accelerates the progress of AI research.

The first two of these researchers are using CentOS in their supercomputing infrastructures, as well as using the large CentOS infrastructure at the National SuperComputing Center. Nurture.AI is an Ubuntu shop. All of the work from all three of these projects is open source, in an effort to accelerate research and implementations.

March 26, 2018

CentOS Linux can only come from the CentOS Project

March 26, 2018 09:00 AM

An open letter from the CentOS Board.

We didn’t think we would have to say this, but here it is:

A rebuild of CentOS Linux is NOT CentOS Linux.

We can’t tell you how good a particular rebuild is, but we can definitely tell you one thing:  if we didn’t build it, it is not CentOS Linux.

The CentOS Project trademark guidelines make it clear that no one has the project’s permission to use the “CentOS” mark for software that is not built and signed by the project.

https://www.centos.org/legal/trademarks/

Unless the binaries are from the CentOS Project, it is not CentOS Linux. It should not be called “CentOS”. Doing so causes confusion for everyone. The only official maintainer of any images is the CentOS Project.

Other groups are welcome to take the CentOS sources, rebuild them, and produce their own modified distribution, as long as they do not call it CentOS or otherwise act without our permission in using the CentOS name. Such distributions are not CentOS, and they should have their own name.

Better yet, we welcome anyone to participate in the CentOS Project and to help us with CentOS Linux. To build something into CentOS Linux you need to be an active part of the community, for example through one of our Special Interest Groups (SIGs).

If you want your work with open source software to be included via an existing SIG or a new one, here's where to start:

https://wiki.centos.org/SpecialInterestGroup

The value of CentOS Linux is in the community:  the participants and the users. When you use CentOS Linux you are part of a community full of people helping each other. You are using the platform that underlies so much upstream open source community development. That is the value of the trademark -- it says that you are getting the real software from the real community.

If you are interested in using (real) CentOS Linux in various places, you can find our software here:

https://www.centos.org/download/

March 10, 2018

Updated CentOS Vagrant Images Available (v1802.01)

March 10, 2018 07:55 AM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.4.1708 for x86_64 (based on the sources of RHEL 7.4). All included packages have been updated to 28th February 2018.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
  9. Some people reported "could not resolve host" errors when running the centos/7 image for VirtualBox on Windows hosts. Try adding the following line to your Vagrantfile:
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

Downloads

The official images can be downloaded from Vagrant Cloud. We provide images for Hyper-V, libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ export box_checksum="4440a10744855ec2819d726074958ad6cff56bb5a616f6a45b0a42d602aa1154"
$ vagrant box add --checksum-type sha256 --checksum $box_checksum --provider libvirt --box-version 1801.02 centos/7

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations;
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro;
  • Kirill Kalachev, for reporting and debugging the host name errors with VirtualBox on Windows hosts.

March 06, 2018

CentOS Atomic Host 7.1802 Available for Download

March 06, 2018 10:29 PM

The CentOS Atomic SIG has released an updated version of CentOS Atomic Host (7.1802), a lean operating system designed to run Linux containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

This release rolls up all package minor updates that shipped through the month of February, including, most significantly, a newer version of rpm-ostree with support for overriding base packages during package layering operations. (see below for more details)

CentOS Atomic Host includes these core component versions:

  • atomic-1.20.1-9.git436cf5d.el7.centos.x86_64
  • cloud-init-0.7.9-9.el7.centos.2.x86_64
  • docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
  • etcd-3.2.11-1.el7.x86_64
  • flannel-0.7.1-2.el7.x86_64
  • kernel-3.10.0-693.17.1.el7.x86_64
  • kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64
  • ostree-2017.14-2.el7.x86_64
  • rpm-ostree-client-2017.11-1.atomic.el7.x86_64

rpm-ostree override

While it's been possible to layer new packages onto the base CentOS Atomic tree for some time now, overriding existing base packages with layered alternatives either wasn't possible or was considered experimental. Version 7.1802 now allows for overriding base packages.

For example, the origin-clients package that includes OpenShift Origin's "oc" tool conflicts with the kubernetes-client package included in the base tree. You can use package layering and overrides to install the openshift-release rpm, remove the conflicting rpms, and install the origin-clients rpm:

# rpm-ostree install centos-release-openshift-origin
# rpm-ostree override remove kubernetes-client kubernetes-node -r

# rpm-ostree install origin-clients -r

# oc cluster up
Starting OpenShift using openshift/origin:v3.7.0 ...
Pulling image openshift/origin:v3.7.0
...

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. For links to media, see the CentOS wiki.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

# atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets every two weeks as part of the Project Atomic community meeting at 16:00 UTC on Monday in the #atomic channel. You'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

February 22, 2018

Linchpin 1.5 release

February 22, 2018 08:15 PM

LinchPin is a simple and flexible hybrid cloud orchestration tool. Its intended purpose is managing cloud resources across multiple infrastructures. These resources can be provisioned, decommissioned, and configured all using declarative data and a simple command-line interface.
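For a feel of the workflow, the CLI follows a simple init/up/destroy pattern (a hedged sketch based on the upstream documentation linked below):

linchpin init       # initialize a workspace with an example PinFile and topologies
linchpin up         # provision the resources described in the PinFile
linchpin destroy    # decommission them again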

LinchPin recently released 1.5, and I had an opportunity to talk with Clint Savage earlier this week about LinchPin and what it offers the world.

You can read more about Linchpin at some of the following places:

Docs: http://linchpin.readthedocs.io
IRC: #linchpin on Freenode
Github: https://github.com/CentOS-PaaS-SIG/linchpin
Mailing list: https://www.redhat.com/mailman/listinfo/linchpin

Linchpin is part of the CentOS PaaS SIG, which you can read more about at https://wiki.centos.org/SpecialInterestGroup/PaaS/

Also, Clint wrote this great article last year, which will give you more background: https://opensource.com/article/17/6/linchpin

 

February 19, 2018

Using newer PHP stack (built and distributed by CentOS) on CentOS 7

February 19, 2018 11:00 PM

One thing that one has to like with an Enterprise distribution is the stable API/ABI during the distro lifetime. If you have an application that works, you know that it will continue to work.

But on the other hand, one can't always decide which applications to run on that distro using only the built-in components. I was personally faced with this recently, when I needed to migrate our bug tracker to a new version. So let's use that example to see how we can use "newer" PHP packages distributed through the distro itself.

The application that we use for https://bugs.centos.org is MantisBT, and by reading their requirements list it was clear that a default CentOS 7 setup would not work: as a reminder, the default php package for .el7 is 5.4.16, which is not supported anymore by "modern" applications.

That's where SCLs (Software Collections) come to the rescue! With such collections, one can install them without overwriting the base packages, and can even run multiple parallel instances of such a stack, depending on the configuration.

Let's start simple with our MantisBT example: forget about the traditional php-* packages (including "php", which provides mod_php for Apache). It's up to you to leave those installed if you need them, but in my case I defaulted to PHP 7.1.x for the whole vhost. It's also worth knowing that I wanted to integrate PHP with the default httpd from the distro (to ease the configuration management side, and to keep finding the .conf files at the usual place).

The good news is that those collections are built, tested, and released through our CentOS infra, so you don't have to care about anything else! (kudos to the SCLo SIG!). You can see the available collections here

So, how do we proceed? Easy! First let's add the repository:

yum install centos-release-scl

And from that point, you can just install what you need. For our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target MySQL, of course). You just have to "translate" that into SCL packages: in our case, php becomes rh-php71 (a meta package), php-xml becomes rh-php71-php-xml, and so on (one remark though: php-mysql became rh-php71-php-mysqlnd!)

So here we go :

yum install httpd rh-php71 rh-php71-php-xml rh-php71-php-mbstring rh-php71-php-gd rh-php71-php-soap rh-php71-php-mysqlnd rh-php71-php-fpm

As said earlier, we'll target the default httpd package from the distro, so we just have to "link" php and httpd. Remember that mod_php isn't available anymore; instead we'll use php-fpm (see rh-php71-php-fpm), so all PHP requests are sent to that FastCGI Process Manager daemon.

Let's do this :

systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/php-fpm.conf << EOF
AddType text/html .php 
DirectoryIndex index.php
<FilesMatch \.php$>
      SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
EOF
systemctl restart httpd

And from this point, it's all basic: the application is now using the PHP 7.1.x stack. That's a basic "howto", but you can also run multiple versions in parallel, and also tune php-fpm itself. If you're interested, I'll let you read Remi Collet's blog post about this (thank you again, Remi!)
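For instance, tuning the FPM pool size is just a matter of editing the pool configuration shipped with the collection and restarting its service (a sketch assuming the rh-php71 collection keeps its pool configuration under /etc/opt/rh/rh-php71/php-fpm.d/):

# raise the number of FPM workers for the default pool (the value is just an example)
sed -i 's/^pm.max_children = .*/pm.max_children = 20/' /etc/opt/rh/rh-php71/php-fpm.d/www.conf
systemctl restart rh-php71-php-fpm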

Hope this helps, as strangely I couldn't easily find a simple howto for this; "scl enable rh-php71 bash" alone doesn't help much with httpd (which is probably the most common scenario).

February 14, 2018

CentOS Dojo @ FOSDEM: Videos

February 14, 2018 09:12 PM

For those of you who were unable to attend the CentOS Dojo in Brussels, here are all of the videos from the event.

Subscribe to our YouTube at youtube.com/TheCentOSProject 

KB's "State of CentOS"

Bert Van Vreckem - Basic troubleshooting of network services

Thomas Oulevey - Anaconda addon development

Matthias Runge - Opstools SIG

Haikel Guemar - Metrics with Gnocchi

Colin Charles - Understanding the MySQL database ecosystem

Fabian Arrotin - Content caching

Sean O'Keeffe - Foreman and Katello

Tom Callaway - Building modern code with devtoolset

Spyros Trigazis - Practical system containers with Atomic

Kris Buytaert - Deploying your SaaS stack OnPrem

February 06, 2018

FOSDEM 2018

February 06, 2018 08:25 PM

Another FOSDEM is history. I wanted to take a moment to thank all of the people that helped out at the CentOS table at FOSDEM, as well as at the Dojo before FOSDEM.


We had about 75 people in attendance at the Dojo on Friday, with 12 presentations from various speakers. Some of these presentations are already available on YouTube, with the rest coming over the next few days.


Traffic was steady at the CentOS table, from people new to Linux all the way to 15-year CentOS sysadmin veterans. A huge thank you to everyone who dropped by and chatted with us.


If you missed FOSDEM and the Brussels Dojo, there are always other opportunities to meet CentOS people. This year we expect to have another 4 or 5 Dojos around the world, starting in Singapore next month, and moving on to Meyrin (Switzerland), Oak Ridge (USA), and Delhi (India). If you'd like to host a Dojo anywhere in the world, please get in touch with the centos-promo mailing list to see how we can help you achieve your goal. We can usually help find speakers, venues, and funding for your event.

January 20, 2018

Updated CentOS Vagrant Images Available (v1801.01)

January 20, 2018 05:27 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.4.1708 for x86_64 (based on the sources of RHEL 7.4). All included packages have been updated to 9 January 2018 and include important fixes for the Meltdown and Spectre vulnerabilities affecting modern processors.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
  9. Some people reported "could not resolve host" errors when running the centos/7 image for VirtualBox on Windows hosts. Try adding the following line to your Vagrantfile:
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

Downloads

The official images can be downloaded from Vagrant Cloud. We provide images for Hyper-V, libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ export box_checksum="4440a10744855ec2819d726074958ad6cff56bb5a616f6a45b0a42d602aa1154"
$ vagrant box add --checksum-type sha256 --checksum $box_checksum --provider libvirt --box-version 1801.02 centos/7

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations;
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro;
  • Kirill Kalachev, for reporting and debugging the host name errors with VirtualBox on Windows hosts.

Update: this blog post was updated on Wednesday, January 24th to reflect a different checksum, as the image to use is 1801.02.

January 18, 2018

Diagnosing nf_conntrack/nf_conntrack_count issues on CentOS mirrorlist nodes

January 18, 2018 11:00 PM

Yesterday I got some alerts for some nodes in the CentOS infra from our monitoring system, also confirmed by folks reporting errors directly in our #centos-devel IRC channel on Freenode.

The impacted nodes were the ones we use for the mirrorlist service. For people not knowing what it is used for, here is a quick overview of what happens when you run "yum update" on your CentOS node:

  • yum analyzes the .repo files contained under /etc/yum.repos.d/
  • for CentOS repositories, it knows that it has to use a list of mirrors provided by a server hosted within the centos infra (mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra )
  • yum then contacts one of the servers behind "mirrorlist.centos.org" (we have 4 nodes so far: two in Europe and two in the USA, all available over IPv4 and IPv6)
  • mirrorlist checks the src ip and sends back a list of current/up2date mirrors in the country (some GeoIP checks are done)
  • yum then opens connections to those validated mirrors
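You can see what the mirrorlist service answers by querying it directly; the reply is just a plain-text list of mirror URLs (example shown for the CentOS 7 x86_64 updates repository):

curl 'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates'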

We monitor the response time for those services, and the average response time is usually < 1 sec (with some exceptions, mostly due to network latency for nodes in other continents). But yesterday the values were not only higher, but sometimes even completely missing from our monitoring system, so no data was received. Here is a graph from our monitoring/Zabbix server:

mirrorlist-response-time-error.png

So clearly something was happening, and it was time to find some patterns. From our monitoring we also discovered that the number of network connections tracked by the kernel was suddenly higher than usual. In fact, as soon as your node does some state tracking with netfilter (like for example -m state ESTABLISHED,RELATED), it keeps that in memory. You can easily retrieve the number of actively tracked connections like this:

cat /proc/sys/net/netfilter/nf_conntrack_count 

So it's easy to guess what happens if the maximum (/proc/sys/net/netfilter/nf_conntrack_max) is reached: the kernel drops packets (from dmesg):

nf_conntrack: table full, dropping packet

The default value depends on the available memory and can be changed in real time. Don't forget to then also tune the hash size (the basic rule is nf_conntrack_max / 4). On the mirrorlist nodes we had the default value of 262144 (so yeah, keeping track of that many connections in memory), so to quickly get the service back in shape:

new_number="524288"
echo ${new_number} > /proc/sys/net/netfilter/nf_conntrack_max
echo $(( $new_number / 4 )) > /sys/module/nf_conntrack/parameters/hashsize

Another option was to flush the table (you can do that with conntrack -F, a tool from the conntrack-tools package), but it's really only a temporary fix, and it will not give you the information needed for proper troubleshooting (see below).

Here is the Zabbix graph showing that for some nodes the count was higher than the previous default value, but the kernel was no longer dropping packets.

ip_conntrack_count.png

We could then confirm that the service was working fine again (not "flapping" anymore).

So one could think that this was the solution to the problem and stop the investigation there. But what is the root cause? What happened that opened so many (unclosed) connections to those mirrorlist nodes? Let's dive into the nf_conntrack table again!

Not only do you have the number of tracked connections (through /proc/sys/net/netfilter/nf_conntrack_count), you also have the full details about them. So let's dump that into a file for full analysis and try to find a pattern:

cat /proc/net/nf_conntrack > conntrack.list
cat conntrack.list |awk '{print $7}'|sed 's/src=//g'|sort|uniq -c|sort -n -r|head

Here we go: the same range of IPs on all our mirrorlist servers, holding thousands of ESTABLISHED connections. I'm not going to give you all the details (the goal of this blog post isn't finger pointing), but we had suddenly identified the issue. So we contacted the network team behind those IPs to report that behaviour; it is still being tracked, but I'm wondering whether a firewall doing NAT wasn't closing TCP connections at all. More to come.

At least the mirrorlist response time is now back to its usual state:

mirrorlist-response-time.png

So you can also let your configuration management set those parameters through a dedicated .conf file under /etc/sysctl.d/ to ensure that they'll be applied automatically.
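A minimal sketch of what that persistent configuration could look like (file names are illustrative; note that nf_conntrack_max is a sysctl, while the hash size is a module parameter set at load time):

cat > /etc/sysctl.d/90-conntrack.conf << EOF
net.netfilter.nf_conntrack_max = 524288
EOF

cat > /etc/modprobe.d/nf_conntrack.conf << EOF
options nf_conntrack hashsize=131072
EOF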

January 09, 2018

Using a RaspberryPI3 as Unifi AP controller with CentOS 7

January 09, 2018 11:00 PM

That's something I should have blogged about earlier, but I almost forgot about it until I read on Twitter about other people having replaced their home network equipment with Ubnt/Ubiquiti gear, and realized that it was still on my 'TOBLOG' list.

During the winter holidays, the whole family was at home, kids on the WiFi network included. Of course I already had a different wlan for them, separated/segregated from the main one, but plenty of things weren't really working on that crappy device. So it was time to set up something else. I had the opportunity to play with some Ubiquiti devices in the past, so finding even an old UniFi UAP model was enough for my needs (I just need an access point; routing/firewall is done on something else).

If you've already played with those tools, you know that you need a controller to set the devices up, and because it's 'only' a Java/MongoDB stack, I thought it would be trivial to set up on a low-end device like a RaspberryPi3 (not limited to that, so all armhfp boards on which you can run CentOS would work).

After having installed CentOS 7 armhfp minimal on the device, and once logged in, I just had to add the mandatory unofficial EPEL repository for MongoDB:

cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=Epel rebuild for armhfp
baseurl=https://armv7.dev.centos.org/repodir/epel-pass-1/
enabled=1
gpgcheck=0

EOF

After that, just installed what's required to run the application :

yum install mongodb mongodb-server java-1.8.0-openjdk-headless -y

The "interesting" part is that Ubnt now only provides .deb packages, so we just have to download/extract what we need (it's all Java code) and start it:

tmp_dir=$(mktemp -d)
cd $tmp_dir
curl -O http://dl.ubnt.com/unifi/5.6.26/unifi_sysvinit_all.deb
ar vx unifi_sysvinit_all.deb
tar xvf data.tar.xz
mv usr/lib/unifi/ /opt/UniFi
cd /opt/UniFi/bin
/bin/rm -Rf $tmp_dir
ln -s /bin/mongod

You can start it "by hand", but let's create a simple systemd unit file and use it directly:

cat > /etc/systemd/system/unifi.service << EOF
[Unit]
Description=UBNT UniFi Controller
After=syslog.target network.target

[Service]
WorkingDirectory=/opt/UniFi
ExecStart=/usr/bin/java -jar /opt/UniFi/lib/ace.jar start
ExecStop=/usr/bin/java -jar /opt/UniFi/lib/ace.jar stop

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable unifi --now

Don't forget that :

  • it's "Java"
  • it's running on a slow armhfp processor

So that will take time to initialize. You can follow progress in /opt/UniFi/logs/server.log and wait for the TLS port to be opened :

while true ; do sleep 1 ; ss -tanp|grep 8443 && break ; done

Don't forget to open the needed ports in the firewall, and you can then reach the UniFi controller running on your armhfp board.
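As a hedged example with firewalld, assuming the default UniFi ports (8443/tcp for the web UI, 8080/tcp for device inform, 3478/udp for STUN):

firewall-cmd --permanent --add-port=8443/tcp --add-port=8080/tcp --add-port=3478/udp
firewall-cmd --reload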

January 02, 2018

turn off unused GPU on the laptop

January 02, 2018 09:41 PM

Lots of us have dual graphics cards in our laptops these days, but almost everyone I know tends to use one or the other, hardly ever switching on the fly, since typical usage patterns tend to stick for periods of time.

One thing that almost no one seems to do, however, is turn off the unused GPU; when on the move, this can have a significant impact on your battery life.

On CentOS Linux 7, the way to do this would be something like this :

echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

And that's it; literally send it the OFF and the unused GPU is powered down.

You can also query the interface as follows:

# cat /sys/kernel/debug/vgaswitcheroo/switch

On my Thinkpad T460p it looks like this :

0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynOff:0000:02:00.0

For more info on vga_switcheroo, take a look at the kernel documentation, e.g. https://www.kernel.org/doc/html/v4.10/gpu/vga-switcheroo.html

Enjoy!

January 01, 2018

Lightweight CentOS 7 i686 desktop on an older machine

January 01, 2018 11:00 PM

So, the end of the year is always when you have some "time off" and can work on various projects that were left behind. While searching for other hardware collecting dust in my furniture (another blog post coming soon about that too), I found my old Asus Eeepc 900 and was wondering if I could resurrect it.

While it was running CentOS 5 and then 6 "just fine", I wanted to give it a try with CentOS 7.

Of course, if you remember the specs from that ~2008 small netbook, you remember that it had :

  • slow cpu (Intel(R) Celeron(R) M processor 900MHz)
  • only 1GB of RAM
  • very limited disk space (ASUS-PHISON OB SSD 4GB + additional 8GB for my model)

Setting up the full Gnome3 experience on it would be completely useless and also unusable. So let's try to set up CentOS 7 AltArch minimal (needed as the CPU is i686/32-bit only) and add what we need after that. So here we go:

  • Download netinstall iso image (I used "local" mirror for me , so http://mirror.nucleus.be/centos-altarch/7/isos/i386/CentOS-7-i386-NetInstall-1611.iso)
  • use dd to transfer it to usb storage key
  • start the installer on the Eeepc
  • wait .... wait .... wait ...

Once installed and up to date, one needs to add additional repositories that aren't there by default. As a reminder, there are no official EPEL builds for i686 (same as for armhfp), but Johnny started to rebuild the EPEL SRPMs for that specific reason, so here we go:

cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=Epel rebuild for i686
baseurl=https://buildlogs.centos.org/c7-epel/
enabled=1
gpgcheck=0

EOF

cat > /etc/yum.repos.d/kernel.repo << EOF
[kernel]
name=LTS kernel for i686
baseurl=https://buildlogs.centos.org/c7.1708.exp.i386/
enabled=1
gpgcheck=0

EOF

If you see the other kernel repository, that's because the ath5k kernel module needed for the WiFi device in the Eeepc isn't in the default kernel, nor is it available through ELRepo, but it works with the 4.9.x LTS kernel we build and maintain/update for AltArch, so let's use it.

We can install what we need (YMMV though) :

yum update -y
yum groupinstall -y 'X Window System'
yum install -y openbox lightdm lightdm-gtk 
systemctl enable lightdm.service
yum install -y tint2 terminator firefox terminus-fonts-console terminus-fonts network-manager-applet gnome-keyring dejavu-sans-fonts dejavu-fonts-common dejavu-serif-fonts dejavu-sans-mono-fonts open-sans-fonts overpass-fonts liberation-mono-fonts liberation-serif-fonts google-crosextra-caladea-fonts google-crosextra-carlito-fonts 

echo 'tint2 &' >> /etc/xdg/openbox/autostart
echo 'nm-applet &' >> /etc/xdg/openbox/autostart
systemctl reboot

The last line with tint2, terminator, and firefox is purely optional, but that's what I needed on my Eeepc. The same goes for network-manager-applet; once installed, it gives you an easy-to-use applet integrated in the Openbox environment.

You can then customize it, etc, but I like it so far for what I wanted to use that old netbook for :

CentOS 7 i686 running on Asus Eeepc 900

November 01, 2017

Community contributed Kickstarts for CentOS Linux

November 01, 2017 12:25 PM

hi,

At https://github.com/CentOS/Community-Kickstarts we've been collecting community-contributed kickstarts for various roles, deployments, and versions. If you are writing and/or using kickstarts in your setup, it would be awesome to have them hosted here as well, so please feel free to send PRs. Just keep in mind a few basic things:

  • Kickstarts should end in .cfg or .ks
  • Generally should install from mirror.centos.org unless otherwise noted
  • If a hashed password is provided, include the plaintext version in a comment. Since these kickstarts are for example purposes, please use password or centos as the passwords as needed.
  • Kickstart names should provide a version and brief description, for example centos5-raid5.cfg or centos7-workstation.ks

Take a look at the README that has a few more pieces of info about this repository https://github.com/CentOS/Community-Kickstarts/blob/master/README.md
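Following those guidelines, a hypothetical contribution might look like the sketch below (the file name centos7-minimal.ks and its contents are purely illustrative, not an existing kickstart from the repository):

# centos7-minimal.ks - minimal CentOS 7 install from mirror.centos.org
# root password is "centos" (plaintext on purpose, as these kickstarts are examples)
install
url --url=http://mirror.centos.org/centos/7/os/x86_64/
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext centos
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end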

October 11, 2017

Four years later with CentOS and Red Hat

October 11, 2017 07:00 AM

After 4 years of being at Red Hat, I still occasionally get questions that show not everyone understands what Red Hat means to CentOS, or what CentOS provides to Red Hat. They tend to think in terms of competition, like there’s an either or choice. Reality just doesn’t bear that out.

First and foremost, CentOS is about integration, and it's important to know who the community is. We're your sysadmins and operations teams. We're your SREs, the OPS in your devops. We're a force multiplier for developers, the angry voice that says “stop disabling SELinux” and “show me your unit tests”. We're the community voice encouraging you to do things the right way, rather than taking an easy shortcut we know from experience will come back to bite you.

What we’re not is developers. We may pull in kernel patches, but we’re not kernel developers. We can help you do the root cause analysis to figure out why your app is suddenly not performing, but we aren’t the ones to write the code to fix it. We don’t determine priority for what does or doesn’t get fixed, that’s what Red Hat does.

The core distribution of CentOS is and has always been based on code written by Red Hat. This doesn’t mean it’s a choice of “either CentOS or RHEL,” because we’re in this together. CentOS provides Red Hat a community platform for building and testing things like OpenStack with RDO. We build new ecosystems around ARM servers. We provide a base layer for others to innovate around emerging technologies like NFV. But none of this would be possible without the work of RH’s engineering teams.

The community can build, organize and deliver tools in any number of creative ways, but ultimately the code behind them is being developed by engineers paid to address the needs of Red Hat’s customers. You can bet that RH is keeping an eye on what the CentOS community is using and building, but that doesn’t necessarily translate to business need.

We’re here to empower operators who want to experiment on top of the enterprise base lifespan. We’re here to bring tools and technology to those for whom it may be otherwise be out of reach. We’re here to take use cases and lessons learned from the community back to Red Hat as advocates. We’re happy to serve both audiences in this capacity, but let’s not forget how we buy the ‘free as in beer’.


October 10, 2017

Using Ansible Openstack modules on CentOS 7

October 10, 2017 10:00 PM

Suppose that you have an RDO/OpenStack cloud already in place, but you'd like to automate some operations: what can you do? On my side, I already mentioned that I used puppet to deploy the initial clouds, but I still prefer Ansible myself when having to launch ad-hoc tasks, or even change configuration[s]. It's particularly true for our CI environment, where we run "agentless", so all configuration changes happen through Ansible.

The good news is that Ansible already has some modules for OpenStack, but they have some requirements and need a little bit of understanding before you can use them.

First of all, all the Ansible os_* modules need "shade" on the host included in the play, which will be responsible for running all the os_* module calls. At the time of writing this post, it's not yet available on mirror.centos.org (a review is open, so it will soon be available directly), but you can find the package on our CBS builders.

Once installed, a simple os_image task failed right away, despite the fact that auth: was present, and that's due to a simple reason: the Ansible os_* modules still want to use the v2 API, while it now defaults to v3 in the Pike release. There is no way to force Ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through os-client-config.

That means that you just have to use a .yaml file (does that sound familiar for Ansible?) that contains everything you need to know about a specific cloud, and then just declare in Ansible which cloud you're configuring.

That clouds.yaml file can be under $current_directory, ~/.config/openstack, or /etc/openstack, so it's up to you to decide where you want to temporarily host it; I selected /etc/openstack/ :

- name: Ensuring we have required pkgs for ansible/openstack
  yum:
    name: python2-shade
    state: installed

- name: Ensuring local directory to hold the os-client-config file
  file:
    path: /etc/openstack
    state: directory
    owner: root
    group: root

- name: Adding clouds.yaml for os-client-config for further actions
  template:
    src: clouds.yaml.j2
    dest: /etc/openstack/clouds.yaml
    owner: root
    group: root
    mode: 0700

Of course such a clouds.yaml file is itself a Jinja2 template distributed by Ansible to the host in the play before using the os_* modules :

clouds:
  {{ cloud_name }}:
    auth:
      username: admin
      project_name: admin
      password: {{ openstack_admin_pass }}
      auth_url: http://{{ openstack_controller }}:5000/v3/
      user_domain_name: default
      project_domain_name: default
    identity_api_version: 3

You just have to adapt to your needs (see doc for this) but the interesting part is the identity_api_version to force v3.

Then, you can use all that in a simple way through ansible tasks, in this case adding users to a project :

- name: Configuring OpenStack user[s]
  os_user:
    cloud: "{{ cloud_name }}"
    default_project: "{{ item.0.name }}"
    domain: "{{ item.0.domain_id }}"
    name: "{{ item.1.login }}"
    email: "{{ item.1.email }}"
    password: "{{ item.1.password }}"           
  with_subelements:
    - "{{ cloud_projects }}"
    - users  
  no_log: True

From a variables point of view, I decided to just have a simple structure to host project/users/roles/quotas like this :

cloud_projects:
  - name: demo
    description: demo project
    domain_id: default
    quota_cores: 20
    quota_instances: 10
    quota_ram: 40960
    users:
      - login: demo_user
        email: demo@centos.org
        password: Ch@ngeM3
        role: admin # can be _member_ or admin
      - login: demo_user2
        email: demo2@centos.org
        password: Ch@ngeMe2

Now that it works, you can explore all the other os_* modules; I'm already using them to:

  • Import cloud images in glance
  • Create networks and subnets in neutron
  • Create projects/users/roles in keystone
  • Change quotas for those projects

I'm just discovering how powerful those tools are, so I'll probably discover much more interesting things to do with those later.
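For example, importing a cloud image into Glance follows the same pattern as the os_user task above (a sketch; the image name and file path are illustrative):

- name: Importing a cloud image into glance
  os_image:
    cloud: "{{ cloud_name }}"
    name: CentOS-7-x86_64-GenericCloud
    container_format: bare
    disk_format: qcow2
    filename: /var/tmp/CentOS-7-x86_64-GenericCloud.qcow2
    state: present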

September 28, 2017

Using CentOS 7 armhfp VM on CentOS 7 aarch64

September 28, 2017 10:00 PM

Recently we got our hands on some aarch64 (aka ARMv8 / 64-bit) nodes running in a remote DC. On my (already too long) TODO/TOTEST list I had the idea of testing an armhfp VM on top of aarch64. The reason is that when I need to test our packages, using my own Cubietruck or RaspberryPi3 is time-consuming: removing the sdcard, reflashing it with the correct CentOS 7 image, and booting/testing the pkg/update/etc ...

So is it possible to just automate this, using an available aarch64 node as hypervisor? Sure! And it's pretty straightforward if you have already played with libvirt. So let's start with a CentOS 7 aarch64 minimal setup and then:

yum install qemu-kvm-tools qemu-kvm virt-install libvirt libvirt-python libguestfs-tools-c
systemctl enable libvirtd --now

That's pretty basic, but for armhfp we'll have to do some extra steps: qemu normally tries to simulate a BIOS/UEFI boot, which armhfp doesn't support, and qemu doesn't emulate the mandatory u-boot that would chainload to the RootFS in the guest VM.

So here is just what we need :

  • Import the RootFS from an existing image
curl http://mirror.centos.org/altarch/7/isos/armhfp/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img.xz|unxz >/var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img
  • Convert image to qcow2 (that will give us more flexibility) and extend it a little bit
qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2
qemu-img resize /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 +15G
  • Extract kernel+initrd as libvirt will boot that directly for the VM
mkdir /var/lib/libvirt/armhfp-boot
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/ /var/lib/libvirt/armhfp-boot/

So now that we have a RootFS, and also kernel/initrd, we can just use virt-install to create the VM (pointing to existing backend qcow2) :

virt-install \
 --name centos7_armhfp \
 --memory 4096 \
 --boot kernel=/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.40-203.el7.armv7hl,initrd=/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.40-203.el7.armv7hl.img,kernel_args="console=ttyAMA0 rw root=/dev/sda3" \
 --disk /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 \
 --import \
 --arch armv7l \
 --machine virt

And here we go: we have an armhfp VM that boots really fast (compared to an armhfp board using a microSD card, of course).

At this stage, you can configure the node, etc. The only thing you have to remember is that of course the kernel is provided from outside the VM, so to boot on a newer kernel you just have to extract it from an updated VM. Let's show how to do that: in the above example we configured the VM to run with 4GB of RAM, but only 3 are really seen inside (remember 32-bit mode and so the need for PAE on i386?).

So let's use this example to show how to switch kernels. From within the armhfp VM:

# Let extend first as we have bigger disk
growpart /dev/sda 3
resize2fs /dev/sda3
yum update -y
yum install kernel-lpae
systemctl poweroff # we'll modify libvirt conf file for new kernel

Back on the hypervisor, we can again extract the needed files:

virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae /var/lib/libvirt/armhfp-boot/boot/
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img /var/lib/libvirt/armhfp-boot/boot/

And just virsh edit centos7_armhfp so that the kernel and initrd point to the correct location:

<kernel>/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae</kernel>
<initrd>/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img</initrd>

Now that we have a "gold" image, we can even use existing tools to quickly provision other nodes on that hypervisor:

time virt-clone --original centos7_armhfp --name armhfp_guest1 --file /var/lib/libvirt/images/armhfp_guest1.qcow2
Allocating 'armhfp_guest1.qcow2'                                               |  18 GB  00:00:02     

Clone 'armhfp_guest1' created successfully.

real    0m2.809s
user    0m0.473s
sys 0m0.062s

time virt-sysprep --add /var/lib/libvirt/images/armhfp_guest1.qcow2 --operations defaults,net-hwaddr,machine-id,net-hostname,ssh-hostkeys,udev-persistent-net --hostname guest1

virsh start armhfp_guest1

As simple as that. Of course, in the previous example we were just using the default network from libvirt, and not a bridge, but you get the idea: all the rest uses well-known libvirt concepts on Linux.

September 20, 2017

Boosting CentOS server performance

September 20, 2017 07:00 AM

Last week I spent entirely too much time trying to track down a performance issue for the AArch64/ARM64 build of CentOS. While we don't and won't do performance comparisons or optimizations, this was fully in the realm of "something's wrong here". After a bit of digging, this issue turns out to impact just about everyone running CentOS on their servers who isn't doing custom performance tuning.

The fix

I know most people who found this don’t care about the details, so we’ll get right to the good stuff. Check your active tuned profile. If your output looks like the example below, you probably want to change it.

[root@centos ~]# tuned-adm active
Current active profile: balanced

The ‘balanced’ profile means the CPU governor is set to powersave, which won’t do your server any favors. You can validate this by running cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor. To fix it, run the command below:

[root@centos ~]# tuned-adm profile throughput-performance

That’s it. This changes the governor to performance, which should give you a pretty decent performance bump without any additional changes, across all hardware platforms. If you’re interested in why the default is set this way, I’ll explain.

Why the default is “wrong”

The tuned package is installed and enabled by default. When it runs for the first time, it tries to automatically select the best performance profile for the system by running a couple of comparisons. It does this by checking virt-what output, and using the contents of /etc/system-release-cpe. The tuned file /usr/lib/tuned/recommend.conf is then used as the rulebook to see what matches and what doesn’t.

This starts to unravel a bit with CentOS, because the packages are derived from RHEL (Red Hat Enterprise Linux), and while RHEL may differentiate between server, workstation, etc., CentOS does not. If you look carefully at the recommend.conf check for the throughput-performance profile, you’ll see that it checks whether the strings computenode or server exist in /etc/system-release-cpe. On CentOS, neither one does, because the distribution doesn’t make that distinction. Because these strings aren’t found, the fallback option of balanced is chosen.
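
A quick way to see this on your own machine is to compare what tuned inspects with its rulebook (the exact output will vary per system):

# What tuned's recommendation logic looks at
cat /etc/system-release-cpe
virt-what

# The rulebook mapping those values to profiles
cat /usr/lib/tuned/recommend.conf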


September 02, 2017

Battery and power status on your CentOS Linux laptop

September 02, 2017 07:06 PM

The upower cli tool will get you a ton of great info for the battery ( and other things related to power ). Make sure you have it installed ( rpm -q upower ), and give it a shot like this :

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
  native-path:          BAT0
  vendor:               SMP
  model:                45N1703
  serial:               5616
  power supply:         yes
  updated:              Sat 02 Sep 2017 19:43:02 BST (39 seconds ago)
  has history:          yes
  has statistics:       yes
  battery
    present:             yes
    rechargeable:        yes
    state:               fully-charged
    warning-level:       none
    energy:              21.84 Wh
    energy-empty:        0 Wh
    energy-full:         21.9 Wh
    energy-full-design:  45.02 Wh
    energy-rate:         0.00219125 W
    voltage:             16.237 V
    percentage:          99%
    capacity:            48.645%
    technology:          lithium-polymer
    icon-name:          'battery-full-charged-symbolic'

As you can see, after ~3 years of extensive use I should really look for a replacement battery for this laptop; at 48% capacity, it's not really doing very well.

To enumerate device paths, use the -e flag like this :

$ upower -e 
/org/freedesktop/UPower/devices/line_power_AC
/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/keyboard_0003o046DoC52Bx0004
/org/freedesktop/UPower/devices/mouse_0003o046DoC52Bx0005
/org/freedesktop/UPower/devices/DisplayDevice

Now we can check how that external keyboard's battery is doing.
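
That's the same -i invocation as before, this time pointed at the keyboard path from the enumerated list above:

$ upower -i /org/freedesktop/UPower/devices/keyboard_0003o046DoC52Bx0004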

  native-path:          /sys/devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.2/0003:046D:C52B.0003/0003:046D:C52B.0004
  vendor:               Logitech, Inc.
  model:                K750
  serial:               D9ED612B
  power supply:         no
  updated:              Sat 02 Sep 2017 19:59:15 BST (29 seconds ago)
  has history:          yes
  has statistics:       no
  keyboard
    present:             yes
    rechargeable:        yes
    state:               discharging
    warning-level:       none
    luminosity:          80 lx
    percentage:          55%
    icon-name:          'battery-good-symbolic'
  History (charge):
    1504378755	55.000	discharging


Clearly the light in this room, right now, isn't bright enough to be charging the keyboard via its solar cells. Might leave it closer to the window tomorrow.

As you can see from the enumerated list, there is line_power_AC as well as the mouse (which is actually a trackpad I use). And if you are so inclined (I wasn't, but just did this for all my laptops..) you can track this info and graph it, push it to your monitoring service, etc.
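
As a rough sketch of what that could look like (the device path, interval and output file are just placeholders), something like this would give you a CSV you can feed to whatever grapher or monitoring service you like:

#!/bin/bash
# Sample the battery percentage every 5 minutes and append it to a CSV (placeholder path and interval)
DEV=/org/freedesktop/UPower/devices/battery_BAT0
OUT=/var/tmp/battery.csv
while true; do
  pct=$(upower -i "$DEV" | awk '/percentage/ {print $2}')
  echo "$(date +%s),${pct}" >> "$OUT"
  sleep 300
done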

from the readme file:
UPower is an abstraction for enumerating power devices,
listening to device events and querying history and statistics.
Any application or service on the system can access the
org.freedesktop.UPower service via the system message bus.

Give it a shot.

August 31, 2017

Come help build duffy2 for CiCo

August 31, 2017 10:36 AM

When I came onboard with Red Hat, one of the key impacts I wanted to use Red Hat resources for was to help the wider open source community write, manage and deliver better code. It was with that goal that I conceptualised, bootstrapped, argued for and then got the https://ci.centos.org/ project started up. Using well-established industry standards (Jenkins!) I was able to rapidly build out the provisioning infra around it, with copious amounts of Fabian’s help. My focus, at the time, was that it should be simple enough to just work, but capable enough to keep working. There were many hacks involved, making it impossible to really adapt and grow it outside of that service.

Hundreds of thousands of CI jobs later, I think we can call that bootstrap a success.

Today, as we move forward to adding more machine types and extending support for what we have, it gives me great pleasure to start talking about how the pieces come together and how the service backend works, and to open the entire stack up for folks to come help us get better, faster and better-tested, and deliver duffy as a running service built on modern service development methodologies.

Come join me at https://github.com/kbsingh/duffy2 as we bootstrap the next instance of this service. Everyone’s welcome!

I also want to remind people that https://ci.centos.org is open to any open source project that can benefit from it ( including the access to bare metal hosts on demand ).

regards,

Git 2 on CentOS Linux 7

August 31, 2017 12:56 AM

The distro-shipped git is still at version 1.8, but if you need or want a newer git version there are a few options. The CentOS SCL SIG (https://wiki.centos.org/SpecialInterestGroup/SCLo) publishes a git212 collection that hosts git version 2.12.2 (at the moment; it will get updates as they become available). There is a collection for git 2.5 as well (called sclo-git25), should you want that version.

In order to get set up, first install the centos-release-scl package on the machine; that will set up the SCL yum repos and the SIG’s RPM signing key.
yum install centos-release-scl

With that in place, you should be able to check what scl collections are available for git with a yum command like this :
yum list sclo-git\*

And then install the version you want with :
yum install sclo-git212.x86_64

Once that completes, you can check that the scl is installed and working with something like this :
$ scl enable sclo-git212 /bin/bash
$ git --version
git version 2.12.2

This is good, but I find it a pain to need to enable SCLs all the time, so I use a line in my bashrc like this :
source scl_source enable sclo-git212

With that in place, every shell now has git version 2, and any other apps you run in the shell will see this version of git as well.

August 04, 2017

Keeping an eye on CentOS performance with Grafana

August 04, 2017 07:00 AM

I’ve spent a bit of time setting up CentOS as a home router due to a number of frustrations with existing home routers on the market. This was both a good exercise and a bit of nostalgia from my early days with Linux. Once I’d finished getting the basics set up, I wanted a way to track various statistics. Network traffic, disk usage, etc. The venerable cacti is certainly an option, but that’s feeling a bit legacy these days. I’d prefer to use a newer tool with a more modern feel. This is what led me to Grafana. Below is a basic walkthrough for how I’ve set things up. This is a very basic install that incorporates collectd, InfluxDB, and Grafana all on the same host.

Grafana Screenshot

Collectd

What, you thought I’d jump straight into Grafana? We have to have data to collect first, and the best way to do that on CentOS is via collectd.

The simplest way to get collectd on CentOS is via the EPEL repository. If you’re new to CentOS, or aren’t familiar with Fedora’s EPEL repo, the command below is all you need to get started.

yum install epel-release

Now that the EPEL repo is enabled, it’s easy enough to install collectd in the same manner:

yum install collectd

There are a number of additional collectd plugins available in EPEL, but for our purposes here the base is enough. I would encourage you to explore the available plugins if your needs aren’t met by the base plugin.

Now that it’s installed, we need to configure collectd to send data out. Collectd generates the stats, but we need to put them someplace that Grafana can use.

In /etc/collectd.conf there are a few things we need to configure. In the Global section, uncomment the lines for Hostname, BaseDir, PIDFile, PluginDir, and TypesDB. You’ll need to modify Hostname, but the rest should be fine as the defaults. It should look something like the snippet below:

Hostname    "YourHostNameHere"
#FQDNLookup   true
BaseDir     "/var/lib/collectd"
PIDFile     "/var/run/collectd.pid"
PluginDir   "/usr/lib64/collectd"
TypesDB     "/usr/share/collectd/types.db"

Now that we have the basic app information set, we need to enable the plugins we wish to use. For my instance, I have syslog, cpu, disk, interface, load, memory, and network uncommented. Of these, the default values are fine for everything except network. The network plugin is used to send data to our collector, which in this case is influxdb. The network plugin will need to point to your influxdb server. Since we’re doing everything locally in this example, we’re pointing to localhost. It should look like the following:

<Plugin network>
  Server "127.0.0.1" "8096"
</Plugin>

InfluxDB

Now that we’re done with collectd, we have to configure InfluxDB to accept the data collectd is sending. Since InfluxDB isn’t in EPEL, we’ll have to pull it in from their repository. The command below makes it easy.

cat <<EOF > /etc/yum.repos.d/influxdb.repo
[influxdb]
name = InfluxDB Repository - RHEL \$releasever
baseurl = https://repos.influxdata.com/centos/\$releasever/\$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key
EOF

Once that’s done, install the package with yum install influxdb and then it’s ready to configure. There are only a few things that need to happen in the /etc/influxdb/influxdb.conf config file.

In the [http] section of your /etc/influxdb/influxdb.conf, set enabled = true and leave bind-address at the default ":8086". It should look like this:

[http]
  # Determines whether HTTP endpoint is enabled.
  enabled = true

  # The bind address used by the HTTP service.
  bind-address = ":8086"

Then scroll down to the [[collectd]] section and configure it like the section below:

[[collectd]]
  enabled = true
  bind-address = ":8096"
  database = "collectd"
  typesdb = "/usr/share/collectd"

At this point we can go ahead and start both services to ensure that they’re working properly. To begin, we’ll enable collectd, and ensure that it’s sending data. As with other services, we’ll use systemd for this. In the sample below, you’ll see the commands used, and the output of a running collectd daemon.

[jperrin@monitor ~]$ sudo systemctl enable collectd
[jperrin@monitor ~]$ sudo systemctl start collectd
[jperrin@monitor ~]$ sudo systemctl status collectd
● collectd.service - Collectd statistics daemon
   Loaded: loaded (/usr/lib/systemd/system/collectd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-08-02 11:22:18 PDT; 6min ago
     Docs: man:collectd(1)
           man:collectd.conf(5)
 Main PID: 18366 (collectd)
   CGroup: /system.slice/collectd.service
           └─18366 /usr/sbin/collectd

Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "disk" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "interface" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "load" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "memory" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "network" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: Systemd detected, trying to signal readyness.
Aug 2 11:22:18 monitor collectd[18366]: Initialization complete, entering read-loop.
Aug 2 11:22:18 monitor systemd[1]: Started Collectd statistics daemon.

Now that collectd is working, start up InfluxDB and make sure it’s gathering data from collectd.

[jperrin@monitor ~]$ sudo systemctl enable influxdb
[jperrin@monitor ~]$ sudo systemctl start influxdb
[jperrin@monitor ~]$ sudo systemctl status influxdb
● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/usr/lib/systemd/system/influxdb.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-07-29 18:28:20 PDT; 1 weeks 6 days ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 23459 (influxd)
   CGroup: /system.slice/influxdb.service
           └─23459 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Aug 2 10:35:10 monitor influxd[23459]: [I] 2017-08-12T17:35:10Z SELECT mean(value) FROM collectd.autogen.cpu_value WHERE host =~ /^monitor$/ AND type_instance = 'interrupt' AND time > 417367h GR...) service=query
Aug 2 10:35:10 monitor influxd[23459]: [httpd] 172.20.1.40, 172.20.1.40,::1 - - [12/Aug/2017:10:35:10 -0700] "GET /query?db=collectd&epoch=ms&q=SELECT+mean%28%22value%22%29+FROM+%22load_shortte...ean%28%22value%
Aug 2 10:35:10 monitor influxd[23459]: [I] 2017-08-02T17:35:10Z SELECT mean(value) FROM collectd.autogen.cpu_value WHERE host =~ /^monitor$/ AND type_instance = 'nice' AND time > 417367h GROUP B...) service=query

As we can see in the output above, the service is working, and the data is being collected. From here, the only thing left to do is present it via Grafana.
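
If you want an extra sanity check before moving on, you can also ask InfluxDB directly whether collectd measurements are arriving, using the influx CLI that ships with the influxdb package:

influx -database collectd -execute 'SHOW MEASUREMENTS'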

Grafana

To install Grafana, we’ll create another repository as we did with InfluxDB. Unfortunately the Grafana folks don’t keep release versions separate in the repo, so this looks like we’re using an EL6 repo despite doing this work on EL7.

cat <<EOF > /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://packagecloud.io/grafana/stable/el/6/\$basearch
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
EOF

Now that the repository is in place and enabled, we can install grafana the same as the rest: yum install grafana. Once this is done, we can start working on the configuration. For this tutorial we’re just going to set an admin username and password, since it’s a single-user instance. I would absolutely encourage you to read the docs if you want to start doing a bit more with Grafana.

To accomplish this reasonably basic configuration, simply uncomment the admin_user and admin_password lines in the [security] section of /etc/grafana/grafana.ini, and set your own values. In this instance I’m using admin/admin, because that’s what you do in examples, right?

[security]
# default admin user, created on startup
admin_user = admin

# default admin password, can be changed before first start of grafana,  or in profile settings
admin_password = admin

Collectd data source for grafana

Now you can start grafana with systemctl start grafana-server, and configure it via the web interface. After you log in for the first time, you’ll be prompted to configure a few things including a data source, and a dashboard. Since we’re doing this all on the localhost, you’ll be able to cheat and use the data source settings in the screenshot. Don’t worry, we’re nearly there and there’s only a little left to do.
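
If you’d rather script that step than click through the UI, the same data source can be created with a call to Grafana’s HTTP API. A minimal sketch, assuming the admin/admin credentials and the localhost ports used above:

curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://localhost:3000/api/datasources \
  -d '{"name":"collectd","type":"influxdb","url":"http://localhost:8086","access":"proxy","database":"collectd"}'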

Once you have the datasource configured, you’ll be prompted to create your first dashboard. While you can certainly do this, it’s a little intimidating for a first run with grafana. One easy solution to this is to import one of the available templates offered on Grafana’s website. In my case, I opted to use the Host Overview. It provides a nice group of metrics and graphs as a base to use and build from.

Once you’ve gotten everything set up, it’s now down to personal preference and further tinkering. Once again I would very much recommend reading the documentation, because there is a wealth of options and settings I didn’t touch on for this intro.


July 27, 2017

Using NFS for OpenStack (glance,nova) with selinux

July 27, 2017 10:00 PM

As announced already, I was (among other things) playing with OpenStack/RDO and had deployed a small OpenStack setup in the CentOS Infra. Then I had to look at our existing DevCloud setup. This setup was based on OpenNebula running on CentOS 6, and also using Gluster as the backend for the VM store. That's when I found out that Gluster isn't a valid option anymore: Gluster was deprecated and has now even been removed from Cinder. Sad, as one advantage of Gluster is that you could (you had to!) use libgfapi so that the qemu-kvm process could talk directly to Gluster rather than accessing VM images over locally mounted Gluster volumes (please, don't even try to do that through FUSE).

So what could be a replacement for Gluster on the OpenStack side? I still have some dedicated nodes for storage backend[s], but not enough to even think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, only the driver was removed from Cinder; I could have still tried to use Gluster just for Glance and Nova, as I have no need for Cinder in the DevCloud project, but clearly that would be dangerous for potential upgrades.)

It's not that I'm a fan of storing qcow2 images on top of NFS, but it seems it was my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later. So let's test this, then use NFS over InfiniBand (using IPoIB), and so at "good speed" (I still have the InfiniBand hardware in place that was running for Gluster, which will be replaced).

It's easy to mount the NFS-exported dir under /var/lib/glance/images for Glance, and then on every compute node also mount an NFS export under /var/lib/nova/instances/.
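
For reference, the corresponding mounts can be as simple as two fstab entries. A sketch where the NFS server name and export paths are placeholders, not the real ones:

# /etc/fstab (hypothetical server and export paths)
nfs-server.internal:/exports/glance  /var/lib/glance/images   nfs4  defaults,_netdev  0 0
nfs-server.internal:/exports/nova    /var/lib/nova/instances  nfs4  defaults,_netdev  0 0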

That's where you have to check what would be blocked by SELinux, as the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't seem to allow that.

I initially tested the services one by one and decided to open a pull request for this, but in the meantime I built a custom SELinux policy that seems to do the job in my RDO playground.

Here is the .te that you can compile into a usable .pp policy file:

module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read};

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
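
To turn that .te into a loadable .pp module, the usual toolchain (checkmodule and semodule_package from the standard SELinux utilities) does the job:

checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te
semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
semodule -i os-local-nfs.pp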

Of course, you also need to enable some booleans. Some are already loaded by openstack-selinux (you can see the enabled booleans by looking at /etc/selinux/targeted/active/booleans.local), but you now also need virt_use_nfs=1.
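
Setting that boolean persistently is a one-liner:

setsebool -P virt_use_nfs 1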

Now that it works, I can replay all of that (all coming from Puppet) on the DevCloud nodes.

July 22, 2017

Hands-on with a Minnowboard Dual-e

July 22, 2017 07:00 AM

Connected Minnowboard Dual-e

I recently got my hands on one of the dual ethernet Minnowboards from Adi Engineering. I’ve been on the hunt for a nice low power, small form factor development board for a while, but nearly everything available in my price range comes with a single network card.

This board is a bit of an improvement over previous Minnowboards, with an M.2 slot and dual ethernet, but it lacks the onboard eMMC available on previous versions. Since I had a few spare M.2 SSDs around, it’s not a huge deal for my purposes. Once I’ve gotten through testing this board out, the plan is to build out a demo cluster to bring around to various conferences to showcase what we’re currently doing with the distribution, so you may hear a bit more from me on this in the future.

July 21, 2017

A Fresh Start

July 21, 2017 07:00 AM

For the last few years, I’ve not really cared at all about a semi-permanent slice of home on the internet. I’ve stuck mostly with twitter and only the occasional blog post, usually on someone else’s platform. A few folks like Ben Cotton have tried to reform me. They’ve gotten me to the point where I’m starting to feel a little guilty about being a digital vagrant…and so here we are.

I can’t promise miracles, but I am going to try to write more frequently, and rebuilding some proper website tooling seemed like an interesting way to go about preparing. This time, if I stop maintaining this little website slice, I’ll at least have the decency to feel guilty about it.


May 15, 2017

Linking Foreman with Zabbix through MQTT

May 15, 2017 10:00 PM

It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time", as I recently needed to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7).

Within the CentOS Infra, we use Foreman as an ENC for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative, and so automatically configure the monitoring solution you have in place in your infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

What I've seen so far is that you use exported resources on each node, store them in another PuppetDB, and then on the monitoring node reapply all those resources. The problem with such a solution is that it's "expensive" and, when one thinks about it, a little bit strange to export the "knowledge" from Foreman back into another DB and then let Puppet compile a huge catalog on the monitoring side, even if nothing has changed.

Another issue is that in our Zabbix setup we also have some nodes that aren't really managed by Foreman/Puppet (but by other automation around Ansible), so I had to use an intermediate step that other tools can also use/abuse for the same reason.

The other reason is that I admit I'm a fan of "event-driven" configuration changes, so my idea was:

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (asynchronously, so that it doesn't slow down the Foreman update operation itself)
  • let Zabbix server know that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components :

Here is a small overview of the process :

Foreman MQTT Zabbix

Foreman hooks

Setting up foreman hooks is really easy: just install the package itself (tfm-rubygem-foreman_hooks.noarch), read the documentation, and then create your scripts. There are some examples for Bash and Python in the examples directory, but basically you just need to place some scripts at specific place[s]. In my case I wanted to "trigger" an event in the case of a node update (like adding a puppet class, or a variable/parameter change), so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.
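
To give an idea of the shape of such a hook, here is a minimal sketch; the file name, broker host and topic are invented for illustration, and it assumes the foreman_hooks behaviour of passing the event and object as arguments with the host JSON on stdin:

#!/bin/bash
# /usr/share/foreman/config/hooks/host/managed/update/10_publish_mqtt.sh (illustrative name)
event="$1"        # e.g. "update"
object="$2"       # e.g. the host FQDN
payload=$(cat)    # JSON representation of the host, as provided on stdin
# Publish to a hypothetical broker/topic; TLS and ACL options omitted for brevity
mosquitto_pub -h mqtt.internal -t "foreman/host/${event}" -m "${payload}"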

One little remark though: if you add a new hook file, don't forget to restart Foreman itself so that it picks up the hook, otherwise it will still be ignored and so never run.

Mosquitto

Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason why I selected Mosquitto is that it's very lightweight (the package size is under 200KB), and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest you read Jan-Piet Mens' dedicated blog post about it. I even admit that I discovered it by attending one of his talks on the topic, back in the Loadays.org days :-)

Zabbix-cli

While one can always talk to the raw API with Zabbix, I found it useful to use a tool I was already using for various tasks around Zabbix: zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are on CBS.

So I plumbed it into a systemd unit file that subscribes to the specific MQTT topic, parses the needed information (like the hostname and the Zabbix templates to link, unlink, etc.) and then updates that in Zabbix itself (from the log output):

[+] 20170516-11:43 :  Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "dev-registry.lon1.centos.org" 
[Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: dev-registry.lon1.centos.org ({"hostid":"10174"})
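
For the curious, the subscriber side of such a unit can be as small as a loop around mosquitto_sub feeding zabbix-cli. A rough sketch where the broker, topic and message fields are all assumptions made for illustration, not the actual format used here:

#!/bin/bash
# Sketch of the consumer the systemd unit would run (topic and JSON fields are invented)
mosquitto_sub -h mqtt.internal -t 'foreman/host/update' | while read -r msg; do
  host=$(echo "${msg}" | jq -r '.name')
  template=$(echo "${msg}" | jq -r '.zabbix_template')
  zabbix-cli -C "link_template_to_host '${template}' '${host}'"
done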

Cool, so now I don't have to worry about forgetting to tie a Zabbix template to a host, as it's now done automatically. Needless to say, the deployment of those tools was of course automated and coming from Puppet/Foreman :-)


Powered by Planet!
Last updated: April 27, 2018 07:30 AM