November 04, 2019

CentOS Community newsletter, November 2019 (#1911)

November 04, 2019 07:33 PM

Dear CentOS enthusiast,

CentOS is more than just code. If you want to contribute in other non-code ways - documentation, design, promotion, events - we want to hear from you. See the "Contributing" section below for more details.

IN THIS EDITION:

News

This month the infrastructure team has been working hard on getting CentOS 8 and CentOS Stream into the CBS (Community Build System). On the 29th, Thomas announced that this work had been completed and detailed what still needs to be done. If you're interested in building packages against either of these targets, you're encouraged to read that mailing list thread thoroughly, and ask any questions you may have there.
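If you're curious what a build submission looks like, it's the usual koji-style workflow used for CBS; a rough sketch (the target is a placeholder - check the announcement thread for the real CentOS 8 / Stream targets, and the usual CBS authentication/certificate setup described on the wiki is still required):

yum install centos-packager    # provides the 'cbs' koji client
cbs build --scratch <target> my-package-1.0-1.el8.src.rpm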

Earlier in the month, a meeting was held in Boston including representatives from various parts of Red Hat, discussing what needed to be done internally to facilitate the cooperation between the Red Hat Enterprise Linux (RHEL) Engineering and QE teams, and the CentOS community. There too, plenty remains to be done, but we're making progress towards making this a true upstream of RHEL. We appreciate your patience as we make the many changes that are needed to make this a success.

If you're considering using CentOS Stream, either in production, or as a development platform, we'd love to hear from you. We particularly want to hear what we can do better to help you succeed, so that we can make this platform something that serves everyone's needs.

Releases and updates

This month has seen a moderate number of updates/releases:

Errata and Enhancements Advisories

We issued the following CEEA (CentOS Errata and Enhancements Advisories) during October:

Errata and Security Advisories

We issued the following CESA (CentOS Errata and Security Advisories) during October:

Errata and Bugfix Advisories

We issued the following CEBA (CentOS Errata and Bugfix Advisories) during October:

Events

October was a quiet month for events, but we do have a couple of upcoming events that we want to be sure are on your calendar:

SuperComputing 19, Denver

As you may know, SuperComputing is overwhelmingly powered by CentOS. We'll be at SuperComputing19 in Denver in just a few weeks, hanging out at the Red Hat booth to discuss your SuperComputing and HPC needs.

FOSDEM and the CentOS Dojo

FOSDEM is one of the largest, and oldest, open source gatherings in the world. CentOS has had a presence there for many years, and we plan to be there again in 2020. FOSDEM is, as usual, the first weekend in February (Feb 1-2, 2020) in Brussels, Belgium.

CentOS expects to have a table in the main exhibitor area (we'll find out for sure in a couple weeks), and, from a content perspective, we encourage you to keep an eye on the distributions devroom, where content relating to CentOS, and other Linux distributions, will be presented.

Also, like every year, we plan to hold our CentOS Dojo on the Friday before FOSDEM - January 31st - at the Marriott Grand Place. Details are on the CentOS wiki. The call for presentations is now open. We want to hear what you're working on which may be of interest to the CentOS community. Have a look at last year's schedule for an idea of what kinds of talks we've run in the past.

The call for presentations closes on November 18th, so that we have time to build the schedule and promote the event a little more widely. So don't wait!

Contributing

As with any open source project, there's a lot more than just code. If you want to get involved, but you're not a programmer or packager, there's still a ton of places where you can plug in.

  • Design - Graphic and design elements for the product itself, the website, materials for events, and so on, are always in great demand. This is true of any open source community, where the focus on code can tend to neglect other aspects.
  • Events - While CentOS has an official presence at a few events during the year, we want a wider reach. If you're planning to attend an event, and want to represent CentOS in some way, get in touch with us on the centos-promo mailing list to see how we can support you.
  • Promotion - The Promo SIG does a lot in addition to just events. This includes this newsletter, our social media presence, blog posts, and various other things. We need your help to expand this effort.
  • Documentation - Any open source project is only as good as its documentation. If people can't use it, it doesn't matter. If you're a writer, you are in great demand.

If any of these things are of interest to you, please come talk to us on the centos-devel mailing list, the centos-promo mailing list, or any of the various social media channels.

We look forward to hearing from you, and helping you figure out where you can fit in.

October 28, 2019

Fixing heat/fan issue on Thinkpad t490s running CentOS 8/Stream

October 28, 2019 11:00 PM

It's usually a good thing to receive a newer laptop, as that typically means shiny new hardware, better performance and also better battery life. I was really pleased with my previous Lenovo Thinkpad t460s, so the natural choice was its successor, the t490s (also because it was the default model following company standard).

When I received the laptop, I was a little bit surprised (I had no real time to review/analyze it in advance) by some choices:

  • No SD card reader anymore (useful when having to "dd" some image for armhfp tests)
  • Old docking style is gone and you have to connect through usb-c/thunderbolt
  • Embedded gigabit ethernet in the t490s (Intel Corporation Ethernet Connection (6) I219-LM (rev 30)) isn't used at all when docked; traffic goes through a USB network device instead

Installing CentOS Stream (so running kernel 4.18.0-147.6.el8.x86_64 when writing this post) was a breeze, after I turned on SecureBoot (useful also because you can then use fwupd to get LVFS firmware updates automagically, as I did for my t460s).
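As a side note, the fwupd/LVFS part boils down to a few commands once the package is installed; a quick sketch (assuming your vendor actually publishes firmware for your model on LVFS):

dnf install fwupd
fwupdmgr refresh       # fetch the latest metadata from LVFS
fwupdmgr get-updates   # list firmware updates available for the detected devices
fwupdmgr update        # download and schedule them (usually applied at next reboot)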

But I quickly noticed a huge difference between my previous t460s and the new t490s: heat/temperature and therefore fan usage, to the point where it was really impossible to even use our official video-conferencing solution: fan going crazy, laptop unresponsive (load average climbing to ~16), and video/sound completely out of sync.

dmesg was also full of warnings like these:

[248849.131909] CPU1: Core temperature/speed normal
[248894.211874] CPU1: Package temperature above threshold, cpu clock throttled (total events = 1221232)
[248894.211897] CPU5: Package temperature above threshold, cpu clock throttled (total events = 1221232)
[248894.211902] CPU3: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211903] CPU0: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211903] CPU6: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211904] CPU4: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211905] CPU2: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211905] CPU7: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.212895] CPU1: Package temperature/speed normal
[248894.212895] CPU5: Package temperature/speed normal
[248894.212908] CPU4: Package temperature/speed normal

After some quick research, I found some links about known issues on some (recent) Lenovo thinkpads and some possible solutions explaining the issue[s]:

Nice, or not so nice (still waiting for Lenovo to fix this through a firmware update for the t490s at the time of writing this blog post). I quickly rebuilt a community-proposed fix, and the rpm is available in my Copr repository.

But, as stated on that github repo, it doesn't work with SecureBoot, so I temporarily disabled SecureBoot to test the fix. It wasn't magical either, so I decided to re-enable SecureBoot and go back to "normal" mode.
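If you want to quickly confirm which mode you're booted in after toggling the firmware setting, mokutil can report it; a small sketch (assuming the mokutil package is installed):

mokutil --sb-state

It prints either "SecureBoot enabled" or "SecureBoot disabled".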

Then I found another interesting forum thread about t480 and fan/heat issue, so I decided to have a look.

Indeed: 'Thunderbolt BIOS Assist Mode' was disabled in my case too (I wonder why it shipped that way, given the laptop came with RHEL 8 installed and pre-loaded). Let's enable it and see how that goes:

T490s settings

OMG! Instead of having a terminal open with "watch sensors" running, I wanted to have a quick look directly from gnome, so I just installed the gnome-shell-extension-system-monitor-applet (available now in epel8-testing), and so far so good:
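For reference, pulling it in looked roughly like this (a sketch; the package was still in the testing repository at the time, so the repo name/flag may change once it reaches stable EPEL):

dnf install epel-release
dnf --enablerepo=epel-testing install gnome-shell-extension-system-monitor-applet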

When running a normal workload, while connected to the dock and two external displays, it runs like this:

temperature

And yesterday I was happy (the ultimate test) to be on a video conference call for more than an hour, with no video/sound issues; the temperature climbed a little, but nothing unusual for such a call:

temperature

Hope it helps, not only if you run Linux on a t490s but on any recent Lenovo Thinkpad (or even Yoga, it seems) model. I'm still waiting on Lenovo to release firmware for the throttling issue, but at least the laptop is currently usable :)

October 08, 2019

CentOS Community newsletter, October 2019 (#1910)

October 08, 2019 03:40 PM

Dear CentOS enthusiast,

If you'd like to help out with the process of putting together the newsletter, please see the Contributing section at the end. We're always looking for help!

IN THIS EDITION:

Releases and updates

The big news in September was the release of CentOS Linux 8, along with CentOS Stream. CentOS Linux 8 is exactly what you expected - a rebuild of Red Hat Enterprise Linux (RHEL) 8 - but CentOS Stream is a new aspect of the CentOS Project that needs a little more explanation.

CentOS Stream is a rolling preview of what will be in the next minor release of RHEL. CentOS Stream will be updated regularly (the exact cadence is still a work in progress) and will give you the opportunity to verify your code and workloads against what’s coming next.

The motivation for doing this is to provide a platform where people can develop against CentOS Stream, and, by doing so, be ready for market the day that the next minor version of RHEL ships. CentOS Stream will be developer beta level code (not alpha) containing features ready for validation to include in the next minor release of RHEL. Red Hat wants CentOS Stream to be a great experience for developers to target the next minor release of RHEL (released every 6 months). Code that is delivered to CentOS Stream is what Red Hat engineers intend to go into the next minor release of RHEL and will have gone through CI.

If you’re interested in building a project on Stream, we encourage you to look into the SIGs - https://wiki.centos.org/SpecialInterestGroup - which are a place to package and test on CentOS, using the Community Build System (CBS) and the CentOS CI. Bring your ideas to the centos-devel mailing list, and we’ll help you figure out the way forward.

Finally, note that this is still a work in progress. We hope to have regular updates to CentOS Stream within the next few months, but tooling for that does not exist yet, and so there will be a lot of manual processes at first. We appreciate your patience while we get things up and running.

We are working on a feedback mechanism that is going to evolve over time. CentOS Stream must have the ability to get feedback and suggestions to be successful. We will announce details as things solidify.

You can download CentOS Stream, as well as CentOS Linux, at https://www.centos.org/download/ and you can read more details on the centos-devel mailing list, at https://lists.centos.org/pipermail/centos-devel/2019-October/017922.html

Errata and Enhancements Advisories

We issued the following CEEA (CentOS Errata and Enhancements Advisories) during September:

Errata and Security Advisories

We issued the following CESA (CentOS Errata and Security Advisories) during September:

Errata and Bugfix Advisories

We issued the following CEBA (CentOS Errata and Bugfix Advisories) during September:

Other releases

The following releases also happened during September:

Events

In September, we had a presence at the Webpros Summit (formerly the cPanel conference) in Atlanta, Georgia. The cPanel community are long-term supporters of CentOS, so this is always a fun event. It was also a great place for some early conversations about CentOS Stream as a place to develop and test products.

While there, Johnny Hughes gave an excellent presentation about the CentOS Linux 8 release, what's in it, and why it was a longer process than usual.

As usual, there's a number of events coming up where you can find members of the CentOS community.

October 28–30, in Portland, we'll be at LISA19, the premier conference for operations professionals, where we share real-world knowledge about designing, building, securing, and maintaining the critical systems of our interconnected world. Come see us at the Red Hat booth with your CentOS questions and stories.

Then, in November, we'll be in Denver at SC19 - the international conference for high performance computing, networking, storage and analysis. Once again, come see us at the Red Hat booth. As usual, our main interest there is the always-awesome Student Cluster Competition, where tomorrow's HPC experts compete to build a supercomputer with off-the-shelf hardware and open source software ... and most of them choose CentOS. Supercomputing is #PoweredByCentOS!

Finally, I want to keep reminding you that we'll be doing another Dojo at FOSDEM, on January 31st 2020. Details will be coming soon to the CentOS Wiki. Think about what you might want to present about, and be sure to join us in Brussels!

Contributing to CentOS Pulse

We are always on the look-out for people who are interested in helping to:

  • Tell us what you're working on
  • Provide a report from the SIG on which you participate
  • Tell us about an event that you attended where there was CentOS content
  • Write an article on an interesting person or topic
  • Tell us about a news article that covered the use of CentOS in an interesting way
  • Suggest a topic that you'd like to see someone else write an article on

Please see the page with further information about contributing. You can also contact the Promotion SIG, or just email Rich directly (rbowen@centosproject.org) with ideas or articles that you'd like to see in the next newsletter.

 

 

September 24, 2019

CentOS 8 and CentOS Stream released

September 24, 2019 08:11 PM

We are excited to announce the release of CentOS 8, and of the new RHEL upstream, CentOS Stream. Details can be found on the CentOS-Announce mailing list.

September 17, 2019

Release for CentOS Linux 7 (1908) on the x86_64 Architecture

September 17, 2019 02:55 PM

We are pleased to announce the general availability of CentOS Linux 7 (1908) for the x86_64 architecture. Effective immediately, this is the current release for CentOS Linux 7 and is tagged as 1908, derived from Red Hat Enterprise Linux 7.7 Source Code.

Full details are on the centos-devel mailing list.

September 02, 2019

CentOS Community Newsletter, September 2019 (#1909)

September 02, 2019 12:51 PM

Dear CentOS enthusiast,

If you'd like to help out with the process of putting together the newsletter, please see the Contributing section at the end. We're always looking for help!

IN THIS EDITION:

Releases and updates

August was unusually slow in terms of updates and errata - primarily because everyone has been focused on the CentOS 8 build.

Errata and Security Advisories

We issued the following CESA (CentOS Errata and Security Advisories) during August:

Errata and Bugfix Advisories

We issued the following CEBA (CentOS Errata and Bugfix Advisories) during August:

Events

August was another busy month for CentOS events.

At the beginning of the month, CentOS had a presence at DevConf.IN, the annual developer event in India. Vipul Siddharth represented us there, and wrote up a summary of that event.

The following week, we had a table at Flock, the annual Fedora conference, in Budapest, and Vipul also wrote a great writeup of that event on his blog.

On the 14th, we held our second annual CentOS Dojo at DevConf.US, featuring talks about Keylime, Terraform, Buildah, and other topics. We had roughly 35 people in attendance. The videos of the presentations are now available on the CentOS YouTube channel.

Then, we were at the Red Hat booth at the Open Source Summit in San Diego, August 21-23. We were able to meet many people who use CentOS in a variety of industries, and find out about their interests and concerns. If you dropped by, thanks. It's always a pleasure to talk with you at events.

Next month, we'll be at the cPanel event in Atlanta, September 23rd - 25th at the Atlanta Marriott Marquis. Our own Johnny Hughes will be talking about what's up with CentOS 8, and we'll have a booth where you can drop by for your CentOS swag needs. As you probably know, CentOS is the backbone of the web hosting industry, and the cPanel event is where they gather to discuss their trade. I hope to see you there!

And, looking forward just a little further, remember that FOSDEM is coming in just a few months, and we'll be there. We will, once again, be running a Dojo at FOSDEM. You can see details from this year's event in the CentOS wiki, and the 2020 event should look similar. Watch Twitter, the mailing lists, or whatever is your preferred channel, for updates soon.

SIG (Special Interest Group) Report

SIGs - Special Interest Groups - are where people work on the stuff that runs on top of CentOS.

The following are the SIG reports for this month.

CentOS Virtualization SIG Quarterly Report

Purpose

Packaging and maintaining different FOSS based virtualization applications that one can install and run natively on CentOS.

https://wiki.centos.org/SpecialInterestGroup/Virtualization

Membership Update

We are always looking for new members.

No changes in members this month.

Releases and Packages

oVirt 4.3 has been released and Virt SIG repositories are publicly available. oVirt 4.4 development is in progress upstream now

Health and Activity

The Virtualization SIG remains fairly healthy. All the projects within the SIG provide regular updates at the biweekly meetings.

oVirt is planning a conference in Rome in October 2019.

Issues for the Board

oVirt pushed a patch to provide a CentOS appliance including the oVirt Guest Agent (https://github.com/CentOS/sig-cloud-instance-build/pull/127); it's under consideration for inclusion in CentOS 7.7.

oVirt would have been happy to consume CentOS 8 alpha/beta/development builds in order to be ready to ship packages for CentOS 8 at its GA. It would be nice for SIGs to get early access to the rpms.

 

Opstools quarterly report, June 1 - August 31, 2019

Purpose

Opstools provides tools for operators.

https://wiki.centos.org/SpecialInterestGroup/OpsTools

Membership update

No members left or were added to the SIG in the last quarter.

Health and activity

We are phasing out fluentd and sensu; patches have been proposed to OpenStack. Their respective replacements are rsyslog (included in RHEL) and collectd-sensubility. The latter is a plugin to collectd; it creates events in collectd which can be acted on like other collectd events.

Once CentOS 8 is available, we will rebuild all our packages against it; opstools packages used to be consumed by OpenStack Kolla, but since there are no CentOS 8 builds yet, this relationship has been dropped for now.

We intend to get the integration back, once there are builds based on CentOS 8.

Collectd has been updated to 5.9.0 and 5.9.1 upstream. We did not include these releases for now, as they contain some severe bugs.

Issues for the board

none at the moment.

Contributing to CentOS Pulse

We are always on the look-out for people who are interested in helping to:

  • Tell us what you're working on
  • Provide a report from the SIG on which you participate
  • Tell us about an event that you attended where there was CentOS content
  • Write an article on an interesting person or topic
  • Tell us about a news article that covered the use of CentOS in an interesting way
  • Suggest a topic that you'd like to see someone else write an article on

Please see the page with further information about contributing. You can also contact the Promotion SIG, or just email Rich directly (rbowen@centosproject.org) with ideas or articles that you'd like to see in the next newsletter.

 

 

August 07, 2019

CentOS Community Newsletter, August 2019 (#1908)

August 07, 2019 07:02 PM

Dear CentOS enthusiast,

It's been another busy month, but better a few days late than never!

If you'd like to help out with the process of putting together the newsletter, please see the Contributing section at the end. We're always looking for help!

Releases and updates

We had a very large number of updates/enhancements in July:

Errata and Enhancements Advisories

We issued the following CEEA (CentOS Errata and Enhancements Advisories) during July:

Errata and Security Advisories

We issued the following CESA (CentOS Errata and Security Advisories) during July:

Errata and Bugfix Advisories

We issued the following CEBA (CentOS Errata and Bugfix Advisories) during July:

Events

Last week we were at DevConf.in in Bangalore. If you dropped by, thanks!

Next week - August 14th - we'll be gathering at Boston University, in Boston, Massachusetts, for the second annual CentOS Dojo at DevConf.US. There's still space to register, if you wish to attend. In addition to the regular sessions, there will be an opportunity to give lightning talks about what you're working on, as requested by last year's attendees. More details are available on the event wiki page.

And the week after that - August 21-23 - we will be at the Open Source Summit in San Diego. Drop by to see us at the Red Hat booth!

If you are presenting anything about CentOS, at any event anywhere in the world, please do let us know, so that we can promote your presence there, and your talk.

If you'd like to run a CentOS Dojo, or other community event, we may be able to help. Get in touch via the centos-devel mailing list, or via our Twitter account @CentOSProject.

Contributing to CentOS Pulse

We are always on the look-out for people who are interested in helping to:

  • Tell us what you're working on
  • Provide a report from the SIG on which you participate
  • Tell us about an event that you attended where there was CentOS content
  • Write an article on an interesting person or topic
  • Tell us about a news article that covered the use of CentOS in an interesting way
  • Suggest a topic that you'd like to see someone else write an article on

Please see the page with further information about contributing. You can also contact the Promotion SIG, or just email Rich directly (rbowen@centosproject.org) with ideas or articles that you'd like to see in the next newsletter.

 

August 06, 2019

CentOS Dojo at DevConf.US – August 14th, 2019 in Boston

August 06, 2019 04:11 PM

The CentOS Project is pleased to be hosting a one-day Dojo, in conjunction with the upcoming DevConfUS conference, on August 14, 2019.

The one-day event, located on the campus of Boston University in the George Sherman Union Building, will feature talks on:

  • Running CentOS and Terraform on AWS
  • Supercomputing at NC State University
  • An Introduction to Keylime
  • Using Applications Streams
  • Lightning talks about what you’re working on

The event is free, but attendees should register for the event so planners can get an idea of attendance. 

In the evening we’ll be gathering at a local watering hole for less formal discussions, accompanied by food and great local beers - location to be announced on the day of the event!

CentOS will continue its presence at DevConfUS with a booth and various talks, so even if you miss the Dojo, there’s still plenty of time to meet with folks working on CentOS. We look forward to seeing you soon!

July 10, 2019

CentOS Atomic Host 7.1906 Available for Download

July 10, 2019 10:37 PM

The CentOS Atomic SIG has released an updated version of CentOS Atomic Host (7.1906), an operating system designed to run Linux containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host includes these core component versions:

  • atomic-1.22.1-26.gitb507039.el7.centos.x86_64
  • rpm-ostree-client-2018.5-2.atomic.el7.x86_64
  • ostree-2018.5-1.el7.x86_64
  • cloud-init-18.2-1.el7.centos.2.x86_64
  • docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
  • kernel-3.10.0-957.21.3.el7.x86_64
  • podman-1.3.2-1.git14fdcd0.el7.centos.x86_64
  • flannel-0.7.1-4.el7.x86_64
  • etcd-3.3.11-2.el7.centos.x86_64

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, or qcow2 image. For links to media, see the CentOS wiki.

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

# atomic host upgrade
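You can also check which deployment is currently booted and which one an upgrade staged (this is the same information rpm-ostree status shows); a quick sketch:

# atomic host status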

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation – join us!

You’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

July 09, 2019

IBM, Red Hat, and CentOS

July 09, 2019 02:50 PM

CentOS community,

Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat which will operate as a distinct unit within IBM moving forward.

What does this mean for Red Hat’s contributions to the CentOS project?

In short, nothing.

Red Hat always has and will continue to be a champion for open source and projects like CentOS. IBM is committed to Red Hat’s independence and role in open source software communities so that we can continue this work without interruption or changes.

Our mission, governance, and objectives remain the same. We will continue to execute the existing project roadmap. Red Hat associates will continue to contribute to the upstream in the same ways they have been. And, as always, we will continue to help upstream projects be successful and contribute to welcoming new members and maintaining the project.

We will do this together, with the community, as we always have.

If you have questions or would like to learn more about today’s news, I encourage you to review the list of materials below. Red Hat CTO Chris Wright will host an online Q&A session in the coming days where you can ask questions you may have about what the acquisition means for Red Hat and our involvement in open source communities. Details will be announced on the Red Hat blog

More info:

Press release

Chris Wright blog - Red Hat and IBM: Accelerating the adoption of open source

FAQ on Red Hat Community Blog

July 07, 2019

Updated CentOS Vagrant Images Available (v1905.01)

July 07, 2019 06:19 AM

We are pleased to announce new official Vagrant images of CentOS Linux 6.10 and CentOS Linux 7.6.1810 for x86_64. All included packages have been updated to May 30th, 2019.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems (see the sketch after this list).

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools
  4. Some people reported "could not resolve host" errors when running the centos/7 image for VirtualBox on Windows hosts. We don't have access to any Windows computer, but some people reported that adding the following line to the Vagrantfile fixed the problem:
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]
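For the NFS and sshfs alternatives mentioned in item 1, the corresponding Vagrantfile lines would look roughly like this (a sketch; the sshfs variant assumes you have installed the plugin with "vagrant plugin install vagrant-sshfs", and NFS typically also requires a private network on the VM):

    # NFS synced folder:
    config.vm.synced_folder ".", "/vagrant", type: "nfs"
    # or, with the vagrant-sshfs plugin:
    config.vm.synced_folder ".", "/vagrant", type: "sshfs"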

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

Downloads

The official images can be downloaded from Vagrant Cloud. We provide images for HyperV, libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc
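If gpg reports the signing key as unknown, you can import the CentOS 7 key first; on a CentOS 7 machine it already ships on disk with the centos-release package (a sketch, using that path):

$ gpg --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7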

Once you are sure that the checksums are properly signed by the CentOS Project, you have to include them in your Vagrantfile (Vagrant unfortunately ignores the checksum provided from the command line). Here's the relevant snippet from my own Vagrantfile, using v1803.01 and VirtualBox:

Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"

  config.vm.provider :virtualbox do |virtualbox, override|
    virtualbox.memory = 1024
    override.vm.box_download_checksum_type = "sha256"
    override.vm.box_download_checksum = "b24c912b136d2aa9b7b94fc2689b2001c8d04280cf25983123e45b6a52693fb3"
    override.vm.box_url = "https://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1803_01.VirtualBox.box"
  end
end

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or in #centos on Freenode IRC.

Acknowledgements

I would like to warmly thank Brian Stinson, Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images. I would also like to thank the CentOS Project Lead, Karanbir Singh, without whose years of continuous support we wouldn't have had the Vagrant images in their present form.

I would also like to thank the following people (in alphabetical order):

  • Graham Mainwaring, for helping with tests and validations;
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro;
  • Kirill Kalachev, for reporting and debugging the host name errors with VirtualBox on Windows hosts.

July 03, 2019

CentOS Community Newsletter, July 2019 (#1907)

July 03, 2019 07:08 PM

Dear CentOS enthusiast,

Yes, I'm running a little behind schedule with this month's newsletter. That's because I just got back from the Open Source Summit in Shanghai, where I met a number of CentOS enthusiasts. More about that a little later.

April 28, 2019

Renew/Extend Puppet CA/puppetmasterd certs

April 28, 2019 10:00 PM

Puppet CA/puppetmasterd cert renewal

While we're still converting our puppet-controlled infra to Ansible, we still have some nodes "controlled" by puppet, as converting some roles isn't something that can be done in just one or two days. Add to that other items in your backlog that all have priority #1, and time flies until you notice this in your existing legacy puppet environment (assuming a fake FQDN here, but you'll get the idea):

Warning: Certificate 'Puppet CA: puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56UTC
Warning: Certificate 'puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56UTC

So, as long as your PKI setup for puppet is still valid, you can act in advance: re-sign/extend the CA and puppetmasterd certs, distribute the newer CA cert to the agents, and then go forward with the other items in your backlog, while still converting from puppet to Ansible (at least in our case).

Puppetmasterd/CA

Before anything else (in case you don't back this up, but you should), let's take a backup on the Puppet CA. In our case it's a Foreman-driven puppetmasterd, so the foreman host is where all this will happen, YMMV:

tar cvzf /root/puppet-ssl-backup.tar.gz /var/lib/puppet/ssl/

CA itself

We first need to regenerate the CSR for the CA cert, and sign it again. Ideally, we first confirm that ca_key.pem and the existing ca_crt.pem "match" by comparing their modulus (the two checksums should be equal):

cd /var/lib/puppet/ssl/ca
( openssl rsa -noout -modulus -in ca_key.pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in ca_crt.pem  2> /dev/null | openssl md5 ) 

(stdin)= cbc4d35f58b28ad7c4dca17bd4408403
(stdin)= cbc4d35f58b28ad7c4dca17bd4408403

As that's the case, we can now regenerate a CSR from that private key and the existing crt:

openssl x509 -x509toreq -in ca_crt.pem -signkey ca_key.pem -out ca_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR for the CA, we need to sign it again, but we have to add extensions:

cat > extension.cnf << EOF
[CA_extensions]
basicConstraints = critical,CA:TRUE
nsComment = "Puppet Ruby/OpenSSL Internal Certificate"
keyUsage = critical,keyCertSign,cRLSign
subjectKeyIdentifier = hash
EOF

And now archive the old CA crt and sign the (new) extended one:

cp ca_crt.pem ca_crt.pem.old
openssl x509 -req -days 3650 -in ca_csr.pem -signkey ca_key.pem -out ca_crt.pem -extfile extension.cnf -extensions CA_extensions
Signature ok
subject=/CN=Puppet CA: puppetmasterd.domain.com
Getting Private key

openssl x509 -in ca_crt.pem -noout -text|grep -A 3 Validity
 Validity
            Not Before: Apr 29 08:25:49 2019 GMT
            Not After : Apr 26 08:25:49 2029 GMT

Puppetmasterd server

We also have to regenerate the CSR from the existing cert (assuming the FQDN in our cert also matches the currently set hostname):

cd /var/lib/puppet/ssl
openssl x509 -x509toreq -in certs/$(hostname).pem -signkey private_keys/$(hostname).pem -out certificate_requests/$(hostname)_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR, we can sign it with the new CA:

cp certs/$(hostname).pem certs/$(hostname).pem.old #Backing up
openssl x509 -req -days 3650 -in certificate_requests/$(hostname)_csr.pem -CA ca/ca_crt.pem \
  -CAkey ca/ca_key.pem -CAserial ca/serial -out certs/$(hostname).pem
Signature ok  

Validating that the puppetmasterd key and the new cert match (so crt and private key are ok):

( openssl rsa -noout -modulus -in private_keys/$(hostname).pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in certs/$(hostname).pem 2> /dev/null | openssl md5 )

(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5
(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5

It all seems good, so let's restart puppetmasterd/httpd (foreman launches puppetmasterd for us):

systemctl restart puppet

Puppet agents

From this point, puppet agents will no longer complain about the puppetmasterd cert, but they will still warn that the CA itself will expire soon:

Warning: Certificate 'Puppet CA: puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56GMT

But as we now have the new ca_crt.pem on the puppetmasterd/foreman side, we can just distribute it to the clients (through puppet, ansible or whatever) and everything will continue to work:

cd /var/lib/puppet/ssl/certs
mv ca.pem ca.pem.old

And now distribute the new ca_crt.pem as ca.pem here

A puppet snippet for this (in our puppet::agent class):

 file { '/var/lib/puppet/ssl/certs/ca.pem': 
   source => 'puppet:///puppet/ca_crt.pem', 
   owner => 'puppet', 
   group => 'puppet', 
   require => Package['puppet'],
 }

The next time you run "puppet agent -t", or puppet contacts puppetmasterd on its own schedule, it will put the new cert in place, and from the next run on there are no more warnings or issues:

Info: Computing checksum on file /var/lib/puppet/ssl/certs/ca.pem
Info: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]: Filebucketed /var/lib/puppet/ssl/certs/ca.pem to puppet with sum c63b1cc5a39489f5da7d272f00ec09fa
Notice: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]/content: content changed '{md5}c63b1cc5a39489f5da7d272f00ec09fa' to '{md5}e3d2e55edbe1ad45570eef3c9ade051f'

Hope it helps

December 06, 2018

Using go-toolset on CentOS Linux 7/x86_64

December 06, 2018 01:30 PM

With golang now gone from the CentOS Linux 7 distro (deprecated upstream), the best way to get golang for your system is to get it from the SCL.

Firstly, enable scl itself:

yum install centos-release-scl

Then install the go-toolset-7 scl (this brings in version 1.10.2 at the moment):

yum install go-toolset-7

In order to use it interactively, you can run the scl enable command, which also involves spawning a new shell. Note that /bin/bash can be replaced with the command or shell you want to work in:

$ scl enable go-toolset-7 /bin/bash
$ go version
go version go1.10.2 linux/amd64
$ which go
/opt/rh/go-toolset-7/root/usr/bin/go

If, like me, you want to just make this the default go for all your shells, add something like this to your .bashrc:

source scl_source enable go-toolset-7

Many thanks to the CentOS SCL SIG for shipping this go-toolset collection.

November 06, 2018

Implementing Zabbix custom LLD rules with Ansible

November 06, 2018 11:00 PM

While I have to admit that I've been using Zabbix since the 1.8.x era, I also have to admit that I'm not an expert, and that one can learn new things every day. I recently had to implement a new template for a custom service that is multi-instance aware, and so can be started multiple times with various configurations, each instance with its own set of settings (like the tcp port on which to listen, etc), and with the number of running instances differing from one node to the next.

I was thinking about the best way to implement this in Zabbix, and my initial idea was to just have one template per possible instance type, which would then use macros defined at the host level to know which port to check, etc - in fact backporting into zabbix what configuration management (Ansible in our case) already has to know to deploy such an app instance.

In parallel to that, I've always liked the fact that Zabbix itself has some internal tools to auto-discover items, and so triggers for those: that's called Low-level Discovery (LLD in short).

By default, if you use (or have modified) some zabbix templates, you can see LLD in action for the mounted filesystems or the network interfaces present on your linux OS. That's the "magic": you added a new mount point or a new interface? Zabbix discovers it automatically, starts monitoring it, and also graphs values for it.

So back to our monitoring problem and the need for multiple templates: what if we could use LLD too, and so have Zabbix automatically check our deployed instances (multiple ones)? The good news is that we can: one can create custom LLD rules, so it would work out of the box with only one template added for those nodes.

If you read the link above about custom LLD rules, you'll see some examples of a script being called at the agent level, from the zabbix server, at a periodic interval, to "discover" those custom discovery checks. The interesting part to notice is that what is returned to the zabbix server is a json document, attached to a new key that is declared at the template level.

So it (usually) goes like this :

  • create a template
  • create a new discovery rule, give it a name and a key (and also eventually add Filters)
  • deploy a new UserParameter at the agent level reporting to that key the json string it needs to declare to zabbix server
  • The Zabbix server receives/parses that json and, based on the checks/variables declared in it and the returned macros, creates some Item Prototypes, Trigger Prototypes and so on ...

Magic! ... except that in my specific case, for various reasons, I never allowed the zabbix user to really launch commands, due to limited rights and also the SELinux context in which it's running (for interested people, it's running in the zabbix_agent_t context).

I didn't want to change that base rule for our deployments, but the good news is that you don't have to use UserParameter for LLD! It's true that if you look at the existing Discovery Rule for "Network interface discovery", you'll see the key net.if.discovery that is used for everything afterwards, but the Type is "Zabbix agent". We can use something else from that list, like we already do for a "normal" check.

I'm already (ab)using the Trapper item type for a lot of hardware checks. The reason is simple: as the zabbix user is limited (and I don't want to grant it more rights), I have some scripts checking for hardware raid controllers (if any), etc, and reporting back to zabbix through zabbix_sender.
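For a single "normal" trapper item, that reporting is just one command; a small sketch (the key and value here are hypothetical, and re-using the agent config means zabbix_sender already knows which server and hostname to report for):

/bin/zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k hw.raid.status -o 0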

Let's use the same logic for the json string to be returned to the Zabbix server for LLD (and yes, Trapper is in the list for the discovery rule Type).

It's even easier for us, as we'll control that through Ansible: it's what is already used to deploy/configure our RepoSpanner instances, so we have all the logic there.

Let's start by creating the new template for repospanner, and creating a discovery rule (detecting each instance and its settings):

zabbix-discovery-type.png

You can then apply that template to host[s] and wait ... but first we need to report back from agent to server which instances are deployed/running. So let's see how to implement that through ansible.

To keep it short, in Ansible we have the following (default values, not the correct ones) variables (from roles/repospanner/default.yml):

...
repospanner_instances:
  - name: default
    admin_cli: False
    admin_ca_cert:
    admin_cert:
    admin_key:
    rpc_port: 8443
    rpc_allow_from:
      - 127.0.0.1
    http_port: 8444
    http_allow_from:
      - 127.0.0.1
    tls_ca_cert: ca.crt
    tls_cert: nodea.regiona.crt
    tls_key: nodea.regiona.key
    my_cn: localhost.localdomain
    master_node : nodea.regiona.domain.com # to know how to join a cluster for other nodes
    init_node: True # To be declared only on the first node
...

That simple example has only one instance, but you can easily see how to have multiple ones, etc. So here is the logic: let's have ansible, when configuring the node, create the file that will be used by zabbix_sender (triggered by ansible itself) to send the json to the zabbix server. zabbix_sender can read a file whose fields are separated (see the man page) like this:

  • hostname (or '-' to use name configured in zabbix_agentd.conf)
  • key
  • value

Those three fields have to be separated by a single space only, and importantly: you can't have an extra empty line (something you can easily run into when playing with this for the first time).

What does our file (roles/repospanner/templates/zabbix-repospanner-lld.j2) look like?

- repospanner.lld.instances { "data": [ {% for instance in repospanner_instances -%} { "{{ '{#INSTANCE}' }}": "{{ instance.name }}", "{{ '{#RPCPORT}' }}": "{{ instance.rpc_port }}", "{{ '{#HTTPPORT}' }}": "{{ instance.http_port }}" } {%- if not loop.last -%},{% endif %} {% endfor %} ] }

If you have already used jinja2 templates with Ansible, it's quite easy to understand. But I have to admit that I had trouble with the {#INSTANCE} one: that one isn't an ansible variable, but rather a fixed name for the macro that we'll send to zabbix (and so reuse as a macro everywhere). Ansible, when trying to render the jinja2 template, was complaining about a missing "comment": indeed {# ... #} is a comment in jinja2. So the best way (thanks to people in #ansible for that trick) is to include it in {{ }} brackets but escape it so that it is rendered as {#INSTANCE} (nice to know if you have to do that too ...)

The rest is trivial. An excerpt from monitoring.yml (included in that repospanner role):

- name: Distributing zabbix repospanner check file
  template:
    src: "{{ item }}.j2"
    dest: "/usr/lib/zabbix/{{ item }}"
    mode: 0755
  with_items:
    - zabbix-repospanner-check
    - zabbix-repospanner-lld
  register: zabbix_templates   
  tags:
    - templates

- name: Launching LLD to announce to zabbix
  shell: /bin/zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -i /usr/lib/zabbix/zabbix-repospanner-lld
  when: zabbix_templates is changed

And this is how it is rendered on one of my test nodes:

- repospanner.lld.instances { "data": [ { "{#INSTANCE}": "namespace_rpms", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" }, { "{#INSTANCE}": "namespace_centos", "{#RPCPORT}": "8445", "{#HTTPPORT}": "8446" }  ] }

As ansible auto-announces/pushes that back to zabbix, the zabbix server can automatically start creating (through LLD, based on the item prototypes) checks and triggers/graphs, and so starts monitoring each new instance. Do you want to add a third one? (We have two in our case.) Ansible pushes the config, the rendered .j2 template changes, zabbix server gets notified, and so on ...

The rest is just "normal" operation for zabbix : you can create items/trigger prototypes and just use those special Macros coming from LLD :

zabbix-item-prototypes.png

It was worth spending some time in the LLD doc and in #zabbix to discuss LLD, but once you see the added value, and that you can automatically configure it through Ansible, one can see how powerful it can be.

September 23, 2018

Updated mirrorlist code in the CentOS Infra

September 23, 2018 10:00 PM

Recently I had to update the existing code running behind mirrorlist.centos.org (the service that returns you a list of validated mirrors for yum, see the /etc/yum.repos.d/CentOS*.repo file), as it was still using the Maxmind GeoIP Legacy country database. As you probably know, Maxmind announced that they're discontinuing the Legacy DB, so that was one reason to update the code. Switching to GeoLite2, with the python2-geoip2 package, was really easy to do, and was already done and pushed last month.

But that's when, discussing with Anssi (if you don't know him, he keeps the CentOS external mirrors DB up to date, including through the centos-mirror list), we thought about making that change not only there, but in the whole chain (so on our "mirror crawler" node, and also for the isoredirect.centos.org service). Random chats like these are good, because suddenly we don't only want to "fix" one thing, but also take time to enhance it and add new features.

The previous code was already supporting both IPv4 and IPv6, but it was consuming different data sources (as external mirrors were validated differently for ipv4 vs ipv6 connectivity). So the first thing was to rewrite/combine the new code in the "mirror crawler" process for dual-stack tests, and also reflect that change on the frontend (aka mirrorlist.centos.org) nodes.

While we were working on this, Anssi proposed not just to adapt the isoredirect.centos.org code, but to convert it to the same python format as mirrorlist.centos.org, which he did.

The last big change that was added is the following: only some repositories/architectures were checked/validated in the past, but not all the other ones (so nothing from the SIGs and nothing from AltArch, meaning no mirrorlist support for i386/armhfp/aarch64/ppc64/ppc64le).

While it wasn't a real problem in the past, when we launched the SIGs concept and later added the other architectures (AltArch), we suddenly started suffering from some side effects:

  • More and more users "using" RPM content from mirror.centos.org (mainly through SIGs - which is a good indicator that those are successful, which is a good "problem to solve")
  • We are currently losing some nodes in that mirror.centos.org network (it's still entirely based on free dedicated servers donated to the project)

To address the first point, offloading more content to the 600+ external mirrors we have right now would be really good, as those nodes have better connectivity than we do, and more presence around the globe too, so slowly pointing SIGs and AltArch to those external mirrors will help.

The other good point is that, as we switched to the GeoLite2 City DB, we get more granularity: for example, instead of "just" returning you a list of 10 validated mirrors for the USA (if your request was identified as coming from that country, of course), you now get a list of validated mirrors in your state/region instead. That means that for big countries with a lot of mirrors, we also distribute the load better amongst all of them, which is a big win for everybody - users and mirror admins.

For people interested in the code, you'll see that we just run several instances of the python code, behind Apache running with mod_proxy_balancer. That means that if we need to increase the number of "instances", it's easy to do, but so far it's running great with 5 running instances per node (and we have 4 nodes behind mirrorlist.centos.org). Worth noting that on average, each of those nodes gets 36+ million requests per week for the mirrorlist service (so 144+ million in total per week).
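The httpd side of that setup is plain mod_proxy_balancer configuration; a minimal sketch (the ports and instance count here are made up, the real config obviously differs):

<Proxy "balancer://mirrorlist">
    BalancerMember "http://127.0.0.1:8001"
    BalancerMember "http://127.0.0.1:8002"
    BalancerMember "http://127.0.0.1:8003"
    BalancerMember "http://127.0.0.1:8004"
    BalancerMember "http://127.0.0.1:8005"
</Proxy>
ProxyPass "/" "balancer://mirrorlist/"
ProxyPassReverse "/" "balancer://mirrorlist/"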

So in (very) short summary :

  • mirrorlist.centos.org code now supports SIGs/AltArch repositories (we'll sync with SIGs to update their .repo file to use mirrorlist= instead of baseurl= soon)
  • we have better accuracy for large countries, so we redirect you to a 'closer' validated mirror

One reminder btw : you know that you can verify which nodes are returned to you with some simple requests :

# to force ipv4
curl 'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates' -4
# to force ipv6
curl 'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates' -6

The last thing I wanted to mention was a potential way to fix point #2 from the list: when I checked our "donated nodes" inventory, we are still running CentOS on nodes from ~2003 (yes, you read that correctly), so if you want to help/sponsor the CentOS Project, feel free to reach out!

February 19, 2018

Using newer PHP stack (built and distributed by CentOS) on CentOS 7

February 19, 2018 11:00 PM

One thing to like about an Enterprise distribution is the stable api/abi during the distro's lifetime. If you have an application that works, you know that it will continue to work.

But in parallel, one can't always run the application of one's choice on that distro with only the built-in components. I was personally faced with this recently, when I needed to migrate our Bug Tracker to a new version. So let's use that example to see how we can use "newer" php pkgs distributed through the distro itself.

The application that we use for https://bugs.centos.org is MantisBT, and by reading their requirements list it was clear that a CentOS 7 default setup would not work: as a reminder, the default php pkg for .el7 is 5.4.16, which isn't supported anymore by "modern" application[s].

That's where SCLs come to the rescue! With such "collections", one can install them without overwriting the base pkgs, and can even run multiple parallel instances of such a "stack", depending on configuration.

Let's start simple with our MantisBT example: forget about the traditional php-* packages (including "php", which provides mod_php for Apache). It's up to you to leave those installed if you need them, but in my case I'll default to php 7.1.x for the whole vhost. Also worth knowing: I wanted to integrate php with the default httpd from the distro (to ease the configuration management side, so the .conf files stay at the usual place).

The good news is that those collections are built, then tested and released through our CentOS Infra, so you don't have to care about anything else! (kudos to the SCLo SIG!). You can see the available collections here.

So, how do we proceed? Easy! First let's add the repository:

yum install centos-release-scl

And from that point, you can just install what you need. In our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target mysql of course). You just have to "translate" that into SCL pkgs: in our case, php becomes rh-php71 (a meta pkg), php-xml becomes rh-php71-php-xml and so on (one remark though: php-mysql becomes rh-php71-php-mysqlnd!).

So here we go :

yum install httpd rh-php71 rh-php71-php-xml rh-php71-php-mbstring rh-php71-php-gd rh-php71-php-soap rh-php71-php-mysqlnd rh-php71-php-fpm

As said earlier, we'll target the default httpd pkg from the distro, so we just have to "link" php and httpd. Remember that mod_php isn't available anymore; instead we'll use the php-fpm pkg (see rh-php71-php-fpm) for this, so all php requests are sent to that FastCGI Process Manager daemon.

Let's do this :

systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/php-fpm.conf << EOF
AddType text/html .php 
DirectoryIndex index.php
<FilesMatch \.php$>
      SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
EOF
systemctl restart httpd

And from this point, it's all straightforward, and the application is now using the php 7.1.x stack. That's a basic "howto", but you can also run multiple versions in parallel, and also tune php-fpm itself. If you're interested, I'll let you read Remi Collet's blog post about this (thank you again Remi!).
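A quick sanity check that the collection's php is the one actually in use (a sketch; the reported version will differ over time):

scl enable rh-php71 'php -v'
systemctl status rh-php71-php-fpm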

Hope this helps, as strangely I couldn't easily find a simple howto for this, as "scl enable rh-php71 bash" wouldn't help a lot with httpd (which is probably the most used scenario)

January 18, 2018

Diagnosing nf_conntrack/nf_conntrack_count issues on CentOS mirrorlist nodes

January 18, 2018 11:00 PM

Yesterday, I got some alerts for some nodes in the CentOS Infra from our monitoring system, also confirmed by some folks reporting errors directly in our #centos-devel irc channel on Freenode.

The impacted nodes were the nodes we use for mirrorlist service. For people not knowing what they are used for, here is a quick overview of what happens when you run "yum update" on your CentOS node :

  • yum analyzes the .repo files contained under /etc/yum.repos.d/
  • for CentOS repositories, it knows that it has to use a list of mirrors provided by a server hosted within the centos infra (mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra )
  • yum then contacts one of the servers behind "mirrorlist.centos.org" (we have 4 nodes so far: two in Europe and two in the USA, all available over IPv4 and IPv6)
  • mirrorlist checks the src ip and sends back a list of current/up2date mirrors in the country (some GeoIP checks are done)
  • yum then opens connection to those validated mirrors

We monitor the response time for those services, and the average response time is usually < 1sec (with some exceptions, mostly due to network latency, for example for nodes in other continents). But yesterday the values were not only higher, but at times even completely missing from our monitoring system, so no data was received. Here is a graph from our monitoring/Zabbix server:

mirrorlist-response-time-error.png

So clearly something was happening, and it was time to find some patterns. From our monitoring we also discovered that the number of network connections tracked by the kernel was suddenly higher than usual. In fact, as soon as your node does some state tracking with netfilter (like for example -m state --state ESTABLISHED,RELATED), it keeps that in memory. You can easily retrieve the number of actively tracked connections like this:

cat /proc/sys/net/netfilter/nf_conntrack_count 

So it's easy to guess what happens if the max (/proc/sys/net/netfilter/nf_conntrack_max) is reached : the kernel drops packets (as seen in dmesg) :

nf_conntrack: table full, dropping packet

The default value depends on the available memory, and it can be changed in real time. Don't forget to also tune the hash size accordingly (a basic rule is nf_conntrack_max / 4). On the mirrorlist nodes, we had the default value of 262144 (so yes, keeping track of that amount of connections in memory), so to quickly get the service back in shape :

new_number="524288"
echo ${new_number} > /proc/sys/net/netfilter/nf_conntrack_max
echo $(( $new_number / 4 )) > /sys/module/nf_conntrack/parameters/hashsize

The other option was to flush the table (you can do that with conntrack -F, a tool from the conntrack-tools package), but it's really only a temporary fix, and it will not help you gather the info needed for proper troubleshooting (see below).

Here is the Zabbix graph showing that for some nodes the count was higher than the previous default value, but the kernel wasn't dropping packets anymore.

ip_conntrack_count.png

We could then confirm that service was then working fine (not "flapping" anymore).

So one could think that this was the solution to the problem and stop the investigation there. But what is the root cause ? What happened that opened so many (unclosed) connections to those mirrorlist nodes ? Let's dive into the nf_conntrack table again !

Not only do you have the number of tracked connections (through /proc/sys/net/netfilter/nf_conntrack_count) but also the whole details about those. So let's dump that into a file for full analysis and try to find a pattern :

cat /proc/net/nf_conntrack > conntrack.list
cat conntrack.list |awk '{print $7}'|sed 's/src=//g'|sort|uniq -c|sort -n -r|head

Here we go : the same range of IPs on all our mirrorlist servers, each having thousands of ESTABLISHED connections. I'm not going to give you all the details about this (the goal of this blog post isn't "finger pointing"), but we quickly identified the issue. So we contacted the network team behind those identified IPs to report that behaviour. It still has to be tracked down, but I'm wondering whether a firewall doing NAT isn't simply never closing tcp connections at all ; more to come.

At least the mirrorlist response time is now back to its usual state :

mirrorlist-response-time.png

You can also let your configuration management set those parameters through a dedicated .conf file under /etc/sysctl.d/ to ensure that they'll be applied automatically.
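As a sketch (file names and values are just an example to adapt) : nf_conntrack_max is a sysctl, while the hash size is a module parameter, so persisting both takes two small files :

cat > /etc/sysctl.d/nf_conntrack.conf << EOF
net.netfilter.nf_conntrack_max = 524288
EOF

# hashsize isn't a sysctl but a module parameter, so set it at module load time
cat > /etc/modprobe.d/nf_conntrack.conf << EOF
options nf_conntrack hashsize=131072
EOF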

January 09, 2018

Using a RaspberryPI3 as Unifi AP controller with CentOS 7

January 09, 2018 11:00 PM

That's something I should have blogged about earlier, but I almost forgot about it, until I read on twitter about other people having replaced their home network equipment with Ubnt/Ubiquiti gear, so I realized that it was still on my 'TOBLOG' list.

During the winter holidays, the whole family was at home, and the kids were on the WiFi network too. Of course I already had a different wlan for them, separated/segregated from the main one, but plenty of things weren't really working on that crappy device. So it was time to set up something else. I had the opportunity to play with some Ubiquiti devices in the past, so finding even an old Unifi UAP model was enough for my needs (I just needed an Access Point, routing/firewall being done on something else).

If you've already played with those devices, you know that you need a controller to set them up, and because it's 'only' a java/mongodb stack, I thought it would be trivial to set up on a low-end device like a RaspberryPi3 (not limited to that : any armhfp board on which you can run CentOS would work).

After having installed CentOS 7 armhfp minimal on the device, and once logged in, I just had to add the mandatory unofficial epel repository for mongodb :

cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=Epel rebuild for armhfp
baseurl=https://armv7.dev.centos.org/repodir/epel-pass-1/
enabled=1
gpgcheck=0

EOF

After that, just installed what's required to run the application :

yum install mongodb mongodb-server java-1.8.0-openjdk-headless -y

The "interesting" part is that now Ubnt only provides .deb packages , so we just have to download/extract what we need (it's all java code) and start it :

tmp_dir=$(mktemp -d)
cd $tmp_dir
curl -O http://dl.ubnt.com/unifi/5.6.26/unifi_sysvinit_all.deb
ar vx unifi_sysvinit_all.deb
tar xvf data.tar.xz
mv usr/lib/unifi/ /opt/UniFi
cd /opt/UniFi/bin
/bin/rm -Rf $tmp_dir
ln -s /bin/mongod

You can start it "by hand" but let's create a simple systemd file and use it directly :

cat > /etc/systemd/system/unifi.service << EOF
[Unit]
Description=UBNT UniFi Controller
After=syslog.target network.target

[Service]
WorkingDirectory=/opt/UniFi
ExecStart=/usr/bin/java -jar /opt/UniFi/lib/ace.jar start
ExecStop=/usr/bin/java -jar /opt/UniFi/lib/ace.jar stop

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable unifi --now

Don't forget that :

  • it's "Java"
  • running on slow armhfp processor

So it will take some time to initialize. You can follow the progress in /opt/UniFi/logs/server.log and wait for the TLS port to be opened :

while true ; do sleep 1 ; ss -tanp|grep 8443 && break ; done

Don't forget to open the needed ports in the firewall, and you can then reach the Unifi controller running on your armhfp board.
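With firewalld, that could look like the lines below ; 8443/tcp (the web UI port we waited for above) and 8080/tcp (commonly the device inform port) are the usual suspects, but check which ports your controller version actually listens on :

firewall-cmd --permanent --add-port=8443/tcp --add-port=8080/tcp
firewall-cmd --reload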

January 02, 2018

turn off unused GPU on the laptop

January 02, 2018 09:41 PM

Lots of us have dual graphics cards in our laptops these days, but almost everyone I know tends to use one or the other, hardly ever switching on the fly, since typical usage patterns tend to stick for periods of time.

One thing that almost no one seems to do, however, is turn off the unused gpu – when on the move, this can have a significant impact on your battery life.

On CentOS Linux 7, the way to do this would be something like this :

echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

And that's it, literally send it the OFF and the unused gpu is powered down.

You can also query the interface as follows:

# cat /sys/kernel/debug/vgaswitcheroo/switch

On my Thinkpad T460p it looks like this :

0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynOff:0000:02:00.0

For more info on vgaswitcheroo, take a look at the kernel documentation, eg https://www.kernel.org/doc/html/v4.10/gpu/vga-switcheroo.html
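Note that the setting does not survive a reboot. A minimal sketch to reapply it at boot, assuming debugfs is mounted (which it is by default on CentOS 7), could be a small oneshot unit like this :

cat > /etc/systemd/system/gpu-off.service << EOF
[Unit]
Description=Power down the unused discrete GPU

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo OFF > /sys/kernel/debug/vgaswitcheroo/switch'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable gpu-off.service --now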

Enjoy!

January 01, 2018

Lightweigth CentOS 7 i686 desktop on older machine

January 01, 2018 11:00 PM

So, the end of the year is always when you have some "time off" and can work on various projects that were left behind. While searching for other hardware collecting dust in my furniture (another blog post coming soon about that too) I found my old Asus Eeepc 900 and was wondering if I could resurrect it.

While it was running CentOS 5 and then 6 "just fine", I wanted to give it a try with CentOS 7.

Of course, if you remember the specs from that ~2008 small netbook, you remember that it had :

  • slow cpu (Intel(R) Celeron(R) M processor 900MHz)
  • only 1Gb of ram
  • very limited disk space (ASUS-PHISON OB SSD 4GB + additional 8GB for my model)

Setting up the full Gnome3 experience on it would be pointless, and probably unusable anyway. So let's set up CentOS 7 AltArch minimal (needed as the cpu is only i686/32bits) and add what we need after that. So here we go :

  • Download the netinstall iso image (I used a "local" mirror, so http://mirror.nucleus.be/centos-altarch/7/isos/i386/CentOS-7-i386-NetInstall-1611.iso)
  • use dd to transfer it to a usb storage key
  • start the installer on the eeepc
  • wait .... wait .... wait ...

Once installed and up2date, one needs to add additional repositories that aren't there by default. As a reminder, there are no official Epel builds for i686 (same as for armhfp), but Johnny started to rebuild Epel SRPMs for that specific reason, so here we go :

cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=Epel rebuild for i686
baseurl=https://buildlogs.centos.org/c7-epel/
enabled=1
gpgcheck=0

EOF

cat > /etc/yum.repos.d/kernel.repo << EOF
[kernel]
name=LTS kernel for i686
baseurl=https://buildlogs.centos.org/c7.1708.exp.i386/
enabled=1
gpgcheck=0

EOF

If you're wondering about the other kernel repository : the ath5k kernel module needed for the Wifi device in the Eeepc isn't in the default kernel nor available through elrepo, but it works with the 4.9.x LTS kernel we build and maintain/update for AltArch, so let's use it.

We can install what we need (YMMV though) :

yum update -y
yum groupinstall -y 'X Window System'
yum install -y openbox lightdm lightdm-gtk 
systemctl enable lightdm.service
yum install -y tint2 terminator firefox terminus-fonts-console terminus-fonts network-manager-applet gnome-keyring dejavu-sans-fonts dejavu-fonts-common dejavu-serif-fonts dejavu-sans-mono-fonts open-sans-fonts overpass-fonts liberation-mono-fonts liberation-serif-fonts google-crosextra-caladea-fonts google-crosextra-carlito-fonts 

echo 'tint2 &' >> /etc/xdg/openbox/autostart
echo 'nm-applet &' >> /etc/xdg/openbox/autostart
systemctl reboot

The last yum install line with tint2, terminator and firefox is purely optional, but that's what I needed on my eeepc. Same for network-manager-applet, but once installed, it gives you an easy-to-use applet integrated into the openbox environment.

You can then customize it, etc, but I like it so far for what I wanted to use that old netbook for :

CentOS 7 i686 running on Asus Eeepc 900

November 01, 2017

Community contributed Kickstarts for CentOS Linux

November 01, 2017 12:25 PM

hi,

At https://github.com/CentOS/Community-Kickstarts we’ve been collecting community-contributed kickstarts for various roles, deployments and versions. If you are writing and/or using kickstarts in your setup, it would be awesome to have them hosted here as well, so please feel free to send PRs. Just keep in mind a few basic things:

  • Kickstarts should end in .cfg or .ks
  • Generally should install from mirror.centos.org unless otherwise noted
  • If a hashed password is provided, include the plaintext version in a comment. Since these kickstarts are for example purposes, please use password or centos as the passwords as needed.
  • Kickstart names should provide a version and brief description, for example centos5-raid5.cfg or centos7-workstation.ks

Take a look at the README that has a few more pieces of info about this repository https://github.com/CentOS/Community-Kickstarts/blob/master/README.md
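Purely as an illustration (not an actual file from the repository), a minimal kickstart following those conventions could look like this, named for example centos7-minimal.cfg :

# centos7-minimal.cfg - illustrative example only
# root password is 'centos' (plaintext shown, per the convention above)
install
url --url=http://mirror.centos.org/centos/7/os/x86_64/
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext centos
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end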

October 11, 2017

Four years later with CentOS and Red Hat

October 11, 2017 07:00 AM

After 4 years of being at Red Hat, I still occasionally get questions that show not everyone understands what Red Hat means to CentOS, or what CentOS provides to Red Hat. They tend to think in terms of competition, like there’s an either or choice. Reality just doesn’t bear that out.

First and foremost, CentOS is about integration, and its important to know who the community is. We’re your sysadmins and operations teams. We’re your SREs, the OPS in your devops. We’re a force multiplier to developers, the angry voice that says “stop disabling SELinux” and “show me your unit tests”. We’re the community voice encouraging you to do things the right way, rather than taking an easy shortcut we know from experience will come back to bite you.

What we’re not is developers. We may pull in kernel patches, but we’re not kernel developers. We can help you do the root cause analysis to figure out why your app is suddenly not performing, but we aren’t the ones to write the code to fix it. We don’t determine priority for what does or doesn’t get fixed, that’s what Red Hat does.

The core distribution of CentOS is and has always been based on code written by Red Hat. This doesn’t mean it’s a choice of “either CentOS or RHEL,” because we’re in this together. CentOS provides Red Hat a community platform for building and testing things like OpenStack with RDO. We build new ecosystems around ARM servers. We provide a base layer for others to innovate around emerging technologies like NFV. But none of this would be possible without the work of RH’s engineering teams.

The community can build, organize and deliver tools in any number of creative ways, but ultimately the code behind them is being developed by engineers paid to address the needs of Red Hat’s customers. You can bet that RH is keeping an eye on what the CentOS community is using and building, but that doesn’t necessarily translate to business need.

We’re here to empower operators who want to experiment on top of the enterprise base lifespan. We’re here to bring tools and technology to those for whom it may otherwise be out of reach. We’re here to take use cases and lessons learned from the community back to Red Hat as advocates. We’re happy to serve both audiences in this capacity, but let’s not forget how we buy the ‘free as in beer’.


October 10, 2017

Using Ansible Openstack modules on CentOS 7

October 10, 2017 10:00 PM

Suppose that you have an RDO/Openstack cloud already in place, but you'd want to automate some operations : what can you do ? On my side, I already mentioned that I used puppet to deploy the initial clouds, but I still prefer Ansible when having to launch ad-hoc tasks, or even change configuration[s]. It's particularly true for our CI environment where we run "agentless", so all configuration changes happen through Ansible.

The good news is that Ansible already has some modules for Openstack, but they have some requirements and need a little bit of understanding before you're able to use them.

First of all, all the ansible os_ modules need "shade" on the host included in the play that will be responsible for launching the os_ modules. At the time of writing this post, it's not yet available on mirror.centos.org (a review is open, so it will soon be available directly), but you can find the pkg on our CBS builders.

Once installed, a simple os_image task was failing directly, despite the fact that auth: was present, and that's due to a simple reason : the Ansible os_ modules still want to use the v2 API, while it now defaults to v3 in the Pike release. There is no way to force ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through os-client-config.

That means that you just have to use a .yaml file (does that sound familiar for ansible ?) that will contain everything you need to know about a specific cloud, and then in ansible just declare which cloud you're configuring.

That clouds.yaml file can be under $current_directory, ~/.config/openstack or /etc/openstack, so it's up to you to decide where you want to host it, but I selected /etc/openstack/ :

- name: Ensuring we have required pkgs for ansible/openstack
  yum:
    name: python2-shade
    state: installed

- name: Ensuring local directory to hold the os-client-config file
  file:
    path: /etc/openstack
    state: directory
    owner: root
    group: root

- name: Adding clouds.yaml for os-client-config for further actions
  template:
    src: clouds.yaml.j2
    dest: /etc/openstack/clouds.yaml
    owner: root
    group: root
    mode: 0700

Of course such a clouds.yaml file is itself a jinja2 template distributed by ansible to the host in the play before using the os_* modules :

clouds:
  {{ cloud_name }}:
    auth:
      username: admin
      project_name: admin
      password: {{ openstack_admin_pass }}
      auth_url: http://{{ openstack_controller }}:5000/v3/
      user_domain_name: default
      project_domain_name: default
    identity_api_version: 3

You just have to adapt it to your needs (see the doc for this), but the interesting part is identity_api_version to force v3.

Then, you can use all that in a simple way through ansible tasks, in this case adding users to a project :

- name: Configuring OpenStack user[s]
  os_user:
    cloud: "{{ cloud_name }}"
    default_project: "{{ item.0.name }}"
    domain: "{{ item.0.domain_id }}"
    name: "{{ item.1.login }}"
    email: "{{ item.1.email }}"
    password: "{{ item.1.password }}"           
  with_subelements:
    - "{{ cloud_projects }}"
    - users  
  no_log: True

From a variables point of view, I decided to just have a simple structure to host project/users/roles/quotas like this :

cloud_projects:
  - name: demo
    description: demo project
    domain_id: default
    quota_cores: 20
    quota_instances: 10
    quota_ram: 40960
    users:
      - login: demo_user
        email: demo@centos.org
        password: Ch@ngeM3
        role: admin # can be _member_ or admin
      - login: demo_user2
        email: demo2@centos.org
        password: Ch@ngeMe2

Now that it works, you can explore all the other os_* modules and I'm already using those to :

  • Import cloud images in glance
  • Create networks and subnets in neutron
  • Create projects/users/roles in keystone
  • Change quotas for those projects

I'm just discovering how powerful those tools are, so I'll probably discover much more interesting things to do with those later.
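For instance, importing a cloud image into glance with os_image might look roughly like the task below ; the image name and the local path are made up for the example, and it assumes the qcow2 file was already downloaded on the host :

- name: Importing a cloud image into glance
  os_image:
    cloud: "{{ cloud_name }}"
    name: CentOS-7-x86_64-GenericCloud
    container_format: bare
    disk_format: qcow2
    filename: /var/tmp/CentOS-7-x86_64-GenericCloud.qcow2
    is_public: yes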

September 28, 2017

Using CentOS 7 armhfp VM on CentOS 7 aarch64

September 28, 2017 10:00 PM

Recently we got our hands on some aarch64 (aka ARMv8 / 64Bits) nodes running in a remote DC. On my (already too long) TODO/TOTEST list I had the idea of testing an armhfp VM on top of aarch64. The reason is that when I need to test our packages, using my own Cubietruck or RaspberryPi3 is time consuming : removing the sdcard, reflashing it with the correct CentOS 7 image and booting/testing the pkg/update/etc ...

So is it possible to just automate this through an available aarch64 node acting as hypervisor ? Sure ! And it's pretty straightforward if you have already played with libvirt. Let's start with a CentOS 7 aarch64 minimal setup and then :

yum install qemu-kvm-tools qemu-kvm virt-install libvirt libvirt-python libguestfs-tools-c
systemctl enable libvirtd --now

That's pretty basic, but for armhfp we'll have to do some extra steps : qemu normally tries to simulate a bios/uefi boot, which armhfp doesn't support, and qemu doesn't emulate the mandatory uboot needed to just chainload to the RootFS from the guest VM.

So here is just what we need :

  • Import the RootFS from an existing image
curl http://mirror.centos.org/altarch/7/isos/armhfp/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img.xz|unxz >/var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img
  • Convert image to qcow2 (that will give us more flexibility) and extend it a little bit
qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2
qemu-img resize /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 +15G
  • Extract kernel+initrd as libvirt will boot that directly for the VM
mkdir /var/lib/libvirt/armhfp-boot
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/ /var/lib/libvirt/armhfp-boot/

So now that we have a RootFS, and also kernel/initrd, we can just use virt-install to create the VM (pointing to existing backend qcow2) :

virt-install \
 --name centos7_armhfp \
 --memory 4096 \
 --boot kernel=/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.40-203.el7.armv7hl,initrd=/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.40-203.el7.armv7hl.img,kernel_args="console=ttyAMA0 rw root=/dev/sda3" \
 --disk /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 \
 --import \
 --arch armv7l \
 --machine virt

And here we go : we have an armhfp VM that boots really fast (compared to an armhfp board using a microsd card of course)

At this stage, you can configure the node, etc. The only thing you have to remember is that the kernel will of course be provided from outside the VM, so you just have to extract it from an updated VM to boot on that new kernel. Let's show how to do that : in the above example, we configured the VM to run with 4GB of ram, but only 3 are really seen inside (remember 32bits mode and so the need for PAE on i386 ?)

So let's use this example to show how to switch kernels. From the armhfp VM :

# Let's extend the partition/filesystem first as we now have a bigger disk
growpart /dev/sda 3
resize2fs /dev/sda3
yum update -y
yum install kernel-lpae
systemctl poweroff # we'll modify libvirt conf file for new kernel

Back to the hypervisor we can again extract needed files :

virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae /var/lib/libvirt/armhfp-boot/boot/
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img /var/lib/libvirt/armhfp-boot/boot/

And just virsh edit centos7_armhfp so that the kernel and initrd are pointing to the correct location:

<kernel>/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae</kernel>
<initrd>/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img</initrd>

Now that we have a "gold" image, we can even use exiting tools to provision quickly other nodes on that hypervisor ! :

time virt-clone --original centos7_armhfp --name armhfp_guest1 --file /var/lib/libvirt/images/armhfp_guest1.qcow2
Allocating 'armhfp_guest1.qcow2'                                               |  18 GB  00:00:02     

Clone 'armhfp_guest1' created successfully.

real    0m2.809s
user    0m0.473s
sys 0m0.062s

time virt-sysprep --add /var/lib/libvirt/images/armhfp_guest1.qcow2 --operations defaults,net-hwaddr,machine-id,net-hostname,ssh-hostkeys,udev-persistent-net --hostname guest1

virsh start armhfp_guest1

As simple as that. Of course, in the previous example we were just using the default network from libvirt, and not any bridge, but you get the idea : all the rest works with well-known libvirt concepts on linux.

September 20, 2017

Boosting CentOS server performance

September 20, 2017 07:00 AM

Last week I spent entirely too much time trying to track down a performance issue for the AArch64/ARM64 build of CentOS. While we don’t and won’t do performance comparisons or optimizations, this was fully in the realm of “something’s wrong here”. After a bit of digging, this issue turns out to impact just about everyone running CentOS on their servers who isn’t doing custom performance tuning.

The fix

I know most people who found this don’t care about the details, so we’ll get right to the good stuff. Check your active tuned profile. If your output looks like the example below, you probably want to change it.

[root@centos ~]# tuned-adm active
Current active profile: balanced

The ‘balanced’ profile means the CPU governor is set to powersave, which won’t do your server any favors. You can validate this by running cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor. To fix it, run the command below:

[root@centos ~]# tuned-adm profile throughput-performance

That’s it. This changes the governor to performance, which should give you a pretty decent performance bump without any additional changes, and across all hardware platforms. If you’re interested in figuring out why the default setting is set this way, I’ll explain.

Why the default is “wrong”

The tuned package is installed and enabled by default. When it runs for the first time, it tries to automatically select the best performance profile for the system by running a couple of comparisons. It does this by checking virt-what output, and using the contents of /etc/system-release-cpe. The tuned file /usr/lib/tuned/recommend.conf is then used as the rulebook to see what matches and what doesn’t.

This starts to unravel a bit with CentOS, because the packages are derived from RHEL (Red Hat Enterprise Linux), and while RHEL may differentiate between server, workstation, etc., CentOS does not. If you look carefully at the recommend.conf check for the throughput-performance profile, you’ll see that it checks whether the strings computenode or server exist in /etc/system-release-cpe. On CentOS, neither one does, because the distribution doesn’t make that distinction. Because these strings aren’t found, the fallback option of balanced is chosen.
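You can see this for yourself. The exact CPE string may vary per release, but on a CentOS 7 box it looks roughly like the output below, with neither ‘server’ nor ‘computenode’ present, and tuned-adm’s recommend subcommand confirms what it would pick :

# cat /etc/system-release-cpe
cpe:/o:centos:centos:7
# tuned-adm recommend
balanced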


September 02, 2017

Battery and power status on your CentOS Linux laptop

September 02, 2017 07:06 PM

The upower cli tool will get you a ton of great info about the battery (and other things related to power). Make sure you have it installed (rpm -q upower), and give it a shot like this :

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
  native-path:          BAT0
  vendor:               SMP
  model:                45N1703
  serial:               5616
  power supply:         yes
  updated:              Sat 02 Sep 2017 19:43:02 BST (39 seconds ago)
  has history:          yes
  has statistics:       yes
  battery
    present:             yes
    rechargeable:        yes
    state:               fully-charged
    warning-level:       none
    energy:              21.84 Wh
    energy-empty:        0 Wh
    energy-full:         21.9 Wh
    energy-full-design:  45.02 Wh
    energy-rate:         0.00219125 W
    voltage:             16.237 V
    percentage:          99%
    capacity:            48.645%
    technology:          lithium-polymer
    icon-name:          'battery-full-charged-symbolic'

As you can see, after ~3 years of extensive use I should really look for a replacement battery for this laptop; at 48% capacity, it’s not really doing very well.

To enumerate device paths, use the -e flag like this :

$ upower -e 
/org/freedesktop/UPower/devices/line_power_AC
/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/keyboard_0003o046DoC52Bx0004
/org/freedesktop/UPower/devices/mouse_0003o046DoC52Bx0005
/org/freedesktop/UPower/devices/DisplayDevice

Now we can check how that external keyboard’s battery is doing :

  native-path:          /sys/devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.2/0003:046D:C52B.0003/0003:046D:C52B.0004
  vendor:               Logitech, Inc.
  model:                K750
  serial:               D9ED612B
  power supply:         no
  updated:              Sat 02 Sep 2017 19:59:15 BST (29 seconds ago)
  has history:          yes
  has statistics:       no
  keyboard
    present:             yes
    rechargeable:        yes
    state:               discharging
    warning-level:       none
    luminosity:          80 lx
    percentage:          55%
    icon-name:          'battery-good-symbolic'
  History (charge):
    1504378755	55.000	discharging


Clearly the light in this room, right now, isn’t bright enough to be charging the keyboard via its solar cells. Might leave it closer to the window tomorrow.

As you can see from the enumerated list, there is line_power_AC as well as the mouse (which is actually a trackpad I use). And if you are so inclined (I wasn’t, but just did this for all my laptops..) you can track this info and graph it, push it to your monitoring service etc.
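For example, a quick way to pull just the battery percentage for scripting or monitoring could be a one-liner like this (device path as enumerated above) :

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0 | awk '/percentage/ {print $2}'
99%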

from the readme file:
UPower is an abstraction for enumerating power devices,
listening to device events and querying history and statistics.
Any application or service on the system can access the
org.freedesktop.UPower service via the system message bus.

Give it a shot.

August 31, 2017

Come help build duffy2 for CiCo

August 31, 2017 10:36 AM

When I came onboard with Red Hat, one of the key things I wanted to use Red Hat resources for was to help the wider opensource community write, manage and deliver better code. It was with that goal that I conceptualised, bootstrapped, argued for and then got the https://ci.centos.org/ project started up. Using well established industry standards (Jenkins !) I was able to rapidly build out the provisioning infra around it, with copious amounts of Fabian’s help. My focus, at the time, was that it should be simple enough to just work, but capable enough to keep working. There were many hacks involved, making it impossible to really adapt and grow it outside of the service.

Hundreds of thousands of CI jobs later, I think we can call that bootstrap a success.

Today, as we move forward with adding more machine types and extending support for what we have, it gives me great pleasure to start talking about how the pieces come together and how the service backend works – and to open the entire stack up for folks to come help us get better, faster and better-tested, and deliver duffy as a running service built on modern service development methodologies.

Come join me at https://github.com/kbsingh/duffy2 as we bootstrap the next instance of this service. Everyone’s welcome!

I also want to remind people that https://ci.centos.org is open to any open source project that can benefit from it ( including the access to bare metal hosts on demand ).

regards,

Git 2 on CentOS Linux 7

August 31, 2017 12:56 AM

Update: the SCL SIG now has rh-git218, so instead of using sclo-git212, use rh-git218. To update from git 2.12 to 2.18, I’ve just gone down the path of yum erase sclo-git212 && yum install rh-git218

The distro-shipped git is still at version 1.8, but if you need or want a newer git version there are a few options. The CentOS SCL SIG (https://wiki.centos.org/SpecialInterestGroup/SCLo) publishes a sclo-git212 collection that hosts git version 2.12.2 (at the moment; it will get updates as they become available). There is a collection for git 2.5 as well (called sclo-git25), should you want that version.

In order to get set up, first get the centos-release-scl package on the machine; that will set up the SCL yum repos and the SIG’s RPM signing key.
yum install centos-release-scl

With that in place, you should be able to check what scl collections are available for git with a yum command like this :
yum list sclo-git\*

And then install the version you want with :
yum install sclo-git212.x86_64

Once that completes, you can check that the scl is installed and working with something like this :
$ scl enable sclo-git212 /bin/bash
$ git --version
git version 2.12.2

This is good, but I find it a pain to need to enable scl’s all the time, so I use a line in my bashrc like this :
source scl_source enable sclo-git212

With that in place, every shell now has git version 2. Any other apps you run in the shell will have this version of git as well.

August 04, 2017

Keeping an eye on CentOS performance with Grafana

August 04, 2017 07:00 AM

I’ve spent a bit of time setting up CentOS as a home router due to a number of frustrations with existing home routers on the market. This was both a good exercise and a bit of nostalgia from my early days with Linux. Once I’d finished getting the basics set up, I wanted a way to track various statistics. Network traffic, disk usage, etc. The venerable cacti is certainly an option, but that’s feeling a bit legacy these days. I’d prefer to use a newer tool with a more modern feel. This is what led me to Grafana. Below is a basic walkthrough for how I’ve set things up. This is a very basic install, that incorporates Collectd, influxdb, and Grafana all on the same host.

Grafana Screenshot

Collectd

What, you thought I’d jump straight into Grafana? We have to have data to collect first, and the best way to do that on CentOS is via collectd.

The simplest way to get collectd on CentOS is via the EPEL repository. If you’re new to CentOS, or aren’t familiar with Fedora’s EPEL repo, the command below is all you need to get started.

yum install epel-release

Now that the EPEL repo is enabled, it’s easy enough to install collectd in the same manner:

yum install collectd

There are a number of additional collectd plugins available in EPEL, but for our purposes here the base is enough. I would encourage you to explore the available plugins if your needs aren’t met by the base plugin.

Now that it’s installed, we need to configure collectd to send data out. Collectd generates the stats, but we need to put it someplace that Grafana can use.

In /etc/collectd.conf there are a few things we need to configure. In the Global section, uncomment the lines for Hostname, BaseDir, PIDFile, PluginDir, and TypesDB. You’ll need to modify Hostname, but the rest should be fine as the defaults. It should look something like the snippet below:

Hostname    "YourHostNameHere"
#FQDNLookup   true
BaseDir     "/var/lib/collectd"
PIDFile     "/var/run/collectd.pid"
PluginDir   "/usr/lib64/collectd"
TypesDB     "/usr/share/collectd/types.db"

Now that we have the basic app information set, we need to enable the plugins we wish to use. For my instance, I have syslog, cpu, disk, interface, load, memory, and network uncommented. Of these, the default values are fine for everything except network. The network plugin is used to send data to our collector, which in this case is influxdb. The network plugin will need to point to your influxdb server. Since we’re doing everything locally in this example, we’re pointing to localhost. It should look like the following:

<Plugin network>
  Server "127.0.0.1" "8096"
</Plugin>

InfluxDB

Now that we’re done with collectd, we have to configure InfluxDB to pull in the data collectd is generating. Since InfluxDB isn’t in EPEL, we’ll have to pull it in from their repository. The command below makes this easy.

cat <<EOF > /etc/yum.repos.d/influxdb.repo
[influxdb]
name = InfluxDB Repository - RHEL \$releasever
baseurl = https://repos.influxdata.com/centos/\$releasever/\$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key
EOF

Once that’s done, install the package with yum install influxdb and then it’s ready to configure. There are only a few things that need to happen in the /etc/influxdb/influxdb.conf config file.

In the [http] section of your /etc/influxdb/influxdb.conf, set enabled = true, and bind-address = ":8086". It should look like this:

[http]
  # Determines whether HTTP endpoint is enabled.
enabled = true

  # The bind address used by the HTTP service.
bind-address = ":8086"

Then scroll down to the [[collectd]] section and configure it like the section below:

[[collectd]]
  enabled = true
  bind-address = ":8096"
  database = "collectd"
  typesdb = "/usr/share/collectd"

At this point we can go ahead and start both services to ensure that they’re working properly. To begin, we’ll enable collectd, and ensure that it’s sending data. As with other services, we’ll use systemd for this. In the sample below, you’ll see the commands used, and the output of a running collectd daemon.

[jperrin@monitor ~]$ sudo systemctl enable collectd
[jperrin@monitor ~]$ sudo systemctl start collectd
[jperrin@monitor ~]$ sudo systemctl status collectd
● collectd.service - Collectd statistics daemon
   Loaded: loaded (/usr/lib/systemd/system/collectd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-08-02 11:22:18 PDT; 6min ago
     Docs: man:collectd(1)
           man:collectd.conf(5)
 Main PID: 18366 (collectd)
   CGroup: /system.slice/collectd.service
           └─18366 /usr/sbin/collectd

Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "disk" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "interface" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "load" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "memory" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "network" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: Systemd detected, trying to signal readyness.
Aug 2 11:22:18 monitor collectd[18366]: Initialization complete, entering read-loop.
Aug 2 11:22:18 monitor systemd[1]: Started Collectd statistics daemon.

Now that collectd is working, start up InfluxDB and make sure it’s gathering data from collectd.

[jperrin@monitor ~]$ sudo systemctl enable influxdb
[jperrin@monitor ~]$ sudo systemctl start influxdb
[jperrin@monitor ~]$ sudo systemctl status influxdb
● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/usr/lib/systemd/system/influxdb.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-07-29 18:28:20 PDT; 1 weeks 6 days ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 23459 (influxd)
   CGroup: /system.slice/influxdb.service
           └─23459 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Aug 2 10:35:10 monitor influxd[23459]: [I] 2017-08-12T17:35:10Z SELECT mean(value) FROM collectd.autogen.cpu_value WHERE host =~ /^monitor$/ AND type_instance = 'interrupt' AND time > 417367h GR...) service=query
Aug 2 10:35:10 monitor influxd[23459]: [httpd] 172.20.1.40, 172.20.1.40,::1 - - [12/Aug/2017:10:35:10 -0700] "GET /query?db=collectd&epoch=ms&q=SELECT+mean%28%22value%22%29+FROM+%22load_shortte...ean%28%22value%
Aug 2 10:35:10 monitor influxd[23459]: [I] 2017-08-02T17:35:10Z SELECT mean(value) FROM collectd.autogen.cpu_value WHERE host =~ /^monitor$/ AND type_instance = 'nice' AND time > 417367h GROUP B...) service=query

As we can see in the output above, the service is working, and the data is being collected. From here, the only thing left to do is present it via Grafana.
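If you also want a quick sanity check on the InfluxDB side, the influx CLI can confirm that the collectd database is receiving measurements. The measurement names below are just examples of what the enabled collectd plugins typically produce, so your list may differ :

$ influx -database collectd -execute 'SHOW MEASUREMENTS'
name: measurements
name
----
cpu_value
interface_rx
interface_tx
load_shortterm
memory_value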

Grafana

To install Grafana, we’ll create another repository as we did with InfluxDB. Unfortunately the Grafana folks don’t keep release versions separate in the repo, so this looks like we’re using an EL6 repo despite doing this work on EL7.

cat <<EOF > /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://packagecloud.io/grafana/stable/el/6/\$basearch
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
EOF

Now that the repository is in place and enabled, we can install grafana the same as the rest: yum install grafana. Once this is done, we can start working on the configuration. For this tutorial we’re just going to set an admin username and password, since it’s a single-user instance. I would absolutely encourage you to read the docs if you want to start doing a bit more with grafana.

To accomplish this reasonably basic configuration, simply uncomment the admin_user and admin_password lines in the [security] section of /etc/grafana/grafana.ini, and set your own values. In this instance I’m using admin/admin, because that’s what you do in examples, right?

[security]
# default admin user, created on startup
admin_user = admin

# default admin password, can be changed before first start of grafana,  or in profile settings
admin_password = admin

Collectd data source for grafana

Now you can start grafana with systemctl start grafana-server, and configure it via the web interface. After you log in for the first time, you’ll be prompted to configure a few things including a data source, and a dashboard. Since we’re doing this all on the localhost, you’ll be able to cheat and use the data source settings in the screenshot. Don’t worry, we’re nearly there and there’s only a little left to do.

Once you have the datasource configured, you’ll be prompted to create your first dashboard. While you can certainly do this, it’s a little intimidating for a first run with grafana. One easy solution to this is to import one of the available templates offered on Grafana’s website. In my case, I opted to use the Host Overview. It provides a nice group of metrics and graphs as a base to use and build from.

Once you’ve gotten everything set up, it’s now down to personal preference and further tinkering. Once again I would very much recommend reading the documentation because there are a wealth of options and changes I didn’t touch on for this intro.



Powered by Planet!
Last updated: November 18, 2019 06:15 PM