October 21, 2017

CentOS Dojo at CERN

October 21, 2017 01:45 PM

Yesterday we held our first dojo at CERN - hopefully the first of many. We had around 70 people in attendance, representing many organizations and nations, and we had presentations from many different projects within the CentOS ecosystem.

If you're not familiar with CentOS Dojos, you can read more about them here: https://wiki.centos.org/Events/Dojo/

And if you're not familiar with CERN, you can read about it on Wikipedia, or on CERN's own website.

The dojo was in two parts.

On Thursday, a small group of CentOS SIG leaders and board members gathered to discuss plans for tackling some of the challenges in the CentOS project. You can read more about what was discussed on the centos-devel mailing list.

On Friday, we had the main event, with presentations from the CentOS board, SIG leaders, and organizations using CentOS. This included a presentation from CERN on their use of CentOS, Ceph, and OpenStack to process the data from the LHC - The Large Hadron Collider - as they analyze the nature of subatomic particles, and of the world.

We were very pleased with the day, and intend to do more events in the future, both at CERN and at other organizations. If you're interested in hosting a dojo at your organization, get in touch with Rich Bowen to get started. Also, watch this site for a blog post about what's involved in running a dojo.

For more about what happened at the dojo, see Rich's blog posts. Also, watch this space for video and slides from the event.

 

October 12, 2017

Upcoming Dojo at CERN, FOSDEM

October 12, 2017 12:51 PM

Next week, we're holding a Dojo at CERN, in Meyrin, Switzerland. This will feature content from several of our Special Interest Groups (SIGs), and an overview of how CERN is using CentOS in their work to unravel the secrets of the universe.

We still have a little space, if you are interested in coming. You can find out more details about the event, and register, at http://cern.ch/centos

In the weeks following the event, video of the presentations will appear here. Follow us on Twitter (@CentOSProject) to find out when they're posted.

Meanwhile, we're also planning a Dojo in Brussels on the Friday before FOSDEM, as we've done for a number of years now. The CFP is now open, if you're interested in presenting. We're looking for talks about work that you're doing on CentOS, or anything that you're doing using CentOS. The CFP closes October 30th.

October 11, 2017

Four years later with CentOS and Red Hat

October 11, 2017 07:00 AM

After 4 years of being at Red Hat, I still occasionally get questions that show not everyone understands what Red Hat means to CentOS, or what CentOS provides to Red Hat. They tend to think in terms of competition, like there’s an either-or choice. Reality just doesn’t bear that out.

First and foremost, CentOS is about integration, and it’s important to know who the community is. We’re your sysadmins and operations teams. We’re your SREs, the OPS in your devops. We’re a force multiplier to developers, the angry voice that says “stop disabling SELinux” and “show me your unit tests”. We’re the community voice encouraging you to do things the right way, rather than taking an easy shortcut we know from experience will come back to bite you.

What we’re not is developers. We may pull in kernel patches, but we’re not kernel developers. We can help you do the root cause analysis to figure out why your app is suddenly not performing, but we aren’t the ones to write the code to fix it. We don’t determine priority for what does or doesn’t get fixed, that’s what Red Hat does.

The core distribution of CentOS is and has always been based on code written by Red Hat. This doesn’t mean it’s a choice of “either CentOS or RHEL,” because we’re in this together. CentOS provides Red Hat a community platform for building and testing things like OpenStack with RDO. We build new ecosystems around ARM servers. We provide a base layer for others to innovate around emerging technologies like NFV. But none of this would be possible without the work of RH’s engineering teams.

The community can build, organize and deliver tools in any number of creative ways, but ultimately the code behind them is being developed by engineers paid to address the needs of Red Hat’s customers. You can bet that RH is keeping an eye on what the CentOS community is using and building, but that doesn’t necessarily translate to business need.

We’re here to empower operators who want to experiment on top of the enterprise base lifespan. We’re here to bring tools and technology to those for whom they might otherwise be out of reach. We’re here to take use cases and lessons learned from the community back to Red Hat as advocates. We’re happy to serve both audiences in this capacity, but let’s not forget how we buy the ‘free as in beer’.

October 10, 2017

Using Ansible Openstack modules on CentOS 7

October 10, 2017 10:00 PM

Suppose that you have an RDO/OpenStack cloud already in place, but you'd like to automate some operations: what can you do? On my side, I already mentioned that I used puppet to deploy the initial clouds, but I still prefer Ansible myself when having to launch ad-hoc tasks or even change configuration[s]. That's particularly true for our CI environment, where we run "agentless", so all configuration changes happen through Ansible.

The good news is that Ansible already has some modules for OpenStack, but they come with some requirements and need a little bit of understanding before you can use them.

First of all, all the ansible os_* modules need "shade" on the host included in the play, as that host will be responsible for launching all the os_* modules. At the time of writing this post, it's not yet available on mirror.centos.org (a review is open, so it will soon be available directly), but you can find the pkg on our CBS builders.

Once installed, a simple os_image task was failing right away, despite the fact that auth: was present, and that's due to a simple reason: the Ansible os_* modules still want to use the v2 API, while it now defaults to v3 in the Pike release. There is no way to force ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through os-client-config.

That means that you just have to use a .yaml file (does that sound familiar for ansible?) that will contain everything you need to know about a specific cloud, and then in ansible just declare which cloud you're configuring.

That clouds.yaml file can live under $current_directory, ~/.config/openstack or /etc/openstack, so it's up to you to decide where you want to temporarily host it, but I selected /etc/openstack/ :

- name: Ensuring we have required pkgs for ansible/openstack
  yum:
    name: python2-shade
    state: installed

- name: Ensuring local directory to hold the os-client-config file
  file:
    path: /etc/openstack
    state: directory
    owner: root
    group: root

- name: Adding clouds.yaml for os-client-config for further actions
  template:
    src: clouds.yaml.j2
    dest: /etc/openstack/clouds.yaml
    owner: root
    group: root
    mode: 0700

Of course such a clouds.yaml file is itself a jinja2 template distributed by ansible to the host in the play before using the os_* modules :

clouds:
  {{ cloud_name }}:
    auth:
      username: admin
      project_name: admin
      password: {{ openstack_admin_pass }}
      auth_url: http://{{ openstack_controller }}:5000/v3/
      user_domain_name: default
      project_domain_name: default
    identity_api_version: 3

You just have to adapt it to your needs (see the doc for this), but the interesting part is identity_api_version to force v3.
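
Side note: if you also have python-openstackclient installed on that host, the same clouds.yaml can be used to quickly validate the credentials outside of ansible (just a sketch; 'mycloud' is a placeholder for whatever value you used as cloud_name) :

openstack --os-cloud=mycloud image list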

Then, you can use all that in a simple way through ansible tasks, in this case adding users to a project :

- name: Configuring OpenStack user[s]
  os_user:
    cloud: "{{ cloud_name }}"
    default_project: "{{ item.0.name }}"
    domain: "{{ item.0.domain_id }}"
    name: "{{ item.1.login }}"
    email: "{{ item.1.email }}"
    password: "{{ item.1.password }}"           
  with_subelements:
    - "{{ cloud_projects }}"
    - users  
  no_log: True

From a variables point of view, I decided to just have a simple structure to host project/users/roles/quotas like this :

cloud_projects:
  - name: demo
    description: demo project
    domain_id: default
    quota_cores: 20
    quota_instances: 10
    quota_ram: 40960
    users:
      - login: demo_user
        email: demo@centos.org
        password: Ch@ngeM3
        role: admin # can be _member_ or admin
      - login: demo_user2
        email: demo2@centos.org
        password: Ch@ngeMe2

Now that it works, you can explore all the other os_* modules (see the short sketch after the list below). I'm already using those to :

  • Import cloud images in glance
  • Create networks and subnets in neutron
  • Create projects/users/roles in keystone
  • Change quotas for those projects
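
For example, a rough sketch for the last two items, assuming the os_project and os_quota modules shipped with recent ansible versions, and reusing the cloud_projects structure above :

- name: Configuring OpenStack project[s]
  os_project:
    cloud: "{{ cloud_name }}"
    name: "{{ item.name }}"
    description: "{{ item.description }}"
    domain_id: "{{ item.domain_id }}"
    state: present
  with_items: "{{ cloud_projects }}"

- name: Configuring quotas for those project[s]
  os_quota:
    cloud: "{{ cloud_name }}"
    name: "{{ item.name }}"
    cores: "{{ item.quota_cores }}"
    instances: "{{ item.quota_instances }}"
    ram: "{{ item.quota_ram }}"
  with_items: "{{ cloud_projects }}"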

I'm just discovering how powerful those tools are, so I'll probably discover much more interesting things to do with those later.

September 28, 2017

Using CentOS 7 armhfp VM on CentOS 7 aarch64

September 28, 2017 10:00 PM

Recently we got our hands on some aarch64 (aka ARMv8 / 64 bits) nodes running in a remote DC. On my (already too long) TODO/TOTEST list I had the idea of testing armhfp VMs on top of aarch64. The reason is that when I need to test our packages, using my own Cubietruck or RaspberryPi3 is time consuming: removing the sdcard, reflashing it with the correct CentOS 7 image and booting/testing the pkg/update/etc ...

So is it possible to just automate this through an available aarch64 node used as hypervisor? Sure! And it's pretty straightforward if you have already played with libvirt. Let's start with a CentOS 7 aarch64 minimal setup and then :

yum install qemu-kvm-tools qemu-kvm virt-install libvirt libvirt-python libguestfs-tools-c
systemctl enable libvirtd --now

That's pretty basic, but for armhfp we'll have to do some extra steps: qemu normally tries to simulate a bios/uefi boot, which armhfp doesn't support, and qemu doesn't emulate the mandatory uboot to just chainload to the RootFS from the guest VM.

So here is just what we need :

  • Import the RootFS from an existing image
curl http://mirror.centos.org/altarch/7/isos/armhfp/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img.xz|unxz >/var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img
  • Convert image to qcow2 (that will give us more flexibility) and extend it a little bit
qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2
qemu-img resize /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 +15G
  • Extract kernel+initrd as libvirt will boot that directly for the VM
mkdir /var/lib/libvirt/armhfp-boot
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/ /var/lib/libvirt/armhfp-boot/

So now that we have a RootFS, and also kernel/initrd, we can just use virt-install to create the VM (pointing to existing backend qcow2) :

virt-install \
 --name centos7_armhfp \
 --memory 4096 \
 --boot kernel=/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.40-203.el7.armv7hl,initrd=/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.40-203.el7.armv7hl.img,kernel_args="console=ttyAMA0 rw root=/dev/sda3" \
 --disk /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 \
 --import \
 --arch armv7l \
 --machine virt

And here we go: we have an armhfp VM that boots really fast (compared to an armhfp board using a microsd card, of course).

At this stage, you can configure the node, etc. The only thing you have to remember is that the kernel is of course provided from outside the VM, so to boot on an updated kernel you just have to extract it from the updated VM. Let's show how to do that; in the above example, we configured the VM to run with 4GB of RAM, but only 3 are really seen inside (remember 32-bit mode and the need for PAE on i386?).

So let's use this example to show how to switch kernels. From the armhfp VM :

# Let's extend the filesystem first, as we now have a bigger disk
growpart /dev/sda 3
resize2fs /dev/sda3
yum update -y
yum install kernel-lpae
systemctl poweroff # we'll modify libvirt conf file for new kernel

Back on the hypervisor we can again extract the needed files :

virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae /var/lib/libvirt/armhfp-boot/boot/
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img /var/lib/libvirt/armhfp-boot/boot/

And just virsh edit centos7_armhfp so that kernel and initrd point to the correct locations:

<kernel>/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae</kernel>
<initrd>/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img</initrd>

Now that we have a "gold" image, we can even use existing tools to quickly provision other nodes on that hypervisor ! :

time virt-clone --original centos7_armhfp --name armhfp_guest1 --file /var/lib/libvirt/images/armhfp_guest1.qcow2
Allocating 'armhfp_guest1.qcow2'                                               |  18 GB  00:00:02     

Clone 'armhfp_guest1' created successfully.

real    0m2.809s
user    0m0.473s
sys 0m0.062s

time virt-sysprep --add /var/lib/libvirt/images/armhfp_guest1.qcow2 --operations defaults,net-hwaddr,machine-id,net-hostname,ssh-hostkeys,udev-persistent-net --hostname guest1

virsh start armhfp_guest1

As simple as that. Of course, in the previous example we were just using the default network from libvirt, and not a bridge, but you get the idea: all the rest works with the well-known libvirt concepts on linux.
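
Side note: nothing prevents you from attaching those guests to a host bridge instead of the default NAT network. A quick sketch, assuming a bridge named br0 already exists on the hypervisor: either pass --network bridge=br0,model=virtio to virt-install at creation time, or add an interface to an existing guest with virsh :

virsh attach-interface --domain centos7_armhfp --type bridge --source br0 --model virtio --config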

September 26, 2017

Status update for CentOS Container Pipeline

September 26, 2017 10:12 AM

The goal of the CentOS Container Pipeline project is to let any open-source project build container images on CentOS Linux and additionally provide them with:

  • Dockerfile lint report
  • Container scanner reports that:
    • Scan the image for RPM updates
    • Scan the image’s RUN label for capabilities that resulting container might have when started
    • Scan the image to verify installed RPM packages
    • Scan the image for possible updates to third party packages installed via npm, pip or gem
  • Cause of build whenever an image is built/rebuilt.

In this article we’d like to summarize the features provided by the Pipeline and current state of the project. To get an idea of container images already available via registry.centos.org, please check the wiki page of Container Pipeline.

How does the CentOS Container Pipeline work?

Let’s say you have an open-source project that you’d like to containerize on the CentOS platform. The source code is hosted on one of the various web-based Git version control services like GitHub, Bitbucket, GitLab, etc., accessible over the Internet. You have a Dockerfile that uses a CentOS base image to build the container (we can help you here if your existing Dockerfile is based on Alpine, Debian, Ubuntu, etc.)

Now all you need to do is create a cccp.yml file in the repo at the same location as your Dockerfile and open a pull request on the CentOS/container-index repository to get started (more on the yaml file and how to open the PR later in this post.) The generated container image can then be pulled via:

$ docker pull registry.centos.org/<app-id>/<job-id>:<desired_tag>

The cccp.yml (or cccp.yaml) file that’s required in your Git repository must contain a value for job-id at the very least. This is generally the name of the image, like httpd for an Apache web server image or nginx for an NGINX image, and so on.
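
As an illustration, a minimal cccp.yml for an Apache httpd image could be as small as this (job-id being the only required field, per above):

job-id: httpd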

 

For the pull request to be opened on CentOS/container-index, you’ll need to:

  • Fork the repository under your GitHub username
  • Clone it onto your system
  • Add a yml file under the `index.d` directory. The name of this yml file is recommended to be the same as the app-id that you want in the aforementioned `docker pull` command.
  • The contents of this yaml file should look like the example below:
    Projects:
        - id: 1
          app-id: centos
          job-id: centos
          git-url: https://github.com/CentOS/sig-cloud-instance-images
          git-branch: CentOS-7
          git-path: docker
          target-file: Dockerfile
          desired-tag: latest
          notify-email: you@example.com
          depends-on: null

    id should be an integer and shouldn’t repeat in the yml file.
    app-id is the namespace of your container images. This should be the same as the filename of the yml file.
    job-id is the name you want for your container image.
    git-url is the complete URL to your Git repo.
    git-branch is the branch within your repo. Default is `master`.
    git-path is the path within the repo (relative to its root) that contains the target-file.
    target-file is the name of the Dockerfile to be used to build the container image.
    desired-tag is the tag you’d like to apply to the resulting container image.
    notify-email is the email address you’d like notifications to be sent to.
    depends-on is the container image that your resulting image depends on. Generally this is the one used in the FROM statement in the Dockerfile. The image mentioned here must exist in the container-index.

    For more info on the yml file, we recommend you refer to its dedicated section in the README. For more examples of writing the yml file, we recommend you refer to the index.d directory, which contains yml files for various open-source projects as well as individual users.

Once the pull request is merged, the Container Pipeline Service hosted on CentOS infrastructure picks it up, lints the Dockerfile, builds the container image, tests it, scans it using various atomic scanners and sends the results of these processes to the email address you mentioned as `notify-email`. If it detects an issue at any of the above stages, it’ll stop right there and send you an email along with the logs.

Once the image is built for the first time, every time you push a change to the Git repository’s (`git-url` variable) branch being tracked via the container-index (`git-branch` variable), a new image is built and the lint-build-test-scan processes are re-executed. This provides the developer with feedback on the changes (s)he pushed.

Weekly image scanning, RPM tracking and parent image update

Besides the one-time image scanning that happens when the image is built for the first time, CentOS Container Pipeline service does a weekly scanning and sends the results to the developer. This email only contains the information generated by the atomic scanners, albeit from a fresh run.

The Pipeline service also tracks the RPM repositories enabled in the container image. It checks these repositories once every day to find out if any update is available from any of the repos. If it finds an update, the container images which have those repositories enabled will be re-built and re-scanned.

If the parent image of the project (`depends-on` variable) is updated, the child image automatically gets re-built and re-scanned.

Work in Progress features

Besides the features mentioned above, we are working on providing the ability to build images for aarch64 architecture.

We are also working on saving data points that will store the state of the Pipeline in a database and help us churn useful metrics out of it. One thing we'll be able to use it for is generating a real-time view of the build process.

We also want to provide a feature that lets users know the current status of their build.

We are working on providing a brief summary of errors/warnings that scanners found in the container image.

Known issues

There are a few issues we’re working on right now and hope to get fixed soon:

  • Monitoring the overall service is in its nascent stages and we need to improve it so that we know about an issue before the users point it out to us. We use Sentry for monitoring the Pipeline service.
  • Although we have a UI for the registry at https://registry.centos.org/, we need to tweak it to be more useful for the end-user to:
    • Have a quick look at the Dockerfile used to build the image
    • Access the logs for historic builds
  • RPM tracking issues, wherein a project removed from or updated in CentOS/container-index doesn’t get deleted/modified in the underlying database, and hence triggers a rebuild of the wrong image when it finds that any of the various enabled repositories has been updated.

Have questions or suggestions?

We are always looking forward to community participation and community feedback. The project has been open-source from day one. If you have any queries about how to get started, why certain things work in a certain way, would like to see a feature, or anything else, feel free to ping us on the #centos-devel IRC channel on the Freenode network.

Dharmit Shah ( dharmit on irc )

September 20, 2017

New CentOS Atomic Host with OverlayFS Storage

September 20, 2017 10:26 PM

The CentOS Atomic SIG has released an updated version of CentOS Atomic Host (7.1708), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

This release, which is based on the RHEL 7.4 source code, includes an updated kernel that supports overlayfs container storage, among other enhancements.

CentOS Atomic Host includes these core component versions:

  • atomic-1.18.1-3.1.git0705b1b.el7.x86_64
  • cloud-init-0.7.9-9.el7.centos.2.x86_64
  • docker-1.12.6-48.git0fdc778.el7.centos.x86_64
  • etcd-3.1.9-2.el7.x86_64
  • flannel-0.7.1-2.el7.x86_64
  • kernel-3.10.0-693.2.2.el7.x86_64
  • kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64
  • ostree-2017.7-1.el7.x86_64
  • rpm-ostree-client-2017.6-6.atomic.el7.x86_64

OverlayFS Storage

In previous releases of CentOS Atomic Host, SELinux had to be in permissive or disabled mode for OverlayFS storage to work. Now you can run the OverlayFS file system with SELinux in enforcing mode. CentOS Atomic Host still defaults to devicemapper storage, but you can switch to OverlayFS using the following commands:

$ systemctl stop docker
$ atomic storage reset
  # Reallocate space to the root VG - tweak how much to your liking
$ lvm lvextend -r -l +50%FREE atomicos/root
$ atomic storage modify --driver overlay2
$ systemctl start docker

For more information on storage management options, see the upstream RHEL documentation.

Containerized Master

CentOS Atomic Host ships without the kubernetes-master package built into the image. For information on how to run these kubernetes components as system containers, consult the CentOS wiki.

If you prefer to run Kubernetes from installed rpms, you can layer the master components onto your Atomic Host image using rpm-ostree package layering with the command: atomic host install kubernetes-master -r.

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. For links to media, see the CentOS wiki.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets every two weeks on Tuesday at 04:00 UTC in #centos-devel, and on the alternating weeks, meets as part of the Project Atomic community meeting at 16:00 UTC on Monday in the #atomic channel. You'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

Boosting CentOS server performance

September 20, 2017 07:00 AM

Last week I spent entirely too much time trying to track down a performance issue for the AArch64/ARM64 build of CentOS. While we don’t and won’t do performance comparisons or optimizations, this was fully in the realm of “something’s wrong here”. After a bit of digging, this issue turns out to impact just about everyone running CentOS on their servers who isn’t doing custom performance tuning.

The fix

I know most people who found this don’t care about the details, so we’ll get right to the good stuff. Check your active tuned profile. If your output looks like the example below, you probably want to change it.

[root@centos ~]# tuned-adm active
Current active profile: balanced

The ‘balanced’ profile means the CPU governor is set to powersave, which won’t do your server any favors. You can validate this by running cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor. To fix it, run the command below:

[root@centos ~]# tuned-adm profile throughput-performance

That’s it. This changes the governor to performance, which should give you a pretty decent performance bump without any additional changes, across all hardware platforms. If you’re interested in figuring out why the default is set this way, I’ll explain.

Why the default is “wrong”

The tuned package is installed and enabled by default. When it runs for the first time, it tries to automatically select the best performance profile for the system by running a couple of checks: it looks at the virt-what output, and at the contents of /etc/system-release-cpe. The tuned file /usr/lib/tuned/recommend.conf is then used as the rulebook to see what matches and what doesn’t.

This starts to unravel a bit with CentOS, because the packages are derived from RHEL (Red Hat Enterprise Linux), and while RHEL may differentiate between server, workstation, etc., CentOS does not. If you look carefully at the recommend.conf check for the throughput-performance profile, you’ll see that it checks whether the strings computenode or server exist in /etc/system-release-cpe. On CentOS, neither one does, because the distribution doesn’t make that distinction. Because these strings aren’t found, the fallback option of balanced is chosen.
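
You can see this for yourself; on a stock CentOS 7 install the file contains only the distribution identifier, with no ‘server’ or ‘computenode’ string for the rule to match:

[root@centos ~]# cat /etc/system-release-cpe
cpe:/o:centos:centos:7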

September 13, 2017

Updated CentOS Vagrant Images Available (v1708.01)

September 13, 2017 08:12 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.4.1708 for x86_64 (based on the sources of RHEL 7.4). All included packages have been updated to 12 September 2017.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems (see the synced-folder examples after this list).

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
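
For reference, the alternative synced-folder setups mentioned in item 1 look roughly like this in a Vagrantfile (only a sketch; adjust to your environment):

    # NFS, where the host platform supports it
    config.vm.synced_folder ".", "/vagrant", type: "nfs"
    # sshfs, via the vagrant-sshfs plugin
    config.vm.synced_folder ".", "/vagrant", type: "sshfs"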

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 2.0.0 on OS X 10.11.6, with VirtualBox 5.1.26.

Downloads

The official images can be downloaded from Vagrant Cloud. We provide images for Hyper-V, libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

September 02, 2017

Battery and power status on your CentOS Linux laptop

September 02, 2017 07:06 PM

The upower cli tool will get you a ton of great info about the battery (and other things related to power). Make sure you have it installed (rpm -q upower), and give it a shot like this :

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
  native-path:          BAT0
  vendor:               SMP
  model:                45N1703
  serial:               5616
  power supply:         yes
  updated:              Sat 02 Sep 2017 19:43:02 BST (39 seconds ago)
  has history:          yes
  has statistics:       yes
  battery
    present:             yes
    rechargeable:        yes
    state:               fully-charged
    warning-level:       none
    energy:              21.84 Wh
    energy-empty:        0 Wh
    energy-full:         21.9 Wh
    energy-full-design:  45.02 Wh
    energy-rate:         0.00219125 W
    voltage:             16.237 V
    percentage:          99%
    capacity:            48.645%
    technology:          lithium-polymer
    icon-name:          'battery-full-charged-symbolic'

As you can see, after ~3 years of extensive use I should really look for a replacement battery for this laptop; at 48% capacity, it's not really doing very well.

To enumerate device paths, use the -e flag like this :

$ upower -e 
/org/freedesktop/UPower/devices/line_power_AC
/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/keyboard_0003o046DoC52Bx0004
/org/freedesktop/UPower/devices/mouse_0003o046DoC52Bx0005
/org/freedesktop/UPower/devices/DisplayDevice

Now we can check how that external keyboard's battery is doing, by querying its device path from the list above (upower -i /org/freedesktop/UPower/devices/keyboard_0003o046DoC52Bx0004):

  native-path:          /sys/devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.2/0003:046D:C52B.0003/0003:046D:C52B.0004
  vendor:               Logitech, Inc.
  model:                K750
  serial:               D9ED612B
  power supply:         no
  updated:              Sat 02 Sep 2017 19:59:15 BST (29 seconds ago)
  has history:          yes
  has statistics:       no
  keyboard
    present:             yes
    rechargeable:        yes
    state:               discharging
    warning-level:       none
    luminosity:          80 lx
    percentage:          55%
    icon-name:          'battery-good-symbolic'
  History (charge):
    1504378755	55.000	discharging


Clearly the light in this room, right now, isn't bright enough to be charging the keyboard via its solar cells. Might leave it closer to the window tomorrow.

As you can see from the enumerated list, there is line_power_AC as well as the mouse (which is actually a trackpad I use). And if you are so inclined (I wasn't, but I just did this for all my laptops..) you can track this info and graph it, push it to your monitoring service etc.
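
The output is easy enough to parse for that; a rough sketch that pulls out just the battery percentage, ready to feed to whatever graphing or monitoring tool you prefer:

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0 | awk '/percentage:/ {print $2}'
99%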

from the readme file:
UPower is an abstraction for enumerating power devices,
listening to device events and querying history and statistics.
Any application or service on the system can access the
org.freedesktop.UPower service via the system message bus.

Give it a shot.

August 31, 2017

Come help build duffy2 for CiCo

August 31, 2017 10:36 AM

When I came onboard with Red Hat, one of the key impacts that I wanted to be able to use Red Hat resources for was to help the wider opensource community write, manage and deliver better code. It was with that goal that I conceptualised, bootstrapped, argued for and then got the https://ci.centos.org/ project started up. Using well established industry standards ( Jenkins ! ) I was able to rapidly build out the provisioning infra around it, with copious amounts of Fabian’s help. My focus, at the time, was that it should be simple enough to just work, but capable enough to keep working. There were many hacks involved, making it impossible to really adapt and grow it outside of the service.

Hundreds of thousands of CI jobs later, I think we can call that bootstrap a success.

Today, as we move forward to adding more machine types and extending support for what we have – it gives me great pleasure to start talking about how the pieces come together and how the service backend works – and to open the entire stack up for folks to come help us get better, faster and better-tested, and to deliver duffy as a running service built on modern service development methodologies.

Come join me at https://github.com/kbsingh/duffy2 as we bootstrap the next instance of this service. Everyone’s welcome!

I also want to remind people that https://ci.centos.org is open to any open source project that can benefit from it ( including the access to bare metal hosts on demand ).

regards,

Git 2 on CentOS Linux 7

August 31, 2017 12:56 AM

The distro-shipped git is still at version 1.8, but if you need or want a newer git version there are a few options. The CentOS SCL SIG (https://wiki.centos.org/SpecialInterestGroup/SCLo) publishes a git212 collection that hosts git version 2.12.2 (at the moment; it will get updates as they become available). There is a collection for git 2.5 as well (called sclo-git25), should you want that version.

In order to get set up, first get the centos-release-scl package on the machine; that will set up the scl yum repos and the SIG's RPM signing key.
yum install centos-release-scl

With that in place, you should be able to check what scl collections are available for git with a yum command like this :
yum list sclo-git\*

And then install the version you want with :
yum install sclo-git212.x86_64

Once that completes, you can check that the scl is installed and working with something like this :
$ scl enable sclo-git212 /bin/bash
$ git --version
git version 2.12.2

This is good, but I find it a pain to need to enable scl’s all the time, so I use a line in my bashrc like this :
source scl_source enable sclo-git212

With that in place, every shell now has git version 2, and any other apps you run in the shell will see this version of git as well.

August 24, 2017

CR Repository for CentOS Linux 7 (1708) Released

August 24, 2017 12:46 PM

In an earlier article on CentOS Linux 7 (1708), I explained the basic release process and things like the Continuous Release (CR) repository.  I am not going to go into detail about those two topics here, just update you on the process.

The packages that make up what will be our CentOS Linux 7 (1708), minus the distribution installer and the new release files, are now in our CR repository.  The package release announcements have been sent, and the release notes for this release are available.

The CentOS team rolled in a bugfix to iptables to prevent the service from failing to restart on systems that do not use firewalld but have ip6tables and iptables enabled.  We used this patch in the fix.  We do not normally do that (normally we only fix bugs when the source RPM is released upstream), but in this case we decided to because of the impact of having no firewall on a huge number of internet-facing machines that use CentOS.  The new package versioning (iptables-*1.4.21-18.0.el7*) is such that when a replacement is released by Red Hat for RHEL, and after we build it, it will replace our version in this release.

Some things of note about this release

We had a record number of missing build requirements (that is, things required to build the release that are not actually in the distribution to run the released packages).  These packages are not part of RHEL proper, and each one has to be researched and an appropriate package found (usually from EPEL or the Fedora archives) to build the packages.  In the 1708 release, we had 11 of those source packages to find.  In the previous four CentOS release cycles there were a total of 5.

There is a larger number of packages that have been rebased to newer versions in this release than in the past.  This seems to be a trend by the RHEL engineers to give the releases newer software, especially in the desktop/GUI areas, while still backporting most of the server related packages to maintain ABI/API compatibility.  The release notes talk about specific libraries that were rebased.  I like rebases, they give us newer stuff and who doesn't like that 🙂 .. but they also mean newer shared libraries, and that makes finding the correct build order for packages even more important.  This led to a larger number of packages requiring rebuilding more than once during the process because they had older shared library links initially.

Who should not install the CR (and wait for the full release)

The CR repository is not on by default; it is an opt-in process.  It usually takes 2-3 weeks after the CR release for the final release to happen (to get the installer working, compile the full tree from older releases and the newer updates, generate install media, cloud images, vagrant boxes, container images, etc.).

Everyone wants to upgrade immediately (me too 🙂 ), but you may need to hold off for some of the following reasons.

  1.  If you use lots of third party repositories (EPEL, Nux!, CentOS Plus, etc.), then some of those packages could be outdated and the developers may need to link them against newer libraries.  Your upgrade might work fine, or it might not.  The CR is how we make our packages available to these devs, so it may take some amount of time before everything in 3rd party repos (and even CentOS Plus and CentOS Extras) works completely.
  2. Special Interest Group content may also need to be rebuilt against the newer shared libraries before it works.  I don't track each and every SIG, but I am involved in the Virt SIG and I maintain several of the xen packages there.  The xen repository needs a newer libvirt and seabios, and then the xen packages need to be rebuilt against those (as an example).  CR is how we make the new packages available for the devs to be able to do that.

Each install is unique and should be tested before upgrading.  Many of the libraries have compatibility versions in the release, so some of the things above will work and others will not.  yum should tell you about any errors if it cannot upgrade.

How to enable CR

You can enable CR with the command:  yum-config-manager --enable cr

After that you can upgrade with the command:  yum update

Notes:

  1. A change to the rdma-core package from a noarch to an arch version might want to bring in i686 packages due to the way 'obsoletes' works in yum.  This is a known upstream issue (see the bug) and is in the release notes .. so if you don't want the extra couple of i686 packages, do this for the update:  yum update rdma-core  After that completes, run yum update for the rest of the updated packages.
  2. The version of libgpod that was in EPEL before this update set was newer than the one released.  If you have that installed, you must first do yum downgrade libgpod , then do yum update

Enjoy!

August 11, 2017

New CentOS Atomic Release and Kubernetes System Containers Now Available

August 11, 2017 06:53 PM

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (7.1707), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

The release, which came as part of the monthly CentOS release stream, was a modest one, including only a single glibc bugfix update. The next Atomic Host release will be based on the RHEL 7.4 source code and will include support for overlayfs container storage, among other enhancements.

Outside of the Atomic Host itself, the SIG has updated its Kubernetes container images to be usable as system containers. What's more, in addition to the Kubernetes 1.5.x-based containers that derive from RHEL, the Atomic SIG is now producing packages and containers that provide the current 1.7.x version of Kubernetes.

Containerized Master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. You can install the master kubernetes components (apiserver, scheduler, and controller-manager) as system containers, using the following commands:

# atomic install --system --system-package=no --name kube-apiserver registry.centos.org/centos/kubernetes-apiserver:latest

# atomic install --system --system-package=no --name kube-scheduler registry.centos.org/centos/kubernetes-scheduler:latest

# atomic install --system --system-package=no --name kube-controller-manager registry.centos.org/centos/kubernetes-controller-manager:latest

Kubernetes 1.7.x

The CentOS Virt SIG is now producing Kubernetes 1.7.x rpms, available through this yum repo. The Atomic SIG is maintaining system containers based on these rpms that can be installed as follows:

on your master

# atomic install --system --system-package=no --name kube-apiserver registry.centos.org/centos/kubernetes-sig-apiserver:latest

# atomic install --system --system-package=no --name kube-scheduler registry.centos.org/centos/kubernetes-sig-scheduler:latest

# atomic install --system --system-package=no --name kube-controller-manager registry.centos.org/centos/kubernetes-sig-controller-manager:latest

on your node(s)

# atomic install --system --system-package=no --name kubelet registry.centos.org/centos/kubernetes-sig-kubelet:latest

# atomic install --system --system-package=no --name kube-proxy registry.centos.org/centos/kubernetes-sig-proxy:latest

Both the 1.5.x and 1.7.x sets of containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and function as drop-in replacements for the installed rpms. If you prefer to run Kubernetes from installed rpms, you can layer the master components onto your Atomic Host image using rpm-ostree package layering with the command: atomic host install kubernetes-master.

The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. For links to media, see the CentOS wiki.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets every two weeks on Tuesday at 04:00 UTC in #centos-devel, and on the alternating weeks, meets as part of the Project Atomic community meeting at 16:00 UTC on Monday in the #atomic channel. You'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

August 04, 2017

CentOS Linux 7 (1708); based on RHEL 7.4 Source Code

August 04, 2017 11:04 AM

Red Hat released Red Hat Enterprise Linux 7.4 on August 1st, 2017 (Info).  In the CentOS world, we call this type of release a 'Point Release', meaning that the major version of a distribution (in this case Red Hat Enterprise Linux 7) is getting a new point in time update set (in this case '.4').  In this specific case, there were about 700 Source packages that were updated.  On this release date, the CentOS Project team began building a point release of CentOS Linux 7, CentOS Linux 7 (1708), with this new source code from Red Hat.  Here is how we do it.

When there is a new release of RHEL 7 source code, the public release of this source code happens on the CentOS git server (git.centos.org).  We then use a published set of tools to build Source RPMs (SRPMs) from the released git source code and immediately start building the updated version of CentOS Linux.  We use a program called mock to build Binary RPM packages from the SRPMs.
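
If you have never used mock, the workflow looks roughly like this (only a sketch, using the stock epel-7-x86_64 chroot config that ships with mock; the CentOS build system uses its own configs and build orchestration, and the SRPM name below is just a placeholder):

yum install epel-release
yum install mock
mock -r epel-7-x86_64 --rebuild some-package.src.rpm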

At the time of this article (5:00am CDT on August 4th, 2017), we have completed building 574 of the approximately 700 SRPMs needed for our point release.  Note that this is the largest number of packages in an EL7 point release so far.

What's Next

Continuous Release Repository

Once we complete the building of the 700 SRPMs in the point release, they will start our QA process.  Once the packages have gone through enough of the QA process to ensure they are built correctly, do the normal things, and link against the proper libraries, the first set of updates that we release will be from our Continuous Release repository.  These binary packages will be the same packages that we will use in the next full tree, and they will be available as updates in the CR repo of the current release.  If you have opted into the CR repo (explained in the above link), then after we populate and announce the release of those packages, a normal 'yum update' will upgrade you to the new packages.

Normally the only packages that we don't release into the CR repo are the new centos-release package and the new anaconda package, and any packages associated specifically with them.  Anaconda is the installer used to create the new ISOs for install and the centos-release package has information on the new full release.  These packages, excluded from CR, will be in the new installer tree and on the new install media of the full release.

Historically, the CR repo is released between 7 and 14 days after Red Hat source code release, so we would expect CR availability between the 8th and 15th of August, 2017 for this release.

Final Release and New Install Media, CentOS Linux 7 (1708)

After the CR Repo release, the CentOS team and the QA team will continue QA testing and we will create a compilation of the newly built and released CR packages and packages still relevant from the last release into a new repository for CentOS Linux 7 (1708). Once we have a new repo, we will create and test install media. The repository and install media will then be made available on mirror.centos.org.  It will be in a directory labeled 7.4.1708.

Historically, the final release becomes available 3 to 6 weeks after the release of the source code by Red Hat.  So, we would expect our full release to happen sometime between August 22nd and  September 12th, 2017.

As I mentioned earlier, this is the largest point release yet in terms of the number of packages released in the EL7 cycle to date, so each of the cycles above may take a few days longer.  Also, for building this set of packages we needed something new .. the Developer Toolset (version 6) compiler .. for some packages.  This should not be a major issue, as the Software Collections Special Interest Group (SCL SIG) has a working version of that toolset already released.  A big thank you to that SIG, as they have saved me a huge amount of work and time for this release.

Keeping an eye on CentOS performance with Grafana

August 04, 2017 07:00 AM

I’ve spent a bit of time setting up CentOS as a home router due to a number of frustrations with existing home routers on the market. This was both a good exercise and a bit of nostalgia from my early days with Linux. Once I’d finished getting the basics set up, I wanted a way to track various statistics: network traffic, disk usage, etc. The venerable cacti is certainly an option, but that’s feeling a bit legacy these days. I’d prefer to use a newer tool with a more modern feel. This is what led me to Grafana. Below is a basic walkthrough of how I’ve set things up. This is a very basic install that incorporates Collectd, InfluxDB, and Grafana, all on the same host.

Grafana Screenshot

Collectd

What, you thought I’d jump straight into Grafana? We have to have data to collect first, and the best way to do that on CentOS is via collectd.

The simplest way to get collectd on CentOS is via the EPEL repository. If you’re new to CentOS, or aren’t familiar with Fedora’s EPEL repo, the command below is all you need to get started.

yum install epel-release

Now that the EPEL repo is enabled, it’s easy enough to install collectd in the same manner:

yum install collectd

There are a number of additional collectd plugins available in EPEL, but for our purposes here the base is enough. I would encourage you to explore the available plugins if your needs aren’t met by the base plugin.

Now that it’s installed, we need to configure collectd to send data out. Collectd generates the stats, but we need to put it someplace that Grafana can use.

In /etc/collectd.conf there are a few things we need to configure. In the Global section, uncomment the lines for Hostname, BaseDir, PIDFile, PluginDir, and TypesDB. You’ll need to modify Hostname, but the rest should be fine as the defaults. It should look something like the snippet below:

Hostname    "YourHostNameHere"
#FQDNLookup   true
BaseDir     "/var/lib/collectd"
PIDFile     "/var/run/collectd.pid"
PluginDir   "/usr/lib64/collectd"
TypesDB     "/usr/share/collectd/types.db"

Now that we have the basic app information set, we need to enable the plugins we wish to use. For my instance, I have syslog, cpu, disk, interface, load, memory, and network uncommented. Of these, the default values are fine for everything except network. The network plugin is used to send data to our collector, which in this case is influxdb. The network plugin will need to point to your influxdb server. Since we’re doing everything locally in this example, we’re pointing to localhost. It should look like the following:

<Plugin network>
  Server "127.0.0.1" "8096"
</Plugin>

InfluxDB

Now that we’re done with Collectd, we have to configure influxdb to pull in the data collectd is generating. Since influxdb isn’t in EPEL, we’ll have to pull this in from their repository. The command below makes it easy.

cat <<EOF > /etc/yum.repos.d/influxdb.repo
[influxdb]
name = InfluxDB Repository - RHEL \$releasever
baseurl = https://repos.influxdata.com/centos/\$releasever/\$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key
EOF

Once that’s done, install the package with yum install influxdb and then it’s ready to configure. There are only a few things that need to happen in the /etc/influxdb/influxdb.conf config file.

In the [http] section of your /etc/influxdb/influxdb.conf, set enabled = true, and bind-address = ":8086". It should look like this:

[http]
  # Determines whether HTTP endpoint is enabled.
enabled = true

  # The bind address used by the HTTP service.
bind-address = ":8086"

Then scroll down to the [[collectd]] section and configure it like the section below:

[[collectd]]
  enabled = true
  bind-address = ":8096"
  database = "collectd"
  typesdb = "/usr/share/collectd"

At this point we can go ahead and start both services to ensure that they’re working properly. To begin, we’ll enable collectd, and ensure that it’s sending data. As with other services, we’ll use systemd for this. In the sample below, you’ll see the commands used, and the output of a running collectd daemon.

[jperrin@monitor ~]$ sudo systemctl enable collectd
[jperrin@monitor ~]$ sudo systemctl start collectd
[jperrin@monitor ~]$ sudo systemctl status collectd
● collectd.service - Collectd statistics daemon
   Loaded: loaded (/usr/lib/systemd/system/collectd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-08-02 11:22:18 PDT; 6min ago
     Docs: man:collectd(1)
           man:collectd.conf(5)
 Main PID: 18366 (collectd)
   CGroup: /system.slice/collectd.service
           └─18366 /usr/sbin/collectd

Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "disk" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "interface" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "load" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "memory" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: plugin_load: plugin "network" successfully loaded.
Aug 2 11:22:18 monitor collectd[18366]: Systemd detected, trying to signal readyness.
Aug 2 11:22:18 monitor collectd[18366]: Initialization complete, entering read-loop.
Aug 2 11:22:18 monitor systemd[1]: Started Collectd statistics daemon.

Now that collectd is working, start up InfluxDB and make sure it’s gathering data from collectd.

[jperrin@monitor ~]$ sudo systemctl enable influxdb
[jperrin@monitor ~]$ sudo systemctl start influxdb
[jperrin@monitor ~]$ sudo systemctl status influxdb
● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/usr/lib/systemd/system/influxdb.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-07-29 18:28:20 PDT; 1 weeks 6 days ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 23459 (influxd)
   CGroup: /system.slice/influxdb.service
           └─23459 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Aug 2 10:35:10 monitor influxd[23459]: [I] 2017-08-12T17:35:10Z SELECT mean(value) FROM collectd.autogen.cpu_value WHERE host =~ /^monitor$/ AND type_instance = 'interrupt' AND time > 417367h GR...) service=query
Aug 2 10:35:10 monitor influxd[23459]: [httpd] 172.20.1.40, 172.20.1.40,::1 - - [12/Aug/2017:10:35:10 -0700] "GET /query?db=collectd&epoch=ms&q=SELECT+mean%28%22value%22%29+FROM+%22load_shortte...ean%28%22value%
Aug 2 10:35:10 monitor influxd[23459]: [I] 2017-08-02T17:35:10Z SELECT mean(value) FROM collectd.autogen.cpu_value WHERE host =~ /^monitor$/ AND type_instance = 'nice' AND time > 417367h GROUP B...) service=query

As we can see in the output above, the service is working, and the data is being collected. From here, the only thing left to do is present it via Grafana.
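
If you want to double-check from the command line rather than trusting the logs, the influx client shipped with the influxdb package can query the collectd database directly. This is just a sanity check; the exact measurement names depend on which collectd plugins you loaded:

$ influx -database collectd -execute 'SHOW MEASUREMENTS'
$ influx -database collectd -execute 'SELECT * FROM load_shortterm ORDER BY time DESC LIMIT 5'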

Grafana

To install Grafana, we’ll create another repository as we did with InfluxDB. Unfortunately the Grafana folks don’t keep release versions separate in the repo, so this looks like we’re using an EL6 repo despite doing this work on EL7.

cat <<EOF > /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://packagecloud.io/grafana/stable/el/6/\$basearch
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
EOF

Now that the repository is in place and enabled, we can install grafana the same way as the rest: yum install grafana. Once this is done, we can start working on the configuration. We’re just going to set an admin username and password, since this is a single-user tutorial instance. I would absolutely encourage you to read the docs if you want to start doing a bit more with grafana.

To accomplish this reasonably basic configuration, simply uncomment the admin_user and admin_password lines in the [security] section of /etc/grafana/grafana.ini, and set your own values. In this instance I’m using admin/admin, because that’s what you do in examples, right?

[security]
# default admin user, created on startup
admin_user = admin

# default admin password, can be changed before first start of grafana,  or in profile settings
admin_password = admin

(Screenshot: collectd data source settings for Grafana)

Now you can start grafana with systemctl start grafana-server, and configure it via the web interface. After you log in for the first time, you’ll be prompted to configure a few things including a data source, and a dashboard. Since we’re doing this all on localhost, you can cheat and use the data source settings from the screenshot above; they’re also summarized just below in case the image doesn’t come through. Don’t worry, we’re nearly there and there’s only a little left to do.
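
The values simply mirror the InfluxDB configuration from earlier in this post; adjust them if you changed ports or the database name, and leave the remaining fields (access mode, credentials) at their defaults since we didn’t enable authentication on the HTTP endpoint:

Name:     collectd
Type:     InfluxDB
URL:      http://localhost:8086
Database: collectd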

Once you have the datasource configured, you’ll be prompted to create your first dashboard. While you can certainly do this, it’s a little intimidating for a first run with grafana. One easy solution to this is to import one of the available templates offered on Grafana’s website. In my case, I opted to use the Host Overview. It provides a nice group of metrics and graphs as a base to use and build from.

Once you’ve gotten everything set up, it’s now down to personal preference and further tinkering. Once again I would very much recommend reading the documentation, because there is a wealth of options and changes I didn’t touch on for this intro.


August 02, 2017

Updated CentOS Vagrant Images Available (v1707.01)

August 02, 2017 10:40 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 31 July 2017 and the following changes:

  • we are again using the same kickstarts for Hyper-V and the other hypervisors
  • you can now login on the serial console (useful if networking is broken)

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.6 on OS X 10.11.6, with VirtualBox 5.1.22.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc
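
If gpg complains that the public key isn't available, import the CentOS 7 signing key into your keyring first. On a CentOS machine the key file below ships with centos-release, so the path is a reasonable assumption; fetch the key from centos.org otherwise:

$ gpg --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
$ gpg --verify sha256sum.txt.asc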

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

July 27, 2017

Using NFS for OpenStack (glance,nova) with selinux

July 27, 2017 10:00 PM

As announced already, I was (among other things) playing with Openstack/RDO and had deployed some small openstack setup in the CentOS Infra. Then I had to look at our existing DevCloud setup. This setup was based on Opennebula running on CentOS 6, and also using Gluster as backend for the VM store. That's when I found out that Gluster isn't a valid option anymore : the Gluster driver was deprecated and has now even been removed from Cinder. Sad, as one advantage of gluster is that you could (you had to !) use libgfapi, so that the qemu-kvm process could talk directly to gluster through libgfapi instead of accessing VM images over locally mounted gluster volumes (please, don't even try to do that through fuse).

So what could be a replacement for Gluster on the openstack side ? I still have some dedicated nodes for storage backend[s], but not enough to even think about Ceph. So it seems my only option was to consider NFS. (Technically speaking the driver was removed from cinder, but I could still have tried to use it for glance and nova, as I have no need for cinder for the DevCloud project; clearly though it would be dangerous for potential upgrades.)

It's not that I'm a fan of storing qcow2 images on top of NFS, but it seems it was my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later. So let's test this before then using NFS over Infiniband (using IPoIB), and so at "good speed" (I still have the infiniband hardware in place that was running for gluster, which will be replaced).

It's easy to mount the nfs-exported dir under /var/lib/glance/images for glance, and then on every compute node to also mount an nfs export under /var/lib/nova/instances/.

That's where you have to check what gets blocked by SELinux, as the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't seem to allow that.

I initially tested services one by one and decided to open a Pull Request for this, but in the meantime I built a custom selinux policy that seems to do the job in my rdo playground.

Here is the .te file that you can compile into a usable .pp policy file (the compile steps follow the listing) :

module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read};

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
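
To turn that .te file into a loadable .pp module, the standard checkmodule/semodule_package/semodule sequence does the job (checkmodule comes from the checkpolicy package; the file names here just follow the module name declared above):

checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te
semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
semodule -i os-local-nfs.pp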

Of course you also need to enable some booleans. Some are already set by openstack-selinux (you can see the enabled booleans by looking at /etc/selinux/targeted/active/booleans.local), but you now also need virt_use_nfs=1 :
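
Setting that boolean persistently is a one-liner, and you can verify it right after:

setsebool -P virt_use_nfs 1
getsebool virt_use_nfs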

Now that it works, I can replay that (all that coming from puppet) on the DevCloud nodes

No! Don’t turn off SELinux!

July 27, 2017 02:24 PM

One of the daily activities of the CentOS Community Lead is searching the Internet looking for new and interesting content about CentOS that we can share on the @CentOSProject Twitter account, or Facebook, Google +, or Reddit. There's quite a bit of content out there, too, since CentOS is very popular.

Unfortunately, some of that content doesn't get shared, because of one simple text search:

"SELinux AND disable"

That search result is indicative of one thing: the author is advocating the deactivation of SELinux, one of the most important security tools any Linux user can have. When that step is outlined, we have to pass on sharing the article, and even recommend readers ignore such advice completely.

What is SELinux?

But why do articles feel the need to outright deactivate SELinux rather than help readers work through any problems they might have? Is SELinux that hard?

Actually, it's really not.

According to Thomas Cameron, Chief Architect for Red Hat, SELinux is a form of mandatory access control. In the past, UNIX and Linux systems have used discretionary access control, where a user will own a file, the user's group will own the file, and everyone else is considered to be other. Users have the discretion to set permissions on their own files, and Linux will not stop them, even if the new permissions might be less than secure, such as running chmod 777 on your home directory.

"[Linux] will absolutely give you a gun, and you know where your foot is," Cameron said back in 2015 at Red Hat Summit. The situation gets even more dangerous when a user has root permissions, but that is the nature of discretionary access control.

With a mandatory access control system like SELinux in place, policies can be set and implemented by administrators that can typically prevent even the most reckless user from giving away the keys to the store. These policies are also fixed so that not even root access can change them. In the example above, if a user had run chmod 777 on their home directory, there should be a policy in place within SELinux to prevent other users or processes from accessing that home directory.

Policies can be super fine-grained, setting access rules for anything from users and files to memory, sockets, and ports.

In distros like CentOS, there are typically two kinds of policies.

  • Targeted. Targeted processes are protected by SELinux, and everything else is unconfined.
  • MLS. Multi-level/multi-category security policies that are complex and often overkill for most organizations.

Targeted SELinux is the operational level most SELinux users are going to work with. There are two important concepts to keep in mind when working with SELinux, Cameron emphasized.

The first is labeling, where files, processes, ports, etc. are all labeled with an SELinux context. For files and directories, these labels are handled as extended attributes within the filesystem itself. For processes, ports, and the rest, labels are managed by the Linux kernel.

Within the SELinux label is a type category (along with SELinux user, role, and level categories). Those latter aspects of the label are really only useful for complex MLS policies. But for targeted policies, type enforcement is key. A process that is running in a given context -- say, httpd_t -- should be allowed to interact with a file that has an httpd_config_t label, for example.

Together, labeling and type enforcement form the core functionality of SELinux. This simplification of SELinux, and the wealth of useful tools in the SELinux ecosystem, have made SELinux a lot easier to manage than in the old days.

So why is it that when SELinux throws an error, so many tutorials and recommendations simply tell you to turn off SELinux enforcement? For Cameron, that's analogous to turning your car's radio up really loud when you hear it making a weird noise.

Instead of turning SELinux off and thus leaving your CentOS system vulnerable to any number of problems, try checking the common problems that come up when working with SELinux. These problems typically include:

  •  Mislabeling. This is the most common type of SELinux error, where something has the wrong label and it needs to be fixed (see the relabeling example after this list).
  • Policy Modification. If SELinux has set a certain policy by default, based on use cases over time, you may have a specific need to change that policy slightly.
  • Policy Bugs. An outright mistake in the policy.
  • An Actual Attack. If this is the case, setenforce 0 would seem a very bad idea.
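
For the mislabeling case, the fix is usually a quick relabel. A minimal sketch, with a web content path used purely as an example:

# check the current SELinux context
ls -Z /var/www/html/index.html
# restore the context the loaded policy expects for that path
restorecon -Rv /var/www/html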

Don't Turn It Off!

If someone tells you not to run SELinux, this is not based on any reason other than supposed convenience or misinformation about SELinux.

The "convenience" argument would seem to be moot, given that a little investigation of SELinux errors using tools like `sealert` reveal verbose and detailed messages on what the problem is and exactly what commands are needed to get the problem solved.

Indeed, Cameron recommends that instead of turning off SELinux altogether, you temporarily run the process with SELinux in permissive mode. When policy violations (known as AVC denials) show up in the SELinux logs, you can either fix the boolean settings within existing policies to allow the new process to run without error, or, if needed, build new policy modules on a test machine, move them to production machines, use `semodule -i` to install them, and set booleans based on what is learned on the test machines.
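
As a rough sketch of that workflow (the httpd_t domain and the module name are just examples, and you should always review what audit2allow generates before loading it):

# put only the troublesome domain in permissive mode, not the whole system
semanage permissive -a httpd_t
# reproduce the problem, then review the AVC denials
ausearch -m avc -ts recent
# build a local policy module from those denials and load it
ausearch -m avc -ts recent | audit2allow -M mylocalpolicy
semodule -i mylocalpolicy.pp
# drop the permissive exception once the policy covers the accesses
semanage permissive -d httpd_t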

This is not 2010 anymore; SELinux on CentOS is not difficult to untangle, and it does not have to be pushed aside in favor of convenience.

You can read more about SELinux in the CentOS wiki. Or see the SELinux coloring book, for a gentler introduction to what it is and how it works.

July 22, 2017

Hands-on with a Minnowboard Dual-e

July 22, 2017 07:00 AM

(Photo: the connected Minnowboard Dual-e)

I recently got my hands on one of the dual ethernet Minnowboards from Adi Engineering. I’ve been on the hunt for a nice low power, small form factor development board for a while, but nearly everything available in my price range comes with a single network card.

This board is a bit of an improvement over previous Minnowboards, with an M.2 slot and dual ethernet, but it lacks the onboard eMMC available on previous versions. Since I had a few spare M.2 SSDs around, that’s not a huge deal for my purposes. Once I’ve gotten through testing this board out, the plan is to build out a demo cluster to bring around to various conferences to showcase what we’re currently doing with the distribution, so you may hear a bit more from me on this in the future.

July 21, 2017

A Fresh Start

July 21, 2017 07:00 AM

For the last few years, I’ve not really cared at all about a semi-permanent slice of home on the internet. I’ve stuck mostly with twitter and only the occasional blog post, usually on someone else’s platform. A few folks like Ben Cotton have tried to reform me. They’ve gotten me to the point where I’m starting to feel a little guilty about being a digital vagrant…and so here we are.

I can’t promise miracles, but I am going to try to write more frequently, and rebuilding some proper website tooling seemed like an interesting way to go about preparing. This time, if I stop maintaining this little website slice, I’ll at least have the decency to feel guilty about it.


July 18, 2017

CentOS Atomic Host 7.1706 Released

July 18, 2017 11:38 PM

An updated version of CentOS Atomic Host (tree version 7.1706), is now available. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.17.2-9.git2760e30.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-32.git88a4867.el7.centos.x86_64
  • etcd-3.1.9-1.el7.x86_64
  • flannel-0.7.1-1.el7.x86_64
  • kernel-3.10.0-514.26.2.el7.x86_64
  • kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64
  • ostree-2017.5-3.el7.x86_64
  • rpm-ostree-client-2017.5-1.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components using rpm-ostree package layering using the command: atomic host install kubernetes-master.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region Image ID
us-east-1 ami-70e8fd66
ap-south-1 ami-c0c4bdaf
eu-west-2 ami-dba8bebf
eu-west-1 ami-42b6593b
ap-northeast-2 ami-7b5e8015
ap-northeast-1 ami-597a9e3f
sa-east-1 ami-95aedaf9
ca-central-1 ami-473e8123
ap-southeast-1 ami-93b425f0
ap-southeast-2 ami-e1332f82
eu-central-1 ami-e95ffd86
us-east-2 ami-1690b173
us-west-1 ami-189fb178
us-west-2 ami-a52a34dc

SHA Sums

f854d6ea3fd63b887d644b1a5642607450826bbb19a5e5863b673936790fb4a4  CentOS-Atomic-Host-7.1706-GenericCloud.qcow2
9e35d7933f5f36f9615dccdde1469fcbf75d00a77b327bdeee3dbcd9fe2dd7ac  CentOS-Atomic-Host-7.1706-GenericCloud.qcow2.gz
836a27ff7f459089796ccd6cf02fcafd0d205935128acbb8f71fb87f4edb6f6e  CentOS-Atomic-Host-7.1706-GenericCloud.qcow2.xz
e15dded673f21e094ecc13d498bf9d3f8cf8653282cd1c83e5d163ce47bc5c4f  CentOS-Atomic-Host-7.1706-Installer.iso
5266a753fa12c957751b5abba68e6145711c73663905cdb30a81cd82bb906457  CentOS-Atomic-Host-7.1706-Vagrant-Libvirt.box
b85c51420de9099f8e1e93f033572f28efbd88edd9d0823c1b9bafa4216210fd  CentOS-Atomic-Host-7.1706-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

July 04, 2017

Updated CentOS Vagrant Images Available (v1706.01)

July 04, 2017 02:22 PM

2017-07-14: We have released version 1706.02 of centos/6, which fixes a regression introduced by the "stack clash" patch (this made Java crash, just like on centos/7). The packages were all updated to 2017-07-14, and include additional fixes.

2017-07-05: We have released version 1706.02 of centos/7, providing RHBA-2017:1674-1, which fixes a regression introduced by the patch for the "stack clash" vulnerability. Existing boxes don't need to be destroyed and recreated: running sudo yum update inside the box will upgrade the kernel if needed. There is no patch for CentOS Linux 6 at this time, but we plan to provide an updated centos/6 image if such a patch is later released.

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 2 July 2017. This release also includes an updated kernel with important security fixes.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.6 on OS X 10.11.6, with VirtualBox 5.1.22.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

June 27, 2017

HPC Student Cluster Competition, Frankfurt, Germany

June 27, 2017 03:51 PM

Last week, as I mentioned in my earlier post, I was in Frankfurt, Germany, for the ISC High Performance Computing conference. The thing that grabbed my attention, more than anything else, was the Student Cluster Competition.  11 teams from around the world - mostly from Universities - were competing to create the fastest (by a variety of measures) student supercomputer. These students have progressed from earlier regional competitions, and are the world's finest young HPC experts. Just being there was an amazing accomplishment. And these young people were obviously thrilled to be there.

Each team had hardware that had been sponsored by major HPC vendors. I talked with several of the teams about this. The UPC Thunderchip team from Barcelona Tech (winner of the Fan Favorite award!) said that their hardware had been donated by several vendors; CoolIT Systems, for example, provided the liquid cooling system that sat atop their rack.

(When I was in college, we had a retired 3B2 that someone had dumpster-dived for us, but I'm not bitter.)

Over the course of the week, these teams were given a variety of data challenges. Some of them, they knew ahead of time and had optimized for. Others were surprise challenges, which they had to optimize for on the fly.

While the jobs were running, the students roamed the show floor, talking with vendors, and, I'm sure, making contacts that will be beneficial in their future careers.

Now, granted, I had a bit of an ulterior motive. I was trying to find out the role that CentOS plays in this space. And, as I mentioned in my earlier post, 8 of the 11 teams were running CentOS. (One - University of Hamburg - was running Fedora. Two - NorthEast/Purdue, and Barcelona Tech - were running Ubuntu.) And the teams that placed first, second, and third in the competition - (First place: Tsinghua University, Beijing. Second place: Centre for High Performance Computing South Africa. Third place: Beihang University, Beijing.) - were also running CentOS. And many of the research organizations I talked to were also running CentOS on their HPC clusters.

I ended up doing interviews with just two of the teams, about their hardware and the tests they had to complete to win the contest. You can see those on my YouTube channel, HERE and HERE.

At the end, while just three teams walked away with the trophies, all of these students had an amazing opportunity. I was so impressed with their professionalism, as well as their brilliance.

And good luck to the teams who have been invited to the upcoming competition in Denver. I hope I'll be able to observe that one, too!

May 15, 2017

Linking Foreman with Zabbix through MQTT

May 15, 2017 10:00 PM

It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time" as I recently needed to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7).

Within the CentOS Infra, we use Foreman as an ENC for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative, and should automatically configure the monitoring solution you have in place in your Infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

What I've seen so far is that you use exported resources on each node, store those in another PuppetDB, and then on the monitoring node reapply all those resources. The problem with such a solution is that it's "expensive" and, when one thinks about it, a little bit strange to export the "knowledge" from Foreman back into another DB, and then let puppet compile a huge catalog on the monitoring side, even if nothing was changed.

One issue is also that in our Zabbix setup we have some nodes that aren't really managed by Foreman/puppet (but by other automation, around Ansible), so I had to use an intermediate step that other tools can also use/abuse for the same reason.

The other reason also is that I admit that I'm a fan of "event driven" configuration change, so my idea was :

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (so asynchronous so that it doesn't slow down the foreman update operation itself)
  • let Zabbix server know that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components :

Here is a small overview of the process :

(Diagram: Foreman → MQTT → Zabbix)

Foreman hooks

Setting up foreman hooks is really easy: just install the pkg itself (tfm-rubygem-foreman_hooks.noarch), read the Documentation, and then create your scripts. There are some examples for Bash and python in the examples directory, but basically you just need to place some scripts at specific place[s]. In my case I wanted to "trigger" an event in the case of a node update (like adding a puppet class, or a variable/parameter change), so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.

One little remark though : if you add a new hook file, don't forget to restart foreman itself, so that it picks up the new hook; otherwise it will be ignored and never run.
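
For illustration only, a minimal update hook could look something like the sketch below. The broker name, port and topic layout are made up for the example, and the way foreman_hooks passes the event name, object name and JSON payload should be double-checked against its documentation:

#!/bin/bash
# /usr/share/foreman/config/hooks/host/managed/update/10_publish_mqtt.sh
# foreman_hooks passes the event and the object name as arguments,
# and the object itself as JSON on stdin
event="$1"
host="$2"
payload="$(cat)"
# publish the change over TLS to the broker; the subscriber side feeds zabbix-cli
mosquitto_pub -h mqtt.example.org -p 8883 \
  --cafile /etc/pki/tls/certs/ca-bundle.crt \
  -t "foreman/host/${event}/${host}" -m "${payload}"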

Mosquitto

Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason why I selected mosquitto is that it's very lightweight (package size is under 200Kb), and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest you read Jan-Piet Mens' dedicated blog post around it. I even admit that I discovered it by attending one of his talks on the topic, back in the Loadays.org days :-)

Zabbix-cli

While one can always talk "Raw API" with Zabbix, I found it useful to use a tool I was already using for various tasks around Zabbix : zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are on CBS.

So I plumbed it into a systemd unit file that subscribes to a specific MQTT topic, parses the needed information (like hostname and zabbix templates to link, unlink, etc) and then updates that in Zabbix itself (from the log output):

[+] 20170516-11:43 :  Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "dev-registry.lon1.centos.org" 
[Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: dev-registry.lon1.centos.org ({"hostid":"10174"})

Cool, so now I don't have to worry about forgetting to tie a zabbix template to a host, as it's now done automatically. No need to say that the deployment of those tools was of course automated and coming from Puppet/foreman :-)

May 07, 2017

Deploying Openstack through puppet on CentOS 7 - a Journey

May 07, 2017 10:00 PM

It's not a secret that I was playing/experimenting with OpenStack in the last days. When I mention OpenStack, I should even say RDO, as it's RPM packaged, built and tested on CentOS infra.

Now that it's time to deploy it in Production, that's when you should have a deeper look at how to proceed and which tool to use. Sure, Packstack can help you set up a quick PoC, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agrees that it's not the kind of tool you want to use for a proper deployment.

So let's have a look at the available options. While I really like/prefer Ansible, we (the CentOS Project) still use puppet as our Configuration Management tool, itself using Foreman as the ENC. So let's see both options.

  • Ansible : Lots of native modules exist to manage an existing/already deployed openstack cloud, but nothing really that can help set one up from scratch. OTOH it's true that OpenStack-Ansible exists, but that will set up openstack components in LXC containers, and I wasn't really comfortable with the whole idea (YMMV)
  • Puppet : Lots of puppet modules exist, so you can reuse/import those into your existing puppet setup; this seems to be the preferred method when discussing with people in #rdo (when not using TripleO, that is)

So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules way". That was the beginning of a journey, where I saw a lot of Yaks to shave too.

It started with me trying to "just" reuse and adapt some existing modules I found. Wrong. And it's even fun because it's one of my mantras : "Don't try to automate what you can't understand from scratch" (and I fully agree with Matthias' thought on this).

So one can just read all the openstack puppet modules, and then try to understand how to assemble them to build a cloud. But I remembered that Packstack itself is puppet driven. So I just decided to have a look at what it was generating and start from that to write my own module from scratch. How to proceed ? Easy : on a VM, just install packstack, generate the answer file, "salt" it to your needs, and generate the manifests :

 yum install -y centos-release-openstack-ocata && yum install openstack-packstack -y
 packstack --gen-answer-file=answers.txt
 vim answers.txt
 packstack --answer-file=answers.txt --dry-run
 * The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests

So now we can have a look at all the generated manifests and start our own from scratch, reimporting all the needed openstack puppet modules, and that's what I did ... but I started to encounter some issues. The first one was that the puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so centos 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le).

One of the openstack components is RabbitMQ, but the openstack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by openstack puppet. The first thing that I had to do was investigate our own modules, as some have the same name but don't come from puppetlabs/forge; so instead of analyzing all those, I moved everything RDO related to a different environment so that it wouldn't conflict with some of our existing modules. Back now to the RabbitMQ one : puppet errored when just trying to use it. First yak to shave : updating the whole CentOS infra puppet to a higher version because of a puppet bug. So let's rebuild puppet for centos 6/7 with a higher version on CBS.

That means of course testing our own modules, on our Test Foreman/puppetmasterd instance first, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.

After the rabbitmq issue was solved, I encountered other ones, this time coming from the openstack puppet modules themselves, as the .rb ruby code used for types/providers was expecting ruby2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So, another yak to shave : migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the same Foreman version as was running on the CentOS 6 node, and then following the procedure, but then, again, time lost testing the update/upgrade and also all other modules, etc. (One can see why I prefer agentless cfgmgmt.)

Finally I found that some of the openstack puppet modules don't touch the whole config. Let me explain why. In OpenStack Ocata, some things are mandatory, like the Placement API, but despite all the classes being applied, I had some issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my puppet code for the user/password used to configure the rabbitmq settings, but it was solved and also applied correctly in /etc/nova/nova.conf (the "transport_url=" setting). But the openstack nova services (all the nova-*.log files, btw) kept saying that the given credentials were refused by rabbitmq, while they worked when tested manually.

After having checked the rabbitmq logs, I saw that despite what was configured in nova.conf, services were still trying to use the wrong user/pass to connect to rabbitmq. Strange, as ::nova::cell_v2::simple_setup was included and was supposed to also use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happened : in fact, even if you modify nova.conf, some settings are stored in the mysql DB, and you can see those (the "wrong" ones in my case) with :

nova-manage cell_v2 list_cells --debug

Something to keep in mind for an initial deployment : if your rabbitmq user/pass needs to be changed, puppet will not complain, but it will only update the conf file, not the settings first imported by puppet into the DB (table nova_api.cell_mapping, if you're interested). After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the puppet module/manifests from puppetmasterd to confirm.

That was quite a journey; it's probably only the beginning, but it's a good start. Now to investigate other options for cinder/glance, as it seems Gluster was deprecated and I'd like to know why.

Hope this helps if you need to bootstrap openstack with puppet !

April 13, 2017

Deploying Openstack PoC on CentOS with linux bridge

April 13, 2017 10:00 PM

I recently needed to start "playing" with OpenStack (working in an existing RDO setup), so I thought it would be a good idea to have my personal playground to deploy from scratch, then break and fix that playground setup.

At first sight, Openstack looks impressive and "over-engineered", as it's complex and has zillions of modules to make it work. But then when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can look strange, but I'll explain why.

First, you should just write your requirements, and then only have a look at the needed openstack components. For my personal playground, I just wanted to have a basic thing that would let me deploy VMs on demand, in the existing network, and so directly using bridge as I want the VMs to be directly integrated into the existing network/subnet.

So, looking at the usual OpenStack architecture diagram, we just need :

  • keystone (needed for the identity service)
  • nova (hypervisor part)
  • neutron (handling the network part)
  • glance (to store the OS images that will be used to create the VMs)

Now that I have my requirements and list of needed components, let's see how to set up my PoC ... The RDO project has good doc for this, including the Quickstart guide. You can follow that guide, and as everything is packaged/built/tested and also delivered through the CentOS mirror network, you can have an RDO/openstack all-in-one setup working in minutes ...

The only issue is that it doesn't fit my needs, as it will set up unneeded components, and the network layout isn't the one I wanted either, as it will be based on openvswitch and other rules (so multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around puppet modules, and it also supports lots of options to configure your PoC.

Let's assume that I wanted a PoC based on openstack-newton, and that my machine has two nics : eth0 for mgmt network and eth1 for VMs network. You don't need to configure the bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but we'll just adapt the packstack command line :

yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack

Let's fix eth1 to ensure that it's started but without any IP on it :

sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1

And now let's call packstack with the required options so that we'll use a basic linux bridge (and so no openvswitch), and we'll instruct it to use eth1 for that mapping :

packstack --allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n 

At this stage we have the openstack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations. We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now :

source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp --allocation-pool=start=192.168.123.1,end=192.168.123.4 --gateway=192.168.123.254 --dns-nameserver=192.168.123.254 othernet 192.168.123.0/24

Before importing image[s] and creating instances, there is one thing left to do : instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside openstack. Also don't forget to let traffic (in/out) pass through the security group (see doc).

Just be sure to have enable_isolated_metadata = True in /etc/neutron/dhcp_agent.ini and then systemctl restart neutron-dhcp-agent; from that point, cloud metadata will be served from dhcp too.
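
If you'd rather not edit the file by hand, crudini (packaged in EPEL and commonly present on RDO nodes) can flip that option; enable_isolated_metadata lives in the [DEFAULT] section :

crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
systemctl restart neutron-dhcp-agent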

From that point you can just follow the quickstart guide to create projects/users, import images, and create instances, and/or do all of this from the cli too (a quick sketch below).
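
As a rough sketch of that last step, using the unified openstack client (the image file and flavor name are placeholders to adapt; create a flavor first if none exist) :

source /root/keystonerc_admin
openstack image create --disk-format qcow2 --container-format bare \
  --file CentOS-7-x86_64-GenericCloud.qcow2 --public centos7
openstack server create --image centos7 --flavor m1.small \
  --network othernet myfirstvm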

One last remark with linuxbridge in an existing network : as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw that when I added other compute nodes in the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance on the network, and not from neutron. As a workaround, you can just have your existing dhcpd instance ignore the mac address range used by openstack, so that your VMs will always get their IP from neutron's dhcp. To do this, there are different options, depending on your local dhcpd implementation :

  • for dnsmasq : dhcp-host=fa:16:3e:*:*:*,ignore (see doc)
  • for ISC dhcpd : "ignore booting" (see doc)

The default mac address range for openstack VMs indeed starts at fa:16:3e:00:00:00 (see /etc/neutron/neutron.conf; that can be changed too).

Those were some of my findings for my openstack PoC/playground. Now that I understand all this a little bit more, I'm currently working on some puppet integration for it, as there are official openstack puppet modules available on git.openstack.org that one can import to deploy/configure openstack (better than using packstack). But there are lots of "yaks to shave" to get to that point, so that's surely for another future blog post.

April 12, 2017

Remotely kicking a CentOS install through a lightweight 1Mb iso image

April 12, 2017 10:00 PM

As a sysadmin, you probably deploy your bare-metal nodes through kickstarts in combination with pxe/dhcp. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all ? Suppose that you have a standalone node that you have to deploy, but there is no PXE/dhcp environment configured (yet).

The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS Minimal iso image onto a usb stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were :

  • access to the ipmi interface of that server
  • the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan

One simple solution would have been to just "attach" the CentOS 7 iso as virtual media, boot the machine, and install from the "locally emulated" cd-rom drive. But that's not something I wanted to do, as the install would have been fed from my local iso image over my "slow" bandwidth, and so slowed down. Instead, I wanted to use the Gbit link from that server to kick the install. So here is how you can do it with ipxe.iso. iPXE is really helpful for such things. The only "issue" was that I had to configure the nic first with a fixed IP (remember ? no dhcpd yet).

So, download the ipxe.iso image, add it as "virtual media" (the transfer will be fast, as that's under 1Mb), and boot the server. Once it boots from the iso image, don't let ipxe run, but instead hit Ctrl-B when you see ipxe starting. The reason is that we don't want to let it start the dhcp discover/offer/request/ack process, as we know that it will not work.

You're then presented with the ipxe shell, so here we go (all parameters are obviously to be adapted, including the net adapter number) :

set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x

ifopen net0
ifstat

From that point you should have network connectivity, so we can "just" chainload the CentOS pxe images and start the install :

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.lang=en_GB inst.keymap=be-latin1 inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x

Then you can just enjoy your CentOS install running entirely from the network, and so at "full steam" ! You can also add inst.ks= directly to have a fully automated setup. It's worth knowing that you can also regenerate/build an updated/customized ipxe.iso with those scripts directly too. That's more or less what we used to build a 1Mb universal installer for CentOS 6 and 7, see https://wiki.centos.org/HowTos/RemoteiPXE, but that one defaults to dhcp.

Hope it helps

January 16, 2017

Enabling SPF record for centos.org

January 16, 2017 11:00 PM

In the last weeks, I noticed that spam activity was back, including against centos.org infra. One of the most used techniques was Email Spoofing (aka "forged from address"). That's how I discovered that we never implemented SPF for centos.org (while some of the Infra team members had it on their personal SMTP servers).

While SPF itself is "just" a TXT dns record in your zone, you have to think twice before implementing it. And publishing such a policy yourself doesn't mean that your SMTP servers are checking SPF either. There are PROS and CONS to SPF, so first read multiple sources/articles to understand how it will impact your server/domain when sending/receiving :
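
For illustration only (this is not the actual centos.org record), a policy that allows the domain's MX hosts plus one additional relay and soft-fails everything else looks like this in a zone file :

example.org.    IN    TXT    "v=spf1 mx ip4:192.0.2.25 ~all"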

sending

The first thing to consider is how people having an alias can send their mails : either from behind their known MX borders (and included in your SPF) or through alternate SMTP servers relaying (after being authorized, of course) through servers listed in your SPF.

One thing to know about SPF is that it breaks plain forwarding and aliases, and it's not about how you set up your SPF record, but how the originator domain does it : for example, if joe@domain.com sends to joe@otherdomain.com, itself an alias to joe2@domain.com, that will break, as the MX for domain.com will see that a mail from domain.com was 'sent' from otherdomain.com and not from an IP listed in its SPF. There are workarounds for this though, namely remailing and SRS.

receiving

So you have an SPF policy in place and so restrict where your mails are sent from ? Great, but SPF only works if the other SMTP servers involved are checking for it, and so you should do the same ! The fun part is that even if you have CentOS 7, and so Postfix 2.10, there is nothing by default that lets you verify SPF, as stated on this page :

Note: Postfix already ships with SPF support, in the form of a plug-in policy daemon. This is the preferred integration model, at least until SPF is mandated by standards. 

So for our postfix setup, we decided to use pypolicyd-spf : lightweight, easy, written in python. The needed packages are already available in EPEL, but we also rebuilt it on CBS. Once installed, configured and integrated with Postfix, you'll start (based on your .conf settings) blocking mail that arrives at your SMTP servers from IPs/servers not listed in the originator domain's SPF policy (if any).
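
The Postfix integration itself is the usual policy-service wiring; a minimal sketch is below. The path to the policyd-spf helper and the service name are assumptions that depend on how the package was built, so check where the files actually landed on your system :

# /etc/postfix/master.cf
policyd-spf  unix  -  n  n  -  0  spawn
    user=nobody argv=/usr/libexec/postfix/policyd-spf

# /etc/postfix/main.cf
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service unix:private/policyd-spf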

If you have issues with our SPF current policy on centos.org, feel free to reach us in #centos-devel on irc.freenode.net to discuss it.

January 13, 2017

create a new github.com repo from the cli

January 13, 2017 11:03 PM

I often get into a state where I’ve started some work, done some commits etc and then realised I don’t have a place to push the code to. Getting it onto github has involved getting the browser out, logging in to github, click click click {pain}. So, here is how you can create a new repo for your login name on github.com without moving away from the shell.


curl -H "X-GitHub-OTP: XXXXX" -u 'LoginName' https://api.github.com/user/repos -d '{"name":"Repo-To-Create"}'

You need to supply your OTP code in place of XXXXX, and of course your own LoginName and finally the Repo-To-Create. Once this call runs, curl will ask for your password and the github API should dump a bunch of details (or tell you that it failed, in which case you need to check the call).

Now the usual `git remote add github git@github.com:LoginName/Repo-To-Create` and you are off.

regards,


Powered by Planet!
Last updated: October 21, 2017 10:30 PM