March 31, 2015

CentOS-7 (1503) is released

March 31, 2015 06:32 PM

Today the CentOS Project announced the immediate availability of CentOS-7 (1503), the second release of CentOS-7.


Find out more about the release announcement here: http://lists.centos.org/pipermail/centos-announce/2015-March/021006.html.

Also don’t forget to read the release notes at the wiki: http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.

March 20, 2015

CentOS-7 / CR repo has been populated

March 20, 2015 02:41 PM

Waiting for the new package set from the next CentOS-7 release? A majority of those packages are now available on every CentOS-7 machine; you can see them by running the following commands:

yum update
yum --enablerepo=cr list updates

It's important that you run a 'yum update' first, since the cr repo definitions are only available in the newer centos-release rpms. Once you are happy with the new content that is going to come via the CR repos, you can apply it with:
yum --enablerepo=cr update
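
If you'd rather not pass --enablerepo on each command, you can also enable the repo permanently. A small sketch, assuming the yum-utils package (which provides yum-config-manager):

yum install yum-utils
yum-config-manager --enable cr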

For more information on what's been pushed to these CR repos, look at the announcement email at http://lists.centos.org/pipermail/centos-announce/2015-March/020980.html

You can get more information on the CR process, the repository, and the content inside it at http://wiki.centos.org/AdditionalResources/Repositories/CR

– KB

March 11, 2015

CentOS-7 next release

March 11, 2015 03:36 PM

Red Hat Enterprise Linux 7.1 was released a few days back, and you can go read the release notes now. They are a great way to find out about the big changes and enhancements in this release.

On the CentOS side of things, we have been working through the last week on getting the sources organised and the builds started up. We are pretty close to finishing the first cycle of builds. The next step is to get the content into QA repos so people can start poking at it. From there, content will make its way into the CR/ repos, and we will go to work on the distribution media (i.e. the ISOs, cloud images, containers, live media, etc.). Once that is done, we have another couple of days for QA around those bits, followed by the wider release.

This release of CentOS-7 is going to be tagged 1503, to indicate the month of the upstream release.

In terms of a timeline, this is where things stand: we hope to start moving content into the CR repos by the 13th/14th of March. That should set us up for releasing the distro around the end of the following week, the 18th to 20th of March. Of course, this is a projected date and might change depending on how much time and work the QA cycles take up, and any challenges we hit in the distro media building stages.

Note that the CR repos contain packages that have not been through as much QA as the content at release time, so while they give people a path to early access to next-release rpms and content, they come with added risk.

Some of the special interest groups in CentOS are keen to get started with the new content, and we are trying to find a way to make that work – but at this time there is no mechanism that bridges the two buildsystems (the CentOS distro one, and the Community Build System used by the SIGs). So for the time being the SIGs will just need to wait a few more days before they can start their trials and builds. For the future, it's something we will try to find a solution for, so that in the next release SIGs can start doing their builds and testing while the distro packages are being built.

– KB

March 10, 2015

SCALE 13x – no talking, all walking, and a great ally skills workshop

March 10, 2015 02:40 PM

For the first time in-I-can’t-remember I didn’t submit a talk to SCALE, so it was with a different personal energy that I attended SCALE 13x on 19 to 22 February this year. Not having a do-or-die public-speaking-scheduled-thing in front of me allowed for a more relaxed state of mind. Yet it was strange to not be one of the speakers this time. Still, all my old SCALE friends made me feel very welcome and accommodated. As usual, it was nice to have my family there, where so many know them as former speakers and regular attendees.

Rather than focus on talking to an audience, this time I spent my energy walking around the expo hall-and-wherever to talk with as many projects and companies as possible. My goal was to get an idea of who uses CentOS Linux, for what purposes, and get ideas of what people need and want from the project. I also provided information on what the Project has been up to especially around SIGs. That activity was fun, informative, and interesting.

Also I spent my share of time at the booth that housed the CentOS Project, RDO/OpenStack, oVirt, and OpenShift Origin. (I can’t wait to see the next iterations of Ryan’s Raspberry Pi 2 mini-cluster demo for OpenShift Origin.) I watched other people, including my wife, play with instruments and music software at the ever-popular Fedora Project booth (winners once again of a favorite booth award). With a small rock concert and 3D printer, it was hard not to notice.

There were two sessions I was drawn to the most. The first was Ruth Suehle‘s keynote on Sunday morning, Makers: The Next Frontier for Open Source. I’ve worked with Ruth a long time, seen her speak multiple times, and seen lots of cool stuff that she’s made over the years, so I knew it would be an excellent talk. She used her great bully pulpit to teach and entreat the audience about the need for maker communities to get some serious clue and help from open source communities.

The other session was a workshop on Friday to learn skills, as a man, to be an ally for women when sexist things happen. This is something I’m interested in: being a better ally for people, including in the face of sexism and sexist behavior. For myself, I’ve begun calling myself a born-again feminist. To me that means I’ve had a later-in-life realization that while I’ve always supported the ideas and topics around feminism, I wasn’t really aware how deeply pervasive sexism is, how blandly I’d looked past it, and that I could be part of the solution. Part of being part of the solution is not being afraid of being a feminist in name and action.

The workshop (described in detail here) was led by Valerie Aurora, who’s gone from kernel hacker to executive director of the Ada Initiative. The Ada Initiative “supports women in open technology and culture …”; thus the workshop was primarily for people working in open technology and open culture. It started with a brief introduction that was useful in many ways, such as reminding us how best to engage with difficult online exchanges (more advanced than ‘don’t feed the trolls’), the reason for needing male allies (hint: it’s about doing something good with the privileged position and power that one has in society), and keeping it all in a useful context by not having the workshop be a place to debate “is there sexism?” Instead we acknowledged there is something broken, it needs fixing, and we here can do something about it. You can watch an introduction and highlights of the workshop in this video of a version Valerie gave to the staff at the Wikimedia Foundation, with closed-caption subtitles available in English.

For the majority of the workshop, we were in small groups (4 to 6 people) to discuss approaches we would take to certain scenarios. One scenario (as I recall them) was, “A woman is standing outside of your group at an event and looks as if she might be interested in joining the discussion. How would you handle this?” Another was, “At a work party someone comments that a co-worker with a large number of children must get a lot of sex.” Then the small groups discussed our approaches, and presented some ideas or questions back to the overall group. And then on to the next scenario.

The discussion/collaboration session was really useful in a number of ways. First, it helped give specific and general ideas of how to handle — and not handle — specific scenarios. Second, it also served to give a crosscut of the different types of situations that do occur, so you can take skills from one scenario more easily into another. Not only was it useful for dealing with sexist situations, it was easy to see the same thinking and skills could be applied to any situation where someone is objectified, made to be an Other, treated as a stereotype, and so forth — thus useful for handling racism, ageism, and more. Third, it was useful to get a chance to practice what to say in response when we witness sexism, partially because it helps us to have something to say immediately rather than being shocked and mute.

The format of the workshop was great. Elements included working in small groups, having a person in each group act as a gatekeeper who makes sure everyone in the group is heard from, and presenting ideas back to the overall group in a discussion format, all the way down to how we introduced ourselves to our small groups. I also appreciated moving across groups at least once; that helped us get fresher perspectives with each scenario.

This is definitely a workshop I’d like to bring to any tech company. All of us can use help and perspective on how to react when someone does something sexist, or when we have a chance to do something about systemic sexism. We can agree that it’s unkind to make people feel uncomfortable, and it’s kind to help people by pushing back against whatever creates that discomfort.

There is something I’ve noticed for most of my life. When talking with my peers — people who are born mainly after the 1960s in a post-feminist-creation era — we are often in agreement about how people should treat each other along the axes of sex, race, gender, and so forth. And while I see in younger generations a huge amount of support for ideas such as “people should be able to legally marry whomever they want”, I still hear a lot of people afraid of the f-word — feminism. It’s as if people are in full agreement with the concepts behind the word, but afraid to use the word itself. This is the other part of my ‘born again’ experience, that I need to embrace the word as well as the concept in order to really align myself correctly, live correctly, and be a good ally of all people.

Building CentOS Linux 7 for ARMv8

March 10, 2015 11:06 AM

As I’d mentioned previously, the fine folks at Applied Micro were kind enough to give us an X-C1 development box to see if it was feasible to build and run CentOS Linux 7. On my first attempt through, I realized I hadn’t been taking decent notes, so I scrapped most of the package work and started over. This time I took a few more notes, and hopefully I’ve documented some things that will help everyone. If you’re interested in discussing or joining the ARMv8 process, feel free to join our ARM development mailing list, or find me on Freenode’s irc network in #centos-devel (nick: Evolution).


Initial Steps

The official Linux architecture name for ARMv8 is aarch64. However, both terms are in circulation, and we use them interchangeably.

My plan for the X-C1 box was to install Fedora, and use mock to get a decent buildroot in order. Because the X-C1 box came with a uboot based image by default, I had to replace it with UEFI first. The directions for doing that can be found on Fedora’s aarch64 wiki page. Once uboot was successfully replaced with UEFI, I installed Fedora 21 and mock. I chose F21 largely because I couldn’t find a Fedora 19 image to work from, but there are Fedora 19 packages available to help bootstrap a C7 mock chroot, which is really what I was after. I copied this repository to a local system, both to avoid hammering the remote machine and to reduce latency.
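
Mirroring that bootstrap repository locally can be as simple as something like the following (the source URL and destination path here are illustrative, not the actual locations):

rsync -avH rsync://mirror.example.org/fedora-19-aarch64/ /srv/repos/c7-aarch64-bootstrap/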


Host Modifications

While I worked on getting the roughly 150 packages needed for a basic mock buildroot built, I kept running into a recurring problem with failed tests for elfutils. Part of the elfutils test suite tests coredumps, and it seems that the buildhost’s systemd-coredump utility was stealing them. After some time digging around with this, I ended up with the following commands:

# echo "kernel.core_pattern=core" > /etc/sysctl.d/50-coredump.conf
# sysctl --system

Once that was done, the elfutils build tests passed without issue.


Package Builds

Initially I attempted to work out a build order that would allow me to build from the ground up, but I quickly realized this was foolish: when starting from the bottom like this, everything has a build dependency on something else. Instead I chose to work toward a clean mock init first, and then work up from that point. Since I only have one board to build against, I’m currently abusing bash and cycling things through mock directly; the idea of using koji or plague seemed a bit overkill with just one build host doing the work. Once everything has been built (or thrown out of the build) against the F19 repository, it will be time to do the build again against itself, to ensure that everything is properly linked and self-hosting.
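
As a rough sketch (the mock config name and file names here are illustrative, not the actual build script), the bash-and-mock cycle looks something like this:

#!/bin/bash
# rebuild a list of SRPMs through mock in order, logging failures
# so the cycle can be corrected and re-run
while read srpm; do
  mock -r centos-7-aarch64 --rebuild "$srpm" || echo "$srpm" >> failed-builds.txt
done < build-order.txt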


It’s worth noting that some of the packages in the F19 repository are actually tagged as F20, and are newer than what exists in CentOS Linux 7. I found it necessary to exclude these newer versions, and often to exclude other packages as the build progressed. While not an exhaustive list, examples are:

  • sqlite-3.8
  • tcl-8.5.14
  • tk
  • Various perl modules


Exclusions:

I mentioned that a few packages have been ejected from the build outright. Some of these are because the build dependencies either aren’t, or can’t be, met. The prime example of this is Ada support, which requires the same cross-compiled (or otherwise bootstrapped) Ada version to build (yay for circular dependencies). Since nothing appears to explicitly depend on the Ada packages like libgnat, I’ve removed them for now. Down the road, if I’m able to properly add support, I will certainly revisit this decision.


Substitutions:

There are a few packages from CentOS Linux 7 that I just won’t be able to use. The primary issue is the kernel: the 3.10 kernel just doesn’t have the aarch64 support that’s needed, so my current plan is to target 3.19 as the kernel to ship for aarch64. This is still speculation, as I’ve been procrastinating on actually building it. I imagine that will happen for the next blog post update :-)

The other problematic package is anaconda. I’m unsure if I can patch the version in 7 to support aarch64, or if I’ll need to ‘forward-port’ and use a more recent version from Fedora to handle the installation. If anyone from the community has insights or suggestions for this, please speak up.

I’ll continue posting updates as the build progresses, or as I find interesting things worth mentioning.


March 03, 2015

CentOS Linux 7 and Arm

March 03, 2015 11:54 AM

ARMv7

With the growing list of easily accessible ARM hardware like the Raspberry Pi 2 and the ODROID-C1, several community efforts have sprouted, working out the details of getting CentOS-7 built and available for the new boards. One of our UK-based community members has made the most progress so far, posting his build process on the CentOS arm development list. As he progresses, he’s also been keeping some fairly detailed notes about the changes he’s had to make. Once he’s been able to produce an installable (or extractable) image, we’ll see about incorporating and maintaining his changes as branches in git. With a bit more work, we should be able to start rolling out a fully community-built and supported 32bit arm build of CentOS-7.

ARMv8

Far from stopping there, work is underway on the 64bit ARM front as well. The fine folks at Applied Micro were kind enough to lend us two of their X-C1 ARMv8 development kits. After a bit of work to replace the default uboot with UEFI, and a few early missteps, the work on an aarch64 port of CentOS-7 is progressing along rather nicely as well. I’ll work on documenting the build system, the steps to duplicate it for anyone who has the hardware and wants to participate, and the potential changes required.


If you’d like to get involved or want to follow the progress of the work, please join our arm development list, or join us in #centos-devel on freenode irc.

February 20, 2015

Pulp Project : Managing RPM repositories on CentOS – From CentOS Dojo Brussels 2015

February 20, 2015 03:00 PM

At the CentOS Dojo Brussels 2015, Julien Pivotto presented an introduction to the Pulp Project and how it makes life easier for people needing to manage rpm repositories, including hosting your own content and syncing down upstream distro content.

In this session he covers:

  • What is Pulp?
  • How does it work?
  • Mirror management
  • Repository workflows
  • RPM deployment and release management

This video is now online at https://www.youtube.com/video/IkhCvNXWMC4

You can get the slides from this session at the event page on http://www.slideshare.net/roidelapluie/an-introduction-to-the-pulp-project

Regards

Introduction to RPM packaging – From CentOS Dojo Brussels 2015

February 20, 2015 11:27 AM

At the CentOS Dojo Brussels 2015, Brian Stinson presented an introduction to RPM packaging, focused on sysadmins looking to take the next step into packaging their own apps as well as dependencies.

In this session he covers:

  • Short overview of the RPM format
  • Setting up an rpmbuild environment
  • Building packages with rpmbuild
  • Building packages with Mock
  • Where to look for further reading

This video is now online at https://www.youtube.com/video/CTTbu_q2xiQ

You can get the slides from this session at the event page on http://wiki.centos.org/Events/Dojo/Brussels2015

Regards

February 05, 2015

Guide to Software Collections – From CentOS Dojo Brussels 2015

February 05, 2015 11:38 AM

At the CentOS Dojo Brussels 2015, Honza Horak presented on Software Collections: what they are, how they work, and how they are implemented. During this 42-minute session he also ran through how people can create their own collections and how they can extend existing ones.

Software Collections are a way to deliver parallel-installable rpm trees that might contain extensions to software already on the machine, or might deliver a new version of a component (e.g. hosting multiple versions of Python or Ruby on the same machine at the same time, still manageable via rpm tools).
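
As a hypothetical example (the collection name here is illustrative), using a collection from the shell looks roughly like this:

# install a collection alongside the system python
yum install python33
# run a command inside the collection's environment
scl enable python33 'python --version'
# outside of 'scl enable', the system python is still the default
python --version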

This video is now online at https://www.youtube.com/video/8TmK2g9amj4

You can get the slides from this session at the event page on http://wiki.centos.org/Events/Dojo/Brussels2015

Regards

January 23, 2015

More builders available for Koji/CBS

January 23, 2015 04:54 PM

As you probably know, the CentOS Project now hosts the CBS (aka Community Build System) effort, which is used to build all packages for the CentOS SIGs.

There was already one physical node dedicated to Koji Web and Koji Hub, and another node dedicated to the build threads (koji-builder). As we now have more people building packages, we thought it was time to add more builders to the mix, and here we go: http://cbs.centos.org/koji/hosts now lists two additional machines dedicated to Koji/CBS.

Those added nodes each have 2 * Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (8 cores/socket, with Hyper-Threading activated) and 32GB of RAM. Let's see how the SIG members keep those builders busy, throwing a bunch of interesting packages at the CentOS community :-) . Have a nice week-end!

January 19, 2015

I do terrible things sometimes

January 19, 2015 12:00 AM

Abandon hope…

This is not a how-to, but more of a detailed confession about a terrible thing I’ve done in the last few days. The basic concept for this crime against humanity came during a user’s group meeting where several companies expressed overwhelming interest in containers, but were pinned to older, unsupported versions of CentOS due to 3rd party software constraints. They asked if it would be possible to run a CentOS-4 based container instead of a full VM. While migrating from CentOS-4 to a more recent (and supported) version would obviously be preferable, there are some benefits to migrating a CentOS-4 system to a Docker container. I played around with this off and on over the weekend, and finally came up with something fairly functional. I immediately destroyed it so there could be no evidence linking me to this activity.

The basics of how I accomplished this are listed below. They are terrible. Please do NOT follow them.

Disable selinux on your container host.

Look, I told you this was terrible. Dan Walsh and Vaclav Pavlin of Red Hat were kind enough to provide us patches for SELinux in CentOS-6, and then again for CentOS-5. I’m not going to repay their kindness by dragging them into this mess too. Dan is a really nice guy, please don’t make him cry.

The reason we disable selinux is explained on the CentOS-Devel mailing list. Since there’s no patch for CentOS-4 containers, selinux has to be disabled on the host for things to work properly.
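
For the record, turning it off amounts to the following (again: terrible, please don’t):

# switch to permissive mode immediately
setenforce 0
# keep selinux disabled across reboots
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config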

Build a minimal vm.

Initially I tried running a slightly modified version of our CentOS-5 kickstart file for Docker through the usual build process. This mostly worked; however, it was somewhat unreliable. The build process did not always exit cleanly, often leaving behind broken loop objects I couldn’t unmount. The resulting container worked, but had no functional rpmdb. The conversion trick used with CentOS-5 didn’t work properly with CentOS-4, even accounting for version differences.

I finally decided to build a normal vm image using virt-install. You could use virt-manager to do this part; it really doesn’t matter. There have been a number of functional improvements to anaconda over the years, and going back to the CentOS-4 installer hammers this home. I had to adjust my kickstart to use the old format, removing several more modern options I’d taken for granted. I ended up with the following. For this install, I made sure to install to an image file for easy extraction later on.

install
url --url=http://vault.centos.org/4.9/os/x86_64/
lang en_US.UTF-8
network --device=eth0 --bootproto=dhcp
rootpw --iscrypted $1$UKLtvLuY$kka6S665oCFmU7ivSDZzU.
authconfig --enableshadow
selinux --disabled
timezone --utc UTC

clearpart --all --initlabel
part / --fstype ext3 --size=1024 --grow
reboot
%packages
@Base

%post
dd if=/dev/urandom count=50 | md5sum | passwd --stdin root
passwd -l root

rpm -q grub redhat-logos
rm -rf /boot
rm -rf /etc/ld.so.cache

Extract to tarball

Because we’re wiping out /boot and locking the root user, this image really won’t be useful for anything except converting to a container. The next step is to extract the contents into a smaller archive we can use to build our container. To do this, we’ll use the virt-tar-out command. This image is not going to be as small as the regular CentOS containers in the Docker index. This is partly due to rpm dependencies, and partly due to how the image is created. Honestly, if you’re doing this, a few megs of wasted disk space is the least of your worries.

virt-tar-out -a /path/to/centos-4.img / - | xz --best > /path/to/centos-4-docker.tar.xz

Building the Container

At this point we have enough that we could actually just do a cat centos-4-docker.tar.xz | docker import - centos4, but there are still a few cleanup items that need to be addressed. From here, a basic Dockerfile that provides a few changes is in order. Since CentOS-4 is End-of-Life and no longer served via the mirrors, the contents of /etc/yum.repos.d/ need to be modified to point to your local mirror, as does /etc/sysconfig/rhn/sources if you intend to still use the up2date utility. To do this, copy your existing yum repo files and sources from your working CentOS-4 systems into the directory with the container tarball, and use a Dockerfile similar to the one below.

FROM scratch
MAINTAINER you <your@emailaddress.com>
ADD centos-4-docker.tar.xz /
ADD CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo
ADD sources /etc/sysconfig/rhn/sources
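
For reference, the CentOS-Base.repo being copied in might look something like this sketch, pointing at the Vault archive since the regular mirrors no longer serve CentOS-4 (adjust the baseurl to your local mirror; gpgcheck is off here purely for brevity):

[base]
name=CentOS-4.9 - Base
baseurl=http://vault.centos.org/4.9/os/$basearch/
enabled=1
gpgcheck=0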

DELETE IT ALL

All that’s left now is to run docker’s build command (sketched just after the list below), and you have successfully built a CentOS-4 base container to use for migration purposes, or just to make your inner sysadmin cry. Either way. This is completely unsupported. If you’ve treated this as a how-to and followed the steps, I would recommend the following actions:

  1. Having a long think about the decisions in your life that led you to this moment
  2. Drinking
  3. Sobbing uncontrollably
  4. Apologizing to everyone around you.
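
And the build command itself, for the record (the centos4 tag name is arbitrary):

docker build -t centos4 .
docker run --rm -it centos4 /bin/bash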

January 16, 2015

Docker: to keep those old apps ticking

January 16, 2015 07:40 PM

Got an old, long-running app on CentOS-4 that you want to retain? Running a full-blown VM not worthwhile? Well, Docker can help, as Jim found out earlier today.

Given that CentOS-4 is end of life now, we are not going to push CentOS-4 images to the official CentOS collection on the Docker registry, but if folks want this, please ask and we can publish a short howto on what's involved in building your own.

Of course, always consider migrating the app to a newer, supported platform like CentOS-6 or 7 before trying this sort of workaround.

Docker is available out of the box, by default, on all CentOS-7/x86_64 installs.
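
A quick sanity check on a fresh CentOS-7 box would look something like this (the centos:7 image comes from the Docker Hub):

yum install docker
systemctl start docker
docker run --rm centos:7 cat /etc/centos-release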

- KB

January 15, 2015

Libguestfs preview for EL 7.1

January 15, 2015 11:33 AM

Want to see what's coming with libguestfs in EL 7.1? Richard Jones has set up a preview repo at http://people.redhat.com/~rjones/libguestfs-RHEL-7.1-preview that contains all the bits you need.

To set this up:

# cat > /etc/yum.repos.d/libguestfs-RHEL-7.1-preview.repo <<EOF
[libguestfs-RHEL-7.1-preview]
name=libguestfs RHEL 7.1 preview - x86_64
baseurl=http://people.redhat.com/~rjones/libguestfs-RHEL-7.1-preview/
enabled=1
gpgcheck=0
EOF

You should now be able to run a 'yum install libguestfs-tools'. There are some other interesting things in the repo as well, so feel free to poke around (including an updated virt-v2v). Remember to send testing feedback to http://www.redhat.com/mailman/listinfo/libguestfs
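
A quick way to verify the preview bits end-to-end is the self-test utility that ships alongside the tools:

# yum install libguestfs-tools
# libguestfs-test-tool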

- KB

January 14, 2015

CentOS-India meetup group announced!

January 14, 2015 02:13 PM

Hi,

There is a meetup group for CentOS users in India at http://www.meetup.com/CentOS-India and they are looking for people to come join, as well as people to help run local meetings in different parts of the country.

So if you are based in India, and use or would like to use CentOS Linux, go ahead and join up.

- KB

January 12, 2015

Quickly provisioning nodes in a SeaMicro chassis with Ansible

January 12, 2015 02:19 PM

Recently I had to quickly test and deploy CentOS on 128 physical nodes, both to test the hardware and to verify that all currently "supported" CentOS releases could be installed quickly when needed. The interesting bit is that it was a completely new infra, without any traditional deployment setup in place, so obviously, as sysadmins, we directly think about pxe/kickstart, which is trivial to set up. That was the first time I had to "play" with SeaMicro devices/chassis though (the SeaMicro 15K fabric chassis, to be precise), so I first had to understand how they work. One thing to note is that those SeaMicro chassis don't provide a remote VGA/KVM feature (but who cares, as we'll automate the whole thing, right?); instead they provide either cli (ssh) or rest api access to the management interface, so that you can quickly reset/reconfigure a node, change vlan assignments, and so on.

It's not a secret that I like to use Ansible for ad-hoc tasks, and I thought that it would (again) be a good tool for that quick task. If you have used Ansible already, you know that you have to declare nodes and variables (the variables aren't required, but are really useful) in the inventory (if you don't gather inventory from an external source). To configure my pxe setup (and so be able to reconfigure it when needed) I obviously needed to get the mac addresses of all 64 nodes in each chassis, decide that hostnames would be n${slot-number}., etc. (and yes, in SeaMicro terms slot 1 = 0/0, slot 2 = 1/0, and so on...)

The following quick-and-dirty bash script lets you do that in two seconds (ssh into the chassis, gather information, and fill in some variables in my ansible host_vars/${hostname} files):

#!/bin/bash
# ssh into the chassis CLI, list the server nodes, and generate one
# ansible host_vars file per node (slot 0/0 becomes n1, slot 1/0 becomes n2, ...)
ssh admin@hufty.ci.centos.org "enable ; show server summary | include Intel ; quit" | while read line ; do
  seamicrosrvid=$(echo $line | awk '{print $1}')
  slot=$(echo $seamicrosrvid | cut -f 1 -d '/')
  id=$(( $slot + 1 )) ; ip=$id ; mac=$(echo $line | awk '{print $3}')
  echo -e "name: n${id}.hufty.ci.centos.org \nseamicro_chassis: hufty \nseamicro_srvid: $seamicrosrvid \nmac_address: $mac \nip: 172.19.3.$ip \ngateway: 172.19.3.254 \nnetmask: 255.255.252.0 \nnameserver: 172.19.0.12 \ncentos_dist: 6" > inventory/n${id}.hufty.ci.centos.org
done

Nice, so we have all the ~/ansible/hosts/host_vars/${inventory_hostname} files in one go (I'll let you add ${inventory_hostname} to the ~/ansible/hosts/hosts.cfg file with the same script, modified to your needs).
For the next step, we assume that we already have dnsmasq installed on the "head" node, and that we also have httpd set up to serve the kickstart files to the nodes during installation.
So our basic ansible playbook looks like this:

---
- hosts: ci-nodes
  sudo: True
  gather_facts: False

  vars:
    deploy_node: admin.ci.centos.org
    seamicro_user_login: admin
    seamicro_user_pass: obviously-hidden-and-changed
    seamicro_reset_body:
      action: reset
      using-pxe: "true"
      username: "{{ seamicro_user_login }}"
      password: "{{ seamicro_user_pass }}"

  tasks:
    - name: Generate kickstart file[s] for Seamicro node[s]
      template: src=../templates/kickstarts/ci-centos-{{ centos_dist }}-ks.j2 dest=/var/www/html/ks/{{ inventory_hostname }}-ks.cfg mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Adding the entry in DNS (dnsmasq)
      lineinfile: dest=/etc/hosts regexp="^{{ ip }} {{ inventory_hostname }}" line="{{ ip }} {{ inventory_hostname }}"
      delegate_to: "{{ deploy_node }}"
      notify: reload_dnsmasq

    - name: Adding the DHCP entry in dnsmasq
      template: src=../templates/dnsmasq-dhcp.j2 dest=/etc/dnsmasq.d/{{ inventory_hostname }}.conf
      delegate_to: "{{ deploy_node }}"
      register: dhcpdnsmasq

    - name: Reloading dnsmasq configuration
      service: name=dnsmasq state=restarted
      run_once: true
      when: dhcpdnsmasq|changed
      delegate_to: "{{ deploy_node }}"

    - name: Generating the tftp configuration boot file
      template: src=../templates/pxeboot-ci dest=/var/lib/tftpboot/pxelinux.cfg/01-{{ mac_address | lower | replace(":","-") }} mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Resetting the Seamicro node[s]
      uri: url=https://{{ seamicro_chassis }}.ci.centos.org/v2.0/server/{{ seamicro_srvid }}
           method=POST
           HEADER_Content-Type="application/json"
           body='{{ seamicro_reset_body | to_json }}'
           timeout=60
      delegate_to: "{{ deploy_node }}"

    - name: Waiting for Seamicro node[s] to be available through ssh ...
      action: wait_for port=22 host={{ inventory_hostname }} timeout=1200
      delegate_to: "{{ deploy_node }}"

  handlers:
    - name: reload_dnsmasq
      service: name=dnsmasq state=reloaded

The first thing to notice is that you can use Ansible to provision nodes that aren't already running: people think that Ansible is only for interacting with already-provisioned, running nodes, but by providing useful information in the inventory, and by delegating actions, we can already start "managing" those yet-to-come nodes.
All the templates used in that playbook are really basic ones, nothing "rocket science". For example, the only difference in the kickstart.j2 template is that we inject ansible variables (for network and storage):

network  --bootproto=static --device=eth0 --gateway={{ gateway }} --ip={{ ip }} --nameserver={{ nameserver }} --netmask={{ netmask }} --ipv6=auto --activate
network  --hostname={{ inventory_hostname }}
<snip>
part /boot --fstype="ext4" --ondisk=sda --size=500
part pv.14 --fstype="lvmpv" --ondisk=sda --size=10000 --grow
volgroup vg_{{ inventory_hostname_short }} --pesize=4096 pv.14
logvol /home  --fstype="xfs" --size=2412 --name=home --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=100000
logvol /  --fstype="xfs" --size=8200 --name=root --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=1000000
logvol swap  --fstype="swap" --size=2136 --name=swap --vgname=vg_{{ inventory_hostname_short }}
<snip>

The dhcp step isn't mandatory, but at least in that subnet we only allow dhcp for "already known" mac addresses, retrieved from the ansible inventory (and previously fetched directly from the SeaMicro chassis):

# {{ name }} ip assignement
dhcp-host={{ mac_address }},{{ ip }}

Same thing for the pxelinux tftp config file:

SERIAL 0 9600
DEFAULT text
PROMPT 0
TIMEOUT 50
TOTALTIMEOUT 6000
ONTIMEOUT {{ inventory_hostname }}-deploy

LABEL local
        MENU LABEL (local)
        MENU DEFAULT
        LOCALBOOT 0

LABEL {{ inventory_hostname}}-deploy
        kernel CentOS/{{ centos_dist }}/{{ centos_arch}}/vmlinuz
        MENU LABEL CentOS {{ centos_dist }} {{ centos_arch }}- CI Kickstart for {{ inventory_hostname }}
        {% if centos_dist == 7 -%}
        append initrd=CentOS/7/{{ centos_arch }}/initrd.img net.ifnames=0 biosdevname=0 ip=eth0:dhcp inst.ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8
        {% else -%}
        append initrd=CentOS/{{ centos_dist }}/{{ centos_arch }}/initrd.img ksdevice=eth0 ip=dhcp ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8
        {% endif %}

The interesting part is the one I needed to spend more time on: as said, it was the first time I had to play with SeaMicro hardware, so I had to dive into the documentation (which I *always* do, RTFM FTW!) and understand how to use their Rest API, but once done, it was a breeze. Ansible doesn't provide a native SeaMicro module by default, but that's why Rest exists, right? Thankfully, Ansible has a native URI module, which we use here. The only thing I had to spend more time on was understanding how to properly construct the body, but declaring it in the yaml file as a variable/list and then converting it on the fly to json (with the magical body='{{ seamicro_reset_body | to_json }}') was the way to go, and it is so self-explanatory when read now.

And here we go: calling that ansible playbook, and suddenly 128 physical machines were being installed (and reinstalled with different CentOS versions - 5, 6, 7 - and arches - i386, x86_64).
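
For completeness, the invocation itself is the standard one (the playbook filename and inventory path here are illustrative):

ansible-playbook -i ~/ansible/hosts deploy-ci-nodes.yml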

Hope this helps if you have to interact with SeaMicro chassis from within an ansible playbook too.

January 06, 2015

The EPEL and CentOS Project relationship

January 06, 2015 11:15 AM

On Saturday 31st Jan, after the close of Fosdem day 1, I am working to bring together a group of people who all care about the EPEL and CentOS Project relationship, to try and work out how best to move things forward. Key points to address are how SIGs and other efforts in CentOS can consume, rely on, feed back to, and message around content in EPEL, and similarly how CentOS efforts can feed back into EPEL components – the overall aim being to work out a plan and a way for the two buildsystems to talk to each other (the CentOS community one and the EPEL one), and to set some level of expectations across the project efforts.

Everyone is welcome to come along for the conversation, but it will be most productive for people who are CentOS SIG members, EPEL contributors/administrators, and users who rely on EPEL content on their CentOS Linux installs.

I’ve started a thread to set up some of the basic topics on the centos-devel list; you can track it here. And there is a list of people who want to make it to the conversation at the bottom of the CentOS Fosdem 2015 planning page. If you are able to make it, let me know and I will add your name to the list. Remember, this is a post-Fosdem day 1 thing, in the early evening of the 31st Jan 2015.

See you there!

December 15, 2014

CentOS in OpenShift Commons

December 15, 2014 12:24 PM

Happy to announce that the CentOS Project is now a part of the OpenShift Commons initiative.

In their own words :

The Commons builds connections and collaboration across OpenShift communities, projects and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.

A significant amount of OpenShift development and community delivery work is now done on CentOS Linux, and I am hoping that this new association between the two projects allows us to further build on that platform.

- KB

Reproducible CentOS containers

December 15, 2014 12:00 AM

Around 9 months ago, I took over the creation of the official CentOS images contained in the Docker index. Prior to that, the index officially had one lonely and outdated CentOS 6.4 image that we at the CentOS Project were unaware of. Docker sort of exploded into our view and we spent a bit of time playing catch-up, trying to get things done the way we as a distribution would like to see them done. One of these actions was to do away with minor-versioned containers.

We chose to drop the minor version from our containers, and petitioned the Docker registry maintainer to remove the existing one (not built by us). The reasoning for this is fairly straightforward: a large percentage of users never updated to the current containers, and so most of the bugs submitted to us were for older versions. Even recently Dan Walsh has had to post reminders to run updates. By only having a centos6 or centos7 image, we tried to remove the minor-version mindset. Since the containers themselves are a svelte 132 packages, they amount to little more than the dependencies needed for bash and yum. In theory, the differences between a 6.5 image and a 6.6 image should be entirely negligible. In fact, by default any package installed on a 6.5 container would come from the 6.6 repositories.

That said, the number one request since we stopped shipping point releases is… you guessed it: point releases. While I continue to maintain that our position on updates is the proper one, real-world usage often runs counter to ivory-tower thinking. A number of valid use cases were brought up in the course of discussions with community members asking for containers to be tagged with minor point releases, and I have agreed to reconsider tagging minor-version images.

Beginning with the January monthly rollout, I will add minor tags for the 5 and 6 builds in the Docker index. The minor tags will be 5.11 and 6.6. For 7 builds, the tag will correspond to the date-tagged build name, the same as the installation media. These tags will be built from, and correspond to, the respective CentOS installation media, and so will not contain updates. This means that if you are using the minor tags, you could be exposing your containers to exploits that have been patched in the rolling updates. The latest, 5, 6, and 7 tags will continue to point to the rolling monthly releases, which I would highly recommend using.
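
In practice that means both of the following will work after the January rollout, with very different update semantics (tag names per the plan above):

docker pull centos:6.6   # frozen at the 6.6 installation media, no updates
docker pull centos:6     # rolling monthly build with updates (recommended)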

December 02, 2014

FreeIPA 4.1.2 and CentOS

December 02, 2014 06:04 PM

The FreeIPA community is looking for your help and feedback!

The FreeIPA development team is excited to share with you a new version of the FreeIPA server, 4.1.2, running in a container on top of CentOS. It is the first time a FreeIPA upstream release is available in the CentOS docker index. It is a preview of the features that will eventually make their way into the main CentOS distribution. This version of FreeIPA showcases multiple major new features, as well as improvements to existing components, above what is currently available in CentOS 7.0.


In order to use this docker container, please run:
docker pull centos/freeipa

Then follow the guide/documentation available at https://registry.hub.docker.com/u/centos/freeipa/


These features include:

– Backup and Restore
Ability to back up server data and restore an instance in the case of disaster
http://www.freeipa.org/page/V3/Backup_and_Restore

– CA Certificate Management Utility
A tool to change IPA chaining or rotate the CA certificate on an already-installed server
http://www.freeipa.org/page/V4/CA_certificate_renewal

– ID Views
Ability to store POSIX data and SSH keys in IPA for users belonging to a trusted Active Directory domain. Alternative POSIX data and SSH keys can also be stored for regular IPA users, making it possible to serve different POSIX data to different clients (requires SSSD 1.12.3 or later). This is useful in migration scenarios where multiple identity stores (local files, NIS domains, legacy LDAP servers, ..) with duplicated identities and inconsistent POSIX attributes are being consolidated, but the old values need to be retained for some clients.
http://www.freeipa.org/page/V4/Migrating_existing_environments_to_Trust

Note: The solution requires the latest SSSD bits available in the Copr repo: https://copr.fedoraproject.org/coprs/mkosek/freeipa/

– DNSSEC
With this version we are introducing, for the first time, the ability to manage DNSSEC signatures on DNS data. This feature will be available in the community version only, and will not be included in CentOS 7.1.
http://www.freeipa.org/page/Releases/4.1.0#DNSSEC_Support

There are also significant improvements in UI, permissions, keytab management, automatic membership, and SUDO rules handling.
More information can be found here:
http://www.freeipa.org/page/V4/Automember_rebuild_membership
http://www.freeipa.org/page/V4/Forward_zones
http://www.freeipa.org/page/V4/Keytab_Retrieval
http://www.freeipa.org/page/V4/Keytab_Retrieval_Management
http://www.freeipa.org/page/V4/PatternFly_Adoption

The biggest and most interesting feature of FreeIPA 4.1.2 is support for two-factor authentication, using HOTP/TOTP compatible software tokens like FreeOTP (an open source alternative compatible with Google Authenticator) and hardware tokens like Yubikeys. This feature allows Kerberos and LDAP clients of a FreeIPA server to authenticate using the normal account password as the first factor and an OTP token as a second factor. For those environments where a 2FA solution is already in place, FreeIPA can act as a proxy via RADIUS. More about this feature can be read here:
http://www.freeipa.org/page/V4/OTP
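
As a rough sketch of what this looks like from the CLI (assuming an enrolled setup and an admin Kerberos ticket; the user name is illustrative, and 'ipa help otptoken-add' has the full option set):

kinit admin
ipa otptoken-add --type=totp --owner=alice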

If you want to see this feature in CentOS 7.1 proper we need your help!
Please give it a try and provide feedback. We really, really need it!

Please use freeipa-users@redhat.com if you have any questions.
If you notice any issues or want to file an RFE you can do it here:
https://fedorahosted.org/freeipa/ (requires a Fedora account).
You can also find us on irc.freenode.net on #freeipa.

November 26, 2014

And now a few words from Paul C.

November 26, 2014 08:13 PM

Although some people in open source communities might not be aware of him, Paul Cormier holds a singular position in the open source world. This hinges on the detail that Red Hat is the longest-standing and most successful company at promoting the growth of free/open source software, and especially the acceptance of that software in the enterprise (large businesses). Paul is a Red Hat EVP, but he is also the President of Products and Technologies, meaning he is ultimately accountable for what Red Hat does in creating products the open source way. Paul has held this position for essentially the last dozen years, and so has overseen everything in Red Hat from the creation of Fedora Linux to the rise of cloud computing that Red Hat is an intimate part of.

In other words, when Paul C. speaks — keynote or in-person — he is someone really worth paying close attention to.

In this post on Red Hat’s open source community website, “One Year Later: Paul Cormier on Red Hat and the CentOS Project”, I provide some introduction and background around a video interview Paul did with ServerWatch about the Red Hat and CentOS Project relationship.

(Speaking of ‘intimately’, that explains my relationship to Red Hat and the CentOS Project — I spent all of 2013 architecting and delivering on making CentOS Linux the third leg in the stool of Red Hat platform technologies. When I say in the “One Year Later…” article about “making sure (Paul C. is) happy and excited about Red Hat joining forces with the CentOS Project,” that responsibility is largely mine.)

November 24, 2014

Switching from Ethernet to Infiniband for Gluster access (or why we had to …)

November 24, 2014 10:37 AM

As explained in my previous (small) blog post, I had to migrate a Gluster setup we have within CentOS.org infra. As said in that previous blog post too, Gluster is really easy to install, and sometimes it can even "smell" too easy to be true. One thing to keep in mind when dealing with Gluster is that it's a "file-level" storage solution, so don't try to compare it with "block-level" solutions (typically a NAS vs SAN comparison, even if "SAN" itself is the wrong term for such a discussion, as the SAN is what's *between* your nodes and the storage itself, just a reminder).

Within CentOS.org infra, we have a multi-node Gluster setup that we use for multiple things at the same time. The Gluster volumes are used to store some files, but also to host (on different gluster volumes with different settings/ACLs) KVM virtual disks (qcow2). People knowing me will say: "hey, but for performance reasons, it's faster to just dedicate, for example, a partition or a Logical Volume instead of using qcow2 images sitting on top of a filesystem for Virtual Machines, right?", and that's true. But with our limited number of machines, and a need to "move" Virtual Machines without a proper shared storage solution (and because in our setup those physical nodes *are* both glusterd servers and hypervisors), Gluster was an easy-to-use solution for that.

It was working, but not that fast ... I then heard about the fact that (obviously) accessing those qcow2 image files through fuse wasn't efficient at all, but that Gluster has libgfapi, which can be used to "talk" directly to the gluster daemons, completely bypassing the need to mount your gluster volumes locally through fuse. Thankfully, qemu-kvm from CentOS 6 is built against libgfapi, so it can use that directly (and that's the reason why it's automatically installed when you install the KVM hypervisor components). Results? Better, but still not what we were expecting ...

When trying to find the issue, I discussed with some folks in the #gluster irc channel (irc.freenode.net), and suddenly I understood something that is *not* so obvious about Gluster in distributed+replicated mode: people having dealt with storage solutions at the hardware level (or people using DRBD, which I did too in the past, and which I also liked a lot) expect the replication to happen automatically on the storage/server side, but that's not true for Gluster. In fact, glusterd just exposes metadata to gluster clients, which then know where to read/write (being "redirected" to the correct gluster nodes). That means that replication happens on the *client* side: in replicated mode, the client itself writes the same data twice, once to each server ...

So back to our example: our nodes have 2 * 1Gb/s Ethernet cards, one being a bridge used by the Virtual Machines, and the other one "dedicated" to gluster, and each node is itself both a glusterd server and a gluster client. I let you think about the max performance we could get for a write operation: 1Gbit/s (~125MB/s), divided by two because of the replication, so in theory ~62MB/s (then remove tcp/gluster overhead and that drops to ~55MB/s).

How to solve that? Well, I tested that theory and confirmed it directly: in distributed-only mode, write performance automatically doubled. So yes, running Gluster on Gigabit Ethernet was suddenly the bottleneck. Upgrading to 10Gb Ethernet wasn't something we could do, but, thanks to Justin Clift (and some other Gluster folks), we were able to find some "second hand" Infiniband hardware (10Gbps HCAs and a switch).

While Gluster has native/builtin rdma/Infiniband capabilities (see the "transport" option in the "gluster volume create" command), in our case we had to migrate existing Gluster volumes from plain TCP/Ethernet to Infiniband, while keeping the downtime as small as possible. That was my first experience with Infiniband, but it's not as hard as it seems, especially when you discover IPoIB (IP over Infiniband). From a sysadmin POV it's just "yet another network interface", but a 10Gbps one now :)

The Gluster volume migration then goes like this (schedule an - obvious - downtime for this):

On all gluster nodes (assuming that we start from machines installed only with the @core group, so minimal ones):

yum groupinstall "Infiniband Support"

chkconfig rdma on

<stop your clients or other  apps accessing gluster volumes, as they will be stopped>

service glusterd stop && chkconfig glusterd off &&  init 0

Then install the hardware in each server, connect all Infiniband cards to the (previously configured) IB switch, and power all servers back on. When the machines are back online, you have "just" to configure the ib interfaces. As in my case the machines were remote nodes, and I couldn't physically look at how they were cabled, I had to use some IB tools to see which port was connected (a tool like "ibv_devinfo" showed me which port was active/connected, while "ibdiagnet" shows you the topology and other nodes/devices). In our case it was port 2, so let's create the ifcfg-ib{0,1} files (ib1 being the one we'll use):

DEVICE=ib1
TYPE=Infiniband
BOOTPROTO=static
BROADCAST=192.168.123.255
IPADDR=192.168.123.2
NETMASK=255.255.255.0
NETWORK=192.168.123.0
ONBOOT=yes
NM_CONTROLLED=no
CONNECTED_MODE=yes

The interesting part here is "CONNECTED_MODE=yes": for people who already use iscsi, you know that Jumbo frames are really important if you have a dedicated VLAN (and if the Ethernet switch supports Jumbo frames too). As stated in the IPoIB kernel doc, you can have two operation modes: datagram (default, 2044 bytes MTU) or connected (up to 65520 bytes MTU). It's up to you to decide which one to use, but if you understood the Jumbo frames thing for iscsi, you already get the point.

An "ifup ib1" on all nodes will bring the interfaces up, and you can verify that everything works by pinging every other node, including with larger mtu values:

ping -s 16384 <other-node-on-the-infiniband-network>
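
You can also double-check that connected mode and the large MTU actually took effect: the first command below should print "connected", and the second should show an mtu of 65520:

cat /sys/class/net/ib1/mode

ip link show ib1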

If everything's fine, you can then decide to start gluster, *but* don't forget that gluster uses FQDNs (at least I hope that's how you initially configured your gluster setup: already on a dedicated segment, and using different FQDNs for the storage vlan). You just have to update your local resolver (internal DNS, local hosts files, whatever you want) to be sure that gluster will then use the new IP subnet on the Infiniband network. (If you haven't previously defined different hostnames for your gluster setup, you can "just" update that in the various /var/lib/glusterd/peers/* and /var/lib/glusterd/vols/*/*.vol files.)
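
If you went the hosts-file route, the change is as small as adding entries like these on every node (names and IPs here are illustrative):

192.168.123.1 gluster01.storage.example.org
192.168.123.2 gluster02.storage.example.org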

Restart the whole gluster stack (on all gluster nodes) and verify that it works fine:

service glusterd start

gluster peer status

gluster volume status

# and if you're happy with the results :

chkconfig glusterd on

So, in short:

  • Infiniband isn't that difficult (especially if you use IPoIB, which does have a very small overhead, though)
  • Migrating gluster from Ethernet to Infiniband is also easy (especially if you carefully planned your initial design regarding IP subnets/VLANs/segments/DNS resolution, for a "transparent" move)

November 21, 2014

Updating to Gluster 3.6 packages on CentOS 6

November 21, 2014 03:08 PM

I had to do some maintenance yesterday on our Gluster nodes used within CentOS.org infra: basically, I had to reconfigure some gluster volumes to use Infiniband instead of Ethernet. (I'll write a dedicated blog post about that migration later.)

While a lot of people consume packages directly from Gluster.org (for example http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/epel-6/x86_64/), you'll (soon) also be able to install those packages directly on CentOS, through packages built by the Storage SIG. At the moment I'm writing this blog post, gluster 3.6.1 packages are built and available on our Community Build Server Koji setup, but still in testing (and unsigned).

"But wait, there are already glusterfs packages tagged 3.6 in CentOS 6.6, right?", you'll say. Well, yes, but not the full stack. What you see in the [base] (or [updates]) repository are the client packages, since a base CentOS 6.x machine can be a gluster client (through fuse, or libgfapi - really interesting to speed up qemu-kvm instead of using the default fuse mount point ..), but the -server package isn't there. That's the reason why you have to use either the upstream gluster.org yum repositories or the Storage SIG one to get access to the full stack, and so run glusterd on CentOS.

Interested in testing those packages? Want to test the update before those packages are released by the Storage SIG? Here we go: http://cbs.centos.org/repos/storage6-testing/x86_64/os/Packages/ (packages available for CentOS 7 too).
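
A throwaway repo file is enough to point yum at those testing packages (unsigned, so gpgcheck stays off; for testing only, obviously):

cat > /etc/yum.repos.d/storage6-testing.repo <<EOF
[storage6-testing]
name=CentOS Storage SIG - glusterfs 3.6 testing
baseurl=http://cbs.centos.org/repos/storage6-testing/x86_64/os/
enabled=1
gpgcheck=0
EOF

yum install glusterfs-server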

By the way, if you've never tested Gluster, it's really easy to set up and play with, even within Virtual Machines. Interesting reading (quick start): http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

November 13, 2014

EPEL Orphaned packages and their dependents to be removed Dec 17th

November 13, 2014 10:22 AM

Hi,

The EPEL repository is run from within the Fedora project, sharing resources (including, importantly, their source trees) with the Fedora ecosystem; over the years it has proven to be a large and helpful resource for anyone running CentOS Linux.

One key challenge they have, however, much like CentOS Linux, is that the entire effort is run by a few people helped along by a small group of volunteers. So while the package list they provide is huge, the group of people putting in the work behind it is small. A fallout from this is that over the years a significant chunk of packages in the EPEL repo have become orphaned: they once had a maintainer, but that maintainer has either gone away or has other priorities now.

A few days back, Stephen announced that they were going to start working to drop these orphaned packages unless someone steps up to help maintain them. You can read his announcement here: https://lists.fedoraproject.org/pipermail/epel-devel/2014-November/010430.html

This is a great time for anyone looking to get involved with packages and packaging as a whole, and wanting to contribute to the larger CentOS Linux ecosystem, to jump in and take ownership of content (ideally stuff that you care about, and are hence likely to keep maintained for a period of time). They have a simple process to get started, documented at the Joining EPEL page here: https://fedoraproject.org/wiki/Joining_EPEL and you can see a list of packages being orphaned in the urls from Stephen’s post linked above.

Regards,

October 29, 2014

CentOS Dojo at LISA14 in Seattle on November 10th, 2014

October 29, 2014 08:03 PM

Join us at the all-day (09:00 to 17:00) CentOS Dojo on Monday, November 10th, 2014 at the LISA14 conference in Seattle, Washington.

There will be at least three CentOS board members there (Johnny Hughes, Jim Perrin, and Karsten Wade).

The current topics include:

  • CI environment scaling by Dave Nalley
  • DevOps Isn’t Just for WebOps: The Guerrilla’s Guide to Cultural Change by Michael Stahnke
  • The EPEL Phoenix Saga by Stephen Smoogen
  • Docker in the Distro by Jim Perrin
  • Managing your users by Matt Simmons

Visit the CentOS Wiki for more information.

October 28, 2014

CentOS-6.6 is Released

October 28, 2014 11:27 AM

CentOS-6.6 is now released; see the Announcement.

So, the Continuous Release RPMs were released on 21 October (7 days after RHEL-6.6) and the full release was done on 28 October (14 days after RHEL-6.6).

Enjoy.



October 21, 2014

Continuous Release Repository RPMs for CentOS-6.6 Released

October 21, 2014 07:46 AM

The CentOS team has released the Continuous Release (CR) Repository RPMs for CentOS-6.6 into their 6.5/cr tree.  See the Release Announcement.

Now a little more about the release process.

  1. Red Hat releases a version of Red Hat Enterprise Linux.  In this case the version is Red Hat Enterprise Linux 6.6 (RHEL-6.6), which was released on October 14th, 2014.  With that release comes the source code that RHEL 6.6 is based on.
  2. The CentOS team takes that released source code and starts building it for their CentOS release (in this case CentOS-6.6).  This process cannot start until the source code from Red Hat is available, which in this case was October 14th.
  3. At some point, all the source code has been built and there are RPMs available.  This normally takes 1-5 days, depending on how many source RPMs there are to build and how many times the build order needs to be changed to get the builds done correctly.
  4. After the CentOS team thinks they have a good set of binary RPMs built, they submit them to the QA team (a team of volunteers who do QA for the releases).  This QA process includes the t_functional suite and several knowledgeable system administrators downloading the RPMs and running tests on them to validate that updating with them works as planned.
  5. At this point there are tested RPMs ready, and the CentOS team needs to build an installer tree.  This means taking the new RPMs and moving them into place in the main tree, removing the older RPMs they replace, running the installer build to create an installable tree, and testing that installable tree.  This process can take up to 7 days.
  6. Once there is an installable tree, all the ISOs have to be created and tested.  We have to create the ISOs, upload them to the QA process, and test installs from the ISOs (correct sizes, how to split the ISOs, what is on the LiveCDs and LiveDVDs to keep them below the maximum size that fits on media, etc.).  We then also test UEFI installs, Secure Boot installs (CentOS-7 only), copying to USB keys and checking installs that way, etc.  This process can also take up to 7 days.
So, in the process above, we can have vetted binary RPMs ready to go as soon as 5 days after we start, but it may be 14 or more days after that before we have a complete set of ISOs to do a full release.  Thus the reason for the CR Repository.

The CR Repository


The process of building and testing an install tree, then creating and testing several types of ISO sets from that install tree (DVD installer, minimal install ISO, LiveCD, LiveDVD, etc.) can take 1-2 weeks after all the RPMs are built and have gone through initial QA testing.

The purpose of the CR repository is to provide quicker access to RPMs for an upcoming CentOS point release while further QA testing is ongoing and the ISO installers are being built and tested.
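
For CentOS-6 users who want early access to those packages, enabling CR is a couple of commands; this sketch assumes the centos-release-cr package is available from the extras repository, as described on the CR wiki page linked below:

# Enable the CR repo, then pull in the pending 6.6 updates
yum install centos-release-cr
yum update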

Updates in the CR for CentOS-6.6

More Information about CR.

CentOS-6.6 Release Notes (Still in progress until the actual CentOS-6.6 release).

Upstream RHEL-6.6 Release Notes and Technical Notes.

October 15, 2014

Running MariaDB, FreeIPA, and More with CentOS Containers

October 15, 2014 02:03 PM

The CentOS Project is pleased to announce four new Docker images in the CentOS Container Set, providing popular, ready to use containerized applications and services. Today you can grab containers with MariaDB, Nginx, FreeIPA, and the Apache HTTP Server straight from the Docker Hub.

The new containers are based on CentOS 7 and are tailored to include just the right set of packages to deliver MariaDB, Nginx, FreeIPA, or the Apache HTTP Server right out of the box.

This first set of applications and services provides two of the world’s most popular web servers, MariaDB for your database needs, and FreeIPA for an integrated security information management solution.

The CentOS Container Set is an effort to leverage the CentOS Project to give developers and admins the building blocks to easily set up containerized services in their environment. Keep an eye on the CentOS blog for further releases, or help us as we continue to develop more!

To get started with one of the images, use `docker pull centos/<app>`, where <app> is the name of the container (*e.g.* `docker pull centos/mariadb`). You can find some quick “getting started” info on the Docker Hub page for each application.
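
As a rough sketch of what that looks like end to end (the run options below are illustrative assumptions; each image’s Docker Hub page documents its actual configuration):

# Pull the MariaDB image and run it detached, publishing the default MariaDB port
docker pull centos/mariadb
docker run -d --name mydb -p 3306:3306 centos/mariadb

# Confirm the container is up
docker ps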

Jason Brooks has written up a longer howto for FreeIPA that details how to build the container (which is already done here, but you can rebuild the images if you like, using the Dockerfiles on GitHub), and how to set it up to use FreeIPA with an application.

We have a larger set of Dockerfiles (derived initially from the Fedora Dockerfiles) that we’re working on to develop pre-made CentOS Docker containers for easy use. Join the centos-devel mailing list to ask questions about the containers or to provide feedback on their use. We also accept pull requests if you have any fixes or new Dockerfiles to contribute!

Koji – CentOS CBS infra and sslv3/Poodle important notification

October 15, 2014 10:46 AM

As most of you already know, there is an important SSLv3 vulnerability (CVE-2014-3566, see https://access.redhat.com/articles/1232123), known as POODLE.
While it's easy to disable SSLv3 in the allowed protocols at the server level (for example, SSLProtocol All -SSLv2 -SSLv3 for Apache), some clients still default to SSLv3, and Koji does that.
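
On the server side, the Apache change is a one-liner plus a reload; the file path below is the usual CentOS location but may differ on your setup:

# In /etc/httpd/conf.d/ssl.conf: allow everything except SSLv2 and SSLv3
SSLProtocol All -SSLv2 -SSLv3

# Then reload httpd to pick up the change
service httpd reload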

We have now disabled SSLv3 on our cbs.centos.org Koji instance, so if you're a cbs/koji user, please adapt your local koji package (local fix!).
At the moment there is no upstream package available, but the following patch has also been tested by the Fedora people (credit goes to https://lists.fedoraproject.org/pipermail/infrastructure/2014-October/014976.html):

=====================================================
--- SSLCommon.py.orig    2014-10-15 11:42:54.747082029 +0200
+++ SSLCommon.py    2014-10-15 11:44:08.215257590 +0200
@@ -37,7 +37,8 @@
     if f and not os.access(f, os.R_OK):
         raise StandardError, "%s does not exist or is not readable" % f
 
-    ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
+    #ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
+    ctx = SSL.Context(SSL.TLSv1_METHOD)   # TLSv1 only
     ctx.use_certificate_file(key_and_cert)
     ctx.use_privatekey_file(key_and_cert)
     ctx.load_client_ca(ca_cert)
@@ -45,7 +46,8 @@
     verify = SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT
     ctx.set_verify(verify, our_verify)
     ctx.set_verify_depth(10)
-    ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1)
+    #ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1)
+    ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1 | SSL.OP_NO_SSLv3)
     return ctx
=====================================================
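
If you want to apply it locally, something like this should do it; the path is an assumption and will vary with your Python version and koji layout:

# Save the diff above as koji-no-sslv3.patch, then patch the installed module
patch /usr/lib/python2.6/site-packages/koji/ssl/SSLCommon.py < koji-no-sslv3.patch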

We'll keep you informed about possible upstream koji packages that would default to at least TLSv1.

If you encounter a problem, feel free to drop into the #centos-devel channel on irc.freenode.net and have a chat with us.

October 01, 2014

Xen4CentOS XSA-108 Security update for CentOS-6

October 01, 2014 08:00 AM

There has been a fair amount of press in the last couple of days concerning the Xen security advisory XSA-108, and the fact that Amazon EC2 and Rackspace must reboot after applying this update:

Rackspace forced to reboot cloud over Xen bug

Amazon Reboots Cloud Servers, Xen Bug Blamed

There are other stories, but those articles cover the main issue.

As KB tweeted, the CentOS-6 Xen4CentOS release is also affected by this issue, and the CentOS team has released CESA-2014:X013 to deal with XSA-108.  Three other Xen4CentOS updates have also been released: CESA-2014:X010, CESA-2014:X011, and CESA-2014:X012.

If you are using Xen4CentOS on CentOS-6, please use yum update to get these security updates ... and, like Rackspace and Amazon EC2, you need to reboot your dom0 machine after the updates are applied.
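
In practice, on an affected dom0, that amounts to:

# Pull in CESA-2014:X010 through X013, then reboot the host
yum update
reboot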

September 30, 2014

CentOS team at cPanel 2014

September 30, 2014 03:28 AM

The CentOS team will have a booth in the Exhibit Hall for the 2014 cPanel Conference at the Westin Galleria hotel in Houston, Texas from September 30th to October 1st 2014.

CentOS Board members Johnny Hughes (that's me :D) and Jim Perrin will be at the booth whenever the hall is open. 

We are looking forward to lots of discussions, and we will have some swag to give out (t-shirts, including the new 10 Year Anniversary tee, stickers, etc.). We will also be happy to install CentOS on your laptop (or let you do it) ... or, if you have a USB key available, we will put a CentOS ISO on it for you to use for an install later.

If you are going to be at cPanel 2014, come on down and see us!

