May 21, 2015

CentOS 7 armv7hl build in progress

May 21, 2015 12:36 PM

As more and more people were showing interest in CentOS on the ARM platform, we thought that it would be a good idea to start trying to build CentOS 7 for that platform. Jim started with arm64/aarch64 and got an alpha build ready and installable.

On my end, I configured some armv7hl nodes, "donated" to the project by Scaleway. The first goal was to init some Plague builders to distribute the jobs on those nodes, which is now done. Then I worked on a "self-contained" buildroot, so that all other packages can be rebuilt only against that buildroot. So building first gcc from CentOS 7 (latest release, better arm support), then glibc, etc, etc ... That buildroot is now done and is available here.
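
To give an idea of what "self-contained" means in mock terms, the buildroot config basically points only at itself. A minimal sketch (the config name, path and URL here are illustrative, not the actual ones used) :

# /etc/mock/centos-7-armv7hl.cfg (illustrative fragment)
config_opts['root'] = 'centos-7-armv7hl'
config_opts['target_arch'] = 'armv7hl'
config_opts['yum.conf'] = """
[main]
keepcache=1

[buildroot]
name=self-contained armv7hl buildroot
baseurl=http://example.org/buildroot/c7-armv7hl/
enabled=1
gpgcheck=0
"""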

Now the fun started (meaning that 4 armv7hl nodes are currently (re)building a bunch of SRPMS) and you can follow the status on the arm-dev list if you're interested, or even better, if you're willing to join the party, have a look at the build logs for packages that failed to rebuild. The first target would be to have a "minimal" install working, so basically having sshd/yum working. Then try other things like a GUI environment.

As plague-server required mod_python (now deprecated), we don't have any Web UI people can have a look at. But I created a "quick-and-dirty" script that gathers information from the mysql DB, and outputs that here :
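
The script itself is nothing fancy; think something along these lines (a sketch only: the DB, table and column names here are guesses, not plague's actual schema) :

#!/bin/bash
# dump the current jobs list from the plague DB into a static page
mysql -N -e "SELECT package, status FROM jobs ORDER BY status;" plague \
 | awk 'BEGIN { print "<html><body><pre>" } { print } END { print "</pre></body></html>" }' \
 > /var/www/html/armv7hl-status.html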

The other interesting step will be to produce .img files that would work on some armv7hl nodes. So diving into uboot for Odroid C1 (just as an example) ....

I'll also try to maintain a dedicated Wiki page for the arm32 status in the following days/weeks/etc ..

May 18, 2015

CentOS 7 with GlusterFS on AArch64

May 18, 2015 06:09 PM

Initially I meant for this to be a much more in-depth blog post about running GlusterFS on AArch64 hardware, but honestly it’s been ridiculously easy to get running. There isn’t much to say about running it that isn’t already covered in the GlusterFS quickstart guide. Even building GlusterFS on AArch64 was a snap; I simply pulled down the glusterfs-3.6.3 src.rpm and ran it through mock on my AArch64 build system. A few seconds later, I had around a dozen glusterfs packages ready for installation. After bringing up a test box, I was confident that this was working entirely too well and something would explode at any minute. I followed the quickstart, and a few minutes later, I had a working test implementation of GlusterFS up and running. There were no explosions, no fireworks, and no errors. The whole thing was incredibly painless to get working, and I can’t wait to see people using it in production.
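
For the curious, the whole exercise boiled down to something like this (a sketch; the exact src.rpm URL and the mock config name are assumptions to adapt to your setup) :

wget http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/glusterfs-3.6.3-1.el7.src.rpm
mock -r centos-7-aarch64 --rebuild glusterfs-3.6.3-1.el7.src.rpm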

gluster volume status and installed packages

Hopefully this means getting an official GlusterFS build for AArch64 will be as simple as asking nicely, and possibly working with the Storage SIG for access to the builders.

May 13, 2015

Firefox 38 and TLS less than 1.2

May 13, 2015 05:19 AM

Red Hat released the source code for Firefox 38.  We have released (or will be releasing today) builds for CentOS-5, CentOS-6, and CentOS-7.

By default, it does not connect to https sites using TLS versions less than 1.2. This means it will not connect to sites hosted on CentOS-5, for example .. and there are many others.
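
A quick way to check whether a given server can even speak TLS 1.2 is openssl's s_client (the hostname is an example; the command fails against servers limited to older TLS versions) :

openssl s_client -connect www.example.com:443 -tls1_2 < /dev/null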

In any event, here is a wiki article that explains potential issues and
workarounds:

http://wiki.centos.org/TipsAndTricks/Firefox38onCentOS

May 08, 2015

Running CentOS Linux 7 on AArch64 hardware

May 08, 2015 09:07 PM

The journey of rebuilding the latest release of CentOS Linux 7 on AArch64 hardware has certainly been an interesting one. After a few bug reports, a couple of minor patches, and several iterations through the build system, we finally have a release for community testing and consumption, which we’ll have available for install early next week. There are some differences from a traditional CentOS install that users wishing to experiment should keep in mind. Because AArch64 hardware is still very new, this build requires a newer kernel that we’ll be patching fairly regularly with help from the community as well as various vendors. The initial alpha release will ship with a kernel-4.0.0 based kernel; however, we’re working on providing a 4.1 based kernel using ACPI very soon. After the initial kickoff next week, we’ll start setting expectations for fixes, release cycles (I’m thinking monthly, in keeping with other plans) and more. If you want to participate or contribute patches, please join our arm-dev list and say hi.

Teaser screenshots!

In the example below, I copied the boot.iso to USB via dd. For the hardware I have, the installer is started over a serial interface, and then accessed via VNC. A text mode is available as well, just like in the default CentOS 7 installer for x86_64 (you’ll probably want to use VNC initially).
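
For reference, the dd step is the usual one (replace /dev/sdX with your actual USB device; this wipes its contents) :

dd if=boot.iso of=/dev/sdX bs=4M
sync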

Screenshot from 2015-05-05 18:35:40

The VNC based installer is identical to the one you’re already familiar with for CentOS. The only difference of note here is that by default only the ‘minimal’ install is available. Additional packages may be installed after the installation completes. This is something we’ll improve on as the AArch64 build matures.

Screenshot from 2015-05-07 11:11:46

Just as you’d expect, once the installer completes successfully, you’ll be prompted to reboot.

Screenshot from 2015-05-07 11:19:04

After the installation completes and you have rebooted the system, the console login prompt shows the 4.0.0-1 kernel goodness, and you’re ready to deploy additional software and configure the system.

Screenshot from 2015-05-08 15:57:01

May 06, 2015

Signed Repository Metadata is now Available for CentOS 6 and 7 for the Updates Repo

May 06, 2015 02:35 PM

The CentOS Project is now providing a signed copy of the repodata metadata file (repomd.xml.asc) for our Updates Repository for both CentOS-6 and CentOS-7. To use this feature, you would edit the file /etc/yum.repos.d/CentOS-Base.repo and locate the [updates] section; the default looks like this:

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

You would add in this option:

repo_gpgcheck=1
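
So that the edited [updates] section would look like this:

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
repo_gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7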

Currently we only have this option available on the [updates] repos for CentOS-6 and CentOS-7, but we will be rolling it out to all C6 and C7 repos in the future.

Yum will verify that the repo in question is signed with the RPM-GPG-KEY-CentOS-7 (or RPM-GPG-KEY-CentOS-6 for CentOS-6) key .. so you can be sure these updates come directly from the CentOS Project and no one else.

Here is a good read about GPG signing and verifying RPM packages and yum repositories. It also explains why we are not rolling it out to the CentOS-5 repos.

There is also further information in this CentOS mailing list thread.

Hacking initrd.img for fun and profit

May 06, 2015 06:16 AM

During my presentation at Loadays 2015, I mentioned some tips and tricks around Anaconda and kickstart, and so how to deploy CentOS fully automated. I asked the audience where to store the kickstart that would then be used by anaconda to install CentOS (the same works for RHEL/Fedora), and I got several answers, like "on the http server", or "on the ftp server", which is where most people will put their kickstart files. Some would generate those files "dynamically" (through $cfgmgmt - I use Ansible with a Jinja2 template for this) as a bonus point.

But it's not mandatory to host your kickstart file on a publicly available http/ftp/nfs server, and surely not when having to reinstall nodes that aren't in the same DC. Within the CentOS.org infra, I sometimes have to reinstall remote nodes ("donated" to the Project) from CentOS 5 or 6 to 7. That's where injecting your ks file directly into the initrd.img really helps (yes, so no network server needed). Just as an intro, here is how you can remotely trigger a CentOS install, without any medium/iso/pxe environment: basically you just need to download the pxeboot images (so vmlinuz and initrd.img) and provide some default settings for Anaconda (the network config, how to grab the stage2 image, and so where the install tree is). On the machine to be reinstalled :

cd /boot/
wget http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/{vmlinuz,initrd.img}

Now you can generate your kickstart file for that node and send it to the remote node (with scp, etc ..). The next step on that remote node is to "inject" the kickstart directly into the initrd.img :

#assuming we have copied the ks file as ks.cfg in /boot already
echo ks.cfg | cpio -c -o >> initrd.img

So now we have a kernel and an initrd.img containing the kickstart file. You can modify grub(2) to add a new menu entry, make it the default one for the next reboot, and enjoy. But I usually prefer not doing that, in case you need someone to reset that node remotely if something goes wrong; so instead of modifying grub(2), I just use kexec to reboot directly into the new kernel (without having to power cycle the node) :

# can be changed to something else, if for example node is running another distro not using yum as package manager
yum install -y wget kexec-tools 
kexec -l vmlinuz --append='net.ifnames=0 biosdevname=0 ksdevice=eth0 inst.ks=file:/ks.cfg inst.lang=en_GB inst.keymap=be-latin1 ip=your.ip netmask=your.netmask gateway=your.gw dns=your.dns' --initrd=initrd.img && kexec -e

As you can see in the append line, I just tell anaconda/the kernel not to use the new nic naming (the default now in CentOS 7, and sometimes hard to guess in advance), assuming that eth0 is the one to use (verify that carefully !), and the traditional ks= line in fact now just points to /ks.cfg ( initrd.img being / ). The rest is self-explanatory.

The other cool stuff is that you can use the same "inject" technique for Virtual Machines installed through virt-install: it supports injecting files directly into the initrd.img, so it's even easier than for bare metal nodes: you just have to use two parameters for virt-install (see the sketch after this list) :

  • --initrd-inject=/path/to/your/ks.cfg
  • --extra-args "console=ttyS0 ks=file:/ks.cfg"
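
Putting it together, a virt-install run could look like this (a sketch: the VM name, memory, disk size and install tree are examples to adapt) :

virt-install --name testvm --ram 2048 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2,size=10 \
  --location http://mirror.centos.org/centos/7/os/x86_64/ \
  --initrd-inject=/path/to/your/ks.cfg \
  --extra-args "console=ttyS0 ks=file:/ks.cfg" \
  --graphics none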

Hope this helps

May 05, 2015

Last few days in CentOS, May 5th

May 05, 2015 05:32 PM

Just a short recap of some of the things going on around the CentOS Ecosystem.

* We now have a 5-machine armv7 ( 32-bit ) build system running. Over the coming days and weeks you should keep an eye out for testing calls. If you can, and have interesting ARM hardware, feel free to join us on the arm-dev list ( http://lists.centos.org/pipermail/arm-dev/ ); more information on the build system can be found in this thread: http://lists.centos.org/pipermail/arm-dev/2015-April/000126.html

* There is a lot of work being done to get XFCE into a good state for CentOS-6 and 7; you can track the conversation from this thread: http://lists.centos.org/pipermail/centos-devel/2015-May/013326.html

* The RDO Project is running 2 test days for OpenStack on CentOS. You can get details and join the effort ( it runs 5th and 6th May ) at http://lists.centos.org/pipermail/centos-devel/2015-April/013309.html

* There is a Vagrant box now available for CentOS, for user testing and feedback – if you use Vagrant with VirtualBox, Libvirt or VMware backends, please give this a try and send feedback to the centos-devel list ( more info at: http://lists.centos.org/pipermail/centos-devel/2015-April/013297.html )

Events:

* We had a great CentOS Dojo at Bangalore, India on the 29th April. About 70 CentOS users came together to talk about containers. Details of the meeting are at http://www.meetup.com/CentOS-India/events/221769525/ and you can see some pictures at https://www.flickr.com/photos/saifikhan/sets/72157649944407033/

* OpenStack Summit is happening at Vancouver, Canada from May 18th to 22nd. The CentOS Project will have a presence there. If you are coming to the event, stop by and say hi! We will also have t-shirts and stickers, so come along and help yourself to some of those.

* The Netherlands UUG Spring Conference is taking place on the 28th May ( https://www.nluug.nl/index.html ). I will be there speaking about CentOS Linux, the CentOS Project, and some of the new initiatives we are starting up, along with how people can get involved in these efforts.

In other news, 7 students have taken up the Google Summer of Code slots that were allocated to the CentOS Project. Over the next few weeks expect to see some traffic on the centos-devel list from those students – we will be encouraging them to come and join the various SIG meetings, communicate their progress outward, and ask for help if they get stuck anywhere. They will be working on things ranging from kpatch live patching, to Xen and cloud installs, to improving our documentation trails! I’m very excited to have these students onboard! Hope they have a great summer ahead and produce some great code.

- KB

April 22, 2015

Some recent news from CentOS : Apr 22 2015

April 22, 2015 11:37 AM

Hi,

This is a summary of some of the major things going on in the project. It's not a comprehensive list, but it should cover most of the major traction points:

Firstly, let's all welcome Brian Stinson to the fold ( http://lists.centos.org/pipermail/centos-devel/2015-April/013211.html )

———-
Updates for CentOS 5/6/7 : All updates from upstream are released into the CentOS Linux mirror network.

———-
* Moving towards Signed Metadata ( ref: http://lists.centos.org/pipermail/centos-devel/2015-April/013210.html )

* Building a downstream CentOS based Atomic Host ( ref: http://lists.centos.org/pipermail/centos-devel/2015-April/013209.html )

———-
Other interesting things:

* The CentOS Mini Dojo in Bangalore April 2015 : http://wiki.centos.org/Events/Dojo/Bangalore2015

* Fabian spoke at Loadays a few weekends back and did a great session on installing CentOS. Slides from his presentation are available here: http://people.centos.org/arrfab/Events/Loadays-2015/CentOS%20Install%20method%20review.pdf

* The CentOS Project is participating in the Google Summer of Code for the first time this year, and we have been allocated 7 slots for projects. There are some very interesting projects in the pipeline. The landing page for the ideas is at http://wiki.centos.org/GSoC/2015/Ideas – and conversation around this has been taking place on both the centos-devel list and the gsoc list ( http://lists.centos.org/ )

———-
Finally, I am going to try and run this weekly with a few notes from various places. Any and all help is appreciated. You can send me news to post at kbsingh centos.org.

Regards,

March 31, 2015

CentOS-7 (1503) is released

March 31, 2015 06:32 PM

Today the CentOS Project announced the immediate availability of CentOS-7 (1503), the second release of CentOS-7.

Find out more about the release announcement here: http://lists.centos.org/pipermail/centos-announce/2015-March/021006.html.

Also don’t forget to read the release notes at the wiki: http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.

March 20, 2015

CentOS-7 / CR repo has been populated

March 20, 2015 02:41 PM

Waiting for the new package set in the next CentOS-7 release? A majority of the packages are now available on every CentOS-7 machine by running the following commands:

yum update
yum --enablerepo=cr list updates

It's important you run a ‘yum update’ first, since the cr repo definitions are only available in the newer centos-release rpms. Once you are happy with the new content that is going to come via the CR repos, you can apply it with:
yum --enablerepo=cr update

For more information on what's been pushed to these CR repos, look at the announcement email at: http://lists.centos.org/pipermail/centos-announce/2015-March/020980.html

You can get more information on the CR process, the repository and the content inside it at: http://wiki.centos.org/AdditionalResources/Repositories/CR

– KB

March 11, 2015

CentOS-7 next release

March 11, 2015 03:36 PM

Red Hat Enterprise Linux 7.1 was released a few days back; you can go read the release notes now. It's a great way to find out the big changes and enhancements in this release.

On the CentOS side of things, we have been working through last week on getting the sources organised and the builds started up. We are pretty close to finishing the first cycle of builds. The next step would be to get the content into QA repos so people can start poking at it. From there on, content will make its way into the CR/ repos, and we will go to work on the distribution media ( i.e. the ISOs, cloud images, containers, live media etc ). Once that is done, we have another couple of days for QA around those bits, followed by the wider release.

This release of CentOS-7 is going to be tagged 1503 to indicate the month of the upstream release.

In terms of a timeline, this is where things stand: we hope to start moving content into the CR repos by the 13th/14th of March. This should set us up for releasing the distro around the end of the following week, the 18th to 20th of March. Of course, this is a projected date and might change depending on how much time and work the QA cycles take up and any challenges we hit in the distro media building stages.

Note that the CR repos contain packages that have not been through as much QA as the content at release time, so while they do give people a path to early access to next-release rpms and content, they come with added risk.

Some of the special interest groups in CentOS are keen to get started with the new content, and we are trying to find a way to make that work – but at this time there is no mechanism that bridges the two buildsystems ( the CentOS Distro one, and the Community Build System used by the SIGs ). So for the time being the SIGs will just need to wait a few more days before they can start their trials and builds. For the future, it's something we will try and find a solution for, so that in the next release SIGs can start doing their builds and testing as the distro packages are being built.

– KB

March 10, 2015

SCALE 13x – no talking, all walking, and a great ally skills workshop

March 10, 2015 02:40 PM

For the first time in-I-can’t-remember I didn’t submit a talk to SCALE, so it was with a different personal energy that I attended SCALE 13x on 19 to 22 February this year. Not having a do-or-die public-speaking-scheduled-thing in front of me allowed for a more relaxed state of mind. Yet it was strange to not be one of the speakers this time. Still, all my old SCALE friends made me feel very welcome and accommodated. As usual, it was nice to have my family there, where so many know them as former speakers and regular attendees.

Rather than focus on talking to an audience, this time I spent my energy walking around the expo hall-and-wherever to talk with as many projects and companies as possible. My goal was to get an idea of who uses CentOS Linux, for what purposes, and get ideas of what people need and want from the project. I also provided information on what the Project has been up to especially around SIGs. That activity was fun, informative, and interesting.

Also I spent my share of time at the booth that housed the CentOS Project, RDO/OpenStack, oVirt, and OpenShift Origin. (I can’t wait to see the next iterations of Ryan’s Raspberry Pi 2 mini-cluster demo for OpenShift Origin.) I watched other people, including my wife, play with instruments and music software at the ever-popular Fedora Project booth (winners once again of a favorite booth award.) With a small rock concert and 3D printer, it was hard not to notice.

There were two sessions I was drawn to the most. The first was Ruth Suehle‘s keynote on Sunday morning, Makers:  The Next Frontier for Open Source. I’ve worked with Ruth a long time, seen her speak multiple times, seen lots of cool stuff that she’s made over the years, and I knew it would be an excellent talk. She used her great bully pulpit to teach and entreat the audience about the needs of the makers communities to get some serious clue and help from open source communities.

The other session was a workshop on Friday to learn skills as a man to be an ally for women when sexist things happen. This is something I’m interested in, being a better ally for people, including in the face of sexism and sexist behavior. For myself, I’ve begun calling myself a born again feminist. To me that means I’ve had a later-in-life realization that while I’ve always supported the ideas and topics around feminism, I wasn’t really aware how deeply pervasive sexism is, how blandly I’d looked past it, and that I could be part of the solution. Part of being part of the solution is not being afraid of being a feminist in name and action.

The workshop (described in detail here) was led by Valerie Aurora, who’s gone from kernel hacker to executive director of the Ada Initiative. The Ada Initiative “supports women in open technology and culture …” Thus the workshop was primarily for people working in open technology and open culture. It started with a brief introduction that was useful in many ways, such as reminding us about how to best engage with difficult online exchanges (more advanced than ‘don’t feed the trolls’), the reason for needing male allies (hint:  it’s about doing something good with the privileged position and power that one has in society), and keeping it all in a useful context by not having the workshop be a place to debate “is there sexism?” Instead we acknowledge there is something broken, it needs fixing, and we here can do something about it. You can watch an introduction and highlights of the workshop in this video that Valerie gave to the staff at the Wikimedia Foundation, with closed captioned subtitles available for English.

For the majority of the workshop, we were in small groups (4 to 6 people) to discuss approaches we would take to certain scenarios. One scenario (as I recall them) was, “A woman is standing outside of your group at an event and looks as if she might be interested in joining the discussion. How would you handle this?” Another was, “At a work party someone comments that a co-worker with a large number of children must get a lot of sex.” Then the small groups discussed our approaches, and presented some ideas or questions back to the overall group. And then on to the next scenario.

The discussion/collaboration session was really useful in a number of ways. First, it helped give specific and general ideas of how to handle — and not handle — specific scenarios. Second, it also served to give a crosscut of different types of situations that do occur, so you can take skills from one scenario more easily into another. Not only was it useful for dealing with sexist situations, it was easy to see the same thinking and skills could be applied to any situation where someone is objectified, made to be an Other, treated as a stereotype, and so forth — thus useful for handling racism, ageism, and so forth. Third, it was useful to get a chance to practice what to say in response when we witness sexism, partially because it helps us to have something to immediately say rather than being shocked and mute.

The format of the workshop was great. Elements included working in small groups, a person in each group being a gatekeeper who makes sure everyone in the group is heard from, presenting ideas back to the overall group in a discussion format, all the way down to how we introduced ourselves to our small groups. I also appreciated moving across groups at least once, that helped us get fresher perspectives with each scenario.

This is definitely a workshop I’d like to bring to any tech company. All of us can use help and perspective on how to react when someone does something sexist, or we have a chance to do something about systemic sexism. We can agree that it’s unkind to make people feel uncomfortable, and it’s kind to help people by pushing against the discomfort making.

There is something I’ve noticed for most of my life. When talking with my peers — people who are born mainly after the 1960s in a post-feminist-creation era — we are often in agreement about how people should treat each other along the axes of sex, race, gender, and so forth. And while I see in younger generations a huge amount of support for ideas such as “people should be able to legally marry whomever they want”, I still hear a lot of people afraid of the f-word — feminism. It’s as if people are in full agreement with the concepts behind the word, but afraid to use the word itself. This is the other part of my ‘born again’ experience, that I need to embrace the word as well as the concept in order to really align myself correctly, live correctly, and be a good ally of all people.

Building CentOS Linux 7 for ARMv8

March 10, 2015 11:06 AM

As I’d mentioned previously, the fine folks of Applied Micro were kind enough to give us an X-C1 development box to see if it was feasible to build and run CentOS Linux 7. On my first attempt through, I realized I hadn’t been taking decent notes, so I scrapped most of the package work and started over. This time I took a few more notes, and hopefully I’ve documented some things that will help everyone. If you’re interested in discussing or joining the ARMv8 process, feel free to join our ARM development mailing list, or find me on Freenode’s irc network in #centos-devel (nick: Evolution).

Initial Steps

The official Linux architecture name for ARMv8 is aarch64. However, both terms seem to be in circulation, and we use them to mean the same thing.

My plan for the X-C1 box was to install Fedora, and use mock to get a decent buildroot in order. Because the X-C1 box came with a uboot based image by default, I had to replace it with UEFI first. The directions for doing that can be found on Fedora’s aarch64 wiki page. Once uboot was successfully replaced with UEFI, I installed Fedora 21 and mock. I chose F21 largely because I couldn’t find a Fedora 19 image to work from, but there are Fedora 19 packages available to help bootstrap a C7 mock chroot, which is really what I was after. I copied this repository to a local system, both to not hammer the remote machine and to reduce latency.

Host Modifications

While I worked on getting the roughly 150 packages needed for a basic mock buildroot built, I kept running into a recurring problem with failed tests for elfutils. Part of the elfutils test suite tests coredumps, and it seems that the buildhost’s systemd-coredump utility was stealing them. After some time digging around with this, I ended up with the following commands:

# echo "kernel.core_pattern=core" > /etc/sysctl.d/50-coredump.conf
# sysctl --system

Once that was done, the elfutils build tests passed without issue.

Package Builds

Initially I attempted to work out a build order that would allow me to build from the ground up, but I quickly realized this was foolish. When starting from the bottom like this, everything has a build dependency on something else. Instead I chose to work toward a clean mock init first, and then work up from that point. Since I only have one board to build against, I’m currently abusing bash and cycling things through mock directly. The idea of using koji or plague seemed a bit overkill with just one build host doing the work. Once everything has been built (or thrown out of the build) against the F19 repository, it will be time to do the build again against itself to ensure that everything is properly linked and self-hosting.
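
"Abusing bash" here means little more than a loop like this (a sketch; the mock config name and srpm location are examples) :

#!/bin/bash
# rebuild every srpm in mock, keeping a list of failures to look at later
for srpm in /srpms/*.src.rpm; do
  mock -r centos-7-aarch64 --rebuild "$srpm" || echo "$srpm" >> /tmp/failed-builds.txt
done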

It’s worth noting that some of the packages in the F19 repository are actually tagged as F20 and are newer than what exists in CentOS Linux 7. I found it necessary to exclude these newer versions, and often to exclude other packages as the build progressed. While not an exhaustive list, examples are:

  • sqlite-3.8
  • tcl-8.5.14
  • tk
  • Various perl modules

Exclusions:

I mentioned that a few packages have been ejected from the build outright. Some of these are because the build dependencies either aren’t, or can’t be, met. The prime example of this is ADA support, which requires the same cross-compiled (or otherwise bootstrapped) ADA version to build (yay for circular dependencies). Since nothing appears to explicitly depend on the ADA packages like libgnat, for now I’ve removed them. Down the road, if I’m able to properly add support, I will certainly revisit this decision.

Substitutions:

There are a few packages from CentOS Linux 7 that I just won’t be able to use. The primary issue is the kernel. The 3.10 kernel just doesn’t have the support for aarch64 that’s needed, so my current plan is to target 3.19 as the kernel to ship for aarch64. This is still speculation, as I’ve been procrastinating on actually building it. I imagine that will happen for the next blog post update :-)

The other problematic package is anaconda. I’m unsure if I can patch the version in 7 to support aarch64, or if I’ll need to ‘forward-port’ and use a more recent version from fedora to handle the installation. If anyone from the community has insights or suggestions for this, please speak up.

I’ll continue posting updates as the build progresses, or as I find interesting things worth mentioning.

March 03, 2015

CentOS Linux 7 and Arm

March 03, 2015 11:54 AM

ARMv7

With the growing list of easily accessible ARM hardware like the Raspberry Pi 2 and the ODROID-C1, several community efforts have sprouted, working out the details for getting CentOS-7 built and available for the new boards. One of our UK based community members has made the most progress so far, posting his build process on the CentOS arm development list. As he progresses, he’s also been keeping some fairly detailed notes about what changes he’s had to make. Once he’s been able to produce an installable (or extractable) image, we’ll see about incorporating and maintaining his changes as branches in git. With a bit more work, we should be able to start rolling out a fully community built and supported 32-bit arm build of CentOS-7.

ARMv8

Far from stopping there, work is underway on the 64bit ARM front as well. The fine folks at Applied Micro were kind enough to lend us two of their X-C1 ARMv8 development kits. After a bit of work to replace the default uboot with UEFI, and a few early missteps, the work on an aarch64 port of CentOS-7 is progressing along rather nicely as well. I’ll work on documenting the build system, steps to duplicate for anyone who has the hardware and wants to participate, and potential changes required.

If you’d like to get involved or want to follow the progress of the work, please join our arm development list, or join us in #centos-devel on freenode irc.

February 20, 2015

Pulp Project : Managing RPM repositories on CentOS – From CentOS Dojo Brussels 2015

February 20, 2015 03:15 PM

At the CentOS Dojo Brussels 2015, Julien Pivotto presented an introduction to the Pulp Project and how it makes life easier for people needing to manage rpm repositories, including your own content and syncing down upstream distro content.

In this session he covers:

  • What is pulp?
  • How does it work?
  • Mirrors management
  • Repositories workflows
  • RPM deployment and release management

This Video is now online at https://www.youtube.com/video/IkhCvNXWMC4

You can get the slides from this session at the event page on http://www.slideshare.net/roidelapluie/an-introduction-to-the-pulp-project

Regards

Introduction to RPM packaging – From CentOS Dojo Brussels 2015

February 20, 2015 11:27 AM

At the CentOS Dojo Brussels 2015, Brian Stinson presented an introduction to RPM packaging, focused on sysadmins looking to take the next step into packaging their own apps as well as dependencies.

In this session he covers:

  • Short overview of the RPM format
  • Setting up an rpmbuild environment
  • Building packages with rpmbuild
  • Building packages with Mock
  • Where to look for further reading

This Video is now online at https://www.youtube.com/video/CTTbu_q2xiQ

You can get the slides from this session at the event page on http://wiki.centos.org/Events/Dojo/Brussels2015

Regards

February 05, 2015

Guide to Software Collections – From CentOS Dojo Brussels 2015

February 05, 2015 11:38 AM

At the CentOS Dojo Brussels 2015, Honza Horak presented on Software Collections: what they are, how they work, and how they are implemented. During this 42 min session he also ran through how people can create their own collections and how they can extend existing ones.

Software Collections are a way to deliver parallel-installable rpm trees that might contain extensions to existing software already on the machine, or might deliver a new version of a component ( e.g. hosting multiple versions of python or ruby on the same machine at the same time, still manageable via rpm tools )
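
As a quick illustration of the parallel-install idea, a collection is enabled per session instead of replacing the system version (the collection name here is just an example; what is available depends on what you have installed) :

scl enable python27 bash   # shell with the collection's python first in PATH
python --version           # the collection's python
exit
python --version           # back to the system python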

This Video is now online at https://www.youtube.com/video/8TmK2g9amj4

You can get the slides from this session at the event page on http://wiki.centos.org/Events/Dojo/Brussels2015

Regards

January 23, 2015

More builders available for Koji/CBS

January 23, 2015 04:54 PM

As you probably know, the CentOS Project now hosts the CBS (aka Community Build System) effort, which is used to build all packages for the CentOS SIGs.

There was already one physical node dedicated to Koji Web and Koji Hub, and another node dedicated to the build threads (koji-builder). As we now have more people building packages, we thought it was time to add more builders to the mix, and here we go: http://cbs.centos.org/koji/hosts now lists two added machines that are dedicated to Koji/CBS.

Those added nodes each have 2 * Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (8 cores/socket, with Hyper-Threading activated), and 32GB of RAM. Let's see how the SIG members will keep those builders busy, throwing a bunch of interesting packages at the CentOS Community :-) . Have a nice week-end!

January 19, 2015

I do terrible things sometimes

January 19, 2015 12:00 AM

Abandon hope…

This is not a how-to, but more of a detailed confession about a terrible thing I’ve done in the last few days. The basic concept for this crime against humanity came during a user’s group meeting where several companies expressed overwhelming interest in containers, but were pinned to older, unsupported versions of CentOS due to 3rd party software constraints. They asked if it would be possible to run a CentOS-4 based container instead of a full VM. While obviously migrating from CentOS-4 to a more recent (and supported) version would be preferable, there are some benefits to migrating a CentOS 4 system to a Docker container. I played around with this off and on over the weekend, and finally came up with something fairly functional. I immediately destroyed it so there could be no evidence linking me to this activity.

The basics for how I accomplished this are listed below. They are terrible. Please do NOT follow them.

Disable selinux on your container host.

Look, I told you this was terrible. Dan Walsh and Vaclav Pavlin of Red Hat were kind enough to provide us patches for SELinux in CentOS-6, and then again for CentOS-5. I’m not going to repay their kindness by dragging them into this mess too. Dan is a really nice guy, please don’t make him cry.

The reason we disable selinux is explained on the CentOS-Devel mailing list. Since there’s no patch for CentOS-4 containers, selinux has to be disabled on the host for things to work properly.
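
For completeness, "disabled" here means both at runtime and across reboots (a sketch; again, please don't) :

# permissive immediately, fully disabled after the next reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config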

Build a minimal vm.

Initially I tried running a slightly modified version of our CentOS-5 kickstart file for Docker through the usual build process. This mostly worked; however, it was somewhat unreliable. The build process did not always exit cleanly, often leaving behind broken loop devices I couldn’t unmount. The resulting container worked, but had no functional rpmdb. The conversion trick used with CentOS-5 didn’t work properly with CentOS-4, even accounting for version differences.

I finally decided to build a normal vm image using virt-install. You could use virt-manager to do this part, it really doesn’t matter. There have been a number of functional improvements to anaconda over the years, and going back to the CentOS-4 installer hammers this home. I had to adjust my kickstart to use the old format, removing several more modern options I’d taken for granted. I ended up with the following. For this install, I made sure to install to an image file for easy extraction later on.

install
url --url=http://vault.centos.org/4.9/os/x86_64/
lang en_US.UTF-8
network --device=eth0 --bootproto=dhcp
rootpw --iscrypted $1$UKLtvLuY$kka6S665oCFmU7ivSDZzU.
authconfig --enableshadow
selinux --disabled
timezone --utc UTC

clearpart --all --initlabel
part / --fstype ext3 --size=1024 --grow
reboot
%packages
@Base

%post
dd if=/dev/urandom count=50 | md5sum | passwd --stdin root
passwd -l root

rpm -q grub redhat-logos
rm -rf /boot
rm -rf /etc/ld.so.cache

Extract to tarball

Because we’re wiping out /boot and locking the root user, this image really won’t be useful for anything except converting to a container. The next step is to extract the contents into a smaller archive we can use to build our container. In order to do this, we’ll use the virt-tar-out command. This image is not going to be as small as the regular CentOS containers in the Docker index. This is partly due to rpm dependencies, and partly to how the image is created. Honestly, if you’re doing this, a few megs of wasted disk space is the least of your worries.

virt-tar-out -a /path/to/centos-4.img / - | xz --best > /path/to/centos-4-docker.tar.xz

Building the Container

At this point we have enough that we could actually just do a cat centos-4-docker.tar.xz | docker import - centos4, but there are still a few cleanup items that need to be addressed. From here, a basic Dockerfile that provides a few changes is in order. Since CentOS-4 is End-of-Life and no longer served via the mirrors, the contents of /etc/yum.repos.d/ need to be modified to point to your local mirror, as well as /etc/sysconfig/rhn/sources if you intend to still use the up2date utility. To do this, copy your existing yum repo files and sources from your working CentOS-4 systems into the directory with the container tarball, and use a Dockerfile similar to the one below.

FROM scratch
MAINTAINER you <your@emailaddress.com>
ADD centos-4-docker.tar.xz /
ADD CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo
ADD sources /etc/sysconfig/rhn/sources

DELETE IT ALL

All that’s left now is to run docker’s build command, and you have successfully built a CentOS-4 base container to use for migration purposes, or just to make your inner sysadmin cry. Either way. This is completely unsupported.
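
A sketch of that build step, run from the directory containing the Dockerfile and the files above (the tag name is just an example):

docker build -t centos4 .

If you’ve treated this as a how-to and followed the steps, I would recommend the following actions: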

  1. Having a long think about the decisions in your life that led you to this moment
  2. Drinking
  3. Sobbing uncontrollably
  4. Apologizing to everyone around you.

January 12, 2015

Quickly provisioning nodes in a SeaMicro chassis with Ansible

January 12, 2015 02:19 PM

Recently I had to quickly test and deploy CentOS on 128 physical nodes, just to test the hardware and that all currently "supported" CentOS releases could be installed quickly when needed. The interesting bit is that it was a completely new infra, without any traditional deployment setup in place, so obviously, as sysadmins, we directly think about pxe/kickstart, which is so trivial to set up. That was the first time I had to "play" with SeaMicro devices/chassis though, and so to understand how they work (the SeaMicro 15K fabric chassis, to be precise). One thing to note is that those SeaMicro chassis don't provide a remote VGA/KVM feature (but who cares, as we'll automate the whole thing, right ?) but they instead provide either cli (ssh) or rest api access to the management interface, so that you can quickly reset/reconfigure a node, change vlan assignment, and so on.

It's not a secret that I like to use Ansible for ad-hoc tasks, and I thought that it would be (again) a good tool for that quick task. If you have used Ansible already, you know that you have to declare nodes and variables (not needed, but really useful) in the inventory (if you don't gather inventory from an external source). To configure my pxe setup (and so be able to reconfigure it when needed), I obviously needed to get mac addresses from all 64 nodes in each chassis, decide that hostnames will be n${slot-number}., etc .. (and yes, in SeaMicro slot 1 = 0/0, slot 2 = 1/0, and so on ...)

The following quick-and-dirty bash script lets you do that in 2 seconds (ssh into the chassis, gather information, and fill some variables in my ansible host_vars/${hostname} file) :

#!/bin/bash  
ssh admin@hufty.ci.centos.org "enable ; show server summary | include Intel ; quit" | while read line ;  
do  
  seamicrosrvid=$(echo $line | awk '{print $1}')  
  slot=$(echo $seamicrosrvid | cut -f 1 -d '/')  
  id=$(( $slot + 1 )); ip=$id ; mac=$(echo $line | awk '{print $3}')  
  echo -e "name: n${id}.hufty.ci.centos.org \nseamicro_chassis: hufty \nseamicro_srvid: $seamicrosrvid \nmac_address: $mac \nip: 172.19.3.$ip \ngateway: 172.19.3.254 \nnetmask: 255.255.252.0 \nnameserver: 172.19.0.12 \ncentos_dist: 6" > inventory/n${id}.hufty.ci.centos.org  
done  

Nice, so we have all ~/ansible/hosts/host_vars/${inventory_hostname} files in one go (I'll let you add ${inventory_hostname} to the ~/ansible/hosts/hosts.cfg file with the same script, modified to your needs).
For the next step, we assume that we already have dnsmasq installed on the "head" node, and that we also have httpd set up to provide the kickstart to the nodes during installation.
So our basic ansible playbook looks like this :

---  
- hosts: ci-nodes  
  sudo: True  
  gather_facts: False

  vars:  
    deploy_node: admin.ci.centos.org  
    seamicro_user_login: admin  
    seamicro_user_pass: obviously-hidden-and-changed  
    seamicro_reset_body:  
      action: reset  
      using-pxe: "true"  
      username: "{{ seamicro_user_login }}"  
      password: "{{ seamicro_user_pass }}"

  tasks:  
    - name: Generate kickstart file[s] for Seamicro node[s]  
      template: src=../templates/kickstarts/ci-centos-{{ centos_dist }}-ks.j2 dest=/var/www/html/ks/{{ inventory_hostname }}-ks.cfg mode=0755  
      delegate_to: "{{ deploy_node }}"

    - name: Adding the entry in DNS (dnsmasq)  
      lineinfile: dest=/etc/hosts regexp="^{{ ip }} {{ inventory_hostname }}" line="{{ ip }} {{ inventory_hostname }}"  
      delegate_to: "{{ deploy_node }}"  
      notify: reload_dnsmasq

    - name: Adding the DHCP entry in dnsmasq  
      template: src=../templates/dnsmasq-dhcp.j2 dest=/etc/dnsmasq.d/{{ inventory_hostname }}.conf  
      delegate_to: "{{ deploy_node }}"  
      register: dhcpdnsmasq

    - name: Reloading dnsmasq configuration  
      service: name=dnsmasq state=restarted  
      run_once: true  
      when: dhcpdnsmasq|changed  
      delegate_to: "{{ deploy_node }}"

    - name: Generating the tftp configuration boot file  
      template: src=../templates/pxeboot-ci dest=/var/lib/tftpboot/pxelinux.cfg/01-{{ mac_address | lower | replace(":","-") }} mode=0755  
      delegate_to: "{{ deploy_node }}"

    - name: Resetting the Seamicro node[s]  
      uri: url=https://{{ seamicro_chassis }}.ci.centos.org/v2.0/server/{{ seamicro_srvid }}  
           method=POST  
           HEADER_Content-Type="application/json"  
           body='{{ seamicro_reset_body | to_json }}'  
           timeout=60  
      delegate_to: "{{ deploy_node }}"

    - name: Waiting for Seamicro node[s] to be available through ssh ...  
      action: wait_for port=22 host={{ inventory_hostname }} timeout=1200  
      delegate_to: "{{ deploy_node }}"

  handlers:  
    - name: reload_dnsmasq  
      service: name=dnsmasq state=reloaded  

The first thing to notice is that you can use Ansible to provision nodes that aren't already running: people think that Ansible is just for interacting with already provisioned and running nodes, but by providing useful information in the inventory, and by delegating actions, we can already start "managing" those yet-to-come nodes.
All the templates used in that playbook are really basic ones, so nothing "rocket science". For example, the only diff for the kickstart.j2 template is that we inject ansible variables (for network and storage) :

network --bootproto=static --device=eth0 --gateway={{ gateway }}
--ip={{ ip }} --nameserver={{ nameserver }} --netmask={{ netmask }}
--ipv6=auto --activate  
network --hostname={{ inventory_hostname }}  
<snip>  
part /boot --fstype="ext4" --ondisk=sda --size=500  
part pv.14 --fstype="lvmpv" --ondisk=sda --size=10000 --grow  
volgroup vg_{{ inventory_hostname_short }} --pesize=4096 pv.14  
logvol /home --fstype="xfs" --size=2412 --name=home --vgname=vg_{{
inventory_hostname_short }} --grow --maxsize=100000  
logvol / --fstype="xfs" --size=8200 --name=root --vgname=vg_{{
inventory_hostname_short }} --grow --maxsize=1000000  
logvol swap --fstype="swap" --size=2136 --name=swap --vgname=vg_{{
inventory_hostname_short }}  
<snip>  

The dhcp step isn't mandatory, but at least in that subnet we only allow dhcp for "already known" mac addresses, retrieved from the ansible inventory (and previously fetched directly from the SeaMicro chassis) :

# {{ name }} ip assignment  
dhcp-host={{ mac_address }},{{ ip }}  

Same thing for the pxelinux tftp config file :

SERIAL 0 9600  
DEFAULT text  
PROMPT 0  
TIMEOUT 50  
TOTALTIMEOUT 6000  
ONTIMEOUT {{ inventory_hostname }}-deploy

LABEL local  
MENU LABEL (local)  
MENU DEFAULT  
LOCALBOOT 0

LABEL {{ inventory_hostname }}-deploy  
kernel CentOS/{{ centos_dist }}/{{ centos_arch }}/vmlinuz  
MENU LABEL CentOS {{ centos_dist }} {{ centos_arch }}- CI Kickstart
for {{ inventory_hostname }}  
{% if centos_dist == 7 -%}  
append initrd=CentOS/7/{{ centos_arch }}/initrd.img net.ifnames=0 biosdevname=0 ip=eth0:dhcp inst.ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8  
{% else -%}  
append initrd=CentOS/{{ centos_dist }}/{{ centos_arch }}/initrd.img ksdevice=eth0 ip=dhcp ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8  
{% endif %}  

The interesting part is the one on which I needed to spend more time: as said, it was the first time I had to play with SeaMicro hardware, so I had to dive into the documentation (which I *always* do, RTFM FTW !) and understand how to use their REST API, but once done, it was a breeze. Ansible by default doesn't provide a native resource for SeaMicro, but that's why REST exists, right? And thankfully, Ansible has a native URI module, which we use here. The only thing on which I had to spend more time was to understand how to properly construct the body, but declaring it in the yaml file as a variable/list and then converting it on the fly to json (with the magical body='{{ seamicro_reset_body | to_json }}' ) was the way to go, and it is so self-explanatory when read now.

And here we go: calling that ansible playbook, and suddenly 128 physical machines were being installed (and reinstalled with different CentOS versions - 5, 6, 7 - and arches - i386, x86_64).

Hope this helps if you have to interact with SeaMicro chassis from within an ansible playbook too.

December 15, 2014

Reproducible CentOS containers

December 15, 2014 12:00 AM

Around 9 months ago, I took over the creation of the official CentOS images contained in the Docker index. Prior to that, the index officially had one lonely and outdated CentOS 6.4 image that we at the CentOS Project were unaware of. Docker sort of exploded into our view and we spent a bit of time playing catch-up, trying to get things done the way we as a distribution would like to see them done. One of these actions was to do away with minor-versioned containers.

We chose to drop the minor version from our containers, and petitioned the Docker registry maintainer to remove the existing one (not built by us). The reasoning for this is fairly straightforward: a large percentage of users never updated to the current containers, and so most of the bugs submitted to us were for older versions. Even recently, Dan Walsh has had to post reminders to run updates. By only having a centos6 or centos7 image, we tried to remove the minor-version mindset. Since the containers themselves are a svelte 132 packages, they amount to little more than the dependencies needed for bash and yum. In theory, the differences between a 6.5 image and a 6.6 image should be entirely negligible. In fact, by default any package installed on a 6.5 container would be from the 6.6 repositories.

That said, the number one request since we stopped shipping point releases is… you guessed it: point releases. While I continue to maintain that our position on updates is the proper one, real world usage often runs counter to ivory tower thinking. A number of valid use cases were brought up in the course of discussions with community members asking for containers to be tagged with minor point releases, and I have agreed to reconsider tagging minor version images.

Beginning with the January monthly rollout, I will add minor tags for the 5 and 6 builds in the Docker index. The minor tags will be 5.11 and 6.6. For 7 builds, the tag will correspond to the date-tagged build name, the same as the installation media. These tags will be built from, and correspond to, the respective CentOS installation media, and so will not contain updates. This means that if you are using the minor tags, you could be exposing your containers to exploits that have been patched in the rolling updates. The latest, 5, 6, and 7 tags will continue to point to the rolling monthly releases, which I would highly recommend using.
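
In practice, that means both of the following will work once the rollout lands, but only the first keeps receiving fixes (tags as described above) :

docker pull centos:6      # rolling monthly build, includes updates
docker pull centos:6.6    # frozen at the 6.6 installation media, no updates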

November 26, 2014

And now a few words from Paul C.

November 26, 2014 08:13 PM

Although some people in open source communities might not be aware of him, Paul Cormier holds a singular position in the open source world. This hinges on the detail that Red Hat is the longest standing and most successful company at promoting the growth of free/open source software and especially the acceptance of that software in the enterprise (large businesses). Paul is a Red Hat EVP, but he is also the President of Products and Technologies, meaning he is ultimately accountable for what Red Hat does in creating products the open source way. Paul has held this position essentially for the last dozen years, and so has overseen everything in Red Hat from the creation of Fedora Linux to the rise of cloud computing that Red Hat is an intimate part of.

In other words, when Paul C. speaks — keynote or in-person — he is someone really worth paying close attention to.

In this post on Red Hat’s open source community website, “One Year Later:  Paul Cormier on Red Hat and the CentOS Project“, I provide some introduction and background around a video interview Paul did with ServerWatch about the Red Hat and CentOS Project relationship.

(Speaking of ‘intimately’, that explains my relationship to Red Hat and the CentOS Project — I spent all of 2013 architecting and delivering on making CentOS Linux the third leg in the stool of Red Hat platform technologies. When I say in the “One Year Later…” article about “making sure (Paul C. is) happy and excited about Red Hat joining forces with the CentOS Project,” that responsibility is largely mine.)

November 24, 2014

Switching from Ethernet to Infiniband for Gluster access (or why we had to ...)

November 24, 2014 10:37 AM

As explained in my previous (small) blog post, I had to migrate a Gluster setup we have within CentOS.org infra. As said in that previous blog post too, Gluster is really easy to install, and sometimes it can even "smell" too easy to be true. One thing to keep in mind when dealing with Gluster is that it's a "file-level" storage solution, so don't try to compare it with "block-level" solutions (so typically a NAS vs SAN comparison, even if "SAN" itself is wrong for such a discussion, as the SAN is what's *between* your nodes and the storage itself, just as a reminder).

Within CentOS.org infra, we have a multiple-node Gluster setup that we use for multiple things at the same time. The Gluster volumes are used to store some files, but also to host (on different gluster volumes with different settings/ACLs) KVM virtual disks (qcow2). People knowing me will say: "hey, but for performance reasons, it's faster to just dedicate for example a partition, or a Logical Volume, instead of using qcow2 images sitting on top of a filesystem for Virtual Machines, right ?" and that's true. But with our limited number of machines, and a need to "move" Virtual Machines without a proper shared storage solution (and because in our setup, those physical nodes *are* both glusterd servers and hypervisors), Gluster was an easy to use solution for that.

It was working, but not that fast ... I then heard about the fact that (obviously) accessing those qcow2 image files through fuse wasn't efficient at all, but that Gluster has libgfapi, which can be used to "talk" directly to the gluster daemons, bypassing completely the need to mount your gluster volumes locally through fuse. Thankfully, qemu-kvm from CentOS 6 is built against libgfapi so it can use that directly (and that's the reason why it's automatically installed when you install the KVM hypervisor components). Results? Better, but still not what I/we was/were expecting ...

When trying to find the issue, I discussed with some folks in the #gluster irc channel (irc.freenode.net) and suddenly I understood something that is *not* so obvious about Gluster in distributed+replicated mode: people having dealt with storage solutions at the hardware level (or people using DRBD, which I did too in the past, and which I also liked a lot ..) expect the replication to happen automatically at the storage/server side, but that's not true for Gluster: in fact glusterd just exposes metadata to gluster clients, which then know where to read/write (being "redirected" to the correct gluster nodes). That means that replication happens at the *client* side: in replicated mode, the client itself writes the same data twice: once to each server ...

So back to our example: as our nodes have 2 * 1Gb/s Ethernet cards, one being a bridge used by the Virtual Machines and the other one "dedicated" to gluster, and each node being itself a glusterd server/gluster client, I let you think about the max perf we could get for a write operation: 1Gbit/s, divided by two (because of the replication), so ~ 125MB/s / 2 => in theory ~ 62 MB/sec (and then remove tcp/gluster overhead and that drops to ~ 55MB/s)

How to solve that? Well, I tested that theory and confirmed it directly: when in distributed-only mode, write performance automatically doubled. So yes, running Gluster on Gigabit Ethernet suddenly was the bottleneck. Upgrading to 10Gb Ethernet wasn't something we could do, but, thanks to Justin Clift (and some other Gluster folks), we were able to find some "second hand" Infiniband hardware (10Gbps HCAs and switch).

While Gluster has native/builtin rdma/Infiniband capabilities (see the "transport" option in the "gluster volume create" command), we had in our case to migrate existing Gluster volumes from plain TCP/Ethernet to Infiniband, while trying to keep the downtime as small as possible. That is/was my first experience with Infiniband, but it's not as hard as it seems, especially when you discover IPoIB (IP over Infiniband). So from a sysadmin POV, it's just "yet another network interface", but a 10Gbps one now :)

The Gluster volume migration then goes like this (schedule an - obvious - downtime for this) :

On all gluster nodes (assuming that we start from machines installed only with the @core group, so minimal ones) :

yum groupinstall "Infiniband Support"
chkconfig rdma on
#stop your clients or other apps accessing gluster volumes, as they will be stopped

service glusterd stop && chkconfig glusterd off &&  init 0

Then install the hardware in each server, connect all Infiniband cards to the IB switch (previously configured) and power all servers back on. When the machines are back online, you have "just" to configure the ib interfaces. As in my case the machines were "remote nodes" and I hadn't had a look at how they were cabled, I had to use some IB tools to see which port was connected (a tool like "ibv_devinfo" showed me which port was active/connected, while "ibdiagnet" shows you the topology and other nodes/devices). In our case it was port 2, so let's create the ifcfg-ib{0,1} devices (ib1 being the one we'll use) :

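# /etc/sysconfig/network-scripts/ifcfg-ib1 (assumed location: the stock network-scripts path on CentOS 6)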
DEVICE=ib1  
TYPE=Infiniband  
BOOTPROTO=static  
BROADCAST=192.168.123.255  
IPADDR=192.168.123.2  
NETMASK=255.255.255.0  
NETWORK=192.168.123.0  
ONBOOT=yes  
NM_CONTROLLED=no  
CONNECTED_MODE=yes

The interesting part here is "CONNECTED_MODE=yes": people who already use iscsi know that Jumbo frames really matter if you have a dedicated VLAN (and an Ethernet switch that supports Jumbo frames too). As stated in the IPoIB kernel doc, there are two operation modes: datagram (2044 bytes MTU by default) or connected (up to 65520 bytes MTU). It's up to you to decide which one to use, but if you understood the Jumbo frames thing for iscsi, you already get the point.

An "ifup ib1" on all nodes will bring the interfaces up and you can verify that everything works by pinging each other node, including with larger mtu values :

ping -s 16384 <other-node-on-the-infiniband-network>
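You can also check the interface itself to confirm that connected mode took effect; a small sketch (the "mode" attribute is exposed in sysfs by the IPoIB driver):

ip link show ib1 | grep mtu   # expect an MTU of up to 65520
cat /sys/class/net/ib1/mode   # should print "connected"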

If everything's fine, you can then decide to start gluster, *but* don't forget that gluster uses FQDNs (at least I hope that's how you configured your gluster setup initially: already on a dedicated segment, and using distinct FQDNs for the storage vlan). You just have to update your local resolver (internal DNS, local hosts files, whatever you want) so that gluster will use the new IP subnet on the Infiniband network. (If you haven't previously defined distinct hostnames for your gluster setup, you can "just" update them in the various /var/lib/glusterd/peers/ and /var/lib/glusterd/vols//*.vol files.)
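For example, hypothetical /etc/hosts entries pointing the storage FQDNs at the new Infiniband subnet (names and addresses invented to match the ifcfg example above):

192.168.123.1   gluster01.storage.example.com
192.168.123.2   gluster02.storage.example.com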

Restart the whole gluster stack (on all gluster nodes) and verify that it works fine :

service glusterd start
gluster peer status
gluster volume status
# and if you're happy with the results :
chkconfig glusterd on

So, in a short summary:

  • Infiniband isn't that difficult (especially if you use IPoIB, even though it carries a small overhead)
  • Migrating gluster from Ethernet to Infiniband is also easy (especially if you carefully planned your initial design regarding IP subnet/VLAN/segment/DNS resolution, making the move "transparent")

November 21, 2014

Updating to Gluster 3.6 packages on CentOS 6

November 21, 2014 03:08 PM

I had to do some maintenance yesterday on our Gluster nodes used within CentOS.org infra. Basically I had to reconfigure some gluster volumes to use Infiniband instead of Ethernet. (I'll write a dedicated blog post about that migration later.)

While a lot of people consume packages directly from Gluster.org (for example http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/epel-6/x86_64/), you'll (soon) be able to install those packages directly on CentOS, through packages built by the Storage SIG. At the time I'm writing this blog post, gluster 3.6.1 packages are built and available on our Community Build Server Koji setup, but still in testing (and unsigned).

"But wait, there are already glusterfs packages tagged 3.6 in CentOS 6.6, right ? " will you say. Well, yes, but not the full stack. What you see in the [base] (or [updates]) repository are the client packages, as for example a base CentOS 6.x can be a gluster client (through fuse, or libgfapi - really interesting to speed up qemu-kvm instead of using the default fuse mount point ..) , but the -server package isn't there. So the reason why you can either use the upstream gluster.org yum repositories or the Storage SIG one to have access to the full stack, and so run glusterd on CentOS.

Interested in testing those packages? Want to test the update before they are released by the Storage SIG? Here we go: http://cbs.centos.org/repos/storage6-testing/x86_64/os/Packages/ (packages available for CentOS 7 too)
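A minimal sketch for wiring that testing tree in as a yum repository: the repo id and name below are my own choice, the baseurl is simply derived from the link above, and gpgcheck is disabled only because those testing packages are still unsigned:

cat > /etc/yum.repos.d/storage6-testing.repo <<'EOF'
[storage6-testing]
name=CentOS Storage SIG - glusterfs 3.6 (testing)
baseurl=http://cbs.centos.org/repos/storage6-testing/x86_64/os/
enabled=1
gpgcheck=0
EOF
yum install glusterfs-server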

By the way, if you never tested Gluster, it's really easy to setup and play with, even within Virtual Machines. Interesting reading : (quick start) : http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

October 29, 2014

CentOS Dojo at LISA14 in Seattle on November 10th, 2014

October 29, 2014 08:03 PM

Join us at the all day (09:00 to 17:00) CentOS Dojo on Monday, November 10th, 2014 at the LISA14 conference in Seattle, Washington.

There will be at least three CentOS board members there (Johnny Hughes, Jim Perrin, and Karsten Wade).

The current topics include:

  • CI environment scaling by Dave Nalley
  • DevOps Isn’t Just for WebOps: The Guerrilla’s Guide to Cultural Change by Michael Stahnke
  • The EPEL Phoenix Saga by Stephen Smoogen
  • Docker in the Distro by Jim Perrin
  • Managing your users by Matt Simmons

Visit the CentOS Wiki for more information.

October 28, 2014

CentOS-6.6 is Released

October 28, 2014 11:27 AM

CentOS 6.6 is now released, see the Announcement.

So, the Continuous Release RPMs were released on 21 October (7 days after RHEL 6.6) and the full release was done on 28 October (14 days after RHEL 6.6).

Enjoy.

October 21, 2014

Continuous Release Repository RPMs for CentOS-6.6 Released

October 21, 2014 07:46 AM

The CentOS team has released the Continuous Release (CR) Repository RPMs for CentOS-6.6 into their 6.5/cr tree.  See the Release Announcement.

Now a little more about the release process.

  1. Red Hat releases a version of Red Hat Enterprise Linux.  In this case the version is Red Hat Enterprise Linux 6.6 (RHEL-6.6), which was released on October 14th, 2014.  With that release by Red Hat comes the source code which RHEL 6.6 is based on.
  2. The CentOS team takes that released source code and starts building it for their CentOS release (in this case CentOS-6.6).  This process can not start until the Source Code from Red Hat is available, which in this case was October 14th.
  3. At some point, all the Source Code has been built and there are RPMs available, this is normally 1-5 days depending on how many Source RPMs there are to build and how many times the order needs to be changed to get the builds done correctly.
  4. After the CentOS team thinks they have a good set of binary RPMs built, they submit them to the QA team (a team of volunteers who do QA for the releases).  This QA process includes the t_functional suite and several knowledgeable system administrators downloading and running tests on the RPMs to validate updating with them works as planned.
  5. At this point there are tested RPMs ready, and the CentOS team needs to build an installer tree. This means taking the new RPMs and moving them into place in the main tree, removing the older RPMs they replace, running the installer build to create an installable tree, and testing that installable tree.  This process can take up to 7 days.
  6. Once there is an installable tree, all the ISOs have to be created and tested.  We have to create the ISOs, upload them to the QA process, and test installs from the ISOs (correct sizes, how to split the ISOs, what is on the LiveCDs and LiveDVDs to keep them below the max size that fits on media, etc.).  We then also test UEFI installs, Secure Boot installs (CentOS-7 only), copying to USB keys and checking the installs that way, etc.  This process can also take up to 7 days.
So, in the process above, we can have vetted binary RPMs ready to go as soon as 5 days after we start, but it may be 14 or more days after that before we have a complete set of ISOs to do a full release.  Thus the reason for the CR Repository.

The CR Repository


The process of building and testing an install tree, then creating and testing several types of ISO sets from that install tree (DVD Installer, Minimum Install ISO, LiveCD, LiveDVD, etc) can take 1-2 weeks after all the RPMs are built and have gone through initial QA testing.

The purpose of the CR repository is to provide quicker access to RPMs for an upcoming CentOS point release while further QA testing is ongoing and the ISO installers are being built and tested.
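If you want those RPMs ahead of the full release, enabling CR boils down to one package install; a sketch, assuming the centos-release-cr package is available in your [extras] repository (the mechanism documented on the CentOS wiki CR page):

yum install centos-release-cr   # drops the CR repo definition in place
yum update                      # pulls in the CentOS-6.6 RPMs from CR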

Updates in the CR for CentOS-6.6

More Information about CR.

CentOS-6.6 Release Notes (Still in progress until the actual CentOS-6.6 release).

Upstream RHEL-6.6 Release Notes and Technical Notes.

October 15, 2014

Koji - CentOS CBS infra and sslv3/Poodle important notification

October 15, 2014 09:46 AM

As most of you already know, there is an important SSLv3 vulnerability (CVE-2014-3566 - see https://access.redhat.com/articles/1232123), known as Poodle.
While it's easy to disable SSLv3 in the allowed protocols at the server level (for example "SSLProtocol All -SSLv2 -SSLv3" for apache), some clients still default to SSLv3, and the Koji client is one of them.
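For reference, that server-side directive lives in your apache SSL configuration (the path below is the stock mod_ssl location on CentOS; adjust to your layout):

# /etc/httpd/conf.d/ssl.conf
SSLProtocol All -SSLv2 -SSLv3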

We have currently disabled SSLv3 on our cbs.centos.org koji instance, so if you're a cbs/koji user, please adapt your local koji package (local fix!).

At the moment there is no upstream package available, but the following patch has also been tested by Fedora people (credit goes to https://lists.fedoraproject.org/pipermail/infrastructure/2014-October/014976.html):

--- SSLCommon.py.orig    2014-10-15 11:42:54.747082029 +0200
+++ SSLCommon.py    2014-10-15 11:44:08.215257590 +0200
@@ -37,7 +37,8 @@
     if f and not os.access(f, os.R_OK):
         raise StandardError, "%s does not exist or is not readable" % f
 
-    ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
+    #ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
+    ctx = SSL.Context(SSL.TLSv1_METHOD)   # TLSv1 only
     ctx.use_certificate_file(key_and_cert)
     ctx.use_privatekey_file(key_and_cert)
     ctx.load_client_ca(ca_cert)
@@ -45,7 +46,8 @@
     verify = SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT
     ctx.set_verify(verify, our_verify)
     ctx.set_verify_depth(10)
-    ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1)
+    #ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1)
+    ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1 | SSL.OP_NO_SSLv3)
     return ctx

We'll keep you informed about possible upstream koji packages that would default to at least TLSv1.

If you encounter a problem, feel free to drop into #centos-devel channel on irc.freenode.net and have a chat with us

October 01, 2014

Xen4CentOS XSA-108 Security update for CentOS-6

October 01, 2014 08:00 AM

There has been a fair amount of press in the last couple of days concerning Xen update XSA-108, and the fact that Amazon EC2 and Rackspace must reboot after this update:

Rackspace forced to reboot cloud over Xen bug

Amazon Reboots Cloud Servers, Xen Bug Blamed

There are other stories, but those articles cover the main issue.

As KB tweeted, the CentOS-6 Xen4CentOS release is also impacted by this issue and the CentOS team has released CESA-2014:X013 to deal with XSA-108.  There are also 3 other Xen4CentOS updates released:  CESA-2014:X010, CESA-2014:X011, CESA-2014:X012

If you are using Xen4CentOS on CentOS-6, please use yum update to get these security updates ... and like Rackspace and Amazon EC2, you need to reboot your dom0 machine after the updates are applied.
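In practice that boils down to two commands on each dom0; a minimal sketch, assuming the Xen4CentOS repository is already configured on the host:

yum update   # brings in CESA-2014:X010 through CESA-2014:X013
reboot       # the XSA-108 fix only takes effect after the dom0 restarts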

September 30, 2014

CentOS team at cPanel 2014

September 30, 2014 03:28 AM

The CentOS team will have a booth in the Exhibit Hall for the 2014 cPanel Conference at the Westin Galleria hotel in Houston, Texas from September 30th to October 1st 2014.

CentOS Board members Johnny Hughes (that's me :D) and Jim Perrin will be at the booth whenever the hall is open. 

We are looking forward to lots of discussions, and we will have some swag to give out (tee shirts, including the new 10 Year Anniversary tee, stickers, etc.). We will also be happy to install CentOS on your laptop (or let you do it) ... or if you have a USB key available, we will put a CentOS iso on it for you to use for a later install.

If you are going to be at cPanel 2014, come on down and see us!

