January 23, 2015

More builders available for Koji/CBS

January 23, 2015 04:54 PM

As you probably know, the CentOS Project now hosts the CBS (aka Community Build System) effort, which is used to build all packages for the CentOS SIGs.

There was already one physical node dedicated to Koji Web and Koji Hub, and another node dedicated to the build threads (koji-builder). As we now have more people building packages, we thought it was time to add more builders to the mix, and here we go: http://cbs.centos.org/koji/hosts now lists two additional machines dedicated to Koji/CBS.

Those added nodes each have 2 * Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (8 cores per socket, with Hyper-Threading enabled) and 32GB of RAM. Let's see how the SIG members will keep those builders busy, throwing out a bunch of interesting packages for the CentOS community :-) . Have a nice week-end!

January 19, 2015

I do terrible things sometimes

January 19, 2015 12:00 AM

Abandon hope…

This is not a how-to, but more of a detailed confession about a terrible thing I’ve done in the last few days. The basic concept for this crime against humanity came during a user’s group meeting where several companies expressed overwhelming interest in containers, but were pinned to older, unsupported versions of CentOS due to 3rd party software constraints. They asked if it would be possible to run a CentOS-4 based container instead of a full VM. While obviously migrating from CentOS-4 to a more recent (and supported) version would be preferable, there are some benefits to migrating a CentOS 4 system to a Docker container. I played around with this off and on over the weekend, and finally came up with something fairly functional. I immediately destroyed it so there could be no evidence linking me to this activity.

The basics for how I accomplished this are listed below. They are terrible. Please do NOT follow them.

Disable selinux on your container host.

Look, I told you this was terrible. Dan Walsh and Vaclav Pavlin of Red Hat were kind enough to provide us patches for SELinux in CentOS-6, and then again for CentOS-5. I’m not going to repay their kindness by dragging them into this mess too. Dan is a really nice guy, please don’t make him cry.

The reason we disable selinux is explained on the CentOS-Devel mailing list. Since there’s no patch for CentOS-4 containers, selinux has to be disabled on the host for things to work properly.

Build a minimal vm.

Initially I tried running a slightly modified version of our CentOS-5 kickstart file for Docker through the usual build process. This mostly worked; however, it was somewhat unreliable. The build process did not always exit cleanly, often leaving behind broken loop devices I couldn’t unmount. The resulting container worked, but had no functional rpmdb. The conversion trick used with CentOS-5 didn’t work properly with CentOS-4, even accounting for version differences.

I finally decided to build a normal vm image using virt-install. You could use virt-manager to do this part, it really doesn’t matter. There have been a number of functional improvements to anaconda over the years, and going back to the CentOS-4 installer hammers this home. I had to adjust my kickstart to use the old format, removing several more modern options I’d taken for granted. I ended up with the following. For this install, I made sure to install to an image file for easy extraction later on.

install
url --url=http://vault.centos.org/4.9/os/x86_64/
lang en_US.UTF-8
network --device=eth0 --bootproto=dhcp
rootpw --iscrypted $1$UKLtvLuY$kka6S665oCFmU7ivSDZzU.
authconfig --enableshadow
selinux --disabled
timezone --utc UTC

clearpart --all --initlabel
part / --fstype ext3 --size=1024 --grow
reboot
%packages
@Base

%post
dd if=/dev/urandom count=50 | md5sum | passwd --stdin root
passwd -l root

rpm -q grub redhat-logos
rm -rf /boot
rm -rf /etc/ld.so.cache

Extract to tarball

Because we’re wiping out /boot and locking the root user, this image really won’t be useful for anything except converting to a container. The next step is to extract the contents into a smaller archive we can use to build our container. In order to do this, we’ll use the virt-tar-out command. This image is not going to be as small as the regular CentOS containers in the Docker index. This is partly due to rpm dependencies, and partly to how the image is created. Honestly, if you’re doing this, a few megs of wasted disk space is the least of your worries.

virt-tar-out -a /path/to/centos-4.img / - | xz --best > /path/to/centos-4-docker.tar.xz

Building the Container

At this point we have enough that we could actually just do a cat centos-4-docker.tar.xz | docker import - centos4, but there are still a few cleanup items that need to be addressed. From here, a basic Dockerfile that provides a few changes is in order. Since CentOS-4 is End-of-Life and no longer served via the mirrors, the contents of /etc/yum.repos.d/ need to be modified to point to your local mirror, as does /etc/sysconfig/rhn/sources if you intend to still use the up2date utility. To do this, copy your existing yum repo files and sources from your working CentOS-4 systems into the directory with the container tarball, and use a Dockerfile similar to the one below.

FROM scratch
MAINTAINER you <your@emailaddress.com>
ADD centos-4-docker.tar.xz /
ADD CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo
ADD sources /etc/sysconfig/rhn/sources
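
For reference, the build step mentioned below would look something like this (the centos4 tag is just an example name):

# build the image from the directory holding the Dockerfile, tarball and repo files
docker build -t centos4 .
# poke around inside to confirm rpm/yum actually work against your local mirror
docker run -it centos4 /bin/bash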

DELETE IT ALL

All that’s left now is to run docker’s build command, and you have successfully built a CentOS-4 base container to use for migration purposes, or just to make your inner sysadmin cry. Either way. This is completely unsupported. If you’ve treated this as a how-to and followed the steps, I would recommend the following actions:

  1. Having a long think about the decisions in your life that led you to this moment
  2. Drinking
  3. Sobbing uncontrollably
  4. Apologizing to everyone around you

January 16, 2015

Docker: to keep those old apps ticking

January 16, 2015 07:30 PM

Got an old, long-running app on CentOS-4 that you want to retain? Running a full-blown VM not worthwhile? Well, Docker can help. As Jim found out earlier today:

Given that CentOS-4 is end of life now, we are not going to push CentOS-4 images to the official CentOS collection on the Docker registry, but if folks want this, please ask and we can publish a short howto on what's involved in building your own.

Of course, always consider migrating the app to a newer, supported platform like CentOS-6 or 7 before trying this sort of workaround.

Docker is available out of the box, by default, on all CentOS-7/x86_64 installs.

- KB

January 15, 2015

Libguestfs preview for EL 7.1

January 15, 2015 11:33 AM

Want to see what's coming with libguestfs in EL 7.1? Richard Jones has set up a preview repo at http://people.redhat.com/~rjones/libguestfs-RHEL-7.1-preview that contains all the bits you need.

To set this up:

# cat >/etc/yum.repos.d/libguestfs-RHEL-7.1-preview.repo <<EOF
[libguestfs-RHEL-7.1-preview]
name=libguestfs RHEL 7.1 preview - x86_64
baseurl=http://people.redhat.com/~rjones/libguestfs-RHEL-7.1-preview/
enabled=1
gpgcheck=0
EOF

You should now be able to run a 'yum install libguestfs-tools'. There are some other interesting things in the repo as well (including an updated virt-v2v), so feel free to poke around. Remember to send testing feedback to http://www.redhat.com/mailman/listinfo/libguestfs
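
As a quick smoke test after the install (libguestfs-test-tool ships with the standard libguestfs tooling):

yum install libguestfs-tools
# runs through the appliance checks; it should end with a TEST FINISHED OK line
libguestfs-test-tool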

- KB

January 14, 2015

CentOS-India meetup group announced!

January 14, 2015 02:13 PM

Hi,

There is a meetup group for CentOS users in India at http://www.meetup.com/CentOS-India and they are looking for people to come join, as well as people to help run local meetings in different parts of the country.

So if you are based in India, use or would like to use CentOS Linux, go ahead and join up.

- KB

January 12, 2015

Quickly provisioning nodes in a SeaMicro chassis with Ansible

January 12, 2015 02:19 PM

Recently I had to quickly test and deploy CentOS on 128 physical nodes, just to validate the hardware and check that all currently "supported" CentOS releases could be installed quickly when needed. The interesting bit is that it was a completely new infra, without any traditional deployment setup in place, so obviously, as sysadmins, we directly think about pxe/kickstart, which is trivial to set up. That was the first time I had to "play" with SeaMicro devices/chassis though, and so to understand how they work (the SeaMicro 15K fabric chassis, to be precise). One thing to note is that those SeaMicro chassis don't provide a remote VGA/KVM feature (but who cares, as we'll automate the whole thing, right?), but they instead provide either cli (ssh) or REST API access to the management interface, so that you can quickly reset/reconfigure a node, change vlan assignment, and so on.

It's no secret that I like to use Ansible for ad-hoc tasks, and I thought that it would (again) be a good tool for that quick task. If you have used Ansible already, you know that you have to declare nodes and variables (variables aren't strictly needed, but really useful) in the inventory (if you don't gather inventory from an external source). To configure my pxe setup (and so be able to reconfigure it when needed) I obviously needed to get the mac addresses of all 64 nodes in each chassis, decide that hostnames would be n${slot-number}., etc. (and yes, in SeaMicro terms slot 1 = 0/0, slot 2 = 1/0, and so on ...)

The following quick-and-dirty bash script lets you do that in a couple of seconds (ssh into the chassis, gather information, and fill some variables into my ansible host_vars/${hostname} files):

#!/bin/bash
# ssh into the chassis CLI, list all Intel server nodes (one line per node),
# then derive per-node variables and write one ansible host_vars file per node
ssh admin@hufty.ci.centos.org "enable ;  show server summary | include Intel ; quit" | while read line ;
  do
  seamicrosrvid=$(echo $line |awk '{print $1}')   # slot id, e.g. 0/0, 1/0, ...
  slot=$(echo $seamicrosrvid| cut -f 1 -d '/')
  id=$(( $slot + 1)); ip=$id ; mac=$(echo $line |awk '{print $3}')
  echo -e "name: n${id}.hufty.ci.centos.org \nseamicro_chassis: hufty \nseamicro_srvid: $seamicrosrvid \nmac_address: $mac \nip: 172.19.3.$ip \ngateway: 172.19.3.254 \nnetmask: 255.255.252.0 \nnameserver: 172.19.0.12 \ncentos_dist: 6" > inventory/n${id}.hufty.ci.centos.org
done

Nice, so we have all the ~/ansible/hosts/host_vars/${inventory_hostname} files in one go (I'll let you add ${inventory_hostname} to the ~/ansible/hosts/hosts.cfg file with the same script, modified to your needs; a one-liner sketch follows).
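
Something like this extra line inside the loop would do it (paths are assumptions based on the layout mentioned above):

# append each generated hostname to the static inventory file
echo "n${id}.hufty.ci.centos.org" >> ~/ansible/hosts/hosts.cfg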
For the next step, we assume that we already have dnsmasq installed on the "head" node, and that we also have an httpd setup to serve the kickstart files to the nodes during installation.
So our basic ansible playbook looks like this:

---
- hosts: ci-nodes
  sudo: True
  gather_facts: False

  vars:
    deploy_node: admin.ci.centos.org
    seamicro_user_login: admin
    seamicro_user_pass: obviously-hidden-and-changed
    seamicro_reset_body:
      action: reset
      using-pxe: "true"
      username: "{{ seamicro_user_login }}"
      password: "{{ seamicro_user_pass }}"

  tasks:
    - name: Generate kickstart file[s] for Seamicro node[s]
      template: src=../templates/kickstarts/ci-centos-{{ centos_dist }}-ks.j2 dest=/var/www/html/ks/{{ inventory_hostname }}-ks.cfg mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Adding the entry in DNS (dnsmasq)
      lineinfile: dest=/etc/hosts regexp="^{{ ip }} {{ inventory_hostname }}" line="{{ ip }} {{ inventory_hostname }}"
      delegate_to: "{{ deploy_node }}"
      notify: reload_dnsmasq

    - name: Adding the DHCP entry in dnsmasq
      template: src=../templates/dnsmasq-dhcp.j2 dest=/etc/dnsmasq.d/{{ inventory_hostname }}.conf
      delegate_to: "{{ deploy_node }}"
      register: dhcpdnsmasq

    - name: Reloading dnsmasq configuration
      service: name=dnsmasq state=restarted
      run_once: true
      when: dhcpdnsmasq|changed
      delegate_to: "{{ deploy_node }}"

    - name: Generating the tftp configuration boot file
      template: src=../templates/pxeboot-ci dest=/var/lib/tftpboot/pxelinux.cfg/01-{{ mac_address | lower | replace(":","-") }} mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Resetting the Seamicro node[s]
      uri: url=https://{{ seamicro_chassis }}.ci.centos.org/v2.0/server/{{ seamicro_srvid }}
           method=POST
           HEADER_Content-Type="application/json"
           body='{{ seamicro_reset_body | to_json }}'
           timeout=60
      delegate_to: "{{ deploy_node }}"

    - name: Waiting for Seamicro node[s] to be available through ssh ...
      action: wait_for port=22 host={{ inventory_hostname }} timeout=1200
      delegate_to: "{{ deploy_node }}"

  handlers:
    - name: reload_dnsmasq
      service: name=dnsmasq state=reloaded

The first thing to notice is that you can use Ansible to provision nodes that aren't running yet: people think that Ansible is just for interacting with already provisioned and running nodes, but by providing useful information in the inventory, and by delegating actions, we can already start "managing" those yet-to-come nodes.
All the templates used in that playbook are really basic ones, nothing "rocket science". For example, the only difference in the kickstart.j2 template is that we inject ansible variables (for network and storage):

network  --bootproto=static --device=eth0 --gateway={{ gateway }} --ip={{ ip }} --nameserver={{ nameserver }} --netmask={{ netmask }} --ipv6=auto --activate
network  --hostname={{ inventory_hostname }}
<snip>
part /boot --fstype="ext4" --ondisk=sda --size=500
part pv.14 --fstype="lvmpv" --ondisk=sda --size=10000 --grow
volgroup vg_{{ inventory_hostname_short }} --pesize=4096 pv.14
logvol /home  --fstype="xfs" --size=2412 --name=home --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=100000
logvol /  --fstype="xfs" --size=8200 --name=root --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=1000000
logvol swap  --fstype="swap" --size=2136 --name=swap --vgname=vg_{{ inventory_hostname_short }}
<snip>

The dhcp step isn't mandatory, but at least in that subnet we only allow dhcp for "already known" mac addresses, retrieved from the ansible inventory (and previously fetched directly from the SeaMicro chassis):

# {{ name }} ip assignment
dhcp-host={{ mac_address }},{{ ip }}

Same thing for the pxelinux tftp config file :

SERIAL 0 9600
DEFAULT text
PROMPT 0
TIMEOUT 50
TOTALTIMEOUT 6000
ONTIMEOUT {{ inventory_hostname }}-deploy

LABEL local
        MENU LABEL (local)
        MENU DEFAULT
        LOCALBOOT 0

LABEL {{ inventory_hostname }}-deploy
        kernel CentOS/{{ centos_dist }}/{{ centos_arch }}/vmlinuz
        MENU LABEL CentOS {{ centos_dist }} {{ centos_arch }} - CI Kickstart for {{ inventory_hostname }}
        {% if centos_dist == 7 -%}
        append initrd=CentOS/7/{{ centos_arch }}/initrd.img net.ifnames=0 biosdevname=0 ip=eth0:dhcp inst.ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8
        {% else -%}
        append initrd=CentOS/{{ centos_dist }}/{{ centos_arch }}/initrd.img ksdevice=eth0 ip=dhcp ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8
        {% endif %}

The interesting part is the one I needed to spend more time on: as said, it was the first time I had to play with SeaMicro hardware, so I had to dive into the documentation (which I *always* do, RTFM FTW!) and understand how to use their REST API, but once done, it was a breeze. Ansible doesn't provide a native module for SeaMicro by default, but that's why REST exists, right? Thankfully, Ansible has a native URI module, which we use here. The only thing I had to spend more time on was understanding how to properly construct the body, but declaring it in the yaml file as a variable/list and then converting it on the fly to json (with the magical body='{{ seamicro_reset_body | to_json }}') was the way to go, and is self-explanatory when read now.
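
For reference, the equivalent raw REST call performed by the uri task would look roughly like this (endpoint and JSON body taken from the playbook above; the srvid and credentials are examples):

# reset server 0/0 on the chassis and force it to PXE-boot
curl -k -X POST https://hufty.ci.centos.org/v2.0/server/0/0 \
  -H 'Content-Type: application/json' \
  -d '{"action": "reset", "using-pxe": "true", "username": "admin", "password": "..."}'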

And here we go: calling that ansible playbook, and suddenly 128 physical machines were being installed (and reinstalled with different CentOS versions - 5, 6, 7 - and arches - i386, x86_64).

Hope this helps if you have to interact with SeaMicro chassis from within an ansible playbook too.

January 06, 2015

The EPEL and CentOS Project relationship

January 06, 2015 11:15 AM

On Saturday 31st Jan, after close of Fosdem day 1, I am bringing together a group of people who all care about the EPEL and CentOS Project relationship to try and work out how best to move things forward. Key points to address are how SIGs and other efforts in CentOS can consume, rely on, feed back to, and message around content in EPEL, and similarly how CentOS efforts can feed back into EPEL components. The overall aim is to work out a plan and a way for the two buildsystems ( the CentOS Community one and the EPEL one ) to talk to each other, and to set some level of expectations across the project efforts.

Everyone is welcome to come along for the conversation, but it would be most productive for people who are CentOS SIG members and EPEL contributors / administrators and users who rely on EPEL content on their CentOS Linux installs.

I’ve started a thread to set up some of the basic topics on the centos-devel list; you can track it here. And there is a list of people who want to make it to the conversation at the bottom of the CentOS Fosdem 2015 planning page. If you are able to make it, let me know and I will add your name to the list. Remember, this is a post-Fosdem day 1 thing, in the early evening of the 31st Jan 2015.

See you there!

December 15, 2014

CentOS in OpenShift Commons

December 15, 2014 12:24 PM

Happy to announce that the CentOS Project is now a part of the OpenShift Commons initiative.

In their own words :

The Commons builds connections and collaboration across OpenShift communities, projects and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.

A significant amount of OpenShift development and community delivery work is now done on CentOS Linux, and I am hoping that this new association between the two projects allows us to further build on that platform.

- KB

Reproducible CentOS containers

December 15, 2014 12:00 AM

Around 9 months ago, I took over the creation of the official CentOS images contained in the Docker index. Prior to that, the index officially had one lonely and outdated CentOS 6.4 image that we at the CentOS Project were unaware of. Docker sort of exploded into our view and we spent a bit of time playing catch-up, trying to get things done the way we as a distribution would like to see them done. One of these actions was to do away with minor-versioned containers.

We chose to drop the minor version from our containers, and petitioned the Docker registry maintainer to remove the existing one (not built by us). The reasoning for this is fairly straightforward: a large percentage of users never updated to the current containers, and so most of the bugs submitted to us were for older versions. Even recently, Dan Walsh has had to post reminders to run updates. By only having a centos6 or centos7 image, we tried to remove the minor-version mindset. Since the containers themselves are a svelte 132 packages, they amount to little more than the dependencies needed for bash and yum. In theory, the differences between a 6.5 image and a 6.6 image should be entirely negligible. In fact, by default any package installed on a 6.5 container would come from the 6.6 repositories.

That said, the number one request since we stopped shipping point releases is… you guessed it: point releases. While I continue to maintain that our position on updates is the proper one, real-world usage often runs counter to ivory-tower thinking. A number of valid use cases were brought up in the course of discussions with community members asking for containers to be tagged with minor point releases, and I have agreed to reconsider tagging minor version images.

Beginning with the January monthly rollout, I will add minor tags for the 5 and 6 builds in the Docker index. The minor tags will be for 5.11 and 6.6. For the 7 builds, the tag will correspond to the date-tagged build name, the same as the installation media. These tags will be built from, and correspond to, the respective CentOS installation media, and so will not contain updates. This means that if you are using the minor tags, you would be exposing your containers to exploits that have been patched in the rolling updates. The latest, 5, 6, and 7 tags will continue to point to the rolling monthly releases, which I would highly recommend using.
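
Once the tags land, pulling a frozen point release versus the rolling image would look something like this (the exact minor tag names are my assumption pending the January rollout):

# rolling monthly image, includes all current updates (recommended)
docker pull centos:centos6
# hypothetical frozen point-release tag matching the install media, no updates
docker pull centos:centos6.6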

December 02, 2014

FreeIPA 4.1.2 and CentOS

December 02, 2014 06:04 PM

The FreeIPA community is looking for your help and feedback!

The FreeIPA development team is excited to share with you a new version of the FreeIPA server, 4.1.2, running in a container on top of CentOS. It is the first time a FreeIPA upstream release is available in the CentOS docker index. It is a preview of the features that will eventually make their way into the main CentOS distribution. This version of FreeIPA showcases multiple major new features as well as improvements to existing components above what is currently available in CentOS 7.0.


In order to use this docker container, please run:
docker pull centos/freeipa

Then follow the guide/documentation available at https://registry.hub.docker.com/u/centos/freeipa/


These features include:

– Backup and Restore
Ability to backup server data and restore an instance in the case of disaster
http://www.freeipa.org/page/V3/Backup_and_Restore

– CA Certificate Management Utility
A tool to change IPA chaining or rotate the CA certificate on already installed server
http://www.freeipa.org/page/V4/CA_certificate_renewal

– ID Views
Ability to store POSIX data and SSH keys in IPA for users belonging to a trusted Active Directory domain. Alternative POSIX data and SSH keys can also be stored for regular IPA users making it possible to serve different POSIX data to different clients (requires SSSD 1.12.3 or later). This is useful in migration scenarios where consolidation of multiple identity stores (local files, NIS domains, legacy LDAP servers, ..) with duplicated identities and inconsistent POSIX attributes needs to be retained for some clients.
http://www.freeipa.org/page/V4/Migrating_existing_environments_to_Trust

Note: The solution requires the latest SSSD bits, available in the Copr repo: https://copr.fedoraproject.org/coprs/mkosek/freeipa/

– DNSSEC
With this version we are introducing, for the first time, the ability to manage DNSSEC signatures on DNS data. This feature will be available in the community version only and will not be included in CentOS 7.1.
http://www.freeipa.org/page/Releases/4.1.0#DNSSEC_Support

There are also significant improvements in UI, permissions, keytab management, automatic membership and SUDO rules handling.
More information can be found here:
http://www.freeipa.org/page/V4/Automember_rebuild_membership
http://www.freeipa.org/page/V4/Forward_zones
http://www.freeipa.org/page/V4/Keytab_Retrieval
http://www.freeipa.org/page/V4/Keytab_Retrieval_Management
http://www.freeipa.org/page/V4/PatternFly_Adoption

The biggest and most interesting feature of FreeIPA 4.1.2 is support for two-factor authentication using HOTP/TOTP-compatible software tokens like FreeOTP (an open source alternative compatible with Google Authenticator) and hardware tokens like Yubikeys. This feature allows Kerberos and LDAP clients of a FreeIPA server to authenticate using the normal account password as the first factor and an OTP token as a second factor. For those environments where a 2FA solution is already in place, FreeIPA can act as a proxy via RADIUS. More about this feature can be read here:
http://www.freeipa.org/page/V4/OTP

If you want to see this feature in CentOS 7.1 proper we need your help!
Please give it a try and provide feedback. We really, really need it!

Please use freeipa-users@redhat.com if you have any questions.
If you notice any issues or want to file an RFE you can do it here:
https://fedorahosted.org/freeipa/ (requires a Fedora account).
You can also find us on irc.freenode.net on #freeipa.

November 26, 2014

And now a few words from Paul C.

November 26, 2014 08:13 PM

Although some people in open source communities might not be aware of him, Paul Cormier holds a singular position in the open source world. This hinges on the detail that Red Hat is the longest-standing and most successful company at promoting the growth of free/open source software, and especially the acceptance of that software in the enterprise (large businesses). Paul is a Red Hat EVP, but he is also the President of Products and Technologies, meaning he is ultimately accountable for what Red Hat does in creating products the open source way. Paul has held this position essentially for the last dozen years, and so has overseen everything in Red Hat from the creation of Fedora Linux to the rise of cloud computing that Red Hat is an intimate part of.

In other words, when Paul C. speaks — keynote or in-person — he is someone really worth paying close attention to.

In this post on Red Hat’s open source community website, “One Year Later: Paul Cormier on Red Hat and the CentOS Project“, I provide some introduction and background around a video interview Paul did with ServerWatch about the Red Hat and CentOS Project relationship.

(Speaking of ‘intimately’, that explains my relationship to Red Hat and the CentOS Project — I spent all of 2013 architecting and delivering on making CentOS Linux the third leg in the stool of Red Hat platform technologies. When I say in the “One Year Later…” article about “making sure (Paul C. is) happy and excited about Red Hat joining forces with the CentOS Project,” that responsibility is largely mine.)

November 24, 2014

Switching from Ethernet to Infiniband for Gluster access (or why we had to …)

November 24, 2014 10:37 AM

As explained in my previous (small) blog post, I had to migrate a Gluster setup we have within CentOS.org Infra. As said in that previous blog post too, Gluster is really easy to install, and sometimes it can even "smell" too easy to be true. One thing to keep in mind when dealing with Gluster is that it's a "file-level" storage solution, so don't try to compare it with "block-level" solutions (so typically a NAS vs SAN comparison, even if "SAN" itself is wrong in such a discussion, as the SAN is what's *between* your nodes and the storage itself, just as a reminder).

Within CentOS.org infra, we have a multi-node Gluster setup that we use for multiple things at the same time. The Gluster volumes are used to store some files, but also to host (on different gluster volumes with different settings/ACLs) KVM virtual disks (qcow2). People who know me will say: "hey, but for performance reasons, it's faster to just dedicate, for example, a partition or a Logical Volume instead of using qcow2 images sitting on top of a filesystem for Virtual Machines, right?", and that's true. But with our limited number of machines, and a need to "move" Virtual Machines without a proper shared storage solution (and because in our setup those physical nodes *are* both glusterd servers and hypervisors), Gluster was an easy-to-use solution for that.

It was working, but not that fast ... I then heard about the fact that (obviously) accessing those qcow2 image files through fuse wasn't efficient at all, but that Gluster has libgfapi, which can be used to "talk" directly to the gluster daemons, completely bypassing the need to mount your gluster volumes locally through fuse. Thankfully, qemu-kvm from CentOS 6 is built against libgfapi so it can use that directly (and that's the reason why it's automatically installed when you install the KVM hypervisor components). Results? Better, but still not what we were expecting ...

When trying to find the issue, I discussed with some folks in the #gluster irc channel (irc.freenode.net) and suddenly I understood something that is *not* so obvious about Gluster in distributed+replicated mode: people having dealt with storage solutions at the hardware level (or people using DRBD, which I did too in the past, and which I also liked a lot) expect the replication to happen automatically on the storage/server side, but that's not true for Gluster. In fact, glusterd just exposes metadata to the gluster clients, which then know where to read/write (being "redirected" to the correct gluster nodes). That means that replication happens on the *client* side: in replicated mode, the client itself writes the same data twice, once to each server ...

So back to our example: our nodes have 2 * 1Gb/s Ethernet cards; one is a bridge used by the Virtual Machines, and the other one is "dedicated" to gluster, and each node is itself both a glusterd server and a gluster client. So think about the maximum performance we could get for a write operation: 1Gbit/s, divided by two (because of the replication), so ~ 125MB/s / 2 => in theory ~ 62MB/s (and then remove tcp/gluster overhead and that drops to ~ 55MB/s).

How to solve that? Well, I tested that theory and confirmed it directly: in distributed-only mode, write performance automatically doubled. So yes, running Gluster on Gigabit Ethernet was suddenly the bottleneck. Upgrading to 10Gb Ethernet wasn't something we could do, but, thanks to Justin Clift (and some other Gluster folks), we were able to find some "second hand" Infiniband hardware (10Gbps HCAs and a switch).

While Gluster has native/builtin rdma/Infiniband capabilities (see the "transport" option in the "gluster volume create" command), in our case we had to migrate existing Gluster volumes from plain TCP/Ethernet to Infiniband, while trying to keep the downtime as small as possible. That is/was my first experience with Infiniband, but it's not as hard as it seems, especially when you discover IPoIB (IP over Infiniband). From a sysadmin POV it's just "yet another network interface", but a 10Gbps one now :)

The Gluster volume migration then goes like this (schedule an - obvious - downtime for this):

On all gluster nodes (assuming that we start from machines installed only with the @core group, so minimal ones):

yum groupinstall "Infiniband Support"

chkconfig rdma on

<stop your clients or other apps accessing the gluster volumes, as they will be stopped>

service glusterd stop && chkconfig glusterd off && init 0

Then install the hardware in each server, connect all Infiniband cards to the (previously configured) IB switch, and power all servers back on. When the machines are back online, you have "just" to configure the ib interfaces. As in my case the machines were "remote nodes" and I couldn't physically check how they were cabled, I had to use some IB tools to see which port was connected (a tool like "ibv_devinfo" showed me which port was active/connected, while "ibdiagnet" shows you the topology and other nodes/devices). In our case it was port 2, so let's create the ifcfg-ib{0,1} devices (ib1 being the one we'll use):

DEVICE=ib1
TYPE=Infiniband
BOOTPROTO=static
BROADCAST=192.168.123.255
IPADDR=192.168.123.2
NETMASK=255.255.255.0
NETWORK=192.168.123.0
ONBOOT=yes
NM_CONTROLLED=no
CONNECTED_MODE=yes

The interesting part here is the "CONNECTED_MODE=yes": for people who already use iscsi, you know that Jumbo frames are really important if you have a dedicated VLAN (and that the Ethernet switch supports Jumbo frames too). As stated in the IPoIB kernel doc, you can have two operation modes: datagram (default, 2044 bytes MTU) or connected (up to 65520 bytes MTU). It's up to you to decide which one to use, but if you understood the Jumbo frames thing for iscsi, you get the point already.

An "ifup ib1" on all nodes will bring the interfaces up and you can verify that everything works by pinging each other node, including with larger mtu values :

ping -s 16384 <other-node-on-the-infiniband-network>

If everything's fine, you can then decide to start gluster, *but* don't forget that gluster uses FQDNs (at least I hope that's how you configured your gluster setup initially: already on a dedicated segment, and using different FQDNs for the storage vlan). You just have to update your local resolver (internal DNS, local hosts files, whatever you want) to be sure that gluster will then use the new IP subnet on the Infiniband network. (If you haven't previously defined different hostnames for your gluster setup, you can "just" update that in the different /var/lib/glusterd/peers/* and /var/lib/glusterd/vols/*/*.vol files.)
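
A minimal sketch of that resolver update using plain hosts files, assuming the IPoIB subnet from the ifcfg example above (the storage FQDNs are hypothetical):

# /etc/hosts on every gluster node/client: point the storage FQDNs
# at the new Infiniband IPs instead of the old Ethernet ones
192.168.123.1   gluster01.storage.example.org
192.168.123.2   gluster02.storage.example.org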

Restart the whole gluster stack (on all gluster nodes) and verify that it works fine:

service glusterd start

gluster peer status

gluster volume status

# and if you're happy with the results :

chkconfig glusterd on

So, in a short summary:

  • Infiniband isn't that difficult (especially if you use IPoIB, which has only a very small overhead)
  • Migrating gluster from Ethernet to Infiniband is also easy (especially if you carefully planned your initial design for IP subnets/VLANs/segments/DNS resolution, making a "transparent" move possible)

November 21, 2014

Updating to Gluster 3.6 packages on CentOS 6

November 21, 2014 03:08 PM

I had to do some maintenance yesterday on our Gluster nodes used within CentOS.org infra. Basically I had to reconfigure some gluster volumes to use Infiniband instead of Ethernet. (I'll write a dedicated blog post about that migration later.)

While a lot of people directly consume packages from Gluster.org (for example http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/epel-6/x86_64/), you'll (soon) be able to also install those packages directly on CentOS, through packages built by the Storage SIG. At the moment I'm writing this blog post, gluster 3.6.1 packages are built and available on our Community Build Server Koji setup, but still in testing (and unsigned).

"But wait, there are already glusterfs packages tagged 3.6 in CentOS 6.6, right ? " will you say. Well, yes, but not the full stack. What you see in the [base] (or [updates]) repository are the client packages, as for example a base CentOS 6.x can be a gluster client (through fuse, or libgfapi - really interesting to speed up qemu-kvm instead of using the default fuse mount point ..) , but the -server package isn't there. So the reason why you can either use the upstream gluster.org yum repositories or the Storage SIG one to have access to the full stack, and so run glusterd on CentOS.

Interested in testing those packages? Want to test the update before those packages are released by the Storage SIG? Here we go: http://cbs.centos.org/repos/storage6-testing/x86_64/os/Packages/ (packages available for CentOS 7 too). A sketch of a throwaway repo configuration is below.
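
Something like this would do, using the testing repo URL above (gpgcheck is off only because, as noted, those packages are still unsigned):

cat > /etc/yum.repos.d/storage6-testing.repo <<EOF
[storage6-testing]
name=CentOS Storage SIG - gluster 3.6 testing
baseurl=http://cbs.centos.org/repos/storage6-testing/x86_64/os/
enabled=1
gpgcheck=0
EOF
yum install glusterfs-server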

By the way, if you have never tested Gluster, it's really easy to set up and play with, even within Virtual Machines. Interesting reading (quick start): http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

November 13, 2014

EPEL Orphaned packages and their dependents to be removed Dec 17th

November 13, 2014 10:22 AM

Hi,

The EPEL repository runs from within the Fedora project, sharing resources (including, importantly, their source trees) with the Fedora ecosystem; over the years it has proven to be a large and helpful resource for anyone running CentOS Linux.

One key challenge they have, however, much like CentOS Linux, is that the entire effort is run by a few people helped along by a small group of volunteers. So while the package list they provide is huge, the group of people putting in the work behind it is small. A fallout from this is that over the years a significant chunk of packages in the EPEL repo have become orphaned. They once had a maintainer, but either that maintainer has gone away now, or has other priorities.

A few days back, Stephen announced that they were going to start working to drop these orphaned packages unless someone steps up to help maintain them. You can read his announcement here: https://lists.fedoraproject.org/pipermail/epel-devel/2014-November/010430.html

This is a great time for anyone looking to get involved with packages and packaging as a whole, and wanting to contribute to the larger CentOS Linux ecosystem, to jump in and take ownership of content (ideally stuff that you care about, hence likely to keep it managed for a period of time). They have a simple process to get started, documented at the Joining EPEL page here: https://fedoraproject.org/wiki/Joining_EPEL and you can see a list of packages being orphaned in the urls from Stephen’s post linked above.

Regards,

October 29, 2014

CentOS Dojo at LISA14 in Seattle on November 10th, 2014

October 29, 2014 08:03 PM

Join us at the all day (09:00 to 17:00) CentOS Dojo on Monday, November 10th, 2014 at the LISA14 conference in Seattle, Washington.

There will be at least three CentOS board members there (Johnny Hughes, Jim Perrin, and Karsten Wade).

The current topics include:

  • CI environment scaling by Dave Nalley
  • DevOps Isn’t Just for WebOps: The Guerrilla’s Guide to Cultural Change by Michael Stahnke
  • The EPEL Phoenix Saga by Stephen Smoogen
  • Docker in the Distro by Jim Perrin
  • Managing your users by Matt Simmons

Visit the CentOS Wiki for more information.

October 28, 2014

CentOS-6.6 is Released

October 28, 2014 11:27 AM

CentOS 6.6 is now released, see the Announcement.

So, the Continuous Release RPMs were released on 21 October (7 days after RHEL 6.6) and the Full Release was done on 28 October (14 days after RHEL 6.6).

Enjoy.



October 21, 2014

Continuous Release Repository RPMs for CentOS-6.6 Released

October 21, 2014 07:46 AM

The CentOS team has released the Continuous Release (CR) Repository RPMs for CentOS-6.6 into their 6.5/cr tree.  See the Release Announcement.

Now a little more about the release process.

  1. Red Hat releases a version of Red Hat Enterprise Linux.  In this case the version is Red Hat Enterprise Linux 6.6 (RHEL-6.6), which was released on October 14th, 2014.  With that release by Red Hat comes the source code which RHEL 6.6 is based on.
  2. The CentOS team takes that released source code and starts building it for their CentOS release (in this case CentOS-6.6).  This process can not start until the Source Code from Red Hat is available, which in this case was October 14th.
  3. At some point, all the Source Code has been built and there are RPMs available, this is normally 1-5 days depending on how many Source RPMs there are to build and how many times the order needs to be changed to get the builds done correctly.
  4. After the CentOS team thinks they have a good set of binary RPMs built, they submit them to the QA team (a team of volunteers who do QA for the releases).  This QA process includes the t_functional suite and several knowledgeable system administrators downloading the RPMs and running tests on them to validate that updating with them works as planned.
  5. At this point there are tested RPMs ready, and the CentOS team needs to build an installer tree. This means: take the new RPMs and move them into place in the main tree, remove the older RPMs they are replacing, run the build installer to create an installable tree, and test that installable tree.  This process can take up to 7 days.
  6. Once there is an installable tree, all the ISOs have to be created and tested.  We have to create the ISOs, upload them to the QA process, and test installs from the ISOs (correct sizes, how to split the ISOs, what is on the LiveCDs and LiveDVDs to keep them below the max size to fit on media, etc.).  We then also test UEFI installs, Secure Boot installs (CentOS-7 only), copying to USB keys and checking installs that way, etc.  This process can also take up to 7 days.
So, in the process above, we can have vetted binary RPMs ready to go as soon as 5 days after we start, but it may be 14 or more days after that before we have a complete set of ISOs to do a full release.  Thus the reason for the CR Repository.

The CR Repository


The process of building and testing an install tree, then creating and testing several types of ISO sets from that install tree (DVD Installer, Minimum Install ISO, LiveCD, LiveDVD, etc) can take 1-2 weeks after all the RPMs are built and have gone through initial QA testing.

The purpose of the CR repository is to provide quicker access to RPMs for an upcoming CentOS point release while further QA testing is ongoing and the ISO installers are being built and tested.
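
If you want to opt in to CR on an existing install, the usual route is the centos-release-cr package from the extras repository; roughly (check the release announcement for the authoritative steps):

# enable the CR repository, then pull in the pending point-release updates early
yum install centos-release-cr
yum update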

Updates in the CR for CentOS-6.6

More Information about CR.

CentOS-6.6 Release Notes (Still in progress until the actual CentOS-6.6 release).

Upstream RHEL-6.6 Release Notes and Technical Notes.

October 15, 2014

Running MariaDB, FreeIPA, and More with CentOS Containers

October 15, 2014 02:03 PM

The CentOS Project is pleased to announce four new Docker images in the CentOS Container Set, providing popular, ready-to-use containerized applications and services. Today you can grab containers with MariaDB, Nginx, FreeIPA, and the Apache HTTP Server straight from the Docker Hub.

The new containers are based on CentOS 7, and are tailored to contain just the right set of packages to run MariaDB, Nginx, FreeIPA, or the Apache HTTP Server right out of the box.

The first set of applications and services provide two of the world’s most popular Web servers, MariaDB for your database needs, and FreeIPA to provide an integrated security information management solution.

The CentOS Container Set is an effort to leverage the CentOS Project to give developers and admins the building blocks to easily set up containerized services in their environment. Keep an eye on the CentOS blog for further releases, or help us as we continue to develop more!

To get started with one of the images, use: `docker pull centos/<app>` where <app> is the name of the container (*e.g.* `docker pull centos/mariadb`). You can find some quick “getting started” info on the Docker Hub page for each application.

Jason Brooks has written up a longer howto for FreeIPA that details how to build the container (which is already done here, but you can rebuild the images if you like using the Dockerfiles on GitHub), and how to set it up to use FreeIPA with an application.

We have a larger set of Dockerfiles (derived initially from the Fedora Dockerfiles) that we’re working on to develop pre-made CentOS Docker containers for easy use. Join the centos-devel mailing list to ask questions about the containers, or to provide feedback on their use. We also accept pull requests if you have any fixes or new Dockerfiles to contribute!

Koji – CentOS CBS infra and sslv3/Poodle important notification

October 15, 2014 10:46 AM

As most of you already know, there is an important SSLv3 vulnerability (CVE-2014-3566 - see https://access.redhat.com/articles/1232123), known as Poodle.
While it's easy to disable SSLv3 in the allowed protocols at the server level (for example SSLProtocol All -SSLv2 -SSLv3 for apache), some clients still default to SSLv3, and Koji does just that.

We have now disabled SSLv3 on our cbs.centos.org koji instance, so if you're a cbs/koji user, please adapt your local koji package (local fix!).
At the moment, there is no upstream package available, but the following patch has been tested by Fedora people too (credits go to https://lists.fedoraproject.org/pipermail/infrastructure/2014-October/014976.html):

=====================================================
--- SSLCommon.py.orig    2014-10-15 11:42:54.747082029 +0200
+++ SSLCommon.py    2014-10-15 11:44:08.215257590 +0200
@@ -37,7 +37,8 @@
     if f and not os.access(f, os.R_OK):
         raise StandardError, "%s does not exist or is not readable" % f
 
-    ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
+    #ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
+    ctx = SSL.Context(SSL.TLSv1_METHOD)   # TLSv1 only
     ctx.use_certificate_file(key_and_cert)
     ctx.use_privatekey_file(key_and_cert)
     ctx.load_client_ca(ca_cert)
@@ -45,7 +46,8 @@
     verify = SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT
     ctx.set_verify(verify, our_verify)
     ctx.set_verify_depth(10)
-    ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1)
+    #ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1)
+    ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_TLSv1 | SSL.OP_NO_SSLv3)
     return ctx
=====================================================

We'll keep you informed about possible upstream koji packages that would default to at least TLSv1.
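
To verify a server from the client side, one quick check is to attempt an SSLv3-only handshake, which should now fail against cbs.centos.org:

# this handshake should be rejected now that SSLv3 is disabled server-side
openssl s_client -connect cbs.centos.org:443 -ssl3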

If you encounter a problem, feel free to drop into the #centos-devel channel on irc.freenode.net and have a chat with us.

October 01, 2014

Xen4CentOS XSA-108 Security update for CentOS-6

October 01, 2014 08:00 AM

There has been a fair amount of press in the last couple of days concerning Xen update XSA-108, and the fact that Amazon EC2 and Rackspace must reboot after this update:

Rackspace forced to reboot cloud over Xen bug

Amazon Reboots Cloud Servers, Xen Bug Blamed

There are other stories, but those articles cover the main issue.

As KB tweeted, the CentOS-6 Xen4CentOS release is also impacted by this issue and the CentOS team has released CESA-2014:X013 to deal with XSA-108.  There are also 3 other Xen4CentOS updates released: CESA-2014:X010, CESA-2014:X011, and CESA-2014:X012.

If you are using Xen4CentOS on CentOS-6, please use yum update to get these security updates ... and like Rackspace and Amazon EC2, you need to reboot your dom0 machine after the updates are applied.

September 30, 2014

CentOS team at cPanel 2014

September 30, 2014 03:28 AM

The CentOS team will have a booth in the Exhibit Hall for the 2014 cPanel Conference at the Westin Galleria hotel in Houston, Texas from September 30th to October 1st 2014.

CentOS Board members Johnny Hughes (that's me :D) and Jim Perrin will be at the booth whenever the hall is open. 

We are looking forward to lots of discussions, and we will have some swag to give out (tee shirts, including the new 10 Year Anniversary tee, stickers, etc.). We will also be happy to install CentOS on your laptop (or let you do it) ... or if you have a USB key available, we will put a CentOS iso on it for you to use for installs later.

If you are going to be at cPanel 2014, come on down and see us!

CentOS Linux 5.11 for x86_64 and i386 is released

September 30, 2014 03:11 AM

The CentOS Linux 5.11 distribution for both the x86_64 and i386 architectures is now released.

If you are running any previous version of CentOS-5 Linux, then you can upgrade simply by using the command:

yum update

ISOs are also available here:

http://isoredirect.centos.org/centos/5/isos/

Please see the Release Announcement and Release Notes for more details.


September 25, 2014

CentOS, Docker, and Systemd

September 25, 2014 12:00 AM

Over the last few weeks, we’ve been asked about using systemd inside the CentOS-7 Docker containers for more complex operations. Because systemd offers a number of rather nice features, I can completely understand why people want to use it rather than pulling in outside tools like supervisord to recreate what already exists in CentOS by default. Unfortunately it’s just not that easy.

There are a couple of major reasons why we don’t include systemd by default in the base Docker image. Dan Walsh covered these pretty completely in a blog post, but to recap where we are currently, let’s hit the highlights:

  • systemd requires the CAP_SYS_ADMIN capability. This means running docker with --privileged. Not good for a base image.
  • systemd requires access to the cgroups filesystem.
  • systemd has a number of unit files that don’t matter in a container, and they cause errors if they’re not removed.

It’s for these reasons that we ship with fakesystemd in the default image. The fakesystemd package provides dependency resolution and the proper directory structure so that packages install normally, and individual apps can be run inside the container by default. The fakesystemd package isn’t really an elegant fix, but it’s currently the best we can do in the base images. As soon as we’re able to rip it out and ship a proper systemd package, we will.

If you’re okay with running your container with --privileged, then you can follow the steps below to create your systemd enabled docker image from the CentOS-7 base image.

Dockerfile for systemd base image

FROM centos:centos7
MAINTAINER "you" <your@email.here>
ENV container docker
RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]

This Dockerfile swaps out fakesystemd for the real deal, and deletes a bunch of the unit files we don’t need. Building this image gives us a usable base to start from.

docker build --rm -t centos7-systemd . 

systemd enabled app container

Once this new base image is built, we can move on to building stuff that actually needs systemd. In this instance we’ll use httpd as an example.

FROM centos7-systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]

We build once again:

docker build --rm -t centos7-systemd/httpd .

To put this all together and run httpd with systemd in Docker, we do this:

docker run --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 centos7-systemd/httpd

This container is running with systemd in a limited context, but it must always be run as a privileged container with the cgroups filesystem mounted.
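
As a quick sanity check from the host (assuming the run command above, with port 80 published), httpd inside the container should now answer:

# Apache should respond; the default test page typically returns HTTP/1.1 403
curl -I http://localhost/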

There are plenty of rumors circulating that this will be addressed in the future, either in systemd itself or in a systemd-like subpackage. As soon as we’re able to ship systemd in the base image, we will do so.

September 24, 2014

Critical Bash updates for CentOS-5, CentOS-6, and CentOS-7

September 24, 2014 11:37 AM

There is a critical CVE issue in all versions of CentOS that has been fixed today.  Please update your CentOS with this command:

yum update

Information about this issue can be found here:

http://red.ht/1msy8D6

and here:

http://red.ht/1uZGljA
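
After updating, a widely circulated one-liner for the original issue (CVE-2014-6271) can confirm a host is no longer vulnerable; a patched bash prints only the test string:

# prints "vulnerable" before the fix; only "this is a test" after
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"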

The CentOS release announcements are here:

CentOS-5, CentOS-6, CentOS-7

If you have any other questions about this issue, you can ask on the CentOS mailing list here.

September 05, 2014

CentOS Dojo in Orlando at Fossetcon 11 Sep

September 05, 2014 08:25 PM

If you are in Florida or in Orlando attending Fossetcon next week, come over to our next CentOS Dojo on Thursday 11 September (all day).

CentOS Dojos are one-day events that bring together people from the CentOS communities to talk about systems administration, best practices, and emerging technologies.

At this particular Dojo we have this great lineup of information, discussion, and getting things done:

  • Jim Perrin (@BitIntegrity) will start the day with a few minutes about the CentOS Project and do some introductions around the room.
  • Garrett Honeycutt (@learnpuppet) goes next with a session, “Why Automation is Important” that covers topics such as configuration management with Puppet, Ansible, et al.
  • Dmitri Pal from the FreeIPA project will discuss “Active Directory Integration”, a popular topic for many sysadmins and ops people stuck with a mixed-in-with-Windows environment.
  • Greg Sheremeta (@gregsheremeta) of the oVirt project finishes with a tutorial on using the oVirt all-in-one installer. oVirt is virtualization management around KVM (cf. VMWare vSphere) with a growing userbase.
  • Then a sponsored lunch and time to network with your fellow Dojo attendees.
  • After lunch until the evening is a hackfest focusing on building and using Docker, building Xen components for CentOS 6, and whatever else gets cooked up. The CentOS team will be bringing a local mirror and WiFi for connecting on a private LAN for the hackfest. You can bring your laptop, ideas, and skills.

If you are interested in attending, please sign up on our event page.

September 03, 2014

CentOS Dojo, September 11th at Fossetcon 2014

September 03, 2014 09:30 PM

The CentOS Project will be having a CentOS Dojo on day0 of Fossetcon 2014, in Orlando, Florida at the Rosen Plaza Hotel on September 11th, 2014.

We will have speakers in the morning, starting at 10:00 am local time and a hackfest beginning at 1:00pm.

Please see the CentOS Wiki for details.  Register here.

September 02, 2014

The CentOS Events twitter account

September 02, 2014 11:12 PM

Hi,

@CentOSEvents is now live! We will be tweeting about events we run, events we participate in and all the dojo planning, presenting, attending info you might want.

- KB

July 22, 2014

Testing CentOS-6 to CentOS-7 upgrades via CentOS Testing Repo

July 22, 2014 05:49 PM

EDIT (Monday July 28, 2014 – 2010 UTC):

We now have what we think is going to be the final version of this upgrade tool.  Please see the following link to test:

http://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool

End Edit

================

We now have some Beta Testing RPMs available to test upgrades from CentOS-6 to CentOS-7.  These tests were announced on the CentOS-Devel mailing list here:

http://lists.centos.org/pipermail/centos-devel/2014-July/011277.html

Since the release of the test RPMs, we have had several patches created by Manuel Mausz.  Manuel’s patches have done a lot to make the Preupgrade Assistant work for upgrades.  We now need to get some tests of the patched RPMs.

The new RPMs are available from the Testing Repo here:

http://dev.centos.org/centos/6/upg/x86_64/Packages/

The upstream documentation for performing upgrades, as it currently exists, is here:

http://red.ht/1oVEt7O

The CentOS team would very much like to thank Manuel for his testing work and patches for Preupgrade Assistant. This is an example of how we are now doing things in the “New” CentOS Project … where the community is now involved in all aspects of what we do except the actual building of the upstream sources for the actual distro.

Other things we need from the community for this process:

  1. Test the RPMs as they exist right now in the Testing Repo.
  2. If the process needs more changes to work properly, submit patches to the CentOS-Devel mailing list to get them rolled into the packages.
  3. Document the process of using the current RPMs from the Testing Repo to actually perform CentOS-6.5 to CentOS-7 upgrades.
  4. Update wiki.centos.org to contain the newly documented processes to perform the upgrades.

The SRPMs for these packages are here:

http://dev.centos.org/centos/6/upg/Source/SPackages/

The sources are also available from git.centos.org:

https://git.centos.org/project/rpms

And the specific packages are:

  • preupgrade-assistant : Git Branch c6
  • preupgrade-assistant-contents : Git Branch c6
  • redhat-upgrade-tool: Git Branch c6

Please test and document these packages and the process, and submit any required code changes to the CentOS-Devel mailing list.  If you need wiki.centos.org edit capability to create/update docs for the process, ask on the CentOS-Docs mailing list.

Note:  The state of this software is to be considered Beta at best … do NOT try to use it on ANYTHING even slightly important.

——————————————-

EDIT:  New packages are now pushed based on the changes from this mail:

http://lists.centos.org/pipermail/centos-devel/2014-July/011610.html

Instructions:

Please run preupg with "-s CentOS6_7".
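
For testers, a rough end-to-end sketch follows; the redhat-upgrade-tool invocation and flags below are my reading of the upstream docs linked above, so verify against the wiki page first, and run this only on a disposable machine:

# install the tool chain from the dev.centos.org testing repo mentioned above
yum install preupgrade-assistant preupgrade-assistant-contents redhat-upgrade-tool
# run the assessment with the CentOS content set, then review /root/preupgrade/result.html
preupg -s CentOS6_7
# kick off the upgrade (example repo URL; point it at a CentOS-7 install tree)
redhat-upgrade-tool --network 7 --instrepo http://mirror.centos.org/centos/7/os/x86_64/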

July 19, 2014

CentOSPlus kernel that mitigates CVE-2014-4699 now available

July 19, 2014 08:38 PM

CVE-2014-4699:
The Linux kernel before 3.15.4 on Intel processors does not properly restrict use of a non-canonical value for the saved RIP address in the case of a system call that does not use IRET, which allows local users to leverage a race condition and gain privileges, or cause a denial of service (double fault), via a crafted application that makes ptrace and fork system calls.

This issue affects CentOS-6 and -7 kernels. An upstream fix has now been applied to the CentOSPlus kernels.

CentOS-6:
kernel-2.6.32-431.20.3.0.1.el6.centos.plus.x86_64.rpm
kernel-2.6.32-431.20.3.0.1.el6.centos.plus.i686.rpm

CentOS-7:
kernel-plus-3.10.0-123.4.2.el7.centos.plus.0.1.x86_64.rpm

July 15, 2014

The CentOS-7 Release Announcement

July 15, 2014 11:08 AM

We would like to announce the general availability of CentOS Linux 7 for 64 bit x86 compatible machines.

This is the first release of CentOS-7 and is versioned as 7.0-1406.

First, please read through the release notes at http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7 – these notes contain important information about the release and details from the CentOS QA team about some of the content inside the release. These notes are updated regularly to include new issues and incorporate feedback from users.

———-
Updates, Sources and DebugInfos

Since the upstream EL7 release, there have been some updates released – these have been built and are being pushed to the CentOS mirror network at the moment. They will be available within the next 24 hrs. From this point on we will aim to deliver all updates within 24 to 48 hrs of upstream releases.

For the first time, this release was built from sources hosted at git.centos.org; however, since srpms are a byproduct of the build and are considered critical to the code and buildsys process, they are being published to match every rpm we release. Sources will be available from vault.centos.org in their own dedicated directories to match the corresponding binary rpms. Since there is far less traffic to the source rpms compared with the binary rpms, we are not putting this content on the main mirror network; however, if users wish to mirror this content they can do so using the reposync command available in the yum-utils package. All source rpms are signed with the same key used to sign their binary counterparts.
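
As a rough sketch of that mirroring step, assuming a Vault directory that carries its own repodata (the repo id, file name and exact baseurl below are illustrative, not verified):

# Illustrative only; adjust the baseurl to a Vault directory with repodata
cat > /etc/yum.repos.d/c7-source.repo <<'EOF'
[c7-source]
name=CentOS-7 source rpms
baseurl=http://vault.centos.org/7.0.1406/os/Source/
enabled=1
gpgcheck=0
EOF
reposync --repoid=c7-source --download_path=/srv/mirror/c7-source/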

Debuginfo packages are also being signed and pushed. They should be online by the end of this week, July 11th.

Yum configs for both sources and debuginfo packages are included in the default centos-release package on every install.

For the CentOS-7 build and release we adopted a very open process. The output of the entire buildsystem is made available, as it’s built, at http://buildlogs.centos.org/ – we hope to continue with that process for the life of CentOS-7, and to attempt to bring CentOS-5 and CentOS-6 builds into the same system.

———-
Numbering

CentOS 7.0-1406 introduces a new numbering scheme that we want to develop further over the life of CentOS-7. The 0 component maps to the upstream release whose code this release is built from. The 1406 component indicates the monthstamp of the code included in the release ( in this case, June 2014 ). By using a monthstamp we are able to respin and reissue updated media for things like container and cloud images, which are regularly refreshed, while still retaining a connection to the base distro version.

To make it easier for Special Interest Groups to further extend the CentOS Linux platform, we are also using component codes. The main distro is, therefore, titled ‘Core’. SIGs will be able to adopt any name they need and deliver it by overriding the base centos-release rpm.

———-
Download

In order to conserve donor bandwidth, and to make it possible to get the mirror content sync’d out as soon as possible, we recommend using torrents to get your initial installer images:

Details on the images are available on the mirrors at http://mirror.centos.org/centos/7/isos/x86_64/0_README.txt – that file clearly highlights the differences between the images, and when one might be more suitable than the others.

The size, sha256 sum and torrent for each ISO file:

* CentOS-7.0-1406-x86_64-DVD.iso
Size: 4148166656
Torrent: http://mirror.centos.org/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.torrent
sha256sum: ee505335bcd4943ffc7e6e6e55e5aaa8da09710b6ceecda82a5619342f1d24d9

* CentOS-7.0-1406-x86_64-Everything.iso
Size: 7062159360
Torrent: http://mirror.centos.org/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-Everything.torrent
sha256sum: 745a0a4a02147d8371b87dd09d402c7dc5fddc609caa7af44bc7b004de78c58a

* CentOS-7.0-1406-x86_64-GnomeLive.iso
Size: 1108344832
Torrent: http://mirror.centos.org/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-GnomeLive.torrent
sha256sum: 2e926343f55903060bb453d0d1d21158d92a623c21ad5f820cfa8f97095888bf

* CentOS-7.0-1406-x86_64-KdeLive.iso
Size: 1298137088
Torrent: http://mirror.centos.org/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-KdeLive.torrent
sha256sum: 2157f276efbfc6ae2e037c29092a065628ba8598fe4c2c9b2473b3a5cd5b9abd

* CentOS-7.0-1406-x86_64-livecd.iso
Size: 720371712
Torrent: http://mirror.centos.org/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-livecd.torrent
sha256sum: 89ef9fb1c5564ccbbbcc223369cea8bcebc84bb28464db812fe01b775f8cf779

* CentOS-7.0-1406-x86_64-NetInstall.iso
Size: 379584512
Torrent: http://mirror.centos.org/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-NetInstall.torrent
sha256sum: df6dfdd25ebf443ca3375188d0b4b7f92f4153dc910b17bccc886bd54a7b7c86

The iso files are also available for direct download from

http://mirror.centos.org/centos/7/isos/x86_64
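
Whichever route you use, it is worth verifying the downloaded image against its published checksum before installing; for example, for the DVD image, using the sum listed above:

# Verify the DVD image against its published sha256 sum
# (the two spaces between hash and filename are required)
echo "ee505335bcd4943ffc7e6e6e55e5aaa8da09710b6ceecda82a5619342f1d24d9  CentOS-7.0-1406-x86_64-DVD.iso" | sha256sum -c -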

———-
Coming Soon

We are currently working to extend the portfolio of content we deliver for a major release. In the past it’s only been iso media and install trees, but with CentOS-7 we are also going to deliver:

= Docker Images

= Cloud Images in vendor ecosystems ( HPCloud, RackSpace, AWS, Google Compute etc )

= Cloud Images for direct download and consumption in on-premise infra ( RDO/OpenStack, CloudStack, OpenNebula and Eucalyptus )

= Given the popularity of the minimal install ISO in CentOS-6, we are going to try and deliver a minimal install ISO for CentOS-7 as well. One key challenge here is that the installer image has grown to nearly 360MB, and getting enough content into a CD size image is proving hard.

= A community build system is in the works; we hope to have that functional by the end of this month ( July 2014 ), allowing us to set up a contributor base in the Special Interest Groups to extend and further develop layers and variants on CentOS Linux

= Special Interest Groups including Xen on CentOS, CentOS Storage and CentOS Atomic Host are starting to gain traction; expect to see content delivered from those groups in the near future.

= As a part of the expanded Core efforts, we are also going to attempt to deliver a CentOS-7 release for 32bit x86, ARM and PowerPC in the coming months.

If you are interested in joining any of these efforts, sign up for the CentOS-devel list at http://lists.centos.org/ and send in a self-intro email saying what areas you are interested in helping out with.

———-
Dojo

We try to organise Dojos in various parts of the world as one-day events, to bring together people who use CentOS and others who are keen to learn about CentOS. The day’s focus is on sharing technical knowledge and success stories. It’s also a great place to meet people, talk about upcoming technologies and learn how others are using them on CentOS Linux.

04th Aug ’14 Cologne, Germany : http://wiki.centos.org/Events/Dojo/Cologne2014
25th Aug ’14 Paris, France : http://wiki.centos.org/Events/Dojo/Paris2014
29th Oct ’14 Barcelona, Spain: http://wiki.centos.org/Events/Dojo/Barcelona2014

This autumn and winter we also hope to host Dojos in New York City, USA; Timisoara, Romania; and Bangalore, Pune and New Delhi in India. Please keep an eye on the page at http://wiki.centos.org/Events for details on these venues.

———-
Getting Help

The CentOS ecosystem is sustained by community driven help and guidance. The best place to start for new users is at http://wiki.centos.org/GettingHelp

———-
Contributors

This release was made possible by the hard work of many people. Foremost on that list are the Red Hat engineers, for producing a great distribution; without them, CentOS Linux would look very different.

The following people made exceptional contributions to the build and test release process for CentOS-7:

Akemi Matsuno-Yagi
Alain Reguera Delgado
Alan Bartlett
Andreas Thienemann
Anssi Johansson
Athmane Madjoudj
Bonnie King
Brian Stinson
Carl Trieloff
Christoph Galuschka
Fabian Arrotin
James Moger
Jeff Sheltren
Jim Perrin
Johnny Hughes Jr
Karanbir Singh
Karsten Wade
Kay Williams
Manuel Wolfshant
Marcus Moeller
Michael Scherer
Mike McLean
Pat Riehecky
Ralph Angenendt
Stephen John Smoogen
Trevor Hemsley
Tru Huynh
Tuomas Kuosmanen
Tuomo Soini
Tyler Parsons

———-
Thanks

I would also like to thank our donors and sponsors for their continued support of the project. It’s down to their help that we were able to deploy enough resources to run the Public QA process for CentOS-7; as a data point, we ran nearly 300 – 350mbps of sustained bandwidth for the last 3 weeks that we’ve had the Public QA running.

And thanks to everyone who contributed with ideas, code, test feedback and promoting CentOS into the ecosystem.

Enjoy!


Karanbir Singh,
Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc

