June 21, 2016

CentOS at 2016 Texas Linux Fest

June 21, 2016 06:25 PM

We will have a CentOS Booth at the 2016 Texas Linux Fest on July 8th and 9th in the Austin Texas Convention Center.

Please stop by the CentOS booth for some Swag and discussion.

We will also have several operational CentOS-7 Arm32 devices at the booth, including a Raspberry Pi 2, Raspberry Pi 3, CubieTruck (Cubieboard3) and CubieTruck Plus (Cubieboard5). These devices showcase our AltArch Special Interest Group, which produces ppc64, ppc64le, armhfp (Arm32), aarch64 (Arm64), and i686 (x86 32-bit) builds of CentOS-7.

We will also be glad to discuss the new things happening within the project, including a number of operational Special Interest Groups (SIGs) that produce add-on software for CentOS: the Xen hypervisor, OpenStack (via RDO), Storage (GlusterFS and Ceph), Software Collections, Cloud Images (AWS, Azure, Oracle, Vagrant boxes, KVM), and Containers (Docker and Project Atomic).

So if you have been using CentOS for the past 12 years, everything continues just as it always has (a long-lived, standard Linux distro with long-term support), now joined by all the new hypervisor, container and cloud capabilities.

May 24, 2016

Welcome CDN77.com as a CentOS Project Sponsor

May 24, 2016 01:00 PM

Over the years, the CentOS Project infra has exclusively been run on donated hardware, managed by the CentOS infra team from end to end. Most of the edge parts of the content delivery have happened from third party mirrors – these third party mirrors have helped massively in ensuring that we’re able to deliver content rapidly, in a verifiable way, across the world to any yum operation run from a CentOS Linux machine.

Since this network is so focused on delivering end-user content, and only carries released, signed content, when we set up buildlogs.centos.org as a way for developers and users to see early content (fresh from the Community Build service, or content that people were still working on), we decided not to put this content on the wide mirror network. There is a much smaller consumption base for this content, and the price of mirror disk space is very high for content that does not otherwise see much movement. This model worked fine; we ran a few machines behind buildlogs.centos.org for almost a year and a half before we started hitting capacity issues, mostly network latency, which was poor in parts of the world far from the US and EU where we ran these machines.

It was at this time that Oskar Gottlieb from CDN77.com got in touch offering to help seed some of our content! We were more than happy to take this up, but had to work through the differences between how their system works and what we had in place at the time. After a brief test, Fabian announced our move to serving all buildlogs.centos.org binary content from CDN77. You can read the announcement here: https://lists.centos.org/pipermail/centos-devel/2016-March/014552.html

What we are effectively doing is serving the repodata and metadata for buildlogs.centos.org from machines run on CentOS resources and managed by the CentOS infra team, while any actual content (rpms, images, etc.) that anyone downloads is offloaded to the CDN77 network across the world.

This has been hugely beneficial for us: it reduced the resources we needed to keep up with the growing demand for content from here, and it also allowed our associated SIG projects to rapidly seed out devel and testing content (e.g. the RDO Project offloads their TripleO images to buildlogs.centos.org, and those are then also served from the CDN77.com network).

I’d like to welcome CDN77.com to our sponsor network. It’s sponsors like them who have kept the project alive and well resourced over the years – it’s a network we continue to rely on extensively. If you use CentOS Linux and would like to join the sponsor network, please get in touch.

regards,

May 02, 2016

Generating multiple certificates with Letsencrypt from a single instance

May 02, 2016 10:00 PM

Recently I was discussing TLS everywhere with some people, and we got onto the Letsencrypt initiative. I had to admit that I had only tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle: while the most common use case is to install/run the letsencrypt client on your node so that it configures the web server directly, that's something I didn't want to have to deal with. I still think that proper web server configuration has to happen through cfgmgmt, not through another process (and the same goes for key/cert distribution, something for a different blog post maybe).

So if you are automatically (pushing|pulling) your web server configuration from $cfgmgmt, but want to use/deploy TLS certificates signed by letsencrypt, what can you do? Well, the good news is that you are not forced to let the letsencrypt client touch your configuration at all: you can use the "certonly" option to just generate the private key locally, send the CSR and get the signed cert back (and the whole chain too). One thing to know about letsencrypt is that the validation/verification process isn't the one you see from most companies providing CA/signing capabilities: as there is no ID/paper verification (or anything similar), the only validation for the domain/sub-domain you want a certificate for happens over an HTTP request (basically: create a file containing a challenge, serve a request from their "ACME" server[s] retrieving that file, and validate its content).

So what are our options then? The letsencrypt documentation mentions several plugins: manual (you copy the file with the challenge answer to the webserver yourself, then launch the validation process), standalone (doesn't work if you already have an httpd/nginx process running, as there will be a port conflict), or webroot (works fine, as it just writes the file itself under /.well-known/ in the DocumentRoot).

The webroot plugin seems easy, but as said, we don't even want to install letsencrypt on the web server[s]. Worse, suppose (and that's the case I had in mind) that you have multiple web nodes configured in a kind of CDN way: you don't want to distribute that challenge file to all the nodes for validation/verification (when using the "manual" plugin), yet you'd have to do it on all of them, as you don't know in advance which one will be hit by the ACME server.

So what about something centralized (where you'd run the letsencrypt client locally) for all your certs (including some with SANs), working in a transparent way? I came up with something like this:

Single Letsencrypt node

The idea would be to :

  • use a central node: let's call it central.domain.com (VM, docker container, make-your-choice-here) to launch the letsencrypt client
  • have the ACME server transparently hit one of the web servers, without any file being changed/uploaded on them
  • the server receiving the GET request for the challenge file proxies it to the central letsencrypt node as its backend
  • the ACME server is happy, and the signed certificates automatically become available on the centralized letsencrypt node

The good news is that it's possible, and even really easy to implement, through ProxyPass (for the httpd/Apache web server) or proxy_pass (for nginx-based setups).

For example, for the httpd vhost config for sub1.domain.com (three nodes in our example) we can just add this to the .conf file:

<Location "/.well-known/">
    ProxyPass "http://central.domain.com/.well-known/"
</Location>

So now, once in place everywhere, you can generate the cert for that domain on the central letsencrypt node (assuming that httpd is running on that node, and reachable from the "frontend" nodes, and that /var/www/html is indeed the DocumentRoot (default) for httpd on that node):

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub1.domain.com

The same applies if you run nginx instead (let's assume this for sub2.domain.com and sub3.domain.com): you just have to add a snippet to your vhost .conf file (and before the location / definition too):

location /.well-known/ {
    proxy_pass http://central.domain.com/.well-known/;
}

And then on the central node, do the same thing, except that you can add multiple -d options to get multiple SubjectAltName entries in the same cert:

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub2.domain.com -d sub3.domain.com

Transparent, smart, easy to do, and even something you can deploy only when you need to renew and then remove afterwards to get back to the initial config files (if you don't want those ProxyPass directives active all the time).

One more thing to know: once you have proper TLS in place, it's usually better to transparently redirect all requests hitting your http server to the https version. Most people will do that like this (next example for httpd/apache):

   RewriteEngine On
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

That's good, but when you renew the certificate you'll probably want to be sure that GET requests for /.well-known/* keep working over http (from the ACME server), so we can tune those rules a little (RewriteCond directives are cumulative, so the request will not be redirected if the URL starts with .well-known):

   RewriteEngine On
   RewriteCond $1 !^.well-known
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Different syntax, but the same principle for nginx (again a snippet, not the full configuration file for that server/vhost):

location /.well-known/ {
    proxy_pass http://central.domain.com/.well-known/;
}

location / {
    rewrite ^ https://$server_name$request_uri? permanent;
}

Hope you'll have found this useful, especially if you don't want to deploy letsencrypt everywhere but still want to use it to generate your keys/certs locally. Once done, you can distribute/push/pull those files (depending on your cfgmgmt), and don't forget to also implement proper monitoring for cert validity, and automation around that too (consider that your homework).
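As a small starting point for that homework, here is the kind of check you could run from cron or from the monitoring system itself. It's just a sketch; the certificate path is an example, so point it at wherever your cfgmgmt deploys the file:

openssl x509 -checkend 2592000 -noout -in /etc/pki/tls/certs/sub1.domain.com.crt \
  && echo "certificate still valid for at least 30 days" \
  || echo "certificate expires within 30 days : time to renew"

(2592000 is simply 30 days expressed in seconds; adjust the threshold to match your renewal window)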

April 28, 2016

IPv6 connectivity status within the CentOS.org infra

April 28, 2016 10:00 PM

Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org and also msync.centos.org.

The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over both (legacy) ipv4 and ipv6. Funny that I call ipv4 "legacy", as we have to admit it's still the default everywhere, even in 2016 with the available pools now exhausted.

While we already had some AAAA records for some of our public nodes (www.centos.org, for example), I started to "chase" after proper, native ipv6 connectivity for our nodes. That's where I had to get in touch with all our valuable sponsors. The first thing to say is that we'd like to thank them all for their support of the CentOS Project over the years: it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship!

With regard to ipv6 connectivity, the results of my quest were really varied: some DCs support ipv6 natively and even answer within 5 minutes when asked for a /64 subnet to be allocated, while others still aren't ipv6 ready. In the worst case the answer was "nothing ready and no plan for that"; sometimes the answer was something like "it's on the roadmap for 2018/2019".

The good news is that ~30% of our nodes behind msync.centos.org now have ipv6 connectivity, so the next step is to test our various configurations (distributed by puppet) and then our GeoIP redirection (done at the PowerDNS level, for which we'll then also add proper AAAA records).
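As a side note, once those AAAA records are published, anyone can verify what their resolver hands back for a given host with a simple dig query (just an illustrative check; for the hosts that are still ipv4-only there is nothing to see yet):

dig +short AAAA www.centos.org
dig +short AAAA mirror.centos.org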

Hopefully we'll have that tested and announced soon, along with ipv6 for the other public services we provide to you.

Stay tuned for more info about ipv6 deployment within centos.org !

April 13, 2016

CentOS Community Poster Designs!

April 13, 2016 06:20 PM

The CentOS Project is heading to Red Hat Summit, but we need your help! Since this event is kind of a big deal, we need to make sure our booth is appropriately dressed for the occasion. The theme for the community space around Summit draws its inspiration from some popular NASA JPL posters, and we’d like to showcase our community’s creativity by having your designs at our booth, and possibly by handing them out as giveaways during the conference. We at the Project are better builders and sys-admins than designers, so we’d love to see your suggestions and ideas.


If you have a design you’d like to see us use for the poster and booth work, you can submit it to the CentOS mailing list, or post it as a pull request to our community artwork repository. The designs need to be submitted by midnight (EST) on Monday, April 25th so that we have time to go through the submissions. Let’s see what sort of creativity is lurking out there!

April 07, 2016

Download Updated CentOS Atomic Host Today

April 07, 2016 03:24 PM

An updated version of CentOS Atomic Host (version 7.20160404) is now available for download, featuring significant updates to docker (1.9.1) and to the atomic run tool. Version 1.9 of the atomic run tool now includes support for storage backend migration, for downloading and deploying specific atomic tree versions, and for displaying process information from all containers running on a host.

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • docker-1.9.1-25.el7.centos.x86_64
  • kubernetes-1.2.0-0.9.alpha1.gitb57e8bd.el7.x86_64
  • kernel-3.10.0-327.13.1.el7.x86_64
  • atomic-1.9-4.gitff44c6a.el7.x86_64
  • flannel-0.5.3-9.el7.x86_64
  • ostree-2016.1-2.atomic.el7.x86_64
  • etcd-2.2.5-1.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (414 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (426 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO (731 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (1 GB) image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
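For the NoCloud case, a minimal seed image can be built with genisoimage. This is only a sketch, with throwaway example credentials you should change; the volume label has to be cidata for cloud-init to pick the image up:

cat > meta-data <<'EOF'
instance-id: atomic-host-001
local-hostname: atomic-host
EOF

cat > user-data <<'EOF'
#cloud-config
password: changeme
ssh_pwauth: True
chpasswd: { expire: False }
EOF

genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

Attach the resulting seed.iso as a CD-ROM alongside the qcow2 disk and cloud-init will configure the instance on first boot.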

Amazon Machine Images

Region Image ID
us-east-1 ami-22617648
us-west-2 ami-65659005
us-west-1 ami-68710d08
eu-west-1 ami-bb9616c8
eu-central-1 ami-f03fde9f
ap-southeast-1 ami-2da6734e
ap-northeast-1 ami-100f1f7e
ap-southeast-2 ami-b284a7d1
ap-northeast-2 ami-7a1dd414
sa-east-1 ami-0668e76a
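If you prefer scripting over the AWS console, launching one of these AMIs with the AWS CLI looks roughly like this; the instance type, key pair name and security group are placeholders for your own values:

aws ec2 run-instances --region us-east-1 --image-id ami-22617648 \
    --instance-type t2.medium --key-name my-keypair \
    --security-group-ids sg-xxxxxxxx --count 1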


SHA Sums

10e024927636863fd11e9a9427f9b552b6c67661f695f418b1228dda33bc6ed5 CentOS-Atomic-Host-7.1603-GenericCloud.qcow2 
00a3c556e11094a996f7e688609158aa6909181d34cc767a26a43e41d39a00a2 CentOS-Atomic-Host-7.1603-GenericCloud.qcow2.gz 
1ea638075f41f87751d123cc8cfe8860f6987e009b83d9692161209e2c2ce4af CentOS-Atomic-Host-7.1603-GenericCloud.qcow2.xz 
9f7717da7b6813b1b7a1f87c577c8977915a8c350c36fb64b1f26dcc60bf21eb CentOS-Atomic-Host-7.1603-Installer.iso
f227bcb447f3de1800faf08e453920fd739330cc942ba331467b2099026477f2 CentOS-Atomic-Host-7.1603-Vagrant-Libvirt.box
bc451f55a53e1df83b7556a123a99922ffd867c35eaba2dfc6bfd8aecc748472 CentOS-Atomic-Host-7.1603-Vagrant-Virtualbox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

February 26, 2016

New CentOS Atomic Host Images Available for Download

February 26, 2016 10:38 PM

An updated version of CentOS Atomic Host (version 7.20160224) is now available for download. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • kernel-3.10.0-327.10.1.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • atomic-1.6-6.gitca1e384.el7.x86_64
  • kubernetes-1.2.0-0.6.alpha1.git8632732.el7.x86_64
  • etcd-2.2.2-5.el7.x86_64
  • ostree-2016.1-2.atomic.el7.x86_64
  • docker-1.8.2-10.el7.centos.x86_64
  • flannel-0.5.3-9.el7.x86_64

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (421 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (435 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO (742 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (1 GB) image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region Image ID
sa-east-1 ami-059d1f69
ap-northeast-1 ami-c74644a9
ap-southeast-2 ami-bae8ced9
us-west-2 ami-3fb05d5f
ap-southeast-1 ami-6c4c850f
eu-central-1 ami-ce8663a1
eu-west-1 ami-451ea236
us-west-1 ami-fd62129d
us-east-1 ami-e6d5e88c
ap-northeast-2 ami-5732fc39

SHA Sums

d4e43826fc9f641272e589dfb8d979cd592809b34cdbdaee8b7abc9a09ff30d2 CentOS-Atomic-Host-7.1602-GenericCloud.qcow2
33bd4f732c2857c698bd00bc6db29ae2a4d7d0b768f0353d4e28a5c5ab1c999e CentOS-Atomic-Host-7.1602-GenericCloud.qcow2.gz
ee9d9b4d78906ea9c33b0b87c8ad3387e997b479626e64ffedfd3f415a84cded CentOS-Atomic-Host-7.1602-GenericCloud.qcow2.xz
39a548f95022a9ab100d64dbf3579d40c66add1bc56ca938b7dba38b73c2ea87 CentOS-Atomic-Host-7.1602-Installer.iso
2f965b2a502c3839b6be84dee5ee4e60328d9f074e1494ded58b418a309df060 CentOS-Atomic-Host-7.1602-Vagrant-Libvirt.box
bc976d197cac629fd68a6d8faf6bcfaeca8afd0020bf573ef343622a7ae1581b CentOS-Atomic-Host-7.1602-Vagrant-Virtualbox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

February 12, 2016

New CentOS Atomic Host Available

February 12, 2016 12:23 AM

An updated version of CentOS Atomic Host (version 7.20160203) is now available for download. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host 7.2.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • kernel-3.10.0-327.4.5.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • atomic-1.6-6.gitca1e384.el7.x86_64
  • kubernetes-1.0.3-0.2.gitb9a88a7.el7.x86_64
  • etcd-2.1.1-2.el7.x86_64
  • ostree-2015.9-2.atomic.el7.x86_64
  • docker-1.8.2-10.el7.centos.x86_64
  • flannel-0.5.3-8.el7.x86_64

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (416 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (428 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO (737 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (1 GB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available compressed in gz format (418 MB) and xz compressed (318 MB).

Amazon Machine Images

Region Image ID
sa-east-1 ami-238d0e4f
ap-northeast-1 ami-b54d4adb
ap-southeast-2 ami-27123544
us-west-2 ami-1f94747f
ap-southeast-1 ami-4319d720
eu-central-1 ami-40d6cd2c
eu-west-1 ami-430aba30
us-west-1 ami-dae791ba
us-east-1 ami-896653e3
ap-northeast-2 ami-961fd1f8

SHA Sums

4062ef213eed698ac8ec03b32a55dd6903721a44dc8d54a18513644f160ca7d4 CentOS-Atomic-Host-7.20160130-GenericCloud.qcow2
a7dd91736f45101e95e7d9a80c2eede9164eb0392c8c4748b08c98a42d3eda39 CentOS-Atomic-Host-7.20160130-GenericCloud.qcow2.gz
9eca81d3638e4e00fc734d7233b47a3af803237cc82e5a66b3a587552232dcdc CentOS-Atomic-Host-7.20160130-GenericCloud.qcow2.xz
be3c1a3326c04026f37bd6b6c2fccca3a285ea40ac663230624854abeaaee135 CentOS-Atomic-Host-7.20160130-Installer.iso
90942c3599e15ae21cdc0b1682b8e0d3fa88f8db2f6fdca0ece28c2bffdbb34f CentOS-Atomic-Host-7.20160130-Vagrant-Libvirt.box
ce674573f6d7020b3d04c51f070d7172e71b6a4316c1495c238a7eac0260cb5a CentOS-Atomic-Host-7.20160130-Vagrant-Virtualbox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

February 09, 2016

CentOS Project group on Facebook is over 20k users

February 09, 2016 12:59 PM

centos-facebook-members

The CentOS Project’s Facebook group at https://www.facebook.com/groups/centosproject/ just went over 20,000 users (it’s at 20,038 at the moment). A great milestone, and many thanks for all the support. A large chunk of the credit goes to Ljubomir Ljubojevic (https://www.facebook.com/ljubomir.ljubojevic) for his time and for curating the admin team on the Facebook group.

And a shout out to Bert Debruijn, Wayne Gray, Stephen Maggs, Eric Yeoh and the rest of the 20k users. Well done guys; the next step is the 40k mark, but more importantly – keep up the great help and community support you provide each other.

Regards,

February 08, 2016

Forcing CPU speed

February 08, 2016 11:19 PM

Most of the time tuned-adm can set fairly good power states, but I’ve noticed that when I want powersave as the active profile, to try and maximize battery life, it will still run with the ondemand governor. In some cases, e.g. when on a plane and spending all the time in a text editor, that’s not convenient (either due to other apps running, or when you really want to get that 7 hr battery life).

On CentOS Linux 7, you can use a bit of a hammer solution in the form of /bin/cpupower – installed as part of the kernel-tools rpm. This lets you force a specific cpu frequency range with the frequency-set command, using the -d (min speed) and -u (max speed) options, or just set a fixed rate with -f. As an example, here is what I do when getting on a plane:

/bin/cpupower frequency-set -u 800MHz

Things do get lethargic on the machine, but at 800MHz and with all the external devices / interfaces / network bits turned off, I can still squeeze about 5 hrs of battery life from my X1 Carbon gen2, which has:

model: 45N1703
voltage: 14.398 V
energy-full-design: 45.02 Wh
capacity: 67.3701%
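To check which limits and governor are actually in effect, and to lift the cap again once back on mains power, something like this works; the 3300MHz value below is just my CPU's maximum, so check yours with frequency-info first:

/bin/cpupower frequency-info
/bin/cpupower frequency-set -u 3300MHz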

Of course, you should still set “tuned-adm profile powersave” to get the other power-saving options, and watch powertop with your typical workload to get an idea of where there might be other tuning wins. And if anyone has thoughts on what to do when that battery capacity hits 50 – 60%… it does not look like the battery on this Lenovo X1 is replaceable (or even sold!).

regards,

January 26, 2016

EPEL round table at FOSDEM 2016

January 26, 2016 06:57 PM

As a follow-up to last year’s literally-a-discussion-in-the-hallway about EPEL with a few dozen folks at FOSDEM 2015, we’re doing a round table discussion with some of the same people and similar topics this Sunday at FOSDEM, “Wither EPEL? Harvesting the next generation of software for the enterprise” in the distro devroom. As a treat, Stephen Smoogen will be moderating the panel; Smooge is not only a long-time Fedora and CentOS contributor, he is one of us who started EPEL a decade ago.

If you are an EPEL user (for whatever operating system), a packager, an upstream project member who wants to see your software in EPEL, a hardware enthusiast wanting to see builds for your favorite architecture, etc. … you are welcome to join us. We’ll have plenty of time for questions and issues from the audience.

The trick is that EPEL is useful or crucial for a number of the projects now releasing on top of CentOS via the special interest group process (SIGs provide their community newer software on the slow-and-steady CentOS Linux.) This means EPEL is essential for work happening inside of the CentOS Project, but it remains a third-party repository. Figuring out all of the details of working together across the Fedora and CentOS projects is important for both communities.

Hope to see you there!

Getting Started with CentOS CI

January 26, 2016 01:46 AM

We have been building out a CentOS Community CI infra that is open to anyone working on infra code or areas related to CentOS Linux, and have now onboarded a few projects. You can see the web UI (Jenkins!) at https://ci.centos.org/.

Dusty has also put together a basic getting-started guide that goes into some of the specifics of how and why the CentOS CI infra works the way it does; check it out at http://dustymabe.com/2016/01/23/the-centos-ci-infrastructure-a-getting-started-guide/.

Regards,

Few changes in CentOS Atomic Host build scripts

January 26, 2016 01:36 AM

hi,

If you use the CentOS Atomic Host downstream build scripts at https://github.com/CentOS/sig-atomic-buildscripts, you will want to note a major change in the downstream branch. The older build_ostree_components.sh script has now been replaced with 3 scripts: build_stage1.sh, build_stage2.sh and build_sign.sh. Running build_stage1.sh followed by build_stage2.sh will give you exactly the same output as the old script did.

The third script, build_sign.sh, makes it easier to sign the ostree repo before any of the images are built. To use it, generate or import your GPG secret key, drop the resulting .gpg (public key) file into /usr/share/ostree/trusted.gpg.d/, edit the keyid at the end of the build_sign.sh script, and run the script after build_stage1.sh has completed (and before you run build_stage2.sh). You will see a pinentry window pop up; enter the password and check for a 0 exit code. Note that the gpg signature is a detached signature over the ostree commit.
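As an illustration, the whole flow looks roughly like this; the key id and the .gpg file name are made up here, so substitute your own:

# generate a new secret key, or import an existing one
gpg2 --gen-key
# export the public part into the location ostree trusts
gpg2 --export 0xDEADBEEF > /usr/share/ostree/trusted.gpg.d/atomic-sig.gpg
# set the keyid at the end of build_sign.sh, then run the three stages in order
./build_stage1.sh && ./build_sign.sh && ./build_stage2.sh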

regards,

January 17, 2016

Alternative Architectures Abound in CentOS 7 (1511)

January 17, 2016 03:33 PM

With the latest release of CentOS-7, we have added several new Alternative Architecture (AltArch) releases in addition to our standard x86_64 (x86 {Intel/AMD} 64-bit) architecture.

Architectures (aka arches) in Linux distributions refer to the type of CPU on which the distribution runs.  In the case of our standard release, it runs on x86 64-bit CPUs like Intel Pentium 64-bit and AMD 64-bit processors.  A few months ago, in the CentOS 7 (1503) release, we added the x86 32-bit (i686) as well as the Arm 64-bit (aarch64) architectures to CentOS-7.  These two arches have been updated to our latest CentOS-7 release (1511).

We have additionally added 3 new architectures to our latest release: Arm32 userland (armhfp), PowerPC 7 (ppc64) and PowerPC 8 LE (ppc64le). Here is the Release Announcement.

These new architectures bring the long-lived, community-based platform of our x86_64 releases to many new machine types. The CentOS team is very excited to be able to provide our code base for these architectures, and we need help from the community to make them all better.

We are hosting a CentOS Dojo in Brussels, Belgium on the 29th of January 2016. Many of the key people working on the AltArch builds will be present there, and it will be a great forum to engage with these groups. You can get the details for the event HERE, including the registration links. (Note: registrations are currently closed, but we are trying to find more space, so they could open again before the event.)

We will also have a booth at FOSDEM 2016, as well as talks in the Distributions DevRoom. See you there!

December 16, 2015

Fixing CentOS 7 systemd conflicts with docker

December 16, 2015 03:38 PM

With the release of 7.2, we’ve seen a rise in bugs filed for container build failures in docker. Not to worry, we have an explanation for what’s going on, and the solution for how to fix it.

The Problem:

You fire off a docker build, and instead of a shiny new container, you end up with an error message similar to:

Transaction check error:
file /usr/lib64/libsystemd-daemon.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-id128.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-journal.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-login.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libudev.so.1 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/security/pam_systemd.so from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64

This is due to the transition from systemd-container-* packages to actual systemd. For some reason, the upstream package doesn’t obsolete or conflict, and so you’ll get errors when installing packages.

The fix:

The fix for this issue is very simple. Do a fresh docker pull of the base container.

# docker pull centos:latest

or

# docker pull centos:7

Your base container will now be at the appropriate level, and won’t have this conflict on package installs so you can simply run your docker build again and it will work.

But I have to use 7.1.1503!

If for some reason you must use a point-in-time image like 7.1.1503, then a package swap will resolve things for you. 7.1.1503 comes with fakesystemd, which you must exchange for systemd. To do this, execute the following command in your Dockerfile, prior to installing any packages:

RUN yum clean all && yum swap -y fakesystemd systemd
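For instance, a minimal Dockerfile built on the point-in-time tag might look like the sketch below; the httpd install is only a placeholder for whatever packages you actually need:

FROM centos:7.1.1503
RUN yum clean all && yum swap -y fakesystemd systemd
RUN yum -y install httpd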

This will ensure you get the current package data, and will replace the fakesystemd package which is no longer needed. That’s all there is to solving the file conflicts and systemd dependency issues for CentOS base containers.


December 14, 2015

Kernel 3.10.0-327 issue on AMD Neo processor

December 14, 2015 11:00 PM

As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including the kids' workstations) to that version, and so to a newer kernel. Usually that's just a smooth operation, but sometimes backported or new features, especially in the kernel, can lead to strange issues. That's what happened with my older ThinkPad Edge: it's a cheap/small ThinkPad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has an AMD Athlon(tm) II Neo K345 Dual-Core Processor. So basically not a lot of horsepower, but still convenient for reading mail, connecting remotely through ssh, or browsing the web. When rebooting on the newer kernel, it panicked right away.

Two bug reports are open for this: one on the CentOS bug tracker, also linked to the upstream one. The current status is that there is no kernel update that fixes this yet, but there is an easy-to-implement workaround (consolidated in the short sketch after the list):

  • boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot on previous kernel)
  • once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=" .." line , in the file /etc/default/grub
  • as root, run grub2-mkconfig -o /etc/grub2.conf
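Put together, and assuming a standard BIOS-based CentOS 7 install (paths can differ on EFI systems), the permanent part of the workaround is roughly:

vi /etc/default/grub     # append initcall_blacklist=clocksource_done_booting to the GRUB_CMDLINE_LINUX=".." line
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot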

Hope it can help others too

December 05, 2015

CentOS Meetup in London 3rd Dec 2015

December 05, 2015 11:29 AM

Hi,

We now have a CentOS Users and Contributors group for the UK on meetup.com ( http://www.meetup.com/CentOS-UK/ ), and I hosted the inaugural meetup over beer a few days back. It was a great syncup, with lots of very interesting conversations. One thing that always comes through at these meetings, and that I really appreciate, is the huge diversity in the userbase, and the very different viewpoints and value propositions that people bring to the CentOS Linux platform and the larger ecosystem around it.

The main points that stuck with me over the evening were the CentOS Atomic Host ( https://wiki.centos.org/SpecialInterestGroup/Atomic/Download ) and the CentOS on ARM devices ( and the general direction of where ARM devices are going ). Stay tuned for more info on that in the next few weeks.

Looking forward now to the next London meetup ( likely 2nd week of Jan ’16 ), and also joining some meetings in other parts of the UK. Everyone is welcome to join, and I could certainly use help in organising meetups in other places around the UK. See you at a CentOS meetup soon.

Regards,

November 30, 2015

Kernel IO wait and megaraid controller

November 30, 2015 11:00 PM

Last Friday, while working on something else (the "CentOS 7 userland" release for Armv7hl boards), I got notifications from our Zabbix monitoring instance complaining about web scenarios failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (which check cpu iowait time). Usually you'd verify what is happening in the Virtual Machine itself, but even connecting to the VM was difficult and slow. Once connected, though, nothing looked strange: no real activity, not even on the disk (there are plenty of tools for this, but iotop is helpful to see which process is reading/writing to the disk), yet iowait was almost at 100%.

As said, it was happening suddenly for all Virtual Machines on the same hypervisor (a CentOS 6 x86_64 KVM host), and even the hypervisor itself was complaining (though less than the VMs) about iowait too. So obviously it wasn't a tuning issue at the hypervisor/VM level, but something else. That rang a bell: if you have a raid controller and, for example, its battery needs to be replaced, the controller can decide to disable all read/write caching, slowing down all IOs going to the disk.

At first sight there was no HDD issue, and the array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into the analysis.

That server has the following raid adapter :

03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 03)

That means that you need to use the MegaCLI tool for that.

A quick MegaCli64 -ShowSummary -a0 showed me that the underlying disks were indeed active, but my attention was caught by the fact that there was a "Patrol Read" operation in progress on a disk. I then discovered a useful page (bookmarked, as it's a gold mine) explaining the issue with the default settings of the "Patrol Read" operation. While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimized: (quoting that website) it "will take up to 30% of IO resources".

I decided to stop the currently running Patrol Read process with MegaCli64 -AdpPR -Stop -aALL, and I immediately saw the Virtual Machines' (and hypervisor's) iowait going back to normal. Here is the Zabbix graph for one of the impacted VMs; it's easy to guess when I stopped the underlying "Patrol Read" process:

VM iowait

That "Patrol Read" operation is scheduled to run by default once a week (168h), so your real options are to either disable it completely (through MegaCli64 -AdpPR -Dsbl -aALL) or, at least (advised), reduce its IO impact (for example to 5%: MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL).
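For reference, here are the relevant MegaCli invocations in one place; the -Info one is from memory and only meant to check whether a Patrol Read run is currently active, so treat it as a hint rather than gospel:

MegaCli64 -AdpPR -Info -aALL                  # current Patrol Read state and schedule
MegaCli64 -AdpPR -Stop -aALL                  # stop the run currently in progress
MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL  # cap its IO impact at 5%
MegaCli64 -AdpPR -Dsbl -aALL                  # or disable Patrol Read completely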

Never underestimate the impact of hardware settings (in the BIOS or, in this case, the raid hardware controller).

Hope it can help others too

November 24, 2015

CentOS Atomic Host Updated

November 24, 2015 06:46 PM

Today we’re announcing an update to CentOS Atomic Host (version 7.20151118), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host. Please note that this release is based on content derived from the upstream 7.1 release.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • kernel-3.10.0-229.20.1.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • atomic-1.6-6.gitca1e384.el7.x86_64
  • kubernetes-1.0.3-0.2.gitb9a88a7.el7.x86_64
  • etcd-2.1.1-2.el7.x86_64
  • ostree-2015.6-4.atomic.el7.x86_64
  • docker-1.8.2-7.el7.centos.x86_64
  • flannel-0.2.0-10.el7.x86_64

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (409 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (421 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO (673 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (934 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available compressed in gz format (408 MB) and xz compressed (323 MB).

Amazon Machine Images

Region Image ID
sa-east-1 ami-39348e55
ap-northeast-1 ami-cec7e4a0
ap-southeast-2 ami-5e421b3d
us-west-2 ami-cb6878aa
ap-southeast-1 ami-49a4652a
eu-central-1 ami-f72b399b
eu-west-1 ami-3c2ff54f
us-west-1 ami-48e88628
us-east-1 ami-19d59073

SHA Sums

cf7c5e67e18a3aaa27d1c6c4710bb9c45a62c80fb5e18a836a2c19758eb3d23e CentOS-Atomic-Host-7.20151101-GenericCloud.qcow2
92cf36f528ae00235ad6eb4ee0d0dd32ccf5f729f2c6c9a99a7471882effecaa CentOS-Atomic-Host-7.20151101-GenericCloud.qcow2.gz
263c1f403c352d31944ca8c814fd241693caa12dbd0656a22cdc3f04ca3ca8d1 CentOS-Atomic-Host-7.20151101-GenericCloud.qcow2.xz
dfe0c85efff2972d15224513adc75991aabc48ec8f8ad49dad44f8c51cfb8165 CentOS-Atomic-Host-7.20151101-Installer.iso
139eb88d6a5d1a54ae3900c5643f04c4291194d7b3fccf8309b8961bbd33e4ec CentOS-Atomic-Host-7.20151101-Vagrant-Libvirt.box
63ab56d08cdc75249206ad8a7ee3cdd51a226257c8a74053a72564c3ff3d91a0 CentOS-Atomic-Host-7.20151101-Vagrant-Virtualbox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

November 19, 2015

RHEL 7.2 released today

November 19, 2015 07:28 AM

Red Hat released their second point release to the EL7 series today. Most if not all of the sources seem to already be in place on git.centos.org, so we can start the rebuild and QA cycle. Red Hat release notes can be found here.

It is not yet decided whether we will do a CR release with the built packages first, or whether it will be a release with ISOs and all. Our reading of the errata released with 7.2 indicates no critical security update. We will post news on this matter here. Those errata can be found here.

As regulars know, this will take some time, and the next minor release of CentOS-7 will be done when it is done. So it could end up being either 7.1511 or 7.1512.

Stay tuned.

October 15, 2015

The portable cloud

October 15, 2015 12:32 AM

In late 2012 I constructed myself a bare-bones cluster of a couple of motherboards, stacked up and powered, to be used as a dev cloud. It worked, but it was a huge mess on the table, and it was certainly neither portable nor quiet. That didn't mean I would not carry it around – I did, across the Atlantic a few times and over to Asia once. It worked. Then in 2014 I gave the stack away, which caused a few issues, since living in a certain part of London means I must put up with a rather sad 3.5 Mbps ADSL link from BT. Had I been living in a rural setting, government grants etc. would ensure we get super-high-speed internet, but not in London.

I really needed my development and testing cluster back (since my work pattern had come to rely on it). Time to build a new one!

Late last summer the folks at Protocase kindly built me a cloud box to my specifications. This is a single case that can accommodate up to 8 mini-ITX (or 6 mini-ATX, which is what I am using) motherboards, along with all the networking kit for them and a disk each. It's not yet left the UK, but the box is reasonably well traveled within the country. If you come along to the CentOS Dojo in Belgium or the CentOS table at FOSDEM, you should see it there in 2016. Here you can see the machine standing on its side, with the built-in trolley for mobility.

Things to note here: you can see the ‘back’ of the box, with the power switches, the PSU with its 3 hot-swap modules, the 3 large case cooling fans and the cutout for the external network cable to enter the box. While there is only 1 PSU, the way things are cabled inside the box it's possible to power up to 4 channels individually, so with 8 boards you'd be able to power manage each pair on its own.

Box-1

Here is the empty machine as it was delivered. The awesome guys at Protocase pre-plumbed in the PSU and wired up the case fans (there are 3 at the back and 2 in the front; the ones in the front are wired from the PSU so they run all the time, whereas the back 3 are connected as regular case fans onto the motherboards, so they come up when the corresponding machine is running). I thought long and hard about moving the fans to the top/bottom, but since the machine lives vertically, this position gives me the best airflow. On the right side, opposite the PSU, you can see 4 mounting points; this is where the network switch goes.
Box-2

Close-up of the PSU used in this machine. I've load tested this with 6x i5 4690K boards and it works fine; I did test under load for a full 24 hrs, and next time I do that I'll get some wattage and amp readings as well. It's rated for 950W max, and I suspect anything more than 6 boards will get pretty close to that mark. Also worth keeping in mind is that this is meant to be a cloud or mass infra-testing machine; it's not built for large storage. Each board has its own 256GB SSD, and if I need additional storage, that will come over the network from a ceph/gluster setup outside.
Box-3

The PSU output is split and managed in multiple channels; you can see 3 of the 4 here, along with some of the spare case-fan lines.
Box-4

Another shot of the back 3 fans; you can also see the motherboard mounting points built into the base of the box. They put these in for mini-ITX / mini-ATX as well as regular ATX. I suspect it's possible to get 4 ATX boards in there, but it's going to be seriously tight and the case fans might need an upgrade.
Box-5

Close-up of the industrial trolley that is mounted onto the box (it's easy to remove when not needed; I just leave it on).
Box-6

The right side of the box hosts the network switch; this allows me to put the power cables on the left and back, with the network cables on the right and front. Each board has its own network port (as they do..), and I use a USB3-to-Gbit converter at the back to give me a second port. This then allows me to split public and private networks, or use one for storage and another for application traffic, etc. Since this picture was taken, I've stuck another 8-port switch on the front of this switch's cover, to give me the 16 ports I really need.
Box-7

Here is the rig with the first motherboard added in, with an Intel i5 4690K CPU. The board can do 32 GB; I had 16 in it then and have upgraded since.
Box-8

Now with everything wired up. There is enough space under the board to drive the network cables through.
Box-9

And with a second board added in, this time an AMD FX-8350. It's the only AMD in the mix; I wanted one to have the option to test with, while the rest of the rig is all Intels. The i5's have fewer cores, but overall far better power usage patterns, and they run cooler. With the box fully populated and running at max load, things get warm in there.
Box-10

The boards layer up on top of each other, with an offset; in the picture above, the Intel board is aligned to the top of the box, while the next tier board was aligned to the bottom side of the box. This gives the cpu fans a bit more head room, and has a massive impact on temperature inside the box. Initially I had just stacked them up 3 on each side – ambient temperature under sustained load was easily touching 40 deg C in the box. Staggering them brought the ambient temperature down to 34 deg C.

One key tip was Rich Jones discovering threaded rods: these fit right into the motherboard mounting points and run all the way through to the top of the box. You can then use nuts on the rod to hold the motherboard at whatever height you need.

If you fancy a box like this for yourself, give the guys at Protocase a call and ask for Stephen MacNeil; I highly recommend their work, the quality of which is excellent. In a couple of years' time I am almost certainly going to be back talking to them about the cloudybox2. And yes, they are the same guys who build the 45drives Storinator machine.

Update: the box runs pretty quiet. I typically only have 2 or 3 machines running in there, but even with all 6 running a heavy sustained load it's not massively loud; the airflow is doing its thing. The key thing there is that the front fans are set to ingest air – and they line up perfectly with the cpu placements, blowing directly at the heat sinks. I suspect the top-most tier of boards only gets about 50% of the airflow compared to the lower two tiers, but they also get the least utilisation of the lot.

enjoy!

October 13, 2015

CentOS Linux 7 32-bit x86 (i386) Architecture Released

October 13, 2015 05:04 PM

The Alternative Architecture Special Interest Group (AltArch SIG) is happy to announce the release of the x86 32-bit version of CentOS Linux 7. This architecture is also known as i386 or i686. You can get this version of CentOS from the INFO page.

This version of CentOS Linux 7 is for PAE-capable 32-bit machines, including x86-based IoT boards similar to the Intel Edison. It joins the 64-bit ARMv8 (aarch64) architecture as a fully released AltArch version.

Work within the AltArch SIG currently continues on the 32-bit ARMv7, 64-bit PPC little-endian, and 64-bit PPC big-endian architectures.



October 09, 2015

CentOS Linux 5 Update batch rate

October 09, 2015 03:54 PM

Hi,

We typically push updates in batches. A batch might be anywhere from 1 update rpm to hundreds (when there is a big update upstream), but most batches are in the region of 5 to 20 rpms. So how many batches have we done in the last year and a bit? Here is a graph depicting our update batch release rate from Jan 1st 2014 until today.

cl5-update-batch-rate

I’ve removed the numbers from the release rate and left the dates in, since it’s the trend that’s most interesting. In a few months’ time, once we hit the new year, I’ll update this to split by year so it’s easy to see how 2015 compared with 2014.

You can click the image above to get a better view. The blue segment represents batches built, and the orange represents batches released.

regards,

October 06, 2015

CentOS Atomic Host in AWS via Vagrant

October 06, 2015 12:56 PM

Hi,

You may have seen the announcement that CentOS Atomic Host 15.10 is now available ( if not, go read the announcement here : http://seven.centos.org/2015/10/new-centos-atomic-host-release-available-now/ ).

You can get the Vagrant boxes for this image via the Atlas / Vagrant Cloud process, or just via direct downloads from http://cloud.centos.org/centos/7/atomic/images/.

What I’ve also done this time is create a vagrant-aws box that references the AMIs in the regions where they are published. This is hand crafted and really just a PoC-like effort, but if it’s something people find helpful I can plumb it into the main image generation process and ensure we get this done for every release.

QuickStart
Once you have vagrant running on your machine, you will need the vagrant-aws plugin. You can install this with:

vagrant plugin install vagrant-aws

and check it's there with:

vagrant plugin list

You can then add the box with "vagrant box add centos/atomic-host-aws". Before we can instantiate the box, we need a local config with the aws credentials. So create a directory, and add the following into a Vagrantfile there:

Vagrant.configure(2) do |config|
  config.vm.box = "centos/atomic-host-aws"
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "Your AWS EC2 Key"
    aws.secret_access_key = "Your Secret Key"
    aws.keypair_name = "Your keypair name"
    override.ssh.private_key_path = "Path to key"
  end
end


Once you have those lines populated with your own information, you should now be able to run
vagrant up --provider aws

It takes a few minutes to spin up the instance. Once done, you should be able to “vagrant ssh” and use the machine. Just keep in mind that you want to terminate any unused instances, since stopping will only suspend them; a real vagrant destroy is needed to release the EC2 resources.
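To make that concrete (a minimal sketch, run from the directory holding the Vagrantfile above):

# vagrant halt only stops the EC2 instance; the resources still exist
vagrant halt

# vagrant destroy terminates the instance and releases the EC2 resources
vagrant destroy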

Note: this box is set up with the folder sync feature turned off. Also, the AMIs per region are specified in the box itself; if you want to use a specific region, just add an aws.region = "" line to your local Vagrantfile, and everything else should get taken care of.

You can read more about the aws provider for vagrant here : https://github.com/mitchellh/vagrant-aws

Let me know how you get on with this; if folks find it useful, we can start generating these for all our Vagrant images.

October 05, 2015

New CentOS Atomic Host Release Available Now

October 05, 2015 06:08 PM

Today we’re announcing an update to CentOS Atomic Host (version 7.20151001), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • kernel-3.10.0-229.14.1.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • atomic-1.0-115.el7.x86_64
  • kubernetes-1.0.3-0.1.gitb9a88a7.el7.x86_64
  • flannel-0.2.0-10.el7.x86_64
  • docker-1.7.1-115.el7.x86_64
  • etcd-2.1.1-2.el7.x86_64
  • ostree-2015.6-4.atomic.el7.x86_64

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade
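If you want to see which tree you are booted into, or step back after an upgrade, the ostree tooling in the image covers that too (a quick sketch, not part of the release notes above):

# list the booted deployment and the rollback target
$ sudo rpm-ostree status

# go back to the previous deployment if the new tree misbehaves
$ sudo atomic host rollback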

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (389 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (400 MB) are Vagrant boxes for the Libvirt and VirtualBox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with Vagrant installed:

  vagrant init centos/atomic-host && vagrant up --provider virtualbox 
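The Libvirt box works the same way; assuming the vagrant-libvirt plugin is installed, something like the following should do it:

  vagrant init centos/atomic-host && vagrant up --provider libvirt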

ISO

The installer ISO (672 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (393 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud ISO image. The Generic Cloud image is also available gz-compressed (391 MB) and xz-compressed (390 MB).
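As a rough sketch of that NoCloud route (the hostname, password and file names below are just placeholders), you can build a small seed ISO that cloud-init picks up on first boot:

# cloud-init's NoCloud datasource expects a meta-data and a user-data file
cat > meta-data <<EOF
instance-id: atomic-host-001
local-hostname: atomic-host
EOF

cat > user-data <<EOF
#cloud-config
password: changeme
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# the volume label must be "cidata" for the NoCloud source to find the seed
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

Attach the resulting seed.iso as a CD-ROM alongside the qcow2 disk, and the instance will configure itself on boot.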

Amazon Machine Images

Region Image ID
------ --------
sa-east-1 ami-1b52c506
ap-northeast-1 ami-3428b634
ap-southeast-2 ami-43f2bb79
us-west-2 ami-73eaf043
ap-southeast-1 ami-346f7966
eu-central-1 ami-7ed1d363
eu-west-1 ami-3936034e
us-west-1 ami-6d9c5a29
us-east-1 ami-951452f0

SHA Sums

96586e03a1a172195eae505be35729c1779e137cd1f8c11a74c7cf94b0663cb2 CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2
33d338bb42ef916a40ac89adde9c121c98fbd4220b79985f91b47133310aa537 CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2.gz
73184e6f77714472f63a7c944d3252aadc818ac42ae70dd8c2e72e7622e4de95 CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2.xz
4e09f6dfae5024191fec9eab799861d87356a6075956d289dcb31c6b3ec37970 CentOS-Atomic-Host-7.20151001-Installer.iso
92932e9565b8118d7ca7cfbe8e18b6efd53783853cc75dae9ad5566c6e0d9c88 CentOS-Atomic-Host-7.20151001-Vagrant-Libvirt.box
8f626bdafaecb954ae3fab6a8a481da1b3ebb8f7acf6e84cf0b66771a3ac3a65 CentOS-Atomic-Host-7.20151001-Vagrant-Virtualbox.box
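Since these are sha256 checksums, verifying a download is a one-liner; for example, for the qcow2 image:

sha256sum CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2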

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

October 01, 2015

Progress on the Software Collections SIG

October 01, 2015 10:57 AM

Hi,

The Software Collections special interest group ( https://wiki.centos.org/SpecialInterestGroup/SCLo ) has been making great progress and has finished its initial bootstrap process. They are now getting ready to do a mass build for test and release. I’ve just delivered their rpm signing key, so we are pretty close to seeing content on mirror.centos.org.
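Once that signed content starts landing, checking it is the usual rpm routine (the key name and path below are illustrative only, not a confirmed location):

# import the SCLo SIG signing key (name/path is a guess until the release lands)
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-SCLo

# then verify the signature on a downloaded package
rpm -K some-scl-package.rpm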

As an initial goal, they are working on and delivering rpms – but in parallel, efforts are under way to get container images into the registries as well, so folks using containers today can consume the software collections in either format.

The effort is being co-ordinated by Honza Horak ( https://twitter.com/HorakHonza ), and he’s the best person to get in touch with to join and help.

Regards,

September 23, 2015

CentOS AltArch SIG status

September 23, 2015 10:00 PM

Recently I had (from the Infra side) to start deploying KVM guests for the ppc64 and ppc64le arches, so that AltArch SIG contributors could start bootstrapping the CentOS 7 rebuild for those arches. I'll probably write a tech review about Power8 and the fact that you can just use libvirt/virt-install to quickly provision new VMs on PowerKVM, but I'll do that in a separate post.

In parallel to ppc64/ppc64le, armv7hl has interested some community members, and the discussion/activity around that arch happens on the dedicated mailing list. It's slowly coming along, and some users have already reported using it on some boards (but packages are still unsigned and there are no updates packages yet).

Last (but not least) in this AltArch list is i686: Johnny built all the packages, and they are already publicly available on buildlogs.centos.org, each time in parallel to the x86_64 version. It seems that respinning the ISO for that arch and some final tests are the only things left to do.

If you're interested in participating in AltArch (and have a special interest in a specific arch/platform), feel free to discuss that on the centos-devel list!

September 16, 2015

CentOS Dojo in Barcelona

September 16, 2015 10:00 PM

So, thanks to the folks from OpenNebula, we'll have another CentOS Dojo in Barcelona on Tuesday 20th October 2015. That event will be colocated with the OpenNebulaConf happening in the days after the Dojo. If you're attending the OpenNebulaConf, or if you're just in the area and would like to attend the CentOS Dojo, feel free to register.

Regarding the Dojo content, I'll myself be giving a presentation about SELinux: covering a little bit of an intro (still needed for some folks afraid of using it, don't know why, but we'll change that...) about SELinux itself, how to run it on bare metal and virtual machines, and there will be some slides for the mandatory container hype thing. We'll also cover managing SELinux booleans/contexts, etc. through your config management solution (we'll cover Puppet and Ansible, as those are the two I'm using on a daily basis), and also how to build and deploy custom SELinux policies with your config management solution.

On the other hand, if you're a CentOS user and would like to give a talk during that Dojo yourself, feel free to submit one! More information about the Dojo is on the dedicated wiki page.

See you there!

September 10, 2015

Our second stable Atomic Host release

September 10, 2015 10:41 PM

Jason just announced our second stable CentOS Atomic Host release at http://seven.centos.org/2015/09/announcing-a-new-release-of-centos-atomic-host/

I’m very excited about this one, and it’s not only because I’ve helped make it happen – this is also the first time a SIG in the CentOS ecosystem has done a full release, from rpms, to images, to hosted vendor space (AMIs in 9 regions on Amazon’s EC2).

One of the other things that I’ve been really excited about is that this is the first time we’ve used the rpm-sign infra I’ve been working on these past few days. It allows SIG-built content (rpms, images, ISOs or even text) to be signed with pre-selected keys, and to do so without having to compromise the key trust level. I will blog more about this process, how SIGs can consume these keys, and how this maps to the TAG model being used in cbs.centos.org.

For now, go get started with the CentOS Atomic Host!

regards,

CentOS Dojo in Barcelona, 20th Oct 2015

September 10, 2015 10:10 PM

Hi,

We have a Dojo coming up in Barcelona, co-located with the OpenNebula conference in late October. The event is going to run from 1:30pm to 6:30pm (but I suspect it won’t really end till well into the early hours of the morning, as people keep talking about CentOS things over drinks, dinner, more drinks, etc.!).

You can get the details, including howto register at https://wiki.centos.org/Events/Dojo/Barcelona2015.

Fabian is going to be there, and we are talking to a great set of potential speakers – the focus is going to be very much on hands-on learning about technologies on and around CentOS Linux! And as in the past, we expect the content to be specific to sysadmin / operations folks rather than developers (although we highly encourage developers to come along as well, to talk to us and share their experiences with the sysadmin world!).

regards,


Powered by Planet!
Last updated: June 25, 2016 06:30 AM