February 09, 2016

CentOS Project group on Facebook is over 20k users

February 09, 2016 12:00 PM

centos-facebook-members

The CentOS Project’s Facebook group at https://www.facebook.com/groups/centosproject/ just went over 20,000 users ( it's at 20,038 at the moment ). A great milestone, and many thanks for all the support. A large chunk of the credit goes to Ljubomir Ljubojevic ( https://www.facebook.com/ljubomir.ljubojevic ) for his time and for curating the admin team on the Facebook group.

And a shout out to Bert Debruijn, Wayne Gray, Stephen Maggs, Eric Yeoh and the rest of the 20k users. Well done guys, next step is the 40k mark, but more importantly – keep up the great help and community support you guys provide each other.

Regards,

February 08, 2016

Forcing CPU speed

February 08, 2016 11:19 PM

Most of the time tuned-adm can set fairly good power states, but I’ve noticed that when I want powersave as the active profile, to try and maximize battery life, it will still run with an ondemand governor. In some cases, e.g. when on a plane and spending all the time in a text editor, that's not convenient ( either due to other apps running, or when you really want to get that 7 hr battery life ).

On CentOS Linux 7, you can use a bit of a hammer solution in the form of /bin/cpupower, installed as part of the kernel-tools rpm. This will let you force a specific CPU frequency range with the frequency-set command and its -d ( min speed ) and -u ( max speed ) options, or just set a fixed rate with -f. As an example, here is what I do when getting on the plane:

/bin/cpupower frequency-set -u 800MHz

Stuff does get lethargic on the machine, but at 800MHz and with all the external devices / interfaces / network bits turned off, I can still squeeze about 5 hrs of battery life from my X1 Carbon gen2, which has:

model: 45N1703
voltage: 14.398 V
energy-full-design: 45.02 Wh
capacity: 67.3701%

Of course, you should still set “tuned-adm profile powersave” to get the other power-save options, and watch powertop with your typical workload to get an idea of where there might be other tuning wins. And if anyone has thoughts on what to do when that battery capacity hits 50 – 60%… it does not look like the battery on this Lenovo X1 is replaceable ( or even sold! ).
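For reference, the whole on-the-plane routine can be sketched as a short shell sequence ( a sketch only: RUN="echo" makes it a harmless dry run that just prints the commands; clear RUN and run as root to actually apply them ):

```shell
# Dry-run sketch of the on-the-plane power routine described above.
RUN="echo"   # set RUN="" and run as root to actually apply

$RUN tuned-adm profile powersave        # the non-CPU power-save tunings
$RUN cpupower frequency-set -u 800MHz   # cap the maximum CPU frequency
$RUN cpupower frequency-info --policy   # confirm the current governor and limits
```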

regards,

January 26, 2016

EPEL round table at FOSDEM 2016

January 26, 2016 06:57 PM

As a follow-up to last year’s literally-a-discussion-in-the-hallway about EPEL with a few dozen folks at FOSDEM 2015, we’re doing a round table discussion with some of the same people and similar topics this Sunday at FOSDEM: “Wither EPEL? Harvesting the next generation of software for the enterprise” in the distro devroom. As a treat, Stephen Smoogen will be moderating the panel; Smooge is not only a long-time Fedora and CentOS contributor, he is also one of the people who started EPEL a decade ago.

If you are an EPEL user (for whatever operating system), a packager, an upstream project member who wants to see your software in EPEL, a hardware enthusiast wanting to see builds for your favorite architecture, etc. … you are welcome to join us. We’ll have plenty of time for questions and issues from the audience.

The catch is that EPEL is useful, or even crucial, for a number of the projects now releasing on top of CentOS via the special interest group process ( SIGs provide their communities newer software on top of the slow-and-steady CentOS Linux ). This means EPEL is essential for work happening inside the CentOS Project, yet it remains a third-party repository. Figuring out the details of working together across the Fedora and CentOS projects is important for both communities.

Hope to see you there!

Getting Started with CentOS CI

January 26, 2016 01:46 AM

We have been building out a CentOS Community CI infrastructure, open to anyone working on infra code or areas related to CentOS Linux, and have now onboarded a few projects. You can see the web UI ( Jenkins! ) at https://ci.centos.org/.

Dusty has also put together a basic getting-started guide that goes into some of the specifics of how and why the CentOS CI infra works the way it does; check it out at http://dustymabe.com/2016/01/23/the-centos-ci-infrastructure-a-getting-started-guide/.

Regards,

Few changes in CentOS Atomic Host build scripts

January 26, 2016 01:36 AM

hi,

If you use the CentOS Atomic Host downstream build scripts at https://github.com/CentOS/sig-atomic-buildscripts you will want to note a major change in the downstream branch. The older build_ostree_components.sh script has now been replaced with 3 scripts: build_stage1.sh, build_stage2.sh and build_sign.sh. Running build_stage1.sh followed by build_stage2.sh will give you exactly the same output as the old script did.

The third script, build_sign.sh, makes it easier to sign the ostree repo before any of the images are built. To use this, generate or import your GPG secret key, drop the resulting .gpg file into /usr/share/ostree/trusted.gpg.d/, edit the keyid at the end of the build_sign.sh script, and run the script after your build_stage1.sh is complete ( and before you run build_stage2.sh ). You will notice a pinentry window pop up; enter the password, and check for a zero exit code. Note that the GPG signature is a detached signature for the ostree commit.
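Put together, the flow looks roughly like this ( a sketch only: mykey.sec / mykey.gpg are placeholder file names, and the keyid edit in build_sign.sh is still the manual step described above ):

```shell
# Rough shape of the signed-build flow; key file names are placeholders.
gpg2 --import mykey.sec                         # import your GPG secret key
cp mykey.gpg /usr/share/ostree/trusted.gpg.d/   # trust the public part
# ( now edit the keyid at the end of build_sign.sh by hand )
./build_stage1.sh    # build the ostree content
./build_sign.sh      # detached GPG signature on the ostree commit; expect a 0 exit
./build_stage2.sh    # build the images from the signed repo
```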

regards,

January 17, 2016

Alternative Architectures Abound in CentOS 7 (1511)

January 17, 2016 03:33 PM

With the latest release of CentOS-7, we have added several new Alternative Architecture (AltArch) releases in addition to our standard x86_64 (x86 {Intel/AMD} 64-bit) architecture.

Architectures (aka arches) in Linux distributions refer to the type of CPU on which the distribution runs.  In the case of our standard release, it runs on x86 64-bit CPUs like Intel Pentium 64-bit and AMD 64-bit processors.  A few months ago, in the CentOS 7 (1503) release, we added the x86 32-bit (i686) as well as the Arm 64-bit (aarch64) architectures to CentOS-7.  These two arches have been updated to our latest CentOS-7 release (1511).

We have additionally added 3 new architectures in this latest release: Arm32 Userland (armhfp), PowerPC 7 (ppc64) and PowerPC 8 LE (ppc64le).  Here is the Release Announcement.

These new architectures provide a long-lived, community-based platform, built from our x86_64 releases, for many new machine types.  The CentOS team is very excited to be able to provide our code base for these architectures, and we need help from the community to make them all better.

We are hosting a CentOS Dojo in Brussels, Belgium on the 29th Jan 2016. Lots of the key people working on the AltArch builds will be present there and it would be a great forum to engage with these groups. You can get the details for the event HERE, including the registration links. (Note: Registrations are currently closed, but we are trying to find more space, so they could open before the event)

We will also have a booth at FOSDEM 2016, as well as talks in the Distributions DevRoom, see you there.

December 16, 2015

Fixing CentOS 7 systemd conflicts with docker

December 16, 2015 03:38 PM

With the release of 7.2, we’ve seen a rise in bugs filed for container build failures in docker. Not to worry, we have an explanation for what’s going on, and the solution for how to fix it.

The Problem:

You fire off a docker build, and instead of a shiny new container, you end up with an error message similar to:

Transaction check error:
file /usr/lib64/libsystemd-daemon.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-id128.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-journal.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-login.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libudev.so.1 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/security/pam_systemd.so from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64

This is due to the transition from the systemd-container-* packages to actual systemd. For some reason, the upstream package doesn’t declare an Obsoletes or Conflicts for the older packages, and so you’ll get errors when installing packages.

The fix:

The fix for this issue is very simple. Do a fresh docker pull of the base container.

# docker pull centos:latest

or

# docker pull centos:7

Your base container will now be at the appropriate level and won’t have this conflict on package installs, so you can simply run your docker build again and it will work.

But I have to use 7.1.1503!

If for some reason you must use a point-in-time image like 7.1.1503, then a package swap will resolve things for you. 7.1.1503 comes with fakesystemd, which you must exchange for systemd. To do this, execute the following command in your Dockerfile, prior to installing any packages:

RUN yum clean all && yum swap fakesystemd systemd

This will ensure you get the current package data, and will replace the fakesystemd package which is no longer needed. That’s all there is to solving the file conflicts and systemd dependency issues for CentOS base containers.
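As a worked example, a minimal Dockerfile using the pinned image might look like this ( a sketch: the httpd install is just an illustrative payload, and -y is added so the swap runs unattended in a build ):

```dockerfile
FROM centos:7.1.1503

# replace fakesystemd with the real systemd before any other installs
RUN yum clean all && yum -y swap fakesystemd systemd

# normal package installs now work without the file conflicts
RUN yum -y install httpd
```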


December 14, 2015

Kernel 3.10.0-327 issue on AMD Neo processor

December 14, 2015 11:00 PM

As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including the kids' workstations) with that version, and also the newer kernel. Usually that's just a smooth operation, but sometimes backported or new features, especially in the kernel, can lead to some strange issues. That's what happened with my older ThinkPad Edge: it's a cheap/small ThinkPad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has an AMD Athlon(tm) II Neo K345 Dual-Core Processor. So basically not a lot of horsepower, but still something convenient just to read your mail, remotely connect through ssh, or browse the web. When rebooting into the newer kernel, it panics right away.

Two bug reports are open for this, one on the CentOS Bug tracker, linked also to the upstream one. The current status is that there is no kernel update that will fix this, but there is an easy-to-implement workaround:

  • boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot on previous kernel)
  • once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=" .." line , in the file /etc/default/grub
  • as root, run grub2-mkconfig -o /etc/grub2.conf
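Steps two and three can be scripted. Here is a minimal sketch that works on a throw-away copy of /etc/default/grub so it can be dry-run safely; the sample file contents are only illustrative, and you would point this at the real file ( as root ) and then run grub2-mkconfig as in the last step:

```shell
# Work on a sample copy of /etc/default/grub ( contents are typical
# CentOS 7 defaults, used here only for illustration ).
GRUB_FILE=$(mktemp)
cat > "$GRUB_FILE" <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
EOF

# append the workaround parameter just before the closing quote
sed -i 's/^\(GRUB_CMDLINE_LINUX=.*\)"$/\1 initcall_blacklist=clocksource_done_booting"/' "$GRUB_FILE"

grep GRUB_CMDLINE_LINUX "$GRUB_FILE"
```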

Hope it can help others too

December 05, 2015

CentOS Meetup in London 3rd Dec 2015

December 05, 2015 11:29 AM

Hi,

We now have a CentOS Users and Contributors group for the UK on meetup.com ( http://www.meetup.com/CentOS-UK/ ), and I hosted the inaugural meetup over beer a few days back. It was a great syncup, with lots of very interesting conversations. One thing that always comes through at these meetings, and that I really appreciate, is the huge diversity in the userbase, and the very different viewpoints and value propositions that people see in the CentOS Linux platform and the larger ecosystem around it.

The main points that stuck with me over the evening were the CentOS Atomic Host ( https://wiki.centos.org/SpecialInterestGroup/Atomic/Download ) and the CentOS on ARM devices ( and the general direction of where ARM devices are going ). Stay tuned for more info on that in the next few weeks.

Looking forward now to the next London meetup ( likely 2nd week of Jan ’16 ), and also joining some meetings in other parts of the UK. Everyone is welcome to join, and I could certainly use help in organising meetups in other places around the UK. See you at a CentOS meetup soon.

Regards,

November 30, 2015

Kernel IO wait and megaraid controller

November 30, 2015 11:00 PM

Last Friday, while working on something else (the "CentOS 7 userland" release for Armv7hl boards), I got notifications from our Zabbix monitoring instance complaining about web scenarios failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (which check the cpu iowait time). Usually you'd verify what happens in the Virtual Machine itself, but even connecting to the VM was difficult and slow. Once connected, though, nothing looked strange: no real activity, not even on the disk (plenty of tools for this, but iotop is helpful to see which process is reading/writing to the disk in such a case), yet iowait was almost at 100%.

As said, it was happening suddenly for all Virtual Machines on the same hypervisor (a CentOS 6 x86_64 KVM host), and even the hypervisor itself was suddenly complaining about iowait too (though less than the VMs). So obviously it wasn't something mis-tuned at the hypervisor/VM level, but something else. That rang a bell: if you have a RAID controller and its battery needs to be replaced, for example, the controller can decide to stop all read/write caching, slowing down all IO going to the disks.

At first sight there was no HDD issue, and the array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into the analysis.

That server has the following raid adapter :

03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 03)

That means that you need to use the MegaCLI tool for that.

A quick MegaCli64 -ShowSummary -a0 showed me that the underlying disks were indeed active, but my attention was caught by the fact that there was a "Patrol Read" operation in progress on a disk. I then discovered a useful page (bookmarked, as it's a gold mine) explaining the issue with the default settings and the "Patrol Read" operation. While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimized: (from that website) it "will take up to 30% of IO resources".

I decided to stop the currently running patrol read process with MegaCli64 -AdpPR -Stop -aALL and immediately saw the Virtual Machines' (and hypervisor's) iowait going back to normal. Here is the Zabbix graph for one of the impacted VMs; it's easy to guess when I stopped the underlying "Patrol Read" process:

VM iowait

That "Patrol Read" operation is scheduled to run by default once a week (168h), so your real options are to either disable it completely (through MegaCli64 -AdpPR -Dsbl -aALL) or, at least (advised), change the IO impact (for example to 5%: MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL).
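For reference, the MegaCli commands from this post collected in one place ( a sketch: MegaCli64 needs root and LSI's MegaCLI package, and the 5% rate is just the example value used above ):

```shell
MegaCli64 -ShowSummary -a0                     # adapter 0 overview: disks, BBU, running ops
MegaCli64 -AdpPR -Info -aALL                   # current Patrol Read state and schedule
MegaCli64 -AdpPR -Stop -aALL                   # stop a Patrol Read running right now
MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL   # advised: cap its IO impact ( here 5% )
MegaCli64 -AdpPR -Dsbl -aALL                   # or disable Patrol Read completely
```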

Never underestimate the power of hardware settings (in the BIOS, or in this case the RAID hardware controller).

Hope it can help others too

November 24, 2015

CentOS Atomic Host Updated

November 24, 2015 06:46 PM

Today we’re announcing an update to CentOS Atomic Host (version 7.20151118), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host. Please note that this release is based on content derived from the upstream 7.1 release.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • kernel-3.10.0-229.20.1.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • atomic-1.6-6.gitca1e384.el7.x86_64
  • kubernetes-1.0.3-0.2.gitb9a88a7.el7.x86_64
  • etcd-2.1.1-2.el7.x86_64
  • ostree-2015.6-4.atomic.el7.x86_64
  • docker-1.8.2-7.el7.centos.x86_64
  • flannel-0.2.0-10.el7.x86_64

Upgrading

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (409 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (421 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO (673 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (934 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available compressed in gz format (408 MB) and xz compressed (323 MB).

Amazon Machine Images

Region Image ID
sa-east-1 ami-39348e55
ap-northeast-1 ami-cec7e4a0
ap-southeast-2 ami-5e421b3d
us-west-2 ami-cb6878aa
ap-southeast-1 ami-49a4652a
eu-central-1 ami-f72b399b
eu-west-1 ami-3c2ff54f
us-west-1 ami-48e88628
us-east-1 ami-19d59073

SHA Sums

cf7c5e67e18a3aaa27d1c6c4710bb9c45a62c80fb5e18a836a2c19758eb3d23e CentOS-Atomic-Host-7.20151101-GenericCloud.qcow2
92cf36f528ae00235ad6eb4ee0d0dd32ccf5f729f2c6c9a99a7471882effecaa CentOS-Atomic-Host-7.20151101-GenericCloud.qcow2.gz
263c1f403c352d31944ca8c814fd241693caa12dbd0656a22cdc3f04ca3ca8d1 CentOS-Atomic-Host-7.20151101-GenericCloud.qcow2.xz
dfe0c85efff2972d15224513adc75991aabc48ec8f8ad49dad44f8c51cfb8165 CentOS-Atomic-Host-7.20151101-Installer.iso
139eb88d6a5d1a54ae3900c5643f04c4291194d7b3fccf8309b8961bbd33e4ec CentOS-Atomic-Host-7.20151101-Vagrant-Libvirt.box
63ab56d08cdc75249206ad8a7ee3cdd51a226257c8a74053a72564c3ff3d91a0 CentOS-Atomic-Host-7.20151101-Vagrant-Virtualbox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

November 19, 2015

RHEL 7.2 released today

November 19, 2015 07:28 AM

Red Hat released their second point release to the EL7 series today. Most if not all of the sources seem to already be in place on git.centos.org, so we can start the rebuild and QA cycle. Red Hat release notes can be found here.

It is not yet decided whether we will do a CR release with the built packages first, or go straight to a release with ISOs and all. Our reading of the errata released with 7.2 indicates no critical security update. We will post news on this matter here. Those errata can be found here.

As regulars know, this will take some time, and the next minor release of CentOS-7 will be done when it is done. So it can be either 7.1511 or 7.1512.

Stay tuned.

October 15, 2015

The portable cloud

October 15, 2015 12:32 AM

In late 2012 I constructed myself a bare-bones cluster of a couple of motherboards, stacked up and powered, to be used as a dev cloud. It worked, but it was a huge mess on the table, and it was certainly neither portable nor quiet. That didn't mean I would not carry it around – I did, across the Atlantic a few times, and over to Asia once. It worked. Then in 2014 I gave the stack away. Which caused a few issues, since living in a certain part of London means I must put up with a rather sad 3.5 Mbps ADSL link from BT. Had I been living in a rural setting, government grants etc. would ensure we get super-high-speed internet, but not in London.

I really needed my development and testing cluster back ( my work pattern had come to rely on it ). Time to build a new one!

Late summer last year the folks at Protocase kindly built me a cloud box, to my specifications. This is a single case that can accommodate up to 8 mini-ITX ( or 6 mini-ATX, which is what I am using ) motherboards, along with all the networking kit for them and a disk each. It's not yet left the UK, but the box is reasonably well traveled in the country. If you come along to the CentOS Dojo, Belgium or the CentOS table at FOSDEM, you should see it there in 2016. Here you can see the machine standing on its side, with the built-in trolley for mobility.

Things to note here: you can see the ‘back’ of the box, with the power switches, the PSU with its 3 hot-swap modules, the 3 large case cooling fans, and the cutout for the external network cable to go into the box. While there is only 1 PSU, the way things are cabled inside the box, it's possible to power up to 4 channels individually. So with 8 boards, you'd be able to power-manage each pair on its own.

Box-1

Here is the empty machine as it was delivered. The awesome guys at Protocase pre-plumbed in the PSU and wired up the case fans ( there are 3 at the back and 2 in the front. The ones in the front are wired from the PSU so run all the time, whereas the back 3 are connected as regular case fans onto the motherboards, so they come on when the corresponding machine is running ). I thought long and hard about moving the fans to the top/bottom, but since the machine lives vertically, this position gives me the best airflow. On the right side, opposite the PSU, you can see 4 mounting points; this is where the network switch goes in.
Box-2

Close-up of the PSU used in this machine. I've load-tested this with 6x i5 4690K boards and it works fine. I did test with load, for a full 24 hrs. Next time I do that, I'll get some wattage and amp readings as well. It's rated for 950 W max; I suspect anything more than 6 boards will get pretty close to that mark. Also worth keeping in mind is that this is meant to be a cloud or mass infra testing machine; it's not built for large storage. Each board has its own 256 GB SSD, and if I need additional storage, that will come over the network from a ceph/gluster setup outside.
Box-3

The PSU output is split and managed in multiple channels; you can see 3 of the 4 here, along with some of the spare case-fan lines.
Box-4

Another shot of the back 3 fans; you can also see the motherboard mounting points built into the base of the box. They put these in for mini-ITX / mini-ATX as well as regular ATX. I suspect it's possible to get 4 ATX boards in there, but it's going to be seriously tight, and the case fans might need an upgrade.
Box-5

Close-up of the industrial trolley that is mounted onto the box ( it's easy to remove when not needed; I just leave it on ).
Box-6

The right side of the box hosts the network switch; this allows me to put the power cables on the left and back, with the network cables on the right and front. Each board has its own network port ( as they do.. ), and I use a USB3-to-gigabit converter at the back to give me a second port. This then allows me to split public and private networks, or use one for storage and another for application traffic etc. Since this picture was taken, I've stuck another 8-port switch on the front of this switch's cover, to give me the 16 ports I really need.
Box-7

Here is the rig with the first motherboard added in, with an Intel i5 4690K CPU. The board can do 32 GB; I had 16 in it then, and have upgraded since.
Box-8

Now with everything wired up. There is enough space under the board to drive the network cables through.
Box-9

And with a second board added in. This time an AMD FX-8350. It's the only AMD in the mix; I wanted one to have the option to test with, but the rest of the rig is all Intels. The i5s have fewer cores, but overall far better power usage patterns, and they run cooler. With the box fully populated and running at max load, things get warm in there.
Box-10

The boards layer up on top of each other, with an offset. In the picture above, the Intel board is aligned to the top of the box; the next-tier board was aligned to the bottom side of the box. This gives the CPU fans a bit more headroom, and has a massive impact on temperature inside the box. Initially, I had just stacked them up 3 on each side – ambient temperature under sustained load was easily touching 40 deg C in the box. Staggering them meant ambient temperature came down to 34 deg C.

One key tip was Rich Jones discovering threaded rods; these fit right into the motherboard mounting points and run all the way through to the top of the box. You can then use nuts on the rod to hold the motherboard at whatever height you need.

If you fancy a box like this for yourself, give the guys at Protocase a call and ask for Stephen MacNeil; I highly recommend their work. The quality of the work is excellent. In a couple of years' time, I am almost certainly going to be back talking to them about the cloudybox2. And yes, they are the same guys who build the 45 Drives Storinator machines.

Update: the box runs pretty quiet. I typically only have 2 or 3 machines running in there, but even with all 6 running a heavy sustained load, it's not massively loud; the airflow is doing its thing. The key thing there is that the front fans are set to ingest air – and they line up perfectly with the CPU placements, blowing directly at the heat sinks. I suspect the topmost-tier boards only get about 50% of the airflow compared to the lower two tiers, but they also get the least utilisation of the lot.

enjoy!

October 13, 2015

CentOS Linux 7 32-bit x86 (i386) Architecture Released

October 13, 2015 05:04 PM

The Alternative Architecture Special Interest Group (AltArch SIG) is happy to announce the release of the x86 32-bit version of CentOS Linux 7.  This architecture is also known as i386 or i686.  You can get this version of CentOS from the INFO page.

This version of CentOS Linux 7 is for PAE-capable 32-bit machines, including x86-based IoT boards similar to the Intel Edison.  It joins the 64-bit ARMv8 (aarch64) architecture as a fully released AltArch version.

Work within the AltArch SIG currently continues on the 32-bit ARMv7, 64-bit PPC little-endian, and 64-bit PPC big-endian architectures.


October 09, 2015

CentOS Linux 5 Update batch rate

October 09, 2015 03:54 PM

Hi,

We typically push updates in batches. A batch might be anywhere from a single update rpm to hundreds ( for when there is a big update upstream ), though most batches are in the region of 5 to 20 rpms. So how many batches have we done in the last year and a bit? Here is a graph depicting our update batch release rate from Jan 1st 2014 till today.

cl5-update-batch-rate

I’ve removed the numbers from the release rate and left the dates in, since it's the trend that is most interesting. In a few months' time, once we hit the new year, I’ll update this to split by year so it's easy to see how 2015 compared with 2014.

You can click the image above to get a better view. The blue segment represents batches built, and the orange represents batches released.

regards,

October 06, 2015

CentOS Atomic Host in AWS via Vagrant

October 06, 2015 12:56 PM

Hi,

You may have seen the announcement that CentOS Atomic Host 15.10 is now available ( if not, go read the announcement here : http://seven.centos.org/2015/10/new-centos-atomic-host-release-available-now/ ).

You can get the Vagrant boxes for this image via the Atlas / VagrantCloud process, or just via direct downloads from http://cloud.centos.org/centos/7/atomic/images/

What I’ve also done this time is create a vagrant-aws box that references the AMIs in the regions they are published in. This is hand-crafted and really just a PoC-like effort, but if it's something people find helpful, I can plumb it into the main image generation process and ensure we get this done for every release.

QuickStart
Once you have vagrant running on your machine, you will need the vagrant-aws plugin. You can install this with:

vagrant plugin install vagrant-aws

and check it's there with:

vagrant plugin list

You can then add the box with “vagrant box add centos/atomic-host-aws”. Before we can instantiate the box, we need a local config with the AWS credentials. So create a directory, and add the following into a Vagrantfile there:

Vagrant.configure(2) do |config|
  config.vm.box = "centos/atomic-host-aws"
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "Your AWS EC2 Key"
    aws.secret_access_key = "Your Secret Key"
    aws.keypair_name = "Your keypair name"
    override.ssh.private_key_path = "Path to key"
  end
end


Once you have those lines populated with your own information, you should now be able to run
vagrant up --provider aws

It takes a few minutes to spin up the instance. Once done, you should be able to “vagrant ssh” and use the machine. Just keep in mind that you want to terminate any unused instances, since stopping them will only suspend them. A real vagrant destroy is needed to release the EC2 resources.
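Pulled together, the whole quickstart looks like this ( a sketch: the directory name is arbitrary, and the AWS steps need real credentials in the Vagrantfile above, so this is not something to run blindly ):

```shell
vagrant plugin install vagrant-aws       # the AWS provider plugin
vagrant box add centos/atomic-host-aws   # the box described in this post
mkdir atomic-aws && cd atomic-aws        # any directory; put the Vagrantfile from above here
vagrant up --provider aws                # spins up the EC2 instance ( takes a few minutes )
vagrant ssh                              # log in and use the machine
vagrant destroy                          # terminate, releasing the EC2 resources
```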

Note: this box is set up with the folder-sync feature turned off. Also, the AMIs per region are specified in the box itself; if you want to use a specific region, just add an aws.region = "..." line to your local Vagrantfile, and everything else should get taken care of.

You can read more about the aws provider for vagrant here : https://github.com/mitchellh/vagrant-aws

Let me know how you get on with this, if folks find it useful we can start generating these for all our vagrant images.

October 05, 2015

New CentOS Atomic Host Release Available Now

October 05, 2015 06:08 PM

Today we’re announcing an update to CentOS Atomic Host (version 7.20151001), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • kernel-3.10.0-229.14.1.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • atomic-1.0-115.el7.x86_64
  • kubernetes-1.0.3-0.1.gitb9a88a7.el7.x86_64
  • flannel-0.2.0-10.el7.x86_64
  • docker-1.7.1-115.el7.x86_64
  • etcd-2.1.1-2.el7.x86_64
  • ostree-2015.6-4.atomic.el7.x86_64

If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

Upgrading

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (389 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (400 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

  vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO (672 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.
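For reference, a kickstart-driven install of this ISO looks like any other Anaconda kickstart; a minimal, hypothetical fragment (all disk, network and user values below are placeholders, and the exact directives depend on your environment):

```
# Hypothetical kickstart fragment for a bare-metal install
text
zerombr
clearpart --all --initlabel
autopart
network --bootproto=dhcp
rootpw --lock
user --name=admin --groups=wheel --password=changeme --plaintext
services --enabled=docker
reboot
```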

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (393 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available in gz (391 MB) and xz (390 MB) compressed formats.
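Building a NoCloud seed is just a matter of putting meta-data and user-data files on a small ISO with the volume label cidata; a minimal sketch, assuming genisoimage is installed (the hostname, instance-id and password are placeholder values):

```shell
# Hypothetical NoCloud seed for cloud-init; all values are examples.
mkdir -p seed

cat > seed/meta-data <<'EOF'
instance-id: atomic-host-001
local-hostname: atomic01
EOF

cat > seed/user-data <<'EOF'
#cloud-config
password: changeme
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# The "cidata" volume label is what cloud-init's NoCloud source looks for.
if command -v genisoimage >/dev/null 2>&1; then
    genisoimage -output seed.iso -volid cidata -joliet -rock seed/meta-data seed/user-data
else
    echo "genisoimage not installed; mkisofs or xorriso can build the iso instead"
fi
```

Attach the resulting seed.iso to the VM as a CD-ROM alongside the qcow2 disk and cloud-init will pick the data up at first boot.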

Amazon Machine Images

Region         Image ID
------         --------
sa-east-1      ami-1b52c506
ap-northeast-1 ami-3428b634
ap-southeast-2 ami-43f2bb79
us-west-2      ami-73eaf043
ap-southeast-1 ami-346f7966
eu-central-1   ami-7ed1d363
eu-west-1      ami-3936034e
us-west-1      ami-6d9c5a29
us-east-1      ami-951452f0

SHA Sums

96586e03a1a172195eae505be35729c1779e137cd1f8c11a74c7cf94b0663cb2 CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2
33d338bb42ef916a40ac89adde9c121c98fbd4220b79985f91b47133310aa537 CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2.gz
73184e6f77714472f63a7c944d3252aadc818ac42ae70dd8c2e72e7622e4de95 CentOS-Atomic-Host-7.20151001-GenericCloud.qcow2.xz
4e09f6dfae5024191fec9eab799861d87356a6075956d289dcb31c6b3ec37970 CentOS-Atomic-Host-7.20151001-Installer.iso
92932e9565b8118d7ca7cfbe8e18b6efd53783853cc75dae9ad5566c6e0d9c88 CentOS-Atomic-Host-7.20151001-Vagrant-Libvirt.box
8f626bdafaecb954ae3fab6a8a481da1b3ebb8f7acf6e84cf0b66771a3ac3a65 CentOS-Atomic-Host-7.20151001-Vagrant-Virtualbox.box
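These sums can be checked with sha256sum -c after downloading; here are the mechanics of that workflow, sketched with a stand-in file instead of a real image:

```shell
# Demonstrate the checksum-verification workflow with a stand-in file.
printf 'example payload\n' > CentOS-Atomic-Host-example.img

# Record the checksum in the "<sha256>  <filename>" format sha256sum -c expects
sha256sum CentOS-Atomic-Host-example.img > CHECKSUM

# Later (e.g. after downloading), verify the file against the recorded sum;
# sha256sum -c exits non-zero if the file does not match.
sha256sum -c CHECKSUM
```

For the real images, put the published sum and filename on one line in a file and run the same `sha256sum -c` against it.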

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, or help with packaging or documentation — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

October 01, 2015

Progress on the Software Collections SIG

October 01, 2015 10:57 AM

hi,

The Software Collections special interest group ( https://wiki.centos.org/SpecialInterestGroup/SCLo ) has been making great progress and has finished its initial bootstrap process. The group is now getting ready to do a mass build for test and release. I’ve just delivered their rpm signing key, so we are pretty close to seeing content on mirror.centos.org.

As an initial goal, they are working on delivering rpms – but in parallel, efforts are underway to get container images into the registries as well, so folks using containers today can consume the software collections in either format.

The effort is being co-ordinated by Honza Horak ( https://twitter.com/HorakHonza ), and he’s the best person to get in touch with to join and help.

Regards,

September 23, 2015

CentOS AltArch SIG status

September 23, 2015 10:00 PM

Recently I had (from an infra side) to start deploying KVM guests for the ppc64 and ppc64le arches, so that AltArch SIG contributors could start bootstrapping a CentOS 7 rebuild for those arches. I'll probably write a tech review about Power8 and the fact that you can just use libvirt/virt-install to quickly provision new VMs on PowerKVM, but I'll do that in a separate post.

In parallel to ppc64/ppc64le, armv7hl has interested some community members, and the discussion/activity about that arch happens on the dedicated mailing list. It's slowly coming along, and some users have already reported using it on some boards (but packages are still unsigned and there are no updates packages -yet- )

Last (but not least) in this AltArch list is i686 : Johnny built all the packages, and they are already publicly available on buildlogs.centos.org, each time in parallel to the x86_64 version. It seems that respinning the ISO for that arch and running the last tests are the only things left to do.

If you're interested in participating in AltArch (and have a special interest in a specific arch/platform), feel free to discuss that on the centos-devel list !

September 16, 2015

CentOS Dojo in Barcelona

September 16, 2015 10:00 PM

So, thanks to the folks from OpenNebula, we'll have another CentOS Dojo in Barcelona on Tuesday 20th October 2015. That event will be co-located with the OpenNebulaConf happening the days after the Dojo. If you're attending the OpenNebulaConf, or if you're just in the area and would like to attend the CentOS Dojo, feel free to register

Regarding the Dojo content, I'll be giving a presentation about SELinux myself : covering a little bit of intro (still needed for some folks afraid of using it, don't know why, but we'll change that ...) about SELinux itself, how to run it on bare metal and virtual machines, and there will be some slides for the mandatory container hype thing. But we'll also cover managing SELinux booleans/contexts, etc. through your config management solution (we'll cover Puppet and Ansible, as those are the two I'm using on a daily basis) and also how to build and deploy custom SELinux policies with your config management solution.

On the other hand, if you're a CentOS user and would like to give a talk yourself during that Dojo, feel free to submit one ! More information about the Dojo is on the dedicated wiki page

See you there !

September 10, 2015

Our second stable Atomic Host release

September 10, 2015 10:41 PM

Jason just announced our second stable CentOS Atomic Host release at http://seven.centos.org/2015/09/announcing-a-new-release-of-centos-atomic-host/

I’m very excited about this one, and it's not only because I’ve helped make it happen – this is also the first time a SIG in the CentOS ecosystem has done a full release, from rpms, to images, to hosted vendor space ( AMIs in 9 regions on Amazon’s EC2 ).

One of the other things that I’ve been really excited about is that this is the first time we’ve used the rpm-sign infra that I’ve been working on these past few days. It allows SIG-built content ( rpms, images, ISOs or even text ) to be signed with pre-selected keys, without having to compromise the key trust level. I will blog more about this process, how SIGs can consume these keys, and how this maps to the TAG model being used in cbs.centos.org

for now, go get started with the CentOS Atomic Host!

regards,

CentOS Dojo in Barcelona, 20th Oct 2015

September 10, 2015 10:10 PM

Hi,

We have a dojo coming up in Barcelona, co-located with the OpenNebula conference in late October. The event is going to run from 1:30pm to 6:30pm ( but I suspect it won't really end till well into the early hours of the morning as people keep talking about CentOS things over drinks, dinner, more drinks etc ! ).

You can get the details, including how to register, at https://wiki.centos.org/Events/Dojo/Barcelona2015.

Fabian is going to be there, and we are talking to a great set of potential speakers – the focus is going to be very much on hands-on learning about technologies on and around CentOS Linux! And as in the past, we expect content to be aimed at sysadmin / operations folks rather than developers ( although we highly encourage developers to come along as well, to talk to us and share their experiences with the sysadmin world! ).

regards,

timezone mangling

September 10, 2015 09:17 PM

Because of what I do and how / where I do it, there are always online, realtime conversations going on ( irc or IM ); and it's never really been a huge issue except for people on the US Pacific coast. It's always a case of them starting work when I am finishing for the day, and even when I work late at night for the odd hours, it's almost always whack in the middle of their lunch hours. And they finish work, even their late-night sessions, just about when I am getting started for the day.

So to everyone in that TZ, just want to remind everyone that the best thing to do is stick with email. I know it's fashionable these days to complain about email and all that, but by and large there is no other means of comms around these days that is easier to get to, as mature, and really as productive for async conversations. The other thing to keep in mind is that while there are other services and ideas floating around that help solve specific challenges that email isn't best suited for, none of them do a good enough job to remove the email process from the equation. So if we are still going to have email knocking about, let's just use it.

And I’m not ignoring people on irc :) but with 300+ panes in irssi, sometimes it can get hectic and I will often encourage you to ‘Let's Move to Mail’. It's not because I don't want to have the convo right now, it's because I want to have the complete conversation!

Regards,

Announcing a New Release of CentOS Atomic Host

September 10, 2015 09:16 PM

Today we’re releasing a significant update to the CentOS Atomic Host (version 7.20150908), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, as an installable ISO image, as a qcow2 image, or as an Amazon Machine Image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

Currently, the CentOS Atomic Host includes these core component versions:

  • kernel 3.10.0-229
  • docker 1.7.1-108
  • kubernetes 1.0.0-0.8.gitb2dafda
  • etcd 2.0.13-2
  • flannel 0.2.0-10
  • cloud-init 0.7.5-10
  • ostree 2015.6-4
  • atomic 1.0-108

Upgrading

If you’re running the version of CentOS Atomic Host that shipped in June, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

If you’re currently running the older, test version of CentOS Atomic Host, or if you’re running any other atomic host (from any distro or release in the past), you can rebase to this released CentOS Atomic Host by running the following two commands:

$ sudo ostree remote add centos-atomic-host http://mirror.centos.org/centos/7/atomic/x86_64/repo
$ sudo rpm-ostree rebase centos-atomic-host:centos-atomic-host/7/x86_64/standard

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box (393 MB) and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (404 MB) are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO (682 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 (922 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available in gz (389 MB) and xz (303 MB) compressed formats.

Amazon Machine Images

Region         Image ID
------         --------
sa-east-1      ami-47ed785a
ap-northeast-1 ami-3458d234
ap-southeast-2 ami-05511e3f
us-west-2      ami-a5b6ab95
ap-southeast-1 ami-dc99938e
eu-central-1   ami-8c111191
eu-west-1      ami-0b6e4d7c
us-west-1      ami-69679d2d
us-east-1      ami-69c3aa0c

SHA Sums

a132d59732e758012029a646c466227f4ecf0c71cc42f0a10d3672908e463c0c CentOS-Atomic-Host-7.20150908-GenericCloud.qcow2
aad5d39e0683dc997f34902b068c7373aac3f7dc9b2c962a6ac0fe7394e2aa58 CentOS-Atomic-Host-7.20150908-GenericCloud.qcow2.gz
c8432175a012e7f13b7005fe9c1fe43e03e47ca433db8230ab6d5d1831d2cbe0 CentOS-Atomic-Host-7.20150908-GenericCloud.qcow2.xz
b222702942d02da2204581de6f877cf93289459a99f9080e29016e3b90328098 CentOS-Atomic-Host-7.20150908-Installer.iso
5531fa99429b38c6e6c4aca87672bd5990ab90f6445cc0e55c9121ad62229141 CentOS-Atomic-Host-7.20150908-Vagrant-Libvirt.box
bdcf58772117dd3a84100e5902f4f345daeea7c04f057c0ab6e29bfef3c82eab CentOS-Atomic-Host-7.20150908-Vagrant-Virtualbox.box

Release Cycle

The rebuild image will follow the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’ll be rebuilt and included in new images. After the images are tested by the SIG and deemed ready, they’ll be announced. If you’d like to help with the process, there’s plenty to do!

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you’d like to work on testing images, help with packaging, documentation, or even help define the direction of our monthly release — join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you’ll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you’d like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

September 09, 2015

Ext4 limitation with GDT blocks number

September 09, 2015 10:00 PM

In the last days, I encountered a strange issue^Wlimitation with Ext4 that I wouldn't have thought of. I've used ext2/ext3/ext4 for quite some time, and so I've been used to resizing the filesystem "online" (while "mounted"). In the past you had to use ext2online for that; then it was integrated into resize2fs itself.

The logic is simple and always the same : extend your underlying block device (or add another one), then modify the LVM Volume Group (if needed), then the Logical Volume, and finally run the resize2fs operation, so something like

lvextend -L +${added_size}G /dev/mapper/${name_of_your_logical_volume} 
resize2fs /dev/mapper/${name_of_your_logical_volume}

I don't know how many times I've used that, but this time resize2fs wasn't happy :

resize2fs: Operation not permitted While trying to add group #16384

I remember having had an issue in the past because the journal size wasn't big enough. But that wasn't the case here.

FWIW, you can always verify your journal size with dumpe2fs /dev/mapper/${name_of_your_logical_volume} |grep "Journal Size"

Small note : if you need to increase the journal size, you have to do it "offline" as you have to remove the journal and then add it back with a bigger size (and that also takes time) :

umount /$path_where_that_fs_is_mounted
tune2fs -O ^has_journal /dev/mapper/${name_of_your_logical_volume}
# Assuming we want to increase to 128Mb
tune2fs -j -J size=128 /dev/mapper/${name_of_your_logical_volume} 

But in that case, as said, it wasn't really the root cause : while resize2fs: Operation not permitted doesn't give much information, dmesg was more explicit :

EXT4-fs warning (device dm-2): ext4_group_add: No reserved GDT blocks, can't resize

The limitation is that when the initial Ext4 filesystem is created, the number of reserved/calculated GDT blocks only allows growing that filesystem by a factor of 1000.

Ouch. That system (CentOS 6.7) I was working on had been provisioned in the past for a certain role, and that particular fs/mount point was set to 2G (installed like this through the kickstart setup). But the role eventually changed, so the filesystem had been extended/resized a few times, until I tried to extend it to more than 2TiB, which then caused resize2fs to complain ...

So two choices :

  • you do it "offline" through umount, e2fsck, resize2fs, e2fsck, mount (but that's time consuming)
  • if you still have plenty of space in the VG, you can just create another volume with the correct size, format it, rsync the content, umount the old one and mount the new one

That means I learned something new (one learns something new every day !), and also that you need to keep that limitation in mind when using a kickstart that doesn't include the --grow option, but a fixed size for the filesystem.
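One way to avoid hitting this limit at creation time is the resize= extended option of mke2fs/mkfs.ext4, which reserves enough GDT blocks for an explicit maximum size rather than the default growth factor. A minimal sketch on a throwaway image file (the sizes here are illustrative, not from the situation above), assuming e2fsprogs is installed:

```shell
# Sketch: reserve GDT blocks for a larger maximum size at mkfs time.
# Assumes e2fsprogs (mkfs.ext4, dumpe2fs); sizes are example values.
if command -v mkfs.ext4 >/dev/null 2>&1; then
    dd if=/dev/zero of=/tmp/resize-demo.img bs=1M count=64 status=none
    # "resize=" is expressed in filesystem blocks: with 4KiB blocks,
    # 4194304 blocks allows online growth up to 16GiB.
    mkfs.ext4 -q -F -b 4096 -E resize=4194304 /tmp/resize-demo.img
    # The reserved GDT blocks determine how far resize2fs can grow online.
    dumpe2fs -h /tmp/resize-demo.img 2>/dev/null | grep -i 'Reserved GDT' > /tmp/resize-demo.out
else
    echo 'e2fsprogs not available' > /tmp/resize-demo.out
fi
cat /tmp/resize-demo.out
```

In a kickstart you would pass the same option through the --mkfsoptions-style mechanism your installer supports, or simply size the filesystem generously up front.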

Hope that it can help

September 02, 2015

Implementing TLS for postfix

September 02, 2015 10:00 PM

As some initiatives (like Let's Encrypt, as one example) try to push TLS usage everywhere, we thought about doing the same for the CentOS.org infra. Obviously we already had some x509 certificates, but not for every httpd server that was serving content for CentOS users. So we decided to enforce TLS usage on those servers. But TLS can obviously be used on other things than a web server.

That's why we considered implementing something for our Postfix nodes. The interesting part is that it's really easy (depending of course on the security level one may want to reach/use). There are two parts in the postfix main.cf that can be configured :

  • outgoing mails (aka your server sends mail to other SMTPD servers)
  • incoming mails (aka remote clients/servers send mail to your postfix/smtpd server)

Let's start with the client/outgoing part : just adding those lines in your main.cf will automatically configure it to use TLS when possible, but otherwise fall back on clear if remote server doesn't support TLS :

# TLS - client part
smtp_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_scache 

The interesting part is the smtp_tls_security_level option : as you see, we decided to set it to may . That's what the official Postfix TLS documentation calls "Opportunistic TLS" : in a few words, it will try TLS (even with untrusted remote certs !) and will only fall back to clear text if no remote TLS support is available. That's the option we decided to use as it doesn't break anything, and even if the remote server has a self-signed cert, it's still better to use TLS with self-signed than clear text, right ?

Once you have reloaded your postfix configuration, you'll directly see in your maillog that it will start trying TLS and deliver mails to servers configured for it :

Sep  3 07:50:37 mailsrv postfix/smtp[1936]: setting up TLS connection to ASPMX.L.GOOGLE.com[173.194.207.27]:25
Sep  3 07:50:37 mailsrv postfix/smtp[1936]: Trusted TLS connection established to ASPMX.L.GOOGLE.com[173.194.207.27]:25: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Sep  3 07:50:37 mailsrv postfix/smtp[1936]: DF584A00774: to=<>, orig_to=<>, relay=ASPMX.L.GOOGLE.com[173.194.207.27]:25, delay=1, delays=0/0.12/0.22/0.71, dsn=2.0.0, status=sent (250 2.0.0 OK 1441266639 79si29025652qku.67 - gsmtp)

Now let's have a look at the other part : when you want your server to present the STARTTLS feature when remote servers/clients try to send you mails (still in postfix main.cf) :

# TLS - server part
smtpd_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt
smtpd_tls_cert_file = /etc/pki/tls/certs/<%= postfix_myhostname %>-postfix.crt 
smtpd_tls_key_file = /etc/pki/tls/private/<%= postfix_myhostname %>.key
smtpd_tls_security_level = may
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_scache

Still easy, but here we also add our key/cert to the config. If you decide to use a cert signed by a trusted CA (like we do for the centos.org infra), be sure that the cert is the concatenated/bundled version of both your cert and the CA chain cert. That's also documented in the Postfix TLS guide, and if you're already using Nginx, you already know what I'm talking about as you have to do it there too.

If you've correctly configured your cert/keys and reloaded your postfix config, remote SMTPD servers will now also (if configured to do so) deliver mails to your server through TLS. Bonus points if you're using a cert signed by a trusted CA, as from the client side you'll see this :

Sep  2 16:17:22 hoth postfix/smtp[15329]: setting up TLS connection to mail.centos.org[72.26.200.203]:25
Sep  2 16:17:22 hoth postfix/smtp[15329]: Trusted TLS connection established to mail.centos.org[72.26.200.203]:25: TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)
Sep  2 16:17:23 hoth postfix/smtp[15329]: CC8351C00C9: to=<fake_one_for_blog_post@centos.org>, relay=mail.centos.org[72.26.200.203]:25, delay=1.6, delays=0.19/0.03/1.1/0.31, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as A7299A006E2)

The Trusted TLS connection established part shows that your smtpd server presents a correct cert (bundle) and that the remote server sending you mails trusts the CA used to sign that cert.

There are a lot of TLS options that you can also add for tuning/security reasons, and all can be seen through postconf |grep tls, but also on the Postfix postconf doc

August 24, 2015

A Flashable CentOS Image for the Intel Edison

August 24, 2015 06:56 PM

A Flashable Image for the Intel Edison

The Intel Edison system-on-a-chip boards are pretty cool, a little compute module can plug into a number of different breakout boards. There’s an Arduino-style board, and another form-factor featuring a bunch of stackable modules (GPIO, SD Card, OLED Screen etc.) Since the system is a dual-core Atom, we can easily put CentOS on it! To start with we will focus on the userland components and use the kernel that comes bundled in the Edison toolkit.

If you’d rather not read the whole thing, here are some links to a flashable rootfs image and a yum repo containing the tools that go with it:
Bootable Image: http://people.centos.org/bstinson/edison/edison-image-centos.ext4.xz
Sources and Yum Repo: http://people.centos.org/bstinson/edison/7/

More of the upcoming work (building a kernel, and the rest of the SDK) will be done under the Alternate Architectures SIG once that is in full-swing. In the meantime I’d love to see discussion, bug reports, and collaboration on the centos-devel list.

My Email: brian (at) bstinson (dot) com

IRC: bstinson

Building your own rootfs image

If you’d like to spin your own image (instead of using the prebuilt image above) here are the steps you need to get a flashable image. The Edisons are fairly simple to install to (since the rootfs is simply an ext4 image)

To begin with we need a 1G file that will serve as our ext4 image

# Make a 1G file full of Zeros
root@host# dd if=/dev/zero of=~/projects/edison/edison-image-centos.ext4 bs=1M count=1024

# Put an ext4 filesystem on it
root@host# mkfs.ext4 ~/projects/edison/edison-image-centos.ext4

# Mount it someplace handy
root@host# mount -t ext4 ~/projects/edison/edison-image-centos.ext4 /mnt/edison-image-centos

Now that we have the image mounted in a useful place, we can start installing packages

# Add the centos-release and centos-release-edison files
root@host# rpm --root /mnt/edison-image-centos -Uvh --nodeps http://buildlogs.centos.org/centos/7/os/i386/Packages/centos-release-7-1.1503.el7.centos.2.8.1.i686.rpm\
http://people.centos.org/bstinson/edison/7/i386/Packages/centos-release-edison-1-1.el7.centos.noarch.rpm

# Tweak the basearch variable (allows you to do target install from an x86_64 host)
root@host# echo 'i386' > /mnt/edison-image-centos/etc/yum/vars/basearch

# Install the packages
root@host# yum --installroot=/mnt/edison-image-centos install bind-utils bash yum vim-minimal shadow-utils less iputils iproute firewalld rootfiles centos-release edison-modules edison-tweaks wpa_supplicant dhclient

Flashing the rootfs image

You can grab dfu-util for CentOS 7 from here

Connect the OTG port on the Edison breakout board to a USB port and it should show up as a dfu device.

# The dfu VendorID:ProductID is 8087:0a99 for the Edison
root@host# dfu-util -l -d 8087:0a99
dfu-util 0.8

Copyright 2005-2009 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2014 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to dfu-util@lists.gnumonks.org

Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=11, name="initrd", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=10, name="vmlinuz", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=9, name="home", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=8, name="update", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=7, name="rootfs", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=6, name="boot", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=5, name="u-boot-env1", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=4, name="u-boot1", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=3, name="u-boot-env0", serial="UNKNOWN"
Found DFU: [8087:0a99] ver=9999, devnum=68, cfg=1, intf=0, alt=2, name="u-boot0", serial="UNKNOWN"

# Flash the image
root@host# dfu-util -d 8087:0a99 -a rootfs -D ~/projects/edison/edison-image-centos.ext4

Connect to the newly installed Edison

Plug the Console port into a USB port and fire up your favorite serial terminal emulator

root@host# screen /dev/ttyUSB0 115200
Flashing already done...
GADGET DRIVER: usb_dnl_dfu
reading vmlinuz
5383904 bytes read in 133 ms (38.6 MiB/s)
Valid Boot Flag
Setup Size = 0x00003c00
Magic signature found
Using boot protocol version 2.0c
Linux kernel version 3.10.17-poky-edison+ (sys_dswci@tlsndgbuild004) #1 SMP PREEMPT Wed Apr 29 03:54:01 CEST 2015
Building boot_params at 0x00090000
Loading bzImage at address 00100000 (5368544 bytes)
Magic signature found
Kernel command line: "rootwait root=PARTUUID=012b3303-34ac-284d-99b4-34e03a2335f4 rootfstype=ext4 console=ttyMFD2 earlyprintk=ttyMFD2,keep loglevel=4 g_multi.ethernet_config=cdc systemd.unit=multi-user.target hardware_id=00 g_multi.iSerialNumber=88a7cbd65118ecb7cbe1fde0dd5174df g_multi.dev_addr=02:00:86:51:74:df platform_mrfld_audio.audio_codec=dummy"

Starting kernel …

[ 0.760411] pca953x 1-0020: failed reading register
[ 0.765618] pca953x 1-0021: failed reading register
[ 0.770719] pca953x 1-0022: failed reading register
[ 0.775838] pca953x 1-0023: failed reading register
[ 1.623920] snd_soc_sst_platform: Enter:sst_soc_probe
[ 2.028440] pmic_ccsm pmic_ccsm: Error reading battery profile from battid frmwrk
[ 2.046563] pmic_ccsm pmic_ccsm: Battery Over heat exception
[ 2.046634] pmic_ccsm pmic_ccsm: Battery0 temperature inside boundary

Welcome to CentOS Linux 7 (Beta)!

Expecting device dev-ttyMFD2.device…
[ OK ] Reached target Remote File Systems.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Listening on Journal Socket.
Mounting Debug File System…
Starting Apply Kernel Variables…
Starting Create list of required static device nodes…rrent kernel…
Mounting POSIX Message Queue File System…
Starting Setup Virtual Console…
Mounting Configuration File System…
Mounting FUSE Control File System…
Starting Journal Service…
[ OK ] Started Journal Service.
[ OK ] Listening on udev Kernel Socket.
[ OK ] Listening on udev Control Socket.
Starting udev Coldplug all Devices…
[ OK ] Reached target Encrypted Volumes.
[ OK ] Set up automount Arbitrary Executable File Formats F…utomount Point.
[ OK ] Reached target Swap.
Starting Remount Root and Kernel File Systems…
Expecting device dev-disk-by\x2dpartlabel-home.device…
[ OK ] Created slice Root Slice.
[ OK ] Created slice User and Session Slice.
[ OK ] Created slice System Slice.
[ OK ] Reached target Slices.
[ OK ] Created slice system-getty.slice.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Mounted Debug File System.
[ OK ] Started Apply Kernel Variables.
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Started Setup Virtual Console.
[ OK ] Mounted Configuration File System.
[ OK ] Mounted FUSE Control File System.
[ OK ] Started Remount Root and Kernel File Systems.
[ OK ] Started Create list of required static device nodes …current kernel.
Starting Create static device nodes in /dev…
Starting Load/Save Random Seed…
Starting Configure read-only root support…
[ OK ] Started udev Coldplug all Devices.
[ OK ] Started Create static device nodes in /dev.
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Configure read-only root support.
Starting udev Kernel Device Manager…
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Started udev Kernel Device Manager.
[ OK ] Found device /dev/ttyMFD2.
[ OK ] Found device /dev/disk/by-partlabel/home.
Mounting /home…
[ OK ] Reached target Sound Card.
[ OK ] Mounted /home.
[ OK ] Reached target Local File Systems.
Starting Trigger Flushing of Journal to Persistent Storage…
Starting Create Volatile Files and Directories…
[ OK ] Started Trigger Flushing of Journal to Persistent Storage.
[ OK ] Started Create Volatile Files and Directories.
Starting Update UTMP about System Reboot/Shutdown…
[ OK ] Started Update UTMP about System Reboot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Timers.
[ OK ] Reached target Paths.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting firewalld – dynamic firewall daemon…
Starting Dump dmesg to /var/log/dmesg…
Starting Disable the watchdog device on the Intel Edison…
[ 7.665241] intel_scu_watchdog_evo: watchdog_stop
Starting Permit User Sessions…
Starting Login Service…
Starting D-Bus System Message Bus…
[ OK ] Started D-Bus System Message Bus.
Starting LSB: Bring up/down networking…
[ OK ] Started Dump dmesg to /var/log/dmesg.
[ OK ] Started Disable the watchdog device on the Intel Edison.
[ OK ] Started Permit User Sessions.
[ OK ] Started LSB: Bring up/down networking.
Starting Getty on tty1…
[ OK ] Started Getty on tty1.
Starting Serial Getty on ttyMFD2…
[ OK ] Started Serial Getty on ttyMFD2.
[ OK ] Reached target Login Prompts.
[ OK ] Started Login Service.
[ OK ] Reached target Multi-User System.

CentOS Linux 7 (Beta)
Kernel 3.10.17-poky-edison+ on an i686

localhost login:

Connecting the WiFi Interface

From here, you can connect using the onboard wireless controller:

# Modify /etc/sysconfig/wpa_supplicant to include your device
INTERFACES="-iwlan0"

# Add your network to the wpa_supplicant config
root@edison# wpa_passphrase MyHomeSSID "PasswordToMYWifi" > /etc/wpa_supplicant/wpa_supplicant.conf

# Start the wpa_supplicant service
root@edison# systemctl enable wpa_supplicant
root@edison# systemctl start wpa_supplicant

# Get an address on the interface
root@edison# dhclient wlan0
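To make the wireless configuration persist across reboots, a standard ifcfg file should work here too (this is an assumption on my part, not something covered above, and it is untested on the Edison image):

```
# /etc/sysconfig/network-scripts/ifcfg-wlan0  (hypothetical example)
DEVICE=wlan0
TYPE=Wireless
BOOTPROTO=dhcp
ONBOOT=yes
```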

Known Issues

  • The disable-watchdog service starts late in the boot process, so touching
    /.autorelabel will result in a boot loop (the watchdog timer fires before
    the SELinux relabel is finished)
  • We’re still using and distributing the kernel from the toolkit; we will be
    working on building a CentOS kernel, which may also allow for a 64-bit userland

July 29, 2015

Cloud In A Box: CentOS OpenStack Remix

July 29, 2015 04:58 PM

OpenStack is the current de facto standard for cloud computing platforms and is supported by all major Linux distributions. Coupled with its role as the base technology in the domains of NFV & SDN, it has become one of the hottest technologies in the networking community. It is a combination of numerous components and services, which means that deploying OpenStack is often complex, time-consuming, and error-prone, especially for beginners. Deployment options range from manual setup, i.e. installing and configuring each individual component by hand, to automated tools such as DevStack, Fuel, and Packstack.

The easiest way to get started with OpenStack is through automated tools, but using them properly still requires significant scavenging through forums and manuals. This is too daunting for cloud application developers, or for anyone whose primary concern is simply to evaluate the technology.

To ease these deployment concerns, our goal is to provide a robust, pre-configured (yet customizable) and easily installed OpenStack setup. The result will be a “CentOS Remix” with an option to set up OpenStack during installation. This is implemented by integrating two Red Hat community efforts, namely RDO and Packstack, into the CentOS installer, Anaconda.

Implementation:

The development involves integrating OpenStack from the CentOS Cloud SIG (which also feeds the Red Hat community’s OpenStack packaging effort, RDO) and Packstack (an OpenStack deployment tool) with Anaconda. The resulting remix will:

  • Install CentOS along with OpenStack in one installation cycle.
  • Use Packstack to configure & deploy OpenStack (in the post-install phase).
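For reference, Packstack itself can be driven like this outside the installer (these are standard Packstack options):

```
# All-in-one deployment on the local machine:
packstack --allinone

# Or generate an answer file, tweak it, and apply:
packstack --gen-answer-file=answers.txt
packstack --answer-file=answers.txt
```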

The integration will be achieved by developing an add-on for the Anaconda installer. Anaconda add-ons can be used to add support for custom configuration screens in the graphical and text-based user interfaces. OpenStack support could also be added to Anaconda by modifying its source, but add-ons are more extensible and maintainable, and easier to debug and test. They also provide an opportunity to extend OpenStack support to other Linux distributions that use Anaconda.

Current Status:

Anaconda has three modes of operation: Kickstart, graphical (GUI), and text (TUI) user interfaces. Hence our add-on development is divided into adding OpenStack installation support for each of these three modes. So far, Kickstart support has been implemented, i.e. the user is able to install OpenStack through a kickstart file during setup.
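As a sketch of what that could look like in a kickstart file — note that the %addon section name and option below are hypothetical placeholders; the real syntax is in the project README:

```
# Hypothetical kickstart fragment; the addon name and options are illustrative only
%addon org_centos_openstack --allinone
%end
```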

Currently, GUI support is being developed. After that, TUI support and OpenStack customization options will be added. The final deliverable will be a “CentOS OpenStack remix” ISO (~1.2GB) extending the CentOS minimal ISO.

The project source, along with testing instructions, is available on GitHub

  • Email: asadxflow@gmail.com
  • IRC: asad_ (#centos-devel)
July 24, 2015

RootFS Build Factory: The Story So Far

July 24, 2015 11:04 AM

Johnny Hughes has already posted images for Cubietruck and Raspberry Pi 2 and told you how to use them with your boards. In this post, I would like to tell you about what has gone into the development of RootFS Build Factory so far, which includes a bit about the CentOS ARMv7 effort.

When I first started looking up project ideas for GSoC this year, the RootFS Build Factory idea caught my attention because it fit right into my interests and skill set. This was in the first week of March. Back then there was no CentOS ARMv7 and as far as I knew, the only person who had done any work in the area of building ARMv7 packages was Howard Johnson. His post on the CentOS arm-dev mailing list http://lists.centos.org/pipermail/arm-dev/2015-February/000089.html described his efforts of compiling CentOS for ARMv7 using Raspberry Pi 2 and Odroid C1. This was my first introduction to CentOS ARMv7.

Back then it seemed like the RootFS Build Factory project would require building a minimal CentOS ARMv7 first and then working on a set of scripts to re-bundle packages from this minimal build. I got in touch with members of the CentOS team on #centos-devel and #centos-gsoc on Freenode and interacted with Jim Perrin and Ian McLeod (who later became my mentor for this GSoC project). With their input I started thinking of alternatives to Howard’s method of compiling packages and came across work done by msalter from Red Hat: https://www.redhat.com/archives/fedora-buildsys-list/2009-July/msg00000.html

He had developed plugins to cross-compile packages using mock and Koji, and a yum plugin for installing non-native RPMs. This seemed great, as I did not have any ARMv7 hardware at that point, and the idea of generating ARMv7 images on a fast x86_64 desktop seemed like a good one. Later on, after discussion with msalter (which happened in the first week of June) and based on his advice, we realized that this approach wasn’t going to work for CentOS, as the pre/post install scripts in the RPMs wouldn’t run in a cross environment.

My original GSoC Proposal was based on using msalter’s yum plugin to build ARMv7 images on x86_64 but after discussion with msalter and in consultation with my mentor Ian, it was decided not to go forward with the yum cross plugin approach and to focus on the targets in my proposal which would involve building CentOS ARMv7 images using either ARMv7 hardware or QEMU.

There was still the big issue of how, where, and by whom CentOS ARMv7 would be compiled. This is where Fabian Arrotin’s efforts came in and took care of matters. His work using a plague build farm he set up on Scaleway nodes got us a working set of ARMv7 packages. Until then, Ian and I were contemplating doing the build ourselves using hardware we had at our disposal.

We decided that until the ARMv7 CentOS build was ready, we would use Fedora for development. Fabian Arrotin was very quick in creating the repositories which meant we didn’t have to use Fedora for long. Of course the first build of CentOS 7 ARM using the RootFS Build Factory happened on Fedora 21.

The present status of the project is this:

  • Tested generation of images for QEMU (https://github.com/mndar/rbf/blob/master/doc/QEMU_README.md), Cubietruck, Odroid C1, Raspberry Pi 2, Banana Pi, Cubieboard 2. Tests for the last two have been reported by Nicolas [nicolas at shivaserv.fr] and David Tischler [david.tischler at mininodes.com] respectively. The Odroid C1 and Raspberry Pi 2 images do not use the CentOS Kernel.
  • Untested Boards: Cubieboard, Wandboard{solo,dual,quad}, Pandaboard, CompuLab TrimSlice, Beaglebone. Support for these boards has been added based on information on the Fedora Wiki. I do not have these boards with me.
  • For adding support for more boards, you can refer to https://github.com/mndar/rbf/blob/master/doc/ADD_SUPPORT_README.md

Presently there are three main components in RootFS Build Factory:

  • rbf.py: takes an XML template and generates an image.
  • rbfdialog.py: a dialog-based UI using the python2-pythondialog library to load/edit/create XML templates.
  • rbfinstaller.py: takes a Generic/QEMU image, writes it to your microSD card, then writes the board-specific U-Boot to the card. This works for the boards where the CentOS kernel is used, since the only difference between images for different boards is U-Boot. In the case of Raspberry Pi 2 and Odroid C1, the image generated by RootFS Build Factory is written to microSD as-is.
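Put together, a typical session might look like the following — the exact arguments are my guess, so treat this as a sketch and check the repository docs for the real invocation:

```
# Create or edit an XML template interactively (hypothetical invocation)
./rbfdialog.py

# Generate an image from the template
./rbf.py cubietruck.xml

# Write a Generic/QEMU image plus board-specific U-Boot to microSD
./rbfinstaller.py
```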

The original proposal mentioned writing the UI in PyGTK, but because cross development was out of the picture and I didn’t think people would run X11 in QEMU just for RootFS Build Factory, I chose a console-based approach. Although the interface loads in the QEMU console, it doesn’t load the colors, and there is some text visible at the edges while selecting files/directories. I suggest you set up bridge networking on your host and then SSH into the QEMU instance.

If you have any queries you can post them in the comments below or email me [emailmandar at gmail.com] or discuss it on the CentOS arm-dev mailing list http://lists.centos.org/mailman/listinfo/arm-dev

Introducing Flamingo: A Lightweight Cloud Instance Contextualization Tool for CentOS Atomic Host

July 24, 2015 11:01 AM

When using on-demand instantiation of virtual machines in cloud computing, users pass configuration data to the cloud, and that data is used to configure the instances. This process is called contextualization. Contextualization includes identity (users, groups), network, system services, and disk configuration.
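To make “configuration data” concrete: with cloud-init’s NoCloud datasource, for example, the payload is just two small files (the user name and hostname below are placeholders):

```shell
# Minimal contextualization payload: a cloud-config user-data file plus
# instance metadata, as consumed by cloud-init's NoCloud datasource.
cat > user-data <<'EOF'
#cloud-config
users:
  - name: demo
    groups: wheel
EOF

cat > meta-data <<'EOF'
instance-id: iid-local01
local-hostname: demo
EOF

# The two files are typically wrapped into an ISO labelled "cidata", e.g.:
#   genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```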

Flamingo is a contextualization tool, being developed under GSoC 2015, that aims to handle the early initialization of cloud instances.

The current de facto standard for instance contextualization is cloud-init. However, there are some problems with it. The most prominent ones are:

  • It is written in a scripting language (Python), which brings the overhead
    of an interpreter and its dependencies, and is slower than a compiled
    language due to its dynamic nature.
  • The documentation is lacking at best. There are examples of common
    use-cases, but most of the code-base and plug-ins are undocumented.
    Inspecting the code-base is a prerequisite to extending the functionality,
    or even understanding it.
  • Test coverage is low, making it hard to extend, maintain, and improve.

Don’t get me wrong, it gets the job done. But it has a lot of issues, and these issues make it hard to use, extend, and maintain.

Goals
Flamingo aims to solve the following problems encountered in cloud contextualization:

  • Speed
  • Dependency
  • Maintainability
  • Extensibility

Go is a very suitable choice for a tool like this: it is fast, has cheap concurrency, and makes dependency management a breeze (see godep). It also allows distributing a single executable binary with all of its dependencies built in.

Target Distribution

The first target for Flamingo is the CentOS Atomic Host and the CentOS Linux generic cloud images. We would, of course, like to see wider adoption, and are interfacing with other projects and image builders to see how best we might collaborate on this moving forward.

Getting Involved
You can find the source code for the tool here.

For more details, please check this blog post.

Discussions
In the meantime, if you’d like to share your opinions, learn more, or contribute, please feel free to open an issue, mail the centos-devel or centos users list, or come to the #centos-devel IRC channel to have a chat.

Contact

  • E-mail: contact _ tmrts.com
  • IRC: tmrts

Tamer Tas

Powered by Planet!
Last updated: February 11, 2016 04:00 AM