August 11, 2017

New CentOS Atomic Release and Kubernetes System Containers Now Available

August 11, 2017 06:53 PM

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (7.1707), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

The release, which came as part of the monthly CentOS release stream, was a modest one, including only a single glibc bugfix update. The next Atomic Host release will be based on the RHEL 7.4 source code and will include support for overlayfs container storage, among other enhancements.

Outside of the Atomic Host itself, the SIG has updated its Kubernetes container images to be usable as system containers. What's more, in addition to the Kubernetes 1.5.x-based containers that derive from RHEL, the Atomic SIG is now producing packages and containers that provide the current 1.7.x version of Kubernetes.

Containerized Master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. You can install the master kubernetes components (apiserver, scheduler, and controller-manager) as system containers, using the following commands:

# atomic install --system --system-package=no --name kube-apiserver registry.centos.org/centos/kubernetes-apiserver:latest

# atomic install --system --system-package=no --name kube-scheduler registry.centos.org/centos/kubernetes-scheduler:latest

# atomic install --system --system-package=no --name kube-controller-manager registry.centos.org/centos/kubernetes-controller-manager:latest
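To verify the installs, you can list the system containers and check their units; this assumes the atomic CLI on your host supports the containers subcommand and that the generated systemd unit names follow the --name values used above:

# atomic containers list

# systemctl status kube-apiserver kube-scheduler kube-controller-manager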

Kubernetes 1.7.x

The CentOS Virt SIG is now producing Kubernetes 1.7.x rpms, available through this yum repo. The Atomic SIG is maintaining system containers based on these rpms that can be installed as follows:

On your master:

# atomic install --system --system-package=no --name kube-apiserver registry.centos.org/centos/kubernetes-sig-apiserver:latest

# atomic install --system --system-package=no --name kube-scheduler registry.centos.org/centos/kubernetes-sig-scheduler:latest

# atomic install --system --system-package=no --name kube-controller-manager registry.centos.org/centos/kubernetes-sig-controller-manager:latest

On your node(s):

# atomic install --system --system-package=no --name kubelet registry.centos.org/centos/kubernetes-sig-kubelet:latest

# atomic install --system --system-package=no --name kube-proxy registry.centos.org/centos/kubernetes-sig-proxy:latest
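The atomic install command normally creates (and may already start) a systemd unit per container; assuming the unit names follow the --name values above, you can make sure the node services are running and enabled with:

# systemctl start kubelet kube-proxy

# systemctl enable kubelet kube-proxy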

Both the 1.5.x and 1.7.x sets of containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and function as drop-in replacements for the installed rpms. If you prefer to run Kubernetes from installed rpms, you can layer the master components onto your Atomic Host image using rpm-ostree package layering with the command: atomic host install kubernetes-master.

The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. For links to media, see the CentOS wiki.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, or contribute to documentation -- join us!

The SIG meets every two weeks on Tuesday at 04:00 UTC in #centos-devel, and on the alternating weeks, meets as part of the Project Atomic community meeting at 16:00 UTC on Monday in the #atomic channel. You'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

August 04, 2017

CentOS Linux 7 (1708); based on RHEL 7.4 Source Code

August 04, 2017 11:04 AM

Red Hat released Red Hat Enterprise Linux 7.4 on August 1st, 2017 (Info).  In the CentOS world, we call this type of release a 'Point Release', meaning that the major version of a distribution (in this case Red Hat Enterprise Linux 7) is getting a new point in time update set (in this case '.4').  In this specific case, there were about 700 Source packages that were updated.  On this release date, the CentOS Project team began building a point release of CentOS Linux 7, CentOS Linux 7 (1708), with this new source code from Red Hat.  Here is how we do it.

When there is a new release of RHEL 7 source code, the public release of this source code happens on the CentOS git server (git.centos.org).  We then use a published set of tools (tools) to build Source RPMs (info) from the released git source code and immediately start building the updated version of CentOS Linux.  We use a program called mock to build Binary RPM packages from the SRPMs.
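For anyone curious what that step looks like in practice, rebuilding a single SRPM with mock is roughly the following (epel-7-x86_64 is the stock configuration shipped with mock; our builders use their own configuration, and the package name here is just an illustrative example):

$ mock -r epel-7-x86_64 --rebuild bash-4.2.46-28.el7.src.rpm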

At the time of this article (5:00am CDT on August 4th, 2017), we have completed building 574 of the approximately 700 SRPMs needed for our point release.  Note that this is the largest number of packages in an EL7 point release so far.

What's Next

Continuous Release Repository

Once we complete the building of the 700 SRPMs in the point release, they will start our QA process.  Once the packages have gone through enough of the QA process to ensure they are built correctly, behave normally, and link against the proper libraries (tutorial; linking), the first set of updates that we release will be from our Continuous Release (CR) repository.  These binary packages will be the same packages that we will use in the next full tree, and they will be available as updates in the CR repo of the current release.  If you have opted into the CR repo (explained in the above link), then after we populate and announce the release of those packages, a normal 'yum update' will upgrade you to the new packages.
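For reference, opting into CR on CentOS Linux 7 is a two-command affair (the centos-release-cr package carries the repo definition):

# yum install centos-release-cr
# yum update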

Normally the only packages that we don't release into the CR repo are the new centos-release package and the new anaconda package, and any packages associated specifically with them.  Anaconda is the installer used to create the new ISOs for install and the centos-release package has information on the new full release.  These packages, excluded from CR, will be in the new installer tree and on the new install media of the full release.

Historically, the CR repo is released between 7 and 14 days after Red Hat source code release, so we would expect CR availability between the 8th and 15th of August, 2017 for this release.

Final Release and New Install Media, CentOS Linux 7 (1708)

After the CR Repo release, the CentOS team and the QA team will continue QA testing and we will create a compilation of the newly built and released CR packages and packages still relevant from the last release into a new repository for CentOS Linux 7 (1708). Once we have a new repo, we will create and test install media. The repository and install media will then be made available on mirror.centos.org.  It will be in a directory labeled 7.4.1708.

Historically, the final release becomes available 3 to 6 weeks after the release of the source code by Red Hat.  So, we would expect our full release to happen sometime between August 22nd and  September 12th, 2017.

As I mentioned earlier, this is the largest point release yet in terms of number of packages released in the EL7 cycle to date, so it may take a few days longer for each cycle above.  Also, building this set of packages requires something new: the Developer Toolset (version 6) compiler, needed for some packages.  This should not be a major issue, as the Software Collections Special Interest Group (SCL SIG) has a working version of that toolset already released.  A big thank you to that SIG, as they have saved me a huge amount of work and time for this release.
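If you want to try that same toolchain yourself, the SCL SIG repos make it straightforward; a rough sketch (exact package names may vary slightly between releases):

# yum install centos-release-scl
# yum install devtoolset-6-gcc devtoolset-6-gcc-c++
$ scl enable devtoolset-6 bash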

August 02, 2017

Updated CentOS Vagrant Images Available (v1707.01)

August 02, 2017 10:40 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated to 31 July 2017 and the following changes:

  • we are again using the same kickstarts for Hyper-V and the other hypervisors
  • you can now login on the serial console (useful if networking is broken)

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems (see the combined Vagrantfile example after this list).

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
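Putting the workarounds from items 1 and 2 together, a minimal Vagrantfile for these boxes might look like the following sketch (it assumes the vagrant-sshfs plugin is installed):

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # use sshfs for synced folders, avoiding the missing Guest Additions
  config.vm.synced_folder ".", "/vagrant", type: "sshfs"
end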

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.6 on OS X 10.11.6, with VirtualBox 5.1.22.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

July 27, 2017

Using NFS for OpenStack (glance, nova) with SELinux

July 27, 2017 10:00 PM

As announced already, I was (among other things) playing with OpenStack/RDO and had deployed some small openstack setup in the CentOS Infra. Then I had to look at our existing DevCloud setup. This setup was based on OpenNebula running on CentOS 6, and also used Gluster as the backend for the VM store. That's when I found out that Gluster isn't a valid option anymore: Gluster was deprecated and has now even been removed from Cinder. Sad, as one advantage of Gluster is that you could (in fact, you had to!) use libgfapi so that the qemu-kvm process could talk directly to Gluster through libgfapi, instead of accessing VM images over locally mounted gluster volumes (please, don't even try to do that through fuse).

So what could be a replacement for Gluster on the openstack side? I still have some dedicated nodes for storage backend[s], but not enough to even think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, the driver was removed from Cinder only, so I could have tried to keep using it just for Glance and Nova, as I have no need for Cinder for the DevCloud project, but clearly that would be dangerous for potential upgrades.)

It's not that I'm a fan of storing qcow2 images on top of NFS, but it seemed to be my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later. So let's test this first, using NFS over Infiniband (using IPoIB), and so at "good speed" (the Infiniband hardware running for Gluster is still in place, and that Gluster setup will be replaced).

It's easy to mount the NFS exported directory under /var/lib/glance/images for Glance, and then, on every compute node, another NFS export under /var/lib/nova/instances/.
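For example, the mounts could look like this (nfs.example.org and the export paths are placeholders for your actual NFS server):

# mount -t nfs nfs.example.org:/exports/glance /var/lib/glance/images
# mount -t nfs nfs.example.org:/exports/nova /var/lib/nova/instances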

That's where you have to see what gets blocked by SELinux, as the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't seem to allow that.

I initially tested services one by one and decided to open a Pull Request for this, but in the meantime I built a custom SELinux policy that seems to do the job in my RDO playground.

Here is the .te file that you can compile into a usable .pp policy file:

module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read };

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
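Compiling and loading it uses the standard SELinux toolchain:

$ checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te
$ semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
# semodule -i os-local-nfs.pp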

Of course you also need to enable some booleans. Some are already loaded by openstack-selinux (and you can see the enabled booleans by looking at /etc/selinux/targeted/active/booleans.local), but you now also need virt_use_nfs=1.
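Setting it persistently is a one-liner:

# setsebool -P virt_use_nfs 1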

Now that it works, I can replay that (all of it coming from puppet) on the DevCloud nodes.

No! Don’t turn off SELinux!

July 27, 2017 02:24 PM

One of the daily activities of the CentOS Community Lead is searching the Internet looking for new and interesting content about CentOS that we can share on the @CentOSProject Twitter account, or Facebook, Google +, or Reddit. There's quite a bit of content out there, too, since CentOS is very popular.

Unfortunately, some of the content goes unshared, based on one simple text search:

"SELinux AND disable"

That phrase is indicative of one thing: the author is advocating the deactivation of SELinux, one of the most important security tools any Linux user can have. When that step is outlined, we have to pass on sharing it, and we even recommend that readers ignore such advice completely.

What is SELinux?

But why do articles feel the need to outright deactivate SELinux rather than help readers work through any problems they might have? Is SELinux that hard?

Actually, it's really not.

According to Thomas Cameron, Chief Architect for Red Hat, SELinux is a form of mandatory access control. In the past, UNIX and Linux systems have used discretionary access control, where a user will own a file, the user's group will own the file, and everyone else is considered to be other. Users have the discretion to set permissions on their own files, and Linux will not stop them, even if the new permissions might be less than secure, such as running chmod 777 on your home directory.

"[Linux] will absolutely give you a gun, and you know where your foot is," Cameron said back in 2015 at Red Hat Summit. The situation gets even more dangerous when a user has root permissions, but that is the nature of discretionary access control.

With a mandatory access control system like SELinux in place, policies can be set and implemented by administrators that can typically prevent even the most reckless user from giving away the keys to the store. These policies are also fixed, so not even root access can change them. In the example above, if a user had run chmod 777 on their home directory, there should be a policy in place within SELinux to prevent other users or processes from accessing that home directory.

Policies can be super fine-grained, setting access rules for users, files, memory, sockets, and ports.

In distros like CentOS, there are typically two kinds of policies.

  • Targeted. Targeted processes are protected by SELinux, and everything else is unconfined.
  • MLS. Multi-level/multi-category security policies that are complex and often overkill for most organizations.

Targeted SELinux is the operational level most SELinux users are going to work with. There are two important concepts to keep in mind when working with SELinux, Cameron emphasized.

The first is labeling, where files, processes, ports, etc. are all labeled with an SELinux context. For files and directories, these labels are handled as extended attributes within the filesystem itself. For processes, ports, and the rest, labels are managed by the Linux kernel.

Within the SELinux label is a type category (along with SELinux user, role, and level categories). Those latter aspects of the label are really only useful for complex MLS policies. But for targeted policies, type enforcement is key. A process running in a given context -- say, httpd_t -- should be allowed to interact with a file that has an httpd_config_t label, for example.
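You can inspect those labels yourself with the -Z flag that most tools grew for SELinux; for example (output will vary by system):

$ ls -Z /etc/httpd/conf/httpd.conf
-rw-r--r--. root root system_u:object_r:httpd_config_t:s0 /etc/httpd/conf/httpd.conf
$ ps -eZ | grep httpd
system_u:system_r:httpd_t:s0    1234 ?    00:00:01 httpd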

Together, labeling and type enforcement form the core functionality of SELinux. This simplification of SELinux, and the wealth of useful tools in the SELinux ecosystem, have made SELinux a lot easier to manage than in the old days.

So why is it that when SELinux throws an error, so many tutorials and recommendations simply tell you to turn off SELinux enforcement? For Cameron, that's analogous to turning your car's radio up really loud when you hear it making a weird noise.

Instead of turning SELinux off and thus leaving your CentOS system vulnerable to any number of problems, try checking the common problems that come up when working with SELinux. These problems typically include:

  • Mislabeling. This is the most common type of SELinux error, where something has the wrong label and needs to be fixed (see the example after this list).
  • Policy Modification. If SELinux has set a certain policy by default, based on use cases over time, you may have a specific need to change that policy slightly.
  • Policy Bugs. An outright mistake in the policy.
  • An Actual Attack. If this is the case, turning off enforcement with 'setenforce 0' would seem a very bad idea.
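For the mislabeling case, the fix is usually a single command that restores the default context; for example, for mislabeled web content:

# restorecon -Rv /var/www/html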

Don't Turn It Off!

If someone tells you not to run SELinux, this is not based on any reason other than supposed convenience or misinformation about SELinux.

The "convenience" argument would seem to be moot, given that a little investigation of SELinux errors using tools like `sealert` reveal verbose and detailed messages on what the problem is and exactly what commands are needed to get the problem solved.

Indeed, Cameron recommends that instead of turning off SELinux altogether, you temporarily run the process with SELinux in permissive mode; when policy violations (known as AVC denials) show up in the SELinux logs, you can either fix the boolean settings within existing policies to allow the new process to run without error, or, if needed, build new policy modules on a test machine, move them to production machines, use `semodule -i` to install them, and set booleans based on what was learned on the test machines.
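A sketch of that workflow, assuming the denials are recent enough to still be in the audit log (the module name mynfs is arbitrary):

# setenforce 0                                # permissive, temporarily
# ausearch -m avc -ts recent                  # review the AVC denials
# ausearch -m avc -ts recent | audit2allow -M mynfs
# semodule -i mynfs.pp                        # install the generated module
# setenforce 1                                # back to enforcing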

This is not 2010 anymore; SELinux on CentOS is not difficult to untangle, and it does not have to be pushed aside in favor of convenience.

You can read more about SELinux in the CentOS wiki. Or see the SELinux coloring book, for a gentler introduction to what it is and how it works.

July 18, 2017

CentOS Atomic Host 7.1706 Released

July 18, 2017 11:38 PM

An updated version of CentOS Atomic Host (tree version 7.1706) is now available. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.17.2-9.git2760e30.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-32.git88a4867.el7.centos.x86_64
  • etcd-3.1.9-1.el7.x86_64
  • flannel-0.7.1-1.el7.x86_64
  • kernel-3.10.0-514.26.2.el7.x86_64
  • kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64
  • ostree-2017.5-3.el7.x86_64
  • rpm-ostree-client-2017.5-1.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components using rpm-ostree package layering using the command: atomic host install kubernetes-master.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
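A minimal sketch of building such a NoCloud seed image (user-data and meta-data are the standard cloud-init files you write yourself):

$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data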

Amazon Machine Images

Region Image ID
us-east-1 ami-70e8fd66
ap-south-1 ami-c0c4bdaf
eu-west-2 ami-dba8bebf
eu-west-1 ami-42b6593b
ap-northeast-2 ami-7b5e8015
ap-northeast-1 ami-597a9e3f
sa-east-1 ami-95aedaf9
ca-central-1 ami-473e8123
ap-southeast-1 ami-93b425f0
ap-southeast-2 ami-e1332f82
eu-central-1 ami-e95ffd86
us-east-2 ami-1690b173
us-west-1 ami-189fb178
us-west-2 ami-a52a34dc

SHA Sums

f854d6ea3fd63b887d644b1a5642607450826bbb19a5e5863b673936790fb4a4  CentOS-Atomic-Host-7.1706-GenericCloud.qcow2
9e35d7933f5f36f9615dccdde1469fcbf75d00a77b327bdeee3dbcd9fe2dd7ac  CentOS-Atomic-Host-7.1706-GenericCloud.qcow2.gz
836a27ff7f459089796ccd6cf02fcafd0d205935128acbb8f71fb87f4edb6f6e  CentOS-Atomic-Host-7.1706-GenericCloud.qcow2.xz
e15dded673f21e094ecc13d498bf9d3f8cf8653282cd1c83e5d163ce47bc5c4f  CentOS-Atomic-Host-7.1706-Installer.iso
5266a753fa12c957751b5abba68e6145711c73663905cdb30a81cd82bb906457  CentOS-Atomic-Host-7.1706-Vagrant-Libvirt.box
b85c51420de9099f8e1e93f033572f28efbd88edd9d0823c1b9bafa4216210fd  CentOS-Atomic-Host-7.1706-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, or contribute to documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

July 04, 2017

Updated CentOS Vagrant Images Available (v1706.01)

July 04, 2017 02:22 PM

2017-07-14: We have released version 1706.02 of centos/6, which fixes a regression introduced by the "stack clash" patch (this made Java crash, just like on centos/7). The packages were all updated to 2017-07-14 and include additional fixes.

2017-07-05: We have released version 1706.02 of centos/7, providing RHBA-2017:1674-1, which fixes a regression introduced by the patch for the "stack clash" vulnerability. Existing boxes don't need to be destroyed and recreated: running sudo yum update inside the box will upgrade the kernel if needed. There is no patch for CentOS Linux 6 at this time, but we plan to provide an updated centos/6 image if such a patch is later released.

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated to 2 July 2017. This release also includes an updated kernel with important security fixes.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.6 on OS X 10.11.6, with VirtualBox 5.1.22.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

June 27, 2017

HPC Student Cluster Competition, Frankfurt, Germany

June 27, 2017 03:51 PM

Last week, as I mentioned in my earlier post, I was in Frankfurt, Germany, for the ISC High Performance Computing conference. The thing that grabbed my attention, more than anything else, was the Student Cluster Competition.  11 teams from around the world - mostly from Universities - were competing to create the fastest (by a variety of measures) student supercomputer. These students have progressed from earlier regional competitions, and are the world's finest young HPC experts. Just being there was an amazing accomplishment. And these young people were obviously thrilled to be there.

Each team had hardware that had been sponsored by major HPC vendors. I talked with several of the teams about this. The UPC Thunderchip team from Barcelona Tech (winner of the Fan Favorite award!) said that their hardware, for example, had been donated by (among other vendors) CoolIT Systems, which had donated the liquid cooling system that sat atop their rack.

(When I was in college, we had a retired 3B2 that someone had dumpster-dived for us, but I'm not bitter.)

Over the course of the week, these teams were given a variety of data challenges. Some of them, they knew ahead of time and had optimized for. Others were surprise challenges, which they had to optimize for on the fly.

While the jobs were running, the students roamed the show floor, talking with vendors, and, I'm sure, making contacts that will be beneficial in their future careers.

Now, granted, I had a bit of an ulterior motive. I was trying to find out the role that CentOS plays in this space. And, as I mentioned in my earlier post, 8 of the 11 teams were running CentOS. (One - University of Hamburg - was running Fedora. Two - NorthEast/Purdue and Barcelona Tech - were running Ubuntu.) And the teams that placed first, second, and third in the competition (First place: Tsinghua University, Beijing. Second place: Centre for High Performance Computing, South Africa. Third place: Beihang University, Beijing.) were also running CentOS. And many of the research organizations I talked to were also running CentOS on their HPC clusters.

I ended up doing interviews with just two of the teams, about their hardware and the tests they had to complete to win the contest. You can see those on my YouTube channel, HERE and HERE.

At the end, while just three teams walked away with the trophies, all of these students had an amazing opportunity. I was so impressed with their professionalism, as well as their brilliance.

And good luck to the teams who have been invited to the upcoming competition in Denver. I hope I'll be able to observe that one, too!

June 19, 2017

New CentOS Atomic Host Release Available for Download

June 19, 2017 04:31 PM

An updated version of CentOS Atomic Host (tree version 7.1705) is now available. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.17.2-4.git2760e30.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-28.git1398f24.el7.centos.x86_64
  • etcd-3.1.7-1.el7.x86_64
  • flannel-0.7.0-1.el7.x86_64
  • kernel-3.10.0-514.21.1.el7.x86_64
  • kubernetes-node-1.5.2-0.6.gitd33fd89.el7.x86_64
  • ostree-2017.5-1.el7.x86_64
  • rpm-ostree-client-2017.5-1.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components using rpm-ostree package layering using the command: atomic host install kubernetes-master.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region Image ID
us-east-1 ami-90e7c686
ap-south-1 ami-2ba1de44
eu-west-2 ami-2b44524f
eu-west-1 ami-a6c5dbc0
ap-northeast-2 ami-87ba65e9
ap-northeast-1 ami-64868e03
sa-east-1 ami-f9197295
ca-central-1 ami-66e55a02
ap-southeast-1 ami-850a88e6
ap-southeast-2 ami-caf7e6a9
eu-central-1 ami-42e84f2d
us-east-2 ami-89bb9dec
us-west-1 ami-245b7644
us-west-2 ami-68818a11

SHA Sums

46082267562f9eefbc4b29058696108f07cb0ceb2dafe601ec5d66bb1037120a  CentOS-Atomic-Host-7.1705-GenericCloud.qcow2
1bd6dfbe360be599a45734f03b34e08cc903630327e1c54534eb4218bf18d0da  CentOS-Atomic-Host-7.1705-GenericCloud.qcow2.gz
aa4a3ac057d3ea898bea07052aa9fd20c7ca7ea3c8a5474bca9227622915e5e2  CentOS-Atomic-Host-7.1705-GenericCloud.qcow2.xz
d99c2e9d1d31907ace3c1e54fc3087ebb9d627ca46168f7e65b469789cd39739  CentOS-Atomic-Host-7.1705-Installer.iso
cfd6a29e26e202476b6a2dc54b1b31321688270a6cc6a9ef4206ac0ebd0e309c  CentOS-Atomic-Host-7.1705-Vagrant-Libvirt.box
d9b5f965f637a909efa15cd43063729db002c012c81a8cde0de588ea0932f970  CentOS-Atomic-Host-7.1705-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, or contribute to documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.


June 16, 2017

Introducing official CentOS images for Vagrant on Hyper-V

June 16, 2017 02:26 PM

Starting with v1705, we are offering official CentOS images for Vagrant with the Hyper-V provider. Hyper-V is Microsoft's native hypervisor, available by default in Windows Server 2008 and newer, Windows 8 Pro, Windows 10 Pro, and Windows Hyper-V Server 2012; since it also powers Microsoft Azure, it's probably a good alternative to VirtualBox for people using Vagrant professionally on Windows hosts. We now support Hyper-V, libvirt (KVM), VirtualBox and VMware as Vagrant providers.

The build scripts are in the vagrant directory of our GitHub repo, https://github.com/CentOS/sig-cloud-instance-build. Pull requests are welcome; please only file a new issue to report actual bugs, not support requests (please use the mailing lists or IRC for that).

Just like with other proprietary hypervisors like VMware, we have no way to automatically test the images for Hyper-V on our Continuous Integration infrastructure. We welcome any feedback from Hyper-V users -- please let us know if you think you would be able to test our monthly releases on a regular basis (we normally release at the beginning of each month). You can contact us on the centos-devel mailing list or via IRC in #centos-devel on Freenode.

We have been working on providing images for Hyper-V since at least October 2016. It took much longer than I anticipated, due to the necessity of upgrading our build infrastructure and fixing several problems with the image configuration. In this context, I would like to warmly thank the following people:

  • Patrick Lang from Microsoft, for helping with testing and debugging the experimental Hyper-V images;
  • Fabian Arrotin and Thomas Oulevey from the CentOS Project, for their infrastructure work;
  • Michael Vermaes, for helping test the experimental images;
  • Karanbir Singh, the CentOS Project lead, for his constant support and assistance.

Updated CentOS Vagrant Images Available (v1705.01)

June 16, 2017 02:26 PM

2017-06-21: We have released version 1705.02 of the official CentOS Linux images for Vagrant, providing important security fixes in glibc and the Linux kernel, addressing the "stack clash" vulnerability. We advise all users to update to this new release. Existing boxes don't need to be destroyed and recreated: running sudo yum update inside the box will upgrade the packages as needed (a reboot of the box is required after the update).

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated to 1 June 2017. Starting with this release, we also provide official images for Hyper-V, Microsoft's native hypervisor, available by default in Windows Server 2008 and newer, Windows 8 Pro, Windows 10 Pro, and Windows Hyper-V Server 2012.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.3 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.4 on OS X 10.11.6, with VirtualBox 5.1.20.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

May 16, 2017

New CentOS Atomic Host with Optional Docker 1.13

May 16, 2017 05:55 PM

An updated version of CentOS Atomic Host (tree version 7.20170428) is now available, featuring the option of substituting the host's default docker 1.12 container engine with a more recent, docker 1.13-based version, provided via the docker-latest package.

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.15.4-2.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-16.el7.centos.x86_64
  • etcd-3.1.3-1.el7.x86_64
  • flannel-0.7.0-1.el7.x86_64
  • kernel-3.10.0-514.16.1.el7.x86_64
  • kubernetes-node-1.5.2-0.5.gita552679.el7.x86_64
  • ostree-2017.3-2.el7.x86_64
  • rpm-ostree-client-2017.3-1.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components using rpm-ostree package layering using the command: atomic host install kubernetes-master.

docker-latest

You can switch to the alternate docker version by running:

# systemctl disable docker --now
# systemctl enable docker-latest --now
# sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker

Because both docker services share the /run/docker directory, you cannot run both docker and docker-latest at the same time on the same system.
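To confirm which engine is active after switching, you can check the service states and the reported version (docker-latest should report a 1.13.x server version):

# systemctl is-active docker docker-latest
# docker version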

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region Image ID
ap-south-1 ami-9c7b06f3
eu-west-2 ami-14425570
eu-west-1 ami-a1b9b7c7
ap-northeast-2 ami-e01cc18e
ap-northeast-1 ami-2a0d304d
sa-east-1 ami-ce7619a2
ca-central-1 ami-8b813def
ap-southeast-1 ami-61e36702
ap-southeast-2 ami-84c7cde7
eu-central-1 ami-f970ae96
us-east-1 ami-4a70015c
us-east-2 ami-d2cfe8b7
us-west-1 ami-57ba9c37
us-west-2 ami-fbd8bd9b

SHA Sums

977c9b6e70dd0170fc092520f01be26c4d256ffe5340928d79c762850e5cedd9  CentOS-Atomic-Host-7.1704-GenericCloud.qcow2
781074c43aa6a6f3cad61a77108541976776eb3cb6fe30f54ca746a8314b5f87  CentOS-Atomic-Host-7.1704-GenericCloud.qcow2.gz
aef7fedf01b920ee75449467eb93724405cb22d861311fbc42406a7bd4dbfee2  CentOS-Atomic-Host-7.1704-GenericCloud.qcow2.xz
669c5fd1b97bc2849a7e3dbec325207d98e834ce71e17e0921b583820d91f4f5  CentOS-Atomic-Host-7.1704-Installer.iso
b5ef69bff65ab595992649f62c8fc67c61faa59ba7f4ff0cb455a9196e450ae2  CentOS-Atomic-Host-7.1704-Vagrant-Libvirt.box
73757f50ef9cdac2e3ba6d88a216cca23000a32fa96891902feaa86d49147e3f  CentOS-Atomic-Host-7.1704-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, or contribute to documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

May 15, 2017

Linking Foreman with Zabbix through MQTT

May 15, 2017 10:00 PM

It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time", as I recently needed to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7).

Within the CentOS Infra, we use Foreman as an ENC for our (multiple) Puppet environments. For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative, and should automatically configure the monitoring solution you have in place in your Infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

What I've seen so far is that you use exported resources on each node, store them in another PuppetDB, and then, on the monitoring node, reapply all those resources. The problem with such a solution is that it's "expensive" and, when one thinks about it, a little bit strange to export the "knowledge" from Foreman back into another DB, and then let puppet compile a huge catalog on the monitoring side, even if nothing has changed.

Another issue is that in our Zabbix setup we also have some nodes that aren't really managed by Foreman/puppet (but by other automation around Ansible), so I had to use an intermediate step that other tools can also use/abuse for the same reason.

The other reason also is that I admit that I'm a fan of "event driven" configuration change, so my idea was :

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (asynchronously, so that it doesn't slow down the foreman update operation itself)
  • let the Zabbix server know about that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components:

Here is a small overview of the process:

Foreman → MQTT → Zabbix

Foreman hooks

Setting up foreman hooks is really easy: just install the package itself (tfm-rubygem-foreman_hooks.noarch), read the Documentation, and then create your scripts. There are some examples for Bash and Python in the examples directory, but basically you just need to place some scripts at specific place[s]. In my case I wanted to "trigger" an event in the case of a node update (like adding a puppet class, or a variable/parameter change), so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.

One little remark though: if you add a new file, don't forget to restart foreman itself, so that it picks up that hook file; otherwise it will be ignored and never run.
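As a rough illustration (not our production script), a minimal hook that publishes the updated host to an MQTT topic could look like this; foreman_hooks passes the event and object on the command line and the object's JSON on stdin, and the broker/topic names here are made up:

#!/bin/bash
# /usr/share/foreman/config/hooks/host/managed/update/10_publish_mqtt.sh
# $1 = event name (update), $2 = object (the host FQDN); full JSON on stdin
event="$1"
host="$2"
json=$(cat)  # keep the JSON around if you need more than the hostname
mosquitto_pub -h broker.internal -t "foreman/host/${event}" -m "${host}"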

Mosquitto

Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason I selected mosquitto is that it's very lightweight (the package size is under 200KB), and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest you read Jan-Piet Mens' dedicated blog post about it. I'll even admit that I discovered it by attending one of his talks on the topic, back in the Loadays.org days :-)
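For debugging, subscribing to everything under a topic prefix lets you watch events flow (broker and topic assumed from the hook sketch above):

$ mosquitto_sub -h broker.internal -t 'foreman/#' -v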

Zabbix-cli

While one can always talk to the raw Zabbix API directly, I found it useful to use a tool I was already using for various tasks around Zabbix: zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are available on CBS.

So I plumbed it into a systemd unit that subscribes to a specific MQTT topic, parses the needed information (like the hostname and the zabbix templates to link, unlink, etc.) and then updates Zabbix itself. From the log output:

[+] 20170516-11:43 :  Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "dev-registry.lon1.centos.org" 
[Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: dev-registry.lon1.centos.org ({"hostid":"10174"})
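For illustration, the unit itself can stay very small; a sketch (every name here is hypothetical, and the bridge script is what reads messages from mosquitto_sub and drives zabbix-cli):

[Unit]
Description=MQTT to Zabbix bridge
After=network-online.target

[Service]
# mqtt-zabbix-bridge subscribes to the MQTT topic, parses each message
# and calls zabbix-cli to (un)link templates on the matching host
ExecStart=/usr/local/bin/mqtt-zabbix-bridge
Restart=on-failure

[Install]
WantedBy=multi-user.target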

Cool, so now I don't have to worry about forgetting to tie a zabbix template to a host, as it's now done automatically. Needless to say, the deployment of those tools was of course automated and coming from Puppet/foreman :-)

May 07, 2017

Deploying Openstack through puppet on CentOS 7 - a Journey

May 07, 2017 10:00 PM

It's not a secret that I was playing/experimenting with OpenStack in the last few days. When I mention OpenStack, I should even say RDO, as it's RPM packaged, built and tested on CentOS infra.

Now that it's time to deploy it in production, you should take a deeper look at how to proceed and which tool to use. Sure, Packstack can help you set up a quick PoC, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agrees that it's not the kind of tool you want to use for a proper deployment.

So let's have a look at the available options. While I really like/prefer Ansible, we (the CentOS Project) still use puppet as our configuration management tool, with Foreman as the ENC. So let's look at both options :

  • Ansible : lots of native modules exist to manage an existing/already deployed openstack cloud, but nothing really that can help set one up from scratch. OTOH it's true that OpenStack-Ansible exists, but that will set up openstack components inside LXC containers, and I wasn't really comfortable with the whole idea (YMMV)
  • Puppet : lots of puppet modules exist, so you can automatically reuse/import those into your existing puppet setup, and it seems to be the preferred method when discussing with people in #rdo (when not using TripleO though)

So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules way". That was the beginning of a journey, where I saw a lot of Yaks to shave too.

It started with me trying to "just" reuse and adapt some existing modules I found. Wrong. And it's even funny, because it goes against one of my mantras : "Don't try to automate what you can't understand from scratch" (and I fully agree with Matthias' thoughts on this).

So one could just read all the openstack puppet modules, and then try to understand how to assemble them together to build a cloud. But I remembered that Packstack itself is puppet driven, so I decided to have a look at what it was generating and start from that to write my own module from scratch. How to proceed? Easy : on a VM, just install packstack, generate the answer file, "salt" it to your needs, and generate the manifests :

 yum install -y centos-release-openstack-ocata && yum install openstack-packstack -y
 packstack --gen-answer-file=answers.txt
 vim answers.txt
 packstack --answer-file=answers.txt --dry-run
 * The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests

So now we can have a look at all the generated manifests and start writing our own from scratch, reimporting all the needed openstack puppet modules. And that's what I did .. but I started to encounter some issues. The first one was that the puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so centos 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le).

One of the openstack components is RabbitMQ, but the openstack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by openstack puppet. The first thing I had to do was investigate our own modules, as some have the same name but don't come from puppetlabs/forge; instead of analyzing all of those, I moved everything RDO related to a different environment so that it wouldn't conflict with some of our existing modules. Back now to the RabbitMQ one : puppet threw errors when just trying to use it. First yak to shave : updating the whole CentOS infra puppet to a higher version, because of a puppet bug. So let's rebuild puppet for centos 6/7 at a higher version on CBS.

That of course meant testing our own modules, on our test Foreman/puppetmasterd instance first, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.

After the rabbitmq issue was solved, I encountered other ones, coming from the openstack puppet modules this time, as the .rb ruby code used for types/providers was expecting ruby2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So another yak to shave : migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the same Foreman version running on the CentOS 6 node, and then following the migration procedure; but then, again, time lost testing the update/upgrade and also all the other modules, etc. (One can see why I prefer agentless cfgmgmt.)

Finally I found that some of the openstack puppet modules don't touch the whole config. Let me explain why. In OpenStack Ocata, some things are mandatory, like the Placement API, but despite all the classes being applied, I had some issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my puppet code for the user/password used to configure the rabbitmq settings, but that was solved and also applied correctly in /etc/nova/nova.conf (the "transport_url=" setting). But the openstack nova services (all the nova-*.log files btw) kept saying that the given credentials were refused by rabbitmq, while they worked when tested manually.

After having checked the rabbitmq logs, I saw that despite what was configured in nova.conf, the services were still trying to use the wrong user/pass to connect to rabbitmq. Strange, as ::nova::cell_v2::simple_setup was included and was also supposed to use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happened : in fact, even if you modify nova.conf, nova stores some settings in the mysql DB, and you can see those (so the "wrong" ones in my case) with :

nova-manage cell_v2 list_cells --debug

Something to keep in mind for an initial deployment : if your rabbitmq user/pass needs to be changed, puppet will not complain, but it will only update the conf file, not the settings imported first by puppet into the DB (table nova_api.cell_mapping if you're interested). After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the puppet module/manifests from puppetmasterd, to confirm.
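For reference, this is roughly how you can inspect (and, carefully, realign) what got stored, assuming the default database names; the cell name and transport URL below are placeholders to adapt :

# inspect what nova stored at cell_v2 setup time
mysql -e 'SELECT uuid,name,transport_url FROM nova_api.cell_mapping\G'
# realign it with the corrected value from nova.conf (adapt user/pass/host!)
mysql -e "UPDATE nova_api.cell_mapping SET transport_url='rabbit://openstack:NEWPASS@controller:5672/' WHERE name='cell1';"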

That was quite a journey; it's probably only the beginning, but it's a good start. Now to investigate other options for cinder/glance, as it seems Gluster was deprecated there and I'd like to know why.

Hope this helps if you need to bootstrap openstack with puppet!

May 03, 2017

Updated CentOS Vagrant Images Available (v1704.01)

May 03, 2017 01:32 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 30 April 2017 and the following changes:

  • kdump has been removed from the images, since it needs to reserve 160MB + 2bits/4kB RAM for the crash kernel, and automatic allocation only works on systems with at least 2GB RAM

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    Please note that there is a bug in VirtualBox 5.1.20 that prevents vagrant-vbguest from working.
    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.3 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.4 on OS X 10.11.6, with VirtualBox 5.1.20.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6, or...
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum f8cd95ce24fd9f615dd38bbf8b6c285a916a4cac1d98ada4ab16d6626468032b --provider libvirt --box-version 1704.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

April 13, 2017

Deploying Openstack PoC on CentOS with linux bridge

April 13, 2017 10:00 PM

I was recently in a need to start "playing" with Openstack (working in an existing RDO setup) so I thought that it would be good idea to have my personal playground to start deploying from scratch/breaking/fixing that playground setup.

At first sight, Openstack looks impressive and "over-engineered", as it's complex and has zillions of modules to make it work. But then when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can look strange, but I'll explain why.

First, you should just write down your requirements, and only then have a look at the needed openstack components. For my personal playground, I just wanted to have a basic thing that would let me deploy VMs on demand, in the existing network, directly using a bridge, as I want the VMs to be integrated into the existing network/subnet.

So just by looking at the mentioned diagram, we only need :

  • keystone (needed for the identity service)
  • nova (hypervisor part)
  • neutron (handling the network part)
  • glance (to store the OS images that will be used to create the VMs)

Now that I have my requirements and the list of needed components, let's see how to set up my PoC ... The RDO project has good docs for this, including the Quickstart guide. You can follow that guide, and as everything is packaged/built/tested and also delivered through the CentOS mirror network, you can have a working RDO/openstack all-in-one setup in minutes ...

The only issue is that it doesn't fit my need, as it will set up unneeded components, and the network layout isn't the one I wanted either, as it will be based on openvswitch and other rules (so multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around the puppet modules, and it also supports lots of options to configure your PoC.

Let's assume that I want a PoC based on openstack-newton, and that my machine has two nics : eth0 for the mgmt network and eth1 for the VMs network. You don't need to configure a bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but adapt the packstack command line :

yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack

Let's fix eth1 to ensure that it's started but without any IP on it :

sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1

And now let's call packstack with the required options so that it will use a basic linux bridge (and so no openvswitch), and we'll instruct it to use eth1 for that mapping :

packstack --allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n 

At this stage we have openstack components installed, and /root/keystonerc_admin file that we can source for openstack CLI operations. We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now :

source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp --allocation-pool=start=192.168.123.1,end=192.168.123.4 --gateway=192.168.123.254 --dns-nameserver=192.168.123.254 othernet 192.168.123.0/24

Before importing images and creating instances, there is one thing left to do : instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside openstack. And also don't forget to let traffic (in/out) pass through the security group (see doc).

Just be sure to have enable_isolated_metadata = True in /etc/neutron/dhcp_agent.ini and then systemctl restart neutron-dhcp-agent : from that point, cloud metadata will be served from dhcp too.
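If you want to script that step (crudini is available in the openstack repos), something like this should do, though a plain editor works just as well :

crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
systemctl restart neutron-dhcp-agent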

From that point you can just follow the quickstart guide to create projects/users, import images, create instances, and/or do all this from the cli too. See the sketch below.
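As a rough example of the cli route (the image file and flavor names are placeholders, not values from the actual setup) :

source /root/keystonerc_admin
# import a cloud image into glance
openstack image create --disk-format qcow2 --container-format bare \
  --file CentOS-7-x86_64-GenericCloud.qcow2 centos7
# boot an instance on the flat network created above
openstack server create --image centos7 --flavor m1.small \
  --nic net-id=$(openstack network show othernet -f value -c id) test-vm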

One last remark with linuxbridge in an existing network : as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw that when I added other compute nodes to the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance on the network, and not from neutron. As a workaround, you can just have your existing dhcpd ignore the mac address range used by openstack, so that your VMs will always get their IP from the neutron dhcp. To do this, there are different options, depending on your local dhcpd instance :

  • for dnsmasq : dhcp-host=fa:16:3e:*:*:*,ignore (see doc)
  • for ISC dhcpd : "ignore booting" (see doc)

The default mac address base for openstack VMs is indeed fa:16:3e:00:00:00 (see /etc/neutron/neutron.conf, where that can be changed too).

Those were some of my findings for my openstack PoC/playground. Now that I understand all this a little bit more, I'm currently working on some puppet integration for it, as there are official openstack puppet modules available on git.openstack.org that one can import to deploy/configure openstack (better than using packstack). But lots of "yaks to shave" to get to that point, so surely a topic for another future blog post.

April 12, 2017

Remotely kicking a CentOS install through a lightweight 1Mb iso image

April 12, 2017 10:00 PM

As a sysadmin, you probably deploy your bare-metal nodes through kickstarts in combination with pxe/dhcp. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all? Suppose that you have a standalone node to deploy, but there is no PXE/dhcp environment configured (yet).

The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS Minimal iso image onto a usb stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were :

  • access to the ipmi interface of that server
  • the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan

One simple solution would have been to just "attach" the CentOS 7 iso as virtual media, and then boot the machine and set it up from the "locally emulated" cd-rom drive. But that's not something I wanted to do, as it would have slowed down the install : everything would come from my local iso image, over my "slow" bandwidth. Instead, I wanted to directly use the Gbit link from that server to kick the install. So here is how you can do it with ipxe.iso. iPXE is really helpful for such things. The only "issue" was that I had to configure the nic first with a fixed IP (remember? no dhcpd yet).

So, download the ipxe.iso image, add it as "virtual media" (the transfer will be fast, as it's under 1Mb), and boot the server. Once it boots from the iso image, don't let ipxe run, but instead hit CTRL/B when you see ipxe starting. The reason is that we don't want to let it start the dhcp discover/offer/request/ack process, as we know that it will not work.

You're then presented with ipxe shell, so here we go (all parameters are obviously to be adapted, including net adapter number) :

set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x

ifopen net0
ifstat

From that point you should have network connectivity, so we can "just" chainload the CentOS pxe images and start the install :

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.lang=en_GB inst.keymap=be-latin1 inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x

Then you can just enjoy your CentOS install running all from the network, and so at "full steam"! You can also combine this directly with inst.ks= to have a fully automated setup (see the example below). Worth knowing : you can also regenerate/build an updated/customized ipxe.iso with those scripts directly too. That's more or less what we used to also have a 1Mb universal installer for CentOS 6 and 7, see https://wiki.centos.org/HowTos/RemoteiPXE , but that one defaults to dhcp.
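For example (the kickstart URL being a placeholder to adapt), the chain line for a fully hands-off install becomes :

chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.ks=http://your.server/ks/node.cfg ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x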

Hope it helps

New CentOS Atomic Host with Updated Kubernetes, Etcd and Flannel

April 12, 2017 04:31 PM

An updated version of CentOS Atomic Host (tree version 7.20170405), is now available, including significant updates to kubernetes (version 1.5.2), etcd (version 3.1) and flannel (version 0.7).

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.15.4-2.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-11.el7.centos.x86_64
  • etcd-3.1.0-2.el7.x86_64
  • flannel-0.7.0-1.el7.x86_64
  • kernel-3.10.0-514.10.2.el7.x86_64
  • kubernetes-node-1.5.2-0.2.gitc55cf2b.el7.x86_64
  • ostree-2017.1-3.atomic.el7.x86_64
  • rpm-ostree-client-2017.1-6.atomic.el7.x86_64

Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox 

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region Image ID
us-east-1 ami-a50d85b3
ap-south-1 ami-13f6857c
eu-west-2 ami-42233726
eu-west-1 ami-49063c2f
ap-northeast-2 ami-d1c81abf
ap-northeast-1 ami-7b1c3e1c
sa-east-1 ami-914f2dfd
ca-central-1 ami-2de75b49
ap-southeast-1 ami-53328c30
ap-southeast-2 ami-6d929c0e
eu-central-1 ami-dca270b3
us-east-2 ami-18bc987d
us-west-1 ami-b22a0fd2
us-west-2 ami-2e2bbb4e

SHA Sums

b337bc56a71b6b25237a5c0c06c9f48a33973b4e41c648288bcfaf5a494af98c  CentOS-Atomic-Host-7.1703-GenericCloud.qcow2
707db9907a850816fca7782da1dca3584fa0d8be821d0ee95525b688aaa0cc6d  CentOS-Atomic-Host-7.1703-GenericCloud.qcow2.gz
c4ef91cc801777e214106522f848f8b388fb92699d67ed4fe86cc942a361f7a2  CentOS-Atomic-Host-7.1703-GenericCloud.qcow2.xz
5e41a0306a8c1c212117c68eae10f0f59b25cb6c57dec9629bf3ac760bca54bc  CentOS-Atomic-Host-7.1703-Installer.iso
f509eb482a614d2eb047009aaa6c37c125b66cdd483e7015983cae5f72d9f041  CentOS-Atomic-Host-7.1703-Vagrant-Libvirt.box
2c0ba7dda2f4f249aa6c31cfcb36df1a17913b9d8786afb7b340a24b15b404f1  CentOS-Atomic-Host-7.1703-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

April 07, 2017

Updated CentOS Vagrant Images Available (v1703.01)

April 07, 2017 09:54 PM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 30 March 2017 and the following changes:

  • The VMware images now use the paravirtualized SCSI controller (the kernel module for the LSILogic controller has been deprecated upstream).
  • The VMware images now specify vmware_desktop, allowing them to work with both VMware Fusion and VMware Workstation

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.8.1 from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.3 on OS X 10.11.6, with VirtualBox 5.1.18.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 82bbed14c34fdd8fd3cb617b0e8c0f154ebd4d1388f45de3335b2cdf791e5fed --provider libvirt --box-version 1703.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

March 13, 2017

Updated CentOS Vagrant Images Available (v1702.01)

March 13, 2017 10:23 AM

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring updated packages to 28 February 2017.

Known Issues

  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile:
    config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line
    config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Vagrant 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
  7. The metadata of the images for VMware is set to vmware_fusion. Please specify vmware_fusion as the provider when downloading the images, even if you're using VMware Workstation.

Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant from SCL, with libvirt and VirtualBox 5.0.30 (without the VirtualBox Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.0 on OS X 10.11.6, with VirtualBox 5.0.30.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 48745c0f2dd4fbee366d830e3e333b637528ad936dd66ed5911df2adc02f46d7 --provider libvirt --box-version 1702.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

January 16, 2017

Enabling SPF record for centos.org

January 16, 2017 11:00 PM

In the last weeks, I noticed that spam activity was back, including against centos.org infra. One of the most used techniques was Email Spoofing (aka "forged from address"). That's how I discovered that we had never implemented SPF for centos.org (while some of the infra team members had it on their personal SMTP servers).

While SPF itself is "just" a TXT dns record in your zone, you have to think twice before implementing it. And publishing such a policy yourself doesn't mean that your SMTP servers are checking SPF either. There are PROS and CONS to SPF, so first read multiple sources/articles to understand how it will impact your server/domain when sending/receiving :

sending

The first thing to consider is how people having an alias can send their mails : either from behind their known MX borders (and included in your SPF) or through alternate SMTP servers relaying (after being authorized of course) through servers listed in your SPF.

One thing to know with SPF is that it breaks plain forwarding and aliases; it's not about how you set up your SPF record, but how the originator domain does it. For example, if you have joe@domain.com sending to joe@otherdomain.com, itself an alias to joe2@domain.com, that will break, as the MX for domain.com will see that a mail from domain.com was "sent" from otherdomain.com and not from an IP listed in its SPF. There are workarounds for this though, aka remailing and SRS.
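For reference, a minimal SPF record only allowing your MX and one extra relay could look like this (illustrative values, not the actual centos.org policy) :

example.org.  IN TXT "v=spf1 mx ip4:192.0.2.25 -all"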

receiving

So you have an SPF record in place and so restrict where your mail is sent from? Great, but SPF only works if the other SMTP servers involved are checking for it, so you should do the same! The fun part is that even if you have CentOS 7, and so Postfix 2.10, there is nothing by default that lets you verify SPF, as stated on this page :

Note: Postfix already ships with SPF support, in the form of a plug-in policy daemon. This is the preferred integration model, at least until SPF is mandated by standards. 

So for our postfix setup, we decided to use pypolicyd-spf : lightweight, easy, written in python. The needed packages are already available in EPEL, but we also rebuilt it on CBS. Once installed, configured and integrated with Postfix, you'll start (based on your .conf settings) blocking mail that arrives at your SMTP servers from IPs/servers not listed in the originator domain's SPF policy (if any).

If you have issues with our SPF current policy on centos.org, feel free to reach us in #centos-devel on irc.freenode.net to discuss it.

January 13, 2017

create a new github.com repo from the cli

January 13, 2017 11:03 PM

I often get into a state where I've started some work, done some commits etc and then realised I don't have a place to push the code to. Getting it on github has involved getting the browser out, logging in to github, click click click {pain}. So, here is how you can create a new repo for your login name on github.com without moving away from the shell.


curl -H "X-GitHub-OTP: XXXXX" -u 'LoginName' https://api.github.com/user/repos -d '{"name":"Repo-To-Create"}'

You need to supply your OTP pass and replace XXXXX with it, and of course your own LoginName and finally the Repo-To-Create. Once this call runs, curl will ask for your password, and you should get the github API to dump a bunch of details (or tell you that it failed, in which case you need to check the call).

now the usual 'git remote add github git@github.com:LoginName/Repo-To-Create' and you are off.
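followed by a first push of the commits you already have :

git push -u github master    # push existing work and track the new remote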

regards,

January 04, 2017

Music recording on CentOS 7 DAW

January 04, 2017 11:00 PM

There was something on my (private) TODO list for quite some time now : being able to record music, mix, and output a single song from multiple recorded tracks. For that you need a Digital Audio Workstation (DAW).

I have several instruments at home (electric guitars, bass, digital piano and also drums) but due to lack of (free) time I never investigated the DAW part on Linux, and especially on CentOS. So having some "offline" days during the holidays helped me investigate that and set up a small DAW on a recycled machine. Let's consider the hardware and software parts.

Hardware support

I personally still own a Line6 TonePort UX2 interface, which is now more than 10 years old, and that I used in the past on an iMac. The iMac still runs, but exclusively with CentOS 7 these days, and the TonePort was just collecting dust. When I tried to plug it in, it wasn't really detected, mainly because of the kernel config, so I asked (gently) Toracat to enable the required kernel module in the centos-plus kernel, and with the centos-plus kernel the toneport ux2 is seen as an external sound card. Good :

geonosis kernel: usb 3-2: new full-speed USB device number 2 using xhci_hcd
geonosis kernel: usb 3-2: New USB device found, idVendor=0e41, idProduct=4142
geonosis kernel: usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
geonosis kernel: usb 3-2: Product: TonePort UX2
geonosis kernel: usb 3-2: Manufacturer: Line 6
geonosis mtp-probe: checking bus 3, device 2: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-2"
geonosis mtp-probe: bus: 3, device: 2 was not an MTP device
geonosis kernel: line6usb: module is from the staging directory, the quality is unknown, you have been warned.
geonosis kernel: line6usb 3-2:1.0: Line6 TonePort UX2 found
geonosis kernel: line6usb: module is from the staging directory, the quality is unknown, you have been warned.
geonosis kernel: line6usb 3-2:1.0: Line6 TonePort UX2 now attached
geonosis kernel: line6usb 3-2:1.1: Line6 TonePort UX2 found
geonosis kernel: usbcore: registered new interface driver line6usb
geonosis kernel: usbcore: registered new interface driver snd_usb_toneport

I also recently offered myself a small gift to play with : a Fender Mustang guitar amplifier. It's small enough to fit under the desk in my home office, has built-in amp/effect emulation, plus a usb output to send the sound directly to the computer. (Easier for quick recording than setting up a microphone in front of my other Fender Custom Vibrolux Reverb tube amp, and my neighbors are also grateful for that decision.)

The good news is that it's directly recognized as another sound card, without any kernel module to activate/enable :

geonosis kernel: usb 3-1: new full-speed USB device number 3 using xhci_hcd
geonosis kernel: usb 3-1: New USB device found, idVendor=1ed8, idProduct=0014
geonosis kernel: usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
geonosis kernel: usb 3-1: Product: Mustang Amplifier
geonosis kernel: usb 3-1: Manufacturer: FMIC
geonosis kernel: usb 3-1: SerialNumber: 05D7FF373837594743075518
geonosis kernel: hid-generic 0003:1ED8:0014.0001: hiddev0,hidraw0: USB HID v1.10 Device [FMIC Mustang Amplifier] on usb-0000:00:14.0-1/input0
geonosis mtp-probe: checking bus 3, device 3: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-1"
geonosis mtp-probe: bus: 3, device: 3 was not an MTP device
geonosis kernel: usbcore: registered new interface driver snd-usb-audio

With those two additional sound cards detected, it looks now like this :

[arrfab@geonosis ~]$ cat /proc/asound/cards 
 0 [PCH            ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xd2530000 irq 32
 1 [TonePortUX2    ]: line6usb - TonePort UX2
                      Line6 TonePort UX2 at USB 3-2:1.0
 2 [Amplifier      ]: USB-Audio - Mustang Amplifier
                      FMIC Mustang Amplifier at usb-0000:00:14.0-1, full speed

Great, now let's have a look at the software part !

Software

There are multiple ways to quickly record any sound from a sound card on Linux, and Audacity is well known for this : it comes with several effects, and you can quickly import, edit, cut and paste sounds (and more!), even across multiple tracks. But when it comes to music recording, especially if you also want to play with MIDI, you need a proper sequencer. It's really great to see that on Linux you have multiple alternatives, and one that seems to be very popular in the free and open source world is Ardour. As nothing was built for CentOS 7, I decided to create a DAW-7 COPR repository that has everything I need (when combined with EPEL and/or Nux-Dextop).

So I (re)built (thanks to the upstream Fedora maintainers!) multiple packages in that copr repository, including (but not limited to) :

  • Ardour 5.5 : sequencer
  • Qjackctl : frontend for needed jack-audio-connection-kit
  • Calf : very good effects/plugins for jack and so that can be used directly within ardour
  • LV2 : other effects/plugins
  • Guitarix : guitar/bass amp+effect simulator
  • LMMS : other sequencer but more oriented for midi/loops but not really audio recording from external devices
  • Hydrogen : Drum machine when you can't record real drums but you can program your own pattern[s]
  • ... and much more ... :-)

After having tested multiple settings (there is a lot to learn around this), I found myself comfortable with this :

sudo su -c 'curl https://copr.fedorainfracloud.org/coprs/arrfab/DAW-7/repo/epel-7/arrfab-DAW-7-epel-7.repo > /etc/yum.repos.d/arrfab-daw.repo'
sudo yum install -y ardour5 calf lmms hydrogen qjackctl jack-audio-connection-kit jack_capture guitarix lv2-abGate lv2-calf-plugins lv2-drumgizmo lv2-drumkv1 lv2-fomp-plugins lv2-guitarix-plugins lv2-invada-plugins lv2-vocoder-plugins lv2-x42-plugins fluid-soundfont-gm fluid-soundfont-gs

One thing you have to know (but read all the tutorials/documentation around this) is that your user needs to be part of the jackuser and audio groups to be able to use the needed Jack sound server (which you also have to master; but once you understand it, it's just a virtual view of what you'd do with real cables plugged in and out of various hardware elements) :

sudo usermod --groups jackuser,audio --append $your_username

One website I recommend you read is LibreMusicProduction, as it has tons of howtos and also video tutorials about Ardour and other settings. Something else worth mentioning if you just want drum loops : you can find some on Google, but I found some really good ones to start with (licensed in a way that allows reuse) on Looperman, Drumslive and Freesound.

Who said that CentOS 7 was only for servers in datacenters and the Cloud ? :-)

CentOS 7 DAW

Have fun on your CentOS 7 DAW.

November 24, 2016

Zabbix, selinux and CentOS 7.3.1611

November 24, 2016 11:00 PM

If you're using CentOS, you probably noticed that we have a CR repository containing all the built packages for the next minor release, so that people can "opt-in" and already use those packages, before they are released with the full installable tree and iso images.

Using those packages on a subset of your nodes can be interesting, as it permits you to catch some errors/issues/conflicts before the official release (and so before the symlink on the mirrors is changed to that new major.minor version).

For example, I myself tested some roles and found an issue with zabbix-agent refusing to start on a node fully updated/rebooted with CR pkgs (so what will become the 7.3.1611 release). The issue was due to selinux denying something (that was allowed in the previous policy).

Here is what selinux had to say about it :

type=AVC msg=audit(1480001303.440:2626): avc:  denied  { setrlimit } for  pid=22682 comm="zabbix_agentd" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:zabbix_agent_t:s0 tclass=process

It's true that there was an update for selinux policy : from selinux-policy-3.13.1-60.el7_2.9.noarch to selinux-policy-3.13.1-102.el7.noarch.

What's interesting is that I found a reported issue at Zabbix side, but for zabbix-server (here it's the agent; the server is running fine) : ZBX-10542

Clearly something that was working before is now denied, so I created a bug report, and hopefully a fix will come in an updated selinux-policy package. But I doubt that it will be available soon.

So in the meantime, here is what you have to do :

  • either put zabbix_agent_t into permissive mode with semanage permissive -a zabbix_agent_t
  • or build and distribute a custom selinux policy in your infra (my preferred method)

For those interested, the following .te (type enforcement) file will allow you to build a custom .pp selinux policy file (that you can load with semodule) :

module local-zabbix 1.0;

require {
    type zabbix_agent_t;
    class process setrlimit;
}

#============= zabbix_agent_t ==============
allow zabbix_agent_t self:process setrlimit;
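Building and loading the policy is then a matter of (checkmodule comes from the checkpolicy package, semodule_package from policycoreutils) :

checkmodule -M -m -o local-zabbix.mod local-zabbix.te
semodule_package -o local-zabbix.pp -m local-zabbix.mod
semodule -i local-zabbix.pp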

You can now use your configuration management platform to distribute that built .pp policy (you don't need to build it on every node). I'll not dive into details, but I wrote some slides around this (for Ansible and Puppet) for a talk I gave some time ago, so feel free to read those, especially the last slides (with examples).

November 07, 2016

Welcoming new members to the CentOS Container team

November 07, 2016 02:08 PM

Join me in warmly welcoming Dharmit Shah, Bama Charan Kundu and Navid Shaikh to the CentOS Container team.

They are primarily focused on delivering and curating the CentOS Container Pipeline (https://github.com/centos/container-pipeline-service ). In the coming weeks, keep an eye out for announcements from them in the CentOS Blog at https://seven.centos.org .

– KB

November 05, 2016

Vim 8 for CentOS Linux 7

November 05, 2016 03:40 AM

Matěj Cepl is curating a set of Vim 8 rpms for EL7 over at
https://copr.fedorainfracloud.org/coprs/mcepl/vim8/ – consider them testing grade, and I am sure he would appreciate feedback and issue reports.

Now go get the shiny new Vim 8.

$ rpm -q vim-enhanced
vim-enhanced-8.0.0054-1.0.8.el7.centos.x86_64

Enjoy! And don't forget to drop by and say thanks to Matěj over at https://matej.ceplovi.cz/blog/

October 27, 2016

Security contact for the CentOS Project

October 27, 2016 07:52 PM

If you find any security issue in a CentOS.org website or service, please let us know; the same goes for any issues in CentOS Linux as well as the SIG content on centos.org. The best way to get in touch is to email security@centos.org – and if the content is sensitive, please use the corresponding gpg key to encrypt it. E.g. for a CentOS Linux 7 specific issue, please encrypt the content with the CentOS Linux 7 key. Similarly, for any content specific to the Virt SIG, please use the CentOS SIG Virt key.

How can you verify the keys? The fingerprints are published over https at https://www.centos.org/keys/.

DNS data for the www.centos.org website is :
www.centos.org has address 85.12.30.226
www.centos.org has IPv6 address 2a01:788:a002:0:225:90ff:fe33:f34c

– KB

October 26, 2016

Adding a timeout for your CI jobs at ci.centos.org

October 26, 2016 09:10 PM

The typical workflow for most ci.centos.org ( cico ) jobs is :

* Call Duffy's API endpoint with node/get and grab some machines
* Setup the machines environment for the ci job to come
* Push content to nodes
* Run the tests
* Clear out / tear down
* Call Duffy's API end point with node/done to return the machines
* Report status via Jenkins

Machines handed out in this manner to the CI jobs are available for up to 6 hours at a time, at which point they are reaped back into the available pool for other jobs to consume. What this also means is that if the job gets stuck for any reason, it could be up to six hours before the developer/user gets any feedback about the tests failing.

The usual way to resolve this situation is to set up a timeout in the jenkins job. That would allow Jenkins to watch the run and, on timeout, kill the job and report failure. However, if your job is set up with a single build step that also includes requesting the machines and returning them when done, Jenkins killing the job means your machines won't get returned for up to 6 hrs. Given that most projects are set up with a quota of 10 deployed machines, not returning them when done means your jobs get put into a queue that isn't clearing out in a rush.

One way to work around this would be to split the machine request and machine return functions into a pre-build and post-build step, and then pass the session-id for the deployed machines between the build steps. That way, you could trap and report specific conditions. A variation of this would be to set up conditional build steps, and have them execute different functions as needed.

An easy and simple workaround, however, is to just wrap the test commands in a /usr/bin/timeout call. timeout is delivered as a binary from the coreutils package on CentOS Linux 7 and would be available on all machines, including the jenkins worker instances. Take a look at https://github.com/almighty/almighty-jobs/blob/master/devtools-ci-index.yaml#L64 for a quick example of how this would work in a JJB template. This way we can time out the job and yet be able to return the nodes, or handle any other content we need, in the same ci job script. A script that then does not have or need any Jenkins specific content, making it possible to run it from developer laptops or as a child job on its own.

/usr/bin/timeout (man 1 timeout) also allows you to preserve the sub-command's exit status, if you need to track and report different statuses from your ci jobs. And of course, there are many other uses for /usr/bin/timeout as well!
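To make that concrete, here is a rough sketch of such a job script; the Duffy endpoint, key handling, timeout value and test command are placeholders to adapt to your project :

# request machines from Duffy and remember the session id (jq, from EPEL, parses the JSON reply)
json=$(curl -s "http://admin.ci.centos.org:8080/Node/get?key=${API_KEY}&ver=7&arch=x86_64")
ssid=$(echo "${json}" | jq -r .ssid)

# give the tests 2 hours instead of waiting for the 6 hour reaping
/usr/bin/timeout --preserve-status 2h ./run-tests.sh
rc=$?

# the nodes get returned no matter what happened above
curl -s "http://admin.ci.centos.org:8080/Node/done?key=${API_KEY}&ssid=${ssid}"
exit ${rc}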

– KB

October 20, 2016

(ab)using Alias for Zabbix

October 20, 2016 10:00 PM

It's not a secret that we use Zabbix to monitor the CentOS.org infra. That's even a reason why we (re)build it for some other architectures, including aarch64, ppc64 and ppc64le on CBS, and also armhfp.

There are really cool things in Zabbix, including Low-Level Discovery. With such discovery, you can create items/prototypes/triggers that will be applied "automagically" for each discovered network interface or mounted filesystem. For example, the default template (if you still use it) has such item prototypes and also a graph for each discovered network interface, showing the bandwidth usage on those interfaces.

But what happens if you suddenly want, for example, to create some calculated item on top of those? Well, the issue is that from one node to the other, the interface name can be eth0, or sometimes eth1, and with CentOS 7 things started to move to the new naming scheme, so you can have something like enp4s0f0. I wanted to create a template that would fit them all, so I had a look at calculated items and thought "well, easy : let's have that calculated item use a user macro that defines the name of the interface we really want to gather stats from ..." .. but it seems I was wrong. Zabbix user macros can be used in multiple places, but not everywhere. (It seems that I wasn't the only one not understanding the doc coverage for this, but at least that bug report will have an effect on the doc to clarify it.)

That's when I discussed this in #zabbix (on irc.freenode.net) and RichLV pointed me to something that could be interesting for my case : Alias. I must admit that it was the first time I'd heard about it, and I don't even know when it landed in Zabbix (or if I just overlooked it at first sight).

So cool, now I can just have our config mgmt pushing for example a /etc/zabbix/zabbix_agentd.d/interface-alias.conf file that looks like this and reload zabbix-agent :

Alias=net.if.default.out:net.if.out[enp4s0f0]
Alias=net.if.default.in:net.if.in[enp4s0f0]

That means that now, whatever the interface name is (as puppet in our case will create that file for us), we'll be able to get values from the net.if.default.out and net.if.default.in keys automatically. Cool.
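You can check the aliased keys from the zabbix server with zabbix_get (the hostname below obviously being a placeholder for one of yours) :

zabbix_get -s node.internal.example.org -k net.if.default.out
zabbix_get -s node.internal.example.org -k net.if.default.in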

That also means that if you want to aggregate all this into a single key for a group of nodes (and so graph that too), you can always reference those new keys (example for the total outgoing bandwidth of a group of hosts) :

grpsum["Your group name","net.if.default.out",last,0]

And from that point, you can easily configure triggers and graphs too. Now back to work on some other calculated items for total bandwidth usage over a period of time, and triggers based on some max_bw_usage user macro.

October 01, 2016

CentOS-7 1609 Rolling ISOs Now Live

October 01, 2016 12:01 PM

Rolling ISOs

The CentOS Linux team produces rolling CentOS-7 isos, normally on a monthly basis.

The most recently completed version of those ISOs is version 1609 (16 is for 2016, 09 is for September).

The team usually creates all our ISO and cloud images based on all updates through the 28th of the month in question, so 1609 means these ISOs contain all updates for CentOS-7 through September 28th, 2016.

These rolling ISOs have the same installer as the most recent CentOS-7 point release (currently 7.2.1511) so that they install on the same hardware as our original ISOs, while the packages installed are the latest updates.

This means that the actual kernel that boots up on the ISO is the 7.2.1511 default kernel (kernel-3.10.0-327.el7.x86_64.rpm), but that the kernel installed is the latest kernel package (kernel-3.10.0-327.36.1.el7.x86_64.rpm for the 1609 ISOs).

These normal Rolling ISOs can be downloaded from this LINK and here are the sha256sums:

CentOS-7-x86_64-DVD-1609-01.iso:
3948f7a31a8693b2df90dc31252551dcd5aa9d13f4910ad9ce28fcddc247d84f

CentOS-7-x86_64-Everything-1609-01.iso:
602383c2aa93f6d7df46bd47658dcbf9b9d567108dec56ba60ce46a2f51c6eb2

CentOS-7-x86_64-LiveGNOME-1609-01.iso:
f6ee8af6814bc58e2c8424db862a443649f3a57b5f85caf63704ab52d5bbac68

CentOS-7-x86_64-LiveKDE-1609-01.iso:
1349c70e815d46c49d6ea459de6fbc074f5131c803343db18d32987ee78fd303

CentOS-7-x86_64-Minimal-1609-01.iso:
54721e5e444a3191b18b0fabe1c35e12b65f93aa31732beb7899212d19cab69b

You can verify the sha256sum of your downloaded ISO following these instructions prior to install.
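For example, to check the DVD image from the directory you downloaded it to:

sha256sum CentOS-7-x86_64-DVD-1609-01.iso
# compare the output against the corresponding value listed above before installing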

The DVD ISO contains everything needed to do an install, but still fits on one 4.3 GB DVD. This is the most versatile install that will fit on a single DVD, and if you are new to CentOS this is likely the installer you want. If you pick Minimal Install in this installer, you can do an install that is identical to the Minimal ISO. You can also do many different Workstation and Server installs from this ISO, including both GNOME and KDE.

The Everything ISO has all packages, even those not used by the installer. You usually do not need this ISO unless you do not have access to the internet and want to install things later from this DVD that are not included by the graphical installer. Most users will not need this ISO; it is > 7 GB, but installs can be done from a USB key that is big enough to hold it (currently an 8 GB key).

The LiveGNOME ISO is a basic GNOME Workstation install, but no modification or personalization is allowed during the install. It is a much easier install to do, but any extra packages must be installed from the internet later.

The LiveKDE ISO is a basic KDE Workstation install. It also does not allow modification or personalization until after the install has finished.

The Minimal ISO is a very small and quick install that boots to the command console and has network connectivity and a firewall.  It is used by System Administrators for the minimal install that they can then add functionality to.  You need to know what you are doing to use this ISO.

Newer Hardware Support

As explained above, the normal rolling ISOs boot from the point release installer. Sometimes there is newer hardware that might not be supported by the point release installer, but could be supported with a newer kernel. This installer is much less tested and is only recommended if you cannot get one of the normal installers to work for you.

There are only 2 ISOs in this family, here are the links and sha256sums:
CentOS-7-x86_64-DVD-1609-99.iso:
90c7148ddccbb278d45d06805dee6599ec1acc585cafd02d56c6b8e32a238fa9 

CentOS-7-x86_64-Minimal-1609-99.iso:
1cfbbc73cc7a0eb17d7fe2fa5b1adf07492e340540603e8e1fd28b52e95f02e3

You can verify the ISO's sha256 sum using this LINK, and the descriptions above are the same for these two ISOs.


