Key Factors from Part 1
The first article in this series raised some subtle points about using SaltStack to deploy code to production environments. Two of these were targeting production systems and rebuilding minimal Operating System (OS) environments for production use. In this article we will add some more concepts that go hand in hand: targeting and building production environments, and what those environments should look like to Developers and System Administrators alike.
Targeting Environments
In one of the examples in the first article we presented some sample code that targeted the OS type. To recap, it looked like this:
{% if grains['os'] == 'CentOS' %}
  {% set apache = 'apache' %}
  {% set webserver = 'httpd' %}
{% elif grains['os'] == 'Ubuntu' %}
  {% set apache = 'www-data' %}
  {% set webserver = 'apache2' %}
{% endif %}
This is a powerful and very useful feature of SaltStack called "Grains". A grain can be thought of simply as a Name/Value pair assigned to a host; when you set the same Name/Value pair on a number of hosts, you can then target those hosts as a group from the SaltStack server. This allows you to run commands, install software or do basically anything by using grains. So what types of grains might be useful? Using the theme of this article as a guide, the ones that come to mind first target environments such as PRODUCTION, DEVELOPMENT, UAT and PREPROD, to name a few. You can also add even more specific grains such as PRODWEBSERVER and PRODAPPSERVER and come up with any number of groupings. But how, and why?
Let's do the "how" first. In SaltStack we set a grain from the SaltStack master server; the syntax is easy, as shown below:
salt '<host-name-here>' grains.setvals "{'PROD':'True'}"
So if we have four front-end web servers in a cluster, we could configure them like so:
salt 'LS-PROD-WEBSRV-01' grains.setvals "{'PROD':'True'}"
salt 'LS-PROD-WEBSRV-02' grains.setvals "{'PROD':'True'}"
salt 'LS-PROD-WEBSRV-03' grains.setvals "{'PROD':'True'}"
salt 'LS-PROD-WEBSRV-04' grains.setvals "{'PROD':'True'}"
salt 'LS-PROD-WEBSRV-01' grains.setvals "{'PRODWEBFE':'True'}"
salt 'LS-PROD-WEBSRV-02' grains.setvals "{'PRODWEBFE':'True'}"
salt 'LS-PROD-WEBSRV-03' grains.setvals "{'PRODWEBFE':'True'}"
salt 'LS-PROD-WEBSRV-04' grains.setvals "{'PRODWEBFE':'True'}"
In fact grains are a bit more sophisticated than this: they are not limited to Name/Value pairs, and you can set strings and arrays of values and use those programmatically. To get started, though, let's use the simplest form so we can show the concept.
The other component in the commands shown above is the host name. What we have shown is a logical method of naming servers: LS = Linux Server, PROD = Production, WEBSRV = Web Server, then a numeric sequence number. This host ID can be set in the "id" field in the minion's configuration file, separately from the assigned domain name, so SaltStack provides flexibility in identifying servers. One reason for using this host naming scheme is to have four fields that can be separated out on the "-" character and accessed programmatically outside SaltStack, such as in scripts and other tool sets. This is a handy capability and I urge you to adopt it.
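As a quick illustration of splitting the host name into its fields in a script, the following bash sketch parses a name like LS-PROD-WEBSRV-02. The field labels (platform, environment, role, sequence) are our own naming convention, not anything SaltStack defines:

```shell
#!/usr/bin/env bash
# Split a host name such as LS-PROD-WEBSRV-02 on the "-" character
# into its four logical fields.
HOSTNAME="LS-PROD-WEBSRV-02"
IFS='-' read -r PLATFORM ENVIRONMENT ROLE SEQ <<< "$HOSTNAME"
echo "platform=$PLATFORM environment=$ENVIRONMENT role=$ROLE sequence=$SEQ"
# prints: platform=LS environment=PROD role=WEBSRV sequence=02
```

The same split works in any tool that can tokenise on a delimiter, which is exactly why a consistent naming scheme pays off.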
To apply a patch or software update using Salt specifically for the four servers above we can just use the grain to identify the PRODWEBFE (Production Web Front End) servers using:
salt -G 'PRODWEBFE:True' pkg.upgrade
To see all our Production servers the "PROD" grain can be used to target them using:
salt -G 'PROD:True' test.ping
To view what grains are set against a server we can use the command:
salt '<host-name-here>' grains.items
The output includes a number of very useful parameters such as the OS, networking configuration, kernel information and application path data. Once we have a logical and consistent naming scheme for ALL our servers, with suitable grains recorded against them (and documented in an IT wiki page), we can then revisit another theme from our first article: minimal OS environments.
Rebuilding for Production Environments
One key failure in most corporate environments is the inability to rebuild or deploy a fresh production environment and go live, because the changes made over the years have become so sporadic that the fear of failure is too high and the risks can't be mitigated. With SaltStack as your deployment engine and your scripts built and maintained, deploying a fresh new environment can be as simple as a single command.
Re-engineering a legacy system into SaltStack will take some time but is not impossible. It can be achieved by actively documenting the existing production environment and identifying all configuration files, their locations and their contents. Prior to any rebuild attempt, locking down the configuration files so only the root user can modify them will reveal whether external influences attempt to alter configuration data. Ideally, moving configuration data out of the OS directory structure and into either the accepted /etc directory or, more specifically, a company-specific directory structure such as /app/etc will ensure an OS upgrade does not affect your application's configuration. This structure allows SaltStack both to create the directory structure (and any new ones as applications are added) and to set permissions accordingly without direct user intervention. Some degree of software refactoring might be needed, but this would be a first step.
The next step in rebuilding a production environment is ensuring that all application data, both archive and active working-set data, is NOT in the standard OS directory structure but rather under a mount point such as /data. Again, SaltStack can create this mount point. It could be a clustered mount point using GlusterFS where data is shared between like hosts (ideally) and segregated into a year-based directory format so data "auto-archives" by year: archive data is automatically stored by year, while active data is stored in the current year's directory.
Segregating your data outside of the standard OS directory structure gives great flexibility on many fronts:
- System Administrators can remount old data onto slow disks leaving fast storage for the working set of data.
- Data replication processes can be automated to backup older data on a lower priority to fresh data.
- A clustered file system can be mounted on the current year (and coming year) so data is shared between front end nodes and application server for instant lock free access.
- Disaster Recovery processes can be dramatically simplified.
- Application code can be deployed and synchronised via the /app directory mount point.
- All these tasks can be scripted via SaltStack to ease Administration and Deployment tasks.
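As a small sketch of scripting such a mount via SaltStack, the state below uses Salt's mount.mounted state. The GlusterFS server name and volume name here are illustrative assumptions, not values from this article:

```yaml
# Sketch: mount /data from a GlusterFS volume via a Salt state.
# 'gluster01' and 'datavol' are hypothetical names for illustration.
/data:
  mount.mounted:
    - device: gluster01:/datavol
    - fstype: glusterfs
    - mkmnt: True       # create the mount point if it does not exist
    - persist: True     # record the mount in /etc/fstab
```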
The end result after carrying out these refactoring tasks is a system with a clean OS build that can be actively updated, isolated from corporate applications (other than system package dependencies), with the application data and configuration under control. This gives us the ability to build a new MINIMAL OS installation and then use SaltStack to push out the directory structures, configuration and mount points to access data, as well as the maintenance tasks that keep it all running smoothly.
Once an environment is built, you might want to limit activity to just application updates. Since SaltStack scripts can include other scripts, a total build of a new environment could just be a single script with include lines. Each script builds on the efforts of other scripts. A simplified example follows:
build-dirs.sls
Build our directories for a new production server:
{% for DIR in ['/data','/data/2015','/data/2016','/app','/app/etc','/app/bin','/app/conf','/app/conf/webfe1','/app/conf/webfe2'] %}
{{ DIR }}:
  file.directory:
    - user: root
    - group: root
    - mode: 774
{% endfor %}
Invoke with:
salt '<new-host>' state.sls build-dirs
Next we might install the packages for our production environment. First we might need to define a repository such as remi. In this example we have a remi repo install script called "init.sls" in the /srv/salt/repositories/remirepo directory on the Salt master; it's called "init.sls" for a reason that becomes obvious later.
install-remi-repo:
  cmd.run:
    - require:
      - sls: epel
{% if grains['osrelease'].startswith('5') %}
    - name: rpm -Uvh https://rpms.famillecollet.com/enterprise/remi-release-5.rpm
{% elif grains['osrelease'].startswith('6') %}
    - name: rpm -Uvh https://rpms.famillecollet.com/enterprise/remi-release-6.rpm
{% elif grains['osrelease'].startswith('7') %}
    - name: rpm -Uvh https://rpms.famillecollet.com/enterprise/remi-release-7.rpm
{% endif %}
    - unless: test -e /etc/yum.repos.d/remi.repo

enable-remi-repo:
  file.managed:
    - name: /etc/yum.repos.d/remi.repo
    - source: salt://repositories/remirepo/remi.repo
    - template: jinja
With the correct repository defined for the OS version we are installing via Salt, we have our package script which combines the repo install as well. Your script might include the EPEL repository and even your own in-house repo.
inst-packages.sls
include:
  - repositories.remirepo.init

inst-base-packages:
  pkg.installed:
    - pkgs:
      - mysql
      - MySQL-python
      - httpd
      - python-pip
      - memcached
      - python-memcached
      - mod_wsgi
This list can be added to over time and re-run; once a package is installed it will not be re-installed, but new additions to the end of the list will result in the new packages being installed. If you need to install specific versions of software, it is advisable to refactor your code to NOT need specific versions. This will protect you from being trapped on an outdated platform with no way to upgrade the OS, packages or your application down the track. Take this piece of advice seriously: there are companies still running Red Hat 4.6 that cannot upgrade because they rely on outdated application libraries that are now defunct, and won't refactor the code base as it is now a huge undertaking. The end result is a totally unsupportable OS environment and little chance of recovery in a disaster.
The ability to combine these into the beginnings of a new production build script becomes as simple as:
build-fresh.sls
include:
  - build-dirs
  - inst-packages
Now come your application scripts. Each app should have its own script, but using includes you could create a single "install-apps.sls" file:
install-apps.sls
include:
  - install-app1
  - install-app1-db
  - install-app2
  - install-app2-db-tables
  - app2-config-mem-cache
Which leads us to our "build-all.sls" script, which not only calls all our install/setup scripts but also restarts some services:
include:
  - build-dirs
  - inst-packages
  - install-apps
  - config-prod-vhosts
  - rsync-archive-data
  - mount-cluster

service httpd start:
  cmd.run

service memcached restart:
  cmd.run
Invoked with a single line:

salt '<new-host>' state.sls build-all
Although simplistic, this script is technically plausible for any environment and leads to the concept SaltStack calls "HighState".
Auto deployment using HighState
SaltStack has a concept called "highstate" that basically applies ALL the relevant Salt scripts against a server automatically. It relies on a tree-like YAML file that defines the hosts and the scripts (SLS files) to run so the target minion ends up in a set state, a bit like our build-all script, except each script lives in its own directory and is named init.sls. At Conetix we use this to create complete working environments for our customers, and it works very well.
Like our build script, the top.sls file, located on the Salt master typically in the /srv/salt directory, has a similar structure to the build script above but has a host name (or match pattern) as the top-level identifier and, under each host, the scripts to run. Grains can be used to specify groups, and wildcards can be used as well. A more detailed but slightly confusing explanation is located on the SaltStack documentation page https://docs.saltstack.com/en/latest/ref/states/layers.html
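Tying this back to the examples above, a minimal top.sls sketch might look like the following. The script names and grain values come from earlier in this article; the glob pattern and the 'base' environment are standard Salt conventions:

```yaml
# /srv/salt/top.sls -- a minimal highstate tree sketch.
# 'base' is Salt's default environment name.
base:
  'LS-PROD-WEBSRV-*':        # wildcard (glob) match on the minion id
    - build-dirs
    - inst-packages
  'PRODWEBFE:True':          # match on the custom grain set earlier
    - match: grain
    - install-apps
```

Running `salt '<host>' state.highstate` then applies every SLS file whose pattern matches that minion.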
One caution: be careful with highstate! It applies everything that matches in top.sls, so a loose match pattern can reconfigure hosts you did not intend to touch.
Containerising our build
Building a minimal OS environment and logically laying out the structure, packages, application(s) and configuration allows us to move to the next level of deployment of modern web applications (as well as a host of other application hosting scenarios) – Application Containerization.
Leading this technology revolution is Docker, a technology for deploying applications in a minimal, dedicated, self-contained OS environment that can be deployed into any supporting environment, typically hosted in the cloud. We wrote an earlier article called The Cloud and Container Based Hosting, which will give you some grounding in the basics, as well as an article called "The Cloud's New Journey".
Since those articles were published, Docker has become production ready and is on offer from a limited number of cloud hosting companies.
With Docker you build your environment from a text file called a "Dockerfile", which contains the instructions on what to build. SaltStack provides a scripted Docker interface that allows all the commands to be executed from an SLS script, so deploying Docker to a host, building Dockerfiles and creating Docker images is all possible. At the time of writing the Salt Docker module is undergoing a revamp as the base product evolves.
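To give a feel for a Dockerfile, here is a minimal illustrative sketch. The package names echo the inst-packages.sls example above; the centos:7 base image and the /app copy path are assumptions for illustration, not part of any specific build:

```dockerfile
# Minimal illustrative Dockerfile sketch (base image is an assumption).
FROM centos:7

# Install the web stack packages, mirroring the Salt package list above.
RUN yum install -y epel-release && \
    yum install -y httpd mod_wsgi memcached python-pip && \
    yum clean all

# Copy the application into the /app structure described earlier.
COPY app/ /app/

EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Each instruction builds a layer of the image, so the resulting container carries only what the Dockerfile declares, much like the minimal OS builds discussed above.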