Wednesday, December 16, 2015

Evaluating a Devops team

End of year has rolled around. Along with new budgets starting, old budgets ending, and bad holiday parties where you spend two hours avoiding that guy, this is the season of annual evaluations.

You're the manager of a devops team, possibly one newly-formed this year. This devops thing is all new. The team operates completely differently from the other sysadmin / operations teams you've seen. So, how do you evaluate them?

An operations team is traditionally measured by things like:
  • Production uptime percentage (high)
  • Frequency and length of production downtimes (low)
Everything is focused around the production environment and how well it stays available. These goals lead the team to fear and avoid changes to production. They also lead the team to disregard everything other than production as less important, or even unimportant. Not every operations team is like this, but every human will eventually conform to the actions that are rewarded and avoid the ones that aren't.

Our devops team, though, wants to accomplish different things. This team wants to deliver changes to production rapidly, even daily or hourly. They talk about reconstructing production on every deployment. Reconstructing lower environments on every deployment, too. They also talk about production in a very different way, not focused on external users.

If we want our fledgling devops team to keep doing these things, as different as they are from the traditional operations team, then we need to change what we measure. Or, rather, add to that list. That list is important. Operations teams are responsible for production uptime and need to be measured on that. But, the traditional mistake is only measuring on that.

Let's add a few more items to that list.

  • Cost of Deployment
    • Time to deploy (low)
    • Mean time between deployments (low)
    • Number of people required for a deployment (low)
  • Non-production friction
    • Uptime percentage (high)
    • Frequency and length of downtimes (low)
  • Number of manual tasks (low)
The last one may be unnecessary - it often falls out of what a devops team is trying to do. This team you sometimes struggle to understand isn't an operations team - it's a development team building an application which, when used, results in a new application environment. Yes, we measure them on operational efficiency, but it's much more than that.

Operational efficiency becomes a measurement of the applications the devops team has created, similar to the signup and engagement metrics for the web applications we work so hard to deploy. In other words, we start to measure the team on how well their application performs its job.
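For the deployment-cost bullets above, even a trivial script over a log of deploy events gives you numbers to track. A minimal sketch, with an event format made up purely for illustration:

    # Sketch: compute deployment-cost metrics from a list of deploy events.
    # The event fields (time, minutes, people) are hypothetical.
    require 'time'

    deploys = [
      { time: Time.parse('2015-12-01 10:00'), minutes: 12, people: 1 },
      { time: Time.parse('2015-12-02 09:30'), minutes: 10, people: 1 },
      { time: Time.parse('2015-12-04 16:20'), minutes: 15, people: 2 },
    ]

    avg_minutes = deploys.map { |d| d[:minutes] }.reduce(:+) / deploys.size.to_f
    gaps_hours  = deploys.each_cons(2).map { |a, b| (b[:time] - a[:time]) / 3600.0 }
    mean_gap    = gaps_hours.reduce(:+) / gaps_hours.size
    max_people  = deploys.map { |d| d[:people] }.max

    puts "Time to deploy (avg minutes):       #{avg_minutes.round(1)}"
    puts "Mean time between deploys (hours):  #{mean_gap.round(1)}"
    puts "People required (worst case):       #{max_people}"

Where the numbers come from (deploy tool logs, CI timestamps) matters far less than that they're tracked at all.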

Thursday, September 24, 2015

Opensource isn't a right and it's not free

The trigger for this post is a reasonable request from a Windows user. Well, it's reasonable on its face. They put forward two questions, paraphrased below:

  1. Why is it so hard for opensource developers to get their stuff working on Windows?
  2. Why don't opensource developers care about the Windows population?
When it comes to operating systems, Linux and OSX are practically kissing cousins. Linux grows out of Minix and OSX grows out of BSD. There's been significant cross-pollination and agreement. Both are POSIX-compliant, have relatively sane package systems, and provide a consistent command-line interface and a common configuration mechanism. Filepaths differ, but that's a simple case-statement. The same tools work the same.

Windows, on the other hand, is different. Very different. The entire architecture of the thing is alien. It has different assumptions for what an OS does and how someone will use it. It puts things in different places. It has different ways of reading and setting configuration. It has four different commandlines, each of them completely different. It has three unrelated packaging systems, none of which put stuff anywhere like the others and which don't even acknowledge the others' existences.

That should answer the "Why doesn't it work right?" question.

90% of all web applications are deployed onto some form of Linux. (Rarely on OSX, but that's the fault of Apple's licensing, not a problem with the platform. IIS may have had its day in the sun, but that sun has practically set.) Given Linux's historical problems with providing a good GUI, OSX has proven to be a great development platform for web applications destined for Linux. Given its great support for virtual machines, it also becomes a great platform for developing mobile applications (XCode and various VM solutions for Android). That hits the developer tetrafecta.

Because of this, most developers are very familiar with Linux and are likely using an OSX laptop. They will only use Windows for testing in IE.

This is what people are paid to do.

Let's talk about who opensource developers are. The average OSS developer (insofar as there is an average) is someone who's paid to do development using OSS tools during the day. They will usually start working on these modules to scratch an itch at work - the initial release is usually a donation from an employer. The developer then starts to take pride in it at home. Most OSS developers realize that these modules act as advertisement for their skills. (Every job offer I've received in the past 10+ years has referenced my OSS work in some way.)

There is a vast world out there of things that I could learn. Whole new ways of
  • Storing and managing data
  • Interacting with huge datasets
  • Doing operations
  • Doing development
  • Thinking about parallel execution
I don't have a mousefart's chance in a windstorm to even keep up with what I already do for work. So why am I going to spend time learning how to manage a complex system that I will never use for work?

I'm not. And that's why I don't care if my opensource stuff works on Windows. If you want to make it work, then great. I love pull requests. If your employer wants to pay me to support it on Windows, we can discuss my rate.

But you don't have the right to make me work for free to accomplish something I don't care about. That's just not how things work.

Thursday, September 17, 2015

Devops - The toolchest

I've gotten the same question from a few people about Devops - Where do I start?: "What exactly do you mean by everything?" Yes, there's a list in the post, but over a few conversations the question evolved into the related question, "What is the full list of tools that a devops team needs to have an answer for?"

So, treat this list as a checklist. Your IT operations may not need an example of one or another of these items, but you need to have thought about why you don't need it.

Caveats:
  1. I'm not interested in vim/emacs-type debates. So, questions of Puppet vs. Chef (in specific) aren't going to be in here. But, "configuration management" is going to be one of the line items. 
  2. My experience is primarily focused on web application development. Nearly everything is still correct for mobile, desktop, or embedded development. If there are differences in tool categories, I'd love to hear about them.

Backoffice Operations

This is the list of tools/services that are necessary to just run a company today. It doesn't matter what your company does or even if it writes its own software.
  • DNS
    • Depending on how other things are set up, you may not need this explicitly
  • Firewalls
    • Depending on how other things are set up, you may not need this explicitly
  • Mail (both receiving and sending)
    • Something like Google Mail
    • You may use something additional for sending emails from an application
  • Document Management
    • Something like Google Docs
  • Instant Messaging
    • Something like HipChat or Slack. Your teams are going to be doing it anyways, so best get in front of it.
  • Conferencing
    • Google Hangouts, Skype, or even Appear.in. Your teams are going to be doing it anyways, so best get in front of it.
  • Monitoring
    • Yes, you want to monitor even these things
    • Depending on your topologies, you may have multiple monitoring tools for different purposes, such as Pingdom plus Icinga plus CloudWatch.
  • Alerting / Escalation
    • While Nagios may do alerting, it's best to have a specific tool to handle that. I like PagerDuty.
  • Dashboards
    • The best teams have an up-or-not dashboard for every service the company depends on, whether it's internal or external. It cuts down on the emails that say "Is X down?".
    • Dashboards make executives happy. Non-executives love happy executives. Therefore, non-executives love dashboards, especially ones that executives can manipulate.

Development Tools

This is a list of the internal-facing tools that enable your ability to do development.
  • Source Control Management - specifically a DVCS like Git or Mercurial
  • Issue Tracker
    • This needs to be linked to your SCM so that commits will change issue status (a sketch of one way to wire this up follows this list)
  • Pull Request / Code Review tool
    • This should be the only tool that can merge to master
    • This should also update the issue tracker
  • Job Runner (aka, Continuous Integration / CI)
    • Runs tests upon pull requests (on create and update)
    • Runs packaging upon merge to master
  • Deployables repository
    • Maven, Yum, Rubygems - there's lots of package types and you need to have a place to put them.
  • Deployment process
    • This is Chef / Puppet / Salt / OpsWorks / whatever.
    • This needs to be integrated into however you have constructed your SCM process
    • This needs to update your issue tracker
    • Ideally, anybody in the company should be able to push a button and the button does the right thing.
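To make the SCM-to-issue-tracker link above concrete, here is a hedged sketch of a hook that scans commit messages for issue references. The tracker calls are placeholders - substitute whatever API your issue tracker actually exposes.

    # Sketch: scan commit subjects for "Fixes #123" / "Refs #123" references and
    # update the issue tracker. The tracker client calls are hypothetical.
    commit_subjects = `git log --format=%s origin/master..HEAD`.lines

    commit_subjects.each do |subject|
      subject.scan(/(fixes|refs)\s+#(\d+)/i) do |action, issue_number|
        if action.downcase == 'fixes'
          puts "Would close issue ##{issue_number}: #{subject.strip}"
          # tracker.close_issue(issue_number)              # hypothetical
        else
          puts "Would comment on issue ##{issue_number}: #{subject.strip}"
          # tracker.add_comment(issue_number, subject)     # hypothetical
        end
      end
    end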

Production Tools

When you're running your web application, these are the services you need to consider. You will also need to consider the development environment version of each of these. For example, if you're using S3 as your file store, do you provide a development S3 bucket (with all the attendant issues of using a shared resource for multiple developers) or do you use something like fake-s3?
  • Load Balancer
    • This includes SSL termination (you don't want to terminate SSL at your web application)
  • CDN / Static files
    • All your HTML, CSS, Javascript, images, and videos belong here.
    • This is different from your application caching layer, such as Squid (though you may reuse the same tool).
  • Application servers
    • Where all your code goes.
    • You may have multiple tiers of this, depending on your application's topologies.
  • Metrics gathering
    • Something like New Relic or Librato.
    • You may do monitoring on these metrics (for example, N internal server errors per M seconds)
  • Application Caching
    • This may be response caching (such as Squid) or data caching (such as Memcache)
    • It may be ephemeral caching (such as Memcache) or semi-permanent caching (such as Redis)
  • Relational Database(s)
    • More than just production, it's also the development choice and how those interact.
  • Key-Value Database(s)
    • More than just production, it's also the development choice and how those interact.
  • Backups
    • This includes how to test backups, including testing them in production
    • This includes disaster-recovery and offsite storage
  • Data destruction policies
    • This includes how to remove data from backups so that it's guaranteed to be destroyed
  • Non-production environment construction
    • Non-production environments cannot be the same as production. Exactly what tradeoffs are you making and why?
    • Developer environments are even less like production. How are you ensuring the developer environment is as close to production as possible, so that "It works on my machine" is never uttered?
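As one concrete example of the fake-s3 question at the top of this section: with the Ruby aws-sdk, the development environment can point the same client code at a local stand-in. A minimal sketch, where the bucket name, port, and environment variable are assumptions:

    # Sketch: use a local fake-s3 in development, real S3 everywhere else.
    require 'aws-sdk'   # aws-sdk v2

    s3 =
      if ENV['APP_ENV'] == 'development'
        Aws::S3::Client.new(
          endpoint:          'http://localhost:4567',   # assumed fake-s3 port
          force_path_style:  true,
          access_key_id:     'fake',
          secret_access_key: 'fake',
          region:            'us-east-1'
        )
      else
        Aws::S3::Client.new(region: 'us-east-1')        # real credentials via IAM/env
      end

    s3.put_object(bucket: 'my-app-uploads', key: 'hello.txt', body: 'hello')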

Friday, September 11, 2015

Devops - The "Process"™

Last post, I laid out the argument for using source control in operations. To summarize, put all the things in source control so you can control them. Except tools don't control what people do - only processes can do that. So, let's work through what the process should be, now that everything is in a known place.

First, what are we looking for in this process? It's really hard to know if you've achieved something when you don't know what you're trying to achieve. Sounds pretty obvious, I know, but think back - how many projects have you worked on where that wasn't done? How successful were those projects?

What I want out of my operations process is this:
  1. A guarantee that every piece of infrastructure was:
    1. created solely from things in source control.
    2. changed solely from things in source control.
  2. A guarantee that I can find out:
    1. what changed
    2. why it changed
    3. when it changed
    4. who reviewed/approved the change
    5. when it was applied to each instance in each environment
      1. and which instances in which environments it hasn't been applied to
    6. what is the dependency graph between this and other changes
Oh, and it has to be unobtrusive and not get in the way and be easy to understand. Not so hard.

These guarantees look very similar to the guarantees that the standard Agile development processes provide. The first is obviously implemented - devs aren't allowed to touch deployed servers. So, any change they want to see in the application must come from things in source control. (This is different from most non-Agile processes where code changes to deployed servers are sometimes emailed (or even IM'ed!) from dev to ops and implemented by hand because changes are so infrequent.)

The second is implemented through discipline. There is nothing in Agile that says all changes must be associated with an issue, confined within a branch, and go through a multi-person development/review process. But, every Agile team I have ever seen or heard of does it that way because doing it any other way has led to uncomfortable situations and unanswerable questions. It's so ubiquitous that tools now treat this as "Agile mode". Github's pull request process even creates an issue# just to ensure that there is a record in the issue tracker.

In order to apply this process to operations, we will need discipline of our own. Any operations team, to do its job, will be applying changes manually. This gets the job done now and is easy to reason about. Not a bad thing. Except, how do you know that what was done is what was defined in source control? This is where the discipline comes in.

Ideally, all changes to deployed servers happen via script. If those scripts are executed from a place separate from the deployed servers, that's even better. (For example, only using the AWS SDK to touch your AWS infrastructure.) The script can be a Puppet/Chef/Salt thing or Ruby scripts or even Bash. It doesn't matter, so long as the computer is the one actually doing the changes. The scripted-ness is checked in and you treat it like application code. Including deployment.
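For example, a change like "open HTTPS to the office network" becomes a small, checked-in script run from outside the servers rather than a hand-edit in a console. A hedged sketch with the Ruby aws-sdk, where the security group ID and CIDR are placeholders:

    # Sketch: an infrastructure change applied by the computer, from a script
    # that lives in source control. IDs and CIDR ranges are placeholders.
    require 'aws-sdk'   # aws-sdk v2

    ec2 = Aws::EC2::Client.new(region: 'us-east-1')

    ec2.authorize_security_group_ingress(
      group_id:    'sg-0123abcd',      # placeholder security group
      ip_protocol: 'tcp',
      from_port:   443,
      to_port:     443,
      cidr_ip:     '203.0.113.0/24'    # placeholder office network
    )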

In short, you treat deployments of your changes exactly as you treat deployments of the applications under your management. Which makes sense because an application is more than just code - it's also the infrastructure.

"That's great in an ideal world, Rob, but nothing I have is scripted. It's all checklists. Now what?"

Checklists are scripts that run against a human virtual machine. If you look at the checklist, you should be able to replace many of the steps with "execute this script". Maybe even condense 2-3 steps into one script. Over time (and this is definitely a journey, not a destination), each checklist will condense into "invoke these N scripts". Which, itself, is just one script.
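A sketch of where that journey ends up - the condensed checklist is itself just one script that invokes the step scripts. All of the step names below are placeholders for whatever your checklist actually says.

    #!/usr/bin/env ruby
    # Sketch: a checklist condensed into "invoke these N scripts".
    STEPS = %w[
      scripts/01_drain_load_balancer.sh
      scripts/02_stop_services.sh
      scripts/03_apply_config.sh
      scripts/04_start_services.sh
      scripts/05_smoke_test.sh
    ]

    STEPS.each do |step|
      puts "==> #{step}"
      system(step) or abort "Step failed: #{step}"
    end
    puts "Checklist complete."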

"That great, but I don't even have checklists. My people just know what to do."

If they "just know what to do," then you do not. Do not know what they do, do not know they have done it, do not have control. You're responsible that they do it. And, most importantly, you're responsible that new people can learn it. If it isn't written down, how are new people learning their job?

Tuesday, September 8, 2015

Devops - Where do I start?

Last post, I laid out a series of questions every operations team should be able to answer. Everyone may agree that this list of operational capabilities is good, but getting from here to there is far more complicated. What should we do first?

The absolute first thing every operations team must do is get everything into source control. If it's not in source control, you cannot audit it, review it, or manage it. If a change happens, you don't know how, when, what, or why and there's no chain of custody showing what the approvals were. In short, you do not control it. Not controlling the stuff that makes your stuff isn't sane devops.

This implies there needs to be solid source control. First choice is what to use. Always use a distributed version control system (aka, DVCS) - Git or Mercurial if at all possible. Using a DVCS has two massive advantages over a centralized version control system, like Subversion, Perforce, or CVS (no links to bad choices!). First, DVCS's are far better tools for managing development changesets - lots of discussion about that around the web. The better reason, for operations, is that every clone can be used as the new master if everything else goes pear-shaped. Remember - the first question asks if you are capable of recovering everything else if you have your production backups and the latest checkout of source control. You cannot recover source control itself with the latest checkout of Subversion or CVS. You can do so with Git or Mercurial.

If at all possible, use a service. (This, btw, is going to be a theme I'll expand on more later.) GitHub, AWS's CodeCommit, Atlassian's BitBucket, or Google's Cloud Source Repositories are all excellent choices, along with many others. They all provide private repositories and are extremely scalable and secure. For nearly every organization, these services are capable enough. If you're already using Google's AppEngine or the AWS suite of services, the choice is pretty simple. If you're using the hosted Atlassian suite, BitBucket again seems to be an easy choice. Github is an excellent choice in most other scenarios. Sometimes, for corporate reasons, you have to host internally. In that case, you should strongly consider using GitLab or Atlassian Stash. In all cases, you should be using the same tools as your developers.

Once you've picked a tool and a method for hosting, the next step is to get everything into it. Literally and truly everything. If it's a script, check it in. If it's a configuration file, check it in. If it's a Chef recipe, Puppet manifest, Salt pillar, or any other file for a similar tool - yup, check it in. All the secrets (GPG-encrypted first, of course). All the scripts. All the configuration. Everything.

If you don't have an automated way of building something, then write down how to build it and check that in. Unless you have a really good reason not to, use a text-based markup language. It's important to use text because text is diff-able by Git/Mercurial. If it's not diff-able, then it becomes very difficult to see the differences when someone wants to change something. I prefer GitHub-flavored Markdown, but there's plenty of other good choices. Use the one that makes the most sense with the rest of your tooling landscape.

Note: I recommend both text (for diff-ability) and GPG-encryption (for secrets). GPG encryption is inherently not diff-able. For secrets, this is good. For instructions and scripts, you want diff-ability.

What's the minimal list of servers/services/activities that you need to check into source control? You guessed it - everything.
  • DNS / Network definitions
  • LDAP / IAM / User authentication lists
  • Mail servers (if you manage mail internally, otherwise treat it as an external service)
  • Monitoring and alerting definitions
    • Especially if you use something like PagerDuty for alerting
  • GPG-encrypted master passwords for external services
  • GPG-encrypted keys (and other authentication methods) for external services
  • GPG-encrypted SSL certificates
  • Server construction methods (for your application)
    • Including where and how to get the base images
  • Application deployment methods
  • Service construction methods (for your internally-hosted supporting services)
    • For example, CI (Jenkins, Stash, etc)
  • Any and all desktop support (including VPN clients, etc)
  • Anything else you are responsible for
(Note: While the documentation doesn't say it explicitly, you can GPG-encrypt a file for multiple recipients and any one of them can decrypt it. This is a good thing. Note that you will need to re-key all the secrets whenever someone leaves, not just re-encrypt them - you needed to do this anyway. You only need to re-encrypt them when someone new comes in.)
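A small wrapper around gpg makes the multi-recipient encryption routine. The key IDs and file path below are placeholders; the --encrypt and --recipient flags are standard gpg.

    # Sketch: encrypt a secret for several recipients, so any one of them can
    # decrypt it. Recipient IDs and the file path are placeholders.
    recipients = %w[alice@example.com bob@example.com ops-lead@example.com]

    args = ['gpg', '--encrypt']
    recipients.each { |r| args += ['--recipient', r] }
    args << 'secrets/production-db-password.txt'

    system(*args) or abort 'gpg encryption failed'
    # This produces secrets/production-db-password.txt.gpg - check that in,
    # never the plaintext.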

These things may not all live in the same repository. But, don't hesitate to put something into some repository just because you don't know the perfect place. You can always move it later. And, the movement itself will document your growing understanding of how to manage the infrastructure.

If you don't know how to rebuild a server, take your best guess and stick that in the repository. As you learn more, you will update what's there. The repository logs will be a trail of exactly what you had to do in order to get all the information you currently have.

The next post will talk about what to do with this repository once you've built it.

Thursday, September 3, 2015

Questions to ask Operations

(. . . or, if you're the devops team, questions you should be asking yourself)

Devops teams exist to make the answers to the following questions a resounding "Absolutely!". If there is any question in this list that you're not 110% confident in the "Absolutely!", then that's the next thing to work on. And, yes, this list is ordered from most important to least important.
  1. Can we confidently rebuild our production environment from source control and backups of data?
    1. In under an hour?
    2. Including all monitoring, alerting, and metrics gathering?
  2. Can we confidently terminate one person's access?
    1. In under 10 minutes?
    2. With one command?
  3. Can we confidently create an instance of the application?
    1. That is a structural clone of production?
    2. With reasonable fake data?
    3. In one command?
    4. On a laptop?
  4. Can we confidently turn off any one server in production at any time?
    1. With zero impact or visibility to users?
    2. Including your:
      1. database master?
      2. session store?
  5. Can we confidently tell anyone to take 3 months leave to care for a sick family member?
    1. Without ever calling them once?
  6. Can we confidently hire into any spot and have that person fully authenticated and authorized?
    1. With nothing missing?
    2. In their first hour?
    3. Before they even show up?
  7. Can we confidently hire someone into IT and have them make a change to production?
    1. In their first week?
    2. In their first day?
  8. Can we confidently say that what is reviewed in QA is EXACTLY what can go to production?
  9. Can we confidently let anyone promote from one environment to the next?
    1. With a button?
    2. Showing them exactly what will be promoted?
      1. As issue numbers linked from your issue tracker?
    3. With rollbacks?
  10. Do you have tests of your infrastructure?
    1. Including monitoring, alerting, and metrics gathering?
    2. Including external interfaces?
    3. Run as part of a CI service?
    4. With automated coverage statistics?
      1. Over 90%?
Implicit in every question is the follow-up "How do you know?" If you ask yourself these questions and cannot point to where you did that yesterday (or the last time, in the case of authn/z changes), then you're treating your infrastructure as magic.
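Question 10 is the one teams most often have nothing for. As one possible starting point, here is a minimal infrastructure test with serverspec; the package, service, and port are assumptions about your stack.

    # spec/infrastructure/web_server_spec.rb
    # Sketch: infrastructure tests with serverspec; adjust the resources to
    # whatever your stack actually runs.
    require 'serverspec'
    set :backend, :exec

    describe package('nginx') do
      it { should be_installed }
    end

    describe service('nginx') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(443) do
      it { should be_listening }
    end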

Next post discusses where to start.

    Thursday, August 27, 2015

    The Packager DSL - The second user story

    1. DSLs - An Overview
    2. The Packager DSL - The first user story
    3. The Packager DSL - The second user story
    Our main user is an odd person. So far, all we've made is a DSL that can create empty packages with a name and a version. No files, no dependencies, no before/after scripts - nothing. But, instead of asking for anything useful, the first response we get is "Huh - I didn't think 'abcd' was a legal version number." Some people just think in weird ways.

    Our second user story:
    Throw an error whenever the version isn't an acceptable version string by whatever criteria are normally employed.
    Before we dig into the user story itself, let's talk about why this isn't a "bug" or a "defect" or any of the other terms normally bandied about whenever deployed software doesn't meet user expectations. Every time the user asks us to change something, it doesn't matter whether we call it a "bug", "defect", "enhancement", or any other word. It's still a change to the system as deployed. Underneath all the fancy words, we need to treat every single change with the same processes. Bugfixes don't get a special pass to production. Hotfixes don't get a special pass to production. Everything is "just a change", nothing less and nothing more.

    In addition, the "defect" wasn't in our implementation. It was in the first user story if it was anywhere. The first user story didn't provide any restrictions on the version string, so we didn't place any upon it. And that was correct - you should never do more than the user story requires. If you think that a user story is incomplete in its description, you should go back to the user and negotiate the story. Otherwise, the user doesn't know what they're going to receive. Even worse, you might add something to the story that the user does not want.

    Really, this shouldn't be considered a defect in specification, either. That concept assumes an all-knowing specifier that is able to lay out fully-formed and fully-correct specifications that never need updating. Which is ridiculous on its face. No-one can possibly be that person and no-one should ever be forced to try. This much tighter feedback loop between specification to production to next specification is one of the key concepts behind Agile development. (The original name for devops was agile systems administration or agile operations.)

    All of this makes sense. When you first conceive of a thing, you have a vague idea of how it should work. So, you make the smallest possible thing that could work and use it. While you start with a few ideas of where to go next, the moment you start using it, you realize other things that you never thought of. All of them (older ideas and newer realizations) become user stories and we (the developers and the users together) agree on what each story means, then prioritizes them in collaboration. Maybe the user wants X, but the developers point out Y would be really quick to do, so everyone agrees to do Y first to get it out of the way. It's an ongoing conversation, not a series of dictates.

    All of which leads us back to our user story about bad version strings. The first question I have is "what makes a good or bad version string?" The answer is "whatever criteria are normally employed". That means it's up to us to come up with a first pass. And, given that we're building this in a Ruby world, the easiest thing would be to see what Ruby does.

    Ruby's interface with version strings would be in its packages - gems. Since everything in Ruby is an object, then we would look at Gem and its version-handling class Gem::Version. Reading through that documentation, it looks like the Ruby community has given a lot of thought to the issue. More thought than I would have realized was necessary, but it's good stuff. More importantly, if we use Gem::Version to do our version string validation, then we have a ready-documented worldview of how we expect version strings to be used.
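    To see the worldview we're buying into, a quick check in irb (results from the RubyGems that ships with current Ruby; exact behavior may vary slightly by version):

    require 'rubygems'   # Gem::Version comes with RubyGems

    Gem::Version.new('1.2.44') > Gem::Version.new('1.2.9')   # => true (numeric, not string, comparison)
    Gem::Version.new('1.0') == Gem::Version.new('1.0.0')     # => true
    Gem::Version.correct?('abcd')                            # => false
    Gem::Version.new('abcd')                                 # => raises ArgumentError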

    Granted, we will have to conform to the requirements of whatever package formats our users want to generate. FreeBSD may require something different from RedHat, and maybe neither is exactly what Gem::Version allows. At that point, we'll have failing tests we can write from user stories saying things like "For Debian, allow this and disallow that." For now, let's start with throwing an error on values like "bad" (and "good"). Anything more than that will be another user story.

    As always, the first thing is to write a test. Because we can easily imagine needing to add more tests for this as we get more user stories, let's make a new file at spec/dsl/version_spec.rb. That way, we have a place to add more variations.

    describe Packager::DSL do
        context "has version strings that" do
            it "rejects just letters" do
                expect {
                    Packager::DSL.execute_dsl {
                        package {
                            name 'foo'
                            version 'abcd'
                            type 'whatever'
                        }
                    }
                }.to raise_error("'abcd' is not a legal version string")
            end
        end
    end
    

    Once we have our failing test, let's think about how to fix this. We have three possible places to put this validation. We may even choose to put pieces of it in multiple places.
    1. The first is one we've already done - adding a validation to the :package entrypoint. That solution is good for doing validations that require knowing everything about the package, such as the version string and the type together.
    2. The second is to add a Packager::Validator class, similar to the DSL and Executor classes we already have. This is most useful for doing validation of the entire DSL file, for example if multiple packages need to be coordinated.
    3. The third is to create a new type, similar to the String coercion type we're currently using for the version.
    Because it's the simplest and is sufficient for the user story, let's go with option #3. I'm pretty sure that, over time, we'll need to exercise option #1 as well, and possibly even #2. But, YAGNI. If we have to change it, we will. That's the beauty of well-built software - it's cheap and easy to change as needed.

    class Packager::DSL < DSL::Maker
        add_type(VersionString = {}) do |attr_name, *args|
            unless args.empty?
                begin
                    ___set(attr_name, Gem::Version.new(args[0]).to_s)
                rescue ArgumentError
                    raise "'#{args[0]}' is not a legal version string" 
                end
            end
            ___get(attr_name)
        end
    
        add_entrypoint(:package, {
            ...,
            :version => VersionString,
            ...,
        }) ...
    end
    

    Note how we use Gem::Version to do the validation, but we don't save it as a Gem::Version object. We could keep the value as such, but there's no real reason (yet) to do so. ___get() and ___set() (with three underscores each) are provided by DSL::Maker to do the sets and gets. The attr_name is provided for us. So, if we wanted, we could reuse this type for many different attributes.

    We could add more tests, documenting exactly what we've done. But, I'm happy with this. Ship it!

    There's something else we haven't discussed about this option in particular. Type validations happen immediately when the item is set. Since values can be reused within a definition, they are safely reusable. For example (once we have defined how to include files in our packages), we could have something like:

    package {
        name 'some-product'
        version '1.2.44'
        files {
            source "/downloads/vendor/#{name}/#{version}/*"
            dest "/place/for/files"
        }
    }
    

    If we defer validation until the end, we aren't sure we've caught all the places an invalid value may have been used in another value. This is why we attempt a "defense in depth", a concept normally seen in infosec circles. We want to make sure the version value is as correct as we can make it with just the version string. Then, later, we want to make sure it's even more correct once we can correlate it with the package format (assuming we ever get a user story asking us to do so).

    Wednesday, August 26, 2015

    Creating the Packager DSL - Retrospective

    1. Why use a DSL?
    2. Why create your own DSL?
    3. What makes a good DSL?
    4. Creating your own DSL - Parsing
    5. Creating your own DSL - Parsing (with Ruby)
    6. Creating the Packager DSL - Initial steps
    7. Creating the Packager DSL - First feature
    8. Creating the Packager DSL - The executor
    9. Creating the Packager DSL - The CLI
    10. Creating the Packager DSL - Integration
    11. Creating the Packager DSL - Retrospective
    We've finished the first user-story. To review:
    I want to run a script, passing in the name of my DSL file. This should create an empty package by specifying the name, version, and package format. If any of them are missing, print an error message and stop. Otherwise, an empty package of the requested format should be created in the directory I am in.
    Our user has a script to run and a DSL format to use. While this is definitely not the end of the project by any means, we can look back on what we've done so far and learn a few things. We're also likely to find a list of things we need to work on and refactor as we move along.

    It helps to come up with a list of what we've accomplished in our user story. As we get further along in the project, this list will be much smaller. But, the first story is establishing the walking skeleton. So far, we have:

    1. The basics of a Ruby project (gemspec, Rakefile, Bundler, and project layout)
    2. A DSL parser (using DSL::Maker as the basis)
      1. Including verification of what is received
    3. An intermediate data structure representing the packaging request (using Ruby Structs)
    4. An Executor (that calls out to FPM to create the package)
    5. A CLI handler (using Thor as the basis)
      1. Including verification of what is received
    6. Unit-test specifications for each part (DSL, Executor, and CLI)
      1. Unit-tests by themselves provide 100% code coverage
    7. Integration-test specifications to make sure the whole functions properly
    8. A development process with user stories, TDD, and continuous integration
    9. A release process with Rubygems
    That's quite a lot. We should be proud of ourselves. But, there's always improvements to be made.

    The following improvements aren't new features. Yes, an empty package without dependencies is almost completely worthless. Those improvements will come through user stories. These improvements are ones we've seen in how we've built the guts of the project. Problems we've noticed along the way that, left unresolved, will become technical debt. We need to list them out now so that, as we work on user stories, we can do our work with an eye to minimizing these issues. In no particular order:
    • No error handling in the call to FPM
      • Calling an external command can be fraught with all sorts of problems.
    • Version strings aren't validated
    • There's no whole-package validator between the DSL and the Executor
    • We probably need a Command structure to make the Executor easier to work with
    • We probably want to create some shared contexts in our specs to reduce boilerplate
      • This should be done as we add more specs
    It's important to be continually improving the codebase as you complete each new user story. Development is the act of changing something. Good developers make sure that the environment they work in is as nimble as possible. Development becomes hard when all the pieces aren't working together.

    Over the next few iterations, we'll see how this improved codebase works in our favor when we add the next user stories (validating the version string, dependencies, and files).

    Tuesday, August 25, 2015

    Packager DSL 0.0.1 Released

    The Packager DSL has been released to Rubygems. (I was going to name it "packager", but that was already taken.) Please download it and take it for a spin. It'll be under pretty active development, so any bugs you find or missing features you need should be addressed rapidly.

    Monday, August 24, 2015

    Creating the Packager DSL - Integration

    1. Why use a DSL?
    2. Why create your own DSL?
    3. What makes a good DSL?
    4. Creating your own DSL - Parsing
    5. Creating your own DSL - Parsing (with Ruby)
    6. Creating the Packager DSL - Initial steps
    7. Creating the Packager DSL - First feature
    8. Creating the Packager DSL - The executor
    9. Creating the Packager DSL - The CLI
    10. Creating the Packager DSL - Integration
    Our user story:
    I want to run a script, passing in the name of my DSL file. This should create an empty package by specifying the name, version, and package format. If any of them are missing, print an error message and stop. Otherwise, an empty package of the requested format should be created in the directory I am in.
    Our progress:

    • We can parse the DSL into a Struct. We can handle name, version, and package format. If any of them are missing, we raise an appropriate error message.
    • We can create a package from the parsed DSL
    • We have a script that executes everything
    So, we're done, right? Not quite. We have no idea if it actually works. Sure, you can run it manually and see the script works. But, that's useful only when someone remembers to do it and remembers how to interpret it. Much better is a test that runs every single time in our CI server (so no-one has to remember to do it) and knows how to interpret itself. In other words, a spec.

    We have tests for each of the pieces in isolation. That was deliberate - we want to make sure that each piece works without involving more complexity than is necessary. But, our user story doesn't care about that. It says the user wants to execute a script and get their package. The user doesn't care about the Parser vs. the Executor (or anything else we've written). Those distinctions are for us, the developers, to accommodate the inevitable changes that will happen. A developer's job (and raison d'etre, frankly) is to create change. Without change, a developer has nothing to do. So, we organize our project to make it as easy as possible to make change.

    But, at the end of the day, it's the integration of these pieces that matters. So, we need to create end-to-end integration tests that show how all the pieces will work together. Where the unit tests we've written so far test the inner workings of each unit, the integration tests will test the coordination of all the units together. We are interested in checking that the output of one piece fits the expected input of the next piece.

    Said another way, our unit tests should provide 100% code coverage (which you can see with rspec spec/{cli,dsl,executor}). The integration tests will provide 100% user-expectation coverage.

    As always, first thing we do is write a test. We have a subdirectory in spec/ for the unit tests for each component. Let's add another one called spec/integration with a file called empty_spec.rb and contents of

    require 'tempfile'
    
    describe "Packager integration" do
        let(:dsl_file) { Tempfile.new('foo').path }
        it "creates an empty package" do
            append_to_file(dsl_file, "
                package {
                    name 'foo'
                    version '0.0.1'
                    package_format 'dir'
                }
            ")
    
            Packager::CLI.start(['create', dsl_file])

            expect(File).to exist('foo.dir')
            expect(Dir['foo.dir/*'].empty?).to be(true)
        end
    end
    

    Take a file with an empty package definition, create a package in the directory we're in, then verify. Seems pretty simple. We immediately run into a problem - no package is created. If you remember back when we were creating the executor, we never actually called out to FPM. It's relatively simple to add an #execute method to the Executor which does a system() call. That should make this test pass.
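    What that #execute might look like is mostly up to how the Executor builds its command; here is a minimal sketch, with error handling deliberately deferred to a later user story:

    class Packager::Executor
        # Sketch: #execute just hands the command to the shell. How the command
        # array is built is the Executor's concern; error handling comes later.
        def execute(command)
            system(*command)
        end
    end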

    But, that's not enough. After you run it, do a git status. You'll immediately see another problem - the package was created in the directory you ran rspec in. Which sucks. But, it's fixable.

    In the same way we have tempfiles, we have temp directories. Sysadmins used to bash are familiar with mktemp and mktemp -d. Ruby has Tempfile and Dir.mktmpdir, respectively. So, let's run the test within a temporary directory - that should solve the problem.

    require 'tempfile'
    require 'tmpdir'
    
    describe "Packager integration" do
        let(:dsl_file) { Tempfile.new('foo').path }
    
        it "creates an empty package" do
            Dir.mktmpdir do |tempdir|
                Dir.chdir(tempdir) do
                    # Rest of test
                end
            end
        end
    end
    

    That keeps the mess out of the main directory. Commit and push.

    Though, when I look at what's been written, the tempdir handling is both manually-done (the author has to remember to do it every time) and creates two more levels of indentation. The manual part means it's possible for someone to screw up. The indentations part means that it's harder to read what's happening - there's boilerplate in every test. Which is somewhat ironic given that the whole point of this process is to create a DSL - removing the boilerplate. We can do better. Red-Green-Refactor doesn't just apply to the code. (Or, put another way, tests are also code.)

    RSpec allows us to do things before-or-after all-or-each of the specs in a context. Let's take advantage of this to ensure that every integration test will always happen within a tempdir.

    require 'fileutils'
    require 'tempfile'
    require 'tmpdir'
    
    describe "Packager integration" do
        let(:dsl_file) { Tempfile.new('foo').path }
    
        let(:workdir) { Dir.mktmpdir }
        before(:all)  { @orig_dir = Dir.pwd }
        before(:each) { Dir.chdir workdir }
        after(:each) {
            Dir.chdir @orig_dir
            FileUtils.remove_entry_secure workdir
        }
    
        it "creates an empty package" do
            # Rest of test
        end
    end
    

    A few notes here.

    1. When we used the block versions of Dir.mktmpdir and Dir.chdir, the block cleaned up whatever we did (e.g., changed back to the original directory). When we use the direct invocation, we have to do our own cleanup.
    2. before(:all) will always run before before(:each) (guaranteed by RSpec).
    3. We don't want to use let() for the original directory. let() is lazy, meaning it only gets set the first time it's invoked. Instead, we set an attribute of the test (as provided helpfully to us by RSpec).
      1. We could have used let!() instead (which is eager), but it's too easy to overlook the !, so I don't like to use it. Sometimes, subtlety is overly so.
    4. Tests should be runnable in any order. And this includes all the other tests in all the other spec files. You should never assume that any two tests will ever run in a specific order or even that any test will run in a specific test run. So, we always make sure to change directory back to the original directory (whatever that was). There's nothing here that assumes anything about the setup.
    5. FileUtils has many ways to remove something. #remove_entry_secure is the most conservative, so the best for something that needs to be accurate more than it needs to be fast.
    6. We need to leave the tempdir that we're in before trying to remove it. On some OSes, the OS will refuse to remove a directory if a process has it as its working directory.


    Friday, August 21, 2015

    Creating the Packager DSL - The CLI

    1. Why use a DSL?
    2. Why create your own DSL?
    3. What makes a good DSL?
    4. Creating your own DSL - Parsing
    5. Creating your own DSL - Parsing (with Ruby)
    6. Creating the Packager DSL - Initial steps
    7. Creating the Packager DSL - First feature
    8. Creating the Packager DSL - The executor
    9. Creating the Packager DSL - The CLI
    Our user story:
    I want to run a script, passing in the name of my DSL file. This should create an empty package by specifying the name, version, and package format. If any of them are missing, print an error message and stop. Otherwise, an empty package of the requested format should be created in the directory I am in.
    Our progress:

    • We can parse the DSL into a Struct. We can handle name, version, and package format. If any of them are missing, we raise an appropriate error message.
    • We can create a package from the parsed DSL
    We still need to:
    • Provide a script that executes everything
    Writing this script, on its face, looks pretty easy. We need to:
    1. Receive the filename from the commandline arguments
    2. Pass the contents of that filename to Packager::DSL.parse_dsl()
    3. Pass the contents of that to Packager::Executor
    A rough version (that works) could look like:

    #!/usr/bin/env ruby
    
    require 'packager/dsl'
    require 'packager/executor'
    
    filename = ARGV[0]
    items = Packager::DSL.parse_dsl(IO.read(filename))
    Packager::Executor.new.create_package(items)
    

    You can create a file with a package declaration (such as the one in our spec for the DSL) and pass it to this script and you will have an empty package created. All done, right?

    Not quite.

    The first problem is testing executables is hard. Unlike classes and objects which live in the same process, the testing boundary of a script is a process boundary. Process boundaries are much harder to work with. Objects can be invoked and inspected over and over, in any order. Invocations of an executable are one-shot. Outside the results, there's nothing to inspect once you've invoked the script. If we could minimize the script and move most of the logic into some objects, that would make testing it so much easier. And, we can measure our code coverage of it.

    The second (and bigger) problem is writing good executables is hard. Very very hard. Good executables have options, error-handling, and all sorts of other best practices. It is nearly impossible to write a good executable that handles all the things, even if you're an expert.

    Again, the good Ruby opensource developers have provided a solution - Thor. With Thor, we can move all the logic into a Packager::CLI class and our executable in bin/packager becomes

    #!/usr/bin/env ruby
    
    $:.unshift File.expand_path('../../lib', __FILE__)
    
    require 'rubygems' unless defined? Gem
    require 'packager/cli'
    
    Packager::CLI.start
    

    Almost all of that is cribbed from other Ruby projects, meaning we can directly benefit from their experience. The executable is now 8 lines (including whitespace). We can visually inspect this and be extremely certain of its correctness. Which is good because we really don't want to have to test it. The actual CLI functionality moves into classes and objects, things we can easily test.

    First things first - we need a test. Thor scripts tend to function very similarly to git, with invocations of the form "<script> <command> <flags> <parameters>". So, in our case, we're going to want "packager create <DSL filenames>". This translates into the #create method on the Packager::CLI class. The filenames will be passed in as the arguments to the #create method. We don't have any flags, so we'll skip that part (for now).

    A note on organization - we have tests for the DSL, the Executor, and now the CLI. We can see wanting to write many more tests for each of those categories as we add more features, so let's take the time right now to reorganize our spec/ directory. RSpec will recursively descend into subdirectories, so we can create spec/dsl, spec/executor, and spec/cli directories. git mv the existing DSL and Executor specs into the appropriate directories (renaming them to be more meaningful), run RSpec to make sure everything is still found, then commit the change. You can pass rspec the name of a file or a directory, if you want to run just a subset of the tests. So, if you're adding just a DSL piece, you can run those tests to make sure they pass without having to do the entire thing.

    Back to the new CLI test. The scaffolding for this looks like

    describe Packager::CLI do
        subject(:cli) { Packager::CLI.new }
    
        describe '#create' do
        end
    end
    

    The nested describe works exactly as you'd expect. (RSpec provides many ways to organize things, letting you choose which works best for the situation at hand.)

    The first test, as always, is the null test. What happens if we don't provide any filenames? Our script should probably print something and stop, ideally setting the exit code to something non-zero. In Thor, the way to do that is to raise Thor::Error, "Error string". (I wish they'd call that Thor::Hammer, but you can't have everything.) So, the first test should expect the error is raised.

        it "raises an error with no filenames" do
            expect {
                cli.create
            }.to raise_error(Thor::Error, "No filenames provided for create")
        end
    

    Run that, see it fail, then let's create packager/cli.rb to look like

    class Packager
        class CLI < Thor
            def create()
                raise Thor::Error, "No filenames passed for create"
            end
        end
    end
    

    Again, we're writing just enough code to make the tests pass. Now, let's pass in a filename to #create. Except, where do we get the file?

    One possibility is to create a file with what we want, save it somewhere inside spec/, add it to the project, then reference that filename as needed.

    There are a few subtle problems with that approach.

    1. The file contents are separated from the test(s) using them.
    2. These files have to be packaged with the gem in order for client-side tests to work.
    3. There could be permission issues with writing to files inside the installation directory.
    4. Developers end up wanting to keep the number of these files small, so shoehorn as many possible cases within each file as possible.
    Fortunately, there's a much better approach. Ruby, like most languages, has a library for creating and managing tempfiles. Ruby's is called Tempfile. Adding this to our test file results in

    require 'tempfile'
    
    describe Packager::CLI do
        subject(:cli) { Packager::CLI.new }
        let(:package_file) { Tempfile.new('foo').path }
    
        def append_to_file(filename, contents)
            File.open(filename, 'a+') do |f|
                f.write(contents)
                f.flush
            end
        end
    
        describe '#create' do
            it "raises an error with no filenames" do
                expect {
                    cli.create
                }.to raise_error(Thor::Error, "No filenames provided for create")
            end
    
            it "creates a package with a filename" do
                append_to_file(package_file, "
                    package {
                        name 'foo'
                        version '0.0.1'
                        type 'dir'
                    }
                ")
    
                cli.create(package_file)
            end
        end
    end
    

    We create a tempfile and store its filename in the package_file 'let' variable. That's just an empty file, though. We then want to put some stuff in it, so we create the append_to_file helper method. (This highlights something important - we can add methods as needed to our tests.) Then, we use it to fill the file with stuff and pass the filename to Packager::CLI#create.

    Note: We have to flush to disk to ensure that when we read from the file, the contents are actually in the file instead of the output buffer.

    We have our input filename (and its contents) figured out. What should we expect to happen? We could look at whether a package was created in the directory we invoked the CLI. That is what our user story requires. And, we will want to have that sort of integration test, making sure everything actually functions the way a user will expect it to function. But, that's not this test. (Not to mention the Executor doesn't actually invoke FPM yet!)

    These tests are meant to exercise each class in isolation - these are unit tests. Unit tests exercise the insides of one and only one class. If we were to see if a package is created, we're actually testing three classes - the CLI as well as the DSL and Executor classes. That's too many moving parts to quickly figure out what's gone wrong when something fails. By having tests which only focus on the internals of the CLI, DSL, and Executor classes by themselves as well as the integration of all the parts, we can easily see which part of our system is in error when tests start to fail. Is it the integration and CLI tests? Is it just the integration tests? Is it just the DSL? All of these scenarios immediately point out the culprit (or culprits).

    Given that the CLI is going to invoke the DSL and Executor, we want to catch the invocation of the #parse_dsl and #create_package methods. We don't want to actually do what those methods do as part of this test. Frankly, the CLI object doesn't care what those methods do. It only cares that the methods function, whatever that means.

    RSpec has a concept called stubbing. (This is part of a larger concept in testing called "mocking". RSpec provides mocks, doubles, and stubs, as do many other libraries like mocha.) For our purposes, what we can do is say "The next time method X is called on an instance of class Y, do <this> instead." Stubs (and mocks and doubles) will be uninstalled at the end of every spec, so there's no danger of it leaking or affecting anything else. With stubs, our happy-day test now looks like

            it "creates a package with a filename" do
                contents = "
                    package {
                        name 'foo'
                        version '0.0.1'
                        type 'dir'
                    }
                "
                append_to_file(package_file, contents)
    
                expect(Packager::DSL).to receive(:parse_dsl).with(contents).and_return(:stuff)
                expect_any_instance_of(Packager::Executor).to receive(:create_package).with(:stuff).and_return(true)
    
                cli.create(package_file)
            end
    
    
    This looks like an awful mouthful. And, it may seem odd to create expectations before we call cli.create. But, if you think about it for a second and read the two expectations out loud, it can make sense. All of our tests so far have been "We expect X is true." What we're now saying is "We expect X will be true." Which works out.

    As for the formatting, you can do the following:

          expect(Packager::DSL).to(
              receive(:parse_dsl).
              with(contents).
              and_return(:stuff)
          )
    

    Note the new parentheses for the .to() method and the periods at the end of one line (instead of the beginning of the next). These are required for how Ruby parses. You could also use backslashes, but I find those too ugly. This, to me, is a good compromise. Please feel free to experiment - the goal is to make it readable for you and your maintainers, not me or anyone else in the world.

    Our #create method now changes to

            def create(filename=nil)
                raise Thor::Error, "No filenames passed for create" unless filename
                items = Packager::DSL.parse_dsl(IO.read(filename))
                Packager::Executor.new.create_package(items)            
            end
    

    and we're done. Remember to do a git status to make sure you're adding all the new files we've been creating to source control.


    Tuesday, August 18, 2015

    Creating the Packager DSL - The executor

    1. Why use a DSL?
    2. Why create your own DSL?
    3. What makes a good DSL?
    4. Creating your own DSL - Parsing
    5. Creating your own DSL - Parsing (with Ruby)
    6. Creating the Packager DSL - Initial steps
    7. Creating the Packager DSL - First feature
    8. Creating the Packager DSL - The executor
    Our user story:
    I want to run a script, passing in the name of my DSL file. This should create an empty package by specifying the name, version, and package format. If any of them are missing, print an error message and stop. Otherwise, an empty package of the requested format should be created in the directory I am in.
    Our progress:

    • We can parse the DSL into a Struct. We can handle name, version, and package format. If any of them are missing, we raise an appropriate error message.
    We still need to:
    • Create a package from the parsed DSL
    • Provide a script that executes everything
    Since the script is the umbrella, creating the package is the next logical step. To create the package, we'll defer to FPM. FPM doesn't have a Ruby API - it is designed to be used by sysadmins and requires you to build a directory of what you want and invoke a script.

    The first seemingly-obvious approach is to embed the package-creation work directly in the parser, where we are currently creating the Package struct. That way, we do things right away instead of building some intermediate thing we're just going to throw away. Sounds like a great idea. And, it would be horrible.

    The best programs are written in reusable chunks that each do one and only one thing and do it well. This is true for operating system tools and especially true for the programs we write. In software, we measure this with coupling - the degree to which one unit is inextricably linked to other units. And, we want our units to be coupled as little as possible.

    Our one unit right now (the parser) handles understanding the DSL as a string. We have two other responsibilities - creating a package and handling the interaction with the command-line. Unless we have a good reason otherwise, let's treat each of those as a separate unit. (There are occasionally good reasons to couple things together, but it's best to know why the rule's there before you go about breaking it.)

    Now, we have two different units that, when taken together in the right order, will work together to take a DSL string and create a package. They will need to communicate one to the next so that the package-creation unit creates the package described by the DSL-parsing unit. We could come up with some crazy communication scheme, but the parser already produces something (the Package struct). That should be sufficient for now. When that changes, we can refactor with confidence because we will keep our 100% test coverage.

    Before anything else, we'll need to install FPM. So, add it to the gemspec and bundle install.
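
    For example, the gemspec line might look something like the following (the exact version constraint here is an assumption - use whatever the current FPM release is):
    s.add_dependency 'fpm', '~> 1.3'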

    Next, we need to write a test (always write the test first). The first part of the test (the input) is pretty easy to see - we want to create a Packager::Struct::Package with a name, version, and format set. Figuring out what the output should be is . . . a little more complicated. We don't want to test how FPM works - we can assume (barring counterexample) that FPM works exactly as advertised. But, at some point, we need to make sure that our usage of FPM is what we want it to be. So, we will need to test that the output FPM creates from our setup is what we want.

    The problem here is FPM delegates the actual construction of the package to the OS tools. So, it uses RedHat's tools to build RPMs, Debian's tools to build DEBs, etc. More importantly, the tools to parse those package formats only exist on those systems. Luckily for this test, we can ignore this problem. The command for creating an empty package is extremely simple - you can test it easily yourself on the commandline. But, we need to keep it in the back of our minds for later - sooner rather than later.
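
    For reference, the sort of command we are aiming to generate (and which you can try by hand, assuming fpm is installed) looks like:
    fpm --name foo --version 0.0.1 -s empty -t dir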

    Since we're testing the executor (vs. the parser), we should put our test in a second spec file. The test would look something like:

    describe Packager::Executor do
        it "creates an empty package" do
            executor = Packager::Executor.new(:dry_run => true)
            input = Packager::DSL::Package.new('foo', '0.0.1', 'unknown')
            executor.create_package(input)
            expect(executor.command).to eq([
                'fpm',
                '--name', input.name,
                '--version', input.version,
                '-s', 'empty',
                '-t', input.package_format,
            ])
        end
    end
    

    A few things here:
    1. Unlike Packager::DSL where we run with class methods (because of how DSL::Maker works), we're creating an instance of the Packager::Executor class to work with. This allows us to set some configuration to control how the executor will function without affecting global state.
    2. FPM does not support the "unknown" package format. We're testing that we get out what we put in.
    3. The FPM command already looks hard to work with. Arrays are positional, but the options to FPM don't have to be. We will want to change that to make it more testable (see the sketch after this list).
    4. Creating that Packager::DSL::Package object is going to become very confusing very quickly for the same reasons as the FPM command - it's an Array. Positional arguments become hard to work with over time.
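
    One possible direction for point 3 (a sketch of an idea, not code we are writing yet) is to build the options as a Hash and only flatten it into an argument list at the last moment, so tests can compare by key instead of by position:
    def fpm_options(item)
        {
            '--name'    => item.name,
            '--version' => item.version,
            '-s'        => 'empty',
            '-t'        => item.package_format,
        }
    end

    # The executor would then flatten the Hash into the argument list:
    # ['fpm'] + fpm_options(item).to_a.flatten
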
    You should run the spec to make sure it fails. The Packager::Executor code in lib/packager/executor.rb would look like:

    class Packager::Executor
        attr_accessor :dry_run, :command

        def initialize(opts)
            @dry_run = opts[:dry_run] ? true : false
            @command = [] # Always initialize your attributes
        end

        def create_package(item)
            # Store the command on the instance so the test can inspect it
            @command = [
                'fpm',
                '--name', item.name,
                '--version', item.version,
                '-s', 'empty',
                '-t', item.package_format,
            ]

            return true
        end
    end

    Make sure to add the appropriate require statement in either lib/packager.rb or spec/spec_helper.rb and rake spec should pass. Add and commit everything, then push. We're not done, but we're one big step closer.
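
    If you put it in lib/packager.rb, that require would be:
    require 'packager/executor'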


    Monday, August 17, 2015

    Creating the Packager DSL - Initial steps

    1. Why use a DSL?
    2. Why create your own DSL?
    3. What makes a good DSL?
    4. Creating your own DSL - Parsing
    5. Creating your own DSL - Parsing (with Ruby)
    6. Creating the Packager DSL - Initial steps
    7. Creating the Packager DSL - First feature
    The Packager DSL is, first and foremost, a software development project. In order for a software project to function properly, there's a lot of scaffolding that needs to be set up. This post is about that. (Please feel free to skip it if you are comfortable setting up a Ruby project.) When this post is completed, we will have a full walking skeleton. We can then add each new feature quickly. We'll stop when we're just about to add the first DSL-ish feature.

    If you're creating your own DSL with your own project name, just substitute it everywhere you see packager and Packager accordingly.

    Repository choice

    Distributed version control systems (aka, DVCS) provide significant capabilities in terms of managing developer contributions over previous centralized systems (such as Subversion and CVS). Namely, Git (and Mercurial or Bazaar) have extremely lightweight branches and very good merging strategies. There's a ton of good work on the net about choosing one of these systems and how to use it.

    I use Git. For my opensource work, I use GitHub. If I'm within an enterprise, Gitlab or Stash are excellent repository management solutions. The minimum scaffolding for any project in Git is:
    • .gitignore file
      • Lists files that are never to be checked into the repository - usually intermediate files for various developer purposes (coverage, build artifacts, etc). The contents of this file are usually language-specific.
    • .gitattributes file
      • Ensure line-endings are consistent between Windows, Linux, and OSX.
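
    As a rough sketch (these are illustrative entries for a Ruby project, not a canonical list), the two files might contain:
    # .gitignore
    /coverage/
    /pkg/
    *.gem
    Gemfile.lock

    # .gitattributes
    * text=auto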

    Repository scaffolding

    The minimum scaffolding for a Ruby project is:
    • lib/ directory
      • Where our library code lives. Leave this empty (for now).
    • spec/ directory
      • Where our tests (or specifications) live. Leave this empty (for now).
    • bin/ directory
      • Where our executables live. Leave this empty (for now).
    • .rspec file
      • Default options for rspec (Ruby's specification runner).
    • Rakefile file
      • Ruby's version of Makefile
    • packager.gemspec file
      • How to package our project.
    • Gemfile file
      • Dependency management and repository location. I write mine so that it will delegate to the gemspec file. I hate repeating myself, especially when I don't have to.
    You are welcome to copy any or all of these files from the ruby-packager project and edit them accordingly or create them yourself. There are ridiculous amounts of documentation on this process and each of the tools (Rake, Rspec, Gem, and Bundler) all over the web. Ruby, more so than most software communities, has made a virtue of clear, clean, and comprehensive documentation.
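
    For example, a Gemfile that delegates everything to the gemspec can be as short as:
    source 'https://rubygems.org'
    gemspec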

    You also need to have installed Bundler using gem install bundler. Bundler will then make sure your dependencies are always up-to-date with the simple bundle install command. I tend to install RVM first, just to make sure I can upgrade (or downgrade!) my version of Ruby as needed.

    At this point, go ahead and add all of these files to your checkout and commit. I like using the message "Create initial commit". (If you use git, the empty lib/ and spec/ directories won't be added. Don't worry - they will be added before we're finished with Day 1.)

    Note: If your DSL is for use within an enterprise, please make sure that you know where to download and install your gems from. Depending on the enterprise, you probably already have an internal gems repository and a standard process for requesting new gems and new versions. You can set the location of that internal gems repository within the Gemfile instead of https://rubygems.org. If you have to set a proxy in order to reach rubygems.org, please talk to your IT administrator first.

    Documentation

    It may seem odd to consider documentation before writing anything to document (and we will revisit this when we have an actual API to document), but there are two documentation files we should create right away:
    • README.md
      • This is the file GitHub (and most other repository systems) will display when someone goes to the repository. This should describe your project and give a basic overview of how to use it.
    • Changes / Changelog
      • Development is the act of changing software. Let's document what we've changed and when. I prefer the name Changes, but several other names will work.
    I prefer to use Markdown (specifically, GitHub-flavored Markdown) in my README files. Hence, the .md suffix. Please use whatever markup language you prefer and which will work within your repository system.

    Your changelog should be a reverse-sorted (newest-first) list of all the changes grouped by version. The goal is to provide a way for users to determine when something was changed and why. So, wherever possible, you should provide a link to the issue or task that describes what changed in detail. The best changelog is just a list of versions with issue descriptions linking to the issue.
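
    A sketch of what an early entry might look like (the version number and issue reference here are made up for illustration):
    ## 0.0.1 (unreleased)

    * Create initial project scaffolding (#1)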

    Finally, git add . && git commit -am "Create documentation stubs"

    SDLC

    Before we write any software, let's talk quickly about how we're going to get the changes into the software and out the door. While the first set of changes are going to be driven by what we want, we definitely want a place for our users to open requests. GitHub provides a very simple issue tracker with every project, so packager will use that. If you're within an enterprise, you probably already have a bug tracker to use. If you don't, I recommend Jira (and the rest of the Atlassian suite) if you're a larger organization and Trac if you're not.

    Either way, every change to the project should have an associated issue in your issue tracker. That way, there's a description of why the change was made associated with the blob that is the change.

    Speaking of changes, use pull requests wherever possible. Pull requests do two very important things:
    1. They promote code reviews, the single strongest guard against bugs.
    2. They make it easy to have a blob that is the change for an issue.
    I will confess, however, that when I'm working by myself on a project, I tend to commit to master without creating issues. But, I like knowing that the process can be easily changed to accommodate more than one developer.

    Release process

    Our release process will be pretty simple - we'll use the gem format. We can add a couple lines to our Rakefile and the rubygems-tasks gem will handle all the work for us.
    require 'rubygems/tasks'
    Gem::Tasks.new
    We will also need to add it to our gemspec
    s.add_development_dependency 'rubygems-tasks', '~> 0'
    and bundle install. That provides us with the release Rake task (among others). Please read the documentation for what else it provides and what it does.

    Note: If your DSL is for use within an enterprise, please make sure that you know where and how you are going to release your gem. This will likely be the place you will download your dependencies from, but not always. Please talk to your IT administrator first.

    First passing spec

    I am a strong proponent of TDD. Since we're starting from scratch, let's begin as we mean to continue. So, first, we add a spec/spec_helper.rb file that looks like:
    require 'simplecov'
    SimpleCov.configure do
        add_filter '/spec/'
        add_filter '/vendor/'
        minimum_coverage 100
        refuse_coverage_drop
    end
    SimpleCov.start

    require 'packager'
    Then, add a spec/first_spec.rb file with:
    describe "DSL" do
        it "can compile"
            expect(true).to be(true)
        end
    end
    (This is a throwaway test, and that's okay. We will get rid of it when we have something better to test.)

    rake spec fails right away because simplecov isn't installed. You'll need to add simplecov to the packager.gemspec as a development dependency and run bundle install to install it.
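
    The gemspec line would look something like this (the loose version constraint mirrors the rubygems-tasks line above; pin it if you prefer):
    s.add_development_dependency 'simplecov', '~> 0'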

    We haven't created lib/packager.rb yet, so when we execute rake spec, it will fail (compile error). So, let's create lib/packager.rb with:
    class Packager
    end
    rake spec now passes. git add . && git commit -am "Create initial test" to mark the success.

    We also have 100% test coverage from the start. While this isn't a silver bullet that cures cancer, it does tell us when we're writing code that may not always work like we think it will. By forcing every line of code to be executed at least once by something in our test suite (which is what 100% coverage guarantees), we are forced to write at least one test to reach every nook and cranny. Hopefully, we'll be adult enough to recognize when there's a use-case that isn't tested, even if the code under question is executed.

    Continuous Integration

    Finally, before we start adding actual features, let's end with continuous integration. If you're on GitHub, then use Travis. Just copy the .travis.yml from ruby-packager and it will work properly (JRuby needs some special handling for code coverage). Travis will even run the test suite on pull requests and note the success (or lack thereof) right there. In order to enable Travis to run against your GitHub repository, you will need to register with Travis and point-and-click on its site to set things up. There is plenty of documentation (including StackOverflow Q&A) on the process.
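
    If you would rather start from scratch than copy, a minimal .travis.yml for an MRI-only project looks something like this (ruby-packager's copy also handles the JRuby coverage quirks, which this sketch omits):
    language: ruby
    rvm:
      - 2.2
    script: bundle exec rake spec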

    Otherwise, talk to your devops or automation team in order to set up integration with Jenkins, Bamboo, or whatever you're using in your enterprise. Whatever you do, it should be set up to run the whole test suite on every single push on every single branch. More importantly, it should act as a veto for pull requests (if that's supported by your tooling).

    Summary

    It may not seem like we've actually done anything, but we've done quite a lot. A development project isn't about writing code (though that's a necessary part). It's about managing requests for change and delivering them in a sane and predictable fashion. Everything we've done here is necessary to support exactly that.

    Friday, August 14, 2015

    Creating the Packager DSL - First feature

    1. Why use a DSL?
    2. Why create your own DSL?
    3. What makes a good DSL?
    4. Creating your own DSL - Parsing
    5. Creating your own DSL - Parsing (with Ruby)
    6. Creating the Packager DSL - Initial steps
    7. Creating the Packager DSL - First feature
    In our last segment, we did everything necessary to set up the Packager DSL repository, but it doesn't actually do anything. Let's make a start on fixing that.

    User Story

    First, we need a description of what we're going to do. This, in Agile, would be the User Story. Well, first-er, we need to decide what we're going to do. For that, let's look at why our project exists at all.

    The Packager DSL's purpose is to provide an easy way to describe what should go into a package, primarily focusing on OS packages (RPM, DEB, MSI, etc). Eventually, it should be usable to describe (nearly) every type and construction of OS package that's reasonable to expect to use. So, any package you might install on your RedHat, Ubuntu, or Windows server should be describable with this DSL. We'll defer the actual package construction to fpm. But, all the work of collecting (and validating!) the files, figuring out versions, and constructing the invocation of fpm - that's what we'll do.

    For our first feature, let's build an empty package. And that's the first stab at a user story.
    I want to create an empty package.
    In talking with our client (yes, I talk to myself a lot), the first question I tend to ask is "what happens if I don't receive X?" and I don't treat myself in client-mode any differently. So, what happens if we get something like package {}? That seems a bit off. Package filenames are usually constructed as:
    <name>-<version>-<other stuff>.<extension>
    Name and version seem to be key, so let's amend the story to require name and version.
    I want to create an empty package by specifying the name and version. If either is missing, stop and inform the user.
    Which immediately raises the question of "How do we stop and inform the user?" Which then leads to the question of how we're even running the DSL. The easiest thing to do is run a script, so let's do that. If we write things properly, then we can easily change the UI from a command-line script to something fancier.
    I want to invoke a script, passing in the name of my DSL file. This should create an empty package by specifying the name and version. If either is missing, print an error message and stop. Otherwise, a package should be created in the directory I am in.
    Hmm. "a package should be created" - what kind of package? RPM? DEB? Something else?
    I want to run a script, passing in the name of my DSL file. This should create an empty package by specifying the name, version, and package format. If any of them are missing, print an error message and stop. Otherwise, an empty package of the requested format should be created in the directory I am in.

    First test

    The first thing we need to do is write a failing test. In TDD, this is known as Red-Green-Refactor (all three linked posts are good reading). We want to write the smallest test that can fail, then the smallest code that can pass, then take the opportunity to clean anything up (if necessary). A couple sidebars are important here.

    Failing tests

    Tests are worthless if they only pass. Tests exist to fail. If they don't fail, then they cannot warn you when something isn't working. As a result, the first thing we want to test is the test itself, otherwise we cannot trust it. When you write a test, it's really important to see the test fail first.

    Also, in this process, each test is the next step in the journey of creating the product. If we've followed TDD, then every feature and each line of code is already tested. We want to test something that hasn't been written yet - it's the next step. We're describing what should happen once we've taken the step. If that doesn't fail, then we have no confidence that we've properly described where we're planning to go.

    If you do not see the test fail, then several things are going wrong, probably a combination of:

    • Your test is worthless because it won't fail when it should in the future.
    • You don't understand the system well enough to push it beyond its edges.
    • You haven't constructed your test infrastructure well enough to exercise the problem at hand.
    • You haven't described the problem at hand properly.

    Immediate Refactoring

    Refactoring, as a whole, becomes much simpler with a robust and comprehensive test suite. Refactoring immediately, though, is less obviously beneficial. It's great to have the opportunity to rethink your implementation right after you have it working. But, the biggest gain in my experience is that by rethinking your implementation, you end up thinking of more edge cases. Each of these becomes another test. By continuously refactoring, you keep driving the development process forward.

    The test

    The first test we want to write is the simplest thing that we could pass in. That would be package {}.
    describe Packager::DSL do
        it "fails on an empty package" do
            expect {
                Packager::DSL.parse_dsl("package {}")
            }.to raise_error("Every package must have a name")
        end
    end
    
    When we run this with rake spec, it fails with a compilation error because the Packager::DSL class doesn't exist. Which makes sense - we're just now taking the first step into Packager-dsl-istan. The test tells us what the first steps should be (and in what order):
    1. Create the class
    2. Subclass it from DSL::Maker (which provides parse_dsl)
    3. Create an entrypoint called "package"
    4. Add a validation that always fails with "Every package must have a name"
    Yes - always fails. We don't have a test for the success path yet, so the simplest code that could possibly work (while still driving us forward) is for the validation to always raise an error. We'll fix that as soon as we know how to make it succeed.

    To make this work, we need to create a lib/packager/dsl.rb file with
    require 'dsl/maker'
    
    class Packager
        class DSL < DSL::Maker
            add_entrypoint('package') do
            end
            add_validation('package') do
                return "Every package must have a name"
            end
        end
    end
    
    rake spec still fails stating it cannot find Packager::DSL. Huh?! Ahh ... we forgot to load it in our spec_helper. We can either add a require statement in spec/spec_helper.rb or we can add it in lib/packager.rb. Either one is fine - you can always move it later if you find you need to.

    Now, rake spec gives a different error - it cannot find DSL::Maker. We're not creating that - we're going to use it from RubyGems. So, let's add it to our gemspec file. (Remember - our Gemfile is delegating to our gemspec.) We want to make sure we're using at least 0.1.0 (the latest version as of this writing).
    s.add_dependency 'dsl_maker', '~> 0.1', '>= 0.1.0'
    After a quick bundle install, rake spec now runs cleanly. We also want to delete the throwaway spec file we created when we added our scaffolding. So, git add . && git rm spec/first_spec.rb. We've removed a spec, so let's make sure we still have 100% coverage with rake spec. Once confirmed, git commit -m "Create first spec with a real DSL".

    Writing for failure first

    In the same way that we want to write the test first and see it fail, we want to write for failure (or sad-day) situations first. Writing for success (or happy-day) is actually pretty easy - it's the way most people think of the world. It's the way our user story was written and what most people are going to be looking for when they evaluate your work. But, if you think back to the software you've used, the best software was the one that caught and managed the errors the best. There is nothing more frustrating than a program that blithely lets you get into a huge mess that it should've (at least) warned you about.

    So, the best way to write software is to try and figure out all the different ways a person can screw up and plug those holes. You'll miss some - the universe is always making a better idiot. But, you'll catch most of them, and that's what matters.

    Adding the attributes

    Let's add a success. Let's allow a DSL that has a package with just a name succeed. We're going to have to amend this success when we add version and type, but the neat thing about code is that it's so moldy. (Err ... changeable.)

    What does it mean for a DSL to succeed? The immediate straight-line to solve the user story would be to invoke FPM directly. While tempting, it's too big of a step to take. Right now, we're focused on parsing the DSL. So, let's make sure we're doing that right before worrying about integration with another library. For now, let's just create a data structure that represents what we put into the DSL. Ruby provides a very neat thing called a Struct which allows us to define a limited data structure without too much effort. Let's add another specification into spec/empty_package_spec.rb
        it "succeeds with a name" do
            items = Packager::DSL.execute_dsl {
                package {
                    name "foo"
                }
            }
    
            expect(items[0]).to be_instance_of(Packager::DSL::Package)
            expect(items[0].name).to eq("foo")
        end
    
    (I prefer using execute_dsl() because it allows Ruby to make sure my package definitions compile. You can use parse_dsl() instead.) Make sure it fails, then change lib/packager/dsl.rb to be
    class Packager
        class DSL < DSL::Maker
            Package = Struct.new(:name)
    
            add_entrypoint('package', {
                :name => String,
            }) do
                Package.new(name)
            end
            add_validation('package') do |item|
                return "Every package must have a name" unless item.name
            end
        end
    end
    
    That should pass both the new test and the old test. Commit everything with a good message. Adding the version and type attributes should be a similar sequence of activities. Make sure to exercise the discipline of adding each one separately, ensuring that you have a failing test, then minimal code, then passing tests.

    You may have to amend tests to make them pass with the new code. That's expected as the requirements change. Just make sure that the tests you have to amend are ones you either expected to amend or which make sense to amend. Don't just blindly "make the tests pass". That makes the test suite worthless.
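
    For reference, here is roughly where lib/packager/dsl.rb ends up once version and type are added. This is a sketch under a few assumptions - that each attribute is exposed inside the block the same way name is, that the struct's third member is named package_format to match the executor's usage, and that you pick your own wording for the new error messages:
    require 'dsl/maker'

    class Packager
        class DSL < DSL::Maker
            Package = Struct.new(:name, :version, :package_format)

            add_entrypoint('package', {
                :name    => String,
                :version => String,
                :type    => String,
            }) do
                Package.new(name, version, type)
            end
            add_validation('package') do |item|
                return "Every package must have a name" unless item.name
                return "Every package must have a version" unless item.version
                return "Every package must have a type" unless item.package_format
            end
        end
    end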

    Summary

    We haven't finished the user story, but we've gotten to the point where we can parse a DSL with validations and have a predictable data structure at the end of it. Looking down the road, we will take that data structure and invoke fpm with it. Then, we'll see some really neat benefits to having the separation between parsing and execution.