Friday, August 14, 2015

Creating the Packager DSL - First feature

  1. Why use a DSL?
  2. Why create your own DSL?
  3. What makes a good DSL?
  4. Creating your own DSL - Parsing
  5. Creating your own DSL - Parsing (with Ruby)
  6. Creating the Packager DSL - Initial steps
  7. Creating the Packager DSL - First feature
In our last segment, we did everything necessary to set up the Packager DSL repository, but it doesn't actually do anything. Let's make a start on fixing that.

User Story

First, we need a description of what we're going to do. This, in Agile, would be the User Story. Well, first-er, we need to decide what we're going to do. For that, let's look at why our project exists at all.

The Packager DSL's purpose is to provide an easy way to describe what should go into a package, primarily focusing on OS packages (RPM, DEB, MSI, etc.). Eventually, it should be usable to describe (nearly) every type and construction of OS package that you could reasonably expect to use. So, any package you might install on your RedHat, Ubuntu, or Windows server should be describable with this DSL. We'll defer the actual package construction to fpm. But all the work of collecting (and validating!) the files, figuring out versions, and constructing the invocation of fpm - that's what we'll do.

For our first feature, let's build an empty package. And that's the first stab at a user story.
I want to create an empty package.
In talking with our client (yes, I talk to myself a lot), the first question I tend to ask is "what happens if I don't receive X?" and I don't treat myself in client-mode any differently. So, what happens if we get something like package {}? That seems a bit off. Package filenames are usually constructed as:
<name>-<version>-<other stuff>.<extension>
Name and version seem to be key, so let's amend the story to require name and version.
I want to create an empty package by specifying the name and version. If either is missing, stop and inform the user.
Which immediately raises the question of "How do we stop and inform the user?" Which then leads to the question of how we're even running the DSL. The easiest thing to do is run a script, so let's do that. If we write things properly, then we can easily change the UI from a command-line script to something fancier.
I want to invoke a script, passing in the name of my DSL file. This should create an empty package by specifying the name and version. If either is missing, print an error message and stop. Otherwise, a package should be created in the directory I am in.
Hmm. "a package should be created" - what kind of package? RPM? DEB? Something else?
I want to run a script, passing in the name of my DSL file. This should create an empty package by specifying the name, version, and package format. If any of them are missing, print an error message and stop. Otherwise, an empty package of the requested format should be created in the directory I am in.
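To make that concrete, here's a rough sketch of what a DSL file and its invocation might eventually look like. Nothing below exists yet - the mypackage.pkg filename, the packager script name, and the exact attribute spellings are assumptions at this point:

# mypackage.pkg - a hypothetical DSL file (illustrative only)
package {
    name "foo"
    version "1.2.3"
    type "rpm"
}

Invoking it might then be as simple as running packager mypackage.pkg from the directory where we want the package to end up.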

First test

The first thing we need to do is write a failing test. In TDD, this is known as Red-Green-Refactor (all three linked posts are good reading). We want to write the smallest test that can fail, then the smallest code that can pass, then take the opportunity to clean anything up (if necessary). A couple sidebars are important here.

Failing tests

Tests are worthless if they only pass. Tests exist to fail. If they don't fail, then they cannot warn you when something isn't working. As a result, the first thing we want to test is the test itself; otherwise, we cannot trust it. When you write a test, it's really important to see the test fail first.

Also, in this process, each test is the next step in the journey of creating the product. If we've followed TDD, then every feature and every line of code is already tested. We want to test something that hasn't been written yet - it's the next step. We're describing what should happen once we've taken that step. If that doesn't fail, then we have no confidence that we've properly described where we're planning to go.

If you do not see the test fail, then something has gone wrong - probably a combination of:

  • Your test is worthless because it won't fail when it should in the future.
  • You don't understand the system well enough to push it beyond its edges.
  • You haven't constructed your test infrastructure well enough to exercise the problem at hand.
  • You haven't described the problem at hand properly.

Immediate Refactoring

Refactoring, as a whole, becomes much simpler with a robust and comprehensive test suite. Refactoring immediately, though, is less obviously beneficial. It's great to have the opportunity to rethink your implementation right after you have it working. But the biggest gain, in my experience, is that by rethinking your implementation, you end up thinking of more edge cases. Each of these becomes another test. By continuously refactoring, you keep driving the development process forward.

The test

The first test we want to write is the simplest thing that we could pass in. That would be package {}.
describe Packager::DSL do
    it "fails on an empty package" do
        expect {
            Packager::DSL.parse_dsl("package {}")
        }.to raise_error("Every package must have a name")
    end
end
When we run this with rake spec, it fails with a compilation error because the Packager::DSL class doesn't exist. Which makes sense - we're just now taking the first step into Packager-dsl-istan. The test tells us what the first steps should be (and in what order):
  1. Create the class
  2. Subclass it from DSL::Maker (which provides parse_dsl)
  3. Create an entrypoint called "package"
  4. Add a validation that always fails with "Every package must have a name"
Yes - always fails. We don't have a test for the success path yet, so the simplest code that could possibly work (while still driving us forward) is for the validation to always raise an error. We'll fix that as soon as we know how to make it succeed.

To make this work, we need to create a lib/packager/dsl.rb file with
require 'dsl/maker'

class Packager
    class DSL < DSL::Maker
        add_entrypoint('package') do
        end
        add_validation('package') do
            return "Every package must have a name"
        end
    end
end
rake spec still fails stating it cannot find Packager::DSL. Huh?! Ahh ... we forgot to load it in our spec_helper. We can either add a require statement in spec/spec_helper.rb or we can add it in lib/packager.rb. Either one is fine - you can always move it later if you find you need to.
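For concreteness, the require is a one-liner; here's a sketch of it in the spec helper (the path mirrors lib/packager/dsl.rb, but treat the exact wiring as an assumption about this project's layout):

# spec/spec_helper.rb (or lib/packager.rb - either location works)
require 'packager/dsl'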

Now, rake spec gives a different error - it cannot find DSL::Maker. We're not creating that - we're going to use it from RubyGems. So, let's add it to our gemspec file. (Remember - our Gemfile is delegating to our gemspec.) We want to make sure we're using at least 0.1.0 (the latest version as of this writing).
s.add_dependency 'dsl_maker', '~> 0.1', '>= 0.1.0'
After a quick bundle install, rake spec now runs cleanly. We also want to delete the other spec file we created when we added our scaffolding. So, git add . && git rm spec/first_spec.rb. We've removed a spec, so let's make sure we still have 100% coverage with rake spec. Once confirmed, git commit -m "Create first spec with a real DSL".
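Collected into one place, the commands for this step look something like this (nothing new here - just the sequence described above):

$ bundle install
$ rake spec                    # now runs cleanly
$ git add .
$ git rm spec/first_spec.rb    # drop the scaffolding spec
$ rake spec                    # confirm we still have 100% coverage
$ git commit -m "Create first spec with a real DSL"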

Writing for failure first

In the same way that we want to write the test first and see it fail, we want to write for failure (or sad-day) situations first. Writing for success (or happy-day) is actually pretty easy - it's the way most people think of the world. It's the way our user story was written and what most people are going to be looking for when they evaluate your work. But, if you think back to the software you've used, the best software was the one that caught and managed the errors the best. There is nothing more frustrating than a program that blithely lets you get into a huge mess that it should've (at least) warned you about.

So, the best way to write software is to try and figure out all the different ways a person can screw up and plug those holes. You'll miss some - the universe is always making a better idiot. But, you'll catch most of them, and that's what matters.

Adding the attributes

Let's add a success. Let's allow a DSL that has a package with just a name to succeed. We're going to have to amend this success when we add version and type, but the neat thing about code is that it's so moldy. (Err ... changeable.)

What does it mean for a DSL to succeed? The immediate straight line to solving the user story would be to invoke FPM directly. While tempting, that's too big a step to take. Right now, we're focused on parsing the DSL. So, let's make sure we're doing that right before worrying about integration with another library. For now, let's just create a data structure that represents what we put into the DSL. Ruby provides a very neat thing called Struct, which allows us to define a limited data structure without too much effort - there's a quick illustration below, followed by the next specification for spec/empty_package_spec.rb.
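First, the Struct aside. This snippet isn't part of the Packager code at all - it's just a minimal sketch of the behavior we're about to lean on:

Point = Struct.new(:x, :y)   # a tiny class with two named attributes
point = Point.new(1, 2)
point.x   # => 1
point.y   # => 2

And now the new spec: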
    it "succeeds with a name" do
        items = Packager::DSL.execute_dsl {
            package {
                name "foo"
            }
        }

        expect(items[0]).to be_instance_of(Packager::DSL::Package)
        expect(items[0].name).to eq("foo")
    end
(I prefer using execute_dsl() because it allows Ruby to make sure my package definitions compile. You can use parse_dsl() instead.) Make sure it fails, then change lib/packager/dsl.rb to be
class Packager
    class DSL < DSL::Maker
        Package = Struct.new(:name)

        add_entrypoint('package', {
            :name => String,
        }) do
            Package.new(name)
        end
        add_validation('package') do |item|
            return "Every package must have a name" unless item.name
        end
    end
end
That should pass both the new test and the old test. Commit everything with a good message. Adding the version and type attributes should be a similar sequence of activities. Make sure to exercise the discipline of adding each one separately, ensuring that you have a failing test, then minimal code, then passing tests.
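To give a sense of where this lands, here's a rough sketch of what lib/packager/dsl.rb might look like once version and type are in place. The extra validation messages are my guesses, not the post's final code - drive them out with your own failing tests first:

class Packager
    class DSL < DSL::Maker
        Package = Struct.new(:name, :version, :type)

        add_entrypoint('package', {
            :name    => String,
            :version => String,
            :type    => String,
        }) do
            Package.new(name, version, type)
        end
        add_validation('package') do |item|
            return "Every package must have a name" unless item.name
            return "Every package must have a version" unless item.version
            return "Every package must have a type" unless item.type
        end
    end
end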

You may have to amend tests to make them pass with the new code. That's expected as the requirements change. Just make sure that the tests you have to amend are ones you either expected to amend or which make sense to amend. Don't just blindly "make the tests pass". That makes the test suite worthless.

Summary

We haven't finished the user story, but we've gotten to the point where we can parse a DSL with validations and have a predictable data structure at the end of it. Looking down the road, we will take that data structure and invoke fpm with it. Then, we'll see some really neat benefits to having the separation between parsing and execution.
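As a teaser of where that's headed, the eventual call for our empty package might look something like the line below. This is purely illustrative - the real invocation will be worked out (and tested) in a later post, and I'm assuming fpm's "empty" source type is the right fit here:

fpm -s empty -t rpm -n foo -v 1.2.3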
