Monday, August 3, 2015

Why create your own DSL?

  1. Why use a DSL?
  2. Why create your own DSL?
DSLs are great. The problem, though, is that there are many domains that have been too small for someone to write a DSL for. SQL and CSS exist because millions of developers need to access relational data and style web pages. There are dozens of domains that could use a DSL, from packaging to orchestration to HTTP route management to cross-product configuration management - to say nothing of all the domains that are specific to your organization, from the corporation down to the team.

Just because there isn't a DSL doesn't mean you aren't programming in that domain. For example, you may have to manage a heterogeneous environment of Apache and Nginx web servers for various legacy reasons. They may have server-specific configurations, but they also have to share a common set of configuration. Someone has to ensure that a change to the Nginx configurations is also made to the Apache configurations, that the change is translated properly, and that both changes go out in the same deployment.

No-one will ever create a publicly-available generic DSL for managing web server configurations. There just isn't a large enough population who have to maintain a heterogeneous web server environment. And even if there were such a DSL, it wouldn't be quite as useful for your needs. It would be generic - everything available as the lowest common denominator. That reduces the busat of the DSL for your purposes.

Busat ("expressive terseness") is not an objective measure equivalent for all people in all places. Expressiveness is directly related to the receiver's ability to understand what was communicated. If you can limit your listeners to only those who agree on specific terms, then you can be terser while remaining as expressive. If you have a jargon, then a DSL can take advantage of that.

Compare the following examples:

server {
    hostname "host1.domain.com"
    ssl {
        key_file "/etc/ssl/key_file"
        ca_file "/etc/ssl/ca_file"
        pem_file "/etc/ssl/pem_file"
    }
    ....
}

purpose :internal_web {
    ssl_root_directory "/etc/ssl"
    domain "domain.com"
    # Other internal_web things
    ....
}

server "host1" {
    purpose :internal_web
    ....
}

If your audience can all agree on what "internal_web" means, then the second form is strictly better. It describes exactly what you're doing and why you're doing it, and changes become much easier to vet for correctness. But, unless you're willing to write your own DSL, you would never be able to collapse that boilerplate in the configuration.
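
Collapsing that boilerplate doesn't necessarily require writing a parser. Below is a minimal sketch of how such a DSL could be embedded in Ruby. Every name in it (Purpose, ServerConfig, ssl_root_directory, and so on) is hypothetical, invented for illustration rather than taken from any real library or from the examples' actual implementation.

# A minimal, hypothetical embedded DSL in Ruby - not a real library,
# just a sketch of the technique (blocks + instance_eval + method_missing).

class Purpose
  attr_reader :settings

  def initialize(&block)
    @settings = {}
    instance_eval(&block)
  end

  # Any bare word used inside a purpose block becomes a named setting.
  def method_missing(name, value = nil)
    @settings[name] = value
  end
end

class ServerConfig
  PURPOSES = {}

  def self.purpose(name, &block)
    PURPOSES[name] = Purpose.new(&block)
  end

  def self.server(hostname, &block)
    new(hostname).tap { |config| config.instance_eval(&block) }
  end

  attr_reader :hostname, :settings

  def initialize(hostname)
    @hostname = hostname
    @settings = {}
  end

  # Pull in every setting the named purpose defines.
  def purpose(name)
    @settings.merge!(PURPOSES.fetch(name).settings)
  end
end

# Usage, mirroring the example above:
ServerConfig.purpose(:internal_web) do
  ssl_root_directory "/etc/ssl"
  domain "domain.com"
end

host1 = ServerConfig.server("host1") { purpose :internal_web }
p host1.settings   # => {:ssl_root_directory=>"/etc/ssl", :domain=>"domain.com"}

The point isn't this particular implementation - it's that a few dozen lines of a host language you already use can buy you the jargon-level terseness described above.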

It's highly unlikely you specifically have to maintain both Apache and Nginx configurations to do the same thing. But, it's guaranteed that your organization or team has processes unique to it. Some special snowflake way of looking at something in the development process. Something that's just really annoying to manage in the standard language. Some good places to look are:
  • Packaging and orchestration (or most other devops/operations activities)
  • Configuration file generation
    • web servers
    • monitoring
    • datastores
  • Configuration value management across environments
  • Anything that has to interact with multiple different systems
  • Anything repetitive (such as CSS, for which there is Less)

Wednesday, July 29, 2015

Why use a DSL?

  1. Why use a DSL?
  2. Why create your own DSL?
Languages exist to communicate ideas. Most of us are familiar with generic human languages like English, Swahili, Japanese - even created languages like Esperanto and Lojban. These are able to express any idea humans can possibly come up with in a way other humans can understand. In programming terms, all human languages are Turing-complete.

Sometimes, though, some ideas are easier to express in one language than in another. Supposedly, Eskimos have 50+ words for snow and the Sami have nearly 1000 words for reindeer. Given how important those topics are in those cultures, that makes a lot of sense. People working together in those domains can communicate more quickly because the same effort communicates more concepts. For example, "busat" (in Sami) translates to "male reindeer with a single, very large testicle". I have no idea how often this occurs, but that's probably a unique identifier in most reindeer herds.

We see this as well in programming languages. Programmers were writing object-oriented programs in C for years before Bjarne Stroustrup created C++. It's easier to write OO programs in C++ than in C. In C, you have to be extremely disciplined to make sure that you're adhering to public vs. private interfaces, that you invoke the "methods" properly (passing the invocant as the first parameter, passing the correct invocant to the right method, etc), and lots of other bookkeeping. It's just exhausting to keep track of all of that, especially across a large codebase. In C++, the language not only reduces the bookkeeping you have to do, but it also reduces the number of characters you have to read (and type, but read is more important).

Like Sami, some programming languages trade expressiveness in one domain for expressiveness in another. I suspect most of the words for computing and the internet in Sami are borrowed from English (as they are in many other languages) - all the usable words are already taken for other purposes. "Scripting" languages, like Ruby, Python, and JavaScript, make a similar set of tradeoffs. They give up the ability to write programs that execute extremely quickly (as programs written in C do) in order to make it easy for humans to write the programs. Programs written in these languages are often much shorter (10-100x shorter) than the equivalent in C or Java. They are much more expressive when it comes to specific domains of computing. No-one would write an operating system in Perl, but these languages excel at manipulating text and talking to databases at faster-than-human-reaction-time speeds.

Expressive terseness (aka, "busat") is really important in programming because the hardest part of doing development is working within existing code. Depending on whose percentages you want to use, the maintenance phase of a project is anywhere from 60%-90% of the time and cost of that project. Maintenance, first and foremost, is an effort in reading comprehension. You can't fix a bug unless you understand the code where the bug lives, what code is connected to it, and how the various execution paths wend through that code (and the code around it). This is a lot easier to do when you're dealing with 50 lines of code than 500 (assuming equal cyclomatic complexities). The business-level concepts are easier to see and there are fewer places for bugs to hide.

SQL and CSS are good examples of DSLs that take complex domains (set manipulation and style metadata, respectively) and allow the developer to express exactly and minimally what they are trying to accomplish. Querying sets - writing joins, projections, and all the other logic that SQL provides - is extremely complicated. Doing this in any standard programming language can run to hundreds or thousands of lines with lots of cyclomatic complexity. Plenty of places for bugs to live. A DSL makes it easier to express the desire to do these three joins (using these indices for lookup), then project these five data points (with these manipulations), ordered in this way.

DSLs also make it much easier for people working in different languages (or even business domains) to collaborate and learn from each other within the domain. There are hundreds of forums, discussion boards, and blogs on SQL or CSS tips, tricks, and improvements. These tips work regardless of what programming language you use.

Sunday, July 5, 2015

What is production?

In What is an application?, I propose a definition for "application" as "A set of capabilities provided to a user to enable them to satisfy their desires." But, there are many other terms that are undefined. Over the next several posts, I'll define each one. The most important (after application) is "production", so I'll start there.

Let's do this with a thought experiment. Pretend that your application only has a production, however you define it. This is where your users come and where you make your money (assuming you do). There is only the one instance and, because there's only one, no-one needs a name for it. It's just "the application" - there's nothing to confuse it with. Anytime you need to make a change, you go make it in "the application" and your users immediately see it. Sounds good, right?

Of course, no-one works like this, and for good reason. Some changes are small enough that they can be made directly where your users are interacting, but the vast majority are not. Most changes require several hours (if not days) of work, often involve collaboration between multiple people, and are built in stages you don't want your users to see.

So, we distinguish between where users go for the "live" application and where developers work to make changes. Stand up a clone of production, except it doesn't have live users going to it, and call it "development". Developers can make changes to it knowing they are safe from affecting the business. Production remains the place where users satisfy their desires.

So far, it seems pretty clear what production vs. development is. Production is where users go (but not developers) and development is where developers go (but not users). And, from a developer's perspective, that would be enough.

There are more stakeholders in an application than just users and developers. At minimum, you have the business owners. They define what the application is meant to do - what desires the user is attempting to satisfy and what capabilities the user will have to do so. If communication were perfect, then the business owners could tell the developers "Do this" and be assured that the necessary changes would happen exactly as they intended. That also assumes developers never make mistakes. In real life, neither assumption is remotely true. Review of requested work is a fact of life. Business owners need to assure and control the quality of what they pay for - hence the name "QA" (or, sometimes, "QC", for quality control).

Some organizations choose to have such review occur within the development instance. This makes a lot of sense for smaller, newer, and/or slower projects that either cannot afford or do not need the ongoing cost of a separate instance. In most other projects, the shortcomings of this plan become obvious very quickly. Ongoing development makes it difficult to determine whether a failure is caused by the work under review or by the unstable nature of the development instance. Business owners are uncertain what would happen to the production instance if they approve the work done for a request. Will the change for that request work properly when users try to exercise the new capability? Were the failures in that change or in something else?

We have development, QA/QC, and production. It's pretty obvious what "production" is - it's where the users are and it has to be stable with a managed and defined process for change.

So, where does a demonstration (demo) or training environment fit? It's not production, but it needs to be stable for a smaller set of users and a limited window of time. This is where a lot of organizations stumble, attempting to tie the demo or training instance to either the existing production (slow-changing) or QA (quick-changing) environment. Except the business needs usually require a middle ground between the two.

Which leads to the better definition of "production". Or, rather, splitting out what constitutes "production" into different knobs we can apply to other environments.

The first knob is change management. Different environments change under different change control regimens. This knob is based on who decides when the environment changes. Development changes whenever a developer edits a file. QA changes whenever a developer finishes some work. Production, however, changes whenever the business feels a feature is both ready for use and appropriate for release. A demo or training environment will be similarly managed by the business, not the development teams.

The second knob is the stringency of review. We've already seen how changes will usually go through a QA environment before a user sees them in production. Demo and training environments need similar review because users will be in those environments as well.

So, what's the difference between production, training, and demo? From a developer's perspective, often nothing. They're all strongly controlled environments with reviewed changes pushed when the business wants them.

All of this discussion leads to this:
  1. Production is where users live.
  2. Production is where change control is at its maximum (whatever that is).
  3. Production is where data robustness is at its maximum. (To be discussed in a later post.)
  4. Production is where availability is at its maximum. (To be discussed in a later post.)
  5. Multiple environments can share aspects of Production and should be treated as such in those axes.

Tuesday, June 16, 2015

What is an application?

Operations (and devops, which is just another approach to operations) has one purpose - to make sure that the business's IT assets are operating properly. That's what operations means - the group that handles the operating. But that's a really nebulous word, possibly even self-defining. What goes into that?
  • Production is up and operating smoothly.
  • All of the metrics are being gathered properly.
  • Everything is secure.
  • Changes to production happen smoothly, predictably, and intentionally.
More nebulous words - "Smoothly"; "Everything"; "Predictably". Even "Production" can be very nebulous and undefined. Lots of groups talk about "production" vs. "production-like". Everyone agrees that the version you make money from is "production". But, is a version of your product for demos "production" or "production-like"? Is it like that all the time? How do you distinguish?

Nebulous words are places where confusion arises and where balls get dropped. "I thought Joe takes care of that." "Why did this get missed?" These issues arise in every organization, large or small, that allows nebulous words to define their operations. It's even worse when operations becomes something someone does in addition to their other hats. Part-time becomes no-time in no time.

Nebulous words cause problems. Problems are dumb, so let's fix that.

There are hundreds of definitions out there, for everything, from all sorts of viewpoints. But, at the end of the day, everything we as IT professionals do is to further a business. Businesses exist to serve users. (If users give the business money, then they are also customers. But a customer is-a user.) A business serves its users by providing them with capabilities that address user desires. (Some of those desires are also needs, but a need is-a desire.) If a business serves its users with IT, then the business is delivering an application.

An application is, then, "A set of capabilities provided to a user that enables them to satisfy their desires."

It doesn't look like this definition gets us very far, but it helps put a number of things into perspective. The first important consequence to note is that this definition doesn't talk about code. It talks about capabilities. Of course, application code is going to be an integral part of providing those capabilities - that's sort of the point of how IT is delivered. But, too many organizations consider the application code to be the sum total of the application. Or, slightly better, the vast majority. Both are patently false.

Consider everything necessary for the application code to function in order to deliver those capabilities. A partial list could include:

  • The server
  • The network (physical and routing definitions)
  • The datastores (relational databases, caching layers, etc)
  • Application configuration
  • Backend services (e.g., payment processors)

If any one of these elements stops working, the user cannot exercise your application. And this list doesn't consider the elements your business may need in order to manage and grow the application (metrics, monitoring, administrative functions, etc).

Of course, you know all this. But, have you considered treating database configuration or network routing as part of the application, managed exactly as the application code is managed?

Friday, August 30, 2013

Provisions to last the journey

In the last post, I talked about Vagrant as half of the most important tool IT organizations have gained in the past decade. This post talks about the other half - the provisioner.

The problem we need to solve is replicating the build of a server exactly, over and over. In the past (some 10 years ago), I would use Norton Ghost (a backup utility) to clone a server once I had it set up perfectly, then restore that clone to the other servers. And that worked great, so long as I never needed to change what was on that server. For example, a web server might have had Apache, various modules (mod_perl, mod_proxy, mod_rewrite, etc), and the MySQL client software. Then, we would install the language dependencies (at the time, I was writing Perl apps) from CPAN. We would take a Ghost at that point, replicate that out, then deploy the application using SVN. If we needed a new module or a new version of a module, that required a new Ghost. If we needed a new Apache module or an upgrade, that required a new Ghost. It only took an hour or two, but it was very manual.

This worked great, for production. All of our production servers would be exactly the same, because they were clones of the same Ghost. But, since the production configuration would be on the Ghost, we couldn't use that in QA or in development.

The other problem was that we had no record of what we were doing. Nothing was in source control, largely because there was nothing to put in source control. SVN (and now Git) is only really useful with text files. (Yes, it takes binary files, but only as undifferentiated blobs. Not useful.) This meant no code reviews, no history, and no controls. Everyone had to be a sysadmin.

I've heard of other places using a master package (rpm or deb) that does nothing but require all the other packages necessary for the server to be setup properly. And, this works great . . . until it doesn't. The syntax for building packages can be inscrutable. And, while you can do anything in a package (because packages are just tarballs of scripts with metadata), it's very dangerous to allow anyone the ability to do anything. Even if there are no bad actors, everyone is still a human. Humans make mistakes and making mistakes as root is a good way to lose weekends rebuilding from tape.

Luckily, there is a better way.

Unlike the virtualization manager (Vagrant), there are several good choices for a provisioner. Puppet and Chef are the two big ones right now, but several others are nipping at their heels. They differ in various ways, but all of them provide the same primary function - describing how a server should be set up in a parseable format. If you are underwhelmed, just wait a few minutes. (I'll use Puppet in my examples because it's the one I'm using right now. All these examples could be written just as easily in Chef, SaltStack, or Ansible. Juju is a little different.)

The basic unit of work is the manifest (in Puppet) or cookbook (in Chef). This is what contains the parseable description of what needs to be accomplished. In both, you describe what you want to exist after execution is complete. (Unlike a script, you do not describe how to do it or in what order - it's the provisioner's job to figure that out.) So, you might have something like:

$name = "apache"
package { "apache2":
  require => User[$name],
}
group { $name:
  ensure => "present",
}
user { $name:
  ensure => "present",
  gid => $name,
  require => Group[$name],
}

This would install the apache2 package (as found in Ubuntu), create an 'apache' group, and create an 'apache' user. You'll notice that the apache2 package requires the apache user. So, creating the user will run before installing the package, even though it's defined afterwards. Define things in the order that makes sense, and the provisioner will figure out the execution order. This means, however, that when you watch it run, things won't necessarily run in the same order from run to run, and that's okay.

Provisioners are designed to run again and again. They are idempotent, meaning they will only do something if it hasn't been done already. This property is extremely powerful because we can make a change to a manifest (or cookbook) and, when we run it, only the change (and anything dependent on that change) will execute. This solves the upgrade problem we had with Ghost.
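
As a toy illustration of that property - this is not how Puppet or Chef are implemented, just the shape of the idea in Ruby:

# Idempotent "resource" sketch: check the current state first,
# and only act when it differs from what was declared.
require "fileutils"

def ensure_directory(path)
  return if Dir.exist?(path)   # already in the desired state - do nothing
  FileUtils.mkdir_p(path)      # otherwise, converge to it
end

ensure_directory("/tmp/app/releases")  # safe to run as many times as you like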

Now, we have an executable description of what a given server should look like. The best part? It's in plaintext. We're going to check this description into our source control so that we can track the changes necessary for each request. We can now treat this as any other code - with changesets, pair programming, and code reviews. Changes to servers can be deployed like every other piece of code in our application. Best of all, they can be tied to the application changes that spawned the need for them (if appropriate). So, our structural changes go through the exact same QA process as the application changes, increasing our confidence in them.

These days, it's really hard to argue against using a provisioner. We can argue which provisioner to use, but it's kinda like using source control. We can argue Git vs. Subversion vs. Mercurial vs. Darcs vs. Bazaar. But, no-one is arguing for the position of "Don't want it." The same should go for provisioners.

Tuesday, August 27, 2013

Use Vagrant for a Great Good

Vagrant is one half of the best tool for IT organizations in the past decade. Hands down. And I'm going to tell you exactly why you are going to believe me.

No-one focuses on it and no-one cares about it, but environment mismatches are one of the biggest problems IT organizations face. It's a silent threat that doesn't take down whole sites. It's more insidious, only biting you every few months. Things that pass QA sometimes mostly work in production. It's really hard to replicate that bug in production, so you write it off as a heisenbug. Or maybe the test suite passes on the developer's machine and the QA engineer's machine, but sometimes fails in Jenkins. So, you disable that test from running in Jenkins because you've already wasted three days trying to figure it out.

Everyone kinda knows what the root problem is - you bitch about it over lunch every so often. But, it seems like such a high-class problem, so you don't fix it. Yeah, sure, GitHub and Etsy do it, but those are huge teams with tons of operations resources to put towards making everything perfect, right?

Not really. Both of them are actually small teams, relatively speaking. And, they don't devote huge amounts of time to it. They just do things right from the get-go. There's a bunch of tools these and similar teams use. The first and most foundational tool is Vagrant.

Vagrant is virtualization made easy. Vagrant creates and manages a semi-anonymous virtual machine (VM) using a simple configuration file (called a Vagrantfile). There are three basic commands:

  • vagrant up
  • vagrant halt
  • vagrant ssh
(There's more to it - a total of 15 commands as of this writing - but those are the three big ones.) And they do exactly what they say on the tin - bring the VM up, bring it down, and log in to it. It works with VirtualBox, VMware, and several other virtualization providers.

That's secret sauce #1 - Vagrant is just sugar around virtualization providers. It does all the heavy lifting of setting up the VM, managing it, and making sure it doesn't conflict with other VMs. (Yes, we're going to talk about multi-VM setups!)

So, now you have created a VM. So what? Because the setup of the VM is automated and everything is checked into your source control, every user of this repository has the exact same VM setup on their machine. As the setup of the server changes, a quick vagrant reload and everyone is in sync again.
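
For concreteness, here's roughly what a minimal Vagrantfile looks like (it's plain Ruby). The box name, address, and shell command below are placeholders for illustration, not recommendations:

# Vagrantfile - a minimal sketch with placeholder values.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"                            # base image to build the VM from
  config.vm.network :private_network, ip: "33.33.33.10"  # host-only address for the VM
  config.vm.provision :shell, inline: "apt-get update"   # hand-off point for a real provisioner
end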

Setting up multiple VMs is also very simple. You might want to do this for all kinds of reasons.
  1. An application server and its database.
    1. If they're both in the same repository, the same Vagrantfile can define both VMs (see the sketch just after this list).
    2. If they're not, each repository has its own Vagrantfile. In this case, defining your own subnet works wonders. (I like 33.33.33.xx - it's a private DoD subnet that's not routable.)
    3. Remember - coworkers shouldn't share cups, plates, or databases. It's just not sanitary.
  2. Front-end developers working with services.
    1. The services can run on their own VMs and be deployed to as if they were in the QA environment. Your designers can now work on their code without having to know how the services are managed AND not have conflicts.
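
As a sketch of case 1 above - both VMs defined in the same repository - a multi-machine Vagrantfile might look like this (machine names, box name, and addresses are hypothetical):

# Multi-VM sketch: an application server and its database in one Vagrantfile.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"   # placeholder base box

  config.vm.define "app" do |app|
    app.vm.network :private_network, ip: "33.33.33.10"
  end

  config.vm.define "db" do |db|
    db.vm.network :private_network, ip: "33.33.33.11"
  end
end

Then vagrant up app and vagrant up db bring each machine up independently.
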
So, when do you want to set up a VM? I strongly believe that every source code repository should have its own VM. This includes backend code, like Python or Ruby applications, as well as front-end code, like Backbone or Ember applications.

"Rob, really?! Front-end code? Doesn't it run in the browser already? Why go through all the hassle of setting up a VM?"

Yes, really, for several reasons:
  1. Front-end applications may run in the browser, but they aren't built in the browser. Compass/SASS, Less - these are all tools that are versioned and depend on a toolchain of specific versions.
  2. No-one ever works on a single project these days. Each project has its own toolchain, but many of these tools expect to be installed globally.
  3. Most front-end applications depend on some REST API to be available. If it's not, you may choose to build a stub application instead of hard-coding the responses in text files. Now you have a back-end application that needs to be managed.
  4. Test tools often want to run in a server. This is especially true for PhantomJS and ZombieJS. It really sucks when your testing frameworks aren't in sync between developers.
And, finally, Vagrant provides the foundation for the other half of the most important tool of the past decade - the provisioner.

Tuesday, August 6, 2013

Designing for testability

I'm going to assume you agree that writing tests is good and that 100% code coverage (or as close to it as possible) is a great ideal to strive for.

Testing stuff is hard. Any stuff. By anyone. (QA teams don't have it any easier.) This is true if you don't have tests and if you have tests. And, sometimes, the tests you have make it harder to write more tests.

The root problem is testability. I define testability as "the ease with which a system can be verified." (This is different from "how well someone can describe a testcase." The latter is a skill of the person, the former an attribute of the system.) The easier a system is to test, the greater its testability.

Testability affects and is affected by everything. Every decision made by anyone on the project can reduce the project's testability. Often in ways that aren't obvious until months later. For example, the ops team adds a new service and it needs a configuration file. The person in charge of doing it is focused on getting this service up and running, so they hard-code the file's path into a module that's included in the application. They didn't know the dev team's process for adding a new configuration file - they're ops, not dev. But, that's now a block to testability. Instead of creating a new configuration file with appropriate values for testing and pointing the code at it, the tester has to put the file in that spot. The spot might be in a directory that requires privileges to write in, meaning tests now have to run with elevated privileges. It's also a spot which might change later, intermittently breaking the test suite in hard-to-diagnose ways.
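
A small sketch of the difference, in Ruby (the class names and paths here are invented for illustration):

require "yaml"

# Hard to test: the path is baked in, so a test has to write to /etc.
class HardcodedConfig
  def settings
    YAML.load_file("/etc/myservice/config.yml")
  end
end

# Easier to test: the path is a parameter with a sensible default,
# so a test can point it at a fixture file it controls.
class ServiceConfig
  def initialize(path = "/etc/myservice/config.yml")
    @path = path
  end

  def settings
    YAML.load_file(@path)
  end
end

# In a test:
config = ServiceConfig.new("test/fixtures/config.yml")

Same code path in production, but now the test controls its own input.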

A lot of ink (digital and not) has been spent on discussing ways of improving the application code within a system to make it easier to write unit-tests. An incomplete list would include:
  • Decoupling
  • Interfaces
  • Mock objects
A nearly equal amount has described how to write integration tests, though with less prescription for making a system more testable (we'll see why in a later post). And, still further, people have talked about other ways of distinguishing this test from that test.

At the heart, testing any system is just this:

  1. Hook up an input stream with testing data
  2. Hook up monitors on an output stream
  3. Run the test
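
In code, those three steps might look like this minimal Ruby/Minitest sketch (the function under test is invented purely for illustration):

require "minitest/autorun"

# Hypothetical code under test.
def titleize(str)
  str.split.map(&:capitalize).join(" ")
end

class TitleizeTest < Minitest::Test
  def test_titleize
    input = "hello world"                    # 1. hook up the input stream (testing data)
    expected = "Hello World"                 # 2. hook up the monitor on the output
    assert_equal expected, titleize(input)   # 3. run the test
  end
end
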
This process works for everything, so we'll look at it in the light of a car. When I take my car into the local oil change place, they test a whole bunch of components in my car, not just the oil. For example, to test the transmission fluid, they:
  1. (input) Extract a small amount of fluid from my transmission and put it on a card.
  2. (output) The card has a reference color on it.
  3. (run test) Compare the fluid color against the reference color using a Mark-1 eyeball.
That's a highly repeatable and very strong test. It's cheap to execute (for time, materials, and training) and it works. (Happily for me, they are able to do this - the transmission fluid in one of my older cars was filthy and would have caused the transmission to fail if it hadn't been changed. I wouldn't have known to do it otherwise.) They test the air filter, the transmission fluid, the lights, the wipers - pretty much every component in my car. 

Well, not quite. They test every highly-testable component in my car. They don't test the integrity of the engine mounts, the safety of the seat-belts, or if the airbags are charged. Why not? What's different about those components that makes tests for them much harder?

Unlike the various fluids and filters, the airbags (for example) aren't designed to be tested. There may be very good reasons for that, but that's not the question. If there were a car whose airbags were designed in such a way that my oil changing place could cheaply test their charge, they would jump all over it. Running several dozen cheap tests makes clueless drivers (like me!) want to use them, and the more they can test, the more they will find that (legitimately) needs to be replaced. (Likely by them, because why go somewhere else?)

The oil change experience also gives us another crucial point - unit tests and integration tests are the same thing. The mechanics use different inputs, outputs, and tests when examining different components. But, the point of input, the point of output, and the expectation are all well-defined. There's no distinction between someone who is capable of judging the transmission fluid vs. the performance of the car as a whole. Nor is there a distinction between the types of tests (or inspections, as they call them).

More on this in part 2.