Sean Massa – 19 Nov 2012
Let’s talk about tmux!
tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen.
It essentially provides a new context for you to manage multiple terminal windows. I find this useful for grouping terminal windows that relate to a single project, allowing me to switch entire project contexts quite easily.
I’m assuming you know a bit about how tmux works. If that’s not true, take a look at some tutorials and come on back!
You can switch normal tmux sessions (what I’m calling my project contexts) by hitting `mod+s`, then selecting the session you want. However, I want to be able to recreate my sessions if they don’t already exist. To do this, you can create shell scripts that create tmux sessions just as you might by using the shortcuts.
Let’s create a shell script that will create a context for a project called facile.
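A minimal sketch of such a setup script, saved as `~/.tmux/setup_facile.sh` (the window names and layout here are just one possibility, not the post’s original script):

```shell
#!/bin/bash
# Hypothetical ~/.tmux/setup_facile.sh: create a detached session for the
# "facile" project with a few named windows, then attach to it.
tmux new-session -d -s facile       # start the session detached
tmux rename-window -t facile:0 editor
tmux new-window -t facile -n server
tmux new-window -t facile -n shell
tmux attach -t facile
```

Mark it executable with `chmod +x ~/.tmux/setup_facile.sh` so it can be run directly.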
Now, we can just run this script to create and attach to our session! But, we can do better.
Let’s create a context switcher that has the following properties:
- only creates the session if it doesn’t already exist
- attaches to the session we ask for
- allows us to easily set up new session creation scripts
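A sketch of the switcher, written here as a shell function so the fallback chain is easy to see (the same logic works as a standalone `tm` script on your PATH):

```shell
# tm: attach to a tmux session, creating it first if needed.
# Tries, in order:
#   1. attach to an existing session named $1
#   2. run the per-project creation script at ~/.tmux/setup_$1.sh
#   3. fall back to a new blank session with that name
tm() {
  tmux a -t "$1" 2>/dev/null \
    || "$HOME/.tmux/setup_$1.sh" 2>/dev/null \
    || tmux new-session -s "$1"
}
```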
The simple bash scripting here could be better, but it achieves all of our goals. First, it uses `tmux a -t $1` to try to attach to a session with the given name. If that fails, the `||` clause triggers and executes `~/.tmux/setup_$1.sh`, which looks for a session creation script at that location, prefixed with `setup_`. If that fails, we create a new blank session with the given name by executing `tmux new-session -s $1`.
Now, we can simply call `tm facile` to attach to (or create) our facile session. We can call `tm new-thing` to create a blank session for our new-thing project. We can then create a new session creation script at `~/.tmux/setup_new-thing.sh` so that future executions of `tm new-thing` execute that script instead!
Sean Massa – 25 Jun 2012
The examples below are written in CoffeeScript and Jasmine. I use explicit return statements to make things clearer for those of you less familiar with CoffeeScript.
When your code accesses global state, you have to pay extra attention to avoid test pollution by saving state before and restoring it after test runs. Check out this example:
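Here is a sketch of the problem in plain JavaScript rather than CoffeeScript (`config` and `greeting` are hypothetical names; in the browser the global would hang off `window`):

```javascript
// greeting() reaches into global state directly, so every test that changes
// config must save and restore it or the change leaks into later specs.
global.config = { name: 'production' };

function greeting() {
  return 'Hello, ' + global.config.name;
}

// In a Jasmine spec this save/restore would live in beforeEach/afterEach.
var savedConfig = global.config;      // save the real state
global.config = { name: 'test' };     // install test state
console.log(greeting());              // → Hello, test
global.config = savedConfig;          // restore it, or later specs see 'test'
```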
If we instead pull the global state access up, we can ignore it completely when we test this method.
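A sketch of the pulled-up version (hypothetical names again): the function takes its configuration as an argument, so specs never touch global state at all.

```javascript
// No global access inside the function; the caller supplies the config.
function greeting(config) {
  return 'Hello, ' + config.name;
}

console.log(greeting({ name: 'test' })); // → Hello, test (no setup or teardown needed)
```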
If you continue to do this, you start to see exactly where your functions’ dependencies lie. If your argument list is growing too large, that’s not because you decided to eliminate global state; it’s because your function has too many dependencies. This refactoring simply exposed that problem to you.
Avoiding Global State: Cookies
Tests should not set cookies unless you are testing a cookie library. You should mock out the cookie access and test that your methods call the cookie library or method properly. You can always check the cookies in the console (`document.cookie`) or with the Edit This Cookie Chrome extension. Below are examples of how to stub this.
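A sketch with a hand-rolled spy (in Jasmine, `spyOn` handles this bookkeeping for you; the `cookies` wrapper and `rememberUser` are hypothetical names):

```javascript
// A thin wrapper around cookie access; only this module touches document.cookie.
var cookies = {
  set: function (name, value) {
    document.cookie = name + '=' + value; // real browser write
  }
};

// The code under test calls the cookie library rather than document.cookie.
function rememberUser(id) {
  cookies.set('user_id', id);
}

// In the spec: swap in a spy, exercise the code, then restore.
var setCalls = [];
var realSet = cookies.set;
cookies.set = function (name, value) { setCalls.push([name, value]); };

rememberUser(42);

cookies.set = realSet; // restore so later specs use the real library
```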
Avoiding Global State: DOM
Accessing the DOM is not always necessary. In fact, if you structure your app the right way, the instances where this is actually required are quite rare. In my current app, all but a few specs never touch the DOM via #jasmine_content (or any other part). The trick is that DOM fragments act (mostly) the same whether or not they are attached to `window.document`. You can even fire events on DOM fragments and `waitsFor` them to be triggered.
If you absolutely must insert items in the DOM, use #jasmine_content. Make sure you clean up after yourself as well. Below is an example:
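A sketch of the cleanup, using a plain object as a stand-in for the `#jasmine_content` element so it runs outside a browser; in a real suite, the cleanup lives in a global `afterEach`:

```javascript
// Stand-in for document.getElementById('jasmine_content'); in the browser
// you would use the real element.
var jasmineContent = { innerHTML: '' };

// A spec that truly needs the DOM inserts its fixture into #jasmine_content...
jasmineContent.innerHTML = '<div class="widget"></div>';

// ...and a global afterEach empties it so the next spec starts clean.
function cleanUpJasmineContent() {
  jasmineContent.innerHTML = '';
}
cleanUpJasmineContent();
```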
Avoiding Global State: Network
There are only two reasons that your tests should ever access the network: to load a fixture or a local script. Even then, the request should point to localhost and it should succeed. Anything else should be stubbed out. This includes third-party scripts hosted on third-party sites, such as Google Maps and Facebook. You can always stub those libraries. Essentially, if the build is run on a box without an internet connection, it should still succeed.
Sometimes actions can cause a network request without you realizing it. One such example is inserting an image tag with a src attribute into the DOM. It’s important to check the network tab to make sure that nothing new shows up there. Below are examples of ways to mock out network access:
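A sketch of stubbing network access (all names here are hypothetical; `api` is a thin wrapper around XHR and `loadDeals` is the code under test):

```javascript
// Only this wrapper performs real network access.
var api = {
  get: function (url, callback) {
    // would issue a real XMLHttpRequest here
  }
};

function loadDeals(callback) {
  api.get('/deals.json', callback);
}

// In the spec: replace api.get with a stub that records the request and
// answers with canned data, so nothing ever touches the network.
var requested = [];
api.get = function (url, callback) {
  requested.push(url);
  callback([{ id: 1, title: 'canned deal' }]);
};

loadDeals(function (deals) {
  console.log(deals.length); // → 1
});
```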
For images that aren’t what you might call figures, consider setting them with css as a background image instead. Beyond the semantic benefits, this will prevent those images from being loaded in a test environment where you don’t load your CSS.
Avoiding Global State: Other
Dependencies should be mocked. Assertions should be made that those dependencies were called properly. That’s what makes them unit tests. Allowing execution to pass into a dependent module turns your spec into an integration test. Integration tests are useful, but you should do so explicitly and with purpose.
Some might say that people mock too often, and that having multiple dependencies is itself an indication that the code needs to be refactored. I definitely agree with that. The point here is that if you have dependencies, you need to handle them properly. Sometimes that’s with a mock, and sometimes it’s with a refactoring.
In my travels, I discovered a series of specs where the execution flow of each one went: Module Under Test > Dependent Module > Sub-dependent Module > DOM [insert] > [trigger] Network Request. Then, the assertion tested that the DOM triggered the Network request properly. Ideally, there would be unit tests all along the way. Even an integration test still shouldn’t trigger a network request.
Below is an example of how a dependent module can have untested side effects:
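A sketch of the problem with hypothetical names: the spec exercises `tracker`, `tracker` delegates to `pixel`, and `pixel` would insert an image tag whose src fires a real network request, a side effect the spec never stubbed.

```javascript
var pixel = {
  fire: function (url) {
    // In a browser this inserts an image tag, which silently hits the network:
    //   new Image().src = url;
  }
};

var tracker = {
  recordView: function (id) {
    pixel.fire('/track?view=' + id); // hidden network side effect
  }
};

// The fix in a unit test: stub the dependency and assert on the call instead.
var fired = [];
pixel.fire = function (url) { fired.push(url); };
tracker.recordView(7);
```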
Using the Module Pattern
The pervasive Module Pattern is lauded as a best practice, and I think it is one, but we really have to be diligent in how we use it. One issue that is always under debate is how much code to make private. In my opinion, very little code should ever be private, for the simple reason that private code is harder to test. You can prefix method names with an underscore if you don’t intend them to be accessed externally. If code does anything of real value, it should be tested. If you really want to make it private, pull your private code into another module (with public methods) so that it can be tested there without cluttering your primary module’s interface.
Below is an example of the Module Pattern used poorly.
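A sketch with hypothetical names: `parse` and `validate` hold real logic, but they are invisible to specs; only `process` can be exercised.

```javascript
var importer = (function () {
  function parse(raw)    { return raw.split(','); }  // private: untestable
  function validate(row) { return row.length > 0; }  // private: untestable

  return {
    process: function (raw) {
      return parse(raw).filter(validate);
    }
  };
})();
```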
We can fix this in two ways. One is to simply expose all of those methods.
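A sketch of the first fix (names are hypothetical): expose everything, prefixing internal methods with an underscore to mark them as outside the public interface.

```javascript
var importer = (function () {
  var self = {
    _parse:    function (raw) { return raw.split(','); },
    _validate: function (row) { return row.length > 0; },
    process: function (raw) {
      return self._parse(raw).filter(self._validate);
    }
  };
  return self;
})();
```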
Another way is to pull this into two modules. One acts as the interface for the other, but both have accessible methods for testing.
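A sketch of the second fix (names are hypothetical): the helpers move into their own module with public methods, so the primary module keeps a clean interface and everything stays testable.

```javascript
var importerHelpers = {
  parse:    function (raw) { return raw.split(','); },
  validate: function (row) { return row.length > 0; }
};

var importer = {
  process: function (raw) {
    return importerHelpers.parse(raw).filter(importerHelpers.validate);
  }
};
```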
In general, I use the second method here.
The Problem with Auto-init
I have seen several modules that define a module then call its `init` method immediately. This pattern has one major advantage and two major drawbacks.
The advantage is that it’s self-contained in one file. All of the logic required to create and use the module can be triggered by including one script. This is nice, but I hope that you will see how the drawbacks outweigh this benefit.
The drawbacks are: (1) init has already been called before your spec can run. In many cases, this throws an error; in others, the module may just behave differently the second time init is called. (2) The module itself becomes less reusable. You can’t include the module and decide for yourself when to call the initialize method. Further, you can’t include the module and call only certain methods on it (without initializing).
Therefore, I believe that auto-init script files are a bad pattern. Below is a simple example of this pattern at work:
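A sketch with a hypothetical carousel module: defining the module and immediately calling init in the same file means including the script is enough to trigger all of the setup logic.

```javascript
var carousel = (function () {
  var started = false;
  return {
    init: function () {
      if (started) {
        throw new Error('already initialized'); // the error your specs hit
      }
      started = true;
    }
  };
})();

carousel.init(); // auto-init: runs the moment the script is included
```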
Note that Backbone.js views that call render in their initialize methods are a form of this Auto-init pattern.
Using Console log/warn/error
The browser’s console lists messages that come from errors, warnings and log messages triggered by the code on the page. If a script triggers one of these things while loading (not during a spec run), it will log the error to the console, but the suite will pass. This is a huge problem.
An error in the console implies one or more of the following:
- a poor understanding of the code under test
- untested code
- actual bugs
It also makes it a lot harder to debug your own specs if the log is already full of dozens of errors and/or messages.
Spec Suite Health
Inspired by the issues discovered above, I constructed a series of Suite Health Specs. They are a group of specs that run at the very end of your entire suite to make sure that none of your tests polluted areas of global scope.
This helper sets up some collections to keep track of a few method calls. We log these calls for later because you will often use console.log (and others) during debugging to check execution order and values.
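A sketch of such a helper (the `suiteHealth` name and its collection names are assumptions): wrap `console.log`, `console.warn`, and `console.error` so every call made during the suite is recorded, while still passing through for debugging.

```javascript
var suiteHealth = { logs: [], warnings: [], errors: [] };

['log', 'warn', 'error'].forEach(function (method) {
  var bucket = { log: 'logs', warn: 'warnings', error: 'errors' }[method];
  var original = console[method];
  console[method] = function () {
    suiteHealth[bucket].push(Array.prototype.slice.call(arguments)); // record
    original.apply(console, arguments); // still visible while debugging
  };
});
```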
Then, we set up this set of specs to run after all of our other specs. These will verify our collections as well as areas of global state.
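A sketch of the health specs; minimal `describe`/`it`/`expect` shims stand in for Jasmine so the sketch runs anywhere, and `suiteHealth` is the helper’s collections object (defined empty here so the sketch is self-contained):

```javascript
var suiteHealth = { logs: [], warnings: [], errors: [] };

// Shims for Jasmine's API, enough to run this sketch standalone.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
  return {
    toEqual: function (expected) {
      if (JSON.stringify(actual) !== JSON.stringify(expected)) {
        throw new Error('expected ' + JSON.stringify(actual) +
                        ' to equal ' + JSON.stringify(expected));
      }
    }
  };
}

// These specs are listed last so they run after the rest of the suite.
describe('Suite Health', function () {
  it('did not log to the console', function () {
    expect(suiteHealth.logs).toEqual([]);
  });
  it('did not warn or error', function () {
    expect(suiteHealth.warnings).toEqual([]);
    expect(suiteHealth.errors).toEqual([]);
  });
});
```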
In order to get this set up properly, we need to place them in your jasmine.yml file, like so:
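A sketch of the relevant ordering (the file names are assumptions): the health helper loads alongside the other helpers, and the health specs are listed last so they run after everything else.

```yaml
helpers:
  - helpers/**/*.js
  - helpers/suite_health_helper.js

spec_files:
  - "**/*[Ss]pec.js"
  - suite_health/suite_health_spec.js
```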
These practices are all things we have probably heard before, whether in school or in practice. We simply need to recognize that code is code: these problems exist in any language, and we should be diligent in understanding and controlling them.
Sean Massa – 23 Apr 2012
The Chicago Node.js User Group has been thriving recently. So, I wanted to talk about this growing group and how it fits in with Groupon.
In early 2011, Caleb Cornman and I were investigating this interesting system called Node.js. Caleb launched the Meetup.com group in March, hosted at the Hashrocket office. We had our first meeting to get things rolling.
The first couple of meetups were decent. Attendance hovered around 10-20 people and we went over some intro-level Node.js concepts.
In August 2011, Node Knockout was the hot topic. We set up shop at Hashrocket for 48 hours. Only one team from our group competed, but a couple of other people tagged along to hang out before the competition started. I was part of the group Node Ice. We ended up with a decent score, but missed the mark on completeness. For a longer post-mortem, check out Steve Oxley’s post.
At the end of October, things were changing. Caleb was leaving Hashrocket (our sponsor and host), which prompted the need for a new location. At the same time, I was moving to Groupon. After some discussion with the dev team, we agreed to host the meetup inside Groupon itself. Hooray!
The transition went well enough. I was afraid that the new location would be too hard to get to or that people would get lost in the shuffle. To my surprise, the increased attendance (around 25 in October at Hashrocket) continued to climb when we moved to Groupon. It has held steady around 30-35 since then. Check out this plot of attendance over time.
The attendance metrics are based on memory and Meetup.com’s determination. So, they may not be accurate. I’ll be keeping a better headcount going forward. Even so, it provides an interesting metric: The average attendance rate of registered members for Chicago Node.js is 69.7%. This is important to realize when you need to order food and provide space for attendees.
Chicago Node.js will continue to be sponsored and hosted by Groupon for the foreseeable future. There are plenty of interesting talks and workshops coming. We have meetings lined up through September, but we can always fit more speakers in! If you have any ideas for presentations, lightning talks, events, speakers, or whatever else you can think of, let us know! Even if you have an idea for a talk, but don’t want to present it, we can find someone who will.
We also have a new co-organizer, Todd Larsen, a fellow Groupon developer! He will be working with other companies to secure sponsorships of various sorts. He’ll also help run the meetup itself, as you may have noticed at this month’s meeting.
Sean Massa – 22 Apr 2012
Problems with Jasmine
However, testing became a problem. I needed a way for Jasmine to play well with RequireJS. I could just require the modules I want inside a test, then `waitsFor` them to be loaded, but that felt rather messy. So, I decided to patch Jasmine’s `describe` methods to do what I wanted.
Adding RequireJS Support to Jasmine
When the Jasmine server runs, it exposes a path to your app’s /public folder under /public. The problem is that you would normally reference those files from your website root. So, your bootstrap file needs to know if it is being used in a testing environment. You can test for that and act accordingly.
The jasmine.yml file doesn’t need to include any specific (or wildcarded) spec files.
Patching `it` and `describe`
This step involved a lot of work. Essentially, we override the global `it` and `describe` methods to support the following:
- One argument => pending
- Two arguments => normal behavior
- Three arguments => RequireJS behavior
The standard method for using RequireJS to import a module is to call `define ['module1', 'module2'], (module1, module2) ->`. So, I decided to follow the same signature in the `it` and `describe` calls, making this valid.
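A sketch of the patched signature in plain JavaScript (the module name and spec are hypothetical; minimal stand-ins below let it run without Jasmine or RequireJS, with a registry object playing the role of the module loader):

```javascript
// Pretend module registry; RequireJS would load these from files.
var registry = { 'models/cart': function Cart() { this.items = []; } };

function describe(name, deps, fn) {
  if (arguments.length === 1) return;                   // one argument: pending
  if (arguments.length === 2) { fn = deps; deps = []; } // two: normal behavior
  var modules = deps.map(function (d) { return registry[d]; });
  fn.apply(null, modules);                              // three: RequireJS behavior
}
function it(name, fn) { fn(); }
function expect(actual) {
  return { toBe: function (v) { if (actual !== v) throw new Error(actual + ' !== ' + v); } };
}

// The three-argument form: dependencies are loaded, then injected into the spec.
describe('Cart', ['models/cart'], function (Cart) {
  it('starts empty', function () {
    expect(new Cart().items.length).toBe(0);
  });
});
```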
When using the Jasmine gem, the Jasmine test runner page is set up to run the tests in a `window.onload` event handler. The problem here is that we want to wait for our modules to be loaded before registering our specs. The new spec methods will use RequireJS to load the necessary modules asynchronously. If we leave the `window.onload` handler there, it will run before our modules are loaded and our specs will never be registered.
Thus, we need to wait for our specs to be registered before running the test suite. I handled this with a simple load counter, but there’s probably a race condition with nested module requirements in specs. For now, this works pretty well.
The traditional use of the jasmine.yml configuration file has you list all of your CSS files. It seems that people usually do this so that they can test the visibility (`display: none` or `display: block`) of an element. I feel that this is far from necessary. So, I created a little tool (called Hidey, requires cssom) to extract your `display: none` declarations from your CSS files into a new, smaller file. Now, you only need to include that one file to be able to test your visibilities.
Since writing that tool, however, I’ve changed my view on testing visibility. I’d rather let my CSS handle that and simply test for the existence of a class that should imply visibility. The simplest example would be to test an element for a `hide` class if you want it to be hidden.
Putting It All Together
Now, I can write specs that look like this!
I considered submitting a pull request to Jasmine to add RequireJS support, but I’m not completely happy with how it works right now. The more I manipulate Jasmine to do what I want, the more I realize I should write my own testing framework (again, although attempt #1 was many years ago and pretty awful). But, I’ll save that for another day.
Sean Massa – 04 Jan 2012
Step #1: Github
This step involves setting up your github repository to properly respond to a domain name.
- Register for Github if you don’t already have an account.
- Create a new repository called yourusername.github.com.
- Clone it locally by running `git clone git@github.com:yourusername/yourusername.github.com.git`
Now we’re ready to put some content on our blog! If you go to yourusername.github.com, you should now see a page that says you need to set up some content for your blog. If you need more help with this part, take a look at the Github Pages documentation.
Step #2: Jekyll
This step involves setting up Jekyll for your static content generation.
- This requires ruby. Make sure that’s installed.
- Install the Jekyll gem with `gem install jekyll`.
- Find a template that you like and clone it. Or, take a look at an existing site and copy the structure.
You should now be able to see your content by executing `jekyll --server --auto` in your repository directory. The `--auto` flag will make the server recompile pages based on files you change. If you need more help with this, take a look at the Jekyll documentation.
Step #3: dnsimple
This step involves setting up your domain name to point to your Github Pages blog.
- Go to dnsimple and register for an account.
- Find a domain name that is available and register it.
- View your registered domain and click “Add services to domain”.
- Add the Github Pages service.
- Once you are back at your domain management page, click the “Advanced Editor” button.
- Edit your CNAME entry to point to yourusername.github.com.
- Go to your git repository and create a file called CNAME with the content mydomain.com.
- Push your CNAME change to your git repository.
- Wait a few seconds.
- Go to mydomain.com and see if it worked!
At this point, your domain should be pointing to your Github Pages content. If not, look at one of the documentation sites I linked above. If this guide was unclear, let me know via email or a comment.