Dan Bradbury

Ramblings, rants, and musings of an engineer

Turbolinks and Anchors

So far my journey with turbolinks hasn’t been too bad; I write my slop and things work as I’d expect them to. I knew this streak of good luck was bound to come to an end at some point and today is the day.

I had the misfortune of attempting to implement simple anchor tags. At first I thought I had made a typo, but upon checking my code everything was fine. Another test and I noticed the damned .turbolinks-progress-bar appearing on click. It was clear turbolinks had mistaken my anchor link for a normal link and was intercepting the click like it does elsewhere. Things got stranger when adding data-no-turbolinks yielded the same results.

After googling I found a closed issue that had apparently been resolved. I checked my turbolinks version and we’ve got the latest and greatest. A hastily closed issue back in 2014 leads us to the same problem in the present day. There’s a bit of discussion on the issue, but it doesn’t look like anyone has offered a PR to resolve it :(

There are a few snippets to override the default behavior that could prove useful, but this is something that I’d expect turbolinks to have ironed out.

Because I don’t mind writing an onclick for the links, I’ll probably implement something like this for a similar effect:

$('html, body').animate({scrollTop: $('#anchor').offset().top}, 'slow')
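For completeness, here’s roughly how I might wire that up; a minimal sketch assuming the anchors are plain #fragment hrefs and jQuery is on the page (the selector is just a placeholder for however the links end up marked):

// rough sketch: intercept in-page anchor clicks ourselves instead of
// letting turbolinks treat them as page visits
$(document).on('click', "a[href^='#']", function(e) {
  e.preventDefault();
  var target = $(this.hash);
  if (target.length) {
    $('html, body').animate({scrollTop: target.offset().top}, 'slow');
  }
});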

I’m definitely disappointed in turbolinks for failing me in this instance but will continue down this less travelled, mysterious path DHH wants me to believe in.

Vim Tricks - Googling With Keywordprg

Most vim users are familiar with the man page lookup: K on the word under the cursor or on a visual selection. For anyone who needs a quick refresher, let’s take a look at the help docs (:help K)

                                                        *K*
K                       Run a program to lookup the keyword under the
                        cursor.  The name of the program is given with the
                        'keywordprg' (kp) option (default is "man").  The
                        keyword is formed of letters, numbers and the
                        characters in 'iskeyword'.  The keyword under or
                        right of the cursor is used.  The same can be done
                        with the command
                                :!{program} {keyword}

So we can see that the lookup program (keywordprg / kp) defaults to “man” and the keyword is determined by what is under the cursor when K is pressed. The other important thing to note is that we could invoke man or whatever program we want using :!{program} {keyword}, but that’s not as fun as reconfiguring the default behavior to do what we want.

Let’s imagine that for some reason we find ourselves copying sections of text and searching google for the results. Rather than doing this over and over, why not just change keywordprg to a custom bash script that does what we want? First things first, let’s write a simple bash script to open up the browser (I assume every OS has some way to open a browser with a given URL; this is written on an Ubuntu machine, but if I were on a Mac I’d use open and test / google to make sure the syntax works as expected)

#!/bin/bash
firefox "https://www.google.com/serach?q=$1"

Give it a handy name like googleShit, move it into your PATH and pop open that ~/.vimrc to change your default keywordprg

set keywordprg=googleShit

And now when you use K inside a new vim session you will be googling contents rather than looking up the man pages! If you find yourself repeating a task on the word under the cursor or on a visual selection, this is a pretty handy trick to have in the utility belt. Use a little imagination and you can come up with something to improve your daily workflow.

Different Browsers Are the Worst

While working on a personal project I ran into an issue with a bootstrap navbar collapse. In my local testing everything went fine, so I decided to push and hoped everything would behave properly. I grabbed my iPhone 5 and took a look only to see that the dropdown was not working at all.

After doing some googling I came across an SO post that accurately described the shitty situation I found myself in (the dropdown working in all browsers (including IE) and failing on all iOS devices)

The guy was apparently using an <a> tag without the href attribute, which would fail to trigger the collapsible menu. That’s all fine and good, but I’m trying to use a span and am too lazy to wrap my one line in an <a> tag, so I hunt for a better solution.

My original (almost functional) trigger looks like this:

<span class="glyphicon glyphicon-menu-hamburger navbar-toggle" style="color:white;" data-toggle="collapse" data-target=".navbar-collapse"></span>

Can you spot what’s missing with this simple data-toggle? It turns out you need to add cursor: pointer to the style of whatever element you use.
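For reference, the working trigger ends up looking something like this; the only change is the added cursor style:

<span class="glyphicon glyphicon-menu-hamburger navbar-toggle"
      style="color:white; cursor: pointer;"
      data-toggle="collapse"
      data-target=".navbar-collapse"></span>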

If the majority of people are using links and buttons to trigger collapsible content then everything will work as expected and no problems will be had. For people who do what they want there’s shit like this to deal with.

And that’s the web for you. Use some CSS/JS library like Bootstrap in hopes of saving yourself time and then tackle random shit like this. For the novice I’d imagine this would be an aggravating roadblock that would halt all progress for a few solid hours until they give up and use a button or link to accomplish the same thing as adding the cursor: pointer styling.

If you want to work with web applications, learn to enjoy things like this, because this is what we deal with on the daily.

Into the Abyss With Turbolinks

Previous attempts to adopt turbolinks during upgrades or new projects led me to the conclusion that I have a burning hatred for everything the project stands for (rage hatred is the worst kind). From conversations with other Rails folks + former CTOs it seemed like turbolinks was something I could avoid without batting an eyelash (see comparisons to Windows 8 decision making, or just ask a local Rails expert what their experience with turbolinks has been like)

As someone who previously ignored the efforts being made by DHH and the core team I would just start a new project with --skip-turbolinks to ensure my own sanity and continue with the hammering.

Since I’m a bit late to this conversation it’s nice to read posts like Yehuda Katz’s problem with turbolinks and 3 reasons why I shouldn’t use Turbolinks to get my hopes and dreams crushed. Here is just the beginning of the headache that one can look forward to if they are to continue down through the thorns

Duplicate Bound Events

Because JavaScript is not reloaded, events are not unbound when the body is replaced. So, if you’re using generic selectors to bind events, you’re binding those events on every Turbolinks page load. [This] often leads to undesirable behavior.

Alright so to be honest this isn’t that bad. People can bitch about global state all they want but as someone who enjoys thinking in a “game loop” I don’t mind this and feel like I can easily write my own code to these standards

Audit Third-Party code

You audit all third-party code that you use to make sure that they do not rely on DOM Ready events, or if they do, that their DOM Ready events are idempotent.

And this is where it starts to get fun. I just stumbled upon a bug that reared its head because of these two issues and I wanted to post a solution that I may find myself using more moving forward.

Imagine we are using typeahead.js and want to initialize our typeahead input on a given page. Here’s what the JS might look like:

  $('#searchBar .typeahead').typeahead({
    hint: true,
    highlight: true,
    minLength: 2
  },
  {
    name: 'estados',
    source: matcher(items)
  });

A pretty harmless call that you are probably going to copy-paste in to try the first time you mess with typeahead.js. It works and you move on. But be careful, because turbolinks will give you some interesting behaviour if we navigate between the page that has this piece of JS and another page.

Turbolinks will invoke this each time the page is “loaded”. Because of this we will spawn a new instance of the typeahead input and the associated hint div. For some reason (one I don’t care to look into) typeahead.js will spawn a new instance and hide the others rather than truly cleaning up. No matter what, we are left to fend for ourselves in the wilds of turbolinks, so we search for a solution.

I figure we can just handle global state a little better than your typical inline JS would. To do this we simply wrap the initializer in a conditional to verify the number of typeahead divs that are present on the screen. With proper naming we should be able to expand this approach to multiple typeahead instances.

  if($('.typeahead.tt-input').size() < 1) {
    $('#searchBar .typeahead').typeahead({
      ...
    });
  }

With that extra check we are able to handle the global state that turbolinks will create when naturally navigating and attempting to speed up our page.
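Another option, rather than guarding the initializer, is to tear the widget down before turbolinks swaps or caches the body. A rough sketch using typeahead.js’s destroy call, with the caveat that the event name depends on your turbolinks version (page:before-change in classic turbolinks, turbolinks:before-cache in Turbolinks 5):

// tear the typeahead down so the next "page load" can initialize it cleanly
document.addEventListener('turbolinks:before-cache', function() {
  $('#searchBar .typeahead').typeahead('destroy');
});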

A recent webcast featuring DHH got me thinking about how simple the problem of a web application really is. The server demands are not a problem whatsoever (30ms response times are all you need to be perfect; anything lower is not truly noticeable or necessary). We have an issue when it comes to how the rest of the “page-load” occurs for the user.

We all know the “hard refresh” links, the ones that clearly jump you to a new page with new content. Loading a new page is the same old same old that we’ve been doing since we could serve shit up. Of course the new way is the “one page app” that allows the user to navigate without ever having to disengage from the page they were on. IMO the trend is getting a bit insane (I’ve always felt the JS community was a bit heavy-handed with trying new things) and trying to keep up with the latest quarrels and trends is tiring. Where is the solution to the seamless application?

It’s clear that some will say Ember or React are the way forward to building beautiful apps that will take over the world, but I’m not sure I believe a JS framework is what will carry an application. So why learn all that unnecessary complexity when HTML5 is here?

If Turbolinks lives up to the intro of the README I will be a happy Rails camper.

Turbolinks makes navigating your web application faster. Get the performance benefits of a single-page application without the added complexity of a client-side JavaScript framework. Use HTML to render your views on the server side and link to pages as usual. When you follow a link, Turbolinks automatically fetches the page, swaps in its <body>, and merges its <head>, all without incurring the cost of a full page load.

C’mon Turbolinks don’t let me down again..

Development Turntable

And the turntable keeps on turnin’ and turnin’ Nothing can fuck with the way it goes around - Slug

Human nature tells us that there is a natural desire to make sense of the uncertain and create some semblance of control in our lives. This fundamental desire to create order where chaos thrives is the entire struggle of every growing company and the realization that always occurs when a company begins to grow past its infancy/adolescence. You know a company is in this phase when the Operations side wants to throw X engineers at the problem in hopes that it will increase efficiency and get us to that cash cow ASAP (a different can of worms for another time)

When I was introduced to SCRUM in the real world I was blown away with the organization that seemed to be instilled throughout a company of ~20 engineers (the largest team I had worked with at the time) and >100 in all departments. Communication seemed to be streamlined and the pace of development seemed like it was pushing limits and allowing the team to move at the maximum velocity. As a disclaimer, I am still a believer in some system like SCRUM (loose-SCRUM) to keep visibility in a minimal way, but I’d like to rethink the “optimal development cycle”

Whenever I think about business I am a bit cynical after seeing a company be sold with very little transparency to the <10 employees in the ranks. Because of past experiences with companies and individuals who have reneged on contracts and payments I like to assume the worst case when thinking in the hypothetical.

Let’s imagine a company that has just gone through a big round of funding and is now ready to make the push from 150 employees to 300+ w/ multiple offices around the United States to house all the talent that they have. This company is going places and they are in control of their destiny. The development team is churning out features left and right and the folks in Operations and Sales are able to keep customers happy and sign new customers with ease. In our fictitious company we have happy employees in every aspect of the business.

Now what happens when a Sales manager gets word that the company can sign the biggest contract ever by orders of magnitude that make

To be continued..

Scaling Images With HTML5 Canvas

Had intended to post this 8 months ago but it got lost in the sea of gists..

This is old news by now for most but I had quite a bit of fun implementing it for myself and figured I’d share my code and some learnings that came along with it. The basic idea is to use canvas to render an uploaded image and then utilize the toDataURL method on canvas to retrieve a Base64 encoded version of the image. In the example included here we will just direct link to the newly scaled image but you could imagine that we kick off an ajax request and actually process the image (in PHP base64_decode FTW). Without any more tangential delay let’s take a look at the code.

<input type="file" accept="image/*" id="imageFile" />
<table>
  <tr>
    <td>Width: <input type="text" id="width" value="200" style="width:30; margin-left: 20px;" /></td>
  </tr>
  <tr>
    <td>Height: <input type="text" id="height" value="200" style="width:30; margin-left: 20px;" /></td>
  </tr>
</table>
<canvas id="canvas" style="border: 1px solid black;" width="200" height="200"></canvas>
<button width="30" id="saveImage">Save Image</button>

The above HTML shouldn’t need any explanation but if it does feel free to open the attached JSFiddle to get a feel for it..

(function(){
  (function(){
    document.getElementById("imageFile").addEventListener("change", fileChanged, false);
    document.getElementById("width").addEventListener("keyup", sizeChanged, false);
    document.getElementById("height").addEventListener("keyup", sizeChanged, false);
    document.getElementById("saveImage").addEventListener("click", share, false);
  }());

  var currentImage,
      canvas = document.getElementById("canvas");

  // resize the canvas when either dimension input changes
  function sizeChanged() {
    var dimension = this.id,   // input ids ("width"/"height") match the canvas properties
        value = this.value;
    canvas[dimension] = value;
    if (currentImage) { renderImage(); }
  }

  // validate the chosen file and read it in as a data URL
  function fileChanged() {
    var file = this.files[0],
        imageType = /^image\//;

    if (!imageType.test(file.type)) {
      console.error("not an image yo!");
    } else {
      var reader = new FileReader();
      reader.onload = function(e) {
        currentImage = e.target.result;
        renderImage();
      };
      reader.readAsDataURL(file);
    }
  }

  // draw the current image scaled to the canvas dimensions
  function renderImage() {
    var data = currentImage,
        image = document.createElement("img");
    image.src = data;
    image.onload = function() {
      var context = canvas.getContext("2d");
      context.drawImage(this, 0, 0, canvas.width, canvas.height);
    };
  }

  // navigate straight to the Base64 data URL of the scaled canvas
  function share() {
    document.location = canvas.toDataURL();
  }
}());

In order to bring the HTML to life we need to attach a few event handlers and define some basic functionality. The first thing to tackle is the actual file upload.

The File API has been part of the DOM since HTML5 and will be used here to open the uploaded file from <input type="file"> on the "change" event. Inside of the change event there are 2 things that we want to do: (1) confirm the file type, and (2) render the file onto the canvas. To confirm the file type we can use the MIME type given to us by file.type and do a simple regex test (/^image\//) before attempting to render the unknown file (even though we’ve added accept="image/*" inside the input, that can be easily modified to attempt to upload any file). Once we are convinced that the user has uploaded an image it’s time to read the file and send it off to the canvas to render. FileReader’s readAsDataURL will allow us to process the file asynchronously and allows for an onload callback that gives us the ability to set the newly read image and ask the canvas to draw.
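And if you wanted to actually process the image instead of just navigating to it, the share function could ship the data URL off to the server; a minimal sketch assuming a hypothetical /upload endpoint and jQuery being available:

// hypothetical: POST the scaled image to the server instead of linking to it
function share() {
  $.post('/upload', { image: canvas.toDataURL() }, function(response) {
    console.log('scaled image saved', response);
  });
}
// the server (PHP, per the note above) would strip the "data:image/png;base64,"
// prefix and base64_decode the remainder before writing the file to disk.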


Playing the Twitter-game

I am not a marketer nor do I have any real prior experience managing PR/social media for a company of any size. This is just a write-up of some of my learnings while out in the wild

By all accounts I am a Twitter novice; I joined a few years ago but don’t really keep up with it (rage tweeting a politician or media personality from time to time but not much more). From other business ventures I’ve learned that having a strong presence on Twitter+Facebook can be a great way to drive traffic to your site and keep folks posted on updates, but I had never invested any time in growing the profiles (outside of word of mouth and the usual social links in the footer of a site). For my most recent project I decided to take an active role in the growth of my Twitter account and to attempt to use a few of these automation tools / APIs to make my life a little easier.

I started the journey by trying out a product called ManageFlitter.com; I had gone all in and decided to buy the smallest plan they offered to maximize my “strategic follows”. After about 2 days of bulk requesting it became obvious that the “snazzy dashboard” views were nothing more than a facade. I was hitting rate limits and unable to process a single follow / request with the access grant I had enabled for the site.

At this point I started angrily emailing support to figure out why I was being blocked without hitting any of the actual API / request limits listed in the twitter guidelines. Here is the initial diagnosis I received (steps to fix are omitted since they were useless..)

Thank you for writing into ManageFlitter support on this. Unfortunately Twitter seems to have revoked some 3rd party app tokens over the last few weeks. This causes “invalid or expired token” in the notifications page and keeps your account from being able to process actions, post tweets into Twitter, or from allowing your ManageFlitter account to properly sync with Twitter.

Hmm so at this point I was frustrated because there is no way my token should have been revoked! Obviously they were using keys so that they could make these “analytic queries” on the twitter user base and had messed something up on their end that had made it impossible to proceed. I pressed along that line of thinking and received the following “helpful” response.

I am sorry to hear this. There seem to be a small group of Twitter users currently having this issue with ManageFlitter and some have noted that once their account is revoked, it is far easier for their account to continue to be revoked afterwards.

Some users have suggested that waiting 24 hours helps reset the account. Others have noted that the amount of actions they perform in sequence also matters greatly to avoid the revoked access. Some have noted that after they perform 100 clicks, if they wait 20-30 seconds (usually enough time to go to the Notifications page and see if Twitter is revoking access), then continuing to perform more clicks.

There is no particular reason why some accounts are being affected and other are not. We have been in touch with Twitter and unfortunately Twitter haven’t offered much in the way of remedy for us or our users.

TLDR; I was told to wait for the problem to fix itself..

This threw a massive wrench in my plans to bulk follow people inside the industry and hope that some percentage of them would follow me back. After a few more angry emails I was told to just wait it out.. At that moment I pulled out the classic "I will have to proceed with a claim to PayPal / the Better Business Bureau" argument to get my money back and move on with another option.

After getting my money back I decided to ride the free train with Tweepi, which had no problems for the first week of usage, so I decided to buy a month of the lowest tier to use some of the analytics / follow tools that were being offered. With 2 weeks on the platform I can say that I’m very happy with what I paid for and will continue to use it in the future (until my follower count levels out a bit)

So why am I writing this article if I am just using a service to accomplish the task for me? While Tweepi does a lot for me it still imposes artificial limits on follow / unfollow in a 24 hour period. (see pic below)

You can see that the service has some limitations. The main one being that I can follow more people than I can unfollow in a given day. While that makes sense with Twitter’s policies my goal is a raw numbers game where I’d like to follow as many people as possible in hopes they follow me back. Whether they follow me back or not I am content to unfollow and continue with my following numbers game.

Through this process I was able to drive my followed count up quite a bit (considering my actual follower count)

but still had this problem of the unbalanced following:followers ratio that I wanted to correct. If I was active on Tweepi there was no way for me to drive this down without having to completely stop following people for a period while I unfollowed the max every day.

So today I decided to have a little fun inside the browser and see what I could do. :grin:

Since twitter has a web + mobile application I could obviously sit and click through each of the people I was following to reduce the number but..

So let’s see how well formatted the twitter following page is (and since it’s Twitter we know it’s going to be well organized). When arriving at the page we see the trusty un/follow button for each of the people we follow

We also notice that twitter has some infinite scroll magic going on to continuously load the 1000s of people we follow. With that knowledge in our hands it’s time to craft some jQuery-flavored code to do the clicking for us

$.each($('.user-actions-follow-button'), function(index, element) {
  $(element).click();
});

Pretty easy to click through each of the buttons on the page, but that’s only going to account for the ones we have manually scrolled through. Not sufficient since we are following >4000 people but have <20 buttons on the page. So let’s handle that damned auto-scrolling

var count = 0;
function st(){
  $("html, body").animate({ scrollTop: $(document).height() }, "fast");
  if(count < 2000) {
    count += 1;
    setTimeout(st, 500);
  }
}
st()

You might be thinking: why not just for loop this shit?! The scroll animation needs a bit of time to allow for the page load; if you call it too fast the entire page will bug out and the “click button” code won’t work as expected. So we just use setTimeout and let that sucker run (good time to take a stretch or make some coffee). When you come back you should hopefully be at the bottom of the screen (wait for GridTimeline-footer to show up and you know you are done) :D
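If you’d rather not guess at a scroll count, a rough variation is to keep scrolling until that footer appears; this assumes the GridTimeline-footer class is still how Twitter marks the end of the list:

// keep scrolling until the end-of-timeline footer shows up
function scrollUntilDone() {
  $("html, body").animate({ scrollTop: $(document).height() }, "fast");
  if (!$('.GridTimeline-footer').is(':visible')) {
    setTimeout(scrollUntilDone, 500);
  }
}
scrollUntilDone();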

Run the click code and patiently wait for your browser to slow down drastically and eventually unfollow your entire list. The result should look something like this

The 1 follower there threw me off since when I clicked on the link for my followers there wasn’t anyone listed. At this point I was suspicious that I may have set off one of the limits that would have deactivated my account. I checked my emails and didn’t see any warnings or notifications from Twitter but did start seeing this whenever I tried to follow someone on their website. (Learn more here)

At this point I was thinking I just fucked myself and got my account banned or locked in some way. During this time of panic I decided to refresh my browser and saw some funky behavior on my profile page….

No “Following” count at all?! And I can’t follow anyone because of some unknown policy breach..

After writing a short sob story to Twitter about how I had “accidentally unfollowed everyone” (cover my ass), I thought about the locking problem a bit more.. Hmmm, what about that Tweepi token I was using before? Who would have guessed that it would work and allow me to follow people again!

So with a little bit of crafted JavaScript I was able to drop that Following count down without having to fight any artificial limits imposed on me by some third party. I’m incredibly happy with the results (as I am not banned and my account is working as expected) and plan to reproduce this with another client in the future.

It’s always a good feeling when adding a new tool to the utility belt.

Replacing SimpleCov

After fighting with simplecov for a little longer than I would like to admit; I was attempting to get it to start analyzing a group of files that were the meat and potatoes of my application (a Goliath application). Unfortunately neither the default configs (SimpleCov.start 'rails', etc) nor the filters were allowing my files to be tracked and printed in the handy coverage html file. Because of all this struggling I decided to go ahead and create my own crude coverage module; I’ll be using this post to discuss my learnings and share an early working iteration.

To get started I wanted to have the invocation of coverage be exactly the same as simplecov; so let’s start with the goal of adding CrudeCov.start inside of our spec_helper.rb to keep track of the files we care about.

Before diving into the code I did a little research on how SimpleCov.start worked. I was mainly looking for information on how it was able to keep track of files with only a single invocation inside of the spec_helper. Inside of lib/simplecov.rb we find a definition of the start method, which checks to see if the water is friendly (SimpleCov.usable?) and then starts the tracking with a call to Coverage.start. At this point during my investigation I was pretty sure that Coverage was a Class/Module defined within the simplecov source; after some grepping within the repo I only found one other reference to Coverage inside of lib/simplecov/jruby_fix.rb. Unfortunately that reference is just as the name implies, a JRuby-specific fix for the Coverage module that overrides the result method. When I saw that this was the only reference to the module I ran off to google and was incredibly pleased to find that Coverage is a Ruby module! According to the Ruby 2.0 Coverage doc

Coverage provides coverage measurement feature for Ruby. This feature is experimental, so these APIs may be changed in future.

With that note about this being an experimental feature let’s be flexible and see what we can do (simplecov uses it and it’s a pretty successful gem). The usage note in the doc also looks fairly promising:

  1. require “coverage.so”

  2. do Coverage.start

  3. require or load Ruby source file

  4. Coverage.result will return a hash that contains filename as key and coverage array as value. A coverage array gives, for each line, the number of line execution by the interpreter. A nil value means coverage is disabled for this line (lines like else and end).

So we don’t have to worry about #1 (it will be loaded by Ruby) and can start with #2 and call Coverage.start, load all the files that matter, and then use Coverage.result (which returns a hash that contains filename as key and coverage array as value, and disables coverage measurement) to see how well the files have been covered.

As a note, Coverage will pick up any file that has been required after Coverage.start, so it’s a good idea to have a way to selectively find the files that you want coverage results on (e.g. an array of keys like Dir['./app/apis/*rb'] to grab the coverage results you want)
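To see the raw module in action before wrapping it, here’s a minimal sketch (the required file is just a stand-in for whatever you care about):

require 'coverage'

Coverage.start
require_relative 'app/apis/covered_endpoint'  # hypothetical file we want results for

# ...exercise the code, then collect results:
Coverage.result.each do |file, lines|
  executed = lines.compact.count { |hits| hits > 0 }
  total    = lines.compact.size
  puts "#{file}: #{executed}/#{total} relevant lines executed"
end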

Since we don’t have any intention of supporting JRuby we should be able to use Coverage as-is for our CrudeCov example. Let’s start off with the #start and #print_result methods (the latter used after our test suite finishes)

require 'coverage'

module CrudeCov
  class << self
    def start
      @filelist = []
      Coverage.start
    end

    def print_result
      cov_results = Coverage.result

      filelist = [
        "./app/apis/untested_endpoint.rb",
        "./app/apis/covered_endpoint.rb"
      ]

      filelist.each do |file|
        # Coverage.result keys are absolute paths; each value is an Array ([1,0,..,nil,3])
        # where val = # of times the line was hit & size = # of lines.
        # This makes for easy matching when creating the pretty html result file.
        file_results = cov_results[File.expand_path(file)]
        results = file_results.compact.sort # remove all nil entries & sort to help with calculations

        puts "Results for: #{file}"
        total_lines = results.length.to_f
        uncovered_lines = results.index { |hits| hits > 0 } || results.length
        covered_lines = total_lines - uncovered_lines
        percentage = ((covered_lines / total_lines) * 100).round(2)
        puts "#{percentage}% Covered (#{covered_lines.to_i} of #{total_lines.to_i} Lines Covered)"
      end

      # create html for easy viewing outside of shell
    end
  end
end

Our CrudeCov module above is pretty straightforward and covers our basic needs of (1) having a one-line call to add to our spec_helper, and (2) a print method that we can call after our suite is finished running (ideally the module would figure out which test framework is being used and ensure that the hook is made to print results at the end of the suite). With the example above we will have to explicitly ensure that the print_result method is called.

Assuming that we are testing with RSpec our spec_helper will look something like this

require 'crudecov'

CrudeCov.start
# require project files..

RSpec.configure do |config|
  # your other config..
  config.after(:suite) do
    CrudeCov.print_result
  end
end

With that basic setup you will get a printout of the coverage percentages for all files that have been included in the filelist. In less than 30 lines of code we were able to have an incredibly simple coverage module that we could use in a project to sanity check a file that may potentially be lacking coverage or confirm proper testing. From that simple example you can start to see how a project like simplecov would come into being and how something as simple as CrudeCov could become a full Ruby coverage suite.

With the legitimate need to get data on the effectiveness of your tests, SaaS solutions like Coveralls (which did not recognize a Goliath application) + gems like simplecov, rcov and cover_me have all become relied-upon staples for the TDD community.

What’s the point of even doing TDD if you aren’t covering new lines of code that could result in bugs down the road? For that reason alone I’d say it’s worthwhile to implement some sort of coverage tool when all the rest have failed.

Requirements for a Text Editor

These are my minimum requirements for a competent text editor. The list is meant to serve as a quick litmus test for true understanding of tooling in the craft of writing software.

  1. Navigate with ease: jump to a line number, move to the top/bottom of the file, selection/deletion helpers (inside of quotes, block, function, etc.)
  2. Fuzzy file finder + jump to mentioned files / methods
    • I use ctrl-p to fuzzy find in vim (one of my must have plugins)

    • For part of my day rails.vim provides handy helpers to jump from model to controller using custom commands like :Emodel, and :Econtroller. When not working with Rails I can typically rely on ctags to do the trick and allow me to gf around projects at will (hard to beat Command+Click inspection in RubyMine)
  3. Run spec(s) without having to Alt+Tab
    • I use vim-dispatch to run tasks in the background and have the results returned in the vim quickfix window
    • When running a single test or named context I can use a custom hotkey to select the test name under the cursor and create the dispatch command to run (a rough sketch of such a mapping follows this list)
  4. Find Regex pattern within file (replace, count, etc)
    • I’m lucky enough that my editor has :%s, which allows for a ton of options with the same simple DSL
  5. Search entire project (git grep or Ag inside the editor)
    • Results should be returned in easy to parse format with jump-to-file capabilities
  6. Organize code into tabs / windows / panes
    • This should come with all editors now but please know how to vsplit if you claim to be a vi user..
  7. View git changes within file (+/- on line numbers) and basic git workflow integration (git blame, commit/push without leaving editor)
    • I use vim-gitgutter to track changes to the file. This has become a staple of my vim config and I couldn’t imagine working without it.

    • For the rest of my git functionality I use Tim Pope’s vim-fugitive, which serves as a wonderful git wrapper for the majority of git commands.
  8. Customizability
    • This is the whole reason I am a vim user. I love the ability to change almost all facets of the editor; from basic config options in .vimrc to adding autocmds to change functionality for events within the editor (file open, file save, etc). In my mind an editor should allow the user to tailor the tool exactly how they want it.
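As an example of point 3, a mapping along these lines is what I have in mind for running specs without leaving the editor; the leader keys and the rspec invocation are placeholders for whatever your project actually uses:

" run the whole spec file in the background with vim-dispatch
nnoremap <leader>t :Dispatch bundle exec rspec %<CR>
" run only the example under the cursor by appending the current line number
nnoremap <leader>T :execute 'Dispatch bundle exec rspec ' . expand('%') . ':' . line('.')<CR>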

It’s important for anyone writing software to respect the craft and maximize the effectiveness of the tools we use to create. Since the editor is such an important tool in our toolbelt we should always strive to optimize and improve our daily usage.

Replacing Heroku

For anyone still on Heroku and throwing add-ons at your application, please stop and seriously reconsider what you are doing. Across the board things are getting a bit ridiculous for the average hobbyist / financially conscious company. I used to be quite the Heroku fanboy but after my recent experience with Hosted Graphite I have changed my tone completely.

I was recently looking to add some Graphite metrics to a Heroku application that I’m working on and stumbled across a solution called Hosted Graphite. There was of course a starter package which shows off the beauty of Grafana and a few metrics to give you an idea of the potential. The current pricing can be found here (these prices do fluctuate quite regularly depending on the service); I’ve also gone ahead and added the current prices as of this writing below:

Each price point adds a few key features to the mix + increases your metric limit (this is a bullshit number stored by whatever service you are using so feel free to push it without too much fear. I’ll do a follow-up post on my experience with a MongoDB solution that had a document “limit”). The paid features for Hosted Graphite are as follows:

  • Daily Backups (>Tiny)
  • Data Retention (>Tiny)
  • Hosted StatsD (>Small)
  • Account Sharing (>Medium)

Most folks that are looking to use Graphite in their application are probably looking to use some StatsD wrapper to report metrics. If you only have a few data points that you care about and can deal with the low retention rate / backup policy then maybe you could add an additional worker to handle the task and not have to deal with a “hosted StatsD” solution. Even with that stretch of the imagination you are looking at a $19 monthly solution, but you are most likely going to be suckered into a higher cost if you seriously start to add metrics.

Rather than paying for a pluggable add-on I’d highly recommend spinning up your own DigitalOcean server for $10 and getting as much value as the Small package (this is being very generous..) being offered on Heroku. If you don’t know how to spin up a Graphite+StatsD server there is no excuse with the resources provided by DO alone.

Assuming that you can get through a basic set of steps, you should have a working Graphite+StatsD service at your disposal at ½ the monthly cost. The interface will be exactly the same (API key/namespace + service endpoint) and you will have the freedom to manage the server as you please. And when you are measuring things you sure as hell don’t want them to stop reporting because a reporting mechanism fails and you have no way to tell.

This is another major gripe that I have with Hosted Graphite: why would I pay for a service like “hosted StatsD” when it will fall over and I have no debug info on why that might have happened? When testing with a Small instance of Hosted Graphite the StatsD endpoint would become unresponsive for minutes at a time while UDP/TCP were working as expected. On the other hand, with a droplet you have the freedom of config (and Graphite, Carbon, and StatsD have a lot of configuration).

Configuration brings me to the final point I want to make. “Features” like data retention length are nothing more than configuration that can be managed when setting up your Graphite instance. By configuring carbon you can set custom retention policies for different data points (regex matching FTW) and ensure that you have a policy in place that makes sense for the data being stored. This gives you full flexibility inside of a problem-set that you are actively engaged in when adding measurements (naming new metrics). As a best practice you should set a reasonable default and add new metrics into the appropriate policies as needed.
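For a concrete idea of what that configuration looks like, retention policies live in carbon’s storage-schemas.conf; the section names, patterns, and retention windows below are illustrative only, not anything Hosted Graphite provides:

# storage-schemas.conf -- first matching pattern wins
[statsd_internal]
pattern = ^stats\.
retentions = 10s:6h,1m:7d,10m:1y

[app_metrics]
pattern = ^myapp\.
retentions = 10s:1d,1m:30d,15m:2y

[default]
pattern = .*
retentions = 1m:14d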