Trying to make sense of PG&E’s Marketing Campaigns

We're getting close to baseball season, and since I try to catch the majority of games, that means I get to watch a ton more local advertising!

yay ads!

Last season I was lucky enough to return to NorCal and got to watch the Giants on CSN Bay Area for the majority of televised games. While watching the season I saw a ton of PG&E ads, ranging from some lady telling me I’m the reason my bill keeps going up and that a PG&E rep will come out to help me buy new energy-efficient appliances, to three Latina high school girls who turned off lights to save their school money. Meaning, they are spending money by the boatload to make a wide range of ads in hopes of propagating the message that PG&E cares about its customers (and employees Rich and Jannis).

Coincidence that they ran this ad push while they were in the final stages of dealing with the 2010 San Bruno pipeline explosion?

So… why is PG&E spending so much (assumed) on marketing focused on employees and community? (We can’t know the exact budget without tricking someone into giving up the information, but we can get a good sense that it’s somewhere under the earnings from selling stocks, since all the ads contain “This communication paid for by PG&E shareholders.” But I’m no expert, so maybe it’s all free…)

The announcement that they will be cutting ~450 IT jobs was the big fuck you that explains it all. It may not seem too strange for a large, publicly traded company to move jobs overseas to save money, but let’s take a look at exactly what PG&E is doing here to move the jobs over.

PG&E has hired a consulting firm based out of India called Tata Consultancy Services to manage the replacement of these workers. To replace these IT folks, Tata is using H-1B visas to bring people over to the States to be trained on how to perform the tasks they will be doing within months. That should be alarming to you.

If you don’t know much about H-1B visas, it’s OK not to be shocked. The idea is that we need a way to keep/allow skilled workers to enter the country and work legally, to help push forward innovation where the current US workforce is inadequate. The issue with using these visas is that they are in limited supply and are given out by lottery. With companies like Tata hoarding H-1Bs to use for IT training and job relocation, we are effectively removing work from the job pool while using a system designed to strengthen the job marketplace.

PG&E knows what they are doing and calculated that they could do this and minimize the damage by running a PR campaign in the lead-up to this event. Expect the $300 million a year in savings to be poured into more marketing campaigns to continue the monopoly they have going.

Bad actors like PG&E and Tata need to be dealt with before they ruin the entire system for the deserving individuals who rely on the program.

Posted in economics, rants | Leave a comment

Mediocrity in Movies (part 1)

I’ve been trying to make sense of the wave of mediocre movies, games, and music that has been dumped on us lately. This will be part 1 of a series of rants dedicated to mediocrity.

The Question

Why the hell are studios making these trash movies with famous actors and no substance?

Obviously good movies are going to make money; think defining films/classic movies (Dirty Harry, Fast Times at Ridgemont High, insert any movie that you’ll never forget here). But what happens when bad movies start to make money and become repeatable successes in the eyes of the studio execs? Whether you are making a modern classic or a pile of trash like Piranha 3D, it takes money to give the project life (and varying degrees of effort).

Since the studio is so powerful in the production of films, I want to take a look at a fairly young studio that caught my eye while watching the playoffs this weekend: CBS Films. For some reason I was unaware that the monster that is CBS ever had a movie studio that was actively producing films, and it took a stupid movie like Patriots Day to alert me to the fact. Since I was watching football on CBS and saw an ad for CBS Films, something smelled fishy and I decided to do some wiki-researching.

follow the money

CBS Films was founded in 2007 with the goal of producing 4-6 movies a year, each with a budget of up to $50 million (big boss says you have a yearly budget of $300 million). If you want to read a more detailed year-by-year summary of the studio, the wiki#CompanyHistory does a good job. The TL;DR: they aren’t the best movie studio out there and are looking for help from others who have led successful ventures, like Lionsgate.

Before I get ahead of myself, it’s important to review history and understand that CBS tried out film production before with Cinema Center Films (1967-1972). They released films like With Six You Get Eggroll (this is 1968, so you best believe that is a derogatory reference to an Asian character who is in a single scene). They did hit a few winners with Snoopy, Come Home (people who love Peanuts do enjoy it), plus others like Scrooge and Little Big Man w/ Dustin Hoffman. Check out the full filmography and see if you recognize any before they closed up shop.

Alright, now that we know that CBS has been interested in owning a studio for some time, we can start to understand why they are making such shitty films every year. I’m fully convinced that CBS is not in the business of making good movies but is in it to make profitable films, which tends to translate to the sub-par movies they continuously release. For a studio like this, with multiple TV networks, marketing seems like a sure-fire way to get people to pay up for tickets. This is apparent to anyone who had to watch TV while they were advertising Patriots Day. If this movie makes money it will be an instant success, and the studio will look to repeat the act with another close-to-home act of terror.

Even a broken clock is right 2 times a day.

In the case of film studios, they will occasionally put out a good movie (not because that’s what they do, but by being in the right place at the right time and having the funds to let a talented director make a film). So let’s take a look at the full list of movies they’ve made over the past 9 years.

Release Date | Title | Budget | Gross (worldwide)
Jan 22, 2010 | Extraordinary Measures | $30 million | $15.1 million
April 23, 2010 | The Back-up Plan | $35 million | $77.5 million
November 24, 2010 | Faster | $24 million | $35.5 million
January 28, 2011 | The Mechanic | $40 million | $51 million
March 4, 2011 | Beastly | $17 million | $28.8 million
February 3, 2012 | The Woman in Black | $13 million | $127.7 million
March 9, 2012 | Salmon Fishing in the Yemen | $14.5 million | $34.6 million
September 7, 2012 | The Words | $6 million | $13.2 million
October 12, 2012 | Seven Psychopaths | $15 million | $23.5 million
March 1, 2013 | The Last Exorcism Part II | $5 million | $15.2 million
May 31, 2013 | The Kings of Summer | unknown | $1.3 million
July 26, 2013 | The To Do List | $1.5 million | $3.9 million
November 1, 2013 | Last Vegas | $28 million | $134.4 million
December 6, 2013 | Inside Llewyn Davis | $11 million | $13 million
April 4, 2014 | Afflicted | $318,000 | $121,200
April 25, 2014 | Gambit | unknown | $14.2 million
August 15, 2014 | What If | $11 million | $7.8 million
September 26, 2014 | Pride | unknown | $16.7 million
February 20, 2015 | The Duff | $8.5 million | $43.5 million
November 13, 2015 | Love the Coopers | $24 million | $41.1 million
March 25, 2016 | Get a Job | unknown | unknown
April 12, 2016 | Flight 7500 | unknown | $2.8 million
August 12, 2016 | Hell or High Water | $12 million | $31 million
October 7, 2016 | Middle School: The Worst Years of My Life | $8.5 million | $20.7 million
December 21, 2016 | Patriots Day | $45 million | We shall see

With 16 of 24 movies being profitable, it seems like they have hit their mark, but the remaining films were either flops or the studio decided not to release how much the movie cost to make. I assume this is because they spent so much and the movie did so poorly; take a look at Flight 7500, a sci-fi/horror movie with Amy Smart that was barely passable and only made $2.8 million. It was planned for release in 2013 but was pulled and later turned into an on-demand release in 2016. The studio won’t say how much they spent on the film, which makes me believe they spent a pretty penny to make a pile of shit. Luckily for CBS Films, any flop that they haven’t over-marketed can be turned into a release on Showtime or one of their other movie networks.
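The 16-of-24 count is just a gross-versus-budget comparison on the table above. Here's a quick Python sketch of that test, using a handful of rows as examples (figures in $ millions; note this is the author's crude metric, ignoring marketing spend and the theater cut):

```python
# "Profitable" here means worldwide gross > production budget.
# None marks a budget the studio never disclosed, so the row is skipped.
films = [
    ("Extraordinary Measures", 30, 15.1),
    ("The Back-up Plan", 35, 77.5),
    ("The Woman in Black", 13, 127.7),
    ("What If", 11, 7.8),
    ("Flight 7500", None, 2.8),
]

def profitable(films):
    """Titles whose gross beat their (known) budget."""
    return [title for title, budget, gross in films
            if budget is not None and gross > budget]

print(profitable(films))  # ['The Back-up Plan', 'The Woman in Black']
```

Run that over the full table and you land on the 16-of-24 figure.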

pile of trash that CBS fumbled with for 4 years before dumping to on-demand

The most profitable film for the studio was The Woman in Black, which had Daniel Radcliffe in it, so every Harry Potter fanboy who could stand a horror film ran to see it around the world. For anyone who saw the movie, it wasn’t anything amazing but definitely not a bad movie. A beefy marketing campaign focused on showing Radcliffe’s face in as many places as possible helped push a mediocre film into a money-making machine for the studio.

After that success the studio remained focused on having recognizable actors in lead roles for the majority of films they were willing to put their money behind (with the exception of a few failed experiments). For the most part the formula makes money, and they continue to make movies with a deep investment in the stars they hire plus marketing campaigns to make sure everyone knows Actor X and Y are in Movie ZZZZZZ and the trailer looks good. This exact formula is the rationale for making a movie like Patriots Day; it ticks all the boxes: human interest, a very recognizable lead actor, and it’s easy to market. So I guess this makes sense for a studio that’s all about the money.

My hope is this movie is a complete flop and the studio eventually caves like its predecessor Cinema Center Films. I know it’s unrealistic to hope for a future where money doesn’t control what gets made, but I’m optimistic that as consumers we can start sending clear messages that we are tired of this shit storm. I’m hopeful we can get more movies like Fast Times at Ridgemont High that are truly excellent at what they are trying to do. Otherwise we should brace ourselves for the onslaught of mediocrity and be ready for more iterations of Final Destination and whatever else marketers know will sell to the general population.

Posted in Uncategorized | Leave a comment

Exploiting P2P Game Hosting in Dead by Daylight

Any gamer will tell you dedicated servers are preferred over someone being selected as the host and having an unfair advantage with much better latency. P2P online gaming is just awful for anyone who wants a truly competitive environment: clients must maintain a connection with the host, and if the host leaves, does the game end? Is there a graceful transfer? (Who knows until it happens.) Plus, bullshit like the following POC is too damn easy to pull off for anyone who has basic Python abilities.

If you haven’t heard about DbD, I’d actually highly recommend the game and give props to the creators for making a fun and original multiplayer survival horror game (Steam link). The basic idea of the game is that 4 players are Survivors, responsible for repairing generators and escaping from the grasp of the Killer (another player whose goal is to hunt and kill as many Survivors as they can before they all run to safety). Simple idea, but really enjoyable if you can get a group of friends and try to survive together / enjoy messing with folks as the Killer.

Since the game was made by a very small team, there was a wave of complaints and issues in the early days. Once more and more networking issues were being reported and experienced, I had to pop open Wireshark and see what was going on.

I joined a game and waited for the load screen to start the Wireshark capture. As soon as the game started you could see the flood of UDP packets plus our trusty friend STUN (in this case CLASSIC-STUN, but the ideas are the same), and I knew we’d be able to have a little fun.

For those of you who might not be familiar with the STUN protocol here’s a quick review:

Session Traversal Utilities for NAT (STUN) is a protocol that serves
as a tool for other protocols in dealing with Network Address
Translator (NAT) traversal. It can be used by an endpoint to
determine the IP address and port allocated to it by a NAT. It can
also be used to check connectivity between two endpoints, and as a
keep-alive protocol to maintain NAT bindings.

Who is sending / receiving these packets?

STUN Client: A STUN client is an entity that sends STUN requests and
receives STUN responses. A STUN client can also send indications.
In this specification, the terms STUN client and client are
synonymous.

What info do we care about in the packet?

  - Protocol Family: IPv4
  - IP:
  - Port: 53199

This is all you have to know to follow along, but if you are interested in knowing more about STUN, check out RFC 5389.

Each player is acting as a client, handling both requests and responses to maintain a connection to the other players in the game. If we listen to the traffic we have access to a public IP and port that is open for communication (to confirm, just watch the UDP packets flowing in either direction).

Imagine a simple script that listens for STUN headers, generates a list of victims, and runs a simple UDP flood:

import os
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = os.urandom(1024)  # 1 KB of random junk per datagram

victim = input('Target >')
vport = int(input('Port >'))

packets_sent = 0
while True:
    client.sendto(payload, (victim, vport))
    packets_sent += 1
    if packets_sent % 100 == 0:
        print('.', end=' ', flush=True)  # progress marker every 100 packets

and the victim is pwnd.. šŸŽ‰

In the case of DbD the victim is flooded out of the game and points are given to the killer.

As the killer (hosting the game) you can target players with a simple test flood (watch them skip and shut it off before they are out of the game) and then D/C them if they are near escaping (giving the player 0 points and rewarding the killer for a successful kill).

As a Survivor you can periodically flood the Killer when he is chasing you, to make sure he can’t hit you while you juke and escape his grasp (why not flood and wiggle at the same time?), or you can lag out your fellow Survivors to pick up a particularly nice item they are running that game (too funny to lag out a friend who is bragging about some sick item he is going to run this game).

The main point I’m trying to make is that this is a simple, simple attack that can be pulled off by any jobber with minimal skill.

In my testing, a simple UDP flood like the one shown above, using the STUN response results, was 100% effective no matter when the flood was run (the port remained open for the entirety of the game and then some). I ran tests for hours at a time and spaced them out over months of gameplay to see if EAC was ever going to pick up on this obvious attack… they never did. In fact, EasyAntiCheat will not detect attacks like this (tested in other games they “secure”) and is generally shit given what they promise.

TL;DR: Networking is difficult and gets messed up often. If something feels poorly implemented, chances are it is, and there could be some fun to be had understanding what’s going on under the covers.

Posted in Dead by Daylight, Dedicated Servers, Gaming, P2P, Wireshark | Leave a comment

Unnecessary Noise in the Programming community

Growing up on the internet I’ve always been aware of the trolling and general BM associated with competitive gaming and message boards. Unfortunately I’ve been noticing similar behavior in more and more projects and programming community sites. It seems too common to run into an SO post with a comment section like this.

The example code is a bit tricky to digest on first pass and is written in bad VimL (see his answer plus my correction if interested). Even if the question was dumb and pointless, there is no reason to be a dick. It just creates unnecessary noise that does nothing but detract from the goal at hand. We should look at previous failures like rubyspec and try not to bring whatever shit is going on in our lives into the project. If you really need to blow off steam, none of us mind if you go play some Overwatch and chill out a little bit before working on the next issue.

For those of you who didn’t follow along at home with the drama around rubyspec, here are a few links plus a running repo of GitHub drama links:
HN on MRI and RubySpec issues
Some #rubinius BM

Rule 97: Don’t be a dick

Posted in internet, programming | Leave a comment

Turbolinks and Anchors

So far my journey with turbolinks hasn’t been too bad; I write my slop and things work as I’d expect them to. I knew this streak of good luck was bound to come to an end at some point and today is the day.

I had the misfortune of attempting to implement simple anchor tags. At first I thought I had made a typo, but upon checking my code everything was fine. Another test and I noticed the damned .turbolinks-progress-bar appearing on click. It was clear turbolinks had mistaken my anchor link for a normal link and was intercepting the click like it does with other links. Things got strange when adding data-no-turbolinks yielded the same results…

After googling I found a closed issue that had apparently been resolved. I checked my turbolinks version and we’ve got the latest and greatest. A hastily closed issue back in 2014 leads us to the same issue in the present day. There’s a bit of discussion on the issue, but it doesn’t look like anyone has offered a PR to resolve it šŸ™

There are a few snippets to override the default behavior that could prove useful, but this is something that I’d expect turbolinks to have ironed out.

Because I don’t mind writing an onclick for the links I’ll probably implement something like this for a similar effect

$('html, body').animate({scrollTop: $('#anchor').offset().top}, 'slow')

I’m definitely disappointed in turbolinks for failing me in this instance, but I will continue on this less-travelled mysterious path DHH wants me to believe in.

Posted in html, turbolinks | Leave a comment

Vim Tricks – Googling with keywordprg

Most vim users are familiar with the man page lookup: K on the keyword under the cursor or on a visual selection. For anyone who needs a quick refresher, let’s take a look at the help docs (:help K).

K     Run a program to lookup the keyword under the
      cursor.  The name of the program is given with the
      'keywordprg' (kp) option (default is "man").  The
      keyword is formed of letters, numbers and the
      characters in 'iskeyword'.  The keyword under or
      right of the cursor is used.  The same can be done
      with the command
        :!{program} {keyword}

So we can see that the program (keywordprg / kp) defaults to “man” and the keyword is determined by what is right under the cursor when it is used. The other important thing to note is that we could invoke man or whatever program we want using :!{program}, but that’s not as fun as reconfiguring the default behavior to do what we want.

Let’s imagine that for some reason we find ourselves copying sections of text and searching Google for the results. Rather than doing this over and over, why not just change keywordprg to a custom shell script that does what we want? First things first, let’s write a simple script that opens the browser with a search for the keyword (I assume every OS has some way to open a browser with a given URL; this is written on an Ubuntu machine, but if I were on a Mac I’d use open and test / google to make sure the syntax works as expected).

#!/bin/sh
firefox "$1"

Give it a handy name like googleShit, make it executable, move it into your PATH, and pop open that ~/.vimrc to change your default keywordprg:

set keywordprg=googleShit

And now when you use K inside a new vim session you will be googling the keyword rather than looking up its man page! If you find yourself repeatedly searching for whatever is under the cursor, this is a pretty handy trick to have in the utility belt. Use a little imagination and you can come up with something to improve your daily workflow.

Posted in keywordprogram, various.txt, vim, vim tricks, workflow | Leave a comment

Different browsers are the worst

While working on a personal project I ran into an issue with a Bootstrap navbar collapse. In my local testing everything went fine, so I decided to push and hope everything would behave properly… I grab my iPhone 5 and take a look, only to see that the dropdown is not working at all.

After doing some googling I came across an SO post that accurately described the shitty situation I found myself in: the dropdown working in all browsers (including IE) and failing on all iOS devices.

The guy was apparently using an <a> tag without the href attribute, which would fail to trigger the collapsible menu. That’s all fine and good, but I’m trying to use a span and am too lazy to wrap my one line in a tag, so I hunt for a better solution…

My original (almost functional) trigger looks like this:

<span class="glyphicon glyphicon-menu-hamburger navbar-toggle" style="color:white;" data-toggle="collapse" data-target=".navbar-collapse"></span>

Can you spot what’s missing from this simple data-toggle? It turns out you need to add cursor: pointer to the style of whatever element you use…

If the majority of people are using links and buttons to trigger collapsible content, then everything will work as expected and no problems will be had. For people who do what they want, there’s shit like this to deal with.

And that’s the web for you. Use some CSS/JS library like Bootstrap in hopes of saving yourself time, and then tackle random shit like this. For the novice I’d imagine this would be an aggravating roadblock that would halt all progress for a few solid hours until they give up and use a button or link to accomplish the same thing as adding the cursor: pointer styling.

If you want to work with web applications, learn to enjoy things like this, because this is what we deal with on the daily.

Posted in bootstrap, browser bugs, browsers, css, safari, web | Leave a comment

Into the Abyss with Turbolinks

Previous attempts to adopt turbolinks during upgrades or new projects led me to the conclusion that I have a burning hatred for everything the project stands for (rage hatred is the worst kind…). From conversations with other Rails folks and former CTOs, it seemed like turbolinks was something I could avoid without batting an eyelash (see comparisons to Windows 8 decision-making, or just ask a local Rails expert what their experience with turbolinks has been like).

As someone who previously ignored the efforts being made by DHH and the core team I would just start a new project with --skip-turbolinks to ensure my own sanity and continue with the hammering.

Since I’m a bit late to this conversation, it’s nice to read posts like Yehuda Katz’s problem with turbolinks and 3 reasons why I shouldn’t use Turbolinks to get my hopes and dreams crushed… Here is just the beginning of the headache one can look forward to if they are to continue down through the thorns.

Duplicate Bound Events

Because JavaScript is not reloaded, events are not unbound when the body is replaced. So, if you’re using generic selectors to bind events, you’re binding those events on every Turbolinks page load. [This] often leads to undesirable behavior.

Alright so to be honest this isn’t that bad. People can bitch about global state all they want but as someone who enjoys thinking in a “game loop” I don’t mind this and feel like I can easily write my own code to these standards

Audit Third-Party code

You audit all third-party code that you use to make sure that it does not rely on DOM Ready events, or if it does, that the DOM Ready events are idempotent.

And this is where it starts to get fun… I just stumbled upon a bug that reared its head because of these two issues, and I wanted to post a solution that I may find myself using more moving forward.

Imagine we are using typeahead.js and want to initialize our typeahead input on a given page. Here’s what the JS might look like:

  $('#searchBar .typeahead').typeahead({
    hint: true,
    highlight: true,
    minLength: 2
  }, {
    name: 'estados',
    source: matcher(items)
  });

A pretty harmless call that you are probably going to copy-paste in to try the first time you mess with typeahead.js. It works and you move on… But be careful, because turbolinks will give you some interesting behaviour if we navigate between the page that has this piece of JS and another page.

Turbolinks will invoke this each time the page is “loaded”. Because of this, we will spawn a new instance of the typeahead input and the associated hint div each time. For some reason (one I don’t care to look into) typeahead.js will spawn a new instance and hide the others rather than truly cleaning up. No matter what, we are left to fend for ourselves in the wilds of turbolinks, so we search for a solution.

I figure we can just handle global state a little better than your typical inline JS would. To do this we simply wrap the initializer in a conditional to verify the number of typeahead divs that are present on the screen. With proper naming we should be able to expand this approach to multiple typeahead instances.

  // typeahead.js wraps each initialized input in a .twitter-typeahead
  // span (assumed selector here), so only initialize when none exists yet
  if ($('#searchBar .twitter-typeahead').length < 1) {
    $('#searchBar .typeahead').typeahead(/* options as above */);
  }

With that extra check we are able to handle the global state that turbolinks creates when naturally navigating and attempting to speed up our page.

A recent webcast featuring DHH got me thinking about how simple the problem of a web application really is. The server demands are not a problem whatsoever (30ms response times are all you need; anything lower is not truly noticeable or necessary). We have an issue when it comes to how the rest of the “page load” occurs for the user.

We all know the “hard refresh” links, the ones that clearly jump you to a new page with new content. Loading a new page is the same old same old that we’ve been doing since we could serve shit up. Of course the new way is the “one page app” that allows the user to navigate without ever having to disengage from the page they were on. IMO the trend is getting a bit insane (I’ve always felt the JS community was a bit heavy-handed with trying new things…) and trying to keep up with the latest quarrels and trends is tiring. Where is the solution to the seamless application?

It’s clear that some will say Ember or React are the way forward to building beautiful apps that will take over the world, but I’m not sure I believe a JS framework is what will carry an application. So why learn all that unnecessary complexity when HTML5 is here?

If Turbolinks lives up to the intro of the README, I will be a happy Rails camper.

Turbolinks makes navigating your web application faster. Get the performance benefits of a single-page application without the added complexity of a client-side JavaScript framework. Use HTML to render your views on the server side and link to pages as usual. When you follow a link, Turbolinks automatically fetches the page, swaps in its <body>, and merges its <head>, all without incurring the cost of a full page load.

C’mon Turbolinks don’t let me down again..

Posted in Rails, turbolinks | Leave a comment

Development Turntable

And the turntable keeps on turnin’ and turnin’
Nothing can fuck with the way it goes around
– Slug

Human nature tells us that there is a natural desire to make sense of the uncertain and create some semblance of control in our lives. This fundamental desire to create order where chaos thrives is the entire struggle of every growing company, and the realization always occurs when a company begins to grow past its infancy/adolescence. You know a company is in this phase when the Operations side wants to throw X engineers at the problem in hopes that it will increase efficiency and get us to that cash cow ASAP (a different can of worms for another time).

When I was introduced to SCRUM in the real world, I was blown away by the organization that seemed to be instilled throughout a company of ~20 engineers (the largest team I had worked with at the time) and >100 in all departments. Communication seemed to be streamlined, and the pace of development seemed like it was pushing limits and allowing the team to move at maximum velocity… As a disclaimer, I am still a believer in some system like SCRUM (loose-SCRUM) to keep visibility in a minimal way, but I’d like to rethink the “optimal development cycle”.

Whenever I think about business I am a bit cynical, having seen a company be sold with very little transparency to the <10 employees in the ranks. Because of past experiences with companies and individuals who have reneged on contracts and payments, I like to assume the worst case when thinking in the hypothetical.

Let’s imagine a company that has just gone through a big round of funding and is now ready to make the push from 150 employees to 300+, with multiple offices around the United States to house all the talent they have. This company is going places and they are in control of their destiny. The development team is churning out features left and right, and the folks in Operations and Sales are able to keep customers happy and sign new customers with ease. In our fictitious company we have happy employees in every aspect of the business.

Now what happens when a Sales manager gets word that the company can sign the biggest contract ever, bigger by orders of magnitude that make…

To be continued..

Posted in dev, rants, scrum, teams | Leave a comment

Scaling Images with HTML5 Canvas

I had intended to post this 8 months ago but it got lost in the sea of gists…

This is old news by now for most, but I had quite a bit of fun implementing it for myself and figured I’d share my code and some lessons that came along with it. The basic idea is to use canvas to render an uploaded image and then utilize the toDataURL method on canvas to retrieve a Base64-encoded version of the image. In the example included here we will just direct-link to the newly scaled image, but you could imagine that we kick off an ajax request and actually process the image (in PHP, base64_decode FTW). Without any more tangential delay, let’s take a look at the code.

<input type="file" accept="image/*" id="imageFile" />
Width: <input type="text" id="width" value="200" style="width:30px; margin-left:20px;" />
Height: <input type="text" id="height" value="200" style="width:30px; margin-left:20px;" />
<canvas id="canvas" style="border:1px solid black;" width="200" height="200"></canvas>
<button id="saveImage">Save Image</button>

The above HTML shouldn’t need any explanation, but if it does, feel free to open the attached JSFiddle to get a feel for it.

document.getElementById("imageFile").addEventListener("change", fileChanged, false);
document.getElementById("width").addEventListener("keyup", sizeChanged, false);
document.getElementById("height").addEventListener("keyup", sizeChanged, false);
document.getElementById("saveImage").addEventListener("click", share, false);

var currentImage,
    canvas = document.getElementById("canvas");

function sizeChanged() {
  var dimension =,  // the inputs are named "width" and "height"
      value = this.value;
  canvas[dimension] = value;
  if (currentImage) { renderImage(); }
}

function fileChanged() {
  var file = this.files[0],
      imageType = /^image\//;

  if (!imageType.test(file.type)) {
    console.error("not an image yo!");
  } else {
    var reader = new FileReader();
    reader.onload = function(e) {
      currentImage =;
      renderImage();
    };
    reader.readAsDataURL(file);
  }
}

function renderImage() {
  var image = document.createElement("img");
  image.src = currentImage;
  image.onload = function() {
    var context = canvas.getContext("2d");
    context.drawImage(this, 0, 0, canvas.width, canvas.height);
  };
}

function share() {
  document.location = canvas.toDataURL();
}

In order to bring the HTML to life we need to attach a few event listeners and define some basic functionality. The first thing to tackle is the actual file upload.

The File API has been part of the DOM since HTML5 and is used here to open the uploaded file from <input type="file"> on the "change" event. Inside of the change event there are 2 things we want to do: (1) confirm the file type, and (2) render the file onto the canvas. To confirm the file type we can use the MIME type given to us by file.type and do a simple regex test (/^image\//) before attempting to render the unknown file (even though we’ve added accept="image/*" inside the input, that can be easily modified to attempt to upload any file). Once we are convinced that the user has uploaded an image, it’s time to read the file and send it off to the canvas to render. FileReader’s readAsDataURL will allow us to process the file asynchronously and provides an onload callback that gives us the ability to set the newly read image and ask the canvas to draw.
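I mentioned earlier that you could post the toDataURL result to the server and decode it (PHP's base64_decode); the server-side half is the same idea in any language. Here's a sketch in Python, assuming the base64 flavor of data URL that canvas.toDataURL() produces:

```python
import base64

def decode_data_url(data_url):
    """Split a 'data:<mime>;base64,<payload>' string into (mime, bytes)."""
    header, encoded = data_url.split(",", 1)
    mime = header[len("data:"):].split(";")[0]
    return mime, base64.b64decode(encoded)

# e.g. the front of a canvas PNG export (payload truncated for illustration):
mime, raw = decode_data_url("data:image/png;base64," +
                            base64.b64encode(b"\x89PNG\r\n").decode())
```

From there `raw` is the image itself, ready to be written to disk or handed to an image library.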

Additional Reading

Posted in browser, canvas, HTML5 | Leave a comment