Govhack 2016 – Colourful Past

Wow, I can't believe it's been a whole year since the last Govhack, the hackathon where groups of people use government data to hack together a project over the course of a weekend.

I had a great time last year on the “Should I Drive?” team. We used WA Main Roads and other data sources to try to answer the question “should I drive to my destination or take some other form of transport?”.

This year I set myself a goal: I wanted to build something involving Machine Learning / AI. It's a field of computing that's really hot right now, and I'm really interested in getting involved and learning more.

So as with last year, after a brief welcome presentation by the organizers, competitors were invited to take the microphone and pitch ideas. Off-stage the pitchers then threw together a quick poster with their main ideas.

There were a couple of interesting ones, but the one that really caught my attention was from a guy who wanted to apply Machine Learning to historic photographs. After a brief discussion with him around his poster I immediately signed up. Twenty minutes later we had a team and were heading upstairs to find a quiet area of the (awesome) Flux co-working space where Govhack was being held this year.

In total we had 8 members: 3 technical and 5 non-technical. That first evening was mostly spent planning who was going to do what and what the priorities were. The three technical members sketched out a plan and divvied up the work so that we could all work efficiently.

I took the front-end website and backend node host / API while Dominic took the Python code which would interface with the various data sets and Houraan did the code which would apply the Machine Learning to the images returned from the data sets.

With the plan sorted, I knew I had a bunch of work to get done before the non-technical people could get their hands on something to use and experiment with. With that in mind I decided to get up really early (4am) on Saturday morning and start work on the site. I'm glad I did, as it took me most of the morning to get the shell of an app up and running using ReactJS with TypeScript and a NodeJS backend hosted on Heroku (such is the way with modern JavaScript).

When I arrived at Flux later that morning, I demoed my progress and we discussed the scope of the project. The original idea was to produce videos from several photos in a sort of slideshow, but after some discussion we decided to narrow the scope so that we were more likely to finish in time. We decided that if we could just take old photos and apply ML to “Colourise” them, that alone would be a cool way to explore the past using a modern technique.

With the scope of the project resolved, our next task was to come up with a name. One of our non-technical members, Karl, came up with “Colourful Past”; we all agreed that it fit the scope and described the project perfectly.

The rest of the day was spent furiously hacking away on various facets of the project.

We set up a Trello board to manage tasks and to store links and other information about the project.

We used Slack for general communication and link sharing when we couldn't just shout across the table.

Source code was uploaded to our GitHub org: https://github.com/colourful-past

In general things went really smoothly. The technical side worked really well: we were able to work efficiently and independently, then combine the results towards the end. On the non-technical side there were a few issues managing tasks and keeping everyone busy all the time, but in general we worked effectively together.

By the end of the first day we had a basic but working product. You could type in a search term such as “Anzac Day”; the client would then send an API request to the NodeJS server, which would call a number of Python scripts in parallel to query various datasets. The results were then aggregated and returned to the client.
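The fan-out step can be sketched roughly like this in TypeScript (the names and result shape here are my own illustration, not the actual project code):

```typescript
// Each dataset gets its own query function; in the real project these
// shelled out to Python scripts. The Photo shape is a guess for illustration.
interface Photo {
  title: string;
  imageUrl: string;
  source: string;
}

type DatasetQuery = (term: string) => Promise<Photo[]>;

// Query every dataset in parallel and flatten the results into a single
// list for the client. The .catch turns an individual dataset failure into
// an empty result so one broken source can't sink the whole search.
async function search(term: string, datasets: DatasetQuery[]): Promise<Photo[]> {
  const results = await Promise.all(
    datasets.map((query) => query(term).catch((): Photo[] => []))
  );
  return results.reduce((all, page) => all.concat(page), [] as Photo[]);
}
```

Running the queries concurrently rather than one after another matters here, because each Python script could take a few seconds against a slow government API.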

The user could then click a button to “Colourise” a black and white image. This makes another call to the NodeJS server, which invokes another Python script that uses a Machine Learning model developed by UC Berkeley and trained on 1.3 million black and white images to generate a coloured version of the historic photograph. The resulting image is stored in S3 and its URL returned to the client.
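That pipeline boils down to two asynchronous steps chained together. A hypothetical sketch, with runModel and upload standing in for the real Python and S3 code (none of these names come from the actual project):

```typescript
// Run the colourisation model on a black and white image, upload the result,
// and hand the client back a URL. The dependencies are passed in so the real
// implementations (spawning Python, putting to S3) can live elsewhere.
type RunModel = (bwImageUrl: string) => Promise<Uint8Array>;
type Upload = (imageBytes: Uint8Array) => Promise<string>;

async function colourise(
  bwImageUrl: string,
  runModel: RunModel,
  upload: Upload
): Promise<string> {
  const colouredBytes = await runModel(bwImageUrl);
  return upload(colouredBytes);
}
```

Keeping the endpoint this thin meant the slow part (the GPU inference) stayed entirely on the Python side.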

After I left on the Saturday evening to get some much-needed sleep, Houraan soldiered on and gave the site some design love. When I woke in the morning the site looked much improved; Houraan had done a phenomenal job.

Sunday was our final day, and we spent the first half adding the last bit of polish to the site, such as a really cool subtle gradient effect on the text.

Then we concentrated on the presentation and video which the judges would be using later that day. Molly did a great job putting together our video, which was uploaded to YouTube at the very last minute.

The presentation worked a little differently from last year. Instead of everyone getting up on stage and doing a slideshow in front of the judges, the judges came round to each team's desk, where we did a demo and a short talk before being asked a number of questions. Houraan, Karoline and Bruce nailed it for us while the rest of us watched and gave moral support.

(We had a little spare time, so we thought we would play on the “Colourful” aspect of our product a little by blowing up some balloons, producing a poster and dressing in bright colours.)

All in all it went really well and I'm very happy with the result. The judges seemed to think so too, as we came away with first prize in the “West Australian Community Prize” category.

Well, that's just about it. If you want to have a look at what we produced, you can try it out at http://colourfulpast.org/. We don't know how long we'll be able to keep the expensive AWS GPU instances going, so if you are viewing this post some time from now it might not work for you.

I just want to say a massive thanks to all my team-mates for making it an awesome weekend of fun hacking. Thanks guys, I hope to see you all next year!

(P.S. Big thanks to Karl for taking all the pictures!)

My Third Coding Epiphany

I have been meaning to write this post for a while now, and since I have spent most of this month back in the UK visiting friends and family I don't have much to share technically, so I thought it was about time I got it done.

Over the course of my 23 years of coding I have had a number of what I call “Code Epiphanies”: moments in my coding career where fundamental changes in how I code have taken place.

Like most, I started my coding career writing simple scripts; for me it was the odd bit of HTML, JS, PHP and ActionScript. It was simple imperative code, usually all contained in one file: “when this is clicked do this, then do this”, etc.

This way of coding served me well. It took me all the way to university, at which point I encountered Java and started to write larger and larger programs. I began to struggle as I noticed that I had many more classes and objects but no easy way to tie them together.

For a simple (contrived) example, suppose I have a “Player” object that wants to let the “PlayerManager” object know when the player has died. I would do something like the following:

public class Player
{
    public PlayerManager manager;

	...

    private void Die()
    {
        manager.OnPlayerDie();
    }
}

The “manager” variable would be set from the outside by whoever created the Player. It looks simple, but I found that as I had more objects and managers I was getting horribly bogged down, having to keep hold of references to PlayerManager in parts of the code which weren't even remotely related. It was causing my code to become complex and hard to manage.

That's when I had my first Code Epiphany: I discovered the Singleton. I no longer needed to pass my unrelated objects around; I could just access them directly within the player:

public class Player
{
    ...

    private void Die()
    {
        PlayerManager.GetInstance().OnPlayerDie();
    }
}

This was an incredible revelation to me as it opened my eyes to how important good architecture is as your program gets larger.

As the years went by, however, I started to notice issues with my Singleton-based architecture. Although it was okay for quick projects that weren't meant to last very long, as a program got bigger and bigger Singletons became more and more of an issue. I found that I couldn't easily swap out the PlayerManager for a different sort of PlayerManager without breaking a whole bunch of code; for example, I couldn't do the following:

public class Player
{
	...

    private void Die()
    {
        IPlayerManager.GetInstance().OnPlayerDie();
    }
}

I also found that Singletons were making my code very rigid. I was finding it hard to abstract parts of my code out into separate reusable libraries that I could use in future projects; with Singleton references all over the place it was becoming a bit of a spaghetti nightmare.

It was around this time that Flash was starting to become really big and I found myself writing more and more ActionScript. It was also around this time that frameworks were starting to explode in the Flash world. I remember experimenting with a whole bunch of them (PureMVC, Cairngorm, Swiz, etc.) before I came across Robotlegs.

Robotlegs was (and still is) a great MVCS framework. It creates clear separation between the different layers of your application: Models, Views, Controllers and Services. One of the most important things for me was how it did this: by using a concept that was new (to me at least), Automatic Dependency Injection.

Automatic Dependency Injection (DI) was my second Coding Epiphany. It did away with my hard-coded Singletons and replaced them with neat little “Inject” tags. First you would define your dependency mappings, such as:

public class Context
{
    private void setup()
    {
        injector.mapSingleton(IPlayerManager, PlayerManager);
    }
}

You could then just write your Player like:

public class Player
{
	[Inject]
	public IPlayerManager playerManager;

	private void Die()
	{
		playerManager.OnPlayerDie();
	}
}

Then, when you create a Player using the injector, the playerManager field will be filled with a PlayerManager instance:

injector.createInstance(Player);

This was a revelation to me as it meant I could create better isolation between my various classes. Player doesn't care where PlayerManager comes from, and it doesn't even care what the implementation is; it just wants to have the OnPlayerDie() method.
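The idea is easy to demonstrate with a toy container. This TypeScript sketch is my own invention, not Robotlegs (whose real injector is far more capable):

```typescript
interface IPlayerManager {
  OnPlayerDie(): void;
}

// A toy injector: maps string tokens to singleton instances.
class Injector {
  private singletons = new Map<string, unknown>();

  mapSingleton(token: string, instance: unknown): void {
    this.singletons.set(token, instance);
  }

  get<T>(token: string): T {
    if (!this.singletons.has(token)) throw new Error("Nothing mapped for " + token);
    return this.singletons.get(token) as T;
  }
}

// Player depends only on the interface; it never names a concrete manager.
class Player {
  constructor(private manager: IPlayerManager) {}
  Die(): void {
    this.manager.OnPlayerDie();
  }
}

// All the wiring happens in one place, away from Player:
//   const injector = new Injector();
//   injector.mapSingleton("IPlayerManager", new PlayerManager());
//   new Player(injector.get<IPlayerManager>("IPlayerManager"));
```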

This general concept was great. It applied to everything I did, be it ActionScript games, C# apps or Java backend code, and it created a nice separation of concerns for me. But I was still missing one major benefit of DI, which led me to my third Code Epiphany.

About 12 months ago I joined The Broth here in Perth, Australia. I joined as an ActionScript developer on a team that had been working on a Facebook and mobile game for over 3 years. This was my first time joining a project that had already been in development for years, and it was a real revelation to me.

Over the years the code had grown and evolved, to the point where it was starting to get really hard to maintain. I was afraid to make any changes, as I didn't have the years of experience to know which systems affected each other, or whether deleting something over here would cause something over there to break.

So I did some googling and decided to invest in Michael Feathers' book Working Effectively with Legacy Code and Uncle Bob's Clean Code videos.

Michael Feathers and Uncle Bob both based most of their discussions around automated testing. In fact, Michael Feathers defines legacy code as any code that doesn't have test coverage. I had heard about Unit Testing and knew it was something I should be doing, but I had never actually tried it.

Working on this 3-year-old project was the perfect opportunity to get to grips with unit tests. As Michael Feathers describes in his book, once you have your code under test (that is, once you can be sure it's doing what it should be doing), you are free to refactor the code so long as it keeps passing the tests.

One thing I learnt very quickly was that to make your code testable you have to be very careful how you structure it. Your tests get hard to write and maintain if your classes get too big or take on too many dependencies.

This was my third Code Epiphany. Keeping my classes small, with minimal dependencies, made things easier to test. A side benefit was that it made the code much easier to read and reason about.

Going back to my contrived example, I would now structure my Player like this:

public class Player
{
	private IPlayerManager _playerManager;
	
	public Player(IPlayerManager playerManager)
	{
		_playerManager = playerManager;
	}

	public void Die()
	{
		_playerManager.OnPlayerDie();
	}
}

I have removed the need for the [Inject] tag as now all my dependencies are just passed in the constructor.

I could then easily test it like this:

public class PlayerTests
{
	[Test]
	public void WhenPlayerDies_PlayerManagerInformed()
	{
		var mock = new MockPlayerManager();
		var player = new Player(mock);
		player.Die();
		Assert.IsTrue(mock.OnPlayerDieWasCalled());
	}
}

Unit Testing has forced me to write better, simpler-to-read code, purely because code that isn't would be hard to test. As Martin Fowler says:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

Well, that's where I'm at these days: trying to write easy-to-understand, testable code. I don't know what my fourth Epiphany could possibly be, but I'm excited to keep learning and improving.

New Tab Chrome Experiments – A new Chrome Extension

I really enjoy making Chrome Extensions. I love how fast it is to go from idea to implementation to availability on the store.

My latest extension, called “New Tab Chrome Experiments”, was only conceived on Sunday evening. I then spent yesterday coding it up and now it's up and available to download on the store.

As the name implies, it's an extension that replaces the “New Tab” page in Chrome. I have always been a fan of the ChromeExperiments.com website and thought: wouldn't it be cool if you could view one of those experiments each time a new tab is opened?

The extension works by first downloading the entire catalog of Chrome Experiments from the site. At first I thought I was going to have to scrape it, but then I went digging through the network traffic of ChromeExperiments.com and found that they are using an unpublished but nice API: “https://chromeexperiments-dat.appspot.com/_ah/api/experiments/v1/experiments”.
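Parsing that feed might look something like the TypeScript below. I'm assuming the payload follows the usual App Engine API shape of { items: [...] } and guessing the field names; the real feed may differ:

```typescript
// The field names here are illustrative guesses, not the real API schema.
interface Experiment {
  title: string;
  url: string;
  author: string;
}

// Pull the experiment list out of the raw JSON body, defensively, so a
// surprise response shape degrades to an empty catalog rather than a crash.
function extractExperiments(body: { items?: Experiment[] }): Experiment[] {
  return Array.isArray(body.items) ? body.items : [];
}
```

The extension can then fetch the endpoint once, run the body through extractExperiments, and cache the result locally so new tabs open instantly.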

With that data I can then randomly pick one experiment and load its URL in an iframe each time a new tab is opened. I also present a little info box on the left-hand side which tells you what experiment you are looking at and provides a link to its page and its author.
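The random-pick step is tiny; a sketch (using my own hypothetical Experiment shape, not the real extension code) with the random source injected so it stays testable:

```typescript
// Illustrative shape, not the real API schema.
interface Experiment {
  title: string;
  url: string;
  author: string;
}

// Pick which experiment the new tab should show. Passing the random source
// as a parameter lets tests make the choice deterministic.
function pickExperiment(
  experiments: Experiment[],
  random: () => number = Math.random
): Experiment {
  if (experiments.length === 0) throw new Error("experiment catalog is empty");
  return experiments[Math.floor(random() * experiments.length)];
}
```

The new-tab page then just sets the iframe's src to the picked experiment's url and fills the info box from its title and author.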

I wrote the whole thing in TypeScript using React and various other libs. I open-sourced it in case you're interested: https://github.com/mikecann/new-tab-chrome-experiments

You can download the extension from the Chrome Webstore here: https://chrome.google.com/webstore/detail/new-tab-chrome-experiment/ooopblodejpcihjoaepffbkkhfeeofhp

Anyway, it was a fun little experiment. I hope people enjoy it as much as I do!