It's been a while since I blogged about Post To Tumblr, my popular Chrome extension for Tumblr. I have been quiet but certainly not inactive.
One thing that has always bothered me about the extension, ever since version 1 when I started accepting donations, is that the only way to donate was via PayPal. People in countries where PayPal isn't available have therefore been unable to donate and unlock the advanced features. Until now I simply gave free access to those users who emailed me, but obviously this isn't ideal.
I like to listen to podcasts, mostly technical but with some science, business and general interest mixed in. Most of the shows I listen to have ads, and one that continually pops up is for Braintree. Braintree is a payments solution for online services, so I decided to investigate whether it would be a good fit for my Chrome extensions.
I discovered that they charge the same fees but have a generous $50k threshold before those charges kick in. They accept credit cards, which means they can be used in any country, and to top it off they have nice documentation and simple integration options, which is more than I can say for PayPal.
So I decided to make the jump and switch PTT over to Braintree.
Donations are all handled on my HTTPS Heroku page. Credit card info never touches my server; it is all handled via the Braintree iframe, so I am never liable for any financial risk, nor am I at any point breaking Google's Developer Policies.
Speaking of which: as soon as I published the update with the new Braintree payments integrated, I received this lovely email from Google:
Obviously I am rather wary of these sorts of generic take-down emails from Google, thanks to my Google Play ban. After emailing them back asking for clarification as to exactly which rule I was breaking, I received an email two days later stating that they had reviewed my extension and were going to reinstate it. No explanation as to why it was taken down in the first place…
Anyway, it's back, now integrated with Braintree; it all works and everything is right with the world, so I'm not going to pursue it any further.
Markd is a project I have been working on for Brandon over at pepwuper.com for a while now.
Quite simply, it's a Chrome extension and website that lets you quickly and easily bookmark people you come across on the internet. In Brandon's own words:
I wanted a better way to go about the process. A simple tool that would allow me to save someone into my “book of interesting people”, no matter which site I find them on. I wanted it to be easy with image/description automatically filled out. I wanted it to allow me to save reference images. And I wanted it to have a simple way for me to organise and search those I’ve bookmarked later on.
And so he contacted me to help him build out such a tool and Markd was born:
Once installed (and logged in) you can visit any site then just hit the little Markd icon on the browser toolbar:
This opens a window which auto populates from the page you are on:
Currently the auto-population only works for a number of the most popular sites, namely Twitter, LinkedIn, Behance, Facebook, Dribbble, DeviantArt, GitHub and Product Hunt, but if a site isn't on that list (or even if it is) you can easily edit any part of the Mark before it is saved. Images are scraped from the page and presented in a handy dialog:
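The scraping step is conceptually simple. Here is a minimal sketch of the idea (a regex-based approximation over an HTML string for illustration; the real extension reads the live DOM from a content script):

```javascript
// Collect candidate images from a page: prefer the og:image meta tag,
// then fall back to every <img src="...">, de-duplicated.
function scrapeImages(html) {
  const urls = new Set();

  // The og:image meta tag is usually the page's "best" image.
  const og = html.match(
    /<meta[^>]+property=["']og:image["'][^>]+content=["']([^"']+)["']/i
  );
  if (og) urls.add(og[1]);

  // Fall back to collecting every <img src="..."> on the page.
  for (const m of html.matchAll(/<img[^>]+src=["']([^"']+)["']/gi)) {
    urls.add(m[1]);
  }
  return [...urls];
}
```

The resulting list is what gets presented in the dialog for the user to pick from.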
Once saved you can view and edit your saved marks on Markd.co:
The two parts (Chrome Extension and website) were built with two different technologies.
I decided to go with Aurelia for the extension. I had been wanting an excuse to give it a try for a while, and this seemed like the perfect opportunity.
It's a framework for web app development from Rob Eisenberg, the same dude who worked on Durandal and Angular before deciding he wanted to go his own direction and create Aurelia.
Much like Angular, it's a template-based framework: you write an HTML view, then a backing "component" class which implements the "addTodo()" function, provides the "heading" variable and so on.
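As a rough sketch of the pattern (the todo names here are illustrative, not from the actual extension), the view is plain HTML with binding expressions and the view-model is just a plain class:

```javascript
// The Aurelia view (app.html) is plain HTML with binding expressions:
//
//   <template>
//     <h1>${heading}</h1>
//     <input value.bind="newTodo">
//     <button click.delegate="addTodo()">Add</button>
//     <ul>
//       <li repeat.for="todo of todos">${todo}</li>
//     </ul>
//   </template>
//
// The backing view-model (app.js) is just a plain class; Aurelia binds
// its properties and methods to the template by convention.
class App {
  constructor() {
    this.heading = 'Todos';
    this.todos = [];
    this.newTodo = '';
  }

  addTodo() {
    if (this.newTodo) {
      this.todos.push(this.newTodo);
      this.newTodo = '';
    }
  }
}
```

There is no framework base class to extend; that plain-class approach is a big part of what makes Aurelia pleasant to work with.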
In addition it provides many other nice things out of the box, such as easy-to-set-up routing and good support for TypeScript.
Because it is quite new I did have a few setup difficulties, but in the end I managed to get it all working and found it a pleasure to use.
I wasn't sure which backend tech to use for this; there were many options. In the end I decided to go with .NET (C#) hosted on Azure. Development was "relatively" simple thanks to the excellent Visual Studio tooling for C#. I did, however, have some difficulties providing my own authentication scheme using JSON Web Tokens (JWT). In the end I managed to get it working thanks to a number of helpful projects on GitHub.
Azure itself also provided its own share of problems, one such being that it seems to be impossible to hand over a resource group to another user. To me this is a pretty common use case for a freelancer who wants to hand over a completed project to a client.
We also ran into the automatic DB backups not actually running for some reason. For all its power, Azure can be a bit of a fickle beast; in the future I will probably stay away from it.
Markd was a great project to work on. I'm proud of the result, and the client is happy, which makes me happy. We have a bunch of other features we want to add, but that will have to wait for another blog post; for now though, you can check out the tool for free at:
Wow, I can't believe it's been one whole year since the last GovHack, the hackathon where groups of people use government data to hack together a project over the course of a weekend.
I had a great time last year on the “Should I Drive?” team. We used WA Main Roads and other data sources to try to answer the question “should I drive to my destination or take some other form of transport?”.
This year I set myself a goal: I wanted to build something involving Machine Learning / AI. It's a field of computing that's really hot right now, and I'm really interested in getting involved and learning more.
So, as with last year, after a brief welcome presentation by the organisers, competitors were invited to take the microphone and pitch ideas. Off-stage, the pitchers then threw together quick posters with their main ideas.
There were a couple of interesting ones, but the one that really caught my attention was from a guy who wanted to apply Machine Learning to historic photographs. After a brief discussion with him around his poster I immediately signed up. Twenty minutes later we had a team and were heading upstairs to find a quiet corner of the (awesome) Flux co-working space where GovHack was being held this year.
In total we had 8 members: 3 technical and 5 non-technical. That first evening was mostly spent planning who was going to do what and what the priorities were. The three of us on the technical side sketched out our plan and divvied up the work so that we could all work efficiently.
I took the front-end website and backend node host / API while Dominic took the Python code which would interface with the various data sets and Houraan did the code which would apply the Machine Learning to the images returned from the data sets.
When I arrived at Flux later that morning, I demoed my progress and we discussed the scope of the project. The original idea was to produce videos from several photos in a sort of slideshow, but after some discussion we decided to narrow the scope so that we were more likely to finish in time. We decided that if we could just take old photos and apply ML to "colourise" them, that would be a cool way to explore the past using a modern technique.
With the scope of the project resolved, our next task was to come up with a name. One of our non-technical members, Karl, came up with "Colourful Past"; we all agreed that it fit the scope and described the project perfectly.
The rest of the day was spent furiously hacking away on various facets of the project.
We set up a Trello board to manage tasks and to store links and other information about the project.
We used Slack for general communications and link sharing when we couldn't just shout across the table.
In general things went really smoothly. The technical side worked really well: we were able to work efficiently and independently, then combine the results towards the end. On the non-technical side there were a few issues managing tasks and keeping everyone busy all the time, but in general we worked effectively together.
By the end of the first day we had a basic but working product. You could type in a search term such as "Anzac Day"; the client would then send an API request to the NodeJS server, which would call a number of Python scripts in parallel to query various datasets. The results were then aggregated and returned to the client.
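The fan-out/aggregate step on the server is the interesting bit. Roughly sketched (the real server spawned the Python scripts via child processes; here the dataset queries are mocked as async functions, and the names are illustrative rather than the real dataset APIs):

```javascript
// Query every dataset source in parallel and merge the results.
// A failing source contributes an empty array rather than failing
// the whole search, so one flaky dataset can't take the site down.
async function searchAll(term, datasets) {
  const perDataset = await Promise.all(
    datasets.map((query) => query(term).catch(() => []))
  );
  // Flatten the per-dataset arrays into one response for the client.
  return perDataset.flat();
}
```

Running the scripts concurrently rather than one after another mattered here, since some of the government datasets were slow to respond.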
The user could then click a button to "colourise" a black and white image. This makes another call to NodeJS, which calls another Python script, which uses a Machine Learning model developed at UC Berkeley and trained on 1.3 million images to generate a coloured version of the historic photograph. The resulting image is stored in S3 and the URL returned to the client.
(above image is just a placeholder and was not generated by the AI)
After I left on the Saturday evening, to get some needed sleep, Houraan soldiered on and gave the site some much needed design love. When I woke in the morning the site looked much improved, Houraan had done a phenomenal job.
Sunday was our final day and we spent the first half adding the last bit of polish to the site, such as a really cool subtle gradient effect on the text:
Then we concentrated on the presentation and video which the judges would be using later that day. Molly did a great job putting together our video, which was uploaded to YouTube at the very last minute:
The presentation worked a little differently from last year. Instead of everyone getting up on stage and doing a slideshow in front of the judges, the judges came round to each team's desk, where we did a demo and a short talk before being asked a number of questions. Houraan, Karoline and Bruce nailed it for us while the rest of us watched and gave moral support.
(We had a little spare time so we thought we would play on the “Colourful” aspect of our product a little by blowing up some balloons, producing a poster and dressing in bright colours)
All in all it went really well and I'm very happy with the result. The judges seemed to think so too, as we came away with first prize in the "West Australian Community Prize" category.
Well, that's just about it. If you want to have a look at what we produced, you can try it out at http://colourfulpast.org/. We don't know how long we will be able to keep the expensive AWS GPU instances going, so if you are viewing this post some time from now it might not work for you.
I just want to say a massive thanks to all my team-mates for making it an awesome weekend of fun hacking. Thanks guys, I hope to see you all next year!
(P.S. Big thanks to Karl for taking all the pictures!)
A professional games developer who just can't stop tinkering with things