Posts Tagged ‘blog’

Idea: Answering Tricky questions with Amazon Mechanical Turk

Friday, November 3rd, 2017

I know that the Echo gets the bulk of its responses from Alexa, but I think Amazon is missing an opportunity with their own mturk service.

Each time Echo responds with “sorry”, the transcript of that command is already flagged for staff review. If enough people ask the same thing, an appropriate response will eventually become available for Echo. But what if Amazon could create new responses to tricky questions in less than a day?

Using Amazon’s own Mechanical Turk service, Amazon could lower the threshold for human review, cutting the turnaround for creating better responses from weeks down to hours or even minutes.

For the uninitiated, Mechanical Turk is a service from Amazon where workers complete simple tasks in exchange for a tiny payment. The idea is to use the service for tasks that are easy for humans but hard for computers. Currently, mturk appears to be used mostly for transcription and image-recognition work.
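To make the idea concrete: each “sorry” transcript could be posted as a small task automatically. Here is a rough sketch of what that might look like with boto3’s MTurk client. The client and the QuestionForm schema are real, but the title, reward, timings, and question wording below are all invented for illustration, not Amazon’s actual pipeline.

```python
# Sketch: turn one unanswered Echo command into a Mechanical Turk task.
# The parameter values (reward, lifetimes, wording) are illustrative guesses.

def build_hit_params(transcript: str) -> dict:
    question_xml = f"""\
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>response</QuestionIdentifier>
    <QuestionContent><Text>Write a good spoken answer to: {transcript}</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""
    return {
        "Title": "Write a response for a voice assistant",
        "Description": "Echo could not answer this command; suggest a good reply.",
        "Reward": "0.05",                   # dollars, passed as a string per the API
        "MaxAssignments": 3,                # gather a few answers to cross-check
        "LifetimeInSeconds": 3600,          # the whole point: hours, not weeks
        "AssignmentDurationInSeconds": 300,
        "Question": question_xml,
    }

# With AWS credentials configured, submitting would look roughly like:
#   client = boto3.client("mturk", region_name="us-east-1")
#   client.create_hit(**build_hit_params("how many moons does saturn have"))
```

Asking for a few assignments per question lets the duplicate answers vote on each other, which is the usual mturk quality-control trick.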

Software will save us

Thursday, March 24th, 2016

Summary: Some old insights are still true. A 2015 chip has almost 2,000 times more transistors than a 1995 chip, but the steadily increasing demands software places on hardware have given us slower and slower systems. A basic task on a 1995 machine completes only slightly faster on a modern machine. Our computers are not 2,000 times better, and they do not make us 2,000 times more productive.

(Image: i7 PC cooling)

I was reading a press release from Intel on the feasibility of chip advancements after the next-gen 10nm chips. Ideas do exist, but nothing will deliver the power and speed that the brute-force “make it smaller with more transistors” approach has for the last 60 years.

It got me thinking about the end game. We have been privileged to sit back and enjoy ever-increasing gains from improvements to chip hardware. However, the nature of exponential growth means we have to hit a limit at some point. That point is fast approaching. It could be 2017 or 2020; it doesn’t matter, because in a sense we are already there.

We hit that limit around 2006. That was the point where chip manufacturers could no longer turn more transistors into ever-faster single cores. Instead, Intel and others began making chips with 2+ cores: the guts of two chips inside one chip casing. You could consider this cheating. These new multi-core chips needed special software written to run in parallel; software does not run faster simply by adding more cores, and the multicore overhead limits the gains. For example, the jump from 2 cores to 4 cores doubled the transistor count but ran the same multicore software only about 20% faster.
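That ceiling on multicore gains is usually framed as Amdahl’s law: if only a fraction p of a program’s work can run in parallel, n cores give a speedup of 1/((1 − p) + p/n). A quick sketch shows the effect, and the 20% figure above falls out exactly when half the work parallelizes (p = 0.5):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: p = parallel fraction of the work, n = core count."""
    return 1.0 / ((1.0 - p) + p / n)

# Doubling cores helps little while much of the program stays serial.
# At p = 0.5, going from 2 cores to 4 cores is exactly a 20% improvement.
for p in (0.3, 0.5, 0.9):
    two, four = amdahl_speedup(p, 2), amdahl_speedup(p, 4)
    print(f"p={p}: 2 cores -> {two:.2f}x, 4 cores -> {four:.2f}x")
```

Even a perfectly parallel program (p = 1) only ever scales linearly, and real programs never get close to that.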

Where do we go next after the completion of Moore’s law and the end of brute-force improvements to hardware? Optimizing what we already have. There are still improvements to be made to hardware, but nothing that will deliver order-of-magnitude gains over what we already have. Instead, software engineers are going to have to save the day.

Modern software won’t run very well on chips from 2006, but you can compare against similar software that was available at the time. The scary thing? Even with the boom of parallel processing, software has gotten slower.

Software bloat is the real reason computers slow down as time marches on. It’s a concept worth reading about on Wikipedia. I will summarize some of the causes:

1. Lazy software engineers. As hardware has improved hundreds of fold since the 1990s, software folks have enjoyed shipping new generations of software that are not very efficient, with the inefficiency hidden by the faster hardware.

2. Change in software tools. Prior to the 1990s, software engineers were limited in memory and speed. By necessity, they had to work much harder to create the most efficient solution, often writing code in assembly, one step above the machine’s own language. Today’s software folks use high-level languages and debugging tools. These are useful, sure, but they are far removed from the actual code that runs on the chip.

3. Software has become huge. No single person understands every part of what has been cobbled together. How could they? The modern operating system is developed by hundreds or thousands of individual people contributing millions of lines of code.

Many futurists talk about something called the technological singularity: the point where a computer can design a better version of itself, rewriting better and faster versions of itself at an incomprehensible speed, leading to an explosion of powerful computers that humans no longer understand. Too late…



Reminiscing about XBConnect

Sunday, January 17th, 2016

What is XBConnect? It was an online gaming service for the original Xbox, from before Xbox Live was popular. XBConnect used the system link (LAN) mode built into many Xbox games, essentially hijacking that local network traffic and rerouting it over the internet. XBC stayed relevant through the years due to two factors: 1. it was free, and 2. it was compatible with the “new” Xbox 360, which used the same system link as the original Xbox.
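The core trick, wrapping local system-link packets so they can travel over the internet, can be illustrated with a toy encapsulation scheme. To be clear, this is not XBConnect’s real wire protocol (that was closed source); the header layout and magic number below are made up:

```python
import struct

# Toy tunnel header: 2-byte magic, 2-byte session id, 4-byte payload length.
MAGIC = 0x5842  # "XB" in ASCII -- an invented marker, not XBC's real protocol

def encapsulate(frame: bytes, session_id: int) -> bytes:
    """Wrap a captured system-link frame for transport over the internet."""
    return struct.pack("!HHI", MAGIC, session_id, len(frame)) + frame

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Unwrap on the far end and hand the frame back onto the local LAN."""
    magic, session_id, length = struct.unpack("!HHI", packet[:8])
    if magic != MAGIC:
        raise ValueError("not a tunnel packet")
    return session_id, packet[8:8 + length]
```

A relay on each player’s PC would capture the console’s LAN broadcasts, encapsulate them like this, and send them to the other players’ relays, so each Xbox thinks the remote consoles are sitting on the same LAN.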


XBConnect was a great service for broke college kids who had access to non-dial-up internet. After writing several help guides, I was invited to be a moderator and eventually an admin of the forums, from 2004 to 2014. You can still find some of the guides I wrote, hosted here:

As of last year, XBConnect went offline for good, and it’s sad. The downfall started when the original author abandoned the software, leaving the remaining team with only a copyrighted, closed-source codebase to keep the servers running. They had permission to add new games to the game list, but no new features or bug fixes were possible.

What’s left? Some cool people who still remember XBConnect run a fan page on Facebook.

Other than that, there is not much. The website is gone and the servers are turned off. You have to scrounge the Internet Wayback Machine to find any mention of XBC.
XBConnect doesn’t even appear in the English Wikipedia; here is a Finnish one:


Final forum post count: 5,298

Let’s talk about luck

Monday, February 27th, 2012

Luck can be quantified. Each person has a set maximum amount of luck they can carry with them, and a set flow rate at which their luck is replenished.

Throughout the day, a person uses their luck all the time without even realizing it. Each time you take a risk, big or small, you use a portion of your luck reserves. Here is where people run into problems: using too much luck creates a luck deficit, and this is the cause of bad luck. A person who presses their luck all the time with a risky lifestyle will find themselves out of luck when they really need it.

Don’t worry too much about your own luck deficit. The important thing to remember is to be aware of events in your life and how much luck you are using to accomplish your goals. If you use a lot of luck in any given day, lay low for a while. Don’t take any additional risks. Depending on how much luck you used, your luck will be replenished in a few days or weeks.
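Taken literally, this model is just a token bucket: a capped reserve that drains on every risk and refills at a steady rate. A tongue-in-cheek Python sketch, with every number invented:

```python
class LuckReserve:
    """The post's model, taken literally: a capped reserve that refills slowly."""

    def __init__(self, capacity: float = 100.0, refill_per_day: float = 5.0):
        self.capacity = capacity
        self.refill_per_day = refill_per_day
        self.level = capacity            # start fully stocked

    def take_risk(self, cost: float) -> None:
        self.level -= cost               # can go negative: a luck deficit

    def lay_low(self, days: int) -> None:
        # Replenish steadily, but never past the personal maximum.
        self.level = min(self.capacity, self.level + days * self.refill_per_day)

    @property
    def in_deficit(self) -> bool:
        return self.level < 0
```

Press your luck past zero and only laying low for a few days (or weeks, depending on the size of the gamble) brings you back.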

Another important aspect is that luck can overlap between people and even be shared. This happens when two people are close emotionally, or sometimes just in physical proximity. A person in a luck deficit will naturally try to replenish their reserves, but sometimes that is not enough, and that person becomes a luck black hole to those around them.