Thursday, 3 July 2014

Bow for nowt


    Last weekend, I made a bow. Not because I've had a hankering for one, but because I spotted the perfect material and had to give it a go.

    My friend Rebecca was replacing her bed. The old one had a lot of very springy plywood slats, far too good to let go to the tip. So I detached them, bound them up with tape, and brought them back home on the back of my bike.

They were too wide to make a bow as they were: the ends of a bow need to be considerably less stiff than the centre where you hold it. And a single slat wasn't long enough either, so I'd have to bolt two of them together.

    So I cut two of them on a taper, leaving a section of a few inches intact where they'd be bolted together. I then drilled three holes for the bolts, and applied a layer of No More Nails glue to the joining surfaces before doing up the bolts. This left me with a long and springy bow, but no means to attach a bowstring to it. Out with the saw again to cut a couple of notches at each end. My bowstring - a knotted piece of the orange baler twine that is ubiquitous on farms - simply hooked over the end of the bow into the notches.
Is it a good bow? Not being an archer I have no idea. But it takes the length of my draw without damage, and it feels as though I'm putting quite an effort into it. And it shoots bamboo canes down the lawn rather well. Next up, a pack of cheap arrows from Amazon; I'm not hard-core enough to do my own fletching.

Wednesday, 12 March 2014

Small computer, big data

    This post comes as one of the longest-running scripts I've ever created has just finished its work. In the last week of January I set my Raspberry Pi to the task of processing five years of news stories into a 20GB tree of JSON files, and here in the second week of March it has completed the task.
    Given that a PC has done the same job in a couple of days, the first question anyone would ask is simply this: why?
    My Pi runs all day, every day. It collects news stories from RSS feeds and stores them in a MySQL database, using somewhere under 2 watts, and it will draw that power no matter what I ask of it because it's plugged in all the time. I can touch its processor with my finger; it's not hot enough to hurt me. My laptop by comparison, with its multi-core Intel processor, board full of support chips, and SATA hard disk, uses somewhere under a hundred watts. I can feel the hot air as its fan struggles to shift the heat from the heatsink. I wouldn't like to hold my finger on its processor, assuming I could get past its heat pipe.
    So since I'm in no hurry for the data, running the script on the Pi uses a lot less power: roughly 2 watts for six weeks comes to about 2kWh, where two days of the laptop at 100 watts would be nearly 5kWh. This wasn't an exercise in using a Pi for the sake of it; the Pi was simply the most appropriate machine for the task.
    So having run a mammoth script on a tiny computer for a couple of months, how did I do it and what did I learn?
    The first thing I'd like to say is that I'm newly impressed with the robustness of Linux. I've run Linux web servers since the 1990s, but I've never hammered any of my Linux boxes in quite this way. Despite my script stealing most of the Pi's memory and processor power, it kept on with its everyday tasks, fetching news stories and storing them as always. I could use its web server - a little slowly, it's true - I could use its Samba share, and I could keep an eye on its MySQL server. Being impressed with this might seem odd, but I'm more used to hammering a Windows laptop in this way, and I know from experience the Windows box has not been so forgiving running earlier iterations of the same script.
    If anybody else fancies hammering their Pi with a couple of months of big data, here's how I did it. The script itself was written in PHP and called from a shell within an instance of screen, so I could connect and disconnect at will via ssh without stopping the script running. The data came from the MySQL server and was processed to a 64GB USB Flash disk. The Flash is formatted as ext4 without journaling, which I judged to be the best combination of speed and space efficiency. An early test with a different, FAT-formatted drive provided a vivid demonstration of filesystem efficiency: the FAT drive ended up using 80% of its space after only a short period of processing.
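    For illustration, the shape of the job was something like the sketch below - though note this is Node.js Javascript rather than the PHP I actually used, and the table and column names are invented for the example.

var mysql = require('mysql'); // npm install mysql
var fs = require('fs');
var path = require('path');

var db = mysql.createConnection({
    host: 'localhost', user: 'newsbot', password: 'secret', database: 'news'
});

// Stream rows out of MySQL rather than loading five years of stories at once
db.query('SELECT id, published, title, body FROM stories')
    .on('result', function (row) {
        // One directory per day keeps the tree manageable on the Flash drive
        // (assumes /mnt/usbflash/json already exists)
        var dir = path.join('/mnt/usbflash/json', row.published.toISOString().slice(0, 10));
        if (!fs.existsSync(dir)) fs.mkdirSync(dir);
        fs.writeFileSync(path.join(dir, row.id + '.json'), JSON.stringify(row));
    })
    .on('end', function () {
        db.end();
    });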
    The bottleneck turned out to be the Flash drive, a Lexar JumpDrive. Reading and writing seemed to happen in bursts: the script would run quickly for about 30s and then very slowly for the next 30s, purely due to disk I/O. In future I might try the same task with a USB-to-SATA hard disk, though I'd lose my power advantage.
    So would I do the same again, and how might I change my approach? I think the Pi was a success in terms of reliable unattended operation and in terms of low power usage on a machine I'd have had running anyway. But in terms of data processing efficiency it could have been a lot better. A faster disk and a faster computer - perhaps something with the Pi's power advantage but a bit more processor grunt, such as the CubieBoard - would have delivered the goods more quickly for not a huge extra investment. And the operating system, though reliable, could probably have been improved. I used a stock Raspbian, albeit with the memory allocation for graphics reduced as low as it would go. Perhaps if I'd built an Arch image with a minimum of dross I would have seen a performance increase.
    I used a Raspberry Pi for this job because it was convenient to do so: it uses very little power and I had one that would have been powered up anyway throughout the period the script was running. The Raspberry Pi performed as well as I expected, but I cannot conclude anything other than that it is not the ideal computer for this particular job. It is sometimes tempting, when you are an enthusiast for a particular platform, to see it as ideal for all applications; in this case that would be folly.
    The Pi will continue to crunch the data it collects, though on a day-to-day basis as part of the collection process. It'll be much better suited to that task: as a cron job running in the middle of the night, the extra work of a day's keyword crunching won't be noticed. And there's the value in this exercise: something that used to require a PC, a chunk of my time, and a little bit of code has been turned into an automated process running on a £25 computer using negligible power. I think I call that a result, don't you?

Friday, 24 January 2014

Can you run a small business with the Raspberry Pi?

    There are three Raspberry Pi computers scattered around our flat. A 512MB Model B is an application server running my keyword analysis system 24/7, my original 256MB Model B serves as a Raspbmc set-top box, and a 256MB Model A serves as a general-purpose hardware and software hacking platform with an attached camera.
    I am a demanding user of the first of those three: the keyword analysis system involves gigabytes of data and processor-hungry scripts. It's not the fastest machine on the block by any means, but after 18 months or so of continuous Raspberry Pi use for this application I am very impressed with how little intervention it has required. After most of a career running similar coding and database tasks on office Windows server machines, I find myself appreciating the Pi for a reason beyond its low cost and low power: its reliability, in both software and hardware terms.
    If you run any kind of business you do not view your computing needs as a home user might, in terms of hardware cost. Instead you price IT as an ongoing investment over the lifetime of the kit, in which the cost of support, licensing and upkeep may significantly outweigh the purchase price of a computer. Looking at my experience with the Pi as an application server, I can't help wondering whether its reliability and stability might make it a surprisingly good fit in a business environment, at least for a small business if not in some cases a larger one.
    So what does a small business need from its IT systems? Every business is different of course, but if you were equipping a generic office network for the first time you might reasonably expect to have the following components:
  • Desktop computers
  • A file server
  • Some means of sharing a printer
  • An email server
  • A firewall and internet connection
  • Network infrastructure - let's go with wired Ethernet here and not start talking about Pi-based wireless hotspots
   As a thought exercise it is worth considering how each component might be addressed using a Raspberry Pi, and what benefit, if any, that choice might bring.
    Desktop computers: One of the first things most people will do with their Raspberry Pi is write a copy of Raspbian to an SD card and type "startx" at the command prompt once they've booted it and logged in. So it's beyond doubt that given a keyboard, mouse, and monitor the Pi is a desktop computer. But how would it perform in a business environment?
    Software's no problem. Raspbian benefits from a huge library of Linux packages; there's no need to fork out to Microsoft for an Office licence when you can run LibreOffice, for example. But I know if I was using my Pi for office work I'd find myself wishing it was a lot faster. I seem to remember an early description of the Pi as being like a Pentium II with a very fast graphics card, and without the ability to make use of those GPU capabilities the Pi's slow-speed Achilles heel is only too obvious. Like many Pi users I look forward to receiving stable OS builds featuring the Wayland support we were shown a preview of last year.
    Benefits of a Raspberry Pi business desktop? Low initial cost, low maintenance cost - simply replace defective hardware with a new one for £25 - no software licence fees, low power consumption.
    Disadvantages of a Raspberry Pi business desktop? It may be the fastest desktop you can buy for £25, but undeniably it's not the fastest desktop you'll ever use.
    A file server: This is something the Raspberry Pi can do very well. A headless Linux box has no worries about graphics speed, so is only held back as a file server by the speed of its network card and disk drive. Plug an external USB drive into a Pi and it's true you don't have an enterprise-class server, but it will still offer perfectly adequate performance for a small office network. This guide to setting up a Samba file server uses Arch Linux, but as my keyword tool server proves every time I pick up data from its share with my Windows laptop, the same setup works just as well with Raspbian.
    Benefits of a Raspberry Pi file server? Extremely low cost, reliable hardware, low power consumption.
    Disadvantages of a Raspberry Pi file server? Some command line admin is required to set it up, especially if your needs extend beyond simple open-to-all file shares.
    Printer sharing: Nowadays you don't have to pay much money for a printer that can connect itself to a network, and it probably makes the most sense to do that if you can. After all, in a business network the simplest method of getting what you need beats the most technically interesting one. But this is a piece about using the Raspberry Pi, and a Pi can make a very good network printer sharing device. So here's a tutorial about setting up CUPS on a Pi and sharing it on a network.
    Benefits of a Raspberry Pi print server? Flexibility to print to whatever device - or even software - you want to set up.
    Disadvantages of a Raspberry Pi print server? Requires command line admin to set up, more complex than using a network printer in the first place.
    Email: Again, from a business perspective, does it make sense to use a Raspberry Pi as an email server when you can buy any one of a multitude of cloud-hosted email products for your organisation? If it were me I would outsource my email every time and gladly pay for it, but for those who really need their email in-house, here's a tutorial describing a Raspberry Pi email server.
    Benefits of a Raspberry Pi mail server? Reliability, low power consumption, and complete control of your own email.
    Disadvantages of a Raspberry Pi mail server? Significant admin skills required to set up and run.
    Firewall: Here a Pi can undeniably do an extremely good job. Installing OpenWRT on a Raspberry Pi turns it into a very effective firewall. But yet again this is a feature offered at commodity level by almost all domestic and small business routers. As with printer sharing, in business it pays to go for the easiest way to get what you need.
    Benefits of a Raspberry Pi firewall? Huge flexibility that may not be offered by an off-the-shelf router. Any protocol you want to run through it can be set up.
    Disadvantages of a Raspberry Pi firewall? Significantly more complex than an off-the-shelf router firewall, requires admin skills to set up.
    So there's a small business network with Raspberry Pi desktop machines and a Raspberry Pi file server, and all for Raspberry Pi prices. Network printing is probably better built into the printer, the firewall is probably best left to a commercial router, and email is a lot less hassle in the cloud. All that's missing from most business requirements is a business-level portable Raspberry Pi, maybe somebody's working on a professional laptop enclosure as I write this.
    The question is, would I have run my business on a network like this? Would I drop my nice modern laptop with dual boot Windows and Ubuntu to develop on a machine with a lot less speed? Probably not. But if my line of work didn't rely on raw computing power and needed instead some reliable document processing would I consider this as an alternative to a heap of Dells at several hundred quid each, a hefty Windows Server licence bill, and an ongoing relationship with an IT support company?
    I think I'd be silly not to give it a second look, don't you?

Wednesday, 13 November 2013

Developing the Oxford Dictionaries Quick Search app

    Well here we are after what seems like an age of tweaking, the Oxford Dictionaries Quick Search app is finally available for installation. It can be found on the iTunes website here, and on Google Play here. The Windows Phone 8 version should with luck be out in a few days.
    As the developer of course I'm going to say it's a brilliant app, but that would verge on shameless astroturfing. What I can say is that it's a simple and lightweight free English dictionary look-up app that I hope users will find useful.
    Under the hood, it's a client for the Oxford Dictionaries API. This means it requires a data connection to run, but it brings the advantage that the app takes very little space while providing the most comprehensive and up-to-date dictionary entries. Though it's hardly a novel use for the API, it does demonstrate the functionality and speed of the service, as well as how easy the API is to develop against.
    The app uses the PhoneGap cross-platform HTML5 app framework with jQuery and jQuery Mobile providing the Javascript heavy lifting and user interface respectively. These packages have allowed us to deploy the app on three platforms in quick succession with minimal investment, something we could not have done had we been required to write all three versions natively. The quirks of the different HTML5 implementations have caused us a few headaches along the way, but not significantly more than web developers are used to when dealing with different browsers.
    It's been interesting to compare side-by-side the ease of development on the different mobile platforms. Android is the easiest as you'd expect, but with a Wild West of devices and OS versions out there it needs to be. We've been scouring our colleagues for odd Android versions and form factors to try our app on, yet I'm sure we've not tried them all. In particular we decided that with 25% or thereabouts of the Android market we couldn't abandon support for version 2.3, so we've had to contend with its incomplete font support and sometimes shaky HTML5 implementation.
    iOS, by comparison, with its fixed set of devices should be easy to develop for, but becomes more effort thanks to the stringent demands of Apple. Attaching different iOS devices to our development environment can at times be a challenge, and the ballooning demand for supporting resources such as splash screens, icons and screenshots for different resolutions and OS versions sometimes feels as though it is getting out of hand. However, the App Store approval process was much quicker than we expected it to be.
    The Windows Phone 8 development environment shows Microsoft's typical attention to detail. The supporting resource requirements are well thought out, getting the app on a device is straightforward, and the free version of Visual Studio is a delight to use. However, the fact that it will only run on 64-bit Windows 8 seems rather strange, and Microsoft have not quite shed their reputation for quirky HTML environments; I didn't expect the problem we had with an animated GIF loading spinner, for instance.
    So it's been an interesting experience. I'd recommend PhoneGap to anyone wanting quick development of multi-platform mobile apps, though it's provided a few learning experiences of its own.

Tuesday, 5 November 2013

Fixed jQuery Mobile footers on Windows Phone 8

    Here's a solution to something which baffled me for a while: making a jQuery Mobile footer that stayed at the bottom of the screen and didn't scroll away with the content in a PhoneGap app on Windows Phone 8.
    jQuery Mobile is usually pretty good at making fixed footers. The code below works fine on Android, using data-role and data-position attributes on the footer div.

<body> 
<div id="container" data-role="page" data-theme="f">
    <div id="header"  data-role="header" data-position="fixed" data-tap-toggle="false">
    Header content here
    </div><!-- /header -->

    <div id="content" data-role="content">
    Page content here
    </div><!-- /content -->

    <div data-role="footer" data-position="fixed" id="footer" data-tap-toggle="false">
     Footer content here
    </div><!-- /footer -->
</div><!-- /page -->
</body>

    Unfortunately, on Windows Phone 8 it places the footer at the base of the screen but does not allow the content to scroll underneath it. Thus your footer scrolls up the screen with the content, hiding a bit of content as it goes and looking really horrible.
    You might think breaking out of jQuery Mobile by losing the data-role and data-position attributes and then applying a fixed position and a z-index to your footer would do the job. After all, it's the standard fix for some of jQuery Mobile's footer quirks on iOS. But sadly all that does on WP8 is anchor the footer to the bottom of the page rather than the bottom of the screen, resulting in a long scroll to see it. Less obviously broken, but still not what we want.
    My solution then was to apply a fixed height to the content div and allow the default WP8 overflow:auto; CSS to make its content scroll out of sight beneath it. I removed the data-role and data-position attributes from the footer and used a little piece of Javascript to calculate the height of the content div from the heights of the surrounding divs and set it. Not necessarily the most elegant solution, but one that should work reliably across a range of devices.

<body> 
<div id="container" data-role="page" data-theme="f">
    <div id="header"  data-role="header" data-position="fixed" data-tap-toggle="false">
    Header content here
    </div><!-- /header -->

    <div id="content" data-role="content">
    Page content here
    </div><!-- /content -->

    <div id="footer" data-role="footer">
     Footer content here
    </div><!-- /footer -->
</div><!-- /page -->
<script>

//NB the code below references jQuery, not included in this HTML for simplicity

// Sum the pixel values of a set of CSS properties on an element
function cssTotal(selector, properties) {
    var total = 0;
    $.each(properties, function (i, property) {
        total += parseInt($(selector).css(property), 10);
    });
    return total;
}

// Vertical space taken by the header and footer, including margins and padding
var headerSpace = cssTotal('#header', ['height', 'marginTop', 'marginBottom', 'paddingTop', 'paddingBottom']);
var footerSpace = cssTotal('#footer', ['height', 'marginTop', 'marginBottom', 'paddingTop', 'paddingBottom']);

// The content div's own margins and padding - its height is what we're about to set
var contentSpace = cssTotal('#content', ['marginTop', 'marginBottom', 'paddingTop', 'paddingBottom']);

// Whatever is left of the screen belongs to the content div
var contentHeight = window.innerHeight - headerSpace - contentSpace - footerSpace;

$('#content').css("height", contentHeight + "px");

</script>
</body>

    I hope this helps put you on the right path. There seems to be frustratingly little documentation out there on what WP8 does and does not support, and on workarounds for what seem to be common problems. With luck this fix has plugged one such hole.

Thursday, 24 October 2013

There will be no iPad killer

    This week brought the news that Nokia have launched their long-rumoured tablet running Windows RT. Despite their woes of the last few years when it came to understanding what their consumers wanted, when Nokia get their act together they are still capable of making some of the best hardware there is. It's by all accounts a decent effort, and the word is that if you want an RT tablet it's the one to get.
    The trouble is, it wasn't long while reading about the new Nokia that I read the dreaded phrase "iPad killer". And it looks as if Nokia and Microsoft themselves believe that description because they've priced it in iPad territory, at just under 500 quid with a keyboard.
    I can't remember when I first heard a device described as an iPad killer. Probably not long after the iPad came out. Just for fun I tried to remember a few of the devices once described as iPad killers. Here they are, just a few of many.

  • HP TouchPad
  • Motorola Xoom
  • BlackBerry PlayBook
  • Toshiba Thrive
  • Sony S1
  • Microsoft Surface RT
  • Samsung Galaxy Tab (the original one)
    All of these were launched with a fanfare and priced against the Apple product, yet with the possible exception of the Samsung they all flopped and sank without trace. The BlackBerry and the HP were both particularly nice devices, yet both ended up being sold at fire-sale prices. The HP famously flopped so badly that HP dumped webOS overnight and pulled out of the tablet business. Evidently being an iPad killer is a tough business.
    Here's the thing. Despite what the fanbois will tell you, the iPad isn't anything special. All it's got is the Apple logo and all those apps; otherwise its hardware is not too different to its competitors'. But consumers don't care about the niceties of different processors or display technologies (beyond Apple's rather meaningless "retina" marketing fluff), they just know they don't want the tablet equivalent of a Betamax video. So if they're asked to pay iPad money for something that isn't an iPad, they'll know a risky deal when they see one and walk away. Meanwhile each successive marketing team makes the mistake of believing their own hype, and yet another device heads towards the dustbin.
    So sadly the Nokia tablet will fail. It will do so on price alone; that consideration aside, the Microsoft Metro interface is a joy to use and Nokia hardware is beautiful. If they forgot the iPad and sold it for half the price it would be an unexpected success; as it is, it'll be yet another tombstone in the iPad killer Boot Hill. You'd think a company and an OS vendor desperate for market share at all costs would think about that.
    The title of this piece is "There will be no iPad killer". That's not to say that the iPad will never lose its place as the tablet to own, more that as Apple lose the ability to give it meaningful differentiation its position will inevitably be eroded by ever cheaper and more numerous competition. If I were marketing a tablet I'd rather my device beat that competition than took a pop at the iPad. Let the fanbois have it.

Monday, 22 July 2013

Letter to Tony Baldry MP on internet filtering

    David Cameron has picked up the torch of saviour of the nation from internet porn. I think that this, like so many other pronouncements from politicians on the subject of the internet, is largely a piece of think-of-the-children soundbite politics based on little or no knowledge of the subject.
    Because I think there is a real risk of this escalating into an unacceptable level of interference in the workings of the internet I penned the following letter (slightly edited to remove a personal reference) this morning to my MP, Tony Baldry. It won't change anything on its own but since MPs judge the strength of feeling on an issue by the size of their postbag it might have some effect.

Dear Tony,
   I'm mailing you today to express my professional concern as a constituent about the Prime Minister's recent proposals on internet pornography. I feel they owe more to soundbite politics and the readers of the Daily Mail than they do to practicality, and that they risk placing a burden on the UK internet industry at a time of economic turmoil.
    I'm a search engine and web language specialist by trade. I have worked in the past for Google and in our local web and search engine marketing industry and my current job is with a large publishing house. I make huge web sites of scholarly content and ensure that the search engines see them in the best light.
    I am concerned because I feel that the Prime Minister is indulging in soundbite politics without first ensuring that what he is proposing is either practical or not already in place. He has made several points, as I understand it: filtering of search terms, internet filtering software, and banning extreme porn including rape scenes. I'll address each one from a professional perspective.
    The proposal with respect to filtering search terms is that the search engines block offensive terms, so a search for porn might give the user a warning page and no results. I feel that this is a noble intent, but ultimately doomed. As any lexicographer will tell you, language does not obligingly stay in one place. The porn consumers and their industry will move their vocabulary faster than those blocking terms can react, and we risk a situation similar to that of the "legal highs" industry, in which new drug chemicals have to be individually identified and banned at a snail's pace. Something tells me that the Government will not expect to bear the cost of this process, so the internet industry will face yet another unnecessary burden, following in the footsteps of confusion over accessibility requirements and the European cookie law.
    I feel that the Prime Minister cannot have set up a personal internet connection in recent years. If he had, he'd know that they already come with filtering software. As part of my job I need to turn mine off from time to time, so I'm fully aware of their existence. It is possible that there is no legal requirement for them to be turned on, but in my experience internet providers turn them on by default anyway.
    The Government has already enacted a ban on extreme porn, and child porn has been illegal for decades. The online trade in child porn material left the web for other forms of internet traffic in the 1990s, and where it is traded online it is not in a form that can be blocked by filtering software or search engines. Paedophiles already have a huge amount of law enforcement effort directed at them. Extreme porn may be more visible - it's hardly a subject in which I'm an expert - but I seem to remember that the Government has made something of a fool of itself when it has tried to prosecute people for its possession.
    Of course the illegal end of the porn industry must be dealt with, and it makes sense to ensure that an ISP-filtered internet is available for youngsters. My point is that most of what is needed to do this is already in place, and that the Prime Minister risks making a fool of himself by indulging in one of the Conservative Party's periodic episodes of wrapping itself in morality. You will remember John Major's "Back to Basics" campaign and its somewhat dismal effect on the electorate as a string of scandals engulfed the party; with a series of allegations relating to paedophile politicians and the Wrexham children's home doing the rounds, I feel history could repeat itself.
    Now *please* do not reply to this with the default "Think of the children" argument beloved of politicians. It has become such a cliché that there is an entire genre of internet memes devoted to making fun of it. As an industry we are already thinking of the children as I hope I've demonstrated above. Instead I'd urge you and your colleagues to be cautious when making moral pronouncements with respect to the internet, and to seek technical advice before indulging in soundbite politics.
Thanks,
J. W. List

Saturday, 12 January 2013

A month with an Intel smartphone: Motorola RAZR i review

    It's been a month since I received my upgrade from Orange, a shiny new Motorola Razr i. I swore I'd never touch another Motorola after being caught out when they didn't upgrade my DEXT beyond Android 1.5 despite releasing version 2.1 for its American counterpart, but here I am suckered into owning another Moto.
    So why did I pick the Razr i over the competition? After all, there are a whole slew of rather good phones out there at the moment and this one might seem a bit of an outsider.
    At this point most phone reviews go into a great long spiel about the minutiae of differences between near-identical smartphones, talking about screen technologies, fractions of a millimetre in device thickness, minor screen size variations and pointless manufacturer-installed software bling. But it's a futile exercise. Within reason, pretty much all phones at a particular price point are functionally identical; one black slab these days is fairly interchangeable with another of similar specification. What matters in a phone is this: will it run my apps quickly enough, and will its hardware ever let me down? In more specific terms, is it OK for making calls, does it run a decent operating system, does it have a reasonably quick processor, and does it have a decent camera for its price? If it satisfies those criteria and doesn't come with an outrageous price tag, that's all that needs to be said. If you want a traditional review of the Razr i then most tech sites should have one by now; meanwhile, here are my impressions as a user.
    So here are the basics: Build on the Razr i is good, it feels solid with an aluminium frame, Kevlar back and Gorilla glass front. The screen is an OLED job, nice and bright with plenty of space and resolution for desktop site browsing. It's far better than my DEXT was at getting 3G signals in rural areas and it doesn't lose calls as frequently. It's not quite as good as most Nokias at conjuring signals out of nothing though. The OS is Android 4.0, thankfully without Moto's awful MotoBlur interface, and an upgrade to 4.1 is promised. As a DEXT owner that brings forth hollow laughter, but at least by the time 4.0 feels old there will be third party ROMs available for it. The camera is not as good as those on the best phones on the market but it is perfectly acceptable for the price and has a few tricks up its sleeve, of which more later.
    The Razr i's party piece, though, and the feature that attracted me to it, is its processor. It has an Intel processor rather than the more common ARM, and it is one of the first Intel-powered phones to move beyond the Intel reference design.
    The Intel processor in the Razr i is a single-core device, as opposed to the multi-core configurations usually found in ARM phones. It makes up for this with a faster clock speed: at 2GHz, nearly twice that of its ARM competition, and enough to run Android and its apps at a truly blistering pace.
    An Android phone with this processor faces two problems, and on how well Intel have tackled them will ride the success or failure of their push into smartphones. First, the Intel instruction set is not the same as the ARM instruction set, so there might be an expectation of software incompatibilities with Android apps designed and tested on ARM devices. And second, such a high clock speed might be expected to shorten battery life, as faster processors run hotter than slower ones.
    Based on a month with an Intel smartphone I think they've done a pretty good job. On software incompatibilities there has been no issue save for the unavailability of one app, BBC iPlayer. Since this depends on Flash, a dead mobile technology if ever there was one, I can forgive them for this. In fact the lack of Intel support rather proves that Flash is dead on mobile, for if it was still alive it would surely have been ported by any of the rather large parties involved.
    And on the power consumption front I think they've succeeded too. Intel have a lot of experience in their more traditional markets making silicon that adapts its clock speed and thus power consumption for portable use, and this has resulted in a phone that I need to charge every other day in general usage. Considering that it's not uncommon for smartphones to barely last a day on one charge, that's pretty damn good.
    The camera is one of the make-or-break pieces of hardware in a phone for me. In hardware terms the Razr i's sensor is not as good as some of its competition: at 8MP it lacks the resolution of more expensive phones, and its lens is nothing to shout about. But that said, the hardware is perfectly acceptable, and the way it has been implemented makes it stand apart from other phones in its price bracket.
    This camera is fast. Really fast. And it has a mode in which it starts from sleep with a single press of the shutter button. I can wave goodbye to fiddling with an unlock sequence to take a picture, to those camera phone pictures that failed to catch fast-moving subjects due to shutter lag, and to that embarrassing wait while the phone saved your latest JPEG. Press the Razr i's shutter button, and that's the photo taken and saved. No messing about, on to the next one. As someone who takes a lot of camera phone pictures, that has changed the way I use my phone; it really is a point-and-shoot device.
    There's only one feature of the camera software that sticks out from the crowd: it has an HDR mode. HDR, for the uninitiated, stands for High Dynamic Range, and it refers to composite photographs created from multiple shots of the same scene at different exposures, to ensure that all parts of the scene are at optimum exposure.
    The trouble with HDR is that, like all new toys, there is a tendency to push it a little too far. Thus if you search Flickr for HDR pictures you'll find reams of startlingly garish pictures in which the photographers have turned the software up to 11 without considering whether or not it makes a better picture. So those three letters don't always instil confidence; they usually mean something a little painful to look at.
    The HDR mode on the Razr i is fortunately not turned up to 11. Usually it brings out the detail in shadowed areas of your scene and results in a brighter picture: exactly what you want from a snapshot camera. However, it sometimes produces a picture with a bit too strong an HDR effect, and at other times it has problems mixing the different exposures. So I find it broadly useful, but sometimes capable of getting it wrong.
    Here are some example pictures: First, an outdoor shot in bright sunlight. HDR above, no HDR below. Probably the camera at its point-and-shoot best, it may not be capturing the nuances of a professional model but as a snapshot camera it produces pictures that are bright and full of detail.

 Here the scene is a little more challenging, an overcast day and a tree against the sky. Again the HDR is the upper picture. It's done a good job with the pub, but straight away you can see in the branches of the tree that the HDR algorithm is having problems deciding which exposure to use.

 Now we're pushing the camera to the limit with a night-time shot. As expected, there is plenty of noise present in these images. However, the left-hand HDR image does manage to pull out more detail, for instance the car numberplate is legible. 


    So would I recommend the Razr i to a friend? After all, the market is very crowded at that level and there are some real contenders; why buy something a little off-the-wall when you can have a Nexus 4, for example? The answer's simple. I'd recommend the Razr i to someone who wanted a quick phone with a very quick camera and had it on offer as a carrier upgrade. For someone paying up front for a phone, I'd suggest they look at getting a phone with a cast-iron guarantee of receiving Android upgrades while its technology can support them. Sorry Moto, you've made a really great phone here, but I still can't forget your cavalier attitude to Android upgrades in the past.

Friday, 4 January 2013

The most valuable piece of code I ever wrote

    Thinking about a planned interactive feature for the OxfordWords Blog recently, I was reminded of a little piece of Javascript which is probably the most valuable piece of code I ever wrote. Valuable in terms of revenue generated for the customer, that is, rather than value to me, for it took a very short time to write.
    It was mid-afternoon in 2007 or 2008 when one of my customers at the time rang up with an idea for a little feature for his web site. His company is a rather large second-hand vehicle specialist and he's one of those customers for whom I have a lot of respect: no-bullshit, but fair in return, and one of those guys you can learn stuff from.
    The second-hand vehicle business works over the telephone. If they can get you on the phone they're pretty good at persuading you to part with your cash and drive away in one of their machines, so their conversion problem lies in getting the customer on the phone in the first place.
    So their site, a large catalogue of vehicles, was and still is plastered with their phone number. No need for a shopping cart or online payments, their industry has enthusiastically gone online but their customers still like to deal with someone directly when parting with cash.
    The problem facing my customer was that his conversion rates were still pretty low. Our spiffy site was generating him lots of traffic so he knew the customers were interested, but they were browsing and shopping around rather than giving him a ring in sufficient numbers.
    His idea was a simple one: if a customer stops on the page for a particular vehicle for any length of time, they must be interested in it. So he asked me to make a little pop-up that asked the question "Do you want us to call you about this vehicle?" the first time a customer stopped on an individual vehicle for more than a minute. Fill in your name and number, click the "Yes" button, and an email went off to his salesmen, who'd give you a ring.
    Coding it took about half an hour. A hidden div containing the HTML form, a little bit of Javascript with a timer to unhide it after a minute, a bit of code to set a cookie so the user didn't get bothered by the form more than once, and an extra address for his form-to-email script. Nowadays I'd use a line or two of jQuery code and probably a fade or something, but back then it was straight Javascript. Still, hardly a big job, and I had it ready for his approval by the end of the day and live on the site the next day.
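    For the curious, the jQuery version I'd write today would look something like this - a sketch only, with hypothetical element IDs and cookie name, and the form markup and form-to-email script left out:

// Show the hidden call-to-action div after a minute on the page, once per visitor
$(function () {
    // Don't nag visitors who have already seen the pop-up
    if (document.cookie.indexOf('seenCallbackForm=1') !== -1) return;

    setTimeout(function () {
        $('#callback-form').fadeIn();
        // Remember for a year that this visitor has been asked
        document.cookie = 'seenCallbackForm=1; max-age=31536000; path=/';
    }, 60 * 1000);
});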
    A second-hand vehicle dealer like my customer buys his vehicles at auction, mostly not-very-old vehicles in bulk from the fleets run by large corporates. He then services them and gives them a current MOT test and warranty before offering them to his customers. His is the reputable end of the second-hand vehicle market, so his customers pay a premium for good-quality vehicles with a provable history, something they can't get from dodgy used car lots. He thus has quite a high turnover and running cost, but the margin on each vehicle sold is also fairly large. If he sells a vehicle by a means that didn't cost him much money, he's made a four-figure sum.
    Hence my half-hour piece of Javascript was the most valuable piece of code I've ever written. Because it provided him with many more conversions from his web site at a very low cost, the first vehicle sold through it paid for it many times over, and it made him many thousands of pounds thereafter. I'm guessing over the years it will have generated an astounding amount of money, for even though the company I worked for then has since folded in the recession and the customer's site now runs on a different platform, it still features an updated version of my pop-up form.
    I'm glad that it was such a small piece of code that did so well for my customer. The customer went away happy and rewarded us with more business and lots of word-of-mouth recommendation, and I learned something important about calls to action and that not all industries fit the same web shop model.
    If only all my code proved to be of such value to the people paying for it!

Tuesday, 18 December 2012

Why electrical network frequency analysis might be unsafe to trust in court

    Tl;dr: Electrical network frequency analysis involves analysing the frequency of recorded mains hum to verify the time a recording was made, and that it has not been edited. This piece expresses concern that it could be fooled using readily available computer equipment, and makes a suggestion as to how that might be prevented.

    Electrical network frequency analysis has been in the news recently. It offers a solution to the problem facing courts when dealing with audio recordings: that of establishing the time a recording was made, and that it has not been edited or tampered with.
    It works by analysis of any mains hum present on a recording. The mains electricity system uses AC, or alternating current, which is to say that its current changes direction many times a second. AC power cables are thus surrounded by an oscillating magnetic field, which induces a tiny AC voltage in any electronic equipment that comes within its range. If the electronic equipment is a tape recorder then that tiny AC voltage will be copied onto any recordings it makes, resulting in a constant, detectable background hum.
    In the UK our AC power grid operates at a frequency of 50Hz, which is to say it completes 50 full cycles every second. All our power lines are connected to the same grid, so when there are minute variations in the frequency of the grid power - in response, for example, to instantaneous surges in demand - those variations will be identical everywhere in the country. Thus if you were to store the frequency of the grid power as it varies over a period of time, you could identify when a recording was made within that time by comparing the variations in frequency of any mains hum it contains with your stored values for mains frequency.
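    To make that concrete, here's a sketch of the matching step in Javascript: given a reference log of grid frequency sampled once a second and a shorter series extracted from a recording's hum, slide one along the other and find the offset where they agree best. (The names are mine, and real forensic software is doubtless far more sophisticated.)

// Find the offset into the reference grid-frequency log (one reading per
// second) at which the hum series extracted from a recording fits best
function bestOffset(referenceHz, recordingHz) {
    var best = { offset: -1, error: Infinity };
    for (var offset = 0; offset + recordingHz.length <= referenceHz.length; offset++) {
        var error = 0;
        for (var i = 0; i < recordingHz.length; i++) {
            var d = referenceHz[offset + i] - recordingHz[i];
            error += d * d; // sum of squared frequency differences
        }
        if (error < best.error) best = { offset: offset, error: error };
    }
    return best; // best.offset is the recording's start time, in seconds, within the log
}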
    It is a very effective technique, because the mains hum provides a readily reproducible timestamp. An infallible weapon in the fight against crime, you might say.

    Unfortunately I have my doubts.

    As an electronic engineer by training, when I read the BBC piece linked above I thought immediately of Fourier transforms. A Fourier transform, for those fortunate enough never to have had to learn them, is a mathematical method for taking a piece of data in the time domain and looking at it in the frequency domain. If this sounds confusing, consider a musical stave. As you move from left to right along it you are moving in the time domain; the notes it contains are each played as you pass them. If however you shift your viewpoint through 90 degrees and look at the stave end-on, you are now looking at it in the frequency domain, and you see each note as it is played, represented by its position on the paper: its pitch. If you encounter a chord, you will see several notes at the same time, each at a different pitch.
    Now if you were to imagine the same trick applied to a complex recording such as human speech you would need to abandon the musical stave and instead imagine a much wider frequency range. And instead of single frequencies generated by musical notes you would see a multitude of different frequencies at different intensities which make up the astonishing variation of the human voice.
    Once you have transferred a recording into the frequency domain like this, you can examine individual frequencies, such as any 50Hz mains hum. The forensic teams use this technique to measure variations in the hum; it's an extremely useful piece of mathematics.
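    Measuring the strength of the 50Hz component of a block of samples doesn't even need a full Fourier transform; a single-frequency slice of the same mathematics will do. A sketch, not forensic-grade code:

// Relative magnitude of one frequency component (e.g. 50Hz mains hum)
// of a block of audio samples: a single-bin discrete Fourier transform
function toneMagnitude(samples, sampleRate, freqHz) {
    var re = 0, im = 0;
    for (var n = 0; n < samples.length; n++) {
        var angle = 2 * Math.PI * freqHz * n / sampleRate;
        re += samples[n] * Math.cos(angle);
        im -= samples[n] * Math.sin(angle);
    }
    return Math.sqrt(re * re + im * im) / samples.length;
}

// Running this on successive blocks - toneMagnitude(block, 44100, 50),
// toneMagnitude(block, 44100, 49.95) and so on - tracks the hum's frequency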
    However, as well as examining individual frequencies you can also manipulate them. You can remove them entirely if you want to, or put new ones in. Then you can recombine all the frequencies from your Fourier transform back together into the time domain to create a new, altered copy of your recording.
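    And the core of that manipulation is not much more code. This sketch works on a single block of samples and ignores the windowing and block-overlap a real job would need: it estimates the amplitude and phase of the genuine 50Hz hum, subtracts it, and stirs in a synthetic replacement at a frequency of the forger's choosing.

// Estimate the 50Hz hum in a block, remove it, and insert a forged hum
// at a chosen frequency. Single-block sketch: no windowing or overlap
function replaceHum(samples, sampleRate, forgedHz) {
    var re = 0, im = 0, n;
    for (n = 0; n < samples.length; n++) {
        var angle = 2 * Math.PI * 50 * n / sampleRate;
        re += samples[n] * Math.cos(angle);
        im += samples[n] * Math.sin(angle);
    }
    // Amplitude and phase of the genuine hum, from the single-bin transform
    var amp = 2 * Math.sqrt(re * re + im * im) / samples.length;
    var phase = Math.atan2(im, re);

    var out = new Float32Array(samples.length);
    for (n = 0; n < samples.length; n++) {
        var t = n / sampleRate;
        out[n] = samples[n]
            - amp * Math.cos(2 * Math.PI * 50 * t - phase)  // remove the real hum
            + amp * Math.cos(2 * Math.PI * forgedHz * t);   // insert the forged one
    }
    return out;
}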
    And it is this ability that is at the root of my doubts about electrical network frequency analysis: since it is possible to remove the mains hum timestamp from a recording in this way and replace it with an entirely different one, it seems to me that relying on the technique to verify when a recording was made, and that it has not been altered, is inherently unsafe.
    While researching this piece I had a good long chat with a friend whose career took him into the world of DSP. From our discussion came an idea as to how the job of detecting manipulation of a hum signature might be achieved.
    As it has been described, the forensic analysis can only look at the frequency of the 50Hz hum: the analysts record it at their lab and compare it with the recording under examination. Yet the local mains supply where the recording is being made contains so much more information than simply the hum frequency: a much wider bandwidth of noise that is unique to the mains environment in that particular location. That noise is generated by the mains equipment electrically close to the recorder, everything from electric motors through fluorescent lights to poorly-shielded electronics. In addition it will contain phase changes - small movements of the waveform in the time domain - caused by any of those pieces of equipment that do not have purely resistive loads, and those phase changes can be readily linked to the noise from the devices that generate them. This information would be much more difficult to remove from a recording than just the 50Hz hum, so it could provide a means to tie a genuine hum signature to a recording.
    Unfortunately, though, the only component of this that will be recorded is the strongest low-frequency part of the noise, the 50Hz hum itself. This is because whatever is recorded has to be induced in the recorder by the magnetic field of the mains installation, hardly a coupling conducive to the transfer of higher frequencies.
    But what if, instead of relying on induction, the recorder mixed a suitably attenuated copy of the complete mains noise spectrum in with the input from its microphone? In that case all the information about nearby mains-connected devices and their effect on the phase of the 50Hz hum would be preserved, making it extremely difficult to insert another hum signature whose phase changes do not match the changes in electrical noise also present on the recording. It is not beyond the bounds of possibility to imagine that "official" recorders in police stations and the like could be modified to record this noise.
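    As a software illustration of the idea - in a real recorder this mixing would happen in hardware at the input stage - the Web Audio API in a browser can do something similar, given a microphone stream and a second input channel carrying the mains noise:

// Mix a heavily attenuated mains-noise channel into the microphone signal
// before recording. Streams are assumed to come from getUserMedia() or a
// second audio input; the attenuation level is illustrative only
function mixForRecording(ctx, micStream, mainsStream) {
    var mic = ctx.createMediaStreamSource(micStream);
    var mains = ctx.createMediaStreamSource(mainsStream);

    var attenuator = ctx.createGain();
    attenuator.gain.value = 0.01; // keep the noise well below speech level

    var destination = ctx.createMediaStreamDestination();
    mic.connect(destination);
    mains.connect(attenuator);
    attenuator.connect(destination);

    return destination.stream; // feed this to a MediaRecorder
}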
    Of course, I may be an electronic engineer, but I spend my days working for a dictionary. The frequency analysis I do for a living these days involves language and word frequencies rather than audio, and any digital signal processing I have a go at is strictly in the hobby domain. I know the removal and reinsertion of a 50Hz hum signature in the way I have described is nothing special and could be performed by someone proficient with DSP software on a rather modest computer far less powerful than most modern cellphones, but I have no knowledge of any specialist techniques that might be used to detect it in a finished recording. My concern is that I am seeing a forensic technique acquire a scientific halo of being somehow a piece of evidence that is beyond reproach, and this prospect worries me when I can see such a flaw. This is not from a desire to damage justice but to strengthen it, for it is not unknown for evidence to be found to have been fabricated.
    So if there is nothing to be concerned about and manipulation of hum signatures in the way I have described could be easily spotted, fine. That's what I want to hear. Don't just say it though, prove it. But if instead this technique turns out to be a valid attack on network frequency analysis, then let it be brought into the public arena so that methods of detecting it can be devised.