Thursday, 12 March 2015

The day I was DoSed by Google


    I launched my startup this week. On a very modest scale, no debauched parties or penthouse offices on scads of VC money, just me and my laptop in my dad's living room, popping open a bottle of home-made cider to mark the event.

    Language Spy is the tangible fruit of a seven or eight year side project, creating a searchable corpus of political language. It's driven by a pair of Raspberry Pis doing the number-crunching and uploading data to Google Cloud Storage buckets, from which the site is served by Google App Engine.
    You can see events unfolding through the words used about them, for example the correlation between "Hillary Clinton" and "Email" in the last week of US politics. If, like me, you're a news junkie, it's compelling viewing.

    Unfortunately though, if you click the link as I write this, you'll see a Google App Engine quota exceeded message. The site won't work, because the traffic it's serving has passed the point I can afford and has exceeded my daily budget.

    This traffic spike would be no problem if it were generated by real site users, as then I'd be able to monetise the traffic, but sadly it isn't. Instead it's generated by GoogleBot. That's right, being indexed by a search engine has taken my site down. The bot looks at the site, decides it's on some very fast infrastructure, and issues millions of requests per hour.

    When I examine my problem, it becomes clear that it has several aspects:

  1. It's a language analysis site, so it has a *lot* of pages for the spider to crawl.
  2. Being a language analysis site, there are no pieces of language I can exclude using robots.txt, so I can't reduce the load by conventional means (a sketch of what that conventional approach looks like follows this list). How do you decide which pieces of language are more important than others? You can't, at least not when your aim is to have it all open for analysis.
  3. I can't tell Google to slow down a little, because where I'd expect to be able to do this in Webmaster Tools I get a "Your site has been assigned special crawl rate settings. You will not be able to change the crawl rate." message. I see this as the sticking point: if I could restrict Googlebot's rate I'd be able to keep the site running and take the hit of not being so well indexed.
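    For anyone unfamiliar with it, the conventional fix in point 2 would be a robots.txt that fences off the site's less valuable corners, and the crawl rate in point 3 is normally the only lever Googlebot respects. A rough sketch, in which the /ngrams/ path is purely hypothetical:

User-agent: *
# exclude a hypothetical low-value section from all crawlers
Disallow: /ngrams/
# honoured by some crawlers, but Googlebot ignores Crawl-delay and only
# respects the rate set in Webmaster Tools
Crawl-delay: 10

    Neither helps here: every path on the site is content I want crawled, and the crawl rate setting is exactly what I'm locked out of.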
    This means that the spider is eating through my daily Google App Engine quota very quickly indeed: I can rack up a hundred instance hours in next to no time as GAE spins up loads of new instances to deal with it. Pretty soon the site hits its daily quota and goes down. I could keep it going by feeding in more money, but I'd have to put hundreds of dollars a day into it, and with no end in sight I am not made of money.
    Right now my only hope lies with a crawl issue report I filed with the Webmaster Tools team; if they can give me control over my crawl rate I'll be good to go. But I can't say when they'll come back to me, if ever, so I may just have to come up with a Plan B.
    Is there a moral to this story? Perhaps it's a cautionary tale for a small startup tempted to use cloud hosting. Google Cloud Storage has proved very cost-effective for a huge language database, but the sting in the tail has turned out to be how GoogleBot behaves when it sees a cloud server and how per-instance billing on App Engine handles unexpected traffic surges. The fact that it's Google who are causing me to use up my budget with Google is annoying but not sinister; however, being given the option neither to cap my GAE instance count nor to slow down the crawl rate doesn't leave me the happiest of customers.

    So yes, I've launched a startup. It's live for an hour or two a day while it has budget, in the morning UK time. Perhaps that will be my epitaph.

Sunday, 8 February 2015

When open source software goes astray

     I have just spent about half a day installing and configuring the latest version of Apache Cordova. I've used Cordova for several years now, and it remains a great way to write a cross-platform mobile app.
     The reason for this article, though, lies in the first line. I spent half a day simply installing and configuring a mobile framework; I haven't opened an IDE yet, let alone written a line of code. Let's consider that for a minute: this is a mobile framework, not an OS distribution! I am guessing I would have spent less time reinstalling Debian than I did yesterday to end up with a few megabytes of mobile project.
     Because that's all Cordova is, when you get down to it. A set of ready-to-go mobile projects for each platform that you pull into your IDE and start pushing HTML and Javascript into. And until version 3 that was how it was distributed. Unpack the downloaded archive, pick up the directory for your platform, pull it into the IDE, start coding. Have your prototype app in front of the boss at the end of the day.
     Cordova 3 changed all that, and going by my experience yesterday Cordova 4 hasn't fixed it. You install Node, use npm to get Cordova, run one command to create your project, then further commands to add your chosen platforms and plugins to it. The project directory has one master www directory in which you put your app code, and a simple command line builds each platform-specific project from that www directory; the sequence is roughly the one sketched below. Putting aside the version and configuration hell that marred yesterday afternoon's install, on paper it sounds really elegant.
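     For anyone who hasn't met the post-Cordova-3 workflow, it boils down to something like this; the project name is a made-up example, and exact plugin identifiers vary between Cordova releases:

npm install -g cordova
cordova create myapp com.example.myapp MyApp
cd myapp
cordova platform add android
cordova platform add ios
cordova plugin add org.apache.cordova.device
cordova build android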
    Unfortunately though that is not the way apps are written in the real world. It would be great to write one copy of your HTML5 app and then deploy it to each platform and compile it from the command line on the same computer, but sadly cross platform app development doesn't work that way.
    Picture for a minute the office of a typical Cordova developer who produces apps across more than one platform. They may have a Linux box for Android development, running Android Studio or maybe Eclipse. Their Windows Phone apps will be developed on a Windows box with Visual Studio. And finally they will have a Mac running Xcode on which they develop their iOS apps. This is not a choice but a necessity: try asking Tim Cook or Satya Nadella for permission to develop for their mobile platform on the other's OS and you won't get very far. It's possible that the Android apps could be developed on the Mac or the Windows box, but it's still largely true that if you want to do cross-platform app development for multiple target platforms you are going to have to have multiple development platforms and more than one IDE. It's not ideal and none of us really want it or like it, but it's the harsh reality of developing software for closed-source commercial platforms.
    In that environment Cordova's elegant project build structure is almost completely useless. Putting aside the unfortunate fact that each platform demands subtly different code for its in-app browser, it is impossible to load the generated Cordova projects directly into Android Studio, Xcode, or Visual Studio and develop them from their Cordova build directories, because each time the project is rebuilt at the Cordova end the project you see in your IDE is overwritten. So you end up using Cordova's build system to create vanilla projects for Android, iOS and Windows Phone that you copy somewhere else and load into the IDEs, never touching the Cordova build system again. Instead of being an elegant and useful addition to the project it has become an annoyance and a significant inconvenience.
    So there's my Cordova rant. It's a bit unfair: Cordova remains a really useful piece of software. In reality I think it would be better to characterise my rant as one against the tendency in open source software to incorporate features not because they make sense for the software itself, but because someone thinks they are a good idea. It's much the same with the developers of PHP, who are not PHP coders but C and C++ coders and seem to be on a mission to turn a sequential scripting language into a clone of C++; it makes sense to them, but it is increasingly divergent from the use case of the vast majority of PHP developers.
    Every open source project risks this pitfall as a side-effect of success. People write software because they want to solve a problem in their lives, but eventually they transition from being the people with a need for the software to being the people who only write it. They lose touch with why and how the end users use the software and imagine that they and not their users know best how it should be used.
    There is an obvious difference between the open source movement and the closed source commercial world. But one thing remains constant in both worlds: the customers will dump your product if you ignore their needs and try to dictate to them what they will have. IBM learned this the hard way in the 1980s with the PS/2; the customers and the rest of the industry continued evolving the PCs they already had rather than taking the direction IBM wished to impose on them.
    I find it sad that the same tale is played out in the open source world. Open source software could be a rich dialogue between users and developers, instead each successful project eventually loses its way just as commercial entities do. I suppose moribund companies spawn upstart competitors just as moribund open source software spawns vigorous forks, but there are times when the wait for a fork seems awfully long.

Friday, 14 November 2014

Someone who can't see 3D images uses an Oculus Rift for the first time.

   Last night, at Oxford Hackspace, I had my first try at an Oculus Rift VR headset. It was running one of the Rift demos, an office desk with a lamp and a playing card castle on it. Nothing unusual there, you might say. I'm guessing first Oculus experiences aren't much to write home about these days.
    I am maybe not in Oculus's sights as a customer, though, because as I've written here before I was born with a strabismus. I have perfect vision in both eyes, but my left eye looks somewhere around twenty degrees to the left, and I can switch my main vision from eye to eye. This means I can see over my left shoulder, keep an eye on the mirror without taking an eye off the road while driving a right-hand-drive car, and move my left eye independently of my right as a party trick.
    It also means I can't see in 3D in the same way as someone born with both eyes on the same heading. In the few months following my birth when my brain was configuring itself for the world around it, I never acquired the capacity for seeing objects as three-dimensional by combining images from left and right eyes because my eyes didn't deliver images close enough to each other. This doesn't mean I live in a flat two-dimensional world, it simply means I don't process 3D information in the same way as other people. To use an analogy of a modern computer system, I do my 3D processing in software rather than hardware. My brain developed to interpolate 3-dimensional space from a single 2-dimensional image and sometimes by the effect of my movement rather than by comparing two such images from different angles. I live in a 3D world, but sometimes - as for instance when trying to be a batsman in a school cricket team - things happen a little too fast for my 3D processing to follow.
    All of this means that 3D effects designed for people without a strabismus - stereograms, autostereograms, and 3D movies - don't work for me. I will never be able to see an autostereogram, and a 3D movie is for me a 2D movie in which I disregard what's being sent to my left eye. Fine by me, I'm happier to pay less for a 2D showing anyway.
    So I didn't expect much from the Oculus. And yes, once I was looking into it, the feeling was of having a couple of normal 2D screens in front of me. Not too different from my day-to-day workstation, except with full-field vision rather than a pair of LCD monitors.
    The sight of assorted geeks queuing up to flail blindly about a crowded hackspace wearing a headset is mildly amusing. In my case, trying to keep my movements to a minimum, I started to look around. Looking down, the desk and the lamp were just a 2D picture of a desk surface and a lamp. Switch from eye to eye and nothing changes.
    It was a mild surprise when I moved my head to the left and the card castle came into view. It was a picture of a card castle on a monitor, then suddenly it was a 3D card castle. Swing back to the lamp, and it's a picture of a lamp. Back to the card castle, and it's a 3D object. I was only using my right eye, but I could see inside it as I knelt down.
   So I had one 3D object in otherwise 2D space. Not really Better Than Life, but far more than I expected. What's happening here? In the real world, I interpolate 3D even when I'm not moving because I know something about the environment. I know a teapot in front of me is a 3D object because I have a huge amount of real-world lighting and other visual information. I can reach out and touch it. That knowledge means that when I reach out to take hold of it I know not only roughly where it is, but what size it is so I can grasp its handle with a good expectation of the dimensions my hand will have to close over. If I move around an object the amount of information I have about it increases, and so does my ability to see it in 3D.
    I think that there is an information threshold for me to see something in 3D. A 2D image on a screen or a page does not have enough information from its lighting or how its aspect changes for me to see it as containing 3D objects. I think the lamp and the desk did not pass this information threshold while the card castle did. It had enough complexity for it to provide my brain with the information it needed to cross the threshold as my motion caused the Oculus image to change.
    This opens up the interesting prospect of people with a lifelong strabismus having an ability not shared with most others: that of seeing a 3D image from a single 2D source when that source is coupled with motion sensing. It would be extremely interesting to see if a 3D-rendered-as-2D FPS game could become 3D-immersive for me if it were displayed on an Oculus with its motion sensor controlling in-game movement.
    I am guessing I only have this ability because my strabismus is lifelong and my brain developed with it. I am therefore guessing that someone who grew up with stereoscopic vision but later lost an eye or gained a strabismus through injury would not see a 2D image as 3D in an Oculus headset. I'd be interested to hear whether or not that is the case.
    Some people have a big problem with strabismuses. Parents subject their children to surgery without their consent to have them "corrected". That says more about their insecurities and their desire for "perfect" offspring than it does about the effect of a strabismus. Mine has rarely been an issue; it has sometimes unnerved people, and I'll never play cricket or baseball. I would never subject a child of mine to unnecessary cosmetic procedures; to me a strabismus is just part of who you are. Yesterday's experience added a new angle on my strabismus, and for that I'm happy. I can add seeing 2D images in 3D to seeing over my shoulder in the meagre list of super-powers it's given me.



Thursday, 3 July 2014

Bow for nowt


    Last weekend, I made a bow. Not because I've had a hankering for one, but because I spotted the perfect material and had to give it a go.

    My friend Rebecca was replacing her bed. The old one had a lot of very springy plywood slats, far too good to let go to the tip. So I detached them, bound them up with tape, and brought them back home on the back of my bike.

They're too wide to make a bow as they are, because the ends of a bow need to be considerably less stiff than the centre where you hold it. And they weren't long enough either, so I'd have to bolt two of them together.

    So I cut two of them on a taper, leaving a section of a few inches intact where they'd be bolted together. I then drilled three holes for the bolts, and applied a layer of No More Nails glue to the joining surfaces before doing up the bolts. This left me with a long and springy bow, but no means to attach a bowstring to it. Out with the saw again to cut a couple of notches at each end. My bowstring - a knotted piece of the orange baler twine that is ubiquitous on farms - simply hooked over the end of the bow into the notches.
Is it a good bow? Not being an archer I have no idea. But it takes the length of my draw without damage, and it feels as though I'm putting quite an effort into it. And it shoots bamboo canes down the lawn rather well. Next up: a pack of cheap arrows from Amazon; I'm not hard-core enough to do my own fletching.

Wednesday, 12 March 2014

Small computer, big data

    This post comes as one of the longest running scripts I've ever created has just finished its work. In the last week of January I set my Raspberry Pi to the task of processing 5 years of news stories into a 20Gb tree of JSON files, and here in the second week of March it's completed the task.
    Given that a PC has done the same job in a couple of days, the first question anyone would ask is simply this: why?
    My Pi runs all day every day, 24/7. It collects the news stories from RSS feeds and stores them in a MySQL database. It uses somewhere under 2 watts, and it will do this no matter what I ask of it because it's plugged in all the time. I can touch its processor with my finger; it's not hot enough to hurt me. My laptop by comparison, with its multi-core Intel processor, board full of support chips, and SATA hard disk, uses somewhere under a hundred watts. I can feel the hot air as its fan struggles to shift the heat from the heatsink. I wouldn't like to hold my finger on its processor, assuming I could get past its heat pipe.
    Thus, since I'm in no hurry for the data, the processing will use a lot less power and it makes more sense for me to run the script on the Pi. This isn't an exercise in using a Pi for the sake of it; rather, the Pi was the most appropriate machine for the task.
    So having run a mammoth script on a tiny computer for a couple of months, how did I do it and what did I learn?
    The first thing I'd like to say is that I'm newly impressed with the robustness of Linux. I've run Linux web servers since the 1990s, but I've never hammered any of my Linux boxes in quite this way. Despite my script stealing most of the Pi's memory and processor power, it kept on with its everyday tasks, fetching news stories and storing them as always. I could use its web server - a little slowly, it's true - I could use its Samba share, and I could keep an eye on its MySQL server. Being impressed with this might seem odd, but I'm more used to hammering a Windows laptop in this way, and I know from experience that the Windows box has not been so forgiving running earlier iterations of the same script.
    If anybody else fancies hammering their Pi with a couple of months of big data, here's how I did it. The script itself was written in PHP and called from a shell within an instance of screen, so I could connect and disconnect at will via ssh without stopping the script running. The data came from the MySQL server and was processed onto a 64Gb USB Flash drive, formatted as ext4 without journaling, which I judged to be the best combination of speed and space efficiency. An early test with a FAT-formatted drive provided a vivid demonstration of filesystem efficiency: the FAT version had eaten 80% of the space after only a short period of processing.
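    The mechanics amount to a handful of commands; here's a sketch, in which the device name, mount point and script name are hypothetical:

# format the Flash drive as ext4 with no journal (the device name will vary)
sudo mkfs.ext4 -O ^has_journal -L bigdata /dev/sda1
sudo mount /dev/sda1 /mnt/bigdata

# run the long-lived PHP script inside screen so ssh sessions can come and go
screen -S crunch
php crunch.php
# detach with Ctrl-A D, reattach later with: screen -r crunch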
    The bottleneck turned out to be the Flash drive, a Lexar JumpDrive. Reading and writing seem to happen in bursts: the script would run quickly for about 30s and then very slowly for the next 30s, purely due to disk i/o. In future I might try the same task with a USB-to-SATA hard disk, though I'd lose my power advantage.
    So would I do the same again, and how might I change my approach? I think the Pi was a success in terms of reliable unattended operation and in terms of low power usage on a machine I'd have had running anyway. But in terms of data processing efficiency it could have been a lot better. A faster disk and a faster computer - perhaps something with the Pi's power advantage but a bit more processor grunt, such as the CubieBoard - would have delivered the goods more quickly for not a huge extra investment. And the operating system, though reliable, could probably have been improved. I used a stock Raspbian, albeit with the memory allocation for graphics reduced as low as it would go. Perhaps if I'd built an Arch image with a minimum of dross I would have seen a performance increase.
    I used a Raspberry Pi for this job because it was convenient to do so: it uses very little power and I had one that would have been powered up anyway throughout the period the script was running. The Raspberry Pi performed as well as I expected, but I cannot conclude anything other than that it is not the ideal computer for this particular job. It is sometimes tempting, when you are an enthusiast for a particular platform, to see it as ideal for all applications; well, in this case that would be folly.
    The Pi will continue to crunch the data it collects, though on a day-to-day basis as part of the collection process. For that it'll be much better suited to the task: as a cron job running in the middle of the night, the extra work of a day's keyword crunching won't be noticed. And there's the value in this exercise: something that used to require a PC, a while of my time and a little bit of code has been turned into an automated process running on a £25 computer using negligible power. I think I call that a result, don't you?
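    For what it's worth, that nightly job is a single crontab line; the time and script path below are only illustrative:

# crunch the day's keywords at 03:30 every night
30 3 * * * php /home/pi/keywords/daily_crunch.php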

Friday, 24 January 2014

Can you run a small business with the Raspberry Pi?

    There are three Raspberry Pi computers scattered around our flat. A 512Mb Model B is an application server running my keyword analysis system 24/7, my original 256Mb Model B serves as a Raspbmc set-top box, and a 256Mb Model A serves as a general-purpose hardware and software hacking platform with an attached camera.
    I am a demanding user of the first of those three: the keyword analysis system involves gigabytes of data and processor-hungry scripts. It's not the fastest machine on the block by any means, but after 18 months or so of continuous Raspberry Pi use for this application I am very impressed with how little intervention it has required. After most of a career developing similar coding and database tasks on office Windows server machines, I find myself appreciating the Pi for a reason other than its low cost and low power: its reliability in both software and hardware terms.
    If you run any kind of business you do not view your computing needs as a home user might, in terms of hardware cost. Instead you price IT as an ongoing investment over the lifetime of the kit, in which the cost of support, licencing and upkeep may significantly outweigh the purchase price of a computer. Looking at my experience with the Pi as an application server I can't help wondering whether its reliability and stability might make it a surprisingly good fit in a business environment, for at least a small business if not in some cases a larger one.
    So what does a small business need from its IT systems? Every business is different of course, but if you were equipping a generic office network for the first time you might reasonably expect to have the following components:
  • Desktop computers
  • A file server
  • Some means of sharing a printer
  • An email server
  • A firewall and internet connection
  • Network infrastructure - let's go with wired Ethernet here and not start talking about Pi-based wireless hotspots
   As a thought exercise it is worth considering how each component might be addressed using a Raspberry Pi, and what if any benefit that choice might bring.
    Desktop computers: One of the first things most people will do with their Raspberry Pi is write a copy of Raspbian to an SD card and type "startx" at the command prompt once they've booted it and logged in. So it's beyond doubt that given a keyboard, mouse, and monitor the Pi is a desktop computer. But how would it perform in a business environment?
    Software's no problem. Raspbian benefits from a huge library of Linux packages. There's no need to fork out to Microsoft for an Office licence when you can run LibreOffice, for example. But I know that if I were using my Pi for office work I'd find myself wishing it were a lot faster. I seem to remember an early description of a Pi as being like a Pentium II with a very fast graphics card, but without the ability to make use of those GPU capabilities the Pi's slow speed is an Achilles heel that is only too obvious. Like many Pi users I look forward to stable OS builds featuring the Wayland support we were shown a preview of last year.
    Benefits of a Raspberry Pi business desktop? Low initial cost, low maintenance cost - simply replace defective hardware with a new one for £25 - no software licence fees, low power consumption.
    Disadvantages of a Raspberry Pi business desktop? It may be the fastest desktop you can buy for £25, but undeniably it's not the fastest desktop you'll ever use.
    A file server: This is something the Raspberry Pi can do very well. A headless Linux box has no worries about graphics speed so is only held back as a file server by the speed of its network card and disk drive. Plug an external USB drive into a Pi and it's true you don't have an enterprise-class server, but it will still offer perfectly adequate performance for a small office network. This guide to setting up a SAMBA file server uses Arch Linux, but as my keyword tool server proves every time I pick up data from its share with my Windows laptop the same setup works just as well with Raspbian.
    Benefits of a Raspberry Pi file server? Extremely low cost, reliable hardware, low power consumption.
    Disadvantages of a Raspberry Pi file server? Some command line admin is required to set it up, especially if your needs extend beyond simple open-to-all file shares.
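    To give a flavour of how little of that admin is involved, a minimal guest-accessible share added to /etc/samba/smb.conf looks something like this; the share name and mount point here are hypothetical:

[officefiles]
path = /mnt/usbdrive
browseable = yes
writeable = yes
guest ok = yes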
    Printer sharing: Nowadays you don't have to pay much money for a printer that can already connect to a network, and it probably makes the most sense to do that if you can. After all in a business network the simplest method of getting what you need is more important than the most technically interesting. But this is a piece about using the Raspberry Pi, and a Pi can make a very good network printer sharing device. So here's a tutorial about setting up CUPS on a Pi and sharing it on a network.
    Benefits of a Raspberry Pi print server? Flexibility to print to whatever device - or even software - that you want to set up.
    Disadvantages of a Raspberry Pi print server? Requires command line admin to set up, more complex than using a network printer in the first place.
    Email: Again from a business perspective does it make sense to use a Raspberry Pi as an email server when you can buy any one of a multitude of cloud hosted email products for your organisation? If it were me I would outsource my email every time and gladly pay for it, but for those who really need their email in house here's a tutorial describing a Raspberry Pi email server.
    Benefits of a Raspberry Pi mail server? Reliability, low power consumption, and complete control of your own email.
    Disadvantages of a Raspberry Pi mail server? Significant admin skills required to set up and run.
    Firewall: Here a Pi can undeniably do an extremely good job. Installing OpenWRT on a Raspberry Pi turns it into a very effective firewall. But yet again this is a feature offered at an almost commodity level by almost all domestic and small business routers. As with printer sharing, in business it pays to go for the easiest way to get what you need.
    Benefits of a Raspberry Pi firewall? Huge flexibility that may not be offered by an off-the-shelf router. Any protocol you want to run through it can be set up.
    Disadvantages of a Raspberry Pi firewall? Significantly more complex than an off-the-shelf router firewall, requires admin skills to set up.
    So there's a small business network with Raspberry Pi desktop machines and a Raspberry Pi file server, and all for Raspberry Pi prices. Network printing is probably better built into the printer, the firewall is probably best left to a commercial router, and email is a lot less hassle in the cloud. All that's missing from most business requirements is a business-level portable Raspberry Pi; maybe somebody's working on a professional laptop enclosure as I write this.
    The question is, would I have run my business on a network like this? Would I drop my nice modern laptop with dual-boot Windows and Ubuntu to develop on a machine with a lot less speed? Probably not. But if my line of work didn't rely on raw computing power and instead needed some reliable document processing, would I consider this as an alternative to a heap of Dells at several hundred quid each, a hefty Windows Server licence bill, and an ongoing relationship with an IT support company?
    I think I'd be silly not to give it a second look, don't you?

Wednesday, 13 November 2013

Developing the Oxford Dictionaries Quick Search app

    Well, here we are: after what seems like an age of tweaking, the Oxford Dictionaries Quick Search app is finally available for installation. It can be found on the iTunes website here, and on Google Play here. The Windows Phone 8 version should, with luck, be out in a few days.
    As the developer of course I'm going to say it's a brilliant app, but that would verge on shameless astroturfing. What I can say is that it's a simple and lightweight free English dictionary look-up app that I hope users will find useful.
    Under the hood, it's a client for the Oxford Dictionaries API. This of course means that it requires a data connection to run, but it does give the advantage of the app taking very little space and serving the most comprehensive and up-to-date dictionary entries. Though it's hardly a novel use for the API, it does demonstrate the functionality and speed of the service, as well as how easy the API is to develop against.
    The app uses the PhoneGap cross-platform HTML5 app framework with jQuery and jQuery Mobile providing the Javascript heavy lifting and user interface respectively. These packages have allowed us to deploy the app on three platforms in quick succession with minimal investment, something we could not have done had we been required to write all three versions natively. The quirks of the different HTML5 implementations have caused us a few headaches along the way, but not significantly more than web developers are used to when dealing with different browsers.
    It's been interesting to compare side-by-side the ease of development on the different mobile platforms. Android is the easiest as you'd expect, but with a Wild West of devices and OS versions out there it needs to be. We've been scouring our colleagues for odd Android versions and form factors to try our app on, yet I'm sure we've not tried them all. In particular we decided that with 25% or thereabouts of the Android market we couldn't abandon support for version 2.3, so we've had to contend with its incomplete font support and sometimes shaky HTML5 implementation.
    iOS, by comparison, with its fixed set of devices, should be easy to develop for, but it becomes more effort thanks to Apple's stringent demands. Attaching different iOS devices to our development environment can at times be a challenge, and the ballooning demand for supporting resources such as splash screens, icons and screenshots for different resolutions and OS versions sometimes feels as though it is getting out of hand. However, the App Store approval process was much quicker than we expected it to be.
    The Windows Phone 8 development environment shows Microsoft's typical attention to detail. The supporting resource requirements are well thought out, getting the app on a device is straightforward, and the free version of Visual Studio is a delight to use. However, the fact that it would only run on 64-bit Windows 8 seems rather strange, and Microsoft have not quite shed their reputation for quirky HTML environments. I didn't expect the problem we had with an animated GIF loading spinner, for instance.
    So it's been an interesting experience. I'd recommend PhoneGap to anyone wanting quick development of multi-platform mobile apps, though it's provided a few learning experiences of its own.

Tuesday, 5 November 2013

Fixed jQuery Mobile footers on Windows Phone 8

    Here's a solution to something which baffled me for a while: making a jQuery Mobile footer that stayed at the bottom of the screen and didn't scroll away with the content in a PhoneGap app on Windows Phone 8.
    jQuery Mobile is usually pretty good at making fixed footers. The code below works fine on Android, using data-role and data-position attributes on the footer div.

<body> 
<div id="container" data-role="page" data-theme="f">
    <div id="header"  data-role="header" data-position="fixed" data-tap-toggle="false">
    Header content here
    </div><!-- /header -->

    <div id="content" data-role="content">
    Page content here
    </div><!-- /content -->

    <div data-role="footer" data-position="fixed" id="footer" data-tap-toggle="false">
     Footer content here
    </div><!-- /footer -->
</div><!-- /page -->
</body>

    Unfortunately, on Windows Phone 8 it places the footer at the base of the screen but does not allow the content to scroll underneath it. Thus your footer scrolls up the screen with the content, hiding a bit of content as it goes and looking really horrible.
    You might think breaking out of jQuery Mobile by losing the data-role and data-position attributes and then applying a fixed position and a z-index to your footer would do the job. After all, it's the standard fix for some of jQuery Mobile's footer quirks on iOS. But sadly all that does on WP8 is anchor the footer to the bottom of the page rather than the bottom of the screen, resulting in a long scroll to see it. Less obviously broken, but still not what we want.
    My solution then was to apply a fixed height to the content div and allow the default WP8 overflow:auto; CSS to make its content scroll out of sight beneath it. I removed the data-role and data-position attributes from the footer and used a little piece of javascript to calculate the height of the content div from the heights of the surrounding divs and set it. Not necessarily the most elegant solution, but one that should reliably work across a range of devices.

<body> 
<div id="container" data-role="page" data-theme="f">
    <div id="header"  data-role="header" data-position="fixed" data-tap-toggle="false">
    Header content here
    </div><!-- /header -->

    <div id="content" data-role="content">
    Page content here
    </div><!-- /content -->

    <div data-role="footer">
     Footer content here
    </div><!-- /footer -->
</div><!-- /page -->
<script>

//NB the code below references jQuery, not included in this HTML for simplicity

// total vertical space taken by the header: its height plus vertical margins and padding
var headerSpace = parseInt($('#header').css("height")) + parseInt($('#header').css("marginTop")) + parseInt($('#header').css("marginBottom")) + parseInt($('#header').css("paddingTop")) + parseInt($('#header').css("paddingBottom"));

// the content div's own vertical margins and padding (its height is what we are about to set)
var contentSpace = parseInt($('#content').css("marginTop")) + parseInt($('#content').css("marginBottom")) + parseInt($('#content').css("paddingTop")) + parseInt($('#content').css("paddingBottom"));

// total vertical space taken by the footer: its height plus vertical margins and padding
var footerSpace = parseInt($('#footer').css("height")) + parseInt($('#footer').css("marginTop")) + parseInt($('#footer').css("marginBottom")) + parseInt($('#footer').css("paddingTop")) + parseInt($('#footer').css("paddingBottom"));

// whatever is left of the viewport is the height the content div can have
var contentHeight = window.innerHeight - headerSpace - contentSpace - footerSpace;

$('#content').css("height", contentHeight + "px");

</script>
</body>

    I hope this helps put you on the right path. There seems to be frustratingly little documentation out there on what WP8 does and does not support, and on workarounds for what seem to be common problems. With luck this fix has plugged one such hole.

Thursday, 24 October 2013

There will be no iPad killer

    This week brought the news that Nokia have launched their long-rumoured tablet running Windows RT. Despite their woes of the last few years when it came to understanding what their consumers wanted, when Nokia get their act together they are still capable of making some of the best hardware there is. It's by all accounts a decent effort, and the word is that if you want an RT tablet it's the one to get.
    The trouble is, it wasn't long into reading about the new Nokia before I came across the dreaded phrase "iPad killer". And it looks as if Nokia and Microsoft themselves believe that description, because they've priced it in iPad territory, at just under 500 quid with a keyboard.
    I can't remember when I first heard a device described as an iPad killer. Probably not long after the iPad came out. Just for fun I tried to remember a few of the devices once described as iPad killers. Here they are, just a few of many.

  • HP TouchPad
  • Motorola Xoom
  • BlackBerry PlayBook
  • Toshiba Thrive
  • Sony S1
  • Microsoft Surface RT
  • Samsung Galaxy Tab (the original one)
    All of these were launched with a fanfare and priced against the Apple product, yet, with the possible exception of the Samsung, they all flopped and sank without trace. The BlackBerry and the HP were both particularly nice devices, yet they both ended up being sold at fire-sale prices. The HP famously flopped so badly that HP dumped WebOS overnight and pulled out of the tablet business. Evidently being an iPad killer is a tough business.
    Here's the thing. Despite what the fanbois will tell you, the iPad isn't anything special. All it's got is the Apple logo and all those apps; otherwise its hardware is not too different to its competitors'. But consumers don't care about the niceties of different processors or display technologies (beyond Apple's rather meaningless "retina" marketing fluff); they just know they don't want the tablet equivalent of a Betamax video. So if they're asked to pay iPad money for something that isn't an iPad, they'll know a risky deal when they see one and walk away. Meanwhile each successive marketing team makes the mistake of believing their own hype, and yet another device heads towards the dustbin.
    So sadly the Nokia tablet will fail. It will do so on price alone; price aside, the Microsoft Metro interface is a joy to use and Nokia's hardware is beautiful. If they forgot the iPad and sold it for half the price it would be an unexpected success; as it is, it'll be yet another tombstone in the iPad-killer Boot Hill. You'd think a company and an OS vendor desperate for market share at all costs would think about that.
    The title of this piece is "There will be no iPad killer". That's not to say that the iPad will never lose its place as the tablet to own; more that as Apple lose the ability to give it meaningful differentiation, its position will inevitably be eroded by ever cheaper and more numerous competition. If I were marketing a tablet I'd rather my device beat that competition than took a pop at the iPad. Let the fanbois have it.

Monday, 22 July 2013

Letter to Tony Baldry MP on internet filtering

    David Cameron has picked up the torch of saviour of the nation from internet porn. I think that this, like so many other pronouncements from politicians on the subject of the internet, is largely a piece of think-of-the-children soundbite politics based on little or no knowledge of the subject.
    Because I think there is a real risk of this escalating into an unacceptable level of interference in the workings of the internet I penned the following letter (slightly edited to remove a personal reference) this morning to my MP, Tony Baldry. It won't change anything on its own but since MPs judge the strength of feeling on an issue by the size of their postbag it might have some effect.

Dear Tony,
   I'm mailing you today to express my professional concern, as a constituent, about the Prime Minister's recent proposals on internet pornography. I feel they owe more to soundbite politics and the readers of the Daily Mail than they do to practicality, and they risk placing a burden on the UK internet industry at a time of economic turmoil.
    I'm a search engine and web language specialist by trade. I have worked in the past for Google and in our local web and search engine marketing industry and my current job is with a large publishing house. I make huge web sites of scholarly content and ensure that the search engines see them in the best light.
    I am concerned because I feel that the Prime Minister is indulging in soundbite politics without first ensuring that what he is proposing is either practical or not already in place. He's made several points as I understand it: filtering of search terms, internet filtering software, and banning extreme porn including rape scenes. I'll address each one from a professional perspective.
    The proposal with respect to filtering search terms is that the search engines block offensive terms. So a search for porn might give the user a warning page and no results. I feel that this is a noble intent, but ultimately doomed. As any lexicographer will tell you, language does not obligingly stay in one place. The porn consumers and their industry will move their vocabulary faster than those blocking terms can react, and we risk a situation similar to that of the "legal highs" industry, in which new drug chemicals have to be individually identified and banned at a snail's pace. Something tells me that the Government will not expect to bear the cost of this process, so the internet industry will face yet another unnecessary burden following in the footsteps of confusion over accessibility requirements and the European cookie law.
    I feel that the Prime Minister cannot have set up a personal internet connection in recent years. If he had, he'd know that they already come with filtering software. As part of my job I need to turn mine off from time to time, so I'm fully aware of their existence. It is possible that there is no legal requirement for them to be turned on, but in my experience internet providers turn them on by default anyway.
    The Government has already enacted a ban on extreme porn, and child porn has been illegal for decades. The online trade in child porn material left the web for other forms of internet traffic in the 1990s, and if it is traded online it is not in a form that can be blocked by filtering software or search engines. Paedophiles already have a huge amount of law enforcement effort directed at them. Extreme porn may be more visible - it's hardly a subject in which I'm an expert - but I seem to remember that the Government has made something of a fool of itself when it has tried to prosecute people for its possession.
    Of course the illegal end of the porn industry must be dealt with. And it makes sense to ensure that an ISP-filtered internet is available for youngsters. My point is that most of what is needed to do this is already in place, and the Prime Minister risks making a fool of himself by indulging in one of the Conservative Party's periodic episodes of wrapping itself in morality. You will remember John Major's "Back to Basics" campaign and its somewhat dismal effect on the electorate as a string of scandals engulfed the party; with a series of allegations relating to paedophile politicians and the Wrexham children's home doing the rounds, I feel history could repeat itself.
    Now *please* do not reply to this with the default "Think of the children" argument beloved of politicians. It has become such a cliché that there is an entire genre of internet memes devoted to making fun of it. As an industry we are already thinking of the children as I hope I've demonstrated above. Instead I'd urge you and your colleagues to be cautious when making moral pronouncements with respect to the internet, and to seek technical advice before indulging in soundbite politics.
Thanks,
J. W. List