Monday, 25 May 2015

A transceiver for not a lot

    I've just built a transceiver for the 4m amateur band. QRP (low power) and AM, it's not going to be something I'll work DX with, or indeed, given my rural location, something I'll work *anyone* with, but I've had a lot of fun making it.
    My starting point was a 2m QRP AM transceiver courtesy of G3XBM, the 'Fredbox'. The receiver is a FET super-regenerative design with an RF amplifier to stop unwanted radiation, and the transmitter is a very simple series-modulated PA giving <100mW. I used 2SJ310 FETs in the receiver instead of MPF102s, and 2N3904s throughout in the transmitter. Everything else came from my hoard of electronic bits. I can't claim any astounding innovations in electronic design, as I was following someone else's proven design, with a few adaptations when it came to winding coils and a few minor tweaks to bias resistors to get the desired voltages.
    The only significant difference from the G3XBM design is the lack of a crystal. Instead I'm using a Raspberry Pi as my frequency generator, running Jan Panteltje's freq_pi software. This controls the Pi SoC's clock generator, and can produce anything from the low kHz to 250MHz. It's quite happy working on 70.260MHz.
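For the curious, the clock generator gets there by fractionally dividing one of the SoC's PLLs. Here's a minimal sketch of the arithmetic, assuming the 500MHz PLLD source and the 12-bit fractional divider of the BCM2835's GPCLK block; freq_pi handles the actual register programming for you.

```python
# Fractional divider needed to get 70.260MHz from the 500MHz PLLD.
PLLD_HZ = 500_000_000      # assumed PLL source for the GPCLK peripheral
TARGET_HZ = 70_260_000     # the 4m frequency used above

ratio = PLLD_HZ / TARGET_HZ
divi = int(ratio)                    # integer part of the divider
divf = round((ratio - divi) * 4096)  # 12-bit fractional part (1/4096ths)

actual_hz = PLLD_HZ / (divi + divf / 4096)
error_hz = actual_hz - TARGET_HZ
print(divi, divf, round(error_hz))
```

The residual error of a few hundred hertz is easily lost in an AM receiver's bandwidth, which is one reason this trick works so well here.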
    So there we are. A home made transceiver costing not a lot, and for not a lot of construction time. Rather more satisfying than a twenty quid BaoFeng handheld, I think.

Wednesday, 20 May 2015

Why I won't be putting that board on Kickstarter

    The trouble with being in the corpus linguistics business is that sometimes you have to throw a very large body of text at a big computer and tell it to get on with number crunching it. If the answer comes in several weeks' time and your next step depends on it, so be it. Such is the nature of computing the interdependencies within many gigabytes of messy human-created language.
    So I found myself twiddling my thumbs, watching very slow progress indicators trickle up a screen session, and going up the wall in frustration at not performing any economic activity. I've returned to my electronic engineering training for fun over the last few months, so I started thinking about putting one of my circuits on Kickstarter as a kit. Easy money, you might think.
    The circuit in question is simple enough to be unimportant, a small Raspberry Pi expansion board. Useful enough to me to think it might be to other people too. And cheap, too, all the components together come in at well under £2 even when ordered in prototype quantities. Put together a PCB in Eagle, send it off to Dirt Cheap Dirty Boards to see how it comes out.
    Unfortunately though I won't be putting this board on Kickstarter. Nothing to do with the board itself and certainly nothing to do with Kickstarter, just my acknowledgement that the maths don't add up.
    The simplistic maths look tempting at first sight. The bits cost £2; say you sell it for £5, take out Kickstarter's cut and a bit for postage and packing, and you don't need to sell a crazy number to turn a tidy profit. In fact, if those maths worked, I'd already have it up for you to back.
    Sadly the reality of a small business in the UK puts a massive hole in that model, in the form of VAT registration. As a very small business I am not in the VAT system, because my turnover is under the VAT threshold. VAT registration is a bit of a hassle and brings with it all sorts of costs, so it's only worth my while if I am going to make enough extra cash to justify the inconvenience. If my turnover goes too high I end up being forced into VAT, so there is no sense in pushing up my turnover needlessly for a relatively small return.
    Back to my board. Once I have my kit neatly packed up and in a padded envelope I will have spent somewhere just under £2.50 plus a few minutes of my time per board. I still won't have shipped it to my customer though, and it's there that my extra turnover comes in. Worldwide shipping would come in somewhere over the £4 mark, and that extra cost nearly doubles my asking price with money that simply adds to my turnover without bringing me any profit. Back to the drawing board, breathing something of a sigh of relief at having come to this conclusion without committing myself to anything.
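To put rough numbers on the problem, here's the sum as I see it. The unit cost, asking price and shipping figures are the ones quoted above; the 5% platform fee and the £82,000 VAT registration threshold for 2015/16 are my assumptions for illustration.

```python
# What one kit earns me, and what it adds to my turnover.
unit_cost = 2.50        # components plus padded envelope, per kit (GBP)
asking_price = 5.00     # what the backer pays for the kit (GBP)
shipping = 4.00         # worldwide postage, roughly (GBP)
platform_fee = 0.05     # assumed Kickstarter cut

margin = asking_price * (1 - platform_fee) - unit_cost
turnover_per_sale = asking_price + shipping   # shipping counts as turnover too

# How many kits could ship before hitting the VAT threshold, and the
# total profit all that turnover would actually represent.
vat_threshold = 82_000  # assumed UK registration threshold, 2015/16 (GBP)
max_kits = int(vat_threshold / turnover_per_sale)
profit_at_threshold = max_kits * margin
print(round(margin, 2), max_kits, round(profit_at_threshold))
```

In other words, shipping nearly doubles the turnover each sale generates while adding nothing to the margin, so the kit would eat a disproportionate chunk of my headroom below the threshold.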
    One day I will probably be in the VAT system due to Language Spy's linguistic products. At that point I'd look at this in a completely different light and might even go ahead with the project. But meanwhile it's a minor thing to chalk up to experience, and a reminder to do the maths in minute detail before committing to anything.

Wednesday, 22 April 2015

An inexpensive video RF modulator

    This is a description I wrote a few years ago of my RF modulator for displaying video on older TVs that don't have a composite video input. The Grundig telly mentioned and the cheap DVD player that prompted the build are both long gone, but the modulator is still a very useful piece of kit that gets used for a variety of video sources.

    My TV set is an ancient Grundig. It's a good telly for its age, but nowadays any TV without a SCART or other AV socket on the back is getting rather difficult to connect to newer kit like my DVD player, which didn't come with an RF output. No problem, I thought, I'll just plug the DVD into the SCART on the back of my VCR and feed the TV through that. At which point I came up against our friends the copyright owners. I don't think they like people plugging DVD players into VCRs! To stop people copying DVDs to tape, they incorporate an alternating peak-white/peak-black bar into the teletext lines of the video signal encoded on the DVD. These are the lines that normally sit out of sight above the screen on the TV; you'll only see them move past if your TV loses frame hold. The effect of this peak-white/peak-black cycling is to play havoc with the VCR's automatic level control, resulting in a signal from the VCR that flashes on and off and is unwatchable. As someone who is just using their VCR as an RF modulator, I get caught in the crossfire.

    At this point I had several options: (1) buy a new telly with a SCART socket, (2) return my DVD player and buy one with an RF output, (3) buy an RF modulator, or (4) build an RF modulator of my own. I chose (4) because I didn't have the cash for the other three, and since it was Christmas I wouldn't have found a shop open to sell me one. I looked on the Web to see if anyone else had built one and couldn't find any information, so here to fill the gap are the details of my RF modulator.
    I decided not to build my modulator from first principles. A simple design with a UHF cavity oscillator and basic sound and vision carrier and modulation circuits is not impossible to make using parts from a scrap TV set, but when so many set-top devices already have a modulator built in, why bother? Instead I lifted the RF modulator from a scrap Salora satellite receiver I picked up at a radio rally.

    RF modulator modules usually conform to a fairly generic design. I have seen almost identical modules from different manufacturers in VCRs, set-top boxes and satellite receivers with a wide variety of brand names over the years. They are usually a shiny tinplate box a bit larger than a matchbox, with PCB mounting pins protruding from the bottom and at least one co-axial RF socket on the side. Mine has two co-axial connectors, one for the antenna and one for the TV, a small switch to enable a test signal for tuning the TV, and a 5-pin PCB header for signal and power. Since it came from a device for the British market it has a 6MHz FM sound carrier and outputs on UHF channel 36. The output channel is adjustable by means of a trimmer screw. If you live somewhere else in the world your local specs may be different, however the principle should be the same.
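If you're wondering where UHF channel 36 actually sits, the UK (System I) channel plan makes it easy to work out: 8MHz-wide channels, with the vision carrier of channel 21 at 471.25MHz and the FM sound carrier 6MHz above vision. A quick sketch:

```python
def vision_carrier_mhz(channel):
    """Vision carrier for a UK UHF channel (21-68), System I plan."""
    return 471.25 + 8 * (channel - 21)

vision = vision_carrier_mhz(36)   # where the TV tunes the picture
sound = vision + 6.0              # intercarrier FM sound, as on this module
print(vision, sound)              # 591.25 597.25
```

Channel 36 has long been the customary default for domestic modulators in the UK; the trimmer screw is there to move you off it if a local broadcast signal clashes.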

    Before I removed the module from the donor satellite receiver, an element of signal tracing was necessary to work out which pin did what on the PCB header. I was fortunate that, while the receiver I was using had a dead CPU, its main functions were in working order, so I was able to identify the power supply pins quickly by powering it up and using a multimeter. The video and audio pins were a little more difficult: while it was pretty obvious which two pins were the signal pins, a little tracing of PCB tracks to the video processor and the sound chip respectively was required to be certain which was which.

    There now follows a quick description of each pin on my modulator. If you do this yourself there is no guarantee that yours will be the same, however the generic nature of these modules means that they are usually similar. I have numbered the pins from left to right, with the RF connectors on the top and to the left.

    Pin 1: Video. I don't know the spec of this input, but it is very happy with the 1V peak-to-peak composite video from the phono socket on the back of the DVD player.
    Pin 2: Audio. This pin takes a line-level audio input. My module is not a stereo device so this is mono only. As I run the audio through a hi-fi system this doesn't matter to me, but I could simply connect both the left and right audio outputs from the DVD to this pin.
    Pin 3: +5V modulator power. This pin provides power to the modulator circuit.
    Pin 4: Ground. I connected this pin to the tinplate chassis of the module, which also formed the ground for my power supply circuit.
    Pin 5: +5V antenna passthrough power. I did not use this pin. In the satellite receiver it was powered by the standby power supply to provide an amplified passthrough from the antenna socket to the TV socket. Since I did not need this function I ignored it.

    The circuit diagram of my modulator is shown below.

    To power this modulator I built a simple 5 volt regulator using the ubiquitous 7805 IC. I simply soldered a TO220 heatsink to the module case and built the circuit around it. My choice of capacitor values was based on those I had to hand. I also included an LED to serve as a pilot light to indicate that the unit was turned on.

    The 7805 circuit was powered from a surplus 7.6V adapter originally designed for an Ericsson mobile phone. There are so many pieces of electronic equipment powered by small low-voltage adapters these days that it should not be difficult to find a suitable surplus power supply. Any DC source from about 7 to 9 volts should be suitable, as the 7805 needs a couple of volts of headroom above its 5V output; if you do not have a suitable candidate you can buy a universal power supply.
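It's worth a quick check of how much heat the regulator will have to shed. The 60mA load figure here is my assumption for the modulator plus pilot LED, not a measured value; measure your own module before trusting the sums.

```python
v_in = 7.6      # the surplus phone adapter's output (V)
v_out = 5.0     # regulated rail for the modulator (V)
i_load = 0.060  # assumed modulator + pilot LED current (A)

headroom = v_in - v_out     # must stay above the 7805's ~2V dropout
p_diss = headroom * i_load  # heat dissipated in the regulator (W)
print(round(headroom, 1), round(p_diss, 3))
```

At well under a watt the TO220 tab barely needs a heatsink at all, but bolting it to the tinplate case costs nothing and keeps everything cool.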

    The video and audio pins were simply connected to trailing phono sockets by short lengths of coaxial cable and the whole unit was mounted in a small plastic box with some hot glue.

    The performance of the completed modulator is the same as you would expect from the piece of equipment the module came from. Once the TV is connected to the modulator output and tuned to the test signal, the test switch can be turned off and a signal applied to the phono sockets. An RF modulated signal is unlikely to deliver picture quality equivalent to a directly fed video signal, but in this case the unit performed well and the quality when viewed on my TV was not noticeably different from that of off-air analogue broadcast signals.

    In conclusion, this unit provides a quick, high-quality RF modulator built from a selection of junkbox parts.

Thursday, 16 April 2015

The sad state of the Raspberry Pi software ecosystem

    When looking at possible channels for a future Language Spy product last week, I took a look at the Raspberry Pi Store.  I've been a Pi enthusiast from the moment I heard about the project, indeed the Language Spy political corpus is driven by a pair of them.
    The Pi Store is an app store powered by IndieCity which is available both on the web and through an application included in the Raspbian Linux distribution that most people run on their Pi.
    I hadn't looked at the store since it was launched, so I was rather saddened to see it in something of a moribund state. Very little activity, an absence of the commercial software that was there when I'd last looked, not a channel that seemed to be going anywhere.
    Of course, you might ask why the Pi needs an app store. After all, Raspbian is a Linux distro, and thus has the whole huge library of Linux software available to it, or at least all of it that will compile and run on the Pi's limited hardware. And in a sense you'd be right, in that a few seconds with apt-get can satisfy almost your every software need.
    But the success of a machine like the Pi depends on more than just using an operating system with a much wider support. Platforms succeed when they create an ecosystem around them, and while the Pi has been very successful at creating a hardware ecosystem the failure of the Pi Store serves to highlight the sad state of its software ecosystem.
    In the last couple of decades working in software companies, it has been my observation that the most successful new platforms are those with the lowest barriers to development. In the 1990s, for example, the PlayStation trounced the cartridge consoles by not requiring developers to stump up for a huge inventory of plastic bricks.
    In a way one strength of the Pi - its Linux OS - is also its weakness when it comes to the Pi Store. There are so many different paths to Linux development that the Pi Store lacks that crucial low barrier to entry offered by a simple choice. What the Pi Store needs is an "official" way to write code for it. A straightforward community-supported development path encapsulated in a single download from the store which contains everything needed to publish. In fact I'd go further, what it needs is two such routes, one for simple apps and one for more demanding apps, roughly analogous to a framework like Apache Cordova and Java respectively in the Android world. Maybe Cordova on Qt and Qt itself would fit the bill.
    Meanwhile it's no use saying "Rubbish, everyone can just use [insert your pet 1337 dev environment here]!" when the people who are slowly becoming advanced users and wanting to code on their Pis might not have the required technical expertise to master all its nuances straight away.
    Done that way, the Pi Store could become a channel with a low barrier to entry. This is not to say that the "official" dev path need be the only one; it's more a case of "You can code how you like for the store, but here's a straightforward way to do it".
    I won't be developing that Language Spy product first for the Pi Store as it stands; it would not make sense when other channels will give me a much better airing. The Pi remains a platform with some potential for a developer like me, but until some serious attention is paid to its app store I don't see it gathering its own software ecosystem.
    I don't know about you, but as a Pi enthusiast I think that's a shame.

Wednesday, 1 April 2015

Automotive manifesto

    Cars are crap, these days.

    Something has to be wrong for me to say that. I'm a lifelong motor enthusiast, petrolhead and engineer. Cars are, on paper at least, better than they've ever been. Their engines last for hundreds of thousands of miles, they don't rust, and they all handle pretty much perfectly.

    So why do I say that cars are crap these days?

    A few weeks ago I sat in a new car from a global manufacturer. It's nothing special, most cars from the last decade are to a greater or lesser extent similar. It's got an extremely advanced engine that will return MPG figures impossible a generation ago, it has a galvanised body, and it'll reliably haul a family of four all day in supreme comfort at autobahn speeds.

    Yet I am fairly certain that it and nearly all of its model run will be headed for the crusher within a decade. Why? Driving it is not the experience we'd expect from a car made a decade or more ago; instead it's a software experience. It has a digital dash whose instruments were, as far as I could see, mostly on a TFT screen. When you turn the headlights on, a switch doesn't complete a circuit to the light; instead a computer sends a signal to the microcontroller in the light to turn it on. Same with all the other controls, even the handbrake. Yes, the handbrake, that thing you rely on to stop the car rolling away down a hill, is no longer a cable but a computer.

    I looked at that car and saw an engineering masterpiece. I'm an electronic engineer, my dad's a blacksmith and I've been around bits of cars all my life. I know how all this stuff works, in intricate detail.

    But I also looked at that car and saw something I wouldn't touch with a barge-pole. I know that within that car is something that won't live up to the manufacturer's hard-won reputation for reliability. It may be that digital dash, it may be the microcontroller in the headlight, the one in the brakes, the throttle, or even the network of data cables that carry all that info around the car's systems, but something is going to fail on that car that won't be fixable without telephone number money, and then the car will be junk.

    The problem, of course, is that the manufacturer couldn't care less. The person who owns the car when whatever piece of techno-crap sends it to the scrapheap won't be the person who drove it off the forecourt, because people who are prepared to take the hit of new car depreciation usually do so because they are so desperate to always drive a new car that the money doesn't matter. So the people who keep the manufacturer in business simply don't care about its built-in obsolescence, and therefore the manufacturer doesn't care about it either. (I am a rare exception: I've owned my 2001 car from new.)

    This makes no sense. It's an oft-quoted line that more energy is used in the manufacture of a car than in its lifetime of driving. If we can make cars whose engines and bodywork last forever, it is therefore irresponsible to throw them away before they are worn out. No, your fancy hybrid is about as green as a coal-fired power station if it only lasts a few years.

    I can not in good conscience participate in this scheme of car ownership. On an environmental level, sure, but also as an engineer: I don't want to buy something that's designed to be so unnecessarily complex as to be unfixable. It's just wrong.

    So here's my automotive manifesto.

    I will buy a car that has in every instance only the technology that is needed to perform the task in hand, and no more. If all that is needed to turn my headlight on is a switch and a copper wire, then I do not need a CAN bus and two microprocessors to do it. I am quite happy to pay for the copper wire and carry the marginal extra weight it brings to the car.

    I will buy a car that has high technology where it is needed and where its use makes sense. For example a microprocessor is necessary to control my engine or my anti-lock brakes, but is not necessary to provide basic instrumentation.

    I will not buy a car that uses high technology to ensure early obsolescence or to lock-in to a dealer network. If your digital dash costs a four figure sum to replace, or if only your dealer can perform service tasks, then you will not see my money and neither will your dealer.

    If I can not buy a car that meets these simple requirements, then I will not spend a lot of money on a car made in a European, American or Japanese factory. I will simply spend as little as possible on the cheapest pile of Pacific-rim crap-on-wheels I can find, and bin it when it dies. If I do that then I'll have wasted a lot less money than I would binning a fancy car with a dead digital dash.

    I'm just one person. My consumer choices don't figure on the radar of a car manufacturer. But I know the automotive frustrations of one can also be the frustrations of many, because for years I kept the Austin Rover faith while they churned out frankly awful cars. Where are those Austin Rover factories now? Mostly housing estates, retail and industrial parks.

    I know I won't be alone in walking onto the forecourt of the first budget Chinese carmaker to arrive in my town. Who knows, perhaps it'll occupy the site of the factory where they made that brand new car I mentioned earlier.

Thursday, 12 March 2015

The day I was DoSed by Google


    I launched my startup this week. On a very modest scale, no debauched parties or penthouse offices on scads of VC money, just me and my laptop in my dad's living room, popping open a bottle of home made cider to mark the event.

    Language Spy is the tangible fruit of a seven or eight year side project, creating a searchable corpus of political language. It's driven by a pair of Raspberry Pis doing the numbercrunching and uploading data to Google Cloud Storage buckets from whence the site is served by Google App Engine.
    You can see events unfolding through the words used about them, for example the correlation between "Hillary Clinton" and "Email" in the last week of US politics. If like me you're a news junkie, it's compelling viewing.

    Unfortunately though if you click the link as I write this, you'll see a Google App Engine quota exceeded message. The site won't work, because I have reached the point at which I can't afford the traffic it's serving and it has exceeded my daily budget.

    This traffic spike would be no problem if it were generated by real site users as then I'd be able to monetise the traffic, but sadly it isn't. Instead it's generated by GoogleBot. That's right, being indexed by a search engine has taken my site down. The bot looks at the site, decides it's on some very fast infrastructure, and issues millions of requests per hour.

    When I examine my problem, it becomes clear that it has several aspects:

  1. It's a language analysis site, so it has a *lot* of pages for the spider to crawl.
  2. Being a language analysis site there are no pieces of language I can exclude using robots.txt, so I can't reduce the load by conventional means. How do you decide which language is more important than other pieces? You can't, at least not when your aim is to have it all open for analysis.
  3. I can't tell Google to slow down a little, as where I'd expect to be able to do this in Webmaster Tools I get a "Your site has been assigned special crawl rate settings. You will not be able to change the crawl rate." message. I see this as the sticking point, if I could restrict Googlebot's rate I'd be able to keep the site running and take the hit of not being so well indexed.
    This means that the spider eats through my daily Google App Engine quota very quickly indeed. I find myself racking up a hundred instance hours in a very short time as GAE spins up loads of new instances to deal with the spider. Pretty soon the site hits its daily quota and goes down. I could keep it going by feeding in more money, but I'd have to put hundreds of dollars a day into it with no end in sight, and I am not made of money.
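To give a feel for the scale of the problem, here's a rough model. Both figures are assumptions for illustration rather than numbers from my billing console: a crawl in the low millions of requests per hour, and an App Engine instance that saturates at around ten requests per second.

```python
# How quickly an aggressive crawl turns into billed instance-hours.
crawl_per_hour = 2_000_000   # assumed Googlebot request rate
per_instance_rps = 10        # assumed requests/sec one instance serves

crawl_rps = crawl_per_hour / 3600
instances = crawl_rps / per_instance_rps   # instances GAE scales up to
# Each concurrent instance accrues one billed instance-hour per hour,
# so at this rate a hundred instance-hours vanish in under two hours.
hours_to_burn_100 = 100 / instances
print(round(instances), round(hours_to_burn_100, 1))
```

Scale either assumption up or down and the shape of the problem stays the same: the spider sets the instance count, and the instance count sets the bill.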
    Right now my only hope lies with a crawl issue report I filed with the Webmaster Tools team; if they can give me control over my indexing rate I'll be good to go. But I can't say when they'll come back to me, if ever, so I may just have to come up with a Plan B.
    Is there a moral to this story? Perhaps it's a cautionary tale for a small startup tempted to use cloud hosting. Google Cloud Storage has proved very cost-effective for a huge language database, but the sting in the tail has turned out to be how GoogleBot behaves when it sees a cloud server, and how per-instance billing on App Engine handles unexpected traffic surges. The fact that it's Google who are causing me to use up my budget with Google is annoying but not sinister; however, being given the option neither to limit my GAE instance count nor to slow down the crawl rate doesn't leave me the happiest of customers.

    So yes, I've launched a startup. It's live for an hour or two a day while it has budget, in the morning UK time. Perhaps that will be my epitaph.

Sunday, 8 February 2015

When open source software goes astray

     I have just spent about half a day installing and configuring the latest version of Apache Cordova. I've used Cordova for several years now; it remains a great way to write a cross-platform mobile app.
     The reason for this article though lies in the first line. I spent half a day simply installing and configuring a mobile framework; I haven't opened an IDE yet, still less written a line of code. Let's consider that for a minute: this is a mobile framework, not an OS distribution! I'm guessing I would have spent less time reinstalling Debian than I did yesterday, to end up with a few megabytes of mobile project.
     Because that's all Cordova is, when you get down to it. A set of ready-to-go mobile projects for each platform that you pull into your IDE and start pushing HTML and Javascript into. And until version 3 that was how it was distributed. Unpack the downloaded archive, pick up the directory for your platform, pull it into the IDE, start coding. Have your prototype app in front of the boss at the end of the day.
     Cordova 3 changed all that, and going by my experience yesterday Cordova 4 hasn't fixed it. You install Node, use npm to get Cordova, run a command line to create your project, then another to add your chosen platforms and plugins to it. The project directory has one master www directory in which you put your app code, and a simple command line builds each platform-specific project from that www directory. Putting aside the version and configuration hell that marred yesterday afternoon's install for me, it sounds really elegant.
    Unfortunately though that is not the way apps are written in the real world. It would be great to write one copy of your HTML5 app and then deploy it to each platform and compile it from the command line on the same computer, but sadly cross platform app development doesn't work that way.
    Picture for a minute the office of a typical Cordova developer who produces apps across more than one platform. They may have a Linux box for Android development, running Android Studio or maybe Eclipse. Their Windows Phone apps will be developed on a Windows box with Visual Studio. And finally they will have a Mac running Xcode on which they develop their iOS apps. This is not a choice but a necessity, try asking Tim Cook or Satya Nadella for permission to develop for their mobile platform on the other's OS and you won't get very far. It's possible that the Android apps could be developed on the Mac or the Windows box, but it's still largely true that if you want to do cross platform app development for multiple target platforms you are going to have to have multiple development platforms and more than one IDE. It's not ideal and none of us really want it or like it, but it's the harsh reality of developing software for closed-source commercial platforms.
    In that environment Cordova's elegant project build structure is almost completely useless. Putting aside the unfortunate fact that each platform demands subtly different code for its in-app browser, it is impossible to load the generated Cordova projects directly into Android Studio, Xcode, or Visual Studio and develop them from their Cordova build directories because each time the project is rebuilt at the Cordova end the project you see in your IDE is overwritten. So you end up using Cordova's build system to create vanilla projects for Android, iOS and Windows Phone that you copy somewhere else and load into the IDEs before never touching the Cordova build system again. Instead of being an elegant and useful addition to the project it has become an annoyance and a significant inconvenience.
     So there's my Cordova rant. It's a bit unfair; Cordova remains a really useful piece of software. In reality I think it would be better to characterise my rant as one against the tendency in open source software to incorporate features not because they make sense for the software itself, but because someone thinks they are a good idea. The developers of PHP, for instance, who are not PHP coders but C and C++ coders, seem to be on a mission to turn a sequential scripting language into a clone of C++; it makes sense to them, but it is increasingly divergent from the use case of the vast majority of PHP developers.
    Every open source project risks this pitfall as a side-effect of success. People write software because they want to solve a problem in their lives, but eventually they transition from being the people with a need for the software to being the people who only write it. They lose touch with why and how the end users use the software and imagine that they and not their users know best how it should be used.
    There is an obvious difference between the open source movement and the closed source commercial world. But one thing remains constant in both worlds: the customers will dump your product if you ignore their needs and try to dictate to them what they will have. IBM learned this the hard way in the 1980s with the PS/2; the customers and the rest of the industry continued evolving the PCs they already had rather than taking the direction IBM wished to impose on them.
    I find it sad that the same tale is played out in the open source world. Open source software could be a rich dialogue between users and developers, instead each successful project eventually loses its way just as commercial entities do. I suppose moribund companies spawn upstart competitors just as moribund open source software spawns vigorous forks, but there are times when the wait for a fork seems awfully long.

Friday, 14 November 2014

Someone who can't see 3D images uses an Oculus Rift for the first time.

   Last night, at Oxford Hackspace, I had my first try at an Oculus Rift VR headset. It was running one of the Rift demos, an office desk with a lamp and a playing card castle on it. Nothing unusual there, you might say. I'm guessing first Oculus experiences aren't much to write home about these days.
    Maybe I'm not in Oculus's sights as a customer, though, because as I've written here before I was born with a strabismus. I have perfect vision in both eyes, but my left eye looks somewhere around twenty degrees to the left, and I can switch my main vision from eye to eye. This means I can see over my left shoulder, keep an eye on the mirror without taking an eye off the road while driving a right-hand-drive car, and move my left eye independently of my right as a party trick.
    It also means I can't see in 3D in the same way as someone born with both eyes on the same heading. In the few months following my birth when my brain was configuring itself for the world around it, I never acquired the capacity for seeing objects as three-dimensional by combining images from left and right eyes because my eyes didn't deliver images close enough to each other. This doesn't mean I live in a flat two-dimensional world, it simply means I don't process 3D information in the same way as other people. To use an analogy of a modern computer system, I do my 3D processing in software rather than hardware. My brain developed to interpolate 3-dimensional space from a single 2-dimensional image and sometimes by the effect of my movement rather than by comparing two such images from different angles. I live in a 3D world, but sometimes - as for instance when trying to be a batsman in a school cricket team - things happen a little too fast for my 3D processing to follow.
    All of this means that 3D effects designed for people without a strabismus - stereograms, autostereograms, and 3D movies - don't work for me. I will never be able to see an autostereogram, and a 3D movie is for me a 2D movie in which I disregard what's being sent to my left eye. Fine by me, I'm happier to pay less for a 2D showing anyway.
    So I didn't expect much from the Oculus. And yes, once looking into it the feeling was of having a couple of normal 2D screens in front of me. Not too different from my day-to-day workstation, except with full-field vision rather than a pair of LCD monitors.
    The sight of assorted geeks queuing up to flail blindly about a crowded hackspace wearing a headset is mildly amusing. In my case, trying to keep my movements to a minimum, I started to look around. Looking down, the desk and the lamp were just a 2D picture of a desk surface and a lamp. Switch from eye to eye, and nothing changed.
    It was a mild surprise when I moved my head to the left and the card castle came into view. It was a picture of a card castle on a monitor, then suddenly it was a 3D card castle. Swing back to the lamp, and it's a picture of a lamp. Back to the card castle, and it's a 3D object. I was only using my right eye, but I could see inside it as I knelt down.
     So I had one 3D object in otherwise 2D space. Not really Better Than Life, but far more than I expected. What's happening here? In the real world I interpolate 3D even when I'm not moving, because I know something about the environment. I know a teapot in front of me is a 3D object because I have a huge amount of real-world lighting and other visual information, and I can reach out and touch it. That knowledge means that when I reach out to take hold of it I know not only roughly where it is but what size it is, so I can grasp its handle with a good expectation of the dimensions my hand will have to close over. If I move around an object the amount of information I have about it increases, and so does my ability to see it in 3D.
     I think that there is an information threshold for me to see something in 3D. A 2D image on a screen or a page does not carry enough information, from its lighting or from how its aspect changes, for me to see it as containing 3D objects. The lamp and the desk did not pass this threshold, while the card castle did: it had enough complexity to provide my brain with the information it needed as my motion caused the Oculus image to change.
    This opens up the interesting prospect of people with a lifelong strabismus having an ability not shared with most others: that of seeing a 3D image from a single 2D source when that source is coupled with motion sensing. It would be extremely interesting to see if a 3D-rendered-as-2D FPS game could become 3D-immersive for me if it were displayed on an Oculus with its motion sensor controlling in-game movement.
     I am guessing I only have this ability because my strabismus is life-long: I gained it because my brain developed with it. I am therefore guessing that someone who grew up with stereoscopic vision but later lost an eye or gained a strabismus through injury would not see a 2D image as 3D in an Oculus headset. I'd be interested to hear whether or not that is the case.
     Some people have a big problem with strabismuses. Parents subject their children to surgery, without their consent, to have them "corrected"; that says more about the parents' insecurities and their desire for "perfect" offspring than it does about the effect of a strabismus. Mine has mostly never been an issue: sometimes it's unnerved people, and I'll never play cricket or baseball. I would never subject a child of mine to unnecessary cosmetic procedures; to me a strabismus is just part of who you are. Yesterday's experience added a new angle on my strabismus, and for that I'm happy. I can add seeing 2D images in 3D to seeing over my shoulder in the meagre list of super-powers it's given me.



Thursday, 3 July 2014

Bow for nowt


    Last weekend, I made a bow. Not because I've had a hankering for one, but because I spotted the perfect material and had to give it a go.

    My friend Rebecca was replacing her bed. The old one had a lot of very springy plywood slats, far too good to let go to the tip. So I detached them, bound them up with tape, and brought them back home on the back of my bike.

    They were too wide to make a bow as they were, because the ends of a bow need to be considerably less stiff than the centre where you hold it. And they weren't long enough either; I'd have to bolt two of them together.

    So I cut two of them on a taper, leaving a section of a few inches intact where they'd be bolted together. I then drilled three holes for the bolts, and applied a layer of No More Nails glue to the joining surfaces before doing up the bolts. This left me with a long and springy bow, but no means to attach a bowstring to it. Out with the saw again to cut a couple of notches at each end. My bowstring - a knotted piece of the orange baler twine that is ubiquitous on farms - simply hooked over the end of the bow into the notches.
    Is it a good bow? Not being an archer I have no idea. But it takes the length of my draw without damage, and it feels as though I'm putting quite an effort into it. And it shoots bamboo canes down the lawn rather well. Next up, a pack of cheap arrows from Amazon; I'm not hard-core enough to do my own fletching.

Wednesday, 12 March 2014

Small computer, big data

    This post comes as one of the longest-running scripts I've ever created has just finished its work. In the last week of January I set my Raspberry Pi to the task of processing five years of news stories into a 20GB tree of JSON files, and here in the second week of March it's completed the task.
    Given that a PC has done the same job in a couple of days, the first question anyone would ask is simply this: why?
    My Pi runs all day, every day. It collects news stories from RSS feeds and stores them in a MySQL database. It uses somewhere under 2 watts, and it will draw that no matter what I ask it to do, because it's plugged in all the time. I can touch its processor with my finger; it's not hot enough to hurt me. My laptop by comparison, with its multi-core Intel processor, board full of support chips, and SATA hard disk, uses somewhere under a hundred watts. I can feel the hot air as its fan struggles to shift the heat from the heatsink, and I wouldn't like to hold my finger on its processor, assuming I could get past its heat pipe.
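    The back-of-an-envelope energy sums bear this out. Using the rough wattages above, and assuming roughly six weeks of Pi runtime against the couple of days a PC would take (the durations here are my approximations, not measurements):

```shell
# Rough energy comparison; wattages and durations are the post's
# approximate figures, so treat the totals as ballpark only.
pi_wh=$(( 2 * 24 * 42 ))        # ~2 W for ~6 weeks  = 2016 Wh
laptop_wh=$(( 100 * 24 * 2 ))   # ~100 W for ~2 days = 4800 Wh
echo "Pi: ${pi_wh} Wh, laptop: ${laptop_wh} Wh"
```

    Even taking six weeks over the job, the Pi comes out at well under half the laptop's energy, and its 2 watts would have been drawn anyway.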
    Thus, since I'm in no hurry for the results, it makes more sense for me to run the script on the Pi, where it will use a lot less power. This isn't an exercise in using a Pi for the sake of it; the Pi was simply the most appropriate machine for the task.
    So having run a mammoth script on a tiny computer for a couple of months, how did I do it and what did I learn?
    The first thing I'd like to say is that I'm newly impressed with the robustness of Linux. I've run Linux web servers since the 1990s, but I've never hammered any of my Linux boxes in quite this way. Despite my script stealing most of the Pi's memory and processor power, it kept on with its everyday tasks, fetching news stories and storing them as always. I could use its web server (a little slowly, it's true), I could use its Samba share, and I could keep an eye on its MySQL server. Being impressed by this might seem odd, but I'm more used to hammering a Windows laptop in this way, and I know from experience that the Windows box has not been so forgiving when running earlier iterations of the same script.
    If anybody else fancies hammering their Pi with a couple of months of big data, here's how I did it. The script itself was written in PHP and called from a shell inside an instance of screen, so I could connect and disconnect at will via ssh without stopping the script running. The data came from the MySQL server and was processed out to a 64GB USB Flash disk. The Flash disk is formatted as ext4 without journaling, which I judged to be the best combination of speed and space efficiency. An early test with a different, FAT-formatted drive provided a vivid demonstration of filesystem efficiency: the FAT ended up using 80% of the space after only a short period of processing.
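    A minimal sketch of that setup, assuming the stick shows up as /dev/sda1 and the script is called process_stories.php (both names are mine for illustration, not the originals):

```shell
# Check which device the USB stick actually is before formatting anything!
lsblk

# ext4 with the journal turned off, as described above
sudo mkfs.ext4 -O ^has_journal -L newsdata /dev/sda1
sudo mkdir -p /mnt/newsdata
sudo mount /dev/sda1 /mnt/newsdata

# Run the long job inside a detached screen session named "crunch",
# so an ssh connection can come and go without killing the script
screen -dmS crunch php process_stories.php
screen -r crunch      # reattach later to check progress
                      # (Ctrl-A then D detaches again)
```

    The journal-free ext4 trades crash recovery for less wear and overhead on the Flash, which is a reasonable bargain when the data can always be regenerated from the MySQL database.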
    The bottleneck turned out to be the Flash drive, a Lexar JumpDrive. Reading and writing seemed to happen in bursts: the script would run quickly for about 30 seconds and then very slowly for the next 30, purely due to disk I/O. In future I might try the same task with a USB-to-SATA hard disk, though I'd lose my power advantage.
    So would I do the same again, and how might I change my approach? I think the Pi was a success in terms of reliable unattended operation, and in terms of low power usage on a machine I'd have had running anyway. But in terms of data processing efficiency it could have been a lot better. A faster disk and a faster computer, perhaps something with the Pi's power advantage but a bit more processor grunt such as the CubieBoard, would have delivered the goods more quickly for not a huge extra investment. And the operating system, though reliable, could probably have been improved: I used a stock Raspbian, albeit with the memory allocation for graphics reduced as low as it would go. Perhaps if I'd built an Arch image with a minimum of dross I would have seen a performance increase.
    I used a Raspberry Pi for this job because it was convenient to do so: it uses very little power, and I had one that would have been powered up anyway throughout the period the script was running. The Pi performed as well as I expected, but I can't conclude anything other than that it is not the ideal computer for this particular job. It is sometimes tempting, when you are an enthusiast for a particular platform, to see it as ideal for all applications; in this case that would be folly.
    The Pi will continue to crunch the data it collects, though on a day-to-day basis as part of the collection process. In that role it'll be much better suited to the task: as a cron job running in the middle of the night, the extra work of a day's keyword crunching won't be noticed. And there's the value in this exercise: something that used to require a PC, some of my time and a little bit of code has been turned into an automated process running on a £25 computer using negligible power. I think I call that a result, don't you?
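    For anyone wanting the same nightly arrangement, the crontab entry is a one-liner. The script path and name here are illustrative stand-ins, not the originals:

```shell
# Edit the pi user's crontab with: crontab -e
# Run the day's keyword crunching at 3am, appending output to a log
0 3 * * * php /home/pi/news/daily_crunch.php >> /home/pi/news/crunch.log 2>&1
```

    Redirecting both stdout and stderr to a log file means any overnight failure leaves a trace to read over breakfast, rather than vanishing silently.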