
A pea soup

A nice fresh pea soup can be great sometimes, and also a good thing to do with leftovers. This worked well for me when I had some leftover spring onions, creme fraiche and wasabi. You can of course leave out the wasabi, or swap the creme fraiche for cream or a dab of milk, or you could add watercress perhaps.

  • A small knob of butter
  • 4 spring onions
  • 3-4 handfuls frozen peas (no need to defrost them!)
  • A dab of wasabi paste
  • About 75ml creme fraiche
  • Black pepper

Boil a kettle.

In a smallish pan melt the butter. Chop the spring onions, and fry the white bits gently to soften them, about 4 minutes. Then add the green bits of the spring onions, as well as the peas and the tiny dab of wasabi.

Turn up the heat and also add the boiling water, just enough to cover things. Once you've brought the pan to the boil you can turn it right down low, put a lid on it, and let it bubble gently for approx 10 minutes, no need for more.

Take the pan off the heat, and with a hand blender you can whizz up the pan's contents to blend it to a smooth soup. Add the black pepper and creme fraiche and stir it through.

| recipes |

Best veggieburgers in East London

Veggieburgers have come a long way - and London's vegetarian scene has quite a lot going on in the burger department! - so I wanted to try a few of them. I particularly wanted to work out, for my own cooking, what's a good strategy for making a veggieburger that is hearty, tasty, and, well, frankly, worth ordering. For me, your old-fashioned beanburger isn't going to do it.

So here's my list of top veggieburgers in London so far, mostly in East London. It's not an exhaustive survey, and I might update it, but I wanted to get at least 5 really solid recommendations and then write this up. I'm surprised to find that most of them are not only vegetarian but vegan. Here's the top 5! You should definitely eat these:

  1. Arancini (Dalston) "vegan burger" - Really tasty and substantial fast food, made of their risotto stuff. A good whack of umami from the aubergine sauce, and good chips too. Highly recommended burger. As this other blogger puts it: "This is a burger where all the elements are nicely thought through - good bun, fresh salad, lots of good chutney [and the burger is] one note in a well-played burger orchestra."
  2. Mildred's (Camden and elsewhere - Dalston coming soon) "smoked tofu, lentil, piquillo pepper burger" - A great burger with plenty of bite to it, nice smoky burgery flavour. I think the tofu provides the heft while the lentils add texture. It was a little bit dry so I had to add tomato ketchup (nowt wrong with that), and with that in place it was the full package. Just look at it!
  3. Eat17 (Homerton): This was a curious place, housed in the corner of a Spar supermarket, and also serving the Castle cinema upstairs. Anyway they gave me a nice black bean + quinoa burger, with a good crispy exterior. Presumably it's got some beetroot in it too, given the rich purpley colour. The flavour of the burger itself could be a bit heavier, and the chips were too salty as always, but otherwise a very good showing, especially coming from a non-veggie place.
    (BTW I think this is the only non-vegan burger in the top five.)
  4. Black Cat Cafe (Hackney): "beefish burger" I think they called their vegan burger, but actually to me it had a pleasing chorizo-like taste. The burger is firm and hefty (good), the chips are good, and their vegan mayo tastes great.
  5. Vx (Kings Cross): a tasty and filling cheezburger, near to Kings Cross station - handy! They're not trying to be fancy-fancy - in fact this burger has quite a McDonaldsy taste to it, except that the burger patty (made of seitan) is more generous/filling than a McD. Good stuff.

...so that's the top list so far. Plenty more to try. Here's something that surprised me: although there are plenty of nouveau pop-up veggieburger stalls springing up all around, those aren't where I seem to find the best burgers. They get the twitter hype but they don't seem to have got their recipes to match the hype. The list I just gave you is mostly well-established places (and two of them are omni).


Some other burgers I tried:

  • Mooshies (Brick Lane) - I had the "where's the beef" burger which is made of black bean and quinoa, plus all the trimmings. I was disappointed that the burger patty had not much bite to it - it was more like "vegan mince" than "vegan burger", splodging everywhere at the slightest provocation. The flavour was nice, the black beans providing a dark enough flavour (more beef-like than a nut-burger or a veg-burger) and the quinoa provided a good bit of structured feel on the tongue. So I think black-bean-and-quinoa is a good idea, but you really need to put those together with something that'll make a good firm patty. Flavour-wise, the mayo left me with an overly vinegary aftertaste, so I hope they do something about that too.

    On a second visit we had the pulled-pork bbq jackfruit which was fine (but my recipe's better ;), and the onion bhaji burger which was really nice - crispy and full of flavour.

  • Greedy Cow (Mile End) - nice fresh-tasting vegetable burger with a nice crispy exterior to the patty. Definitely tasted like they care about their veggieburger. I liked it and for an omni place it's very good indeed, but it's not up in the top league since I'm more interested in the more "meaty" angle on a veggieburger.

  • The Hive Wellbeing (Bethnal Green) - The burger patty was oddly small, about 1/2 the size of the bun, which was silly. The patty itself had a lovely clear fresh pumpkin-seed flavour and texture (it's made of mushroom and courgette too). Nice chutney underneath. The flavours overall are savoury and sharp (partly from the mustard mayo) - good, though potentially not for everyone. On a second visit, I found again that the burger was made of good stuff but was kinda awkwardly put together - proportions a little bit off - maybe it's a good recipe, inattentively prepared? It seems to me it could be in the running to be the best, if it were given a bit more care.

  • Vurgers (popup) - I had the "BBQ nut" one, because it looked likely to be the most substantial. It was a decent meal and hefty enough, but way way overseasoned - so much sugar, so much salt, so much acid. I know that "BBQ" largely implies they'll overdo those things, but there's supposed to be more than those blunt notes. The burger was filling and decent but didn't have much bite - it kept its shape through inertia rather than strength, if you get what I mean. Certainly good enough to mention, but not getting near the top, and served in a poncey-seeming location with price to match.

| Food |

Review of "Living Together: Mind and Machine Intelligence" by Neil Lawrence

In the early twentieth century when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at a distance". They turned to philosophy and developed new metaphysics of their own. Thought-experiments such as Schrodinger's cat, originally intended to highlight the absurdity of the standard "Copenhagen interpretation", became standard teaching examples.

In the twenty-first century, researchers in artificial intelligence (AI) and machine learning (ML) find themselves in a roughly analogous position. There has been a sudden step-change in the abilities of machine learning systems, and the dream of AI (which had been put on ice after the initial enthusiasm of the 1960s turned out to be premature) has been reinvigorated - while at the same time, the deep and widespread industrial application of ML means that whatever advances are made, their effects will be felt. There's a new urgency to long-standing philosophical questions about minds, machines and society.

So I was glad to see that Neil Lawrence, an accomplished research leader in ML, published an article on these social implications. The article is "Living Together: Mind and Machine Intelligence". Lawrence makes a noble attempt to provide an objective basis for considering the differences between human and machine intelligences, and what those differences imply for the future place of machine intelligence in society.

In case you're not familiar with the arXiv website I should point out that articles there are un-refereed: they haven't been through the peer-review process that guards the gate of standard scientific journals. And let me cut to the chase - I'm not sure which journal he was targeting, but if I were a reviewer I wouldn't have recommended acceptance. Lawrence's computer science is excellent, but here I find his philosophical arguments disappointing. Here's my review:

Embodiment? No: containment

A key difference between humans and machines, notes Lawrence, is that we humans - considered for the moment as abstract computational agents - have high computational capacity but a very limited bandwidth to communicate. We speak (or type) our thoughts, but really we're sharing only the tiniest shard of the information we have computed, whereas modern computers can calculate quite a lot (not as much as a brain) but can communicate with such high bandwidth that the results are essentially not "trapped" in the computer. For Lawrence this is a key difference, making the boundaries between machine intelligences much less pertinent than the boundaries between natural intelligences, and suggesting that future AI might act not as a lot of "agents" but as a unified subconscious.

Lawrence quantifies this difference as the numerical ratio between computational capacity and communicative bandwidth. Embarrassingly, he then names this ratio the "embodiment factor". The embodiment of cognition is an important idea in much modern thought-about-thought: essentially, "embodiment" is the rejection of the idea that my cognition can really be considered as an abstract computational process separate from my body. There are many ways we can see this: my cognition is non-trivially affected by whether or not I have hay-fever symptoms today; it's affected by the limited amount of energy I have, and the fact I must find food and shelter to keep that energy topped up; it's affected by whether I've written the letter "g" on my hand (or is it a "9"? oh well); it's affected by whether I have an abacus to hand; it's affected by whether or not I can fly, and thus whether in my experience it's useful to think about geography as two-dimensional or three-dimensional. (For a recent perspective on extended cognition in animals see the thoughts of a spiderweb.) I don't claim to be an expert on embodied cognition. But given the rich cognitive affordances that embodiment clearly offers, it's terribly embarrassing and a little revealing that Lawrence chooses to reduce it to the mere notion of being "locked in" (his phrase) with constraints on our ability to communicate.

Lawrence's ratio could perhaps be useful, so to defuse the unfortunate trivial reduction of embodiment, I would like to rename it "containment factor". He uses it to argue that while humans can be considered as individual intelligent agents, for computer intelligences the boundaries dissolve and they can be considered more as a single mass. But it's clear that containment is far from sufficient in itself: natural intelligences are not the only things whose computation is not matched by their communication. Otherwise we would have to consider an air-gapped laptop as an intelligent agent, but not an ordinary laptop.
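The ratio itself is easy to play with numerically. Here's a minimal back-of-envelope sketch in Python - all the figures in it are my own rough order-of-magnitude assumptions for illustration, not numbers taken from Lawrence's article:

```python
# Sketch of Lawrence's compute-to-bandwidth ratio ("embodiment factor"
# in the article; "containment factor" as renamed above).
# All figures are illustrative order-of-magnitude assumptions of mine,
# not values from the article.

def containment_factor(compute_ops_per_s, comms_bits_per_s):
    """Ratio of what an agent can compute to what it can communicate."""
    return compute_ops_per_s / comms_bits_per_s

# A human: vast neural computation, but speech carries something like
# hundreds of bits per second at most.
human = containment_factor(compute_ops_per_s=1e16, comms_bits_per_s=1e2)

# A commodity server: much less raw computation than a brain, but a
# gigabit network link - results flow out almost as fast as computed.
server = containment_factor(compute_ops_per_s=1e10, comms_bits_per_s=1e9)

print(f"human:  ~{human:.0e}")
print(f"server: ~{server:.0e}")
```

Whatever exact numbers you plug in, the human ratio dwarfs the machine one by many orders of magnitude - that asymmetry is the observation Lawrence builds on, even if (as argued above) the ratio alone can't define agenthood.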

Agents have their own goals and ambitions

The argument that the boundaries between AI agents dissolve also rests on another problem. In discussing communication Lawrence focusses too heavily on 'altruistic' or 'honest' communication: transparent communication between agents that are collaborating to mutually improve their picture of the world. This focus leads him to neglect the fact that communicating entities often have differing goals, and often have reason to be biased or even deceitful in the information shared.

The tension between communication and individual aims has been analysed in a long line of thought in evolutionary biology under the name of signalling theory - which considers, for example, the conditions under which "honest signalling" is beneficial to the signaller. It's important to remember that the different agents each have their own contexts and their own internal states/traits (maybe one is low on energy reserves, and another is not), which affect communicative goals even if the overall avowed aim is common.

In Lawrence's description the focus on honest communication leads him to claim that "if an entity's ability to communicate is high [...] then that entity is arguably no longer distinct from those which it is sharing with" (p3). This is a direct consequence of Lawrence's elision: it can only be "no longer distinct" if it has no distinct internal traits, states, or goals. The elision of this aspect recurs throughout, e.g. "communication reduces to a reconciliation of plot lines among us" (p5).

Unfortunately the implausible unification of AI into a single morass is a key plank of the ontology that Lawrence wants to develop, and also key to the societal consequences he draws.

There is no System Zero

Lawrence considers some notions of human cognition including the idea of "system 1 and system 2" thinking, and proposes that the mass of machine intelligence potentially forms a new "System Zero" whose essentially unconscious reasoning forms a new stratum of our cognition. The argument goes that this stratum has a strong influence on our thought and behaviour, and that the implications of this on society could be dramatic. This concept has an appeal of neatness but it falls down too easily. There is no System Zero, and Lawrence's conceptual starting-point in communication bandwidth shows us why:

  • Firstly, the morass of machine intelligence has no high-bandwidth connection to System 1 or to System 2. The reason we talk of "System 1 and System 2" coexisting in the same agent is that they're deeply and richly connected in our cognition. (BTW I don't attribute any special status to "System 1 and System 2", they're just heuristics for thinking about thinking - that doesn't really matter here.) Lawrence's own argument about the poverty of communication channels such as speech also goes for our reception of information. However intelligent, unified or indeed devious AI becomes, it communicates with humans through narrow channels such as adverts, notifications on your smartphone, or selecting items to show to you. The "wall" between ourselves as agents and AI will be significant for a long time.
    • Direct brain-computer interfacing is a potential counterargument here, and if that technology were to develop significantly then it is true that our cognition could gain a high-bandwidth interface. I remain sceptical that such potential will be non-trivially realised in my lifetime. And if it does come to pass, it would dissolve human-human bottlenecks as much as human-computer bottlenecks, so in either case Lawrence's ontology does not stand.
  • Secondly AI/ML technologies are not unified. There's no one entity connecting them all together, endowing them with the same objective. Do you really think that Google and Facebook, Europe and China, will pool their machine intelligences together, allowing unbounded and unguarded communication? No. And so irrespective of how high the bandwidth is within and between these silos, they each act as corporate agents, with some degrees of collusion and mutual inference, sure, but they do not unify into an underlying substrate of our intelligence. This disunification highlights the true ontology: these agents sit relative to us as agents - powerful, information-rich and potentially dangerous agents, but then so are some humans.

Disturbingly, Lawrence claims "System Zero is already aligned with our goals". This starts from a useful observation - that many commercial processes such as personalised advertising work because they attempt to align with our subconscious desires and biases. But again it elides too much. In reality, such processes are aligned not with our goals but with the goals of powerful social elites, large companies etc., and if they are aligned with our "system 1" goals then that is a contingent matter.

Importantly, the control of these processes is largely not democratic but controlled commercially or via might-makes-right. Therefore even if AI/ML does align with some people's desires, it will preferentially align with the desires of those with cash to spend.

We need models; machines might not

On a positive note: Lawrence argues that our limited communication bandwidth shapes our intelligence in a particular way: it makes it crucial for us to maintain "models" of others, so that we can infer their internal state (as well as our own) from their behaviour and their signalling. He argues that conversely, many ML systems do not need such structured models - they simply crunch on enough data and they are able to predict our behaviour pretty well. This distinction seems to me to mark a genuine difference between natural intelligence and AI, at least according to the current state of the art in ML.

He does go a little too far in this as well, though. He argues that our reliance on a "model" of our own behaviour implies that we need to believe that our modelled self is in control - in Freudian terms, we could say he is arguing that the existence of the ego necessitates its own illusion that it controls the id. The argument goes that if the self-model knew it was not in control,

"when asked to suggest how events might pan out, the self model would always answer with "I don't know, it would depend on the precise circumstances"."

This argument is shockingly shallow coming from a scientist with a rich history of probabilistic machine learning, who knows perfectly well how machines and natural agents can make informed predictions in uncertain circumstances!

I also find unsatisfactory the eagerness with which various dualisms are mapped onto one another. The most awkward is the mapping of "self-model vs self" onto Cartesian dualism (mind vs body); this mapping is a strong claim and needs to be argued for rather than asserted. It would also need to account for why such mind-body dualism is not universal, either across history or across cultures.

However, Lawrence is correct to argue that "sentience" of AI/ML is not the overriding concern in its role in our society; rather, its alignment or otherwise with our personal and collective goals, and its potential to undermine human democratic agency, is the prime issue of concern. This is a philosophical and a political issue, and one on which our discussion should continue.

| science |

Sneak preview: papers in special sessions on bioacoustics and machine listening

This season, I'm lead organiser for two special conference sessions on machine listening for bird/animal sound: EUSIPCO 2017 in Kos, Greece, and IBAC 2017 in Haridwar, India. I'm very happy to see the diverse selection of work that has been accepted for presentation - the diversity of the research itself, yes, but also the diversity of research groups and countries from which the work comes.

The official programmes haven't been announced yet, but as a sneak preview here are the titles of the accepted submissions, so you can see just how lively this research area has become!

Accepted talks for IBAC 2017 session on "Machine Learning Methods in Bioacoustics":

A two-step bird species classification approach using silence durations in song bouts

Automated Assessment of Bird Vocalisation Activity

Deep convolutional networks for avian flight call detection

Estimating animal acoustic diversity in tropical environments using unsupervised multiresolution analysis

JSI sound: a machine-learning tool in Orange for classification of diverse biosounds

Prospecting individual discrimination of maned wolves’ barks using wavelets

Accepted papers for EUSIPCO 2017 session on "Bird Audio Signal Processing":

(This session is co-organised with Yiannis Stylianou and Herve Glotin)

Stacked Convolutional and Recurrent Neural Networks for Bird Audio Detection (preprint)

Densely Connected CNNs for Bird Audio Detection (preprint)

Classification of Bird Song Syllables Using Wigner-Ville Ambiguity Function Cross-Terms

Convolutional Recurrent Neural Networks for Bird Audio Detection (preprint)

Joint Detection and Classification Convolutional Neural Network (JDC-CNN) on Weakly Labelled Bird Audio Data (BAD)

Rapid Bird Activity Detection Using Probabilistic Sequence Kernels

Automatic Frequency Feature Extraction for Bird Species Delimitation

Two Convolutional Neural Networks for Bird Detection in Audio Signals

Masked Non-negative Matrix Factorization for Bird Detection Using Weakly Labelled Data

Archetypal Analysis Based Sparse Convex Sequence Kernel for Bird Activity Detection

Automatic Detection of Bird Species from Audio Field Recordings Using HMM-based Modelling of Frequency Tracks

Please note: this is a PREVIEW - sometimes papers get withdrawn or plans change, so these lists should be considered provisional for now.

| science |

Jeremy Corbyn has already won

I'm writing this on the morning of the day of voting for the 2017 election.

Opinion polls are notorious here in the UK for having a complex relationship with reality. What I expect will happen is that the Tories will win, but with an embarrassingly modest lead. From the last election they had a working majority of 12 seats. The polls in April suggested a Labour wipe-out was on the cards, and unfortunately for Theresa May she grabbed that opportunity and then proceeded to throw it away: it's hard to see her doing anything but fall short of that early promise.

Theresa May called this election for entirely selfish reasons. She wanted her own mandate, yes, but she'd previously said it wasn't needed. She called the election, as she said herself, taking advantage of the moment to get herself a lovely big majority. It's highly likely that this gambit will fail and that she'll be back in a position rather similar to the starting position, in which case she'll have wasted two months of all of our time - and, crucially, two months out of the two-year time limit when she was supposed to be negotiating Brexit.

So even if the Tories win, Theresa May is likely to have failed badly. Jeremy Corbyn, however, has defied the expectations of the pundits and built up organic support for Labour. I was sceptical about him and in particular about his election strategy, but it seems really to have worked, and he's shown himself to be a much better leader than many of us thought. Will his parliamentary party finally get behind him after the election? We shall see.

There's another thing we can thank May-vs-Corbyn for. Putting aside for the moment differences of policy, this election seems to me to be a victory for

  • talking principles, rather than soundbites;
  • going out and engaging with people, rather than hiding and stage-managing.

And it's been the first election in my adult life in which the two big parties have actually represented a meaningful choice of two options. In previous years, New Labour and the Tories may have come from different stock but their political visions were so close as to be redundant. Corbyn's Labour have offered not just a coherent vision, but a genuine alternative. I don't expect them to be able to win, but given that they're fighting uphill against a hell of an onslaught of negative media, it's been heartening to see their principled and engaged version of political campaigning reap massive rewards, building a swing from 25% up to almost 40% (that's according to voting-intention polls). I don't consider myself a Labour voter but Corbyn's made it plausible to consider that a possibility.

I had expected this election to be dispiriting but it has been heartening.

| politics |

Computing for the future of the planet: the digital commons

On Wednesday we had a "flagship seminar" from Prof Andy Hopper on Computing for the future of the planet. How can computing help in the quest for sustainability of the planet and humanity?

Lots of food for thought in the talk. I was surprised to come out with a completely different take-home message than I'd expected - and a different take-home message than I think the speaker had in mind too. I'll come back to that in a second.

Some of the themes he discussed:

  • Green computing. This is pretty familiar: how can computing be less wasteful? Low-energy systems, improving the efficiency of computer chips, that kind of thing. A good recent example is how DeepMind used machine learning to reduce the cooling requirements of a Google data centre by 40%. 40% reductions are really rare. Hopper also gave a nice example of "free lunch" computing - the idea is that energy is going unused somewhere out there in the world (a remote patch of the sea, for example) so if you stick a renewable energy generator and a server farm there, you essentially get your computation done at no resource cost.
  • Computing for green, i.e. using computation to help us do things in a more sustainable way. Hopper gave a slightly odd example of high-tech monitoring that improved efficiency of manufacturing in a car factory; not very clear to me that this is a particularly generalisable example. How about this much better example? Open source geospatial maps and cheap new tools improve farming in Africa. "Aerial drones, crowds of folks gathering soil samples and new analysis techniques combine as pieces in digital maps that improve crop yields on African farms. The Africa Soil Information Service is a mapping effort halfway through its 15-year timeline. Its goal is to publish dynamic digital maps of all of Sub-Saharan Africa at a resolution high enough to serve farmers with small plots. The maps will be dynamic because AfSIS is training people now to continue the work and update the maps." - based on crowdsourced and other data, machine-learning techniques are used to create a complete picture of soil characteristics, and can be used to predict where's good to farm what, what irrigation is needed, etc.

Then Hopper also talked about replacing physical activities by digital activities (e.g. shopping), and this led him on to the topic of the Internet, worldwide sharing of information and so on. He argued (correctly) that a lot of these developments will benefit the low-income countries even though they were essentially made by-and-for richer countries - and also that there's nothing patronising in this: we're not "developing" other countries to be like us, we're just sharing things, and whatever innovations come out of African countries (for example) might have been enabled by (e.g.) the Internet without anyone losing their own self-determination.

Hopper called this "wealth by proxy"... but it doesn't have to be as mystifying as that. It's a well-known idea called the commons.

The name "commons" originates from having a patch of land which was shared by all villagers, and that makes it a perfect term for what we're considering now. In the digital world the idea was taken up by the free software movement and open culture such as Creative Commons licensing. But it's wider than that. In computing, the commons consists of the physical fabric of the Internet, of the public standards that make the World Wide Web and other Internet actually work (http, smtp, tcp/ip), of public domain data generated by governments, of the Linux and Android operating systems, of open web browsers such as Firefox, of open collaborative knowledge-bases like Wikipedia and OpenStreetMap. It consists of projects like the Internet Archive, selflessly preserving digital content and acting as the web's long-term memory. It consists of the GPS global positioning system, invented and maintained (as are many things) by the US military, but now being complemented by Russia's GloNass and the EU's Galileo.

All of those are things you can use at no cost, and which anyone can use as bedrock for some piece of data analysis, some business idea, some piece of art, including a million opportunities for making a positive contribution to sustainability. It's an unbelievable wealth, when you stop to think about it, an amazing collection of achievements.

The surprising take-home lesson for me was: for sustainable computing for the future of the planet, we must protect and extend the digital commons. This is particularly surprising to me because the challenges here are really societal, at least as much as they are computational.

There's more we can add to the commons; and worse, the commons is often under threat of encroachment. Take the Internet and World Wide Web: it's increasingly becoming centralised into the control of a few companies (Facebook, Amazon) which is bad news generally, but also presents a practical systemic risk. This was seen recently when Amazon's AWS service suffered an outage. AWS powers so many of the commercial and non-commercial websites online that this one outage took down a massive chunk of the digital world. As another example, I recently had problems when Google's "ReCAPTCHA" system locked me out for a while - so many websites use ReCAPTCHA to confirm that there's a real human filling in a form, that if ReCAPTCHA decides to give you the cold shoulder then you instantly lose access to a weird random sample of services, some of which may be important to you.

Another big issue is net neutrality. "Net neutrality is like free speech" and it repeatedly comes under threat.

Those examples are not green-related in themselves, but they illustrate that out of the components of the commons I've listed, the basic connectivity offered by the Internet/WWW is the thing that is, surprisingly, perhaps the flakiest and most in need of defence. Without a thriving and open internet, how do we join the dots of all the other things?

But onto the positive. What more can we add to this commons? Take the African soil-sensing example. Shouldn't the world have a free, public stream of such land use data, for the whole planet? The question, of course, is who would pay for it. That's a social and political question. Here in the UK I can bring the question even further down to the everyday. The UK's official database of addresses (the Postcode Address File) was... ahem... sold off privately in 2013. This is a core piece of our information infrastructure, and the government - against a lot of advice - decided to sell it as part of privatisation, rather than make it open. Related is the UK Land Registry data (i.e. who owns what parcel of land) which is not published as open data but is stuck behind a pay-wall, all very inconvenient for data analysis, investigative journalism etc.

We need to add this kind of data to the commons so that society can benefit. In green terms, geospatial data is quite clearly raw material for clever green computing of the future, to do good things like land management, intelligent routing, resource allocation, and all kinds of things I can't yet imagine.

As citizens and computerists, what can we do?

  1. We can defend the free and open internet. Defend net neutrality. Support groups like the Mozilla Foundation.
  2. Support open initiatives such as Wikipedia (and the Wikimedia Foundation), OpenStreetMap, and the Internet Archive. Join a local Missing Maps party!
  3. Demand that your government does open data, and properly. It's a win-win - forget the old mindset of "why should we give away data that we've paid for" - open data leads to widespread economic benefits, as is well documented.
  4. Work towards filling the digital commons with ace opportunities for people and for computing. For example satellite sensing, as I've mentioned. And there's going to be lots of drones buzzing around us collecting data in the coming years; let's pool that intelligence and put it to good use.

If we get this right, 20 years from now our society's computing will be green as anything, not just because it's powered by innocent renewable energy but because it can potentially be a massive net benefit - data-mining and inference to help us live well on a light footprint. To do that we need a healthy digital commons which will underpin many of the great innovations that will spring up everywhere.

| IT |

Roast squash, halloumi and pine nuts with asparagus

This was gorgeous. I hadn't realised that the sweet butternut and the salty halloumi would play so well off each other.

Serves 2, takes 45 minutes overall but with a big gap in the middle.

  • 1/2 a butternut squash
  • 1 sprig rosemary
  • 2 cloves garlic
  • olive oil
  • a generous …
| recipes |

The Economist shopping list for UK work

I don't always agree with The Economist magazine but it's interesting. It thinks bigger than many of the things you can buy on an average news stand. The current issue has an article about Britain and Marx, which happens to end with a clear and laudable shopping-list of things that …

| politics |

Another failed attempt to discredit the vegans

People love to take the vegans down a peg or two. I guess they must unconsciously agree that the vegans are basically correct and doing the right thing, hence the defensive mud-slinging.

There's a bullshit article "Being vegan isn’t as good for humanity as you think". Like many bullshit …

| science |

Asparagus and chestnut risotto

It's asparagus season, plus I have a half-used packet of ready-cooked chestnuts. Wait a moment - maybe those flavours can come together over a risotto. Yes they can.

Note: I would have started with some leek or onion to help get things going - if I'd had some.

Quantities are to serve …

| recipes |