My main work tools are text and graphics. Graphics for use in presentations, and text as the input for everything I process and think about.

I read a lot. Partly because it’s my job, and partly because I just enjoy it. But most of the time, when I read, I don’t just read, and I suspect most of you don’t either. It turns out it’s surprisingly hard to focus only on the text and not let your thoughts wander off on a tangent.

Because that’s what we usually do best: associations, memories, context. When I read, I work through the context and the associations of what I read. When I read about changes to the German Feed-In Tariffs, I’m mentally calculating potential upsides and downsides. When I read about a new IoT project, I look for similarities and differences to related projects. That’s how I work, and I’m beginning to suspect I’m not the only one.

Last week, at a conference called FinHTML5, I was lucky enough to be the opening speaker, which meant I didn’t have to spend the day nervously running through my slide deck and presentation all over again, but could actually be present. And so I could watch the impressive Andrew Betts talk about the work he’s done for the FT. He made a lot of valid points about how the reading experience on the web is in many ways suboptimal, especially on devices that have no reliable and persistent connectivity. And then he said the following:

“…the web was built for a publishing use case, and if it can’t do that well, that’s a pretty bad indictment of the web”

@stephanierieger

While I don’t necessarily agree that the web was built for the publishing use case, Andy has a point here.

Interacting with digital text is incredibly cumbersome.

We haven’t yet managed to get anywhere near the ease of use that traditional printed text gives us, and we haven’t yet begun to explore the advantages of digital that go beyond immediacy and ‘weightlessness’. Simple stuff that for me is absolutely essential, stuff like highlighting and annotating, remains essentially unsolved. As Fabian, a good friend of mine, quipped over coffee the other day: for highlighting, there’s still nothing that beats paper and markers.

Highlight markers “Screw Digital” by Fabian Mürmann

There’s a couple of apps that try to solve this. I use Readmill quite religiously. They have a beautiful iPad app and so far seem to be the most convenient and easiest-to-use way to highlight and annotate text in ebooks. That is, unless the ebook you want to read is only available from Amazon in its proprietary format, or as an epub with non-Adobe DRM. Talking of Amazon: I have a Kindle 3, which I loved because for just quick reading it’s amazing. But highlighting? Forget about it. Also forget about consuming any content on that device that wasn’t bought from Amazon. Leaving aside the bad UX of highlighting on the device itself, for non-Amazon titles you don’t even get a central repository of your highlights. And that’s just books we’re talking about.

Moving on to the web, it only gets more bizarre. There’s Quote.FM, which I loved to use, but it recently lost me because I feel the software is too opinionated. Sharing only one highlight out of a piece of text is simply not my use case; often I need to highlight multiple passages. It also doesn’t integrate easily with another service I rely on heavily, Instapaper. That’s where the majority of news items, analyses, blog posts and other assorted texts reside and wait for the next free 30 minutes to be worked through. Highlights there get shared to Tumblr, because unfortunately that seems to be the only place Instapaper lets me reliably push text to.

These are the pains I have working with books and online text. I don’t even want to start imagining the pains academics and specialised professions have to go through.

Felix Salmon recently had an interesting piece up. Coming out of DLD13 he asked: “Are annotations the new comments?” It seems to me that it is slowly seeping into our collective consciousness that the way we work with digital text is a complete mess.

It’s 2013 and digital text still isn’t solved. Can someone fix that, please?

It’s that time of the year when everybody publishes their accounts of what happened last year. Victories and defeats, achievements and the way there, wrapped up and nicely tinted by the holidays, which after all gave some time for consideration and thought. So here goes mine:

2012 has been one hell of a ride. This time last year I was longing for some change, but little did I know it would come crashing down on me. 2012 has been a turbulent year of losses, falling-downs and getting-back-ups, soul-searching, reinvention and pushing forward.

In 2012 I lost my Granddad, who died after a short illness in his hospital bed one early morning, and to this day I am glad to have visited him one last time a week before he passed away. I also lost a dear friend and mentor to cancer this summer: a good-humoured friend who essentially talked me into moving to Berlin and encouraged me to take on more risks. He always told me not to become “middle-aged before my time”, and he was right. Phil, you will be missed.

2012 also marks the biggest change in my professional life so far. I was made redundant in late February, and after some soul-searching (I hadn’t been without a job since I turned 14; being without one was unknown territory) I turned freelance consultant later in the year. So far, it’s working out alright.

Getting Up

There’s a proverb that says “failure is not in falling down, failure is in not getting up again.” I’m extremely grateful for my friends who helped me get up after being let go. I don’t think I would have dared to go freelance if it wasn’t for all the wonderful people in Berlin who encouraged me to do so. If anyone is looking for reasons why Berlin is so hot and trending right now, it is those people: people who’ve been around the block, who’ve done the odd thing or other, and who can help with experience and encouragement. The worst thing that could’ve happened to me would’ve been to still be in Heidelberg, where losing one’s job would’ve spelled disaster in the eyes of friends and family.

Taking on Challenges

And so I’m now running my own business. Which is challenging in its own right, as anyone who runs their own business can attest. You have to learn all the little things: how setting it all up even works, how to invoice and handle taxes correctly, how to get clients.

And you have to decide which jobs to work on. I like to take on jobs that venture into something new. I’m a consultant helping clients make sense of the Internet of Things, so that’s a natural fit. But nobody prepares you for the dread of doing things for the first time.

Everybody is doing things for the first time all the time, or so one would hope. But it leads to a sense of constant uneasiness. What if I’m not as good as I think I am? What if the client doesn’t like the work? What if I overstretch on this assignment? It takes a certain attitude to cope with this; in jargon you would probably call it a risk profile. But the truth is: if you’re constantly pushing out of your comfort zone to pursue interesting projects and clients, well, you just won’t be comfortable a lot of the time.

And that has been the biggest learning for me in 2012. Learning to live with the uneasiness, trusting that I’m oftentimes more capable than I think I am, taking on the interesting projects which I’m not 100% comfortable with.

So here comes 2013. And this one’s going to be a doozie.


PS: The media is hysterical about 2013 being the year of the Internet of Things. If you actually want to make it so, maybe we should talk.

Katie Fehrenbacher writing for GigaOm:

Nest’s learning thermostats have collectively saved over 200 million kilowatt hours of energy since they were launched back in October 2011, says the startup. That’s about the equivalent amount of energy to power the Empire State building for four years, and which Nest says is a figure that blew their minds when they calculated it. […]

In comparison software energy leader Opower, which processes data from more than 50 million homes, says it’s saved over 1.6 billion kilowatt hours of energy, or enough energy to power the Empire State Building for more than 33 years. But Opower saves much smaller energy percentages per home — or on average about 1.5 to 3.5 percent reduction on an energy bill.

GigaOm

The numbers are interesting given the limited roll-out both services have achieved so far. Further, both services are just first iterations of a larger trend that will increasingly apply data-driven insights to questions of efficiency.
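As a quick back-of-the-envelope check (a minimal sketch in Python, using only the figures quoted above), both claims imply roughly the same annual consumption for the Empire State Building, so at least the two comparisons are internally consistent:

```python
# Sanity check of the figures quoted above (numbers from the GigaOm piece).
nest_kwh = 200e6          # Nest: ~200 million kWh saved since October 2011
nest_esb_years = 4        # quoted as ~4 years of Empire State Building consumption

opower_kwh = 1.6e9        # Opower: ~1.6 billion kWh saved
opower_esb_years = 33     # quoted as ~33 years of Empire State Building consumption

# Implied annual consumption of the Empire State Building under each claim:
print(nest_kwh / nest_esb_years)       # ~50 million kWh/year
print(opower_kwh / opower_esb_years)   # ~48 million kWh/year, roughly the same figure
```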

We’re only just starting to see what connected devices truly mean for energy.

I’m reading a very interesting book at the moment: “Thinking, Fast and Slow” by psychologist Daniel Kahneman. It deals with the persistent biases in our thinking (others would say: our mental models), which arise because we have no intuitive feel for complexity, randomness and statistics in general, but automatically try to construct a good story that makes causal sense.

Back in Heidelberg, I did two terms of Introductory Psychology and Statistics for Psychologists, and although I wasn’t very good at Psych 101, stats for psychologists left me with a solid understanding that there is almost never a clean causality in anything we research, and that one should always look for artefacts that could artificially inflate correlation coefficients.

I’m enjoying Kahneman’s book quite a bit, as it allows me to refresh fundamental stats knowledge and re-experience how stats and intuition so often clash. Also, it’s a tremendously good read. How can you not love writing like this?

We are pattern seekers, believers in a coherent world, in which regularities (such as a sequence of six girls) appear not by accident but as a result of mechanical causality or of someone’s intention. We do not expect to see regularity produced by a random process, and when we detect what appears to be a rule, we quickly reject the idea that the process is truly random.
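Kahneman’s example of a sequence of six girls is easy to make concrete. A minimal sketch (my own, not from the book): a “regular-looking” birth sequence is exactly as probable as any specific “random-looking” one, which is precisely why our pattern-seeking instinct misleads us.

```python
import random

# A "regular" sequence like GGGGGG is exactly as probable as a "random-looking"
# one like GBBGBG: each is one specific outcome out of 2**6 = 64.
def frequency(target, trials=200_000):
    hits = sum(
        "".join(random.choice("GB") for _ in range(6)) == target
        for _ in range(trials)
    )
    return hits / trials

print(frequency("GGGGGG"))  # ~0.0156
print(frequency("GBBGBG"))  # ~0.0156, the same, even though it "looks" random
```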

I’m quite glad that with Kahneman and Nate Silver, stats is making its way back into “pop science”, if such a thing exists, and that people tend to give numbers and probabilities a bit more weight relative to “intuition.”

And then I’m reading a post called “How to get Startup Ideas” by Y Combinator founder Paul Graham, and I’m baffled, as I see my timelines swelling with people recommending the piece.

Empirically, the way to have good startup ideas is to become the sort of person who has them.

What? How would you even test that, empirically?

What baffles me is that the people gushing over this piece, which is riddled with confirmation bias and 20/20 hindsight, are the same people who regularly profess to love Nate Silver and to hate election-cycle punditry. But this is exactly the same thing, albeit in a slightly different arena.

The idea that the future is unpredictable is undermined every day by the ease with which the past is explained. (Kahneman)

Yes, we all love startups which solve a problem that actually exists, and which have some kind of organic discovery behind them. But then the question becomes not only whether that organic discovery actually happened or whether it just makes for a good story. The bigger point is: where’s the data? Is there any indication that startups in general perform better when conceived this way?

The amount of evidence and its quality do not count for much, because poor evidence can make a very good story. (Kahneman)

Yes, Apple, Microsoft, Google: they were all breakout successes that did not originally start as companies. So it’s obvious that we look for patterns they might share. However, I’m not sure this has a lot of predictive value, not only because circumstances change, but because we tend to discount all those cases that essentially shared the same patterns but did not become breakout successes.

I’m as much a critic of the startup fad as Paul is in his piece. I’m annoyed by what I call the MBA-driven startup scene in Berlin, which goes for what it thinks might be a good market rather than building a product the founders are genuinely excited about. However, I readily acknowledge that the latter is an investment thesis which so far has failed here, where the runaway successes have been rehashes of proven business models.

An organically conceived startup certainly makes a good story. I wouldn’t be sure it makes the better investment.

When William Gladstone, a four-time Prime Minister of Britain in the 19th century and, as one might judge now, an all-round pedant, decided to count the occurrences of different colours in Homer’s epics, the Iliad and the Odyssey, he made a striking discovery: there is no mention of the colour blue anywhere in the over 28,000 verses of the combined works. Instead, objects that are traditionally thought of as blue are referred to as dark shades of colours close in wavelength.

An experiment with tribal populations in Africa sought to find out what people see when they have no word for the colour blue. It turns out that blue then becomes hard to distinguish from similar colours.

Both of these stories appeared in an episode of RadioLab that aired earlier this year, and serve as a powerful reminder of how the words we have and use influence and determine our thinking about the world.

I stumbled over this while preparing one of my latest talks about the Internet of Things. The theme of my talk was to be “Social Objects”1, and to me it was quite clear that this means something different from the classical interpretations of the Internet of Things. Because our current models of thinking about the Internet of Things are anything but “social”.

For me — and I think I was quite clear about this in the panel after my talk — the Internet of Things signifies a kind of 1992 moment. Massive improvements of wildly divergent strands of technology come together and form something categorically new. However, we’re not really equipped to cope with the new, and hence we package it in terms of what we already understand.

Take for instance this amazing ad for the X10 Powerhouse, a home-automation system, published in 1985 and written for the C64.2

Ad for the X10 Powerhouse

I find this ad so endearing, not because it shows how far the thinking in the 80s already was, but because it showcases that our thinking around networked objects and the instrumentation of the world fundamentally hasn’t changed in the last 30 years.

Metaphors are how the brain ‘sees,’ and thus when we lack metaphors for the ___ [future/present/data/wisdom] in front of us, it is invisible. — @kylecameron

That’s the reason why I’m so keen to see what happens in our world right now, in analogy to 1992. When the Web was developed, the only way we could understand it was as a better document access and retrieval system, much like the early home and personal computers were understood as better typewriters and calculators. It took an enormous effort of testing out the possibilities of the new technological tools. Tinkering and building and exploring were fundamental in bringing the web to where it is today. But even more important was the sense of community and the sharing of the findings of that exploration, as this necessitated the building of a shared vocabulary, of commonly understood metaphors, which spread outwards. And with each accepted metaphor, the groundwork for further exploration was laid.

To define the future, we have to shape the language, the terminology, the concepts, the metaphors with which we talk about the future. If we have no metaphors for the future, it might as well not exist. That’s why we’re confounded by the lack of Capital-F Future: the ideas that usually resided under the umbrella of Capital-F Future nowadays smack of 80s retro-futurism.

We’re only slowly coming to terms with the world of networked thought; we’re only slowly building the corpus of metaphors to cope with pocket computers that, for acceptance’s sake, we still call phones, and with the opening up of knowledge. However, if we want to build the future of connected devices, of a meaningful, open Internet of Things, we need to tinker and explore, and share the results of our explorations, in the hope of building a shared language, a body of metaphors, to shape the future.

The notion that the future is already here, just unevenly distributed, ultimately refers not only to the technology we have at hand; it asserts itself just as firmly in the language we use.


  1. And in that it had nothing to do with the locus of social interaction, as framed by Jyri Engeström

  2. Thanks again for that find to Claire Rowland

There’s a term that has been getting me increasingly annoyed lately. The term is “Digital Immigrants”, which is usually used to draw a line between them and the so-called “Digital Natives.”

Think what you want about “Digital Natives.” I don’t like it because of its imprecision, its framing of a quality as a function of age rather than a function of usage.1

I don’t like “Digital Immigrants” for a wholly different reason: it is mostly used as a line of defense by people who do not want to engage with the Internet, or digital technology, in their lives. The term “digital immigrant” implicitly gives them permission to not have to put in effort to learn this medium, and makes their inability to grasp its effects or even just how to use it not a failure of their own or the engagement they have with the medium, but a function of their early birth.

It gives them an easy out.

“Please forgive my inability to correctly send an email, but, you see, I’m a digital immigrant,” they say, without even a hint of irony.

When you encounter people referring to themselves as “Digital Immigrants”, proceed with caution.

Chances are they’re not “Immigrants”, just ignorant.


  1. Which in turn correlates with age, but age here is a corollary, not the primary determinant. 

Nick Rowe:

Everybody knows that if you press down on the gas pedal the car goes faster, other things equal, right? And everybody knows that if a car is going uphill the car goes slower, other things equal, right?

But suppose you were someone who didn’t know those two things. And you were a passenger in a car watching the driver trying to keep a constant speed on a hilly road. You would see the gas pedal going up and down. You would see the car going downhill and uphill. But if the driver were skilled, and the car powerful enough, you would see the speed stay constant.

So, if you were simply looking at this particular “data generating process”, you could easily conclude: “Look! The position of the gas pedal has no effect on the speed!”; and “Look! Whether the car is going uphill or downhill has no effect on the speed!”; and “All you guys who think that gas pedals and hills affect speed are wrong!”

And:

If the driver is doing his job right, and correctly adjusting the gas pedal to the hills, you should find zero correlation between gas pedal and speed, and zero correlation between hills and speed. Any fluctuations in speed should be uncorrelated with anything the driver can see. They are the driver’s forecast errors, because he can’t see gusts of headwinds coming. And if you do find a correlation between gas pedal and speed, that correlation could go either way. A driver who over-estimates the power of his engine, or who under-estimates the effects of hills, will create a correlation between gas pedal and speed with the “wrong” sign. He presses the gas pedal down going uphill, but not enough, and the speed drops.

While these ideas are nominally about monetary and fiscal policy, I can’t help but wonder whether there’s some “big data stuff” in there as well…
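Rowe’s thought experiment is easy to simulate. Here’s a minimal sketch with made-up numbers (my own, not Rowe’s): a driver who perfectly offsets the hills produces roughly zero correlation between pedal position and speed, and between hills and speed, exactly as he describes.

```python
import random

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 10_000
hills = [random.uniform(-1, 1) for _ in range(n)]   # slope of the road
wind = [random.gauss(0, 0.1) for _ in range(n)]     # gusts the driver can't see
pedal = list(hills)                                  # the driver perfectly compensates for hills
speed = [100 + p - h - w for p, h, w in zip(pedal, hills, wind)]

print(correlation(pedal, speed))   # ~0: the pedal appears to have "no effect"
print(correlation(hills, speed))   # ~0: the hills appear to have "no effect"
```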

Yeah, I know, another post about the iPhone 5. But we’re going to look at another angle here.

I’ve been trying to get a handle on the apparent disinterest of the early-adopter crowd in the latest iPhone announcement.1 And while talking about that over beers with a good friend, a thought occurred to me: Mobile computing is essentially solved.

Maybe that’s what all the critics of the patent decision in Samsung vs. Apple are trying to get at when they talk about how the design decisions are obvious — we’ve arrived at large demographics having easy and ready access to powerful mobile computers and a visual and interaction language that is still evolving somewhat, but is readily intelligible to many.

The standard assumption now is that everyone you interact with has ready access to communication and computation devices, mostly in the form of what we still oddly call “phones”. Not constantly carrying around your smartphone is unimaginable to many, and can at most be an ironic expression of a certain quirkiness.

In short: smart phones, and by that I really mean ultra-mobile computers, have arrived. Done.

And maybe that’s why those nagging voices who expect a new “Revolution” from Apple are so persistently annoying: because they kinda have a point, albeit from a completely skewed vantage point.

Revolutions in technology, although happening way more frequently than in any other industry, still only happen every so often. The real question is: what’s the next frontier?

If we assume mobile computation to be solved, then the focus should be on what can happen around mobile computation. Sure, payments is big on the hype cycle,2 cars are going to profit hugely from the connectivity provided by those mobile computers, and we see a huge uptick in hobbyist sports applications.

However, we might want to look further, at more interesting corollaries of mobile computation.


  1. And really, my own disinterest in it. 

  2. And with what Square does, rightfully so. 

Quentin Hardy for the NYT, quoting Tony Fadell of Nest Labs:

“We can gather all that data, mix it with other data we store in the cloud, and push different algorithms to different houses to see how people react.”

That approach, continually testing one feature against another and going with the one that consumers respond to best, is called A/B testing when done with Internet software. It is how Google and others make their products. As more physical objects fill up with software and develop two-way interactions with the network, Mr. Fadell says, they can be developed the same way.

We’re going to see a lot of this, with all the upsides and downsides. I wonder whether traditional manufacturers are ready for it…
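For anyone less familiar with the mechanics, here’s a minimal sketch of the A/B testing idea applied to devices. Everything in it is hypothetical (made-up household IDs, variants and numbers, nothing from Nest): randomly assign households to one of two algorithm variants, collect a response metric, and roll out whichever variant performs better.

```python
import random

random.seed(1)

# Hypothetical example: split households randomly across two algorithm variants.
households = [f"home-{i}" for i in range(1000)]
assignment = {h: random.choice(["A", "B"]) for h in households}

def observed_saving(variant):
    # Stand-in for real telemetry; variant "B" saves slightly more on average.
    return random.gauss(1.2 if variant == "B" else 1.0, 0.5)

savings = {"A": [], "B": []}
for home in households:
    savings[assignment[home]].append(observed_saving(assignment[home]))

mean = lambda xs: sum(xs) / len(xs)
print("A:", round(mean(savings["A"]), 2), "B:", round(mean(savings["B"]), 2))
# Ship the variant with the better measured response to everyone.
```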

Bear with me, this is still very rough thinking…

Prisoner’s Dilemma

At uni, one of the more impressive encounters with political theory was the International Relations 101 course, especially the intro to Neorealism, which at its heart is an adaptation of the Prisoner’s Dilemma to the circus of International Relations.

The Prisoner’s Dilemma was developed by researchers at RAND1 in the 1950s. It was meant to give a theoretical frame to situations where individual actors act rationally, but where that rationality leads to outcomes which are not the optimal outcomes possible in the given system2, and it was used extensively to explain the arms race during the Cold War.

What is important is that the Prisoner’s Dilemma assumes that the players do not or cannot communicate. They are left in the dark about the intentions of the other party and, left to their own devices, pursue ego-rational strategies. The non-communication, and hence the uncertainty about the intent of the counterparty, are key.
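To make the structure concrete, here’s a minimal sketch of the canonical payoff matrix (standard textbook numbers, not tied to any particular source): without communication, defecting is each player’s best response to anything the other might do, so both end up worse off than if they had cooperated.

```python
# Payoffs as years in prison for (me, other); lower is better for each player.
payoffs = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_response(opponent_move):
    # Each player minimises their own sentence against a fixed opponent move.
    return min(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# Both defect and land on (2, 2), even though (1, 1) was available to them.
```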

Signalling Theory

What the Prisoner’s Dilemma makes explicit is an implicit assumption that economists make about human nature and about a multitude of very diverse interactions.

We’re fundamentally ignorant3 of others’ intentions, so we have to rely on sending out signals about our intentions and status, and on interpreting those signals4. This is nothing new, nor extraordinary, but it’s often good to remind ourselves of this fact, especially since neglecting it can lead to worrisome results. For instance, when the UK residential solar photovoltaics programme stalled and needed jumpstarting, many municipalities decided to equip council housing with PV installations. This had an adverse effect on total PV installations, however, as a PV installation now seemed to signal lower-income council housing.

Now, we have plenty of mechanisms, and supporting economic theory, about interactions between actors based on this kind of signalling. Price is essentially a function of signalling, as can be a university degree. And, as an episode of the Freakonomics podcast explains, signalling can account for the difference in sales of hybrid automobiles.

However, economics looks at signalling only at the level of human interaction and intent. Biology looks at signalling through a lens that’s labelled evolution. But in a world increasingly riddled with algorithmic actors, we have to ask: do they engage in some kind of signalling?

Efficiencies and disturbances in algorithmic environments

The main advantage of technical systems is that communication is cheap and explicit. Systems that can “talk” to each other usually do so explicitly, without much need for interpretation on the recipient system’s part. That makes a lot of things much easier. Take driving, for instance. There are a lot of interpretation games going on while driving, as you, the driver, try to anticipate the actions other drivers around you are about to take. The situation is further complicated by every other driver trying to anticipate the actions of the drivers around them, all while nobody has any sort of complete or reliable information about the current flow of traffic they’re in.

Imagine machine communication added to that mix, where every car broadcasts its current status. This broadcasting of information makes it much easier to gain an informational overview of traffic and reduces the need for continuous interpolation and interpretation, because the current status of the traffic around a driver has a massive effect on the driver’s option space.
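A purely hypothetical sketch of what such “honest broadcasting” might look like (the message fields and values are invented for illustration, not any real vehicle-to-vehicle protocol): each car sends an explicit status message, and a receiver aggregates them into a picture of the surrounding traffic instead of guessing from behaviour.

```python
from dataclasses import dataclass

@dataclass
class CarStatus:
    car_id: str
    speed_kmh: float
    braking: bool
    indicating: str  # "left", "right" or "none"

def traffic_picture(broadcasts):
    # Aggregate explicit broadcasts instead of inferring intent from behaviour.
    return {
        "cars": len(broadcasts),
        "mean_speed_kmh": sum(b.speed_kmh for b in broadcasts) / len(broadcasts),
        "anyone_braking": any(b.braking for b in broadcasts),
    }

nearby = [
    CarStatus("B-XY 123", 78.0, False, "none"),
    CarStatus("B-AB 456", 42.0, True, "right"),
]
print(traffic_picture(nearby))
```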

However, this assumes that the cars in this example use “honest broadcasting”, i.e. that no driver has anything to gain from fudging the machine’s signals. What happens, though, when there is a strong incentive for machines to “lie”? The example that immediately comes to mind is, of course, algorithmic trading. With the flash crash and the recent Knight Capital disaster5 there has been plenty of writing about the role of algorithms in complex, multivariate systems, and about how humans are no longer capable of deciphering the decisions those algorithms make. These pieces usually conclude by extrapolating from the financial markets and painting a gloomy picture of a world where algorithms take over, incomprehensible to us feeble humans.

What those arguments usually forget, however, is that it’s us humans who have written those algorithms to engage in a very specific task: interpreting human signals, which are often ambiguous and contradictory.

If machines are to take over larger and larger parts of our everyday lives, we had better make them work with explicit information rather than inferred signal interpretation. That way, they can solve huge problems.


  1. RAND Corp. is a think tank that spun out of a US Air Force research project. Some claim it’s “the original think tank.” 

  2. For a detailed account of the Prisoner’s Dilemma, please check the comprehensive Wikipedia entry. 

  3. This is really just a really fancy way of saying that we just don’t know… 

  4. The easiest intro to signalling theory I’ve so far encountered is an Episode of Freakonomics. Go check it out. 

  5. Flash Crash and Knight Capital