Dave Winer, on App.net:

What seems very unlikely to me is that the new thing will fall neatly in place on one company’s servers, or even with one company’s servers at the center. It’s never been that way. Going back to the previous examples, C and Unix shipped in source and were widely adopted. After a short period every system had a C implementation, with a lot of interop. There were huge numbers of PC clone makers that all ran each others’ software. Anyone could start a web server.

It seems to me IoT people should pay careful attention to the current discussions around App.net. There could be a lot to be learned about platforms, interop and adoption.

The Open Knowledge Foundation, announcing the OKF Lab:

Energy Data is theme for the first OKF Lab, taking place in Berlin 1-8 October, 2012, and bringing together a small team of coders, designers, data wranglers, technologists and policy experts. The theme is structured broadly to incorporate a wide range of sub-topics e.g. renewable energy resources and energy efficiency, fossil fuels and traditional energy structures, electricity demand and supply, government spending around energy policies as well as emissions from energy use in transport, industry, etc.

This looks very interesting, especially as the OKF folks have a bit of a track record in opening up data. My suggestion would be to get proper EU grid data in as close to real time as possible, especially demand curves, which tend to be thought of as closely guarded but are mostly hiding in plain sight, just in weird formats. Having the make-up of energy sources in close to real time would further make it possible to pipe that data through AMEE and get close-to-real-time emissions data.
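To sketch the idea (a minimal example with made-up numbers and a hypothetical input format, not actual AMEE or grid data): once you know the generation mix at a given moment, the grid's emissions intensity is just a weighted average over per-source emission factors.

```python
# Estimate grid carbon intensity from a near-real-time generation mix.
# Emission factors are rough lifecycle values in gCO2/kWh from the
# literature; the snapshot below is invented for illustration.
EMISSION_FACTORS = {
    "coal": 900,
    "gas": 450,
    "nuclear": 15,
    "wind": 10,
    "solar": 50,
}

def grid_intensity(mix_mw):
    """Return gCO2/kWh for a generation mix given as {source: MW}."""
    total_mw = sum(mix_mw.values())
    weighted = sum(EMISSION_FACTORS[src] * mw for src, mw in mix_mw.items())
    return weighted / total_mw

# Hypothetical midday snapshot of the German mix.
snapshot = {"coal": 25_000, "gas": 8_000, "nuclear": 9_000,
            "wind": 6_000, "solar": 12_000}
print(f"{grid_intensity(snapshot):.0f} gCO2/kWh")  # ~450 gCO2/kWh
```

Run that every few minutes against live grid data and you have a running emissions figure for the whole grid.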

It’s very good to see OKF doubling down on energy data after their successful Energy & Climate Hack Day back in March.

Recently, while researching for a story I was writing, I stumbled over a blog post by Cosm’s Ed Borden, in which he stated:

It never occurred to me that crowdsourcing radiation data from Geiger counters would be an application for [Cosm]!

This is pure gold. This statement captures at its core why it’s necessary to put products out there: because we ultimately have no idea how they are going to be used.

You can design all you want; the moment your product reaches its customers’ hands, they are going to decide if, and how, it fits into their lives. At a conference last week, Sami had a slide with a very compelling quote by William Gibson:

The strongest impacts of an emergent technology are always unanticipated. You can’t know what people are going to do until they get their hands on it and start using it on a daily basis[.]

However, there are two ways of dealing with that: you acknowledge that this is what happens and design for it, allowing for unintended use by the users of your product; or you deny that this is what happens, try to lock down your product, and do what is often dubbed “control the experience.”

The latter is a treacherous promise, however. You have no control over the circumstances in which your product is going to be used; how then can you even attempt to “control the experience,” to design for specific use?

There are countless examples of (especially) web-businesses realizing that their customers were doing things with their services that “never occurred to them.” There are so many examples, in fact, that the industry has even created a word for that realization and the subsequent re-alignment of business strategy: it’s called “pivoting.”1

There is a whole corpus of theories about how businesses should work with their customers to better the experience for all parties involved.

But at the core, this is the spirit of the web – with technologies openly available and reusable, we’re lowering the barrier to new applications, to new surprises, to people using stuff in the way it fits them, and more importantly: in a way they see fit.

“It never occurred to me” is the gold medal of technologies that enable: an expression of success, of new, unforeseen applications, and a manifestation of working on stuff that matters. Let’s hear it more often.


  1. Please bear in mind that most of the business manoeuvres that are called pivots aren’t in fact pivots but rather fundamental restarts of the business. 

An interesting discussion happened earlier this week, on the merits of free, when it comes to web-based (some say: “cloud”) software products. It started off with Fred Wilson’s In Defense of Free and immediately got countered by Dave Winer’s For-pay Services.

Both articles are worth reading. But what struck me was the very different views they take on the economics of business, a difference that seems to be a continual source of friction between hacker/engineer/designer types working on the web on the one hand, and business people on the other.

The first group likes to see, and favours, business models like Instapaper’s, of which Marco Arment says:

I sell an app for money, then I spend less than I make.

This is what I’d like to call cost-driven thinking in economics. It’s intuitive and easy to understand.

But it doesn’t get you very far when discussing value creation. And of course, Fred is right when he asks why users should pay for a service for which they are the primary value creators. And it’s in value that most of traditional economic theory takes its origin. Value, even just perceived value (some might argue: most importantly the perceived value), dictates all other subsequent mechanisms. Value is what makes the demand curve work. For a quick primer on this, please check out Chris Dixon’s recent piece.
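To illustrate with a toy model (all numbers invented): assume a linear demand curve and near-zero marginal cost, which is roughly the situation for web software. The profit-maximizing price then depends only on the demand parameters, i.e. on perceived value; the sunk cost of building the product doesn’t enter into it at all.

```python
# Toy pricing model: demand Q(p) = a - b*p, profit = (p - c) * Q(p).
# Setting the derivative to zero gives p* = (a/b + c) / 2, so with
# marginal cost c ~= 0 the price is driven entirely by demand.
def optimal_price(a: float, b: float, marginal_cost: float = 0.0) -> float:
    """Profit-maximizing price for linear demand Q(p) = a - b*p."""
    return (a / b + marginal_cost) / 2

# 10,000 potential users; demand drops by 2,000 users per dollar.
print(optimal_price(a=10_000, b=2_000))                     # 2.5
print(optimal_price(a=10_000, b=2_000, marginal_cost=0.5))  # 2.75
```

Note what’s absent: the cost of development never appears. That’s the gap between cost-driven and value-driven thinking in a nutshell.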

So why am I blogging this? I think it’s important to point out that people, although they think they’re talking about the same thing, often really aren’t.

People without a background in economics tend to fall into the camp of “cost-driven economics,” and argue along those lines. However, as Chris observed:

Situations where cost and price have zero or negative correlation are far more common than most people assume.

People with a background in economics, obviously, think more along the lines of market segmentation, value creation and capture, and if they’re of the good, web-friendly kind, they’ll try to “create more value than they capture.”

Of course even web-based software incurs costs, but thinking only in those terms will lead to under-adoption. Thinking only in terms of value generation and capture, however, under-accounts for the very real, and growing, concerns about ownership and longevity of services.

Alexis Madrigal, over at The Atlantic, points to the rather interesting concurrence of horse riding and acceptance of pants as garments.

What all these examples suggest is that technological systems — cavalry, bicycling — sometimes require massive alterations in a society’s culture before they can truly become functional.

Interesting here are both the swipe at conservatives, who decried the new dress codes women adopted to account for bike riding, and the broader point that social norms have to shift to make a technology useful. We’re starting to see the first alterations in our social behaviour to adjust to the net. This seems especially interesting in light of recent discussions around the “Manufactured Normalcy Field.”

John Lilly, of Greylock Partners, observes how his iPhone home screen changes. Notably, it now includes a “Home” folder:

The other folder is my “Home” folder — it’s all the apps that monitor & control things in my house. Nest, Sonos, Comcast, things like that. Feels to me like we’re right on the verge of an avalanche of things like this — they’ll be different when you’re at your house, at your office, and on the road — but they’re apps that help you understand your environment and interact with it.

This is part of my nightmare of home automation and the Internet of Things: a lack of interoperability is leading to a plethora of vendor-specific applications. As a customer, I then have to remember that to talk to my thermostat, I need to use a different app than to talk to my smart meter.

Of course Fred Wilson has a point when he posits that:

Mobile does not reward feature richness. It rewards small, application specific, feature light services. I have said this before but I will say it again. The phone is the equivalent of the web application and the mobile apps you have on your home screen(s) are the features.

However, those features in home automation usually cross multiple vendors. If I want to go into Movie Mode, I want my lights to dim, my stereo and my TV to switch to movie mode, and the blinds to go down. If I want to correlate my heating/cooling costs, I need to see the data from my thermostat alongside my smart meter. In short: interop is absolutely necessary. The current attempts of appliance makers to own the whole stack don’t move in that direction, though.
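To make the point concrete, here is a sketch of what a vendor-crossing scene could look like if such an interop layer existed. Every class and method here is hypothetical; this shared interface is precisely what today’s vendor-specific apps don’t offer.

```python
# Hypothetical interop layer: one scene fans out across devices from
# different vendors behind a single, shared interface.
from abc import ABC, abstractmethod

class SceneParticipant(ABC):
    @abstractmethod
    def apply_scene(self, scene: str) -> None: ...

class Lights(SceneParticipant):
    def apply_scene(self, scene):
        if scene == "movie":
            print("lights: dim to 10%")

class Stereo(SceneParticipant):
    def apply_scene(self, scene):
        if scene == "movie":
            print("stereo: switch to surround input")

class Blinds(SceneParticipant):
    def apply_scene(self, scene):
        if scene == "movie":
            print("blinds: close")

def activate(scene: str, devices: list[SceneParticipant]) -> None:
    """Send one scene to every participating device, vendor-independent."""
    for device in devices:
        device.apply_scene(scene)

activate("movie", [Lights(), Stereo(), Blinds()])
```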

From the Guardian’s Data Blog:

What’s particularly baffling is that while government support given to environmentally beneficial renewable power sources is subject to seemingly endless media and political scrutiny, the 500% larger subsidies given to oil, gas and (to a much lesser extent) coal rarely get much attention.

In case that 500% figure sounds hard to believe, here’s a chart showing the IEA’s estimate of all the energy subsidies given out globally over the last few years. As it makes clear, fossil fuels – and specifically oil and gas – account for the overwhelming majority.

It’s worth pausing for a moment to take in the sheer amount of money we’re talking about here: more than half a trillion dollars in 2008 (when energy prices hit record highs), equivalent to the total GDP of Sweden or Saudi Arabia.

Madness.

One of the big advantages of being a freelancer is the ability to move fast, to make split-second decisions on what you do, where you follow up, and what to engage in.

And it was such a split-second decision that made me scramble for bandwidth at the Berlin Data Science Day to organize a trip to the Birmingham ‘Seminar on Household Energy Consumption, Technology and Efficiency’. A decision which turned out to be absolutely the right one.

When you work on the energy industry in Germany, the specific problems and the context of the German energy transformation get so embedded in your thinking that it’s at times necessary to step out and ponder the issues other countries are facing, just to regain some perspective.

It turns out that Great Britain faces plenty of challenges of its own, many of them very different from the issues we face in Germany. Whereas we increasingly struggle with balancing electricity supply and demand in the face of an ever greater contribution from renewables, Britain is working on interventions ranging from energy savings to bettering the outlook for the undersupplied.

The core issues, however, seem to be almost exactly the same: how to engage people in a dialogue and bring energy to their attention in the first place; how to make people aware of their energy consumption, and maybe help them modify their behaviours in ways that contribute to a greener energy supply, less and better-balanced demand, and a generally more conscious approach to energy matters.

Behaviour of Energy Consumption

Starting out the seminar was Prof. Gordon Walker of Birmingham, who posited that we need to look beyond energy to what people are actually trying to do when they’re using energy. That sounds very much like the thinking which seems to inform a lot of design lately: don’t look at the product in the first place, look at what jobs people want the product to do. In order to have a shot at influencing people’s attitudes towards energy consumption, we need to understand the jobs people require that energy for.

That segues nicely into Benjamin Cowan’s presentation on how to design technologies to impact behaviours. One of the major points of his presentation was that purely numerical presentation of data (basically every in-home energy display out there) doesn’t necessarily work,1 and that trying to “gamify” energy efficiency and savings leads to short-term, unsustainable behaviour, such as using candles for light to win games.

However, using a combination of numerical and ambient information, one should be able to tailor specific feedback to impact behaviour. The big challenge is that a lot of discretionary energy expenditure is habitual, and it’s hard to modify those habits.
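As a minimal sketch of what such combined feedback could look like (the baseline and thresholds below are my own illustrative assumptions, not taken from any of the presented studies): compare current consumption against the household’s habitual level, and derive both a number and an ambient colour from it.

```python
# Combine numerical and ambient feedback: a figure for those who want
# detail, a lamp colour for glanceable, ambient information.
def ambient_feedback(current_watts: float, baseline_watts: float):
    """Return (message, lamp colour) relative to a habitual baseline."""
    ratio = current_watts / baseline_watts
    if ratio < 0.9:
        colour = "green"   # noticeably below the habitual level
    elif ratio <= 1.1:
        colour = "yellow"  # around the habitual level
    else:
        colour = "red"     # noticeably above the habitual level
    return f"{current_watts:.0f} W ({ratio:.0%} of your usual)", colour

print(ambient_feedback(current_watts=480, baseline_watts=400))
# ('480 W (120% of your usual)', 'red')
```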

Also working on social feedback systems is Derek Foster of Lincoln University. The projects he introduced (KillaWhats and Watts-Up) tied energy consumption into the social platforms people use, in order to study how social pressure can affect energy consumption patterns.

Skinnerian Behavior Modification?

What struck me as particularly interesting is that a lot of the presented approaches seem to align with the core assumptions of an article I had just read on the flight into Birmingham. In it, David H. Freedman ties weight-loss programs to the behavioural psychology of B.F. Skinner and sees huge inroads being made by technology-driven solutions in bringing down the costs of applying these approaches to a mass market.

However, the weight-loss market is a very different beast, in that you can assume motivation as a given when customers sign up for a program. This is not the case with a lot of the energy interventions that were discussed in Birmingham. Especially on the smart metering front, we are faced with a government mandate to deploy the tech to households, even if they may not be technologically inclined at all.

Any approaches trying to have broader impact will have to find a way to deal with people’s generally low interest in all things energy consumption.

Practice Theory

One major take-away of the trip was that the academic discourse on these matters seems very much concerned with Practice Theory, which I previously knew nothing about.

[Image: behaviour triangle]

It looks into the broader context of specific behaviour by considering the technological artefacts used, the cultural norms, and the skills and intentions behind actions, and as such might prove to be a useful tool in analysing the root causes of specific behaviour.

Economics?

What was noticeably absent from the seminar was any kind of economics influence (especially behavioural economics).

When Prof. Walker talked about the Passivhaus movement in the UK, for instance, he mentioned that, in keeping with the “behaviour triangle” above, they were talking about keeping the interfaces, layout and functionality of those houses roughly the same, so as not to run into acceptance problems with tenants, who, he thinks, want regular homes. This painfully reminded me of a Freakonomics episode looking into why the Toyota Prius is such a massive success while other hybrids founder. The gist: the distinct design of the Prius has a signalling effect, portraying the driver as caring about the environment, whilst the other models are indistinguishable from their regular, gas-only counterparts.

It seems to me that taking advantage of this signalling function could lead to a massive boost in the adoption of energy-efficiency tech, and really, that’s what’s at the core of social feedback mechanisms like those presented by Derek Foster, or commercialised by OPower.

I was promised that the slides of the event will be made available. As soon as they are, I’ll link to them here.


  1. Particularly insightful on this problem is the 2010 study by TU Delft [PDF], which looked at engagement with home energy monitors over the medium term. Additionally, as one participant mentioned, “the problem of in-home energy displays is that they show how cheap energy is.” 

There’s a piece of popular wisdom going around that you only really learn when you do something: it’s in the process of making, of creating, that you really interrogate the subject matter and might glimpse additional insight.

It was in that vein that in preparing my talk for the NEXT conference back in May I discovered what really gets me so excited about the Internet of Things.

A lot of my thinking lately has been about the forces that fuel the trajectory of the Internet of Things. We see an enormous uptake in digitally connected objects, and in trying to grasp why, it crystallized for me that these forces can be summed up by three laws which make a broader Internet of Things almost an inevitability:

Moore’s Law

Moore’s law should be commonplace for anybody following the technology sector. Described by Gordon Moore in 1965, it essentially posits that the number of components on an integrated circuit (chip) doubles every year, a cadence Moore later revised to every two years. This law has proven remarkably stable since its inception. Prices per unit of computation have come down dramatically, as ever more computation can be put into the same package and, more importantly, sold for the same price. This means we can now have chips incredibly cheaply.
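The compounding is easy to underestimate. A quick back-of-envelope, assuming the revised two-year doubling cadence and the Intel 4004 as a starting point:

```python
# Rough compounding under Moore's law (two-year doubling).
def transistors(year: int, base_year: int = 1971,
                base_count: int = 2_300) -> float:
    """The Intel 4004 (1971) had roughly 2,300 transistors."""
    return base_count * 2 ** ((year - base_year) / 2)

print(f"{transistors(2012):,.0f}")  # ~3.4 billion -- the same order of
                                    # magnitude as actual 2012-era chips
```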

Koomey’s Law

A corollary of Moore’s law is Koomey’s law, which posits that the energy efficiency of computation doubles roughly every one and a half years. That means, in essence, that the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has: a fully charged MacBook Air running at the energy efficiency of computation of 1992, just two decades ago, would completely drain its battery in a mere 1.5 seconds.

What Koomey’s law in essence means is that, going forward, the energy required for computation in embedded devices keeps shrinking, to the point where harvesting the required energy from ambient sources should suffice to power the computation necessary in many applications.
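As a rough sanity check of the MacBook Air factoid above (the battery-life figure is my own assumption):

```python
# Koomey's law: efficiency of computation doubles roughly every 1.57 years.
battery_life_2012_s = 7 * 3600        # assume ~7 hours on a 2012 Air
efficiency_factor = 2 ** (20 / 1.57)  # two decades of doublings, ~6,800x
print(f"{battery_life_2012_s / efficiency_factor:.1f} s")
# ~3.7 s -- the same ballpark as the 1.5 s figure, given how rough
# these inputs are
```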

Metcalfe’s Law

Metcalfe’s law has nothing to do with chips, but rather with connectivity. Formulated by Robert Metcalfe while he was inventing Ethernet, the law essentially states that the value of a network grows proportionally to the square of the number of its nodes. This is the foundational law on which many of the social networking platforms are built: an additional user doesn’t just increase the value on a linear scale, but rather increases the number of possible connections, and thus the value, for all users.
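The mechanics are simple enough to compute: n nodes allow n(n-1)/2 pairwise connections, so the value grows roughly with the square of the network size.

```python
# Metcalfe's law in miniature: possible pairwise links between n nodes.
def connections(n: int) -> int:
    """n(n-1)/2 possible pairwise connections."""
    return n * (n - 1) // 2

for n in (2, 10, 100, 1_000):
    print(n, connections(n))  # 1, 45, 4950, 499500: each node added
                              # raises the value for everyone else too
```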

So what‽

But what does that mean with regard to the Internet of Things? Moore’s and Koomey’s laws make embedding chips into almost everything both economically viable and technically feasible. The chips get smaller and cheaper, and their energy footprint decreases dramatically. But it’s Metcalfe’s law that provides a strong incentive to actually pull that off, because the more nodes we connect to the network, the more valuable the network becomes, and the more value we can derive from it.

How that value is expressed depends on the form of the individual nodes, and on how users can access them. But from a systemic perspective, following these three laws, it seems most unlikely that we’re not going to integrate ever bigger chunks of our everyday lives into the network.

Solar subsidies have been a huge discussion in Germany recently. The feed-in tariff, which made it economical for large swaths of the building-owning population to invest in photovoltaics, has led to a land rush to install solar capacity. It turns out that the growth in production is far outpacing even the most optimistic predictions. Just how far have we come in terms of solar power?

Reuters had a story yesterday about how Germany’s solar output was up to 22 GW, from 14 GW just a year ago. That is a difference of 8 GW of capacity. For reference, the biggest German nuclear power plant has a peak output of 1.4 GW.1 This would be a staggering growth in solar capacity within just one year, surpassing even the record growth of 2010.

However, Reuters mentioned that this provided just about half of the German energy demand. Which would have been surprising, because usually the aggregate demand for Germany during lunchtime isn’t anywhere near the winter peak demand of about 50 GW. So, being the energy geek that I am, I decided to check on the data. And what I found was pretty surprising.

According to the reported data, solar production outstripped demand for the first time in Germany.

[Chart: System Vertical Load vs. Renewable Production, May 2012]

This graph shows the aggregate System Vertical Load as reported by the Transmission System Operators,2 which is a good gauge for aggregate demand, and the solar and wind production as reported by the EEX. On Saturday, 26 May, we can see that the curve for solar output crosses the curve for vertical grid load. At the peak, we have solar output of 22.1 GW against a grid load of 20.9 GW, a surplus of 1.2 GW – approximately one nuclear power plant.
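For the curious, the check itself is straightforward once the data is in a workable shape. A sketch under assumptions: the two series have been exported to aligned CSV files (the file names and column layout here are hypothetical; the real data comes from the TSOs and the EEX in far less convenient formats).

```python
# Compare vertical grid load against solar production and find the surplus.
import pandas as pd

load = pd.read_csv("vertical_load.csv", index_col="timestamp",
                   parse_dates=True)["gw"]
solar = pd.read_csv("solar_eex.csv", index_col="timestamp",
                    parse_dates=True)["gw"]

surplus = (solar - load).dropna()
print(f"max solar surplus: {surplus.max():.1f} GW at {surplus.idxmax()}")
print("intervals with solar above vertical load:", (surplus > 0).sum())
```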

We know this phenomenon from Spain and Portugal, where it is a somewhat regular occurrence for peak wind output to eclipse off-peak grid demand, but for Germany this is a new situation. And it is somewhat surprising that it is happening this fast.

This puts new pressure on the legislative approach of reducing the feed-in tariffs paid for solar photovoltaics, and lends renewed urgency to smart grid approaches to balance the energy grids.

Update: It turns out Vertical Grid Load is not as good a measure for electricity demand as I initially thought, as it systematically underreports demand, especially under conditions of high production in the low- and mid-voltage distribution networks, which is the case in scenarios of high solar production. I am looking for better data sets and will amend this post accordingly once better data is found.


  1. Yes, I am perfectly aware that nuclear has a more stable output profile. Thanks for thinking of that. 

  2. The four German TSOs are: TenneT, 50Hertz, Amprion and TransnetBW