There’s some interesting thinking around the kind of problems startups should address, and the scope at which they should tackle them. The folks at Andreessen Horowitz in particular have been pretty outspoken about this, starting with Chris Dixon writing about Full Stack Startups, and today with a tweet stream by Balaji S. Srinivasan, one of »a16z’s« GPs, which is storified here.

The gist of the argument is essentially: if you want to »disrupt« a legacy industry, you’ll need to be able to service the full stack of that industry, rather than just embed yourself at a single level of it, because otherwise the cost of operating within the industry becomes too high (integration with existing infrastructure) and the value capture might not work in full, as you rarely have direct access to the customer.

Suppose you develop a new technology that is valuable to some industry. The old approach was to sell or license your technology to the existing companies in that industry. The new approach is to build a complete, end-to-end product or service that bypasses existing companies. […]

The full stack approach lets you bypass industry incumbents, completely control the customer experience, and capture a greater portion of the economic benefits you provide.

Full stack startups | chris dixon’s blog

And there is – of course – a point to this argument. If you’re not attacking a significant part of an industry’s value chain, you may end up being a feature rather than a company, and integration with legacy infrastructure is a costly endeavour.

What is »The Stack«?

What’s missing from this discussion, however, is what actually constitutes “the stack”, and this leads to statements like the following, which have me tweeting in disagreement.

Let me elaborate.

For the last year, whenever I have given a talk about the Internet of Things, I have talked about the importance of accepting and respecting what Stewart Brand called »Shearing Layers«. Coming at it from a critique of corporate visions revolving around »Smart Cities«, the »Intelligent Planet« or the »Sensing Environment«, the thing that stands out is a blatant disregard for this elemental concept of architecture.

The disregard for those differential velocities of change and the resulting shearing forces is what stops a lot of technology rollouts cold in their tracks, or worse, leads to massive retrofitting operations.

The site of a building changes on a geological time scale. The core structural elements may last for decades or centuries. The exterior for a decade or two. The services (wiring, HVAC, plumbing) for fifteen years. The layout could change every few years in a commercial space as walls are moved and changed according to the needs of commerce or the org chart. At the bottom layer, the stuff that sits in an office consists of things like desks and computers that get moved around monthly, weekly, or daily.

The big risk with shearing layers is that if you misapprehend on which layer a component belongs, you end up with a building that’s extremely difficult to use. Embed your clever new HVAC system in the structure and you could find yourself tearing down a whole building because air regulations changed.

Shearing Layers of Stuff | Quiet Babylon

Steadfast or Hot-Swappable?

We see that strategies that revolve around completely setting up your home with connectivity, and thus supplying the full stack, flounder in the marketplace (I’ve elaborated on this before, though not in as much detail), whereas simple, single-use appliances tend to work. They might work even better together, and that’s the promise of connectivity, but they are singularly useful.

A while back, there was the interesting discussion around »Steadfast« and »Hot-Swappable« things, and how they relate to each other. This discussion is a good example of Brand’s »Shearing Layers« and serves to make the concept easily accessible. And it shows how deep integration of the stack at the wrong end might actually prove detrimental to your product’s, and ultimately your company’s, success.

So, here’s the idea: Just buy dumb TVs. Buy TVs with perfect pictures, nice speakers and an attractive finish. Let set top boxes or Blu-ray players or Apple TVs take care of all the amazing connectivity and content afforded to us by today’s best internet TVs. Spend money on what you know you’ll still want in a few years—a good screen—and let your A/V cabinet host the changing cast of disposable accessories. Besides, interfaces like Apple TV’s or Boxee’s are miles better than the clumsy, underdesigned connected TV interfaces turned out by companies like LG and Samsung.

And TV manufacturers: Don’t just make more dumb TVs. Make them dumber.

I Just Want a Dumb TV

Consider Moore’s law

The most interesting industry that has had to cope with this phenomenon, and is still in the process of coming to terms with the ramifications, may well be the auto industry. Overwhelmed by the sudden success of what we nowadays call smartphones, its product development cycles and integrations proved too sluggish to respond to changing customer requirements in a timely manner. When you order a new car, the internal computers which power navigation and in-car entertainment are usually already two years old. Combine that with an average lifetime of a car of around 15 years (at least in Europe) and you see why cassette-tape adapters are still a thing. The auto industry is only slowly beginning to realise that computing-intensive tasks should be outsourced to that small computer almost all of us carry with us. Its upgrade cycles are just much faster. Apple’s CarPlay, Ford’s Sync platform and the Android Open Automotive Alliance are steps to address this.

But how does this relate to »Full Stack«?

Let’s go back to the Smart Cities example.

Things get trickier when you want to build in sensors and responsive systems to smart buildings or when you want structures that can be shaped and reconfigured by subsystems and cybernetics. The speed of software is cruel and unrelenting. A structure built for durability must have an upgrade plan for its disposable components. Layers are tightly coupled at the creator’s peril.

Shearing Layers of Stuff | Quiet Babylon

By embedding software into increasingly mundane parts of the world, we establish another potential shearing layer within the stack we operate under.

The Stack or Infrastructure?

When we ask startups to attack the full stack of their respective industries, we need to include two conditionals.

Firstly, we must insist that they don’t tightly couple their services, which would result in the shearing forces within the respective industry going unaddressed, making the whole endeavour less efficient.

And secondly, we should ask them to identify what actually constitutes the stack before setting out, with an eye towards what is steadfast and what is hot-swappable.

We can also achieve a certain steadfastness in unpredictable or quick to change areas, such as technology. We do this by identifying the needs that are steadfast and the devices that fill those needs, and labeling the parts of the system that are “hot-swappable,” meaning that they can be replaced quickly and easily without bringing the whole system down. (Also see this great blog post on steadfast components by Robin Sloan.)

For instance, if you’re a film buff, owning a nice television or projector would be a steadfast purchase. The utility of a nice picture will never change. What will change, however, is the technology that feeds that picture to the screen, whether it be an AppleTV, a DVD player, or a BluRay player. So, investing in a nice screen is manifestly more important than investing in a top of the line BluRay player. The devices that hook into the television are hot-swappable.

Frank Chimero – Making the Bed You Sleep In

I would strongly encourage startups to have an eye towards the full stack. I think it is important to not be overly dependent on just a few providers for crucial parts of our digital infrastructure. However, there’s no need to ship a power generator with your smart device. Sometimes parts of the stack are just infrastructure you can rely on.

With the Internet of Things gaining in media presence, big corporations realise that there’s something going on that they have no real stake in yet. And they come out rushing. Mostly, that’s in conjunction with creating a new term for that thing that nobody has a solid definition of, so they can own the space. Given that, here’s a quick, tongue-in-cheek primer of what corporations mean when they rebrand the Internet of Things:

The Industrial Internet (GE):
We’re talking about the Internet of really big Things
The Internet of Everything (Cisco):
If everything has an address, it’s like the Internet, but with everything on it
Industry 4.0 (Siemens):
The Internet is like steam – in that we’re thinking about how it affects our machines.
Internet of Customers (Salesforce):
It’s really interesting how your warranty claim is contradicted by your usage pattern.
We’re putting our SCADA Systems online.
Social Web of Things (Ericsson):
Your toaster wants to be your friend!
Embedded Internet (Intel):
Yay, Microchips Everywhere!
Hyperconnectivity (WEF):
We hear this internet thing is really good at what we used to be good at.
Networked Matter (IFTF):
Because how are we going to act on the world if our brains are uploaded to machines after the singularity?
Web of Things (W3C):
There’s always room for one more standard.
Web Squared (O’Reilly):
Your Information Shadow isn’t restricted to Web 2.0 anymore.
The internet of nature (ENEVE):
Those trees don’t hug themselves.
Industrial Internet of Things (Echelon):
Like the Industrial Internet but with more Internet and more Things, presumably.
IOx (Cisco):
An intermediary enablement framework layer to leverage smart BYOI and BYOA on the network edge. Exactly.

I hope this clears things up. Oh, and if you come across more creative new names for IoT, please do let me know!

If you’re a casual observer of tech, you’ll no doubt have noticed that Google announced a rather large acquisition the other day. Heck, it even got prime time and first page news coverage here in Germany, albeit admittedly with the unavoidable spin towards matters of incursions of privacy. It seems this acquisition got people talking.

In my view there are three interesting discussions or inquiries into this acquisition. And although people seem to be totally intrigued about the potential motives of the purchase on Google’s part, I’m not going to wade into that territory.

Market Validation

I had a post sitting in my drafts folder picking up on this piece at Slate, which reported on the financing round Nest raised about two weeks ago.

But there’s a real reason why investors like weightless social media companies more than manufacturers of tangible products. It’s called “scale.” The next 100 million Facebook users will add very little to Facebook’s overall cost. But they’ll substantially increase the size of Facebook as an advertising platform. And these companies also benefit from network effects. […]

A company like this can, at the peak of its powers, achieve totally obscene profit margins. So you want to pay a lot for a firm that has even a small shot at that kind of ring.

Slate – Nest vs Pinterest: Here’s why investors like intangible products.

The issues of scaling are of course there. Marginal costs in hardware businesses are not to be underestimated. However, potential profitability is only one concern for venture investors. The overriding concern is liquidity events: how big can potential exits get, and how likely is an exit versus a wind-down of the company? And that’s where marginal costs play a big role, because with pricey user acquisition and weaker network effects, the hockey stick that so much of the investment community has come to rely on seemed an unfeasible proposition for hardware. It’s the breakout successes that make or break a fund. You need an investment with crazy multiples to survive in an industry that has power laws in its very foundations.

The Nest acquisition proves that those exits are possible with hardware, too.

Multiple sources say Kleiner Perkins was Nest’s biggest investor, and was able to invest $20 million in Nest across the A and B rounds. Our sources say the $3.2 billion cash price Google paid for Nest will generate a 20X return for KPCB — which matches the 20X multiple Fortune’s Dan Primack heard from a source. The money came from 2010′s $650 million KPCB XIV fund, which means Kleiner returned over 60% of the fund with just its Nest investment.

TechCrunch – Who Gets Rich From Google Buying Nest? Kleiner Returns 20X On $20M, Shasta Nets ~$200M
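The arithmetic in the quote is easy to sanity-check (figures taken directly from the quote above):

```python
# Sanity check of the reported KPCB return on Nest.
investment = 20_000_000      # reported investment across the A and B rounds
multiple = 20                # reported return multiple
fund_size = 650_000_000      # KPCB XIV fund (2010)

proceeds = investment * multiple
share_of_fund = proceeds / fund_size

print(f"Proceeds: ${proceeds:,}")                      # $400,000,000
print(f"Share of fund returned: {share_of_fund:.1%}")  # 61.5%
```

A single $400M outcome returning over 60% of a $650M fund is exactly the kind of power-law result the paragraph above describes.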

I expect this to increase the hunger for hardware startups among many of the institutional VCs who so far have sat on the sidelines. The market is validated, and this should encourage additional investments.1

Scaling is still hard

One of the main advantages that the Nest team had from the get-go over potential competitors in the space was its intimate knowledge of manufacturing and supply chains. Coming from Apple, they had had to figure out how to manage production of millions of units of any given product. This knowledge is necessarily hard earned. But it gets worse: the whole discussion around MVPs and lean is in direct opposition to what’s necessary in a hardware startup. You don’t have the luxury of adjusting your product mid-flight, and A/B-testing your way to success would be prohibitively expensive.

MVP and product-market-fit are reasonable ways to go about a software startup, where your feedback loops are much tighter, and you can learn from your customers much more readily. Further, most startups start gathering feedback from users with what are essentially prototypes.

Prototyping in hardware has come down in cost massively, and we see a lot of experimentation as a direct result. However, we also see a lot of those experiments hit a brick wall when they have to deliver to 1,000 or 10,000 customers. Often, their prototypes haven’t made it to the DFM (design-for-manufacturing) stage, and the resulting re-design and re-engineering add non-trivial amounts of time to the roadmap. I fully expect to see more movement here, as companies and accelerators strive to fill this void and help startups navigate complex supply-chain issues.

This talk by Matt Webb of BERGCloud is instructive.

Don’t bank on connectivity

This is a point I’ve been talking about repeatedly. Nest’s proposition wasn’t an “internet-connected” thermostat. It was the “learning thermostat.” As such, it took an everyday experience and made it better by adding connectivity. This is the core differentiator from a lot of other “connected objects” whose prime selling point is connectedness without any fundamental change to the product experience, oftentimes even degrading the usability of said products. This latest addition to LG’s product portfolio is a good example of that, and would be a good addition to this ongoing collection.

An especially interesting characteristic of the Nest, and of the Withings product range as well, is that they do not rely on any gateway boxes, and thus avoid even the appearance of geeky, tech-heavy tools. Rather, they are products that work on their own, without customers having to purchase any enabling equipment. This is, I believe, a huge part of their success formula.2

This is part of a larger trend where I see “Internet of Things” increasingly becoming a term for those of us who work in the industry, while completely being irrelevant in product descriptions and marketing. We might be interested in how products work together, how the connective tissue is going to be used and what those new products might look like. Customers, on the other hand, are mostly just interested in better product experiences. They couldn’t care less.

Disclosure: Both Nest and Withings have been clients of mine in the past. This post reflects my personal opinions on the market and is not informed by their positions.

  1. It’s funny how that Gartner report already looks outdated, a mere 4 months after it was released. 

  2. This is one of the big reasons why I think the impact of Bluetooth Low Energy is going to be much bigger than most currently anticipate. 

Pip Stress Tracker

The Verge, one of the less startup-and-tech-hype-y publications (although not completely free of it either), takes a look at a new fitness tracker that wants to incorporate Galvanic Skin Response, a biomarker that, like a lot of other markers, is perennially ambiguous. And yet we have heaps of gadgets, methods and organisations focussing on this one measurement in the hope of learning something about our emotional state, our exertion levels while doing sports, whether we’re lying, or which topics give them the best opening to talk us into joining them.

Instead of metrics like movement or heart rate, the PIP looks at your galvanic skin response (GSR) — basically, the level of moisture on your skin. It’s already a popular measurement, used by everyone from the police to the Church of Scientology, but it can also be mysterious and difficult to read. And as body-tracking devices like the PIP get in on the game, they’re discovering just how ambiguous GSR can really be.

From Scientology to the quantified self: the strange tech behind a new wave of self-trackers | The Verge

The problem, as should be readily apparent, is that one data stream cannot possibly tell you all those things; one marker does not allow inferences about everything we want to know from it. And yet this approach is symptomatic of the Quantified Self movement: in the hope that additional context or some nifty machine-learning algorithms can at some stage make sense of it, whatever data is easily captured right now gets collected. That shouldn’t be much of a problem within the Quantified Self movement itself, even if it draws a caricature of the movement’s stated goal of self-enlightenment, as this is just hobbyists trying to learn a bit more about themselves while (hopefully) mostly understanding the capabilities and limitations of the technology they choose to employ.

This line of thought becomes hugely relevant, however, once the rhetoric and methods of said movement travel while the understanding of their limitations gets left behind. We can observe this when we look at how fitness apps try to push into the healthcare space and learn rather harshly that what they thought was meaningful data doesn’t hold up to the levels of scrutiny that we understandably apply to medical applications.

It is, perhaps, with this in mind that we should look at the FDA’s recent decision to bar 23andMe from operating, given the pseudo-medical nature of their product combined with their rather fitness-market approach to data acquisition and delivery. This example that recently made the rounds in Germany is but one marker of this.

The problem with all these approaches, and this is where “Smart Cities” (a topic which, judging by the number of interview requests I’ve received lately, is making a resurgence in the technology and economics press) enter the picture, is this: we measure what we can easily measure, and then forget that this was the only reason we measured it.

There exists a school of thought, which unfortunately has overtaken management, that subscribes to the credo of “if you can’t measure it, you can’t manage it.” I say unfortunately, as most who apply this credo take it to mean “I only manage what I can measure” and then, because, you know, you’d do it as well, go on to manage whatever is most easily measured. It’s called the low-hanging fruit.

And under these conditions, Smart City concepts start to look like fitness trackers that rely on Galvanic Skin Response as their primary source of data. It tells you something, but you have no reliable way of figuring out what that something is, never mind making decisions based on it. And nevertheless, we’re pushing forward to instrument our cities, with no mind to what we actually want to learn about them, and whether what we’re doing right now is in any way fit to tell us that.

Smart Cities are coming our way. This much is clear. And I’d be remiss to say I don’t crave a little more efficiency and legibility in the infrastructure and processes that I encounter in the Great European Cities. My native Berlin is particularly lacking in this regard. However, given that cities are our cultural and economic engines, and probably the best assets we have, we need to take care in how we approach this.

Because making policy decisions based on data you only gathered because you could cheaply collect it isn’t smart. Not in a fitness tracker, and exponentially not in a city.

Nest announced the next step in their product lineup yesterday. The Nest Protect, a smarter smoke detector which comes with all kinds of bells and whistles and sports Tony Fadell’s trademark design chops, seems at first glance like a product that is undoubtedly going to sell well. The first conversations I had yesterday and today indicate that people were clearly waiting for this kind of product. Everyone seems annoyed with their smoke detectors, and to be honest, I myself have been thinking about hacking the smoke detectors we currently have to make them programmatically accessible.

What I’m really interested in, however, isn’t the Protect in and of itself, but how it showcases what I’ve been trying to convince clients of for a while now. That is, a product strategy for Home Automation that actually works.

Most companies in the Home Automation space (even though Nest likes to call it the “Conscious Home”) tend to take the perspective that we see playing out in the wider IoT space as well: they focus on the integration, the middleware, the hubs, because they know how to do that, and because they’ve learned over the last couple of years that a platform play is the most profitable one. So they’re trying to sell you their integrated solution, but mostly they just want to own the hub, plus ideally a couple of APIs, to then go out and beg developers and hardware companies to please build for their platform.

However, the incentives for customers to actually invest in a hub or integrated solution just aren’t there. Experience in the market shows us that customers don’t upgrade their home all at once but do it piecemeal, which flies in the face of the integrated players’ strategy. And people certainly don’t buy abstract hubs in which they see no clear value.

And here, the beauty of Nest’s strategy really shines through: They are developing a platform, but it’s not at the core of their product strategy. Their products are individually useful, and obviously so. There’s no assumption here that, in best “build it and they will come” fashion, once they have a hub and documentation, companies will start building for it. Rather, it’s a collection of individually interesting products which work even better together.

And that’s at the core of the depressed home automation market. Every product that wants to play a role here needs to be individually useful. The assumption needs to be that any given product is going to be the only one of a product range that a customer buys.

If they get better with additional components, then you’re onto a winner, but you ultimately cannot require customers to invest completely in your (in most cases nonexistent) ecosystem.

Disclosure: Nest Labs is a client of mine. This post however is not informed by my work with them but purely reflects my personal opinion.

On the heel of Apple’s latest product announcement, we’ve seen a lot of coverage about the devices, and the potential impact in terms of Apple’s strategy, what it means for the market, what it means for the wider industry. So allow me some idle speculation, in the form of three short quotes, as well:

However, one thing you can say is that when you use passwords together with biometrics, you have something that is significantly stronger than either of the two alone. This is because you get the advantages of both techniques and only a few of the disadvantages. For example, we all know that you can’t change your fingerprint if compromised, but pair it with a password and you can change that password. Using these two together is referred to as two-factor authentication: something you know plus something you are. […]

Another interesting fact is that the phone itself is actually a third factor of authentication: something you possess. When combined with the other two factors it becomes an extremely reliable form of identification for use with other systems. A compromise would require being in physical possession of your phone, having your fingerprint, and knowing your PIN.

Fingerprints and Passwords: A Guide for Non-Security Experts

In the age of context, iBeacon can provide the information you needed when it is needed. Just like NFC, iBeacons even allow you to pay the bill using your smart phone.

With iBeacon, Apple is going to dump on NFC and embrace the internet of things — GigaOM

Apple now has 400 million active accounts in iTunes with credit cards. That means that all of Apple’s iOS devices are linked to those credit cards, too.

Apple’s Stash of Credit Card Numbers Is Its Secret Weapon

This, of course, is just one direction this could go. We’ll definitely see more of the smartphone industry moving in the direction of authentication and identification devices. The current scandal regarding intelligence overreach, especially in the US, couldn’t come at a worse time for this.
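The layered scheme the first quote describes, something you possess, something you are, something you know, can be sketched as a simple policy check. This is purely illustrative; the type and function names are mine, not any real API:

```python
from dataclasses import dataclass

@dataclass
class AuthAttempt:
    has_device: bool          # something you possess: the phone itself
    fingerprint_match: bool   # something you are: the biometric reading
    pin_correct: bool         # something you know: the PIN or password

def three_factor_ok(attempt: AuthAttempt) -> bool:
    """All three factors must pass; compromising any single one is not enough."""
    return attempt.has_device and attempt.fingerprint_match and attempt.pin_correct

# A stolen fingerprint alone does not authenticate:
print(three_factor_ok(AuthAttempt(False, True, False)))  # False
print(three_factor_ok(AuthAttempt(True, True, True)))    # True
```

The point of the quote falls out of the conjunction: an attacker needs your phone, your fingerprint, and your PIN simultaneously, whereas a compromised password alone can simply be changed.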

@mhoye: The Internet Of Things You Don’t Really Own

Being based in Berlin, I cannot help but look at the music industry every so often. It certainly doesn’t help that the Berlin Music Week is happening.

And we all should look to the music industry every so often, because a lot of battles have played out or are currently playing out there that eventually spill over to other industries. Music had to deal with unbundling way before it hit the newspapers, and with monopolistic distribution infrastructures way before Amazon became a thing in eBooks. And given their greed, they’re also the folks who seem most stuck-up when it comes to “intellectual property” rights and licensing.

I haven’t “owned” music in a long time.

Access means licensing

When I moved from Heidelberg to Berlin, I sold all the CDs I still had. And even though I invested in a good LP player when I got to Berlin, my vinyl collection isn’t actually shaping up at the pace I had initially hoped. Nonetheless, I consume more music than ever.

I haven’t “owned” music in a long time, because in most cases, I get a license to listen to music, rather than ownership over a physical copy of a piece of music.

This is how we deal with what the industry likes to call “content” nowadays. We don’t own it. We own licenses to it.

This change happened because the underlying structure changed. Instead of physical objects, music turned into data. Files on your computer. And those files on your computer, you never really own. You just have a license to store them and to play them. So when I listen to music on my stereo, it plays on the stereo that I own, but I don’t really own the music.

Terms and Conditions may apply

Here’s an interesting thought experiment: with the new intelligent, smart door lock, which relies on the manufacturer’s server infrastructure to communicate reliably with your mobile phone, what happens when you run afoul of some asinine provision in that service’s Terms of Service?

You own the physical product. There is a lock on your door that you can remotely control from your smartphone. But you cannot use the service anymore, so its functionality, that added benefit that you originally bought it for, is essentially gone.

Take, for instance, the slew of smart fitness tracking devices. Be they manufactured by Nike, Fitbit, Jawbone or Withings, none of them just does tracking. Their real value lies in the connection to a web service, which allows me, as a user, to track my workouts, compare them with my friends’, get aggregate analytics, etc. That use, of course, is bound by Terms of Service and a whole slew of other legalese, which means, in essence, that I only really own a thing on my wrist. The functionality that I bought it for resides on their servers. I only have a license to use it. Should I ever run afoul of any license restrictions, that object, that fancy thing on my wrist that I paid good money for, becomes essentially useless. It’s a Thing I Don’t Really Own.

As A User, I Want To Own My Stuff

A while back, Amazon users suffered an ironic incident: unauthorized copies of George Orwell’s 1984 had been distributed as eBooks via Amazon’s Kindle store. After the rights-holders complained to Amazon, the company decided to remove all purchased copies from every Kindle, notes and all.

How do we ensure that the same situation doesn’t happen to the users of fitness tracking devices, smart door locks, or intelligent heating systems? One way would be to move as much smartness as possible to the edge, that is, to the devices themselves. That of course is not always practical, given the restrictions that most connected devices operate under. And it’s often not desirable, as those devices benefit from aggregate analytics that combine the data from many units.
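Moving smartness to the edge can be read, in code, as an edge-first control loop: the device always produces a usable answer locally, and the vendor’s cloud only refines it. A minimal sketch of a thermostat built this way (all names and values are hypothetical):

```python
def target_temperature(local_schedule, hour, cloud_suggestion=None):
    """Edge-first control: the local schedule always yields a usable answer;
    a cloud-derived suggestion, if available, merely refines it."""
    base = local_schedule.get(hour, 20.0)  # sensible local default (degrees C)
    if cloud_suggestion is None:           # cloud down, ToS dispute, vendor gone
        return base
    # Clamp the cloud's input so a misbehaving service can't take over the home.
    return max(base - 2.0, min(base + 2.0, cloud_suggestion))

schedule = {7: 21.0, 22: 17.0}
print(target_temperature(schedule, 7))        # 21.0 -- works fully offline
print(target_temperature(schedule, 7, 30.0))  # 23.0 -- cloud input clamped
```

The design choice here is the point of the paragraph above: losing access to the service degrades the product gracefully instead of bricking it, which is exactly what the Kindle and Zune stories argue for.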

Another would be to force companies to keep servicing users who bought their devices. But as every purchaser of DRM-protected music from the Microsoft Zune store can attest, this is no guarantee for users.

Ultimately, nothing will beat open standards and interoperable systems. Service portability, being able to control my intelligent heating system with whichever service provider I choose, is the only meaningful solution to this. The alternative is heavy-handed regulation or upset users, and neither scenario is one any vendor should like.

I’m still waiting for the first accident involving an “autonomous vehicle.” Not because I want this particular set of technologies to fail, but because it will set off a discussion that currently plays out only on the sidelines, in special interest groups and industry consortia, but which really needs a broader focus and much broader input.

The question at the heart of the first accident with a self driving car is going to be: who is to blame?

Who is to blame?

Was the accident triggered by a malfunctioning algorithm that controls the car? Bugs are everywhere and unavoidable, after all, and who is to say that the software driving your car should be without fault? Or was the accident caused by the traffic infrastructure sending the wrong signals? Maybe the traffic light signalled red, but the electromagnetic signal that the car relies on was configured wrongly.

Maybe there was interference on the radio-frequency spectrum. Maybe even intentional interference. Did the car manufacturer integrate the software that drives the car correctly? And what about attendant data? Did the GPS unit report wrong information, so that the car misjudged the velocity it was travelling at? What about the driver? Did they force some kind of out-of-bounds situation that the car just wasn’t able to cope with?

So really, who is to blame?

Early warning

The ensuing discussion around that question will be instrumental in shining a light on the coming world of the internet of things. Because that self-driving car might just be an isolated incident, but at the same time is an early indicator of a much broader trend: networked and pervasive actuators.

The roll-out of distributed sensor networks is just the first step of the Internet of Things: the part where the Internet learns about the world. Ultimately, though, we want stuff to automatically act upon the information that was gathered, and that means rolling out actuators. That might just be automatic door locks that open when someone authorised is geographically near, but it might as well be traffic signalling and guidance systems, or autonomous vehicles.

Lawyers love this

However, the big question around that roll-out is one that lawyers love: the question of liability.

Because, when—and it’s going to be when, not if—bad stuff happens, it’s not as easy anymore to determine who fucked up.

As hinted at earlier, there are plenty of moving parts that determine how an actuator will react in a given situation, and of course, the more complex the interactions of the actuator with the world, the more complex the surrounding dependencies and conditions that actuator relies on.

It is said that Google’s self-driving car currently generates around 1 GB per second.

But what is ensuring that all that data is accurate? And what happens when you network those cars, so that each car relies on data generated by other cars? There are plenty of attendant considerations to pay attention to: can you ensure data is coming from where it claims to be coming from? How do you ensure it’s an accurate reflection of reality? How do you make it tamper-proof, but at the same time interoperable? Open even?
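The origin and tamper-proofing questions at least have well-understood building blocks: sign each reading, verify the signature before acting on it. A minimal sketch using a keyed hash (HMAC) from Python’s standard library — the message shape and key names are invented for illustration, and a real car-to-car system would need public-key signatures rather than a shared secret:

```python
import hashlib
import hmac
import json


def sign_reading(reading: dict, key: bytes) -> dict:
    """Attach an HMAC tag so receivers can check origin and integrity."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "sig": tag}


def verify_reading(signed: dict, key: bytes) -> bool:
    """Reject readings that were tampered with or signed with another key."""
    payload = json.dumps(signed["reading"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])
```

Note what this does and doesn’t buy you: a valid signature proves the reading wasn’t altered in transit and came from a key holder — it says nothing about whether the sensor itself measured reality correctly. That gap is exactly where the liability chain gets murky.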

Beyond Privacy

We’re currently talking a lot about the privacy implications of the Internet of Things, and in the current political climate, that is a conversation it is about time we had. We should, however, not lose sight of the bigger picture.

We’ve already created systems that are no longer legible to humans. Kevin Slavin’s talk about “How Algorithms rule our world” has hopefully made the rounds widely enough. All that information that we collect or start collecting about the world – it wants to be acted on. Preferably in the real world. We need to think about liability chains and data provenance if we want this to be in any way manageable.

I love it when Alex tells it like it is.

Gartner, a trends research group (or science fiction that costs a lot more to subscribe to) published their yearly Hype Cycle Chart, describing the Internet of Things (which they’ve only started adding to their chart in 2010 after “mesh networks” was doing rather well) as being “more than 10 years away” for the third year in a row. Well I sincerely don’t know what they’re smoking in Connecticut but they should come to Europe more often. Trends researchers have a disproportionate influence on the tech sector and C-suite executives who don’t have time to keep up with what’s going on outside their organisations and rely on outside opinion. This is totally fine, unless that source is living under a rock.

» Against the culture of “meh” and the internet of things. designswarm thoughts

I’ve had my fair share of frustrations with the Gartner Hype Cycle, and she cuts right to the chase. I think the hype cycle as such is a good, intuitive tool for looking at the progression of technology (through the hype, one might add).

This all wouldn’t be too bad if Gartner’s predictions didn’t have the effect of self-fulfilling prophecies. If C-level execs hear that IoT is still 10 years out, they’ll more likely than not defer investment in that space. After all, there’s still time.

Gartner is doing the industry, and those of their clients who take their reports at face value, a massive disservice. The infrastructure is being built as we speak, and industry after industry is looking to adopt more computing and communications in their products. Either Gartner have a very narrow definition indeed of what the Internet of Things is, or they’re not really paying attention. I’d love to see their methodology and the reasons why they believe it’s still 10 years out.

Cue Alex again:

We should remember that the people paid to write reports for the C-suite executives in technology will breed the type of opinion that Ken Olson had in 1977.


“There is no reason anyone would want a computer in their home.”

I never really announced it here on the blog, but I’ve been working out of a shared office space in Kreuzberg for the last 6 months. What started out as just a space shared by Peter and Matt turned into more of a shared space for work, collaboration and sparring ideas. And after Alper joined and Chris had a brief run here, it needed a name. Enter the Kreuzberg Academy for Nerdery and Tinkering.

And as KANT, we sat down last week for a one-day content sprint – a concept loosely based on the book-sprint methodology – and produced our first report.

Intrigued as we are with the wider area of healthcare and fitness, we looked at current developments and potential implications of technology in this sector. Especially the convergence of the unregulated »fitness« market and the necessarily strictly regulated »healthcare« market caught our attention, as did the enormous changes in development velocity in these areas. We also looked closely into changing attitudes around patient empowerment, and how Electronic Patient Records are changing the industry.

The report was produced during the course of a day, and as such has been a tremendous exercise in collaboration and a good learning experience.

We quite enjoyed writing this report, and are pretty proud of the result. I hope you find it valuable.

If you wish, you can also download the full report as a PDF or ePub.