Entertain, for a bit, the notion that you are one of the predominant players in today’s technology environment, amassing unheard-of fortunes, selling upmarket – some say luxury – goods on a nice margin, and generally being lauded as one of the best design-driven and customer-focussed organisations around. Yet the technology press and the analysts regularly berate you for having missed the biggest technological shift in recent memory.


The analogy should be obvious, and the technological shift we’re talking about is »cloud«1. What most of the “analysis” of Apple’s weakness in cloud misses is the notion – and if you’re in any way working in a field that creates stuff, you’ve heard it way too often already, so sorry – that focus is more about saying no than saying yes; that you, your career, your company will be defined more by what you say no to than by what you agree to; and that excluding things from your option space opens up the opportunity to discover non-obvious solutions to the problems presented to you.

What we’ve seen in last week’s keynote about the new possibilities in Apple’s ecosystem is exactly that: novel, non-obvious and deeply Apple-y solutions to a problem space that appeared solved, but unsatisfactorily so.

Take the integration of phone calls and SMS into the operating system, based on proximity and without a cloud component. Sure, a service like this existed before, but it required that you hand over control of your communications to a centralised server instance that then reroutes your communication according to pre-set rules to either phone numbers or an app on your computing device, be that a phone, laptop or phablet. That is the cloud-centric way of doing things: put the smarts – the control, really – into the network’s center and distribute actions out to the network’s edge.

With their perceived cloud weakness, Apple was able to come up with a solution that doesn’t require forfeiting control to a centralised server instance to enable the same functionality.

It’s a long-standing quip in the technology sphere that of the big players only Apple, by virtue of its business model, is able to offer you some semblance of privacy. Their motivation, as Benedict Evans puts it, is to sell you smart, premium devices, whereas Google for instance wants to sell you dumb glass to access their services.

And this leads to a huge philosophical divide which we will see playing out as both Google and Apple, joined by many more companies, branch out to bring chips and connectivity into more and more parts of our lives. Do we put the smarts into the network’s center, or onto the network’s edge? The gauntlet has been thrown. It’s on every company that works in this field to choose its side. Choose carefully.

  1. It’s funny in a way how we don’t even say »The Cloud« anymore. Kind of like »Web 2.0« became »Social Media« and subsequently »social«… 

If VCs are talking about your industry, you know that the game is about to get hot. So I follow with a certain amount of curiosity how VCs talk and write about the Internet of Things. The recent discussion I see with the folks from a16z and USV is intriguing.

Another question is where the value sits – which really means where the ‘smart’ part sits. There’s no one answer here. In some cases it seems pretty clear that the connected thing itself is just the end-point for a siloed cloud service – a diagnostics module in a boiler, or a smart meter, for example. You might even have no actual end-user interaction at all. A Nest is somewhere in between – how much comes from the stand-alone device and how much from the cloud? A Chromecast or Apple TV adds ‘smart’ to a TV, relegating it to dumb glass, but it’s really the smartphone or tablet that has all the intelligence to drive it, with the cloud playing the part of ‘dumb storage’. The same may well happen to cars.

The internet of things — Benedict Evans

There are tremendous cost benefits to using compute cycles and storage on smartphones or in the cloud. The old timesharing model lives again. And, as Benedict touches on, it will be easier to connect all the intelligence coming from these “things” in our lives if the data and the compute cycles are in the same place. So my bet is that most “things” will be dumb and the smarts will be in the phone or in the cloud. At least that’s what I woke up thinking about today.


Interestingly, this squares incredibly well with a talk I’ve just given at ThingsCon which was abstractly titled: »Mind the gap, please. Connected products and their context.«

I do believe the question of where to place the smarts with Connected Objects and The Internet of Things1 is going to be the crucial one. This question will in no small part determine whether startups and incumbents will succeed with their products or fail, as the answer to this question will showcase the degree to which they themselves understand their customers’ circumstances.

I don’t fully buy into either Ben Evans’ or Fred Wilson’s vision, but they offer a good perspective. Given the different replacement cycles, it’s a sensible choice for a lot of applications to defer the actual computing to objects with more power and faster churn rates, namely smartphones and server farms. And there’s no question that “Cloud” will play a distinct role in the ecosystem and the products we see emerging. But even there, as exemplified by Cisco’s latest push towards “Fog Computing”, we see a tendency to put more emphasis and compute power onto and into the network edge.

The smarts will ultimately not end up in the cloud only – after all, Metcalfe’s Law only holds up if the nodes have at least fractional compute power (or Authority) – but putting too much emphasis on the individual device might seriously hinder your chances in an environment where you need to provision devices for lifespans of upwards of a decade.
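A quick illustrative sketch (mine, not part of the argument above) of the quadratic growth behind Metcalfe’s Law – the number of possible pairwise links between n nodes, each of which only carries value if both endpoints can actually compute and act:

```python
def pairwise_connections(n: int) -> int:
    """Distinct links between n nodes: n choose 2."""
    return n * (n - 1) // 2

# Growing the node count by 10x grows the possible links roughly 100x:
for n in (10, 100, 1000):
    print(n, pairwise_connections(n))  # 10 → 45, 100 → 4950, 1000 → 499500
```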

  1. Yes, it’s increasingly clear that these are two categories. Please refer to Alexandra Deschamps-Sonsino’s short categorisation

There’s that notion in the public perception of Berlin that people come here to delay having to grow up. It’ll get masked as Berlin being the creative city, the poor-but-sexy city, the come-here-for-inspiration-and-stay-for-the-party city, but the root of the matter is: Berlin allows you not to have to deal with all those matters of growing up quite so quickly.

Part of that is due to the fact that it’s so damn cheap here, still. Sure, the rents are rising, but comparatively, it’s very affordable to live here. Another part is that what is considered unthinkable in other cities is just the norm here: experimentation, taking some time off to figure things out, non-linear life paths. That’s the stuff Berlin is made of. The German language even has a term for that: “Berufsjugendliche” – roughly, professional adolescents.

Over dinner the other day friends of mine and I were talking about a crop of Valley startups that just seem flush with money, have some interesting tech, but seem in no hurry to figure out how to actually make a viable business. Reach first, money later. And in that discussion it suddenly struck me – that whole approach is pretty much like moving to Berlin to not have to grow up quite so quickly. Not having to make all those hard decisions that you’d be forced to answer in other, less “creative” or simply more expensive cities. Functionally there’s no difference between being flush with cash in an expensive city, or being skint in a cheap one.

I find that an intriguing lens to see the Valley through. MVP and pivoting are basically puberty: companies trying to find their place in the world. And that can, I guess, help explain why we’re so often disappointed by those acqui-hires. After all, it’s like that kid from the block who could have had an awesome career as an artist or musician, but in the end gets lured into an MBA because the money’s good.

If you follow the chit-chat of Twitter and the technology “press”, you’re very likely to have come across Spurious Correlations, a site showcasing the absurdities of “big data”.

Correlation between US Crude Oil Imports from Norway and Drivers Killed in Collision with Railway Trains

That is a very roundabout way of saying what a lot of critics of “data journalism” and “big data” have been shouting from the rooftops for a while now: just because there’s a correlation doesn’t mean there’s actually a connection between datasets. And that’s before taking into account that by overfitting and assorted other statistical means, you can kinda manufacture correlations.
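The manufacturing process is easy to demonstrate: sift through enough unrelated series and an impressive-looking correlation turns up by chance alone. A small illustrative sketch (entirely synthetic data, Python standard library only – none of these names come from the site itself):

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n, rng):
    """A trending-but-meaningless series, like many yearly statistics."""
    walk, pos = [], 0.0
    for _ in range(n):
        pos += rng.choice([-1.0, 1.0])
        walk.append(pos)
    return walk

rng = random.Random(42)
target = random_walk(50, rng)  # stand-in for, say, oil imports from Norway
# Dredge 500 equally meaningless candidate series for the best match:
best = max(abs(pearson(target, random_walk(50, rng))) for _ in range(500))
print(f"strongest spurious correlation found: {best:.2f}")
```

Run it and the “best” correlation will typically look publication-worthy, despite every series being coin flips.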

But that alone wouldn’t be very problematic. Statistics has had to battle this problem for a long time now, and continues to. The problem arises once we start collecting and correlating data at large scale, with very concrete impact on everyday life.

Take for instance the latest coverage of the potential impact of health-tracking devices on the overall healthcare system, and health insurance specifically. We’ve seen coverage of employers systematically equipping their workforce with fitness trackers to drive down health care costs, and of the providers of those fitness trackers starting to sell that data to insurers outright.

Amy McDonough, who oversees Fitbit’s employer program, wouldn’t comment on how Fitbit data would affect pricing negotiations between employers and health care providers, though health insurer Cigna said fitness trackers “may” have an impact on future group insurance pricing.

The Quantified Other: Nest And Fitbit Chase A Lucrative Side Business – Forbes

This is problematic on a couple of fronts: first, it perpetuates an existing bias for able-bodied employees, and secondly, it relies on the implicit assumption that activity as measured by the devices maps to better health outcomes, completely disregarding the individual capabilities of the persons tracked. Nevermind the extension of reach employers have over their employees.

And then there’s the question of what those fitness trackers actually measure and try to motivate you to do. We want people to be more active, to develop healthier habits (and this is foregoing the whole argument of us having to possibly address systemic challenges first before we attribute blame and try to fix behaviour at the individual level), and to generally »level up«. But on the design front, there’s very little discussion around how to actually encourage sustainable behaviours.

One other thing that I’m still a little fixated upon is what happens when wearable devices are successful. From a mental health point of view, one of the things that I realised about myself was that I was only really happy when numbers were trending in the right direction. When blood sugar and weight were coming down, that was great. But the wearable systems that we have at the moment aren’t particularly good (and obviously that depends on how you define ‘good’) for what happens when the numbers are trending in the wrong direction. Or, what happens when you don’t need to progress to a goal state, but that your goal state is instead a *steady* state. For me, stereotypical language like the Nike Fuelband includes exhortations to Crush It and Win The Day, which (again, this may be idiosyncratic) imply a relentless striving for betterment that may not be each and every user’s goal. Maintenance, as opposed to excellence, requires a different type of design.

Episode Sixty Two: Wearables Unworn; Look At What They Want

And, to quote from another Dan Hon episode:

If you’re sensing, then you’d better have either novel sensors or outstanding data manipulation, because the sensors have become pretty much commoditised now that it’s trivial to whack a three-axis accelerometer in a thing and now that everything has a three-axis accelerometer in it, you’d better be able to deliver on the promise of your sensing which, to be honest, probably doesn’t live up to what the public thinks of your device, no matter what you actually tell them. Fitbits and Fuelbands should measure *effort*, in a consumer’s eyes, but don’t, and the degree to which they’re able to fulfill that promise relies upon your smarts with data. 

Episode Forty Three: Wearable Reckons; Mirror Neurons; More Google

Looping back to Spurious Correlations, what we have right now is the beginnings of a system where your employment chances and your health care expenditure are going to be determined in no small part by the measurements of little three-axis accelerometers with no health-care-grade certification, interpreted by data scientists trying to make sense of the data but, in large part, without the medical experience to interpret it in a theory-driven way – and that is heralded as “the new way” for healthcare.

We need to do better.

(Disclosure: I’m currently retained by a startup to work in the connected health space. I’ve also previously contributed to the KANT HEALTH report)

There’s some interesting thinking around the kind of problems startups should address, and at which scope they should tackle them. Especially the folks at Andreessen Horowitz have been pretty outspoken about this, starting with Chris Dixon writing about Full Stack Startups, and today with one of »a16z’s« GPs, Balaji S. Srinivasan, whose tweet stream is storified here.

The gist of the argument is essentially: if you want to »disrupt« a legacy industry, you’ll need to be able to service the full stack of that industry, rather than just embed yourself at one level of it, because otherwise the cost of operating within the industry becomes too high (integration with existing infrastructure) and the value capture might not work in full, as you rarely have direct access to the customer.

Suppose you develop a new technology that is valuable to some industry. The old approach was to sell or license your technology to the existing companies in that industry. The new approach is to build a complete, end-to-end product or service that bypasses existing companies. […]

The full stack approach lets you bypass industry incumbents, completely control the customer experience, and capture a greater portion of the economic benefits you provide.

Full stack startups | chris dixon’s blog

And there is – of course – a point to this argument. If you’re not attacking a significant part of an industry’s value chain, you may end up being a feature rather than a company, and integration with legacy infrastructure is a costly endeavour.

What is »The Stack«?

What’s missing from this discussion, however, is what actually constitutes “the stack”, and this leads to statements like the following, which have me tweeting in disagreement.

Let me elaborate.

For the last year, whenever I gave a talk about the Internet of Things, I have been talking about the importance of accepting and respecting what Stewart Brand called »Shearing Layers«. Coming at it from a critique of corporate visions revolving around »Smart Cities« and the »Intelligent Planet« or the »Sensing Environment«, the thing that stands out is a blatant disregard for this elemental concept of architecture.

The disregard for those differential velocities of change and the resulting shearing forces is what stops a lot of technology rollouts cold in their tracks, or worse, leads to massive retro-fitting operations.

The site of a building changes on a geological time scale. The core structural elements may last for decades or centuries. The exterior for a decade or two. The services (wiring, HVAC, plumbing) for fifteen years. The layout could change every few years in a commercial space as walls are moved and changed according to the needs of commerce or the org chart. At the bottom layer, the stuff that sits in an office consists of things like desks and computers that get moved around monthly, weekly, or daily.

The big risk with shearing layers is that if you misapprehend on which layer a component belongs, you end up with a building that’s extremely difficult to use. Embed your clever new HVAC system in the structure and you could find yourself tearing down a whole building because air regulations changed.

Shearing Layers of Stuff | Quiet Babylon

Steadfast or Hot-Swappable?

We see – and I’ve elaborated on this before, although not in as much detail – that strategies that revolve around completely setting up your home with connectivity, and thus supplying the full stack, flounder in the marketplace, whereas simple, single-use appliances tend to work. They might work even better together, and that’s the promise of connectivity, but they are singularly useful.

A while back, there was the interesting discussion around »Steadfast« and »Hot-Swappable« things, and how they relate to each other. This discussion is a good example of Brand’s »Shearing Layers« and serves to make the concept easily accessible. And it shows how deep integration of the stack at the wrong end might actually prove detrimental to your product’s, and ultimately your company’s, success.

So, here’s the idea: Just buy dumb TVs. Buy TVs with perfect pictures, nice speakers and an attractive finish. Let set top boxes or Blu-ray players or Apple TVs take care of all the amazing connectivity and content afforded to us by today’s best internet TVs. Spend money on what you know you’ll still want in a few years—a good screen—and let your A/V cabinet host the changing cast of disposable accessories. Besides, interfaces like Apple TV’s or Boxee’s are miles better than the clumsy, underdesigned connected TV interfaces turned out by companies like LG and Samsung.

And TV manufacturers: Don’t just make more dumb TVs. Make them dumber.

I Just Want a Dumb TV

Consider Moore’s law

The most interesting industry which had to cope with this phenomenon, and is still in the process of coming to terms with the ramifications, may well be the auto industry. Overwhelmed by the sudden success of what we nowadays call Smart Phones, their product development cycles and integrations proved too sluggish to respond to changing customer requirements in a timely manner. When you order a new car, the internal computers which power navigation and in-car entertainment are usually already two years old. Combine that with an average lifetime of a car of around 15 years (at least in Europe) and you see why cassette-tape adapters are still a thing. The auto industry is only slowly beginning to realise that computing-intense tasks should be outsourced to that small computer almost all of us carry with us. The upgrade cycles are just much faster. Apple’s CarPlay, Ford’s Sync platform and the Android Open Automotive Alliance are steps to address this.

But how does this relate to »Full Stack«?

Let’s go back to the Smart Cities example.

Things get trickier when you want to build in sensors and responsive systems to smart buildings or when you want structures that can be shaped and reconfigured by subsystems and cybernetics. The speed of software is cruel and unrelenting. A structure built for durability must have an upgrade plan for its disposable components. Layers are tightly coupled at the creator’s peril.

Shearing Layers of Stuff | Quiet Babylon

By embedding software into increasingly mundane parts of the world, we establish another potential shearing layer within the stack we operate under.

The Stack or Infrastructure?

When we ask startups to attack the full stack of their respective industries, we need to include two conditionals.

Firstly, we must insist that they don’t tightly couple their services, which would result in the shearing forces within the respective industry going unaddressed, making the whole endeavour less efficient.

And secondly, we should ask to identify what actually constitutes the stack before setting out, with an eye towards what is steadfast and what is hot-swappable.

We can also achieve a certain steadfastness in unpredictable or quick to change areas, such as technology. We do this by identifying the needs that are steadfast and the devices that fill those needs, and labeling the parts of the system that are “hot-swappable,” meaning that they can be replaced quickly and easily without bringing the whole system down. (Also see this great blog post on steadfast components by Robin Sloan.)

For instance, if you’re a film buff, owning a nice television or projector would be a steadfast purchase. The utility of a nice picture will never change. What will change, however, is the technology that feeds that picture to the screen, whether it be an AppleTV, a DVD player, or a BluRay player. So, investing in a nice screen is manifestly more important than investing in a top of the line BluRay player. The devices that hook into the television are hot-swappable.

Frank Chimero – Making the Bed You Sleep In

I would strongly encourage startups to have an eye towards the full stack. I think it is important to not be overly dependent on just a few providers for crucial parts of our digital infrastructure. However, there’s no need to ship a power generator with your smart device. Sometimes parts of the stack are just infrastructure you can rely on.

With the Internet of Things gaining in media presence, big corporations realise that there’s something going on that they have no real stake in yet. And they come out rushing. Mostly, that’s in conjunction with creating a new term for that thing that nobody has a solid definition of, so they can own the space. Given that, here’s a quick, tongue-in-cheek primer of what corporations mean when they rebrand the Internet of Things:

The Industrial Internet (GE):
We’re talking about the Internet of really big Things
The Internet of Everything (Cisco):
If everything has an address, it’s like the Internet, but with everything on it
Industry 4.0 (Siemens):
The Internet is like steam – in that we’re thinking about how it affects our machines.
Internet of Customers (Salesforce):
It’s really interesting how your warranty claim is contradicted by your usage pattern.
We’re putting our SCADA Systems online.
Social Web of Things (Ericsson):
Your toaster wants to be your friend!
Embedded Internet (Intel):
Yay, Microchips Everywhere!
Hyperconnectivity (WEF):
We hear this internet thing is really good at what we used to be good at.
Networked Matter (IFTF):
Because how are we going to act on the world if our brains are uploaded to machines after the singularity?
Web of Things (W3C):
There’s always room for one more standard.
Web Squared (O’Reilly):
Your Information Shadow isn’t restricted to Web 2.0 anymore.
The internet of nature (ENEVE):
Those trees don’t hug themselves.
Industrial Internet of Things (Echelon):
Like the Industrial Internet but with more Internet and more Things, presumably.
IOx (Cisco):
An intermediary enablement framework layer to leverage smart BYOI and BYOA on the network edge. Exactly.

I hope this clears things up. Oh, and if you come across more creative new names for IoT, please do let me know!

If you’re a casual observer of tech, you’ll no doubt have noticed that Google announced a rather large acquisition the other day. Heck, it even got prime-time and front-page news coverage here in Germany, albeit admittedly with the unavoidable spin towards matters of privacy incursions. It seems this acquisition got people talking.

In my view there are three interesting discussions or inquiries into this acquisition. And although people seem to be totally intrigued about the potential motives of the purchase on Google’s part, I’m not going to wade into that territory.

Market Validation

I had a post sitting in my drafts folder which was picking up on this piece at Slate reporting on the financing round Nest was raising about two weeks ago.

But there’s a real reason why investors like weightless social media companies more than manufacturers of tangible products. It’s called “scale.” The next 100 million Facebook users will add very little to Facebook’s overall cost. But they’ll substantially increase the size of Facebook as an advertising platform. And these companies also benefit from network effects. […]

A company like this can, at the peak of its powers, achieve totally obscene profit margins. So you want to pay a lot for a firm that has even a small shot at that kind of ring.

Slate – Nest vs Pinterest: Here’s why investors like intangible products.

The issues of scaling are of course there. Marginal costs in hardware businesses are not to be underestimated. However, potential profitability is only one concern for venture investors. The overriding concern is about liquidity events – how big potential exits can get, and how likely an exit is versus a wind-down of the company. And that’s where marginal costs play a big role, because with pricey user acquisition and weaker network effects, the hockey stick that so many in the investment community have come to rely on seemed an unfeasible proposition for hardware. It’s the breakout successes that make or break a fund. You need an investment with crazy multiples to survive in an industry that has power laws in its very foundations.

The Nest acquisition proves that those exits are possible with hardware, too.

Multiple sources say Kleiner Perkins was Nest’s biggest investor, and was able to invest $20 million in Nest across the A and B rounds. Our sources say the $3.2 billion cash price Google paid for Nest will generate a 20X return for KPCB — which matches the 20X multiple Fortune’s Dan Primack heard from a source. The money came from 2010′s $650 million KPCB XIV fund, which means Kleiner returned over 60% of the fund with just its Nest investment.

TechCrunch – Who Gets Rich From Google Buying Nest? Kleiner Returns 20X On $20M, Shasta Nets ~$200M
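Running the quoted numbers (figures as reported by TechCrunch; a back-of-the-envelope check, not my own data):

```python
# Figures from the TechCrunch report quoted above
invested = 20_000_000       # KPCB's stake across Nest's A and B rounds
multiple = 20               # reported return multiple
fund_size = 650_000_000     # KPCB XIV fund (2010)

proceeds = invested * multiple
fraction_of_fund = proceeds / fund_size
print(f"${proceeds / 1e6:.0f}M proceeds, {fraction_of_fund:.0%} of the fund")
# $400M proceeds, 62% of the fund – "over 60%" checks out
```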

I expect this to increase the hunger for hardware startups among the many institutional VCs who so far have sat on the sidelines. The market is validated, and this should encourage additional investments.1

Scaling is still hard

One of the main advantages that the Nest team had from the get-go over potential competitors in the space was its intimate knowledge of manufacturing and supply chains. Coming from Apple, they had had to figure out how to manage production of millions of units of any given product. This is knowledge that is necessarily hard earned. But it gets worse: the whole discussion around MVP and lean is in direct opposition to what’s necessary in a hardware startup. You don’t have the luxury of adjusting your product mid-flight, and A/B-testing your way to success would be prohibitively expensive.

MVP and product-market-fit are reasonable ways to go about a software startup, where your feedback loops are much tighter, and you can learn from your customers much more readily. Further, most startups start gathering feedback from users with what are essentially prototypes.

Prototyping in hardware has come down in cost massively, and we see a lot of experimentation as a direct result of that. However, we also see a lot of those experiments hit the brick wall when they have to deliver to 1,000 or 10,000 customers. Often, their prototypes haven’t made it to the DFM (design-for-manufacture) stage, and the resulting re-design and re-engineering add non-trivial amounts of time to the roadmap. I fully expect to see more movement here, as companies and accelerators strive to fill this void and help startups navigate complex supply-chain issues.

This talk by Matt Webb of BERGCloud is instructive.

Don’t bank on connectivity

This is a point I’ve been talking about repeatedly. Nest’s proposition wasn’t an “internet-connected” thermostat. It was the “learning thermostat.” As such, it took an everyday experience and made it better by adding connectivity. This is the core differentiator to a lot of other “connected objects” whose prime differentiator is connectedness without any fundamental changes to the product experience, oftentimes even degrading the usability of said products. This latest addition to LG’s product portfolio is a good example of that, and would be a good addition to this ongoing collection.

An especially interesting characteristic about the Nest, but the Withings product range as well, is that they do not rely on any Gateway boxes, and thus avoid even the appearance of geeky and tech-heavy tools. Rather, they are products that work on their own, without customers having to purchase any enabling equipment. This is, I believe, a huge part of their success formula.2

This is part of a larger trend where I see “Internet of Things” increasingly becoming a term for those of us who work in the industry, while completely being irrelevant in product descriptions and marketing. We might be interested in how products work together, how the connective tissue is going to be used and what those new products might look like. Customers, on the other hand, are mostly just interested in better product experiences. They couldn’t care less.

Disclosure: Both Nest and Withings have been clients of mine in the past. This post reflects my personal opinions on the market and is not informed by their positions.

  1. It’s funny how that Gartner report already looks outdated, a mere 4 months after it was released. 

  2. This is one of the big reasons why I think the impact of Bluetooth Low Energy is going to be much bigger than most currently anticipate. 

Pip Stress Tracker

The Verge, one of the less startup-and-tech-hype-y (although not completely free of it either) publications, takes a look at a new fitness tracker that wants to incorporate Galvanic Skin Response, a biomarker that, like a lot of other markers, is perennially ambiguous. And yet we have heaps of gadgets and methods and organisations focussing on this one measurement in the hope of learning something about our emotional state, our exertion levels while doing sports, whether we’re lying, or which topics give them the best leg-up into talking us into joining them.

Instead of metrics like movement or heart rate, the PIP looks at your galvanic skin response (GSR) — basically, the level of moisture on your skin. It’s already a popular measurement, used by everyone from the police to the Church of Scientology, but it can also be mysterious and difficult to read. And as body-tracking devices like the PIP get in on the game, they’re discovering just how ambiguous GSR can really be.

From Scientology to the quantified self: the strange tech behind a new wave of self-trackers | The Verge

The problem, as should be readily apparent, is that one data stream cannot possibly tell you all those things; one marker does not allow inferences about everything we want to know from it. And yet this approach is symptomatic of the Quantified Self movement: the data that is easily captured right now gets collected, in the hope that additional context or some nifty machine-learning algorithm can at some stage make sense of it. Within the Quantified Self movement itself that shouldn’t be much of a problem – even if it draws a caricature of the movement’s stated goal of self-enlightenment – as this is just hobbyists trying to learn a bit more about themselves, while (hopefully) mostly understanding the capabilities and limitations of the technology they choose to employ.

This line of thought becomes hugely relevant, however, once the rhetoric and methods of said movement travel, while the understanding of limitations gets left behind. We might observe this when we look at how fitness apps try to push into the healthcare space, and learn rather harshly that what they thought was meaningful data doesn’t hold up to the levels of scrutiny that we understandably apply to medical applications.

It is, perhaps, with this in mind that we should look at the FDA’s recent decision to bar 23andMe from operating, given the pseudo-medical nature of their product, combined with the rather fitness-market approach to data acquisition and delivery. This example that recently made the rounds in Germany is but one marker of this.

The problem with all these approaches – and this is where “Smart Cities”, a topic which, judging by the number of interview requests I’ve received lately, is making a resurgence in the technology and economics press, enters the picture – is this: we measure what we can easily measure, and then forget that this was the reason we measured it.

There exists a school of thought, which has unfortunately overtaken management, that subscribes to the credo of “if you can’t measure it, you can’t manage it.” I say unfortunately because most who apply this credo take it to mean “I only manage what I can measure” and then, because, you know, you’d do it as well, go on to manage whatever is most easily measured. It’s called the low-hanging fruit.

And under these conditions, Smart City concepts start to look like fitness trackers that rely on galvanic skin response as their primary source of data: it tells you something, but you have no reliable way of figuring out what that something is, never mind making decisions based on it. And nevertheless we push forward to instrument our cities, with no thought as to what we actually want to learn about them, or whether what we’re building is in any way fit to tell us that.

Smart Cities are coming our way. This much is clear. And I’d be lying if I said I didn’t crave a little more efficiency and legibility in the infrastructure and processes I encounter in the great European cities; my native Berlin is particularly lacking in this regard. However, given that cities are our cultural and economic engines, and probably the best assets we have, we need to be careful about how we approach this.

Because making policy decisions based on data you only gathered because you could cheaply collect it isn’t smart. Not in a fitness tracker, and exponentially not in a city.

Nest announced the next step in their product lineup yesterday. The Nest Protect, a smarter smoke detector that comes with all kinds of bells and whistles and sports Tony Fadell’s trademark design chops, seems at first glance like a product that is undoubtedly going to sell well. The first conversations I had yesterday and today indicate that people were clearly waiting for this kind of product. Everyone seems annoyed with their smoke detectors, and to be honest, I myself have been thinking about hacking the smoke detectors we currently own to make them accessible programmatically.
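For the curious, the kind of hack I have in mind could look roughly like the sketch below. Everything here is hypothetical: it assumes a “dumb” detector whose alarm output has been wired up so that its state can be read by software (say, via a GPIO pin), represented here simply as a callable.

```python
# Hypothetical sketch: wrapping a conventional smoke detector so its alarm
# state becomes programmatically accessible. read_alarm is a stand-in for
# whatever actually reads the detector's alarm output (e.g. a GPIO pin).

class SmokeDetectorMonitor:
    """Polls a binary alarm signal and notifies listeners on state changes."""

    def __init__(self, read_alarm):
        self.read_alarm = read_alarm   # callable returning True while alarming
        self.listeners = []
        self._last_state = False

    def subscribe(self, callback):
        """Register a callback(state: bool) fired on every state transition."""
        self.listeners.append(callback)

    def poll(self):
        """Check the detector once; notify listeners if the state changed."""
        state = bool(self.read_alarm())
        if state != self._last_state:
            self._last_state = state
            for cb in self.listeners:
                cb(state)
        return state
```

Called from a timer loop, this turns a fire-and-forget beeping box into an event source that any other system in the home can subscribe to, which is, at bottom, the same move the Protect makes as a finished product.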

What I’m really interested in, however, isn’t the Protect in and of itself, but how it showcases what I’ve been trying to convince clients of for a while now. That is, a product strategy for Home Automation that actually works.

Most companies in the Home Automation space (even though Nest prefers to call it the “Conscious Home”) take the perspective we see playing out in the wider IoT space as well: they focus on the integration, the middleware, the hubs, because they know how to do that, and because they’ve learned over the last couple of years that a platform play is the most profitable one. So they try to sell you their integrated solution, but mostly they just want to own the hub, plus ideally a couple of APIs, so they can then go out and beg developers and hardware companies to please build for their platform.

However, the incentives for customers to actually invest in a hub or integrated solution just aren’t there. Market experience shows us that customers don’t upgrade their homes all at once but do it piecemeal, which flies in the face of the integrated players’ assumptions. And people certainly don’t buy abstract hubs in which they see no clear value.

And here, the beauty of Nest’s strategy really shines through: They are developing a platform, but it’s not at the core of their product strategy. Their products are individually useful, and obviously so. There’s no assumption here that, in best “build it and they will come” fashion, once they have a hub and documentation, companies will start building for it. Rather, it’s a collection of individually interesting products which work even better together.

And that is the core problem of the depressed home automation market: every product that wants to play a role here needs to be individually useful. The assumption needs to be that any given product will be the only one of a product range that a customer buys.

If products get better with additional components, then you’re onto a winner, but you ultimately cannot require customers to invest completely in your, in most cases nonexistent, ecosystem.

Disclosure: Nest Labs is a client of mine. This post, however, is not informed by my work with them but purely reflects my personal opinion.

On the heels of Apple’s latest product announcements, we’ve seen a lot of coverage of the devices and their potential impact: what they mean for Apple’s strategy, for the market, for the wider industry. So allow me some idle speculation, in the form of three short quotes, as well:

However, one thing you can say is that when you use passwords together with biometrics, you have something that is significantly stronger than either of the two alone. This is because you get the advantages of both techniques and only a few of the disadvantages. For example, we all know that you can’t change your fingerprint if compromised, but pair it with a password and you can change that password. Using these two together is referred to as two-factor authentication: something you know plus something you are. […]

Another interesting fact is that the phone itself is actually a third factor of authentication: something you possess. When combined with the other two factors it becomes an extremely reliable form of identification for use with other systems. A compromise would require being in physical possession of your phone, having your fingerprint, and knowing your PIN.

Fingerprints and Passwords: A Guide for Non-Security Experts
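The three factors in that quote compose in a straightforward way: each check is independent, and all of them must pass. The sketch below illustrates the idea; all names are hypothetical, a real system would of course run the fingerprint match inside the secure enclave rather than hand the result around as a boolean.

```python
# Illustrative sketch of three-factor authentication: something you know
# (password), something you are (fingerprint verdict from the sensor),
# something you possess (an enrolled device). All names are hypothetical.
import hashlib
import hmac

def verify_password(password, salt, stored_hash):
    """Something you know: compare a salted PBKDF2 hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_device(device_id, enrolled_devices):
    """Something you possess: is this device enrolled to the account?"""
    return device_id in enrolled_devices

def authenticate(password, salt, stored_hash, fingerprint_ok, device_id, enrolled):
    """All three factors must pass; a compromise needs all of them at once."""
    return (verify_password(password, salt, stored_hash)
            and fingerprint_ok          # something you are
            and verify_device(device_id, enrolled))
```

Note how the conjunction captures the quote’s point: stealing any single factor, even the unchangeable fingerprint, is not enough on its own.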

In the age of context, iBeacon can provide the information you needed when it is needed. Just like NFC, iBeacons even allow you to pay the bill using your smart phone.

With iBeacon, Apple is going to dump on NFC and embrace the internet of things — GigaOM

Apple now has 400 million active accounts in iTunes with credit cards. That means that all of Apple’s iOS devices are linked to those credit cards, too.

Apple’s Stash of Credit Card Numbers Is Its Secret Weapon – NYTimes.com

This, of course, is just one direction this could go. We’ll definitely see more of the smartphone industry moving in the direction of authentication and identification devices. The current scandal regarding intelligence overreach, especially in the US, couldn’t come at a worse time for this.