I always struggle to explain why strategy is essential. Since a very young age, I’ve been a long-term thinker. For me, it’s as natural as breathing. My failure to show others the importance of sitting down to think things through is frustrating. I know I’m fighting a fundamental attribute of human nature: the pull toward maximum entropy. And it’s aggravating because now, more than ever, we (startups) are in need of some long-term scheming.
During the past decade, startups have risen from the ashes of indifference all the way to the altar of epicness. What started as an obscure, trial-and-error weekend hobby for corporate drones has turned into a status job for a whole generation.
From Slugs to Hares
Three decades ago, things were different. There was no Venture Capital, and the risks were immense. Entrepreneurs (the word wasn’t even in common use at that point) would think through each step carefully. They planned and worked and dreamt. Soon, the community realized that so much planning was incompatible with disruptive innovation. Or with moving fast and breaking things.
The shift to a customer-centric approach allowed for quicker and more precise development cycles. The change had consequences, though. Entrepreneurs started to disdain any upfront thinking about their ideas. Too much planning was bad; plenty of testing was good. Rinse and repeat.
As with many things in life, when you oversimplify a complicated procedure and turn it into a moralistic game, the outcome tends to suffer. And so it happened that, gradually, entrepreneurs began to skip the planning and strategic process.
The only reason founders could skip that stage was that, for once, money was aplenty. If you got it wrong, you could always try again and again and again.
The fact that most industries weren’t digitalized also helped entrepreneurs deliver on the low-hanging fruit. Disruption started happening with unnervingly simple applications. The market was ripe, and the startup ‘job’ was being democratized.
Nevertheless, nothing lasts forever.
Say Hello To Mr. Complexity
The world has changed since then. More people than ever before are turning to startups as their life choice. On top of that, the world is better connected than ever, turning Friedman’s flat world into paper-thin-ultra-flatness. The speed, funding and global aggressiveness of startup players is unprecedented.
Where before you had myriad startups attempting to dominate a vertical, now you have an oligopoly at best. The problems left to solve aren’t trivial anymore. A lousy app built on top of a common mobile framework won’t cut it today. The digitalization of society is raising, not reducing, the complexity of the issues we face. Scale and speed are transforming conventional pains into massive headaches. The age of spray-and-pray is over.
“In a thread attached to his tweet about start-ups, Krisiloff, the former Y Combinator executive, added that the opportunities “to start compelling start-ups,” for college students without industry-specific knowledge, “has vastly shrunk.””
As complexity rises, the habit of not thinking concepts through is collapsing. Maybe the most prominent poster boy of this phenomenon is Uber.
“Uber is a taxi company with an app attached. It bears almost no resemblance to internet superstars it claims to emulate. The app is not technically daunting and does not create a competitive barrier, as witnessed by the fact that many other players have copied it. Apps have been introduced for airlines, pizza delivery, and hundreds of other consumer services but have never generated market-share gains, much less tens of billions in corporate value. They do not create network effects. Unlike Facebook or eBay, having more Uber users does not improve the service.”
I can’t but echo Smith’s arguments. While Uber is an excellent app, it’s not life-changing, much less worth a multi-billion-dollar valuation. Their initial market traction was extraordinary. This became their Achilles heel, blinding the company to reality. The truth is, the product worked well in cities with cab-supply inefficiencies. This supply-and-demand imbalance doesn’t exist everywhere, though, drastically capping Uber’s market. Experts in the space knew this fact. Nevertheless, no one factored it into the strategy. Arrogance got the better of the company.
Time To Think
It all comes down to having time to think. Strategy is the byproduct of deliberate thinking. Overthinking is seen by the startup intelligentsia as a sign of stalling, of inefficiency. The key is to measure what is ‘too much.’ For years, many interpreted ‘too much’ as no thinking at all. And got away with it. Until now. Too many startups operate under very simplistic models, or no models at all for that matter. And the tide is catching up with them.
Many forces affect society, and it’s important to see where the startup fits in all this. As I wrote before, the age of unregulated startups is coming to an end. The time for startups operating in a vacuum has passed. The immediate consequence is that a narrow and simplistic analysis won’t yield a successful startup.
And this takes me to my last point. Too many people keep focusing on what’s happening now. Few ask themselves what will happen in the future. All their analyses are based on current trends (current as in last month), not on how behavior will evolve.
The current scooter fever is a perfect reflection of this. Before the summer, investors rushed to back the so-called next big wave of smart mobility.
“Investors rushed in after seeing rapid adoption in several California cities. Some companies reported revenue of more than $20 a day for each scooter, suggesting significant profit potential given they cost about $500 apiece.”
Lured by a pipe dream, they jumped on the bandwagon — an investment that’s turning into a folly. Rapid adoption isn’t synonymous with success, much less when it happens in just a few California cities (San Francisco being one of them).
“The economics, though, have proved tougher than expected, people familiar with the companies said.
One issue is scooters not designed for heavy use are breaking down quickly. In some markets, scooters last about two months, investors said, often less time than it takes to recoup the purchase cost. “
The most surprising aspect is the belief that because you’re making $20 a day now, it will keep going like that once the fad is gone. Projecting the short term into the future is a bad indicator, one that reveals rushed behavior and a lack of depth in the thought process.
For the record, I am very intrigued by the micro-mobility space. I believe it’s here to stay. What I don’t buy are the ephemeral arguments many fling around. What matters is the usefulness of micro-mobility services over the long term. Today’s standards won’t stand in six months. The question is, what behavior will it replace, if any, and why?
Too often, entrepreneurs and investors alike fall prey to a lack of strategic thinking. The motto is to keep up with the wave, no matter what. FOMO (Fear Of Missing Out) and herd mentality are becoming the norm (ok, they were always there). The need to keep up is erasing any time for thinking things through. You’re either in, or you’re missing out and your status within the technology upper caste falls.
Is it Time to Change?
Raising a big round isn’t a success indicator. Making quick money for a month isn’t an indicator either. The business game, the value-generation part, happens in the mid to long term.
It’s time we start acknowledging that the next wave of startups, those dabbling in the Deep Tech space, requires better strategic thinking. Quick flips won’t cut it anymore.
I believe in unfiltered naivety. Not having all the answers is the reason why we innovate. It’s healthy not to overthink things. What people forget is that this doesn’t preclude some mid-to-long-term reasoning. ‘Why’ and ‘What if’ are critical questions we have to ask again and again.
We don’t ask them enough. We don’t invest enough time thinking about them either. The outcome is crappy, evanescent startups that increase social unrest while enriching the plutocracy.
One of the most recurrent complaints in Western countries is how rents are skyrocketing. And rising they are. The question is, who is at fault? Who is pushing thousands to the brink of homelessness? AirBnB, of course!
This narrative fits, and so it keeps reverberating. We adore our villains, and we love to simplify the complex. The clearer cut the enemy is, the bigger the resistance against them. That’s precisely what’s happening to AirBnB at a global scale. But I have issues with such an explanation. I don’t believe it.
By focusing on the AirBnB effect, we’re overlooking a much more devastating trend: the rise of the megacities. According to the United Nations Department of Economic and Social Affairs, megacities are urban agglomerations that surpass 10 million inhabitants.
As of 2017, there were 47 megacities in the world, with 55% of the global population living in cities. By 2050, the UN expects this to grow to 68%, with China, India, and Nigeria accounting for 35% of the growth.
The rise of such urban agglomerations is increasing the pressure along specific dimensions, including affordable housing. The consequences of unplanned, fast-growing urban areas are devastating. The faster people flock to the city, the harder it is to accommodate them with the current housing supply. This imbalance between supply and demand diminishes affordable housing and the city’s density. It pushes people to the sprawl and fosters the creation of slums. Within these slums, homelessness, crime and disease run amok. In turn, the lack of urban planning puts tremendous pressure on the already precarious municipal transport system.
Nonetheless, rapid growth in urban dwellers is precisely what we’re experiencing. And the worst part is, it’s going to get a lot worse, especially in developing countries.
The question is, why is the population accumulating around the megacity? There are several causes, and AirBnB is not one of them. If anything, AirBnB is amplifying pre-existing trends. The most significant growth factor in most cities, though, is a drastic increase in rural-to-urban migration rates. The effect has been more pronounced in rural-heavy regions like Asia and Africa, which hold 90% of the world’s rural population. That said, the effect is felt everywhere.
This migration is coupled with the natural growth of the population in the city. Most developed countries are experiencing slow growth due to aging populations. However, this deceleration isn’t the case in less developed countries.
One final factor accelerating the rise of the megacity is migration due to climate change. It’s getting harder to make a living in certain regions due to capricious weather patterns. To survive, people are migrating to the perceived security of the city.
The sprawl and the AirBnB effect
The convergence of all these factors is fuelling the expansion of most world cities. An expansion that’s speeding up dramatically. Current studies estimate that when “the population of a city doubles, the urban extent triples.”
“During the 24-year period between 1990 and 2014, the total population of the universe of cities grew by 53%—from 1.6 billion to 2.5 billion—while the area occupied by these cities grew by 105%—from 275,000 km2 to 570,000 km2. There is no doubt that cities are now expanding at a faster rate than their population growth rate. At current rates, when the population of a city doubles, its urban extent triples. Interestingly enough, the growth rates of both the population and area of cities were found to be statistically independent of city size.”
These numbers unearth a truth few people want to acknowledge. The only way to fit the newcomers is to either increase the built-up area density (people living within a particular area) or expand the urban area.
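A quick back-of-the-envelope check (my own arithmetic, not from the study) shows what the doubling/tripling rule implies for density:

```python
# If a city's population doubles while its built-up area triples,
# average urban density falls to two thirds of its previous value.
population_factor = 2.0  # population doubles
area_factor = 3.0        # urban extent triples

density_factor = population_factor / area_factor
print(f"new density = {density_factor:.0%} of the old one")  # 67%
print(f"density drop = {1 - density_factor:.0%}")            # 33%
```

In other words, under current growth rates a city that doubles its population loses roughly a third of its density, which is exactly the de-densification pressure discussed throughout this section.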
There are three ways to achieve higher urban density: increase the built-up saturation (how much of the urban area is actually built up), reduce the size of the home unit (think Japan), or construct taller buildings. This last option requires tearing down the previous structure and rebuilding it with increased height, all while retaining the same surface area.
The reality, though, is that most of these options aren’t feasible. Most cities already have high levels of built-up saturation. Increasing them would mean removing parks and green areas, which isn’t an option in most places.
Splitting the family unit and reducing the floor area per person is one option. The practice, though, is unacceptable in most developed cities. If you’re culturally used to a specific amount of space, it will be hard to convince you to pay the same for housing half the size.
The last choice, rebuilding with more height, is also quite impractical in many cities. Not only would it require evicting the building’s occupants, it would probably need new land-use regulations. Besides, it’s a long and slow process, one that can’t cope with rapid growth.
“Indeed, community resistance to densification, or the inability to reform planning regulations that prohibit it, may limit densification and accelerate expansion. Expansion is typically the preferred course for those concerned with overcrowding or with land supply bottlenecks that may lead to unaffordable housing.”
The inability to increase density, combined with the need to fit a larger population, is driving expansion instead. As housing becomes less affordable in the city center, families move to the outskirts, expanding the urban area and developing the sprawl.
How does AirBnB fit into this picture? The rise of short-term rentals at scale is accelerating the move to the sprawl. Let’s remember that the trend is towards lower urban density. AirBnB is increasing the speed at which this happens by removing long-term rentals and turning them into short-term ones.
“Short-term rentals have a long history in many countries, often unregulated and in the form of informal exchanges. Companies like HomeAway and Airbnb have paved the way for the short-term rental market to move online, and have given homeowners an easier entry into the business of hosting.”
The bottom line: Is AirBnB evil? Not really. They’re just adding speed to a natural and known process. That said, the extra acceleration is turning an inherent problem into a dramatic one. This is why an increasing number of city councils are trying to limit the number of short-term rentals in the city.
I’ve said it many times: speed, coupled with the ever-increasing acceleration the new generation of startups is providing, is becoming lethal for our social fabric. Our brain has a limit as to how many stimuli it can take. As much as it pains many, human beings aren’t all-powerful (yet).
Why it matters: The speed of de-densification is outpacing the rate at which most families can scale their income. Such an effect is artificially pushing the population to the sprawl even faster than before. This leaves the city center powered by both leisure and business tourism. As tourism is highly sensitive to economic downturns, I wonder what will happen to such short-term rentals when we hit the next crisis. A crisis that many are predicting for 2019 or 2020.
New city models
Bigger sprawls entail many problems. The most obvious one is transportation. By definition, the sprawl is less densely populated than the central districts. The lack of density makes it hard to justify extending public transportation so far from the center. Hence, most people will choose to drive to work in their own cars. The result is an increase in commuting time and widespread traffic congestion.
Beyond the obvious environmental impact and wasted time, long commutes have serious repercussions for talent retention.
“Urban and metropolitan labor markets thrive when all workers have access to all jobs because that ensures that firms can hire the best workers and workers can find the best jobs. Labor markets are integrated when all locations are connected by inter-city arterial roads that allow workers to reach their workplaces all over the urban area rapidly and efficiently. The presence of an arterial road, preferably one that carries public transport, within walking distance of a residence greatly facilitates access to jobs throughout the urban area.”
These commuting problems are changing the physiognomy of many cities too. The traditional city model entails a Central Business District (CBD) and people commuting to and from it. This structure is known as the Monocentric City model, and it’s been a reality for most cities until now.
However, as we accelerate the rise of megacities, the city model is beginning to shift. The Central Business District is losing its capacity to attract jobs, and new business areas are popping up around the city. This loosening of the CBD gives rise to the Constrained Dispersal City model.
It might be time to rethink the role of co-working and co-living spaces. The increasing use of co-working spaces by corporations is a sure sign of change. Co-working is already dispersing the location of business centers. Will co-living marry affordable housing and jobs, all within an acceptable commuting range?
Be smart: The surge of the megacity will most definitely accelerate the changes around the future of work. Remote work will drastically increase, and so will co-living and co-working spaces on the outskirts. The combination of work and living will alleviate the need for long commutes, reducing traffic congestion and increasing talent retention.
Are startups really helping?
Despite the rise of startups in the urban vertical, it’s still too early to say they have a real impact. We’ve gone from developing entertainment apps to trying to tackle the pains we urbanites suffer every day. And while I’ve been vocal about having startups tackle hard problems, the approach most startups are taking towards urban issues is, to put it gently, entirely misguided.
“The tech industry also thrives by working in ways that can be incompatible with public-sector city building. It’s hard to “move fast and break things” in government, and public funds don’t allow for the failure rates that venture capital does. New tech products are often targeted to early adopters, and they spread to the rest of us later. But you can’t do that with innovations in public service; if the tech savvy get the nicer bus routes or the new Wi-Fi kiosks first, that raises equity issues for the rest of the city.”
A city is a complex dynamic system. Like all dynamic systems, it strives for an equilibrium point, and it’s inherently inefficient. When we apply simplistic optimization logic to a city, we do fix one variable, but we unbalance other subsystems. Gentrification, marginalization and income inequality are real things. While we may ignore these effects online, we can’t dismiss them in a city.
Does this mean that technology isn’t solving real issues in cities? Not at all. Not only do we need to apply technology, we need to apply much more of it than we’re doing now. Then, should we blindly follow regulations even when deploying cutting-edge technology? No, we should push the rules and bend them whenever necessary.
Yes But: The way we do both needs to change fundamentally. Simplistic problem optimization doesn’t work when you deploy it at a city level. Startups need to start thinking in systemic ways and aid with urban planning challenges.
Take the Uber-like wars in most cities. Did Uber solve a real problem? Not so far. Imbalances between the supply and demand of cabs have existed for decades. The reason cities started limiting taxi licenses was precisely these imbalances. Put too many taxis on the roads and most will be empty, with occupancy rates below 30%.
Nevertheless, if you limit the number too much, the waiting time for passengers becomes unbearable. The optimal solution is to strive for a fair balance, an equilibrium point. The problem is hard because, as the population and the urban area grow, the equilibrium and deployment patterns shift.
Here is where technology can make a real impact. One of the endemic problems in urban planning is the lack of data. Without real-time and historical data series, it’s hard to predict how the city is behaving.
“San Francisco notoriously never got this balance right (by the dawn of the Uber era, it had about 1,700 licensed cabs). “It is no accident that Uber and Lyft began in San Francisco,” Mr. Schaller said. “It wasn’t just because it was Silicon Valley. It was because they had seriously too few taxicabs.””
Companies like Uber, Lyft, Google or AirBnB possess such information troves. If correctly exploited, Uber’s or Lyft’s systems can provide invaluable data to predict when, where and how many cars are needed in the city.
Go deeper: The question, though, is how aligned these companies’ growth objectives are with the cities they operate in. Uber’s business model demands a maximization of trips. What happens when supply saturates an area? Will they stop there, or will they try to profit through other schemes that work against the equilibrium point of the city system?
One of the companies missing this trend is, surprisingly, AirBnB. While their design lab (Samara) ran some first experiments with communal housing in Japan, it seems they’ve been discontinued. If we want to criticize the company for something, this would be it. It’s shocking that, after proving to be a catalyst for the destruction of long-term rentals in many cities, they aren’t putting their data at the service of the cities.
Technology companies though, still struggle with their involvement in public matters. Alphabet’s Sidewalk Labs keeps getting into trouble with their Sidewalk Toronto project. Their sporadic arrogance, privacy issues and lack of stakeholder cohesion act as a significant roadblock for the development of the plan.
“The process Sidewalk Toronto has started has been so anti-democratic that the only way to participate is to be proactive in framing the topic,” according to Bianca Wylie, co-founder of Tech Reset Canada and one of the lead organizers of the opposition to Sidewalk Toronto.
Technology is proving instrumental in the rise of megacities. Problems with technology, though, arise from a lack of understanding of how the irrational city works. After all, cities are powered by humans, and we are the epitome of irrational.
To achieve a better understanding, we need more data and better models. This is a space several companies are exploring, but it’s still a minuscule niche. So far, the best crop of startups I’ve seen working in this area is coming out of the Urban-X accelerator, a joint venture between the Urban US fund and Mini by BMW. I have to give these guys kudos because some of their portfolio companies are very impressive.
With better IoT deployments, we’ll be able to create real-time models of how the city behaves. Empowered by these models, other startups will be able to solve the myriad challenges that befall megacities.
Lastly, it’s worth noting that the most significant challenges won’t happen in developed countries. As I stated at the beginning of the article, the fastest-growing cities will be located in Africa, India, and China. Their trials and tribulations are already major, but so are the opportunities.
“Focusing on the new urban peripheries of the future will mean focusing more and more on the peripheries of cities in less developed countries. And the challenge here, it should be noted, is quite different: Preparing new urban peripheries for occupation will often take place in cities with weaker rule of law, weaker adherence to land use and land subdivision regulations, smaller municipal infrastructure budgets and reduced access to infrastructure finance, higher levels of corruption and greater control of private developers over the planning process.”
The next generation of startups will come out of those that help tame the megacity beast in developing countries. No wonder Chinese startups are so well positioned to take the lead here.
The big picture: It’s also worth reflecting on what extremes startups from developed countries will go to in order to exploit the opportunity. Will they partner with local startups, alleviating regional inequality, or will they perpetuate the plutocracy and drain local resources?
The bottom line: Car-sharing is booming, and it’s here to stay. The goal of these services is to take over car ownership in high-density urban areas. The transition period between one and the other will be dramatic as car density will increase considerably. Detractors will argue that car-sharing services increase cars, not the opposite. Car-sharing companies will defend that they’re taking cars off the streets as people won’t own a car anymore. Both will be right. At different points in time.
The market is big and ready, but, is it worth it? Are these companies making money? Why would any of them enter such a crowded space?
I ran some financial scenarios for Car2Go based on their public numbers and my estimates.
The amortization of the old cars (400) and the new ones (450). All of them are Smart cars, but of different generations. I assumed a car-price markdown of 5%, as Smart and Car2Go are both under the Daimler umbrella. I also applied a 24-month amortization period for the older cars and a 48-month one for the new ones.
Battery charging costs. I assumed they’re charging most cars using the official electric vehicle rates. This rate is cheaper than the average household electricity price.
Repairs and accident costs. These are, by far, one of the most substantial costs. The average lifetime of a car under a car-sharing service is roughly six months.
Salaries of all car operators. Operators are in charge of moving the car to the assigned charging stations and back. They’re one of the most significant expenses for the company.
The revenue model of the company is based on euros per minute driven. The current rate is 0.21 €/minute. They recently introduced pre-paid minutes, which lower this rate.
The average trip is 20 minutes. I based this average on my own experience (living in one of the most active Car2Go areas, just on the border of their operational area).
The average car utilization is 15 trips per day per car. This is the official rate published by the company, and a rather impressive one. It’s one of the major Key Performance Indicators of the business.
The average number of trips between charges is 7.5. I based this estimate on the average trip time, distances and the official Smart Electric Drive range.
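Putting the revenue-side assumptions above together gives a rough upper bound. This is a sketch based only on the stated numbers; the fleet size of 850 is simply the 400 old cars plus the 450 new ones, and real utilization will dip after the fleet expansion, as Scenario I assumes:

```python
# Rough gross-revenue sketch using the assumptions above (no costs deducted).
RATE_EUR_PER_MIN = 0.21    # published per-minute rate
AVG_TRIP_MIN = 20          # average trip length (author's estimate)
TRIPS_PER_CAR_DAY = 15     # official utilization figure
FLEET_SIZE = 400 + 450     # old cars + newly added cars

revenue_per_car_day = RATE_EUR_PER_MIN * AVG_TRIP_MIN * TRIPS_PER_CAR_DAY
fleet_revenue_year = revenue_per_car_day * FLEET_SIZE * 365

print(f"revenue per car per day: {revenue_per_car_day:.2f} EUR")        # 63.00 EUR
print(f"gross fleet revenue per year: {fleet_revenue_year / 1e6:.1f} M EUR")  # 19.5 M EUR
```

Note that this is gross revenue before amortization, charging, repairs and operator salaries, which is why the scenarios below matter: the profitability question is entirely about the cost side.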
Scenario I: The current state of affairs. The company recently expanded its fleet by 450 more cars. I’m assuming that such an increase will drastically lower the average daily car usage, at least during the first months.
Scenario II: Increased market penetration. While adding new cars temporarily decreases the use per car (Scenario I), the increased density will pay off in the long term. I projected a scenario where the average daily vehicle use grows by 2% after a year or so.
Scenario III: The next logical step is for the company to extend its operational area. This scenario implies larger operational costs, but also longer trips and more revenue.
Scenario IV: The efficient future. I projected the final stage of efficiency. Improved car specifications that lower the cost of repairs. Longer trips without recharges, and improved algorithms that increase the number of cars an operator can handle.
As it stands (Scenario I), car-sharing is an excellent business if you hit certain operational improvements, but it requires a significant upfront investment in cars. Not just any cars, though, but electric ones (EVs). The numbers don’t add up if the cost of operating the vehicle is based on anything else. This includes hybrids, even ultra-efficient ones like the Kia Niro (3 liters / 100 km).
The last company to enter the game in Madrid was Wible, a joint venture between Kia and Repsol. I’m not sure what numbers they’re running, but using plug-in hybrids is a significant disadvantage.
Beyond the cars employed, the business requires certain operational finesse. One of the most critical indicators for the business is the number of daily rides per car. The higher the number, the bigger the revenue. However, several factors restrict the growth of that indicator. The longer the car is on the streets and not charging, the higher the use of that car. That means that battery life, charger locations and speed of charging are critical factors.
While the car needs to be available, this isn’t enough to maximize revenues. Cars need to be available to the users at the right time and location. Nailing when and where is essential to achieve peak daily use of the vehicles. It’s not an easy problem to solve, mainly due to the time-changing nature of the demand.
There are several ways to increase such availability. One of them is growing the car fleet. This is what Car2Go did in Madrid. The theory is sound, but in the short term, daily vehicle use will plummet. Supply and demand don’t grow simultaneously. The company doubled its offering, but demand won’t grow at the same rate. Hence, it will take some time to expand the user base and, by proxy, the daily vehicle use.
Another essential factor is distance. The longer the distance, the longer the time spent driving. The longer the drive, the more revenue for the company. Nonetheless, the limited operational zone of most car-sharing companies caps the average distance vehicles can cover.
Expanding the operational area, though, comes at a cost. It requires more charging points to cover all new areas. It also increases the number of operators needed to move the cars, as the proximity between vehicles decreases.
Why it matters: While car-sharing is a profitable business, it’s also very sensitive to specific parameters. A small decrease in one of the critical indicators can put you in the red fast. It’s a business that demands high efficiency and tight value-chain integration.
As some of the scenarios show, car-sharing is profitable, but it’s not a money machine yet. The financial analysis I ran only contemplates one city. Granted, Madrid is one of the top cities for Car2Go, but they also operate in 25 other locations.
Even if we extrapolate Scenario I, with a slight markdown, to other cities, the business generates between 14 and 20 million euros of yearly revenue.
Then why are all these companies getting into car-sharing? Two words: declining sales. Daimler’s Mercedes-Benz revenue is down by a whopping 7%, and it’s expected to keep worsening. The truth is, car ownership is in decline everywhere. And the decline will only accelerate.
“In a report ahead of the Las Vegas and Detroit shows, Morgan Stanley, an investment bank, said the motor industry was being disrupted “far sooner, faster and more powerfully than one might expect.” It predicted that conventional carmakers would scramble in the coming year to reinvent themselves.”
What’s next: This disruption isn’t only touching carmakers, but the whole auto-industry ecosystem, including insurers, parts makers and oil companies. Disruption will evaporate revenues faster than most organizations can grow new models.
Competing with a new model
While the initial investment to start a car-sharing operation is steep (~15 million euros per city), the car industry is getting involved in droves.
This struggle to find the next big thing is driving fierce competition. Then again, the industry has been competing for decades: larger models, better fuel economy, more powerful engines, aerodynamic designs, etc.
Be Smart: Competition in the car-sharing space is very different from that of the car industry. We’re going from an ownership model to a service one. Barriers to entry in each are utterly different.
Failing to see this difference will be the downfall of most competitors. A glance at the current offerings shows how misguided most are.
Because users don’t own the car, the number one feature for them is availability, not style. Everything else is secondary. Users don’t care about the car model; they care that when they pick up a car, they can reach their destination without incidents. All features are subordinated to one big goal: maximum availability.
If a company isn’t available in my area or at a particular time, I’ll switch to another brand. The cost of switching is close to zero; brand loyalty is non-existent.
The Daimler advantage
The question, then, is how to maintain top availability while differentiating from the other car-sharing companies.
Daimler is one of the few car-sharing operators to have executed a long-term, value-chain-wide strategy. The German automaker understood early on that, while car-sharing is the goal, dominating that space takes more than deploying cars.
For starters, Daimler has invested in every stage of the value chain. They own not only the sharing-service layer but also the underlying physical car layer (Smart) and the top application layer (Moovel).
Electric Vehicle Production
Control of the physical car is a massive advantage. Daimler has embarked on a company-wide effort to electrify all their vehicles. One consequence is that they’re expanding their footprint into the smart-energy space: they now produce their own battery packs, which get assembled in their EV factories.
But batteries aren’t only used for cars. Daimler is also deploying them to build energy-storage stations. This storage capacity allows them to capture part of the grid’s surplus capacity.
The bottom line: Control of energy storage allows Daimler to create an alternative grid that can feed their car-sharing fleets at a fraction of the competition’s cost.
“I have been saying for a long time now that the best way to determine how serious an automaker is about electric vehicles is to look at their battery supply chain and their efforts to establish production for their EV programs. In that regard, Daimler appears to be the most serious amongst established automakers in my opinion.”
It’s worth noting, though, that their battery cells are manufactured by SK Innovation, a South Korean cell maker that also supplies Kia. I expect SK to eventually become Daimler’s exclusive provider, much as Panasonic did with Tesla.
Why it matters: Having control over the EV cycle gives Daimler a massive advantage over their competitors. The company can roll out any improvement in battery life, charging speed, or modularity of car components to their different business units, including Car2Go.
One app to rule them all
It is plausible that other competitors can rival Car2Go’s availability. Many will try, but their businesses will almost certainly be less efficient.
Be Smart: The transition from car ownership to car-sharing services will be long. While users and manufacturers are feeling the effects now, it’ll take years to complete. Long-term financial survival is paramount: any company not operating its car-sharing services efficiently won’t have enough runway to survive the race.
Daimler knows they can’t rely on operational efficiency alone, at least not in the short term. That’s why they’re also making sure they own the user’s behavior.
When the cost of switching is low, there is only one thing to do: play on the user’s laziness. If a single application covers all transport needs, it’s costly to switch to another for a single use. The key is aggregating all transport needs under one roof and making that app the user’s default. Such aggregation acts as a lock-in mechanism. It’s a direct play from WeChat’s book.
There is an abundance of startups attempting the same aggregation play. Their capacity to pull it off, though, is greatly diminished compared with Daimler’s.
The company doesn’t only run Car2Go. They own MyTaxi/Hailo, the largest taxi fleet in Europe. They’re also investors in Taxify and Careem, and they hold stakes in several limo services as well as peer-to-peer sharing services in both Europe and the US. In a nutshell, they control taxi, limo, and private transportation. Their autonomous-vehicle division has already deployed its first buses, giving them a hold on public transport too.
Why it matters: Participating in the whole transport stack gives Daimler much better leverage to aggregate further services under their app. Such all-in-one control allows them to generate lock-in effects that will serve all car-sharing operations.
The ultimate goal is the autonomous vehicle (AV) layer. Autonomous cars will blend the car-sharing and taxi businesses, and AVs will improve efficiency across the board.
From a car-sharing perspective, AVs will allow Car2Go to cut out all human-operator expenses. Removing the human element also unlocks other efficiencies: with no operators, cars will be free to relocate to demand-heavy areas at no extra cost.
Daimler has already invested in cutting-edge technology that predicts when a user will need a car. I have no doubt they’ll integrate this soon enough, giving the whole system maddening predictive powers.
The big picture: Smarter and cheaper ways to serve the changing demands of users will unlock new business models. Despite what they’ve claimed, I wouldn’t rule out surge-pricing strategies, as well as differentiated services for different users.
Their strategy around AVs mirrors the one they’ve been following with their electrification efforts: investing in the whole value chain, including legal frameworks and AV infrastructure like autonomous valet parking.
Owning the autonomous-vehicle stack is a priority for Daimler. They’re doing pretty well so far, but I’m not sure how they’ll stack up against far more advanced players like Waymo or Baidu. I fear they don’t own the critical AV technologies, just as they don’t hold exclusivity over battery cells. They need to lock that down if they want to keep their advantage.
Car-sharing is the future of personal transportation. Over the next few years, we’ll witness a sharp decline in car sales. The transition period will be bumpy: cars on the road will soar, and car-sharing services will take the heat. The only way that won’t happen is if cities ban fossil-fuelled cars.
Go Deeper: That’s been the case in Madrid. Car2Go hasn’t expanded its fleet only because it’s hitting a growth ceiling. The city of Madrid is expected to shut down all fossil-fuelled car traffic in a large chunk of the metropolitan area. The only way to move within the restricted zone will be public transport or electric vehicles. Car2Go is already gearing up for soaring EV demand.
Winning the car-sharing game takes much more than deploying a fleet. Strict financial control is paramount. To achieve that, it’s essential to own the upstream and downstream parts of the value chain.
Those that don’t invest in a holistic strategy won’t have enough reserves to survive the transition period.
Daimler is already feeling the heat around their legacy business models. That said, they’re the automaker most invested in the future. If I had to bet who in the industry will survive this disruption wave, that would be Daimler.
Kudos to Dr. Zetsche and all the work the innovation teams are doing.
Many trends are shaping our current technological landscape. However, there is one that everyone keeps ignoring: the collision between the technological and political establishments, between startups and impending regulation.
For decades, the technology industry evolved at the fringes of the political space. Power centered on Washington, and the enemy was the Wall Street club. The forces at work kept bankers on a tight leash, while Wall Street, in turn, influenced the country and, by extension, the world. That ended in 2016.
The 2016 US Presidential Elections changed that perception. Suddenly, the political establishment started regarding big technology companies as the new power broker. And with good reason.
Like a wild vine, tech corporations’ tendrils have extended everywhere. They not only control their core businesses but myriad side industries. Such unchecked expansion is sending shockwaves through the social fabric on a global scale.
The acceleration of automation, AI, and other technologies is corroding the pillars of society. And the political sphere has just woken up to it.
“The company decided against informing the public because it would lead to “us coming into the spotlight alongside or even instead of Facebook despite having stayed under the radar throughout the Cambridge Analytica scandal,” according to an internal memo.”
The era of “ask for forgiveness later” is over. Startups aren’t building game apps anymore. Disruption is expanding into industries deeply tied to our society, and the push is meeting increasing resistance and stiffer regulation.
This isn’t necessarily wrong. The cry for more regulation around technology isn’t only political; an increasing number of technocrats are asking for better safeguards.
The problem, though, is the complexity of the matter. The political establishment is wary on several fronts. On one side, you have the monopolistic power of certain technology corporations like Amazon or Google. The fear is that they have so much power they can swing whole markets.
On the other side, the capacity of individual companies like Facebook or Twitter to subvert the information channels terrorizes politicians. If Facebook’s News Feed algorithm can casually swing an election, what would happen if they did it on purpose?
The problem is that such interventions only scratch the surface. The unregulated-technology iceberg is massive, and it grows larger and more complex every day.
Rethinking the regulation game
That technology is out of control is an understatement. As much as I love technology, we’ve gone from a niche market to an industry that controls every aspect of our lives. When the incentives of the tech elites diverge from those of the people they serve, there is a problem.
Governments are struggling with the rapid rise of technology as a world-defining force. The speed of innovation and the increasing reach of startups are outpacing the already sluggish legislative process. And it’s not only the speed and reach; it’s the complexity of assessing technology’s actual impact on society.
I’ve been quite vocal about the way politicians draft new regulations. It’s not feasible to have a political task force draft something that can impact millions, not even when politicians seek help from experts. The issue is that technology is increasingly entwined with much larger systems. It’s not an isolated application anymore; it’s an app that connects with our social graph, the financial system, the insurance world, and the global transport infrastructure.
Understanding the impact of even a simple regulatory change requires technology. We need predictive models and simulations that inform us of the short-, mid-, and long-term effects of regulation. Even more importantly, such models need to avoid introducing biases or unethical recommendations. Drafting whatever we feel like isn’t a viable option anymore.
“Ubernomics keeps a low profile, despite the fact that Uber has collaborated on research papers with economic superstars like Levitt and former Obama adviser Alan Krueger. Its wide-ranging mandate includes studying the consumer experience, testing new features and incentives, supporting Uber’s public policy needs, and producing peer-reviewed academic research.”
The Bottom Line: The weakest aspect of our current regulatory framework is its passiveness. The exponential acceleration of disruption requires active regulation, not a passive and rigid process. Regulators need to understand when and where society requires new protections, not wait until people die for lack of them.
As I mentioned before, the clash between technology and politicians is reaching a breaking point. The problem is, there isn’t a single fault line. Several disruptive trends are on a collision course with fundamental social and civil rights, and some of them are unfolding concurrently and faster than most predictions anticipated.
Next Generation Transportation
The one space that’s ripe for regulation is the transportation industry. There’s no doubt here: change is happening as we speak. Two significant forces are in play. On one side, we have the growing presence of ride-sharing companies like Uber, Lyft, or BlaBlaCar.
Their business model is challenging the public transport sector at large. Governments are deflecting any change to the current licensing model. They’re making small concessions and exceptions, but they seem reluctant to address the elephant in the room: the current model doesn’t work and needs to be rethought.
On the other side, we have car-sharing companies like Car2Go, Lime, or Mobike. These organizations are tapping into, and accelerating, the shift from owning to renting the transportation infrastructure. They are challenging many local rules, including parking, accessibility, and drivers’ rights. As with the previous group, there is increasing friction between municipal forces and startups. Most cities wrestled with Uber’s aggressive push some years ago; second- and third-generation sharing companies aren’t finding such fertile ground anymore.
What’s Next: Many municipalities are ignoring the consequences of not rethinking the model. Consumers will use new modes of transportation no matter what, and new services no one has yet imagined will be built on top of these platforms. Cities need to build processes that are scalable, informed, and balanced for all actors. Dealing with new startups every weekend in an ad-hoc fashion won’t work.
The previous two trends are reshaping public transport, especially in cities. They are the present. Following that trajectory closely we have autonomous vehicles.
Driverless vehicles will have a massive impact on current regulations, from insurance claims to liability scope to the obliteration of whole industries like trucking.
The two fastest-moving countries in this respect are the US and China. The US has an impressive track record, but China is catching up quickly.
China’s one advantage is its lax enforcement of specific laws. If Beijing has a vested interest in a technology, they won’t enforce the rules. And that’s what’s happening.
Why it matters: The domino effect autonomous vehicles will trigger is immense. Insurance will have to change. High-speed data connectivity and smart roads will be deployed. These changes will require new rules, including privacy and health-related regulations. Thousands of people will lose their jobs nearly simultaneously, straining the already battered social-welfare systems. It’s not the immediate scenario that should worry us, but the second- and third-degree changes.
Automation of Labor
As startups keep automating century-old processes, the menace of a labor market without human workers looms larger. This wave is unavoidable and has significant repercussions for any country and, especially, for any political party.
Politicians draw their power from their constituents (that’s the theory, at least). Anything that threatens those constituents’ livelihoods is a political nuke. Automation of labor is the mother of all thermonuclear bombs.
The impact will reshape the political landscape. It will obliterate the old partisan groups and give rise to new forces within the spectrum. It’s surprising, though, that not a single country-level regulatory framework covers this. The European Union spearheads the effort with some early-stage civil robotics laws, but they’re still far off and too abstract.
The consequences of ignoring this are dramatic. Three side effects are essential to address. The obvious one: what to do with all the low-skilled workers who will lose their jobs to automation. Many experts argue that their integration into the new digital labor markets will be impossible. Moreover, society will need to absorb these unemployed within a very short time span. For many, the only viable way to do this is a nation-wide Unconditional Basic Income (UBI). The problem, though, is that current UBI experiments are yielding mixed results.
“Whereas many auxiliary jobs are now performed by untrained workers, students or trainees on the basis of marginal employment, it is to be assumed that the demand for these jobs will decline massively in a technically modernized establishment. The integration of these workers into the new digital labor market is practically impossible.”
Why it matters: The state will need to support vast numbers of long-term unemployed, adding financial pressure to the social-welfare system. The fact that such unemployment will hit within a short period will spark civil unrest and a deep political crisis.
Another issue to tackle is labor protection against automation. Will governments allow full automation of any job? Will there be a “Human Quota” for certain industries? Everything points to the rise of a “made by humans” brand. Right now there is zero legislation, or even serious discussion, around this. Some will see it as protectionism, but Human Quotas might be the only way to artificially slow the growth of unemployment.
Last but not least, the automation of work will take a toll on our tax system. Fewer workers mean fewer taxes. As mentioned before, some countries are already toying with technology taxes, but their scope is limited to taking a cut of digital businesses. There needs to be a serious analysis of what’s being automated and how much we should tax it. Robot taxes will come first, mostly because they’re easy to comprehend and straightforward to apply. Taxes on AI systems will be much harder to assess.
“Many people will not be able to retrain for another position for physical or cognitive reasons. These people will become long-term unemployed and will have to be supported by the state. The high financial pressure on social welfare systems will be a central problem.”
Among the growing labor concerns is the so-called Gig Economy. In all honesty, the Gig Economy isn’t new; it’s been part of our societies for years. Technology-enhanced platforms are only amplifying the impact of a preexisting situation.
Several ramifications are essential to explore. The first is what exactly a crowd worker is. There isn’t a clear-cut definition, much less one all countries can agree upon, and its absence makes it hard to decide which legislation applies. Most of the friction between startups and regulators revolves around this point.
Apart from the unclear definition, there is uncertainty around which legal jurisdiction applies. Most Gig Economy platforms have a global footprint, so it’s hard to assess what takes priority: the country of the crowd worker, of the recipient, or of the platform.
The outcome of this legal void is the unprotected status of crowd workers. Governments need to start regulating at a pan-regional scope. The European Union, for example, should set a Directive that defines and protects EU crowd workers.
“Crowd workers are freelancers who offer their skills via their computers on online platforms. Crowd working is a symbol of a changing world of work for white-collar workers in the gig economy. This covers smaller tasks, such as writing product reviews, searching for phone numbers, and more comprehensive work, such as testing software, providing legal advice, ghostwriting or designing and programming a website. […] These newly created ‘mini-jobs’ are particularly popular in developing countries and with young people.”
These issues, though, are superficial. The most worrisome effect of the new Gig Economy is the continuous erosion of high-wage structures in western countries. Many high-wage positions are being spliced into smaller tasks, and these “mini-jobs” are being outsourced to the Gig Economy, mostly to young people in developing countries. The consequence is an erosion of high-paid jobs in the west and reduced access to highly skilled positions in developing countries.
It’s critical to start drafting regulations that protect a minimum income for all, and paramount that we protect the most vulnerable workers who turn to the Gig Economy to survive. In other words, Gig Economy status needs to be legislated and narrowed down. The enabling platforms shouldn’t be the ones defining and policing the eligibility of their crowd workers; that should be a nation- and region-wide effort by the states.
“Due to the lack of alternatives, many young employees are working in less well paid ‘crowd working mini-jobs’ outside social security systems, which could lead to poverty risks.”
What to watch: The automation of work will render many workers unemployable. Their only means of survival will be to join the Gig Economy. The outcome will be a rapid destruction of the middle class, social unrest, and increased populist movements.
Artificial Intelligence Bias
Derived from the extended use of automated systems is the question of their fairness and ethics. It’s one thing to use Artificial Intelligence systems for commercial purposes; it’s another to employ predictive models to make life-or-death decisions about people.
Recent studies have demonstrated a pervasive recurrence of bias in such systems. The most significant risk here is the invisibility of the problem: prejudice is hard to prove in humans; when we scale it with AI systems, it’s even harder.
An increasing number of companies are spinning up AI-bias expert teams to keep their systems in check, and new organizations are devoted exclusively to the task. However, critical decision-based systems are proliferating faster than the checks and balances.
The more widespread these systems become, the greater the risk of shattering society. The development of smart cities and autonomous vehicles is accelerating their adoption at the critical-infrastructure level.
“Particular attention must be paid by developers and regulators to the question of human-machine interfaces. Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully, and reviewed constantly, in order to avoid misunderstandings that in many applications could have serious consequences.”
What to watch: The pervasiveness of these AI-backed decisions at all levels of society is massive. The subtlety of the bias and the technological illiteracy of most people, especially politicians, make detection impractical from a vantage point as high as a European watchdog. Detection, tracking, and counteraction require nation-wide capillarity. Each country should start developing an agency to keep the right checks and balances.
But not all the trends are related to Artificial Intelligence. Synthetic Biology, and more precisely the use of CRISPR-Cas9 for DNA manipulation, is becoming the future of many industries. With great advances come grave risks.
For example, gene editing is becoming the de facto tool for an increasing number of startups in the food industry. Food is tightly regulated, and the use of CRISPR-Cas9 in the industry has raised concerns.
“This will have a chilling effect on research, in the same way that GMO legislation has had a chilling effect for 15 years now,” says Stefan Jansson, a plant physiologist at Umeå University in Sweden. Gene-edited crops will not vanish from European research labs, but he worries that the funding to develop them could dry up. “If we cannot produce things that society finds helpful, then they will be less likely to fund us.”
Europe and the US illustrate how difficult it is to strike the right balance: Europe always tries to protect the consumer, while the US tends to favor research and free commerce.
However, the technology is showing incredible results in a wide variety of fields, including human experimentation. Strict regulations might hinder progress, and some countries, like China, are already taking advantage of that.
“Most countries are struggling to assess whether gene editing may or may not be different from classical genetic engineering—from both a research, applications, and product standpoint. The most contentious areas of debate seem to revolve around gene-edited agriculture and food products and whether they should be regulated differently from genetically modified organisms. Contentious and unresolved debates also surround human germline editing.”
Beyond editing humans or crops, CRISPR-Cas9 is also used for insect gene editing. Some scientists use it to produce infertile mosquitos; these modified insects can wipe out whole populations in one generation. The goal is to eradicate the major malaria infection vector, the mosquito. The disease kills north of half a million people each year. Black-or-white regulatory decisions aren’t clear-cut in fields like this, much less when they have a direct impact on human survival.
The bottom line: When regulating cutting-edge technologies, it’s important to remember that most use cases haven’t been deployed yet. In a globalized world, restricting based on fear might stifle innovation and cost a country its competitive advantage. An increasing number of states are adopting a legal-sandbox approach to increasingly sophisticated technologies.
There are two elements to Blockchain: the underlying infrastructure, commonly known as the “ledger,” and the applications built on top of it via Smart Contracts.
While everyone is debating the infrastructure level, the most complicated aspect of Blockchain is the distributed contracts built on top of it. If a contract gets out of control, the potential repercussions are significant. An increasing number of organizations are adopting the underlying Blockchain infrastructure as an experimental computational platform. It’s a matter of time before we start experiencing more and more incidents.
The question, though, is who is responsible when something happens. The very decentralized nature of Blockchain makes liability hard to assess. And while the industry is in its infancy, contracts will get exponentially more complicated.
Imagine a field that combines black-box decision-making algorithms with a distributed, decentralized platform. The more systems we tie into this network (and we will), the higher the chance of triggering a Butterfly Effect.
Several countries have already started regulating Blockchain. The problem is, they’re focused on its economic use cases, the low-hanging fruit. As the infrastructure layer matures, the Blockchain fabric will become more complex, and new use cases will find room to thrive in it. The most challenging aspect will be its global footprint: it will be close to impossible to enforce local regulations on such a distributed system.
Europe, though, is taking an interesting approach. They already have an EU Blockchain Observatory that serves as a watchdog. Now they’re toying with the idea of applying a “stamp of approval” to specific applications built on top of Blockchain. The idea is enticing: it would turn regulators into trusted entities. This approach, though, still conflicts with the current privacy regulation, GDPR. Regulators need to iron out these tensions between a decentralized system and the right to privacy. For new rules to be enforceable, they’ll need to take a global scope and become very flexible.
“The most obvious and oft-cited point of tension comes from the fact that blockchains are, generally speaking, constantly growing, append-only databases, to which information can only be added, not removed. GDPR, on the other hand, explicitly gives individuals the right to have their data amended to ensure it remains accurate or (with certain exceptions) erased when no longer needed.”
Why it matters: Increasing class disparity is convulsing the world. This social fracture is eroding trust in politicians and world leaders, and consumers will increasingly turn to decentralized structures as a way to upset the status quo. Governments, despite the apparent contradiction, should start deploying blockchain labs and sandboxes to track new use cases. Regulators might need to evolve into certification agents with a global footprint; passing rigid laws won’t be enforceable on Blockchain.
Weaponizing of Artificial Intelligence
These new trends are shaping our future. Some are already in full swing; others will take a little longer, but they’re unstoppable. Innovation, though, usually has two faces: the useful, effective, potentially life-saving one, and a much darker one.
One of these dark sides is the trend toward civil-military integration. The boundaries between commercial, civil uses and military ones are blurring, and many governments are approaching the leading Artificial Intelligence companies with partnership proposals.
The truth is, no regulations address the use of such technologies by governments. Even the GDPR, which is rather broad, doesn’t apply to national-security activities or law enforcement. And this is a big issue: many security agencies are deploying these AI systems to track and control their citizens, and sometimes worse. In China, such civil-military collaborations are already the norm.
“So far, there have not been any indicators of resistance to the idea that Chinese technology companies should be in service of the party-state. That’s hardly surprising; Xi’s regime has cracked down harshly on dissent, and open policy debates are far more limited than they were even just five years ago.”
Why it matters: The next wave of AI-based technologies is orders of magnitude more potent than anything humanity has ever seen. Such power eclipses whatever force the political establishment had. We’re going to see growing interest in such tools from the military and security agencies. The unchecked exploitation of such weapons by a “shadow” government could destroy society as we know it. We’re closer to a world out of 1984 than we realize. There needs to be a national AI ethics committee to balance such collaborations.
“Developing strong working relationships, particularly in the defense sector, between public and private AI developers is critical, as much of the innovation is taking place in the commercial sector. Ensuring that intelligent systems charged with critical tasks can carry them out safely and ethically will require openness between different types of institutions.”
This is a long article, but I wanted to give a 360-degree view of the significant regulatory challenges we’re facing as a society. Politicians are finally waking up to the menace of technology. Over the next few years, we’re poised to see a string of ill-advised regulations trying to check the rising power of startups. It will not work. The whole regulation process needs to be rethought; we won’t be able to protect our societies otherwise.
The regulatory process needs to take a much more active role. New systems need to be designed so that predictive models can be used as part of the regulatory process. Boundaries will matter less and less, so some global legal framework needs to emerge too. There has been talk of splitting the Internet into two or even three networks; rest assured, the partitioning will be done along regulatory lines.
And on top of all that, we must elevate the average technological literacy of our government.
The bottom line: Regulations, as we know them today, are outdated. The legal system needs to be upgraded to cope with the scale, speed and global reach of our disruptive technologies. Moving forward, few startups will be free to operate without regulations. This will add strain to the innovation engine. Startups need to plan for major legal risks and invest in Public Affairs efforts.
I am not an Apple fanboy, but it’s hard to ignore all the things Apple has been doing as of late. One of the first articles I wrote was about the disruptive potential of the iPhone X. While new apps using the TrueDepth camera will take time to flower, the Face ID application was an instant hit.
But there was a big question about Apple’s future then. What happens when the iPhone franchise ends? The future of Apple wasn’t obvious. During the past few years, the company has been increasing their cross-sales capabilities. It’s not about the device itself anymore, but about all the other devices you can connect to the iPhone. The strategy, though, has always revolved around the anchoring device, the smartphone.
However, the recent announcement of the Apple Watch Series 4 changed all this. Apart from the usual array of iPhone upgrades, the introduction of the new Series 4 Watch stood out as a new strategic direction for Apple.
I’ve been particularly disenchanted with Apple’s efforts around the smartwatch. Their first incarnation, launched in 2015, was, despite the record sales, a big disappointment from a product perspective.
The Big Picture: Apple products have always been driven by what they solved. The iPhone addressed a big obsession for Jobs and a natural evolution from the iPod. The Apple Watch, nonetheless, was one of those products where it wasn’t clear what problem it responded to. In a way, it resembled the Google Glass fiasco.
The Apple Watch Series 3 delivered significant improvements over its predecessor. A clear message emerged from Apple: “We’re focusing on fitness and health.” This was the first time they stated a clear goal for the product. It was a big win.
Why it matters: Fitness and health isn’t just a wearable category; it’s a whole change of behavior. As our society becomes more complex, our mind and body are barely keeping up with the exponential acceleration of technology. The turn towards healthier lives is a growing trend, and it stands to reason that we’ll turn to technology to help us cope with technology. Apple is leveraging this lifestyle change to sell the Apple Watch to this growing audience.
Still, many other wearable companies have been pushing fitness tracker devices. Apple’s approach wasn’t groundbreaking or especially novel. There wasn’t anything in the Apple Watch Series 3 that made it stand out beyond its slightly better hardware and, of course, the iPhone ecosystem integration.
I don’t want to downplay the fantastic work Apple has done with their smartwatch though. Their iPhone integration is indeed critical. It propelled Apple to the top of the wearables industry, toppling other players that dominated the space until then. This is the power of catering new hardware to the existing iPhone base; it spreads like wildfire.
Apple Watch Series 4
Apple’s new smartwatch, though, is a different thing altogether. Two new features are placing the device in a completely different league: the fall detection system and the ECG (EKG) sensor and application. These two features alone move the smartwatch squarely into the predictive medicine realm.
Until now, most wearables have been focusing on fitness applications. They record your heartbeat, your blood oxygen, number of steps, etc. It all revolves around tracking and displaying. Such devices have been the foundation of the Quantified Self (QS) movement. Predictive medicine, though, is a much more complex challenge. While the Quantified Self is about tracking, predictive medicine aims to predict when a user is getting sick. For years, being able to predict an illness before it happens has been the Holy Grail of many startups.
The reason why it’s so challenging is that it requires a confluence of three elements. You need a multi-sensory device, a continuous stream of sensory data and complex predictive algorithms and models. Each one of these is already hard on its own. The need for all three turns it into a significant challenge.
The new Apple Watch is a big step towards that goal. The Series 4 delivers on two of these premises. It creates a real-time, always-on health data stream that informs a complex prediction model on top.
What’s Next: The device isn’t there yet, though. While Apple has achieved a critical milestone, they still need to ramp up the multi-sensory approach. I expect them to cram other sensors into the smartwatch. Once they’ve saturated the device’s surface, they’ll expand to the periphery with novel add-ons. The new sensory inputs will translate into better predictive capabilities and improved algorithms.
Why having a one-lead ECG doesn’t matter
Several doctors have shared their mixed feelings about the new ECG capabilities. Their primary criticism is about its capacity to correctly diagnose (or misdiagnose) severe health conditions. The argument is that a one-lead ECG will never have the depth of hospital-grade 12-lead ECGs. What Apple has built into their smartwatch will only detect a minimal subset of heart conditions.
While I agree, I think that line of thought completely misses the point. Apple didn’t come out with a compact ECG to compete with hospitals. What Apple wants is to skip hospitals altogether through an early detection system.
One of the dimensions most critics fail to observe is the time variable. While professional ECGs are state of the art, they are only attached to a patient briefly. The Apple Watch will be attached to a user all day long, every week, every year.
It doesn’t matter if the device can’t detect certain conditions in the short term. Analyzing thousands of hours of recordings of a single individual, compared against that user’s baseline, can yield other long-term detection methods. The combined data of every Apple Watch user far outstrips any sample any hospital or study has ever seen.
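To make the baseline idea concrete, here is a minimal sketch of personal-baseline anomaly detection. This is not Apple’s actual algorithm; the z-score threshold and the synthetic heart-rate numbers are purely illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(history, recent, z_threshold=3.0):
    """Flag recent heart-rate readings that deviate strongly
    from a user's own long-term baseline.

    history: long-term heart-rate samples in bpm (the baseline)
    recent:  the latest window of readings in bpm
    """
    baseline = mean(history)
    spread = stdev(history)
    # A reading is anomalous if it sits more than z_threshold
    # standard deviations away from this user's personal baseline.
    return [bpm for bpm in recent
            if abs(bpm - baseline) > z_threshold * spread]

# Synthetic example: a stable baseline around 61-62 bpm...
history = [60, 61, 62, 63, 62, 61, 60, 62, 63, 61]
# ...and a recent window containing one clear outlier.
recent = [62, 61, 95, 63]
print(flag_anomalies(history, recent))  # → [95]
```

The point the paragraph makes falls out of the code: a single reading means little, but a long personal history makes even a crude sensor useful, because the comparison is against the user’s own data rather than a population average.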
What to Watch: It’s important to remember that Apple isn’t a standalone wearable company. They come with millions of iPhone users worldwide who will eventually buy into the Apple Watch narrative. That will allow them to have millions of health data streams to train their algorithms with. Additionally, they’ll be sitting on the most extensive medical sample ever. The current Apple Heart Study done in collaboration with Stanford Medicine is an excellent first example of this. There will be many more in the future.
The Apple Watch opens a new era for the company. One that will revolve around massive amounts of sensitive information. It raises the question of how Apple will handle such data. The company has taken a robust privacy-first stance, but I wonder how strict that will be when exploiting this side of the business.
“Yet Apple seems to be the only major tech company that had the foresight–and the will–to begin tackling these issues before they reached a crisis point.”
While I don’t believe Apple will share any information without consent, it will, most definitely, begin offering integrations with interested third parties. These can range from family members and doctors to autonomous vehicles or insurance companies, to name a few.
The Bottom Line: The new Apple Watch is not only a communications device. It’s the first step towards a nascent medical prediction platform with global reach. Its potential to change the medical, pharma or insurance industry, to name some, is dramatic. Competitors will eventually catch up with the current hardware, but they will have a much harder time replicating the complex prediction models Apple is building as we speak. Their moat won’t be the hardware, but their algorithms.
Content has always been notoriously hard to monetize. It seems surprising, as most people would agree that good content is valuable. It brings insight, opinions, perspectives or pure entertainment.
In the physical world, we still retain a certain tangible principle to content. You can touch it, squeeze it, burn it or throw it. You pay for the object, and you own it; herein lies the value. However, when we transpose content to the web, it suddenly loses all value. I can’t own it, and I can’t feel it. I can’t touch or weigh it.
The nascent Internet
The Internet changed many things. One of the most relevant was the scarcity of content. Replicating physical content was hard. The Internet, though, made replicating and distributing its digital counterpart child’s play.
Content that was unique before lost its uniqueness as it became widely distributed online. The old content gatekeepers struggled to keep up. The Internet’s capillarity wrestled control away from them. As content became widespread, it became harder to concentrate large audiences under a single banner. Without eyeballs, it was hard to make advertisers pay what they did in the physical world. This decrease in advertising revenue collapsed the most significant revenue stream publishers had.
Logic dictated that, if advertisers wouldn’t pay, consumers should. However, brick and mortar logic rarely applies in the digital economy. Publishers soon found out that it was excruciatingly hard to get anyone to pay for content, be it video, text or images.
If consumers wouldn’t pay for content, it was back to the advertiser model then. The problem though was how to make them pay more. Publishers realized they needed a bigger audience to offset the income drop. And increasing the audience was a problem because they had lost control over how their content got consumed.
The Internet had shattered the content distribution fiefdoms. Or rather, the fiefdoms had changed hands. Distribution was now the domain of search engines like Google. They owned search through their algorithms; something content producers couldn’t replicate.
If publishers wanted distribution for their content, they had to go through Google first and pay the Royal tax.
And along came Facebook
As the web matured and became more accessible, the usage patterns changed too. The Internet went from a place of research to a place of social connection. Attention shifted from pure search to people connecting with others through shared content.
This evolution opened a new avenue for distribution. Publishers didn’t need to pay tributes to Google anymore. They could hack the distribution loop by getting people to share their links among their contacts. The faster and more comfortable this sharing was, the larger the reach. The greater the reach, the more eyeballs and, in consequence, the bigger the advertising revenues.
But nothing comes for free. The new social distribution channel triggered new problems. As organic distribution grew, brand awareness got diluted. While before the brand was the atomic reading unit, the new constituent was the article. Producers had to promote each piece independently. The brand couldn’t be relied on to drive traffic to the advertisers.
The impact was immediate. Reading audiences dropped like stones. The advertising model, which was starting to gather steam, crashed. Again.
“Mark [Zuckerberg] doesn’t care about publishers but is giving me a lot of leeway and concessions to make these changes,”
“We are not interested in talking to you about your traffic and referrals anymore. That is the old world, and there is no going back.”
Campbell Brown, Facebook Global Head of News Partnerships. Nieman Lab, Aug 2018 (Editor’s note: Comments contested by Brown)
Meanwhile, some content producers started to think outside of the proverbial box. Their legacy advertising model didn’t work. Many recognized that a new medium required different revenue streams. Plenty agreed, very few tried.
One of those first trials was direct content monetization via a paywall. This model had fans and detractors. Many believed that the content they produced was inherently valuable. But the market proved them wrong.
The first experiments with gated content failed. One of the biggest lessons was that readers didn’t want to pay for content they could get elsewhere. But it was more profound than that. At the time, every piece of content was accessible for free. Users saw gated content as a defacement of the Internet’s freedom-of-information credo.
“Paywalls are not a new idea. The Atlantic previously had a different one for a while in the mid-’00s. The Adweek article announcing that this paywall was being pulled down is a fascinating time capsule. Paywalls, back then, were often seen as a way of protecting the existing print businesses.”
Paywalls had many problems at the time, but one of them was rather ominous: the audience. For the system to work well, it required a large audience that had a taste of the content and then decided to pay for it. The bigger the haystack, the larger the needle. There was one issue though: during the mid-’00s the online population was still meager.
In late 2000, online users accounted for only 6.8% of the global population. By the end of 2017, that number was 54.4%. That’s eight times more. The lack of an online audience meant that conversions for the paywall would be limited and scarce.
A bigger question was why anyone should pay for content at all. Many legacy producers argued that their content was the most valuable there was. The truth was, it wasn’t. Most content resembled one another. Even worse, in most cases, it wasn’t relevant to people in different geographies.
The Internet enabled new distribution channels, but it also provided access to users with different cultural backgrounds. The content produced at the time lacked all three: uniqueness, originality and global relevance.
The only reason why non-unique content had prevailed was that consumers didn’t have another option. The Internet killed that.
And so, the word on the street was that people wouldn’t pay for content. Every Internet expert chiseled this into stone. At the time, it was a good piece of advice, but it required specific understanding.
Be Smart: It’s important to differentiate between current restrictions and the potential development of technology. Digital content could and would become valuable to consumers. Payments for virtual assets, ignored by many, would become a massive market.
Netflix meets Fake News
While the social networks were hacking their News Feed algorithms, new trends were colliding and changing, once again, the landscape.
Several new trends became instrumental in the emergence of successful business models around content. The first one was the rise of the “Netflix Model.” The model itself was well known. Traditional media companies had employed subscription-based-content for decades. Nonetheless, the model seemed only to work for offline media and entertainment.
As successful as Netflix was with their DVD business, its customers paid a monthly subscription fee for a service. Content was part of the deal but wasn’t the only reason. In 2007, Netflix introduced Video On Demand (VOD) and began unwinding their DVD business. This marked an important milestone. Netflix had suddenly transitioned from a service into something like an Internet cable company, where the value rested on the content itself.
As Netflix kept growing, it signaled the way forward. Other content providers, like Spotify, started experimenting with the model. One thing was unique to the model: the bundling of content that consumers previously had to pay for independently. Pre-Netflix, the unit was the movie. Post-Netflix, you paid to get access to any film in the catalog. Spotify did the same for music: from paying for an album to paying for the streaming.
There was a fundamental difference with news content though. At the time, it was hard to find movies or music online. Many took to piracy to get their daily fix of content, but doing so became harder every day. When Netflix and Spotify came around, they hit the sweet spot between convenience and price sensitivity.
News was different. It wasn’t hard to find that type of content. It was everywhere, with various degrees of quality, and it was free. It was hard to imagine you could put the Jinn back in the lamp and stamp a price on it.
But things started to take a turn at the beginning of the decade. More and more content providers began their paywall experiments, including freemium approaches (metered paywalls). As the years went by, it became harder to find quality journalism for free. The key word here is quality. At long last, some content providers started upping their game. Scoops, in-depth research and powerful op-eds became the drivers of these digital subscriptions.
And so we reached November of 2016 and the US Presidential race. When Donald Trump won the election, many were flabbergasted. As the dust settled, the first hints came out that something fishy had happened during the race. Exaggerated news, misquoted people, fake photographs and tainted explanations became the bread and butter of the campaign.
A look at the content that users shared on social networks painted a grim picture. Trusted content was absent, and the News Feed algorithm was happily promoting fake sources to the users.
While media manipulation has always existed, the scale, automation, amplification, and consequences that the 2016 election campaign achieved, set a new record.
The outrage at being duped flared for months. One of the immediate consequences was a sharp increase in digital subscriptions for news.
The erosion of trust, or more precisely, of whom to trust, propelled the success of the paywall. The inertia of the system pushed content providers out of the social networks and onto their own platforms.
The bottom line: Many consumers are scrambling for reliable and trustworthy brands. Those that ensure it, produce high-quality, relevant content and drive engagement beyond it, will see an increase in their digital subscriptions.
The paywall platform
As monthly subscription models become commonplace, people are getting used to paying for content. Currently, it’s hard to find anyone who isn’t subscribed to at least one or two of these services.
The everyday nature of the model is easing the way for more and more content offered behind a subscription. A decision made easier by the constant erosion of trust produced by social networks. As top publications and providers keep accruing subscribers, the challenge now is how to turn a content catalog into a platform.
In the past, common lore highlighted the power of the social media walled gardens. This has been especially true of Facebook. Nonetheless, people’s attention, and more importantly, the consumer’s content behavior is moving away from these platforms.
Why It Matters: The Internet has evolved from a scattered landscape of free content to a series of big powerful walled islands controlled by GAFA (Google-Apple-Facebook-Amazon). These gardens are now being splintered into myriad smaller islands of protected producer-owned content. This opens the door for a meta content subscription service that could aggregate these fragments again. Such a supra-aggregator would wrestle the customer relationship away from the content owners, yet again. In fact, that’s precisely what Apple is trying to achieve with their Texture approach.
Current paywall challenges
Despite the growth of subscription-based models, most companies are still facing critical challenges.
As I stated before, to make a paywall work, it’s essential to have a large enough audience in the first place. Not all content providers have such audiences. It’s easy to refer to top publishers like The New York Times or The Washington Post. But both companies had a vast audience even before putting a paywall in place. The question is: how can smaller publishers build a broad enough subscriber base?
“The most common cause of poor digital subscription sales is not asking enough users to pay.”
Digital Subscription Best Practices – The Lenfest Institute
According to The Lenfest Institute, the top aim is to increase, not just the audience, but engaged users.
While large publishers have a much bigger audience to draw conversions from, smaller publications can focus on their niche audience. Such users tend to be more homogeneous and, therefore, much easier to engage.
The key for many content providers is to establish how much content stays in the open and when to ask for a subscription. As reported by The Lenfest Institute, the ideal stop rate is somewhere between 5% and 10%. The industry average hovers around 1.8%, so there seems to be significant room for improvement.
Another big challenge is how to turn engaged users into paying customers. The industry standard is rather low, with a conversion rate of 0.54%. By comparison, top publications are achieving between 1% and 2% conversion rates.
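These two rates compound, which is easy to underestimate. Here is a rough funnel sketch using the figures cited above; it assumes, purely for illustration, a one-million-reader audience and that the conversion rate applies to users who actually hit the paywall.

```python
def paywall_funnel(monthly_audience, stop_rate, conversion_rate):
    """Estimate paying subscribers from a simple paywall funnel.

    stop_rate:       share of readers who actually hit the paywall
    conversion_rate: share of stopped readers who then subscribe
    """
    stopped = monthly_audience * stop_rate
    subscribers = stopped * conversion_rate
    return round(subscribers)

# Industry averages cited above: ~1.8% stop rate, ~0.54% conversion.
avg = paywall_funnel(1_000_000, 0.018, 0.0054)
# The suggested targets: a 5% stop rate and a 1% conversion rate.
target = paywall_funnel(1_000_000, 0.05, 0.01)
print(avg, target)  # → 97 500
```

Under these illustrative assumptions, moving from industry-average rates to the target rates turns roughly a hundred subscribers into five hundred, from the same audience, which is why both levers matter at once.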
In other words, it’s paramount to detect engaged users and ask them to pay, but it’s just as essential to make it easy and clear what they are paying for. The value perception is still fuzzy for most users. The higher the users’ literacy, the higher the probability they will pay.
Most consumers have no idea how much time is invested in crafting their favorite stories. The more they know, the higher the probability they’ll pay. And it seems that the cost of the subscription isn’t a problem once the consumers understand the need to pay for it.
Once users subscribe, churn becomes an issue. Successful content providers are turning their paywalls into platforms, and engagement inside these is becoming critical.
Be Smart: While many content publishers are moving towards subscriber-based models, it’s important to assess how many engaged users you have. Don’t be afraid to ask for money, as long as your fan base knows what they get and why they should pay. Once they subscribe, make sure they keep coming back and promote features that empower your users.
Beyond the paywall, new business models for content
Although subscription models are becoming the industry de facto, many publishers are complementing them with other revenue models. Most organizations are no longer leaning on advertising only. They’re leveraging their expertise to build adjoining businesses.
“Over the past decade, the company has worked to diversify its revenue sources. In 2006, 85 percent of The Atlantic’s revenue came from print advertising and circulation. This year, print will account for less than 20 percent of incoming revenue; the company’s digital, events, and consulting divisions make up the remaining 80 percent. In the past decade, the company’s annual revenue has quadrupled, to nearly $80 million.”
They aren’t the only ones. The Washington Post is turning their content platform, Arc, into an absolute money maker. The Post turned their content expertise into a service, going from being in content to becoming a service provider for the industry.
“The Washington Post doesn’t disclose Arc Publishing’s revenue or whether it’s currently profitable. (The Post itself turned a profit in 2016.) It does say, however, that Arc’s revenue doubled year-over-year and the goal is to double it again in 2018. According to Post CIO Shailesh Prakash, the company sees the platform as something that could eventually become a $100 million business.”
One of the most interesting concepts I’ve seen so far is Uzabase’s NewsPicks. I’m not surprised at all they recently bought Quartz. Both companies have very similar DNA regarding content product experimentation.
NewsPicks is Uzabase’s content curation service. But it’s also much more than that. They select handpicked stories, and they focus the engagement not on the stories themselves, but on the comments from their users. On top of that, they’ve built a subscription service that gives access to original content. Their numbers in Japan are impressive. They boast four million users and 70,000 subscribers. That’s a 1.75% conversion rate, which at an average of 15 dollars per month brings in 12.6 million dollars each year.
While their subscription clearly works in Japan, I’m not sure it will convert as well in the US. After days of playing with their app, the one thing I value is the thoughtful comments. Would I pay for original content? I wouldn’t. Would I pay to get access to insightful and strategic remarks around content? You bet. I believe they’ve stumbled upon a unique approach where content is relegated to a second plane, and unique perspectives take precedence. I like the notion of getting access to a curated selection of brains and not the news per se.
Either way, it’s clear that users are increasing their willingness to pay for quality, noise-filtered content. The collapse of trust around social media and the prevalence of subscription models are unlocking new business paradigms.
The growth of the “Netflix Model” though carries its own problem, that of unequal access to information. In an age where inequality is at the root of most global issues, many people are concerned that locking information behind paywalls will only accelerate the trend. That’s why some publishers like The Guardian are opting for a donation-based revenue stream.
“The rise of subscription has raised concerns about a two-tier system, where high-quality news is reserved for those who can afford it. This is why some news organizations prefer to keep access free but to ask for voluntary contributions. In the UK, the Guardian adopted the approach in 2016, and since then it has received 600,000 voluntary payments, raising tens of millions of pounds each year. It has also started to crowdfund around specific stories such as the recent US school shootings where it raised $125,000 to produce solutions-based reporting.”
Reuters Institute. Digital News Report 2018
The big picture: As online content’s value increases for a broader audience, new business models will develop. Quality and truthfulness are becoming the hallmark of what users are looking for. Supporting deserving publishers is paramount. Nevertheless, it’s vital to keep relevant information accessible to all. Education is one of the keys to relieving inequality. The automation of work will bring hard times, and it’s essential that publishers help and inform the next generation, regardless of their social background.
Everyone I know talks about Millennials. However, I’m under the impression most people don’t fully appreciate how different they are. I was born on the fringe between Generation X and the Millennials. My parents are Baby Boomers. My brother is a Millennial.
One of the things I’ve observed is how different Gen Xers are from Millennials. I look at my brother, and it’s hard to believe we had the same parents. I have more in common with my parents than my brother will ever have. The irony though is, when I look at Generation Z, I feel the gap isn’t as big as with Millennials.
And the truth is, the shift in product usability patterns is a clear example of how disparate they are from my generation.
The behavioral change is tremendous, and it’s becoming one of the hardest things to design for. The reason is that key decision makers aren’t from that generation. Companies bring in Millennials; they even give them management positions. But Senior Management has a hard time going all in with Millennial ways. The only companies I’m seeing do it right are those founded, run and staffed by Millennials.
“I’m looking at an app, and I could swear it’s Instagram. I see large square photos in an endless feed. Avatars appear in round circles. You can tap a heart to like something. Inside any post, there are dozens of comments.”
The generational gap isn’t just about Millennials. The consequences are rippling over everyone. The way this generation understands products is changing not only theirs, but everyone else’s too.
“Marketing to millennials may sound overplayed, but in the interior design world, there are only a few moments in someone’s life when they actually make a lot of major purchases: The move to college, the move into the first home, the onslaught of kids, and the pared-down, empty nester life.”
We’ve gone from a product-centric approach to a customer-centric one. The change has spurred differences on how companies pursue product development. The customer’s feedback is critical, and so, product cycles are keyed into them. Agile, Lean, Scrum, are familiar terms by now.
Still, not all processes have changed. Many parts of the overall product development cycle remain the same. Interviews, feedback gathering, observation techniques. And the core should stay the same, for the simple reason that, despite generations, we’re still human.
Nonetheless, as I stated before, most people don’t acknowledge how different Millennials are. So, while human behavior is still common to everyone, the way to measure and detect it is different.
This is why a product like Studio Connect caught my attention. While the design team is deep in Design Thinking territory, they realized they needed a new approach to engage with their audience.
“For any goods Target develops, it takes a firm design thinking approach. By spending time working with customers, the team identifies pain points, prototypes a solution, and then it iterates with wave after wave of consumer feedback.” Made By Design is Target’s big bet on minimalism
Traditional focus groups or interviews aren’t cutting it anymore. Images and video do. Stories do. Why not use the same approach that powers the largest Millennial Social Network? Why not replicate what Instagram or Snap have achieved? Could it be used to drive the attention to problems we want to solve internally?
That’s precisely what Studio Connect has done. And it seems it’s been very successful with it. It’s a brilliant move: bring what resonates with their behavior and focus it on our products.
While the article doesn’t grant much detail, I’m pretty sure the system enables powerful microtargeting and additional bells and whistles that help on the feedback gathering phase.
As childish as it might look, you can’t fight new fires with the tools of old. You also need to improve and update the internal approach to Design Thinking. I would argue that new feedback-gathering tools are just one area; we’ll see others getting upgrades, like remote co-design tools.
Millennials are driving changes around product expectations. They want cheap, high-quality, well-designed products. And they want them now. There is a certain entitlement element that’s driving every single design trend. From Target’s Made by Design to things like WeWork’s office spaces. Millennials know they can complain and that their hordes of followers will support them. Brands will kneel, and they’ll get what they want. For free.
I won’t discuss the morals of such entitlements. The fact remains that companies are designing their products to fit the bill. They require not only quicker product cycles (Agile), but more informed feedback (Lean/Design Thinking). On top of that, they need to find ways to keep the quality while lowering their production costs. This is where disruptions like 3D Printing come into play. Last, they want their products, and they want them now. Last-mile delivery and robust logistics are the final pieces of the Millennial-Product-Design-Cycle (M-PDC).
Those companies that don’t invest in each of these steps will find themselves at a disadvantage and will be removed from the market. The game isn’t to advance in one, but to push all four aspects simultaneously.
Every company focuses on some new aspect, but sometimes we forget the most elemental one: use consumer product dynamics to aid the design process. I expect more companies to take a page from Target’s manual and turn to products like TBH, Snap, Tinder, Wattpad, Twitch, Venmo or Uber for inspiration. What would a product-testing Twitch look like? Would something like Instagram Stories work for employees? Could we do an Uber for physical products?
It’s all about rethinking how we do what we do, in the most engaging way for our target audience. And the target audience has changed a lot. Not only is it different, but their new behaviors are spreading, and fast, to other segments.
Our world is changing and changing fast. Some transformations are straightforward. Others are the consequence of the convergence of certain trends.
The electrification of transport and the rise of Electric Vehicles (EVs) is one such evolution. EVs are by no means new. They’ve been around for a while now. However, the confluence of three big mega-trends has accelerated their growth and deployment worldwide.
On one side, we have the continuous trend towards a more sustainable energy footprint. In this case, this means the rapid growth of renewable energies.
Despite Trump’s EPA perversion, most countries understand the need for a sustainable and clean environment. Renewable energies are becoming critical to achieving such goals. On top of that, clean energies are becoming of enormous importance in the geopolitical arena. Most countries want to break free from the tyrannical shackles of fossil fuels and the political games attached to them.
Another significant trend is related to health. As major cities keep attracting population and turning into megacities, traffic, pollution, and air quality are becoming crucial for further growth. The need for smarter and cleaner transportation systems is one of the major challenges.
The last trend is the final piece of the puzzle, the rise of ride-sharing services and Autonomous Vehicles (AV). As ownership declines and car fleets increase, the need for sustainable fuels grows. In this case, it’s not just a health issue, but an economic one. If I need to maximize miles per day, I also need the cheapest (government incentivized) fuel for the vehicles. And surprise, surprise, those are Electric Vehicles.
As I said before, EVs existed before, but the intersection of these three global trends is supercharging their growth. The fastest adopters will be, as usually happens in innovation, developing countries. And the apparent poster boy is China.
Despite the acceleration of the industry, massive adoption is still hindered by several factors. The first is the limited range of EV models on offer. If you want something semi-usable, your choices are so far pretty much limited to about ten models.
The second issue is known as range anxiety. The lack of a decent driving range without the need to recharge is still a limiting factor. I recently asked a friend, and he confessed he sold his EV after a year due to this and other complaints.
Range restrictions are a byproduct of battery capacity and charging-station infrastructure. The current infrastructure is still rather brittle, and it makes it tough to use an EV beyond your neighborhood.
Fast changes ahead
The industry, though, is evolving fast. This year and the next will be significant in terms of new models. All the major automakers will push novel EV models into the market. This new wave will dramatically increase the offering. New versions will come with better batteries (longer driving ranges) and faster-charging capabilities (from one-hour charges down to 15 minutes).
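To see what those charging-speed claims imply, here is a rough calculation. The battery and charger sizes below are illustrative assumptions, not specs of any particular model:

```python
# Idealized charging-time estimate: time = battery capacity / charger power.
# All figures are illustrative assumptions, not data from any specific EV.

def charge_time_minutes(battery_kwh: float, charger_kw: float) -> float:
    """Full-charge time in minutes, ignoring charge taper and losses."""
    return battery_kwh / charger_kw * 60

# A hypothetical 60 kWh pack on a 7 kW home charger vs. a 150 kW fast charger:
home = charge_time_minutes(60, 7)     # roughly 514 minutes: an overnight charge
fast = charge_time_minutes(60, 150)   # 24 minutes: a coffee-break charge
```

Real charge curves taper off near full capacity, so these are lower bounds, but they show why the jump from 7 kW home plugs to 150 kW stations changes the experience entirely.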
Battery costs are falling rapidly, and that’s allowing bigger and more densely packed batteries for every maker, not just Tesla. That’s pushing the mileage of most models into a decent range, so expect significant gains over the next two years.
The market is already feeling these improvements, and sales are growing exponentially. One of the major drivers for many is the existence of car-sharing EV fleets operating in their cities. Getting used to driving an EV every day is a great way to remove the fear of owning one. The other push is coming from China, which is making EVs a national priority.
Charging stations and general EV infrastructure are significantly improving too. The last 18 months have shown a rapid expansion of most charging networks. Not only are they growing in terms of locations, but also in charging speed and available charging stands per station. These rapid improvements are the underpinnings of the quick growth the industry is experiencing.
Despite the good news, some demographic segments are still underrepresented. While EVs are becoming nice conveniences, especially in cities, they still make things hard for families. There are no present or future plans to support the needs of family transportation, which accounts for a big chunk of long-range driving. It’s a segment that requires not only capable EVs, but a supporting infrastructure for kids, not lonely chargers in the middle of a deserted plain.
But not everything is there yet. Yes, it’s growing, but there are still big roadblocks for significant adoption.
The location of most EV chargers is still a problem. The necessary infrastructure requires three different settings. First, home chargers, which are, well, the plug you have at home. These tend to be slow, which is OK if you have all night to charge.
When on the road, most EVs need destination chargers. These are charging points you can use at your final destination. Think of your office, a mall, the airport, a hotel, etc. These are a mix. Most are still slow chargers, which, depending on the case, might be OK or not. The time we spend in a mall is not the same as the time at the office.
The last location is what I call on-the-road chargers. These would be the equivalent of gas stations in major highways. To be able to extend the EVs range and take them out of the city, we need to have charging stations on the way to long-distance destinations. These, in particular, require fast chargers to avoid long waits and stops.
So far, big cities are more or less well covered with a mix of home and destination chargers. Many of these city charging points, though, have somewhat tricky access. Either they’re private, in hard-to-reach spots, or behind closed doors. Charging stations should be fully accessible if the industry wants to keep growing.
When on the road, the location of intermediate charging stations is also a problem. Most chargers are located haphazardly. The reason is ownership fragmentation. The cost of some charging stations is so low and the barrier to entry so small that any business can set one up. As it’s “unplanned” and there is no follow-through, these stations tend to be poorly located, hard to spot, and very unreliable.
Slow charging stations are excellent for home or long-term waits. But when on the road, there is a need for fast charging stations. These are expensive. They can cost north of 50,000 euros a piece. The steep price and the lack of market traction are among the reasons why they’re scarce. Until recently, the only reliable fast-charging network was Tesla’s.
Fast charging though isn’t unique to long distance drives. The nascent ride-sharing industry is one of the largest customers. The need to put the shared EVs back into operation as fast as possible is a big operational must. Fast charging is, therefore, a much-needed feature for fleet operators.
On top of the random locations and lack of capillarity, most stations have far too few chargers. The average station holds two. This is insufficient to serve the growing EV market. Tesla understood this from the onset. They’ve been upgrading their stations and fitting them with 10+ chargers each. The exciting thing is, they’ve done it ahead of the market growth, not after.
Last but not least, most charging stations sit in locations without many amenities. The fastest charge we can get now is roughly a 20-minute one. Even if this were the norm, with no services around, it could get frustrating. Stations need to foster a service ecosystem around them, and this part will take time.
While Electric Vehicles consume electricity from the grid, their consumption is small compared to other energy-hungry appliances.
However, as the number of ride-sharing companies increases and the markets get flooded with EVs, things will change. One of the current worries is the overload of the low voltage grid.
Most home chargers connect to the low-voltage grid. This network wasn’t designed to support such a load. If a rapidly growing number of EVs crop up in the market and pull from the low-voltage grid, they might take it down.
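A back-of-envelope sketch shows why this is a real concern. None of these figures come from the text; they are assumed but typical values for a neighborhood low-voltage transformer:

```python
# Sketch: why clusters of home chargers strain the low-voltage grid.
# All figures are illustrative assumptions (typical European values).

TRANSFORMER_KVA = 400     # a common neighborhood low-voltage transformer rating
BASELINE_LOAD_KW = 250    # evening household load already on that transformer
HOME_CHARGER_KW = 7.4     # single-phase 32 A home charger

def evs_until_overload(simultaneity: float = 1.0) -> int:
    """How many EVs charging at once exhaust the transformer's headroom."""
    headroom = TRANSFORMER_KVA - BASELINE_LOAD_KW
    return int(headroom / (HOME_CHARGER_KW * simultaneity))

# If everyone plugs in after work, about 20 EVs use up the headroom:
print(evs_until_overload())     # -> 20
# If only half charge at any given moment, the street supports twice as many:
print(evs_until_overload(0.5))  # -> 40
```

Twenty cars is a couple of apartment blocks, which is why utilities worry about uncoordinated evening charging long before EVs dominate the fleet.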
Fast chargers are typically connected to the medium-voltage network. This gives them much more power, improves reliability, and allows for multiple chargers without a loss of power. Nonetheless, this kind of grid connection isn’t always available. It might require permits, or it simply might not exist in certain places.
EV fast chargers might create the need for a dedicated electrical grid just to support them. Japan’s TEPCO had to build such a dedicated network to support the growing number of their CHAdeMO fast chargers.
One of the proposed solutions to accelerate the expansion of such fast charging networks is the use of solar energy. As stated before, EVs are not power hungry, but they draw enough to place some strain on the grid. What if we could create an off-grid energy independent unit? It wouldn’t need a connection to the electrical network. It would generate enough electricity to power all fast chargers and store the extra in stationary batteries for its use when the sun is gone.
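A rough sizing sketch of such an off-grid unit follows. Every figure is an assumption for illustration, not engineering data:

```python
# Sizing sketch for the off-grid, solar-powered fast-charging idea above.
# All numbers are illustrative assumptions, not engineering data.

SESSIONS_PER_DAY = 30           # charging sessions the station serves daily
KWH_PER_SESSION = 40            # average energy delivered per session
SOLAR_YIELD_KWH_PER_KWP = 4.5   # daily yield per installed kWp in a sunny region

daily_demand = SESSIONS_PER_DAY * KWH_PER_SESSION       # 1200 kWh/day
pv_size_kwp = daily_demand / SOLAR_YIELD_KWH_PER_KWP    # ~267 kWp of panels
night_share = 0.4                                       # demand after sunset
battery_kwh = daily_demand * night_share                # 480 kWh of storage

print(round(pv_size_kwp), battery_kwh)
```

A ~267 kWp array is a large but buildable canopy over a parking lot, which is why the off-grid idea is plausible for highway stops, where land is cheap, and much harder inside cities.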
Economics of charging stations
For many years, Electrical Vehicles and renewable energy had the economics against them. As with any disruptive technology, the initial investment is massive. As prices of components go down and efficiency goes up, the overall investment decreases and allows a reasonable return.
We are at that point. Electricity prices, EVs’ rising market share, and the push and drive of the underlying trends are making this industry profitable.
As with any brick-and-mortar business, the initial investment is still significant. For a decent fast-charging stop with four charging stations, we could be looking at an initial investment of 360,000 euros. Assuming the current average electricity price (in Spain) and a 50% usage of the station, we would break even in roughly two years.
There are many ifs in such an equation, but it’s not an outrageous investment. Still, taking the upfront cost into account, it’s understandable that there aren’t many fast-charging networks out there.
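The back-of-envelope above can be sketched as a quick calculation. The investment and utilization figures come from the text; the charger power and per-kWh margin are my own assumptions, chosen to be consistent with the roughly two-year payback:

```python
# Break-even sketch for a four-charger fast station (figures per the text:
# 360,000 euro investment, 50% utilization, ~2-year payback). The charger
# power and the per-kWh margin are assumptions, not data from the text.

INVESTMENT_EUR = 360_000
CHARGERS = 4
KW_PER_CHARGER = 50          # assumed fast-charger power
UTILIZATION = 0.5            # station busy half the time
MARGIN_EUR_PER_KWH = 0.20    # assumed sale price minus Spanish electricity cost

kwh_per_year = CHARGERS * KW_PER_CHARGER * UTILIZATION * 24 * 365
years_to_break_even = INVESTMENT_EUR / (kwh_per_year * MARGIN_EUR_PER_KWH)
print(round(years_to_break_even, 1))  # -> 2.1
```

Halve the utilization or the margin and the payback doubles, which is exactly the risk that kept early operators out of the market.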
There is still a remaining question: should we charge drivers for the electricity they draw? Tesla’s network is free for most of their users, but as many point out, that’s not sustainable. In a way, it resembles companies like Uber, which subsidizes an artificial supply of drivers. Tesla has been doing the same with their stations. The question is always for how long. My prediction is that it won’t be long before Tesla starts charging every single user. The market is hitting the tipping point, and it’s at a place where, even when drivers are charged, the cost is still half that of gas.
Meanwhile, some startups, like IMPACT’s Spark Horizon, are trying a different business model. Why apply the old pay-per-use model? Why can’t we displace the cost to interested advertisers? That’s Spark’s proposal, one that’s being received with open arms so far. It does make sense. I have the driver’s attention for at least 20 minutes. Why not engage with them somehow and create a lasting impression? Not only that, as EVs are digital, it’s easy to tie users to their online persona and create targeted experiences tuned for them.
I don’t think it will work everywhere, but I can’t shake the notion that in the future we’ll see hybrid business models operating in the space.
The rise of the competitors
As I mentioned before, Tesla isn’t the only player anymore. For all the criticism Tesla gets, they’ve done outstanding work. Not only were they pioneers in the space; they’ve also set many of the behavioral standards.
One of their smartest moves was to deploy a free-for-all fast-charging network. While some debate the long-term sustainability of the system, I would refer back to what I wrote about the difference between finances and impact. Tesla might crash financially, but they’ve set up the norm. And it’s not going away. It’s their moat and lock-in strategy, and they’re winning at it.
Not only do they have the most extensive network, but they also have the fastest chargers, the best locations, and the most pleasurable designs. The whole Tesla experience is orders of magnitude ahead of anyone else.
That said, the war is on. New entrants are picking up ground, and the once unsexy market has turned into a new format war. The competitors are coming up with new charging standards, all different from Tesla’s. This standard fragmentation is making the problem even bigger.
Right now there are, at least, three different standards: Tesla’s, CHAdeMO (Japan), and CCS.
“The CHAdeMO Association made available the specifications for charging up to 200 kW but – at least in Europe – no vehicles have been announced that support CHAdeMO charging at more than 50 kW. It remains to be seen how CHAdeMO will develop in Europe since the CCS standard is increasing its market share very quickly.”
It reminds me of the VHS vs. Betamax wars or, if you’re a little younger, the HD DVD vs. Blu-ray conundrum.
On top of these standards, the latecomers are building their own charging networks to try to compete with Tesla: companies like BMW with their ChargeNow network, the E.ON-Clever association, Fastned’s Netherlands network, the IONITY consortium, or Porsche’s independent Mission E network for North America. Some are doing better than others, but what’s clear is that there’s a need for a unified, established system across vast geographies.
The truth is, Tesla is still three to four years ahead of everyone else. It’s not only their superior charging network; it’s their holistic vision. Not only do they have superior chargers, better driving range, and better design. They have better navigation systems, Autopilot, home stationary batteries, and vertical energy integration through SolarCity. And this is just what we know of.
But maybe the essential asset for Tesla isn’t their technology, but their brand. A quick look at all the EV online publications reveals the big truth: Tesla occupies three-quarters of all the news around EVs. That, beyond all their innovation, is their biggest strength. One that’s hard to match; one that few are contesting.
Electrical Vehicles aren’t just for driving, though. They are sophisticated digital machines with powerful batteries. This combination might be useful, not only for driving but for energy regulation.
An increasing number of regions are fast forwarding their renewable energy strategies. Oil and Gas dependency is becoming a costly and dangerous drag.
One of the problems with renewable energies, though, is their intermittent generation. This inconsistency creates several problems, including overgeneration, undergeneration, and very pronounced up-ramp and down-ramp effects.
“Given the intermittent generating profiles of renewables such as wind and solar, unique challenges arise from increasing renewables generation to maintain grid balancing (the matching of supply and demand), which is critical for maintaining the reliability of the electricity grid.”
To avoid such effects, there is an increasing need for stationary batteries that can hold the energy during overproduction and release it during underproduction phases.
This need for better control of supply and demand is giving rise to what many are calling the Smart Grid: an electrical network that can regulate itself and optimize its energy generation sources.
The Smart Grid is a whole world of its own, but Electric Vehicles might have a predominant place in it. Would it be possible to employ EVs as smart batteries? This is a concept known as Controllable Load. If, instead of charging EVs in a dumb fashion (one-way), we could turn charging into a two-way street (Vehicle-to-Grid, or V2G), then we could turn EVs into Smart Grid appliances.
These V2G operations could, in theory, enable us to regulate the problems arising from peaks and ramps due to Renewable Energy usage.
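A minimal sketch of what such a V2G dispatch rule could look like. The thresholds, state-of-charge limits, and the function itself are hypothetical illustrations, not any real protocol:

```python
# Minimal sketch of a V2G (Vehicle-to-Grid) dispatch rule: the EV charges
# when renewables overproduce and feeds back during demand peaks. The
# thresholds and reserve limits below are illustrative assumptions.

def v2g_action(grid_net_kw: float, soc: float,
               min_soc: float = 0.3, max_soc: float = 0.9) -> str:
    """Decide the EV's role from grid balance (+ = surplus) and state of charge."""
    if grid_net_kw > 0 and soc < max_soc:
        return "charge"       # soak up the renewable surplus
    if grid_net_kw < 0 and soc > min_soc:
        return "discharge"    # shave the peak, keeping a driving reserve
    return "idle"

print(v2g_action(+50, 0.5))   # -> charge
print(v2g_action(-30, 0.8))   # -> discharge
print(v2g_action(-30, 0.2))   # -> idle (the driving reserve is protected)
```

The reserve floor matters: without it, the grid could strand drivers with empty batteries, which is why real V2G schemes negotiate limits with the owner rather than dispatch blindly.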
This space is only picking up now, but I wouldn’t be surprised if different states start incentivizing the use of Smart Grid devices. Up until now, the incentives were mostly cosmetic. Let’s save the environment. As we increase our sustainable energy footprint though, the motive becomes more of an operational one.
I expect a new wave of startups sprouting in this space with new ideas for smart grid washing machines, dishwashers, Roombas, etc. In this regard, some startups are already working in the area, including some exciting use of Smart Contracts and Blockchain technology. Keep an eye on it, because I expect many more coming up during the next two years.
The Ride-Sharing Tide
One of the most exciting convergences with the whole electrification effort is the rise of ride-sharing companies. The advent of this new consumption model is changing the way we understand transportation. We’re moving away from an ownership model and into a pay-per-use one. The optimizations are rather apparent, but the change has a broad impact on many industries.
The obvious one is the incentive to retrofit ride-sharing fleets with EVs. It’s not only the reduction in fuel costs, but also the mechanical simplification that EVs provide. Let’s remember that ride-sharing fleets are exposed to much higher use than a regular car. This means that the fewer parts the vehicle has, the lower the cost of maintenance. So, ride-sharing companies have a big incentive to become not only the largest owners of EVs, but also their most prominent evangelists.
The growth of such fleets casts doubts on the personal EV ownership model. While early adopters are buying EVs, I’m not sure the public at large will. Why would you want to own a car when the city is already blanketed with ride-sharing EVs? Even more pressing is the fact that, when using a ride-sharing platform, I don’t care about charging issues. The fleet managers take care of recharging the EVs.
The change of model shifts the interest from owners to fleet managers. So, there isn’t much need for home chargers, but rather for fast, dedicated chargers for the fleet.
Renewable Energy Geopolitics
Another more significant question is how these countries, cities, and states will take care of the new power demands. As I stated before, while the current energy footprint of EVs is rather small, its future expansion will be significant.
The need for cleaner energies, and more importantly, the breaking of fossil-fuel dependency, is high up on most governments’ agendas.
The search for new sustainable alternatives is unleashing a significant shift in the geopolitical arena. In the same way that fossil fuel deposits are geographically bounded, so are the capacities for renewable energies.
Despite the need, not every country can generate the same amount of energy, and this gap will widen during the next few years. A good example is Japan. Despite how advanced they are in sustainable energy generation, their room for improvement is small.
“Japanese telecom firm SoftBank partnered with Russian, Chinese and South Korean utility firms Korea Electric Power Company, State Grid Corporation of China and PSJC to develop a Global Energy Interconnection (GEI) system.”
The EV ecosystem is growing, but there are still major challenges. And where there are challenges, there are also opportunities. The general feeling I’m getting is that more and more entrepreneurs are finally jumping into the field. I expect some innovative approaches to many of the challenges coming up soon enough.
An obvious starting point is the software aspect, from smart navigation to traffic predictions for EVs. I already see some startups competing with Tesla’s navigation system. Intelligent tools for EV drivers like IMPACT’s ChargeTrip are an excellent example of this. One which, by the way, is a perfect acquisition target for some of the new entrants.
Another space I mentioned before is off-grid energy generation. Over the years, I’ve seen several projects in this space. Most of them were aimed at developing regions like Africa. Nonetheless, the creation of alternative grids to support EVs, Autonomous Vehicles (AVs), or drones is a space that might drive this vertical.
Smart Grid devices, as mentioned before, are another potential vertical. The convergence of IoT and Smart Grid software will start to make sense pretty soon. The space will mature, and the solutions will turn more plug-and-play.
EV ecosystem services are a space that isn’t being worked on yet. Yes, until now there hasn’t been much traction, but that will change, and fast. Creating services like food delivery, content and entertainment, or family-friendly attractions could become very lucrative.
One last idea that might be worth exploring is smart renewable-energy prospecting. In the same way that oil conglomerates or mining companies have their specialized prospecting teams, why not the same for renewables? Where do we place an EV charging station to maximize solar energy production? Where do we set up a stationary battery park to supply energy for peak demand? And wind? And geothermal? When? Where? How? Can we take advantage of a region and then sell the electricity to a power-hungry neighbor?
As far as I know, many of these decisions are still made manually. There are plenty of AI-based optimizations that can generate better yields and accelerate the expansion of EVs and renewable energy sources.
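As a toy illustration of what such AI-aided prospecting could look like, here is a greedy site ranking. The candidate sites and their scores are entirely made up; real tools would work from irradiance maps, grid-connection data, and traffic models:

```python
# Toy version of the "renewable prospecting" idea: rank candidate charger
# sites by a combined score. The sites and scores below are invented for
# illustration; real systems would use irradiance and traffic datasets.

candidates = {                      # site -> (solar yield score, traffic score)
    "highway_exit_12": (0.90, 0.80),
    "mall_parking":    (0.60, 0.90),
    "rural_plain":     (0.95, 0.20),  # great sun, but nobody drives by
    "city_garage":     (0.40, 0.70),
}

def rank_sites(sites: dict, budget: int) -> list:
    """Greedy pick: maximize the product of solar and traffic scores."""
    scored = sorted(sites, key=lambda s: sites[s][0] * sites[s][1], reverse=True)
    return scored[:budget]

print(rank_sites(candidates, 2))   # -> ['highway_exit_12', 'mall_parking']
```

Even this crude product-of-scores rule captures the key insight: a sun-drenched plain with no traffic loses to a decent spot where drivers actually stop.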
The electrification game is real, and it’s moving fast. The tipping point is fast approaching, and we’re about to see drastic changes in our transportation behaviors. Like I stated at the beginning, Electric Vehicles don’t happen in a vacuum. The convergence of other significant trends is the driving force for the change. Don’t look at the EV market in isolation but as a needed tool to deploy other substantial and holistic strategies.
Never before have there been so many startups cropping up worldwide. Some of them will undoubtedly disrupt their markets. They’ll take innovative technologies and use them to push disruption forward. And that’s good. We need change. We need things to keep improving.
However, despite their numbers, very few will deliver solutions to humanity’s most significant challenges. One of these challenges, that of sustaining our environment, is the topic of Rachel Carson’s extraordinary book Silent Spring. Published in 1962, it brought the dangers of pesticides all the way to the White House.
While I was reading it, I became curious about the current state of affairs. I wondered how much we had improved since the 60s. The answer was staggering, not much. Virulent, toxic pesticides are widespread. Water, crops, forests, and animals are widely polluted. The most shocking discovery wasn’t to learn how toxic these compounds are, but that, even though we now know about them, they’re still broadly used.
“The fact that chemicals may play a role similar to radiation has scarcely dawned on the public mind, nor on the minds of most medical or scientific workers.
Although chemical manufacturers are required by law to test their materials for toxicity, they are not required to make the tests that would reliably demonstrate genetic effect, and they do not do so.”
Carson, Rachel. Silent Spring (1962)
One would think we’ve become better, and while we’ve banned many of these highly toxic compounds, similar ones are extensively used in developing regions like China, India, or Africa. Let me remind everyone that most of our food, clothes, and products come from developing countries.
“[…] Pesticides are responsible for an estimated 200,000 acute poisoning deaths each year, 99 per cent of which occur in developing countries, where health, safety, and environmental regulations are weaker and less strictly applied. While records on global pesticide use are incomplete, it is generally agreed that application rates have increased dramatically over the past few decades.”
There is an increasing body of work that links the pollution of our environment with deadly diseases. Cancer, dementia, autism, and infertility are some of the conditions related to pesticides.
The irony is, there are plenty of biotech companies developing biomarker detectors for cancer and dementia, and whole startups devoted to improving your fertility. Nonetheless, there are very few companies trying to tackle the root of the problem: toxic chemicals in our environment. And it is shocking and sad and discouraging.
While we keep on chatting away on our smartphones, our interactions with nature, water, food, and air, keep becoming deadlier. Toxicity of our own design.
One of the most significant challenges for humanity is connected to the substrate of all life: water. Water (H2O) is essential to the whole natural ecosystem. The planet can’t sustain life without it. And even if we’re ignorant of the system at large, we should be selfish enough to care about our own safety.
Do you know the Coachella music festival? Last year they had to postpone camping due to a toxic cloud emanating from the Salton Sea, one of the most contaminated bodies of water in the US. The poisonous lake is just 15 miles away from the camp.
“Government officials acknowledge the daunting challenges ahead for water utilities. In the final months of the Obama administration, the EPA’s Office of Water published a report highlighting aging infrastructure, unregulated contaminants and financial support for small and poor communities as top concerns for drinking water quality going forward.”
In Europe, matters are slightly better, but not by much. In 2000, Europe approved its most ambitious environmental regulation, the Water Framework Directive. The WFD’s goals for 2015 were missed by a landslide.
“However, fifteen years after the WFD was introduced, achieving its objectives remains a challenge, with 47% of EU surface waters not reaching the good ecological status in 2015–a central objective of EU water legislation (European Commission, 2012a). During the first WFD cycle, which operated from 2009 to 2015, the number of surface water bodies in “good” state only increased by 10% (van Rijswick and Backes, 2015).”
When you look at the state of the art in this space, the feeling is one of dismay. For all the talk around IoT and Industry 4.0, there are very few high-tech early-warning and detection systems for water assets. On top of that, there seem to be very few startups working on fast, cheap, and portable water-pollutant detection devices.
There is plenty of research in this area. Many scientists are developing new ways of detecting harmful compounds in bodies of water. However, most of these either stay in the lab or are rarely commercialized.
It’s not hard to understand the reason. Two significant factors are dragging the field down: the need for lengthy research and the lack of consumer markets. It’s easier to develop a new marketplace for pets than to bring five years of research to market within an acceptable price range.
For all the talk around startups, there seems to be an unwillingness to drive research into the market. I understand the allure of fast and quick flips. People do startups for the fun and glory. They want to be portrayed as the next Facebook. Nevertheless, few want to get their hands dirty and do challenging things.
And that’s the key, technology’s democratization is enabling a more extensive range of actors. Still, risk-takers, pioneers, visionaries, remain in low supply. As the pond gets wider, challengers get diluted within the mass.
There is one country though, that is consistently challenging itself. And that country is, no surprise here, Israel. Their capacity to bring research into the market is unparalleled, and it’s paying off.
Out of all the water-related startups I reviewed, only one, Israel-based Lishtot, resembles anything like a consumer product. It’s not surprising they won the TechCrunch Best Gadget award at CES 2018. But maybe the most astonishing part is that we feel it’s groundbreaking. And it is, don’t get me wrong, but it should be the norm, not the exception.
When it comes to food, it’s even worse. In Europe, if you live in a city, chances are, your local water agency treats and monitors the status of your tap water. Sometimes it tastes better than others, but you know you have a low chance of getting sick.
Food is a different story altogether. Testing food for pesticides, herbicides, or insecticides is hard, slow, and expensive. It’s impossible to check every ingredient that gets produced with current detection methods, much less test it for all the thousands of contaminants it can get exposed to. On top of this, it takes years for food agencies to add new compounds to the forbidden lists. Meanwhile, you, your family, your kids, are eating slow poison. So quiet, that it might take decades to turn mortal. But kill it will.
“Most malignancies develop so slowly that they may require a considerable segment of the victim’s life to reach the stage of showing clinical symptoms.”
Carson, Rachel. Silent Spring. (1962)
There are two big fronts in the space. On the one side, you want to attack the problem at its root. You want to produce food that’s free of toxic chemicals and grown in clean soils. Here is where the big picture is essential. It’s not only about not using toxic pesticides. It’s about making sure the surroundings are free of those pollutants too. This last part is the hardest to achieve. Streams, rain, soil, underground water reservoirs, are all polluted. Growing anything removed from it is hard.
This is one point Agrotech startups are trying to tackle. We’re watching an increase in investments in sustainable and organic food-producing vertical-farming startups. We’re going to need many more. And soon.
The other side, though, is still a problem. At current consumption rates, it’s impossible to feed organic food to everyone. So far, inequality is striking the food chain again. Only people of a certain socio-economic level have access to fresh, organic, and expensive ingredients.
“Despite lower yields, organic agriculture is more profitable (by 22–35%) for farmers because consumers are willing to pay more.”
Beyond the ethical aspects, most farms are producing organic food because it carries a premium in the market. The motivation is, for many, still economic. The question is: could we put pressure on producers and manufacturers from the consumer side?
That’s where food contaminant detectors come into play. If we could test our food at home, we could not only protect our health but also put a massive amount of pressure on the industry. A pressure so vocal that it can, hopefully, turn the tide.
There are, once again, few startups working on bringing early detection technologies to market. The reason is similar to the one on the water front. It’s expensive and hard to bring new detection methods like biosensors or Near-Infrared Spectroscopy (NIS) systems to market. But beyond the difficulty of the task, there remains an absolute naivety about what matters.
The winner and finalists of the first food-detection competition are impressive. On one side there is Spectral Engines, a Helsinki-based startup focused on commercializing their modular NIS system. The runners-up were Consumer Physics’ SCiO scanner and Tellspec. SCiO is, surprise surprise, an Israeli startup with money from, surprise, Khosla Ventures, one of the top impact investors in the world. Their pocket-size spectrometer for smartphones is impressive.
Other companies are working in the field, like the Israeli Inspecto or the Taiwanese ITRI HS3D device. The first isn’t available yet; the latter is still quite limited.
As I said, developing these devices is very hard, but the field would advance rapidly if more startups focused their aim on it.
Are startup seeders focusing on the right challenges?
One of the striking patterns is the difference in the selection processes of some startup seeders. Startups don’t grow out of nowhere. In most cases, they’re incubated, accelerated, and invested in by the ecosystem. As Silicon Valley loses its grip on the startup-ecosystem monopoly, new areas are rising.
But while there are new centers for innovation worldwide, not all display the same quality. I found two accelerator programs related to food tech (there are several others too). One is Startupbootcamp Food, out of Rome. The other is the Israel-based The Kitchen Hub by Strauss, the largest food corporation in Israel.
I have no data to determine which is the better accelerator. There are many factors you could measure them against. But one thing stood out to me: the difference in the problems their startups are tackling.
An intelligent coffee brewer vs. a pesticides detector. Hydroponic urban farms vs. in-vitro clean meat growing. Smart stock management system for restaurants vs. enzymes for healthier fruit juices.
It’s hard not to see a pattern. I feel Israelis are much more grounded in research and in tackling human challenges than people elsewhere. I’m not against an intelligent coffee brewer, but it’s, if you wish, a nice-to-have, not a life-threatening problem like pesticides.
This is a global trend. Many organizations encouraging entrepreneurship aren’t targeting worthy challenges or merging research with product development correctly.
Increasing our challenge perception
I am not surprised by the lack of focus or risk-taking. There is a global shortage of imagination that correlates closely with echo chambers. People don’t travel. People don’t read. People don’t explore. People don’t research.
We live in a frugal society where every minute matters. Anything that takes more than two days to achieve is skipped over. On top of that, and despite the constant warnings, people keep holding a narrow view of the world.
It’s hard to connect themes, trends or challenges if your model of the world is reduced to your elite brotherhood. It’s tough to see beyond the trees when you refuse to unfocus. Heads-down, constant execution is the technology mantra. And it’s excellent advice. But, as I mentioned in other articles, maximum optimization is a bad strategy. It makes you laser-focused, but at the cost of other valuable assets, as well as unseen connections.
The key is being able to switch gears. Focus and execute, but also act with a systemic view of both the problem and the solution we’re tackling. Break out of our comfort zone, hear different voices, travel to distant cultures, live other lives.
One of the worst aspects of this technology myopia is the insensitivity towards other human beings. If we lose our capacity to understand what matters, what’s essential and what’s irrelevant, then we’ll never focus on the right challenges. This moral balance is lost to many.
Can corporations take advantage?
Like other chapters in human history, the saviors might come from the most unexpected places, corporations.
Corporate employees have a compelling mix. They combine years of education and research with a deep understanding of their markets. Because they aren’t driven by the latest fad, or by the need to be acknowledged, they have a much more balanced view of the world.
The one thing they lack is the capacity to shake off the corporate business-model rhetoric. There is an increasing number of corporate employees who, if given the chance, would jump ship and start their own company. Many have years of research under their belts, and they’re looking for a haven to put it into play. They do need guidance, though. They do require startup discipline. This is why fiery entrepreneurs teaming up with corporate counterparts can build game-changing companies.
Another asset corporations can provide is long-term financing. The lack of focus on important challenges isn’t unique to entrepreneurs. It extends to many professional investors too. And there is a lot to be said about corporations behaving like VCs.
A corporation’s biggest asset is its long-term sustainability. This allows it to make long-term investments packed with research and aimed at significant challenges. Blending this with startup product-development approaches can deliver extraordinary results. Results that matter.
Corporate innovation is a bitch. It’s hard, for many reasons, but it is also a window into solving real challenges. While incumbents should approach startups, I feel we should be creating better connections between hungry entrepreneurs and local intrapreneurs.
The rise of Deep Technology
Deep Technology is the term being thrown around to define groundbreaking solutions to some of our most substantial challenges. It’s not a new theme. Critical voices have been calling for this for years. Vinod Khosla started Khosla Ventures 14 years ago and currently manages over one billion dollars in assets. He has been one of the most vocal voices in the innovation investment sphere. Since then, more and more funds have been focusing on impact investment. And more will join.
Some of humanity’s challenges aren’t optional anymore. It’s either finding meaningful solutions or crippling society. It’s still not clear whether we’ll be able to avoid planetary collapse, but let’s hope more people start tackling impactful problems, and not makes-my-elite-life-better-please-more-Ubers kinds of issues.
I’m sure some investors will be rolling their eyes just about now. And with good cause. Yes, companies are supposed to make financial sense, and there is an apparent reason for that.
Entrepreneurs start companies with the goal of generating value for society. The larger the value, the more their customers are ready to pay for it. The more painful the problem the company solves, the more money it can charge. The more universal the issue, the bigger the market.
That’s the theory at least.
Money isn’t only a ‘reward’ for bringing about value. That payoff allows the company to sustain its operations. In other words, it makes the value generation sustainable.
If we look at System Theory, it’s what we call a reinforcing feedback loop. The more value a company generates, the more money it gets to do more. If the company stops reinvesting, the money it makes will eventually decline.
However, on some occasions, entrepreneurs can’t deliver value without initial capital. This initial capital, or seed, is what System Theory calls a Stock. Investors are usually the ones providing the starting stock (money) of the system.
Their goal is to provide the kindling to get the system going. Once it has grown, investors can take a big cut of the enlarged money stock earned by the company, if everything went well. And that’s a big if. That’s why it’s called Venture Capital and not safe-and-easy-as-bonds.
The thing about feedback loops is that they’re rarely immediate. It takes time for a company to create value. This is what System Theory dubs feedback loop Delay. In the case of companies, this delay can range from near-immediate to decades.
As noted above, one of the main characteristics of Deep Technology is the long delay of this feedback loop.
The longer the delay, the more money the company will require to survive before depleting its capital, also known as going bankrupt, or system collapse.
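The stock-and-flow dynamic above can be sketched in a few lines of code. This is a toy simulation with purely illustrative numbers (the burn rate, delay and reinvestment gain are my assumptions, not data): a startup spends its capital stock every month, and the reinforcing revenue loop only kicks in once the market’s feedback delay has elapsed.

```python
# Toy stock-and-flow sketch of a reinforcing feedback loop with a delay.
# All figures are hypothetical, for illustration only.

def months_survived(initial_stock, burn_rate, feedback_delay, reinvest_gain):
    """Return the month the capital stock hits zero, or None if the
    company survives the whole 240-month horizon."""
    stock = initial_stock   # the Stock: starting capital from investors
    revenue = 0.0           # inflow driven by the reinforcing loop
    for month in range(1, 241):
        if month > feedback_delay:
            # Reinforcing loop: value delivered earlier returns as revenue,
            # and reinvested revenue grows the next cycle's inflow.
            revenue = max(revenue, 1.0) * (1 + reinvest_gain)
        stock += revenue - burn_rate
        if stock <= 0:
            return month    # capital depleted: bankruptcy / system collapse
    return None

# Same company, two different market reaction times:
fast = months_survived(initial_stock=100, burn_rate=10,
                       feedback_delay=3, reinvest_gain=0.5)
slow = months_survived(initial_stock=100, burn_rate=10,
                       feedback_delay=12, reinvest_gain=0.5)
print(fast, slow)
```

With a short delay the loop compounds before the stock is exhausted and the company survives; with a long delay the same company burns through its capital before the market ever reacts. That asymmetry is exactly the Deep Technology problem.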
Finance vs. Innovation
Most investors want this delay to be as short as possible. The quicker the company can show it generates value, the higher its worth. VCs measure this “valuation” both as the current profit the company receives from the market and as the potential profit of the next iteration (future potential).
To maximize this valuation, they pour in money to accelerate the iteration. On the one hand, investors want to increase the speed at which the company can deliver value (reducing the delay of the loop). On the other, they want to sustain the company for as long as possible until the market starts reacting.
Here is the catch, though. While companies can increase the value they deliver and the speed at which they deliver it, the delay of the returning loop is beyond their control. The market will react in its own time.
What’s obvious is that if the market takes too long to react, the company will run out of capital and die, dragging its investors down with it.
The question, though, is whether that’s bad for the system or just for some of the actors. From a financial perspective, primarily that of the company’s investors, it’s terrible. They’ll lose their money.
But is it bad for the system at large?
One of the aspects that makes Systems Thinking so compelling is the fact that all systems are connected. So while the system at large exhibits one behavior, in a subsystem you might witness something completely different (cf. Chaos theory).
Let’s forget financial sustainability for a second (a subsystem). If we focus on the impact of certain startups on the broader system, then a new picture emerges.
Startups, spurred by their investors, push the boundaries of innovation and try to provide ever-increasing value to the market. The effects of such innovations affect the end consumers and their latent behaviors.
For most failed startups, this effect is trivial and negligible. But a few do capture the market’s imagination. And while the market’s reaction, from an economic perspective, might be timid, it does react in other ways. One of them is by reinforcing an emerging behavior. A behavior that will stay in place independently of the startup that reinforced it.
Disruption at Work
An excellent example of this is WeWork. Many keep focusing on the risky and fragile financial position of the organization. And while they’re right to be wary of it, they’re missing the broader trend.
WeWork’s value lies in strengthening this new behavior. The demand for Co-Living, Co-Working, Co-Everything will remain in place, independently of WeWork. And this is the real power of Disruption: it doesn’t go away.
Worse than that, the incumbents laughing at the predictable fall of WeWork are not picking up on the new trend. Most will fail to see it, even after WeWork’s potential disappearance, and will implode as a result.
Another beautiful example is the recent drama at Vice Media. It’s easy to highlight the gross negligence of their lack of a business model. Nonetheless, their continuous innovations in media formats are shaping and shifting the market. It’s irrelevant that they aren’t the ones benefiting from them. Someone will, and when that happens, they’ll disrupt the market.
Sometimes we mistake financial soundness for innovation. The two concepts are connected, but as I’ve shown, they work at different speeds. Delays in the feedback loops that govern the system make them play out sometimes in tandem, sometimes independently.
Particular caution should be observed when the technology employed by such companies is disruptive. The very nature of disruption makes the system run at different speeds, and this will impact the competitive-landscape subsystem too. Some incumbents will collapse; some entrants will thrive.