I honestly don't know how to start today's article. I want to talk about ethics and values. Or lack of them. But I'm torn about it because most people don't find it relevant enough. It's not newsworthy. It's not about the next Facebook.
The mission of The Aleph is to bring insight beyond technology. I believe ethics is an essential component of the future we're heading into. Having the right set of values will determine our long-term survival, both as a business and as a species.
The other day, my children got a gift from a friend of the family. It was a book about a mole that gets poo all over his head. He gets mad and starts accusing all the animals of pooing on him. He seeks revenge. He finally finds the culprit, the family dog. He then poos on the dog and runs for safety. Sorry for the spoilers.
"The story of the Little Mole is a tale loved by children and their parents all around the world with more than a million copies sold!"
I can't even begin to describe how I felt about the book. Don't let go of your anger, look for the culprit and pay them back for what they did. An eye for an eye. But hide what you've done and run for the safety of anonymity. Those are the ethical values we're teaching. I'm not sure how anyone could "love" these principles. But it seems there is a market for them.
The same parents who buy such books are then astonished by my kids. Their tenderness, their willingness to help others and lend a hand to someone in trouble, surprises them. They always wonder why my wife and I are so lucky to have such good kids.
I don't believe in luck. Not in the sense most people infuse the word with. I believe in values. I believe in ethics. And it shocks me to see how other parents marvel at such a display of respectfulness. It always makes me wonder what, then, their standard for respecting others is. After reading the Harvey Weinstein testimonies, I wonder no more.
It's frightening to see the lack of social empathy, and the disregard for the consequences of their products, that some display. I would argue it doesn't come from a bad place. Most subscribe to the "Don't be evil" motto. Still, the unwillingness to assume any responsibility is disheartening.
Even within the technology elites, there is this unconscious belief that technology isn't complex. This is a fallacy. Technology is extremely complex. We've gone from a single program in an isolated computer, to a vast global network of multi-parallel computing units. The exponential growth of the system is unrivaled by anything previously built.
This proliferation has turned a deterministic automaton into a massive dynamic system. We can no longer apply deterministic logic to solve its problems. We need to take a much more abstract, systemic view of things. The system doesn't end with your homepage. Its boundaries extend way beyond your company's servers and into our social fabric. Ignoring this fact is one of the reasons for our current social upheaval.
But if understanding the shift of the system's boundaries is already hard, trying to predict the behavior of the whole is unrealistic. Still, we insist on our knowledge of what our software will do. We strive for scientific precision in a sea of uncertainty. The truth is, despite our best intentions, we can't predict most of the ramifications of our products. And not just that. While an error of judgment can be fixed in our software, the consequences of our lack of understanding will have long-lasting implications for society.
“You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”
And here is where the ethics and values come into play. When dealing with systems, there is no deterministic answer. We play with a scale of greys, not black and white. In such ambiguous situations, it's the founder's ethics that shine. It's the employee's values that come forward.
The lack of ethics, the twisted values we impose on our children, is reflected in the decisions being made by the next generation of founders.
This all comes back to the mole and the poo. If we teach our kids that revenge is right; that accusing people is the norm; that you should pay it back; that you should run from your acts; then those are the values founders will fall back on when they face unpredictability.
As I write these words, I feel I'm falling prey to age. That my words just reflect the maturity of a concerned dad. That generation after generation has said the same. The only argument I can add is that never before in history has a single individual had the amount of influence some technology founders have now. And while the risk of the absence of an ethical framework has always existed, it has never been as relevant as today.
The fact that Harvard, MIT, UT, Cornell, and Stanford are deploying ethics and regulation courses should be a warning sign. Still, I find it outrageous that only elite students get exposed to ethics around Computer Science. This isn't a problem limited to the privileged classes. And it's not a problem reserved for Computer Science graduates either. Systems thinking and ethics frameworks should be taught in school and at home, by every person in society. We are all responsible.
It's easy to fall prey to simplistic thinking and argue that we should apply technology to better people's lives. I reckon most innovators have their fellow citizens' welfare at heart. The truth, though, is that technology's complexity far exceeds our capacity to fully understand its ramifications. This intricacy is also reflected in regulation. Trying to regulate our shifting landscape with the uninformed opinions of politicians doesn't cut it anymore.
We are all responsible.
"The deed done, a happy and satisfied Little Mole disappeared back into his mole hole."
The Story of the Little Mole Who Went in Search of Whodunit.
Amazon is doing what they do best: applying technology to infrastructure problems. Once they've mastered their operations, they'll open them for others to use. Delivery As A Service.
My interest isn't so much in the service per se as in the reiteration of the same maxim. Businesses that don't place technology, and more specifically automation, at the heart of their operations will be ousted by technology incumbents.
Owning the logistic layer is becoming increasingly strategic in the battle for world domination. Amazon is but one example, but other industries would benefit from it too.
For example, Inditex, the retail behemoth, owns its logistics system and can fulfill orders in less than 48 hours. Apple is another example of critical logistics operations.
As delivery in less than 48 hours becomes the accepted norm, many of these private logistics operations will become outdated.
Companies built these systems in an era when the expected delivery window wasn't that critical. Not only was speed not essential; expectations were low too. If the system collapsed, no worries: outsource the delivery to a third party.
This situation is unacceptable today. Not only is the expected delivery window shorter, but third-party companies can't cope with the workload either.
This is why Amazon's move might have much more profound implications. It's not just about Amazon's reliance on UPS or FedEx. It's the fact that Amazon might become the underlying logistics operator for many industries.
Still, Amazon can't best a tightly run logistics operation. Now is the time to invest heavily in upgrading your business's logistics structure. The goal must be seamless scaling capabilities and decreasing delivery times.
There is a need for better automation, tracking, and autonomous technology. On top of this, predictive software, aided by Deep Learning, will become mission-critical.
There is also another space Amazon isn't aiming for, that other companies could exploit and defend. Last mile delivery.
It still surprises me how little interest and focus most companies pay to last-mile delivery. A user doesn't care how you transport something, as long as it gets to its destination fast enough. But that doesn't mean they don't care about the last-mile experience.
Companies need to focus on that part of the logistics chain. They need to create real-time last-mile tracking systems. Predictive models that can optimize the best routes for local deliveries. Automatic sorting, scheduling, and management of unsuccessful deliveries.
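To make the route-optimization idea concrete, here is a toy sketch: a greedy nearest-neighbour heuristic that always drives to the closest unvisited stop. Real last-mile systems use far more sophisticated solvers, so treat this as an illustrative assumption, not a production design.

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy route: from the current position, always drive to the
    closest unvisited stop. A toy stand-in for real route optimizers."""
    route = [depot]
    remaining = list(stops)
    current = depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Example: a depot at the origin and three delivery addresses
print(nearest_neighbour_route((0, 0), [(5, 5), (1, 1), (2, 2)]))
```

Even this crude heuristic usually beats visiting stops in arrival order, which is roughly what an unoptimized delivery operation does.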
Startups are dabbling in this space; I know of at least a couple just in Spain. However, not enough companies are taking this issue seriously. Product quality isn't the only metric anymore. Product-driven markets are long gone. The current status quo is customer-driven markets. And customers care about last-mile delivery. Customers care about the delivery experience. Customers care about delivery companies that deal correctly with 'unsuccessful deliveries'.
If you make me choose between a food delivery service that keeps my food warm and a cheaper one that brings it cold, I'm going for the warm one. Telling a customer they weren't home when they were won't be tolerated much longer.
So I hope Amazon's move serves as a wake-up call for all those businesses in need of robust logistics operations. Here are some key points companies need to think about.
Invest in upgrading your systems now or face ostracism.
Upgrading won't be enough, though. Today's fast-moving markets require serious innovation in the logistics layer.
Design user-friendly customer interfaces so that the end user can track last-mile delivery of your goods seamlessly.
Build last-mile predictive models to allow for better delivery routes and resource management.
Start investing in the use of autonomous vehicles. Both in the warehouses and for the last mile deliveries. Relying on humans for scale and speed won't cut it much longer.
As much as I love Amazon, they're becoming increasingly problematic. Their hold on cloud computing, voice control, and online retailing is complete.
This tight grip on vertical markets is creating a serious lack of competitiveness. Anyone who wants to go against them faces a nearly impossible task. Those who rely on the Amazon platform are subject to the unyielding grip of the retail titan. It's either sell through them or face oblivion.
As Amazon expands into other vertical markets, more industries will feel Bezos's wrath. Now it's the turn of groceries and logistics, and in the very near future, pharma.
Is there a company that can stand up to them? I am surprised to admit that there is. But you won't believe me. The most likely Amazon nemesis is called Walmart, and its effort is being spearheaded by one of the smartest minds in the retail space, Marc Lore.
The new new Walmart game
I'll give you a little more time to stop laughing. I had the same reaction. Am I serious? After looking at the research, I am. Walmart has made a 180º turn in digital strategy that is not only working but already putting pressure on Amazon. The emergence of a real contender is forcing Amazon's hand. The pressure is creating gaps that others are using to extricate themselves from Bezos's grasp.
Let's start at the beginning.
Despite Walmart's size, they've always neglected their ecommerce business. When you're the top corporation in the world and your annual revenue is 485 billion dollars (35% more than the second largest), the lack of interest is understandable.
"The philosophy created problems on the web, though. Whenever the online price dropped below the in-store price, the merchants in Bentonville would balk. They were worried about siphoning away customers from their stores, which account for more than 97 percent of Wal-Mart’s sales.”
Over the years, they've tried different strategies, with mixed results. Meanwhile, Amazon has kept growing at a 24% yearly average, while Walmart barely hits 1.7%.
If both companies maintained the same growth rates, Amazon could overtake Walmart in six years. Let me repeat that: six years.
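The back-of-the-envelope math is easy to check. Assuming illustrative starting revenues of roughly $178B for Amazon and $500B for Walmart (rough 2017 figures; the exact starting points are my own assumption), compounding both at the growth rates above gives:

```python
def years_to_overtake(a, w, growth_a, growth_w):
    """Compound both revenues yearly until a catches up with w."""
    years = 0
    while a < w:
        a *= 1 + growth_a
        w *= 1 + growth_w
        years += 1
    return years

# Rough 2017 revenues in $B (assumptions), growth rates from the text
print(years_to_overtake(178.0, 500.0, 0.24, 0.017))  # -> 6
```

A 24% compounding curve closes a near-3x revenue gap in just six iterations; that is the whole threat in one number.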
Still, it wasn't until 2014, when Doug McMillon took the Walmart CEO chair, that things started changing in the digital space. McMillon has a very different background from his predecessor, Mike Duke. He's not only 17 years younger but has also been at Walmart longer. He rose from the lower ranks, working in many of the strategic areas that make Walmart the retail juggernaut that it is.
I bring this up because Doug McMillon is the person who has enabled the biggest strategic shift at Walmart in more than two decades. That takes not only an exhaustive knowledge of the organization but also incredible courage.
“I want us to sell VR before the customer is ready for it,” he says. “I keep telling our folks, ‘Buy a little and put it online, put it in 50 stores.’ Don’t tell me it won’t sell unless you try it. You gotta catch the wave. And to catch the wave, you gotta be early.”
In 2016, he made one of the boldest moves the retail industry has seen: the acquisition of Jet.com. The company, one year old at the time, lacked positive income and any major market share in online retail.
The operation raised eyebrows all over the industry. The reason wasn't the acquisition itself, but the price: 3.3 billion dollars.
After the acquisition, McMillon promoted Jet.com's co-founder and CEO, Marc Lore, to president and CEO of Walmart eCommerce US. And just like that, Walmart's CEO handed the keys of the Walmart.com kingdom to Amazon's most dangerous nemesis, Marc Lore.
“They were assigned perhaps the most urgent rescue mission in business today: Repurpose Wal-Mart’s historically underachieving internet operation to compete in the age of Amazon. “Amazon has run away with it, and Wal-Mart has not executed well,” says Scot Wingo, chairman of Channel Advisor Corp., which advises brands and merchants on how to sell online. “That’s what Marc Lore has inherited.”
“And it was this really depressing sort of moment where we didn’t even want to go out for a drink. It wasn’t a celebration — it was sort of like mourning. That’s what it felt like. And it was really weird. We were like, ‘Why do we feel so bad right now?’ Like, we just sold this company and made a lot of money, and we just didn’t feel great.”
Two years after founding Jet.com, Lore had ascended to become Walmart.com's new US CEO. Armed with Jet.com, Walmart.com, and a 485-billion-dollar war chest, he was ready for payback.
Walmart's online strategy
Walmart has deployed a wide range of strategic changes to aid them in their battle against Amazon.
The first ingredient to take on Amazon? Increase your inventory. Bezos's obsession has always been to become the everything store. The first thing you need for that is, well, everything.
Since Lore took over, Walmart.com has increased its inventory four-fold, offering over 40 million products. Amazon, though, has an inventory north of 350 million products, nearly nine times more.
Lore knows he can't compete head to head with Bezos. That's why, while increasing the inventory is a priority, Walmart isn't adding just anything.
During the past year, the retailer went on a startup buying spree, adding Shoebuy (shoes), Modcloth (womenswear), Moosejaw (outdoor apparel), and Bonobos (menswear) to its marketplace.
Lore is betting on the vertical fashion angle, something he knows Amazon isn't good at. Deep down, the strategic move is brilliant. They focus on niche markets, taking good care of their clients while delivering a great experience. They can't compete on quantity, but they can compete on quality, both in customer service and in the products themselves.
Price wars & logistics
The reason Jet.com fits nicely with Walmart is that both are driven by the same mission: give their customers the best deal.
“The Jet concept of sharing savings with customers is a very Sam Walton-like idea,” says Richard Cook, co-manager of the Cook & Bynum Fund, which owns Wal-Mart stock. “You will help us lower costs, and we will share that savings with you.”
To achieve that, Lore brought to Walmart.com two of his favorite tricks from Jet.com. The first is to increase the customer's average order size, so they can aggregate volume and lower shipping costs. Lower costs mean better prices for the client and better margins for Walmart.
“Shipping one unit is ex-pen-sive,” McMillon says, drawing out the word. “It costs five bucks to ship one item, seven bucks to ship seven. So when you aggregate volume on the supply side, the economics change in your favor.”
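McMillon's own numbers make the economics concrete: once volume is aggregated, the per-item shipping cost collapses from $5 to $1.

```python
def per_item_shipping(total_cost, n_items):
    """Per-unit shipping cost for an order of n_items."""
    return total_cost / n_items

# McMillon's figures: $5 to ship one item, $7 to ship seven
print(per_item_shipping(5.0, 1))  # -> 5.0 per item
print(per_item_shipping(7.0, 7))  # -> 1.0 per item
```

A fivefold drop in unit cost is the margin Jet.com's basket-building schemes were designed to capture.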
The second trick is taking advantage of Walmart's brick-and-mortar operations. With 4,700 stores in the US alone, the retailer has a massive physical footprint. By leveraging the existing infrastructure, they can also save on shipping costs.
Do you want to save more on your groceries? Order online, but pick them up at your local Walmart. A classic Jet.com move.
Piggybacking on the current infrastructure is something Amazon can't do. For them to achieve this, they would need to build the physical stores, adding years and billions of dollars to their operation. Lore is intent on exploiting this advantage as much as possible.
"Every day, I become more and more convinced about the omnichannel advantage," Lore said, referring to a sales strategy that combines online and in-store shopping.
The cornerstone of Walmart's new strategy is to segment their audience. Walmart's traditional target has been price-sensitive customers, those looking for the cheapest deal.
While Amazon started with those customers too, they've moved away from them. Their focus is now on their Prime Now scheme, not on selling the cheapest products. The company now uses massive discounts as a way to pressure and cajole their partners or potential competitors. They can dump prices if needed, but it's not the general strategy anymore.
Nevertheless, most people are still price-sensitive online. Walmart is combining their lower-prices strategy, Jet.com's saving schemes, and a brilliant millennial-targeting tactic.
Most millennials don't care much about getting the widest product selection range (29% – 32%). They want products catered to them and them only. They want to feel special. But at the lowest price.
Lore's recent startup acquisitions follow this formula. Smaller product range and saving options for a highly sophisticated audience. A segment Walmart never touched before.
Amazon is counteracting Walmart by buying Whole Foods. The grocer had a certain customer overlap with Walmart. After the acquisition, Amazon slashed prices in an attempt to poach Walmart's user base and focus on millennials. That's a price war Amazon won't win, though. Worse, they devalued Whole Foods' prime segment: wealthy individuals.
Either way, it's a great bet for Walmart. Amazon, while good at building horizontal businesses, is bad at personalization. Amazon never got the user experience right. Millennials and the newer generations are very sensitive to this. Walmart and Lore's team have a good chance of besting Bezos in that segment.
Another big advantage, often undervalued, is Jet.com's corporate culture. Marc didn't come alone. He brought most of his people with him. People who follow him and work as a team.
On top of that, Lore is placing each acquired startup's CEO as his deputy. Each one manages their company's vertical. Making use of the startup CEOs' expertise and empowering them within the larger organization is a great move. Marc is aligning the acquisitions with the overall online Walmart strategy.
This might not seem like an advantage, but it is. It offsets Walmart's lack of technology. The company needs innovation, and fast. Nonetheless, acquiring startups without a smart process to assimilate both the technology and the talent is a recipe for disaster.
Marc's own experience when Amazon acquired his previous company is the perfect example. You spend money, retain the talent for a few years, and then the churn rate spikes.
If Walmart wants to win this race, they have to bet on the long-term. That means investing in their culture and their people.
Historically, when it comes to international markets, Walmart hasn't fared too well. They've had a hard time in Europe and in Asia, among other places.
This has changed drastically with McMillon. As I said before, the company's innovation capabilities are small compared to Amazon's. Trying to enter fast-moving markets like China, while simultaneously competing with Alibaba, JD.com, and Amazon, is insane.
“According to Nielsen, 11% of total retail sales in China come from e-commerce, compared to 8% in the US. And e-commerce sales in China are growing at a rate of 53% annually, compared to roughly 12% in the US.”
This is where McMillon has taken a brilliant path. Instead of investing in building local operations, he's started partnering with and buying into local incumbents. And not just any incumbents: those currently in an open war with the local number-one online retailer. In China, that meant taking a stake in JD.com, Alibaba's fiercest rival.
In Japan, Walmart just announced a similar move. They partnered with Rakuten to achieve a dual goal. On one side, they entrench themselves in Japan and help Rakuten defend its top-retailer spot against Amazon Japan. With Walmart's products sold through the Rakuten partnership, the Japanese firm can counteract some of Amazon Japan's allure. At the same time, Walmart is importing the Kobo to the US, both to compete with the Kindle and to increase their online books footprint.
"The move echoes deals that Rakuten has made to place Kobo e-readers in major bookstore chains around the world, such as WHSmith in the UK. Physical retail presence is a potential advantage for Kobo, which doesn't have the book selection or brand recognition of Kindle but is competitive in terms of hardware — the bigger-screened, water-resistant Aura One, for example, beat Amazon's similar new Kindle Oasis to market by more than a year.”
Marc Lore isn't deluded. He knows he won't be able to compete with Amazon in certain categories. However, putting pressure on key Amazon verticals like publishing gives authors, editors, and publishers another platform to operate in. This loosens the grip Bezos has on particular markets. It allows for better deals for providers and a continuous erosion of Amazon's bottom line.
“There are certainly areas where we are playing defense, and we’re behind and need to catch up,” Marc Lore, chief executive officer of Wal-Mart’s U.S. e-commerce business, said at the Bloomberg Breakaway Summit in New York Wednesday. “One example is the long-tail categories that we’re going after with acquisitions.”
McMillon's vision is clear. To compete with Amazon, they need to win the fast-moving international markets. To achieve that, he's focusing on investing in and supporting the local players with cash and resources.
Walmart's Open Innovation approach
Walmart isn't a technology company. Amazon, though, was born in the midst of the dot-com era. Its DNA has always been technological. Walmart, on the other hand, has been the quintessential brick-and-mortar retailer.
As I've pointed out many times, though, technology isn't a vertical industry anymore. It has become a horizontal one, touching every other industry.
The founders of Kosmix, a startup Walmart acquired in 2011, became the seed for what's now called Walmart Labs, the innovation heart of the company. The Labs team has produced some outstanding things but still pales in comparison to Amazon's machinery.
One thing Walmart Labs can't do is disruptive product innovation. They work on innovating internally, but for a while now, the company has lacked a true outside disruptive engine.
This all changed with the announcement of Store Nº8, Walmart's startup incubator. More than an incubator, I would say it's a startup studio. It's an independent entity that enables Walmart to capture innovations from outside the organization. It operates as a bridge connecting the startup world and Walmart Labs.
“The goal is to have a fast-moving, separate entity to spot emerging technologies that can be developed and used across Wal-Mart.”
So far, they plan to spin out five ventures by 2019. To date, we know of four of them. The first, called Code Eight, was unveiled not long ago. Its mission is to build a personal shopping service for "busy NYC moms." Once more, we can see the new strategy spilling over into the incubator's direction. Code Eight is one of Lore's experiments in developing new retail experiences for high-end customer segments.
Another of those ventures is Project Kepler, Walmart's attempt at the next-generation grocery store. The concept looks very similar to Amazon Go. The fact that Amazon got there first shows, once again, the innovation speed differential between the two.
The incubator is also doubling down on VR technology. They just announced the acquisition of Spatialand, one of the reference VR companies in the world. Walmart has been eyeing VR for a while now. While some of their prototypes were just flashy toys, some of the uses they're exploring are bringing great results.
“He says the company uses Strivr in about 187 employee training centers, for three types of training: preparing for situations like Black Friday or emergencies where you can’t set up a simulation in a store, learning customer service, and teaching operational stuff like how produce should be stacked and arranged.”
Walmart is far from being a technology juggernaut, but it's doing a great job of catching up. One of the most impressive things is the kind of people they're luring into Store Nº8 projects.
One would think most engineers would shy away from the old-school brand. But it seems Store Nº8 is exerting a certain pull. I don't doubt for a second that it's another of Lore's effects.
Marc has imprinted Walmart.com with startup DNA. That's one of the hardest things to do at a mammoth corporation like Walmart.
Still, Walmart tends to lean on partnerships instead of owning their technology. For the time being this might be good enough, but eventually, it will fail. Controlling your technology stack is critical for the long-term survival of Walmart.
I want to believe these are just the first steps and that in the future we'll see Walmart Labs crack some unique products like Alexa or AWS. To achieve this though, technology needs to become even more prominent, and Walmart needs to think even further into the future.
Strategy lessons from Walmart
As improbable as it might look, Walmart is the toughest competitor Amazon has right now. Under McMillon and Lore, the retail giant can bring real thunder to the market, breaking the Amazon yoke.
So far, their strategies, recent as they are, are paying off handsomely. The next three years will be critical for the fight, and Walmart needs to speed up if they want to break Amazon.
There are some great strategic takeaways other businesses can use in their markets.
Hire visionaries and fighters. Pick someone who has the experience and the drive, and let them do their thing.
Break your core values. No company can change if it keeps clinging to old, inoperative values. Fast-moving markets and technologies require flexible, regularly reviewed values.
Don't go head to head against someone stronger than you. Make sure you identify the weak spots and focus on them, not on the stronger side.
Identify the things the competition can't do because of their current cost structure. Exploit them.
Invest in your corporate culture. If you want to compete, you need innovators. To retain talent, you need to deserve it. Poaching employees from the competition is easy when your corporate culture is better.
Invest in a technology organization. This one is pretty obvious, but hard to implement. Build a powerful technology stack and use it to deploy future products. Having a "tech" department isn't enough. The whole company has to be technology driven.
Your enemy's competitors are your friends. Find critical markets beyond your local one. Partner with the incumbent and deny those markets to your competitor.
Develop an open innovation approach. Most organizations get this wrong. Open Innovation isn't just about working with startups. It's about building the processes that allow inside and outside innovation to cooperate and deliver future products or services.
This week Facebook bought the Boston-based Confirm.io for an undisclosed amount. Confirm.io runs authentication checks on any government-issued ID. They do it on the spot and without retaining any personal information.
It surprised me how shallow most reporting around this was. The fact that Facebook is buying a company that works on proof of identity is very telling. Not only telling; it could have massive consequences for other businesses.
Facebook has demonstrated they're extraordinary at scaling. The drawback is that they're awful at knowing who their users are. Yes, they can segment and micro-target like no one else. But they still can't tie a persona to who that person actually is. Facebook isn't alone in this. Every scalable platform is wrestling with the same issues, from Twitter to YouTube to Instagram.
For years, the goal has been to grow. To scale at any rate. To swell no matter what. Trust has been one of those values we've thrown out the window. Trusted news? Nah, I get it from my social network. Trusted opinions? Nah, I get them from my Internet friends. Trusted recommendations? Nah, I can ask the wisdom of the crowds.
Only now are we realizing what the lack of trust can bring upon us. The erosion of trust is enabling the manipulation of opinion and polarizing countries. It's pushing them towards civil conflict, or worse.
So governments are taking things into their own hands and demanding accountability. Part of that accountability is being able to know who the user is. This is by no means unique to Facebook. Banks and other industries are required by law to identify their customers. So is Facebook really that different from Paypal?
It's not surprising, then, that Facebook is buying a company that can bridge the final gap, tying a user to a physical identity. This way they appease Congress's wrath and try to sidestep any potential regulation.
"Our simple API lets you integrate in minutes, and confirm a person’s identity for any transaction that requires or benefits from proof of identity."
So what transactions benefit from proof of identity? There are a couple of obvious candidates. One, which most journalists are pointing out, is proof of ownership. You can link your Facebook account to your government ID card. You can then prove ownership of the account by producing your document and comparing the two. This scenario, though, isn't the most critical for the company's interests. The scale of ownership requests pales compared to other situations.
The most critical transaction for them involves an exchange of goods; in this case, advertising inventory. Right now, anyone can buy advertising on the platform without verifying who they are. As long as you have a valid account the company can charge, the rest is moot. The company does this at scale through automation. This means they process thousands of ads per day. It would be impossible for them to confirm each advertiser by hand. This wasn't a problem until Congress started demanding liability. They needed a scalable solution, and fast. This is where Confirm.io came in handy. The number of advertising transactions per day is orders of magnitude bigger than any ownership claims. Hence the strategic acquisition.
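A sketch of how such a gate might work in the ad pipeline: verify the advertiser's identity once, cache the result, and let automation handle everything downstream. To be clear, `verify_id`, the field names, and the flow below are all my own assumptions for illustration, not Confirm.io's or Facebook's actual API.

```python
def accept_ad_order(advertiser, order, verify_id):
    """Gate an ad purchase behind a one-time identity check.

    `verify_id` stands in for a Confirm.io-style document check;
    its name, signature, and these dict fields are assumptions.
    """
    if not advertiser.get("verified"):
        # Verify once, then cache the result for future orders,
        # so the check doesn't slow down the automated pipeline.
        advertiser["verified"] = verify_id(advertiser["id_document"])
    if not advertiser["verified"]:
        raise PermissionError("identity verification failed")
    return {"advertiser": advertiser["name"], "order": order, "status": "accepted"}
```

The point of the design is that the expensive human-grade check happens once per advertiser, not once per transaction, which is what makes it compatible with processing thousands of ads a day.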
Adding an ID verification step will impact their growth metrics. They know it. But having Congress take matters into their own hands would be worse. Also, they already command, with Google, the majority of the online advertising market share. While they'll feel the hit, they're still an aggregator, and they'll retain most legit users. This is due to the simple fact that they own the user's relationship like no one else does.
The moment Facebook rolls out the change, it will set a new precedent. One that will have broad ramifications. Governments might pass regulations that enforce such verifications. They can even extend it to another type of transactions like posting news.
The potential for abuse doesn't stop there. Confirm.io ensured they didn't retain any personal information. We don't know what will happen in the case of Facebook. Once again, we have to trust Facebook, something that's becoming harder and harder. No wonder decentralized platforms powered by Blockchain technology are gaining adherents.
Another ID company called SheerID raised 18 million dollars this same week. The Portland organization focuses on verifying specific sectors of society like students, teachers or the military. Their goal is to enable their customers to check their users and decrease coupon or discount redemption fraud.
While a lofty goal, companies can also use such verification processes to segment their users and discriminate among them. It's one thing to use your internal data to infer, with a certain probability, what segment a user is in. It's a very different thing to know it as a fact.
The verification conundrum
The verified vs. anonymous dilemma isn't easy. Where do you draw the line? Operating anonymously is one of the critical allures of the Internet. One that has brought much-needed change to decadent power structures. Yet under the cloak of darkness lies the possibility of abuse and impunity. The unchecked, uncontrolled freedom of anonymity, paired with our current autonomous systems, can tear the seams of society too.
Verification should be an option for most, a necessity for others. The question remains: what kind of superpowers do we ascribe to verified users? Should the platforms treat them with deference? There is no easy answer to this question. It's not black or white. The use cases aren't deterministic and will need to evolve and change as society itself changes.
There are significant ramifications here. Will other platforms adopt similar verification policies? Will Google? Will YouTube? Will Medium? If you're an advertiser on these platforms, how will you handle this? Who at your company will be responsible?
Will other governments start enforcing verification through regulation? The EU Commission comes to mind. Will we see age restrictions implemented on the Internet through ID verification? Any rule to such effect will always hurt the smaller players. Big agents already have the scale and virtuous cycles. These dynamics enable them to keep adding users, despite the increased friction. That's not the case for startups.
Last but not least, I wonder if the enforcement of identity verification can open the door to new entrants. It's easy to imagine a new player offering unrestricted ads, bot-friendly uploads, anonymous posting, etc. While some countries stifle growth with regulation, other jurisdictions might take advantage and create regulatory safe havens.
I'm all for the return of trust, but anonymity is a precious gift too. It's important to balance both if we don't want to live under authoritarian regimes. It's time for the pendulum to swing towards more controls and verifications. Let's hope it will turn back to a more balanced view soon enough.
Despite my admiration for the underlying technology, I can't but cringe at the current fervor around the space. The industry is riddled with fraud, wannabes, and the ignorant. This wouldn't be a problem except for the fact that we're playing with sophisticated technology and people's money.
One of the hottest cryptocurrency areas within the recent fever is Smart Contracts. When, years ago, I dove into Blockchain technology, Smart Contracts were in their infancy. Plenty has changed since then.
It now seems prudent to explore what Smart Contracts are and how useful they are. Are they a fad? Do they even make sense? Who should use them? Is it wise to get your company involved?
What is a Smart Contract?
In 1997 Nick Szabo coined the term Smart Contracts to describe the digital automation of certain aspects of traditional contracts.
A contract is a set of promises agreed by two or more parties. It has been the standard way to formalize relationships in society for millennia.
A contract isn't limited to paper; it covers any promise agreed upon, implicitly or explicitly. For example, consider an agreement between a client and a freelancer. The freelancer promises to deliver what the client wants. The client agrees to pay the freelancer a negotiated fee in exchange. If the freelancer doesn't deliver, or the delivery doesn't perform as the client approved, then there is a breach of contract. The client is then entitled, under the contract, to withhold the freelancer's fee.
This is an elementary example, but one we're very familiar with. In this case, the contract is explicit, but it could also be implicit. Every time you enter a parking garage, you implicitly agree to a contract with its owner. You can park your car, but in exchange, you need to pay for the use of the space. Upon payment and presentation of the proof of payment, the parking barrier will let you out of the structure.
So you see, in a way, contracts are what binds most of our social interactions. Some of these agreements are formal; others are informal. Some are explicit; others aren't.
The notion of contracts, though, entails several problems. The first one is enforceability. Often, contracts end up in dispute because the parties can't agree on the performance of the delivery. Other times, one of the parties breaches the contract and cheats the other side.
The second problem is transparency. In some cases, there are information asymmetries between the parties. One side might have inside information, putting the other party at a disadvantage when negotiating the contract.
Last but not least, there is no universal set of rules that apply on a global scale. The way we negotiate the terms of a contract; the dispute resolution laws; or even the consequences of a breach of the agreement, will vary wildly between jurisdictions, cultures, and continents.
"The basic idea behind smart contracts is that many kinds of contractual clauses (such as collateral, bonding, delineation of property rights, etc.) can be embedded in the hardware and software we deal with, in such a way as to make breach of contract expensive (if desired, sometimes prohibitively so) for the breacher.
Smart contracts go beyond the vending machine in proposing to embed contracts in all sorts of property that is valuable and controlled by digital means. Smart contracts reference that property in a dynamic, often proactively enforced form, and provide much better observation and verification where proactive measures must fall short." (Emphasis my own)
I can't but admire Szabo's prescient mind. Not only did he predict the potential of cryptography to secure binding contracts, but he also took a pass at the future state of the Internet of Things (IoT).
In a nutshell, the idea behind Smart Contracts is to use programs to enforce the clauses of a contract, and cryptography to ensure non-tampering, transparency and fraud protection.
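As a sketch of that idea, here is a toy escrow "contract" in Python that echoes the freelancer example from earlier: the code itself enforces the payment clause, and a hash of the agreed terms provides tamper evidence. Every name and the settlement logic are illustrative assumptions, not any real platform's API.

```python
import hashlib

class EscrowContract:
    """Toy sketch of a smart contract: the code enforces the clauses,
    and a hash commitment makes tampering with the terms detectable."""

    def __init__(self, client, freelancer, fee):
        self.client = client
        self.freelancer = freelancer
        self.fee = fee
        self.funded = 0
        self.delivered = False
        # Hash of the agreed terms: any later change would be detectable.
        terms = f"{client}|{freelancer}|{fee}".encode()
        self.terms_hash = hashlib.sha256(terms).hexdigest()

    def fund(self, amount):
        # The client locks the fee up front instead of merely promising it.
        self.funded += amount

    def deliver(self):
        self.delivered = True

    def settle(self):
        # The code, not a court, decides who gets the money.
        if self.delivered and self.funded >= self.fee:
            return {"to": self.freelancer, "amount": self.fee}
        return {"to": self.client, "amount": self.funded}

contract = EscrowContract("client", "freelancer", 100)
contract.fund(100)
contract.deliver()
payout = contract.settle()  # pays the freelancer once delivery is confirmed
```

The point of the sketch is only the shape of the idea: the clauses live in code, and settlement happens without asking either party's permission.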
At the time, the technology couldn't deliver on all this. Bitcoin, Blockchain, and Ethereum changed that.
Early days of Smart Contracts
As technology and broadband started evolving, the future of Smart Contracts became the present.
The increase of Internet penetration and the rise of the cloud computing paradigm started allowing complex deployments with global reach.
Smart Contracts started appearing everywhere: Amazon's one-click, Amazon Web Services, Salesforce's Software-as-a-Service model, etc. These systems allowed the automated enforcement of social contracts. The systems supporting them became distributed and automated, fulfilling part of what Szabo envisioned.
There were still some unresolved problems with such architectures. They lacked two key components: transparency and decentralization. Transparency to know what set of rules govern the contract, what information the other party is storing, and what they are doing with it. Lack of transparency puts the security of the deal in jeopardy. While one party promised privacy and security, the others had to blindly trust that it was so. The endless string of hacking incidents, some with disastrous consequences, is a testament to how problematic this has become.
Decentralization was also lacking. It ensures the integrity and enforceability of the contract. If the contract is owned by a single entity, it's easy for them to change the terms unilaterally and even to evade enforcement. Decentralization allows for independent verification and makes it harder to commit fraud.
The rise of Bitcoin and the Blockchain
In 2009, Bitcoin became operational and brought to the market the first decentralized digital currency. The underlying technology, the Blockchain, suddenly made genuinely decentralized computing a reality. On top of its decentralized nature, the way the Blockchain operates includes mechanisms for trusted consensus and irrevocable operations. Some started seeing the potential of the Blockchain as a vehicle to implement the missing pieces for Smart Contracts adoption.
With its strict focus on the currency aspect, though, Bitcoin remains very limited. The Bitcoin team ignores the voices asking for an expansion of the protocol. An extension that would enable more advanced uses of the Blockchain.
One of those voices was Vitalik Buterin, co-founder at the time of Bitcoin Magazine. In 2013, frustrated with the lack of action, he starts working on a new system, inspired by the Blockchain, but with a focus on computation.
"What Ethereum intends to provide is a blockchain with a built-in fully fledged Turing-complete programming language that can be used to create "contracts" that can be used to encode arbitrary state transition functions, allowing users to create any of the systems described above, as well as many others that we have not yet imagined, simply by writing up the logic in a few lines of code."
In July of 2015, Ethereum comes to life. The system builds on the idea of the Blockchain and uses its characteristics to develop the first distributed, decentralized, secure, Turing-complete computing platform.
Ethereum Smart Contracts
Ethereum, while inspired by Bitcoin, is an entirely different beast. It's wrong to think of the network as a version of Bitcoin because, while it employs similar elements, its goal and capacity are quite different.
The Ethereum network is like a big distributed computer where you can run code in a distributed and decentralized way. In the same way you used to pay for time on a supercomputer, you pay for the time your code runs on the Ethereum network (Note: Ethereum charges per executed instruction, not per unit of time).
Ethereum was designed to support the creation of Smart Contracts. Under the system, though, we should think of contracts as programs, not just legal contracts. Ethereum's primary scripting language is called Solidity, and it enables some basic programming operations.
Solidity programs are executed on each node of the Ethereum network by the Ethereum Virtual Machine (EVM). Every node of the system runs the same program, and this redundancy is what makes tampering with the execution detectable. But it also imposes severe limits on the computational capacity of the network.
As I said before, executing code on Ethereum isn't free. There is a cost in terms of network use, the number of instructions executed, and the storage used. To pay for all this, you need to purchase Ethereum's digital currency, called Ether.
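To make the cost model concrete, here is the basic fee arithmetic as it worked in Ethereum at the time (before later fee-market changes): the fee equals the gas consumed times the gas price. The 21,000-gas cost of a plain transfer is the network's fixed base cost; the other gas figures below are illustrative assumptions.

```python
# Illustrative gas arithmetic (early Ethereum): the fee you pay is the
# gas your instructions consume multiplied by the price per gas unit.
WEI_PER_ETHER = 10**18
GWEI = 10**9  # gas prices are usually quoted in gwei (1e9 wei)

def transaction_fee_ether(gas_used: int, gas_price_gwei: int) -> float:
    """Return the transaction fee in Ether for a given gas usage/price."""
    fee_wei = gas_used * gas_price_gwei * GWEI
    return fee_wei / WEI_PER_ETHER

# A plain Ether transfer costs a fixed 21,000 gas; a contract call costs
# more because every executed instruction adds to the total.
simple_transfer = transaction_fee_ether(21_000, 20)   # 0.00042 ether
contract_call = transaction_fee_ether(150_000, 20)    # 0.003 ether
```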
Ether's goal is to pay for resources in its network, while Bitcoin was designed to operate as an alternative to fiat currencies. Here lies a big difference between both systems.
There is a series of implications to this that are critical to Smart Contracts. Ethereum enables any developer to code a program that runs on a distributed, decentralized network. Once we send the application to the system, the code and its associated data (storage database) get replicated around the network. Our code is now secured, replicated, and protected against any tampering.
Once stored, the Ethereum Blockchain assigns a unique id (or address, in Ethereum parlance) to reference the uploaded code.
We can then trigger the execution by calling the program at that address.
Ethereum transactions and code execution
While we interact with a regular computer via clicks, we interact with Ethereum via transactions. There are three kinds.
The first kind is the basic currency exchange between two accounts. The idea is identical to Bitcoin. We might want to transfer, split, share or give Ether to other people.
The second kind of transaction is slightly more complicated. It's a transaction that uploads our code to the network (contract creation). The transaction sends both the code and the money to pay for the execution (gas). The system returns the address we can later call to execute our program.
Once the code is on the network, we can use the third transaction to execute the program. In the same way as before, we transfer money to the program's address, and this will trigger the execution.
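The three transaction kinds can be sketched with a toy model. This is emphatically not how Ethereum is implemented — addresses, gas accounting, and execution are drastically simplified assumptions — but it shows the shape of each interaction.

```python
import hashlib

class ToyEthereum:
    """Toy model of the three transaction kinds described above:
    (1) value transfer, (2) contract creation, (3) contract call."""

    def __init__(self):
        self.balances = {}
        self.contracts = {}

    def transfer(self, sender, recipient, amount):
        # Kind 1: plain currency movement, just like Bitcoin.
        assert self.balances.get(sender, 0) >= amount, "insufficient funds"
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def create_contract(self, sender, code, gas):
        # Kind 2: upload code plus gas money; get back an address.
        self.balances[sender] -= gas
        address = "0x" + hashlib.sha256(code.encode()).hexdigest()[:40]
        self.contracts[address] = code
        return address

    def call_contract(self, sender, address, arg, gas):
        # Kind 3: send money to the program's address to trigger it.
        self.balances[sender] -= gas
        env = {}
        exec(self.contracts[address], env)
        return env["run"](arg)

net = ToyEthereum()
net.balances["alice"] = 100
net.transfer("alice", "bob", 10)
addr = net.create_contract("alice", "def run(x):\n    return x + 1", gas=5)
answer = net.call_contract("alice", addr, 41, gas=2)  # 42
```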
Ethereum Smart Contracts caveats
The fact that Ethereum even exists is already impressive. Nevertheless, it's important to understand that its capacity and utility are far from perfect.
There are still several problems with Ethereum and its Smart Contracts that are worth highlighting.
For starters, the current interfaces to manage Smart Contracts are crude. It's worth remembering what Szabo mentioned in his 1997 article:
"To properly communicate transaction semantics, we need good visual metaphors for the elements of the contract. These would hide the details of the protocol without surrendering control over the knowledge and execution of contract terms.”
That's not the case for Ethereum. Even if your best coders craft a fabulous contract, it's essential to design an abstraction layer that hides the protocol details. This hasn't been the case and won't be for a while.
Ethereum Smart Contracts are severely limited. On the one hand, the platform only allows you to build elementary logic (If This Then That style). On the other, it's not easy to import data streams from outside the network. External information is relevant both to negotiating clauses and to analyzing the performance of the contract. Without external validation, the use of such Smart Contracts is severely constrained.
The workaround is to employ what's known as Oracles. These are programs that connect the Ethereum platform with the outside world. They allow Ethereum code to import external data feeds. There are two issues with them, though. For starters, it's hard to secure the information that comes from an Oracle. Due to the secure nature of Smart Contracts, any information used to execute them also needs to be secured through a chain of custody. This is not easy and requires the use of trusted Oracles. The second concern is that Oracles make Smart Contracts expensive. Using an Oracle incurs higher processing fees within the network. This might render the advantages of running the contract on Ethereum moot.
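One common way to "trust" an Oracle is to have it sign its data so the consumer can verify the feed before acting on it. The sketch below mimics that idea in plain Python with an HMAC shared secret; real Ethereum oracles use on-chain signature checks and callback patterns, so treat every name here as a hypothetical illustration.

```python
import hmac
import hashlib
import json

# Assumption: the oracle and the contract share a secret key.
ORACLE_KEY = b"shared-secret-between-oracle-and-contract"

def oracle_publish(payload: dict) -> dict:
    """The oracle signs the external data it publishes."""
    message = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(ORACLE_KEY, message, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def contract_consume(feed: dict) -> dict:
    """The 'contract' refuses to act on data it can't authenticate."""
    message = json.dumps(feed["payload"], sort_keys=True).encode()
    expected = hmac.new(ORACLE_KEY, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, feed["signature"]):
        raise ValueError("untrusted oracle data")
    return feed["payload"]

feed = oracle_publish({"EUR_USD": 1.19})
data = contract_consume(feed)  # accepted: the signature checks out
```

The chain of custody the text mentions is exactly this: the data is only as trustworthy as the key and the party holding it.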
Immutability of code
Once we upload the code to the network, it becomes immutable. This is by design. The problem though is that updating or upgrading the contract becomes a nightmare.
It's tough, if not impossible, to anticipate all potential scenarios of a contract. Not being able to fix an issue with a contract is a blessing and a curse. A blessing because it prevents any party from unilaterally changing the rules of the game. But it's also a curse because it blocks any change that would fix an undesired consequence for all participants. There are potential workarounds, but they aren't simple or cheap.
"Although code is theoretically immutable, one can easily get around this and have de-facto mutability by having chunks of the code in separate contracts, and having the address of which contracts to call stored in the modifiable storage."
Another effect of the immutability is that whenever we update a contract, we need to upload new code. The Ethereum network will assign that code a different address. The contract owners will then need to notify all the parties, with all the repercussions this entails.
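The quoted workaround is commonly known as a proxy pattern: callers talk to one fixed entry point whose modifiable storage points at the current logic. Here is a minimal Python sketch of the idea; the addresses and registry are hypothetical stand-ins for on-chain contracts.

```python
class Proxy:
    """A tiny 'proxy contract' whose storage holds the address of the
    current logic. Upgrading means pointing it at new code; callers
    keep using one stable entry point."""

    def __init__(self, registry, logic_address):
        self.registry = registry            # address -> callable code
        self.logic_address = logic_address  # the mutable storage slot

    def upgrade(self, new_address):
        self.logic_address = new_address    # de-facto mutability

    def call(self, x):
        return self.registry[self.logic_address](x)

registry = {"0x01": lambda x: x * 2}        # v1 logic
proxy = Proxy(registry, "0x01")
v1 = proxy.call(10)                         # 20

registry["0x02"] = lambda x: x * 3          # deploy fixed v2 logic
proxy.upgrade("0x02")
v2 = proxy.call(10)                         # 30
```

Note the tradeoff: the proxy restores upgradability, but whoever controls `upgrade` can unilaterally change the rules, which is precisely what immutability was meant to prevent.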
Coding Smart Contracts isn't easy at all. The Ethereum network is a concurrent system. This brings into play all the known problems around parallel programming. And while the EVM executes operations synchronously, Solidity code is still vulnerable to other effects, like reentrancy attacks.
Many developers out there have no experience with concurrency. Much less with concurrency-related security bugs like race conditions or reentrancy problems.
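To see why reentrancy bites, here is a deliberately buggy toy in Python (not Solidity) that mirrors the class of flaw behind the DAO hack: the contract pays out through an external call before updating its books, letting a malicious callback re-enter and drain funds. All names are hypothetical.

```python
class VulnerableBank:
    """Toy contract with a reentrancy bug: the external call happens
    BEFORE the bookkeeping, so a callback can withdraw repeatedly."""

    def __init__(self):
        self.balances = {}
        self.vault = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.vault += amount

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.vault >= amount:
            self.vault -= amount
            callback(amount)            # external call FIRST (the bug)
            self.balances[who] = 0      # bookkeeping LAST

bank = VulnerableBank()
bank.deposit("honest", 100)
bank.deposit("attacker", 50)

stolen = []
def malicious_callback(amount):
    stolen.append(amount)
    if len(stolen) < 3:                 # re-enter before the balance resets
        bank.withdraw("attacker", malicious_callback)

bank.withdraw("attacker", malicious_callback)
total_stolen = sum(stolen)              # 150: triple the 50 deposited
```

The fix is the same in any language: update state before making the external call, so the re-entered `withdraw` sees a zero balance.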
"We knew that programming, in general, is difficult, that most of the valley runs on cut&paste from stack overflow, directed by technological decisions made by reading hearsay carefully planted by marketing professionals masquerading as programmers on social media.
It's just difficult, what with all the Slack chat and Pokemon Go effort, to get all of those pesky little pre– and post-conditions right to build solid code that actually works. We also knew that some notable professors had given up on trying to teach concurrency to their students, instead preferring to teach them how to use "event-driven" frameworks. An "event-driven framework" is just some code that someone else wrote where the framework "handles concurrency" (a.k.a. kills it) by grabbing a mutex and making it impossible for the student's code to take advantage of concurrency, thereby avoiding concurrency bugs."
The need for expert developers, the use of a sophisticated concurrent system, and the severe economic consequences of any bug in the code make the development of Smart Contracts a nightmare.
On top of this, the most recent studies have determined that while Ethereum is much more decentralized than the Bitcoin network, the control of the network is still in the hands of a few.
"Put another way, there are more Ethereum nodes, and they are better spread out around the world. That indicates that the full node distribution for Ethereum is much more decentralized.
Part of the reason for this is that a much higher percentage of Bitcoin nodes reside in data centers. Specifically, only 28% of Ethereum nodes can be positively identified to be in data centers, while the same number for Bitcoin is 56%."
As I mentioned before, the performance of the EVM and node processing is quite limited. There are several projects (Raiden project, Blockchain sharding) that are trying to speed up the network and EVM's performance.
"Currently, in all blockchain protocols each node stores all states (account balances, contract code, and storage, etc.) and processes all transactions. This provides a large amount of security, but greatly limits scalability: a blockchain cannot process more transactions than a single node can. In large part because of this, Bitcoin is limited to ~3-7 transactions per second, Ethereum to 7-15, etc."
Until the network scales, most Smart Contracts will remain toys. Speed and performance are critical for more complex contracts.
Perhaps the biggest problem with Smart Contracts is the cost of operations. As we pay per execution, long, complex contracts become prohibitively expensive. Storing data on the Ethereum Blockchain is also extremely expensive.
“Storing vast amounts of data to the blockchain is also not an ordinary task. Depending on the task, a user would likely store a cryptographic reference (a hash) of the data on-chain and keep the rest of the data off-chain.”
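The pattern in the quote can be sketched in a few lines: keep the bulky document off-chain, commit only its SHA-256 hash on-chain, and verify on demand. The two dictionaries below stand in for a database and a blockchain; they are assumptions for illustration only.

```python
import hashlib
import json

off_chain_store = {}   # stand-in for a database or IPFS-like store
on_chain = []          # the expensive part: keep it tiny

def commit(document: dict) -> str:
    """Store the bulk off-chain; put only a 32-byte digest on-chain."""
    blob = json.dumps(document, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    off_chain_store[digest] = blob   # cheap bulk storage
    on_chain.append(digest)          # the hash is the on-chain commitment
    return digest

def verify(digest: str) -> bool:
    """Anyone holding the data can prove it matches the commitment."""
    blob = off_chain_store[digest]
    return hashlib.sha256(blob).hexdigest() == digest and digest in on_chain

ref = commit({"contract": "lease", "pages": 40})
ok = verify(ref)   # the off-chain copy matches the on-chain hash
```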
As more people speculate with the cryptocurrency, its price will keep oscillating violently. This has massive consequences for any Smart Contract deployment at scale. It's hard to sell someone on the idea and then have them pay exorbitant prices that offset the benefits of having a Smart Contract.
Should you use Smart Contracts?
All in all, we've come a long way from the initial theoretical concept that Szabo laid out in 1997. That said, the current solutions aren't quite there yet.
We live exciting times, and I believe we need more experimentation in the field. But I also think most organizations need to ask themselves the following question.
Do we need such a trusted and secure environment for our contracts?
Current technology like Amazon Lambda and other serverless computing offerings can achieve much more than the Ethereum computing platform. And cheaper. Orders of magnitude cheaper.
They lack decentralization and transparency, but it begs the question: is that critical for everyone? Security is about tradeoffs. The more secure, the less usable the system is. Each organization needs to strike its own balance. Each company has a different customer. The use of Smart Contracts and the underlying platform needs to be aligned with the needs of those customers.
I don't believe that everyone should be building Smart Contracts on Ethereum. There are some edge use cases where it makes sense, though. One of the consequences of the network's decentralization is the capacity to bypass local law. Sometimes, the fact that the rules are the same all across the globe makes the underlying operations easier. Other times, giving people a chance to bypass dictatorships and their censorship can have enormous repercussions.
As with all technology, it's important to understand it and measure the cost-benefit for our organization. Smart Contracts and cryptocurrencies are hard and complicated. They're sturdy and jaw-dropping, but they're also dangerous, unstable and volatile.
The letter is worth reading. It covers not only the structure of Uber's Intelligence unit, but also what they did and how. Some revelations are shocking due to their illegality. Others are striking because of how advanced they are.
Uber's Intelligence operations went from data leak protection to counterintelligence, cyber attacks, covert operations and infiltration. The list is exhaustive.
I won't delve into the illegality of Uber's acts. Nor will I defend them. What impressed me, above all, was the level of sophistication of the whole operation.
The Uber case highlights two things. The first is that Intelligence operations aren't exclusive to governments and giant corporations anymore. The second is that the technology industry is becoming a cutthroat space, where any advantage can have a massive impact.
"Jacobs was struck by the incredibly talented people at the company, the unmatched level of challenges and threats they faced and energized by the opportunity to build a holistic intelligence team, across the spectrum of threat intelligence, geopolitical analysis, and strategic insights. He would go on to build capabilities to serve a constantly growing community of interest at Uber, and deliver insights to shape engagement strategies, advise business decisions, and continually protect his colleagues and the community of riders and drivers they served in cities across the globe."
"These independent contractors were given the meaningless acronym LAT to protect discussions about this resource and poke fun at TalGlobal, a former vendor who provided intelligence collection support to Uber. LATs were seen as the opposite of Tal, who Uber had discontinued working with due to their low quality work."
Who needs Intelligence?
The question, though, is who needs this kind of intelligence? A decade ago, only global corporations would need such services. Here are some factors that can determine whether an organization needs Intelligence or not.
Global footprint. Companies undergoing an expansion, such as growth startups, are an excellent example. There is a need to understand the geopolitics of each new country and region.
Stiff competition. Organizations that operate in very competitive spaces need comprehensive competitive intelligence.
Strategic leadership. Intelligence is the primary input of strategy; only companies with a strategic thinking mindset will benefit.
Well-funded companies. Building an Intelligence unit entails an investment, both in manpower and in tools.
How do you set up your Intelligence unit?
Intelligence units respond to the need to gather information that serves a set of organizational goals. The first stage, before establishing any team, is to have a distinct idea of what you need the information for.
Once the mission is clear, we can set up the collection process. These are the inputs of the process. There are many ways of collecting information, some of which are:
Open Source Intelligence (OSINT). Refers to intelligence gathered from publicly available sources: news, public records, corporate filings, social media, etc.
Human Intelligence (HUMINT). Refers to intelligence gathered through interpersonal contact, from casual conversations to on-the-ground informants.
Signal Intelligence (SIGINT). Refers to intelligence-gathering by interception of signals. These signals can be communications between people (COMINT) or electronic signals not directly used in communication (ELINT).
The easiest to deploy is OSINT. It's not only free in most cases, but it's also legal. On the other hand, Human Intelligence can border on illegality, depending on the country and its regulations. Signal Intelligence is by far the most expensive one, and one that is certainly illegal for anyone except government intelligence agencies.
Information gathering has always existed, but organizations need to pay particular attention to how they gather it. Some practices, while legal, border on the unethical. Others are outright illegal.
Setting up a collection process isn't a one-time exercise. The organization needs to create a stable, repeatable method to keep the information flowing. This includes not only the inputs but also a way to store the data.
Once the collection process is in place, we need to filter its output. Collection will create an inhuman amount of information. The team needs to process the sources, normalize the data, test its relevance, etc.
After cleaning the raw data, the unit integrates many pieces of Intel into a coherent picture. The assembled information will be then bundled under different formats, depending on the needs of the organization.
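The collect, filter, and integrate steps described above can be sketched as a tiny pipeline. Everything here — sources, keywords, report format — is a hypothetical stand-in for real feeds and models.

```python
# Topics the (hypothetical) mission cares about.
RELEVANT = {"competitor", "regulation", "opening"}

def collect():
    # In practice: news feeds, filings, social media. Hardcoded here.
    return [
        {"source": "news", "text": "Competitor opening flagship store"},
        {"source": "blog", "text": "Celebrity gossip roundup"},
        {"source": "gov",  "text": "New retail regulation announced"},
    ]

def filter_items(items):
    # Keep only items that mention a topic the mission cares about.
    keep = []
    for item in items:
        words = set(item["text"].lower().split())
        if words & RELEVANT:
            keep.append(item)
    return keep

def integrate(items):
    # Assemble the surviving Intel into one coherent brief.
    return {"item_count": len(items),
            "brief": "; ".join(i["text"] for i in items)}

report = integrate(filter_items(collect()))  # 2 of 3 items survive
```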
It's important to understand, though, that Intelligence only informs decisions. What actions to take with the gathered Intel is management's prerogative. And as such, the ethical questions rest upon the executives, not the Intelligence unit itself.
Intelligence operations business case
How would a regular business take advantage of an Intelligence unit? Here is a fictional example from the retail industry.
Imagine an international brick-and-mortar retailer. The business has a global footprint, with multiple stores in many different countries. They want to hold and grow their current markets. At the same time, they want to expand beyond their present countries, opening new regions.
The top executives have decided to start a global Intelligence unit to support all locations with on-the-ground Intel. Each country is expected to consume and factor in Intelligence reports from the unit.
These are some of the challenges they'll need to resolve.
International footprint. Global operations require a constant pulse of what's happening in each country. A change of government or legislation can have significant consequences for the business.
New openings. To proceed with their expansion, it's crucial for them to know where and when they should open a new shop.
Strategic relocations/cost control. Maintaining a physical store is very costly. They need to keep an eye on customer displacement areas and potential opportunities to move within a different area of a city.
Price analysis. The firm needs to keep their prices within competitive margins. They need to track the competition's prices, not only globally, but on a country by country basis.
Competition analysis. It's important to know what the competition is doing. They want to predict potential threats like new store openings, offers, promotional events, etc.
Information leaks. It's also vital to prevent critical leaks. They want to inoculate employees against poaching, information extraction or mystery shoppers.
The organization operates hierarchically. Information needs to flow from the headquarters to the regional managers, to the country managers, to the city managers.
The first step for the Intel team is to set up some collection processes. They want to be able to monitor specific things:
Retail news feeds. They want to store all retail news for further analysis.
Global news about each country. They'll be storing global news from each country. Both political and international.
Real Estate news feeds. They'll keep track of all the new openings, relocations, and real estate offering feeds for each city they operate in.
National Bureau of Statistics. They want to store all macroeconomic metrics of each country they operate in: unemployment rates, employment growth, GDP, education level, population growth, etc.
Social Media feeds. They'll establish feeds for social media streams, both for the competition, as well as any brand mention or employee engagements.
Competitors' web scraping. They'll set up automated scrapers to detect changes on the competition's websites. They'll also store product and price information.
Financial Reports. Whenever possible, they'll feed annual financial reports of the competitors in each country.
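The change-detection part of that scraping setup can be as simple as hashing each fetched page and comparing against the last stored hash. The sketch below (URLs and HTML included) is purely illustrative.

```python
import hashlib

# Last known content hash per monitored page.
previous_hashes = {}

def page_changed(url: str, html: str) -> bool:
    """Record the page's hash; flag it when the hash differs from the
    previously stored one. The first sighting only sets the baseline."""
    digest = hashlib.sha256(html.encode()).hexdigest()
    old = previous_hashes.get(url)
    previous_hashes[url] = digest
    return old is not None and old != digest

first = page_changed("https://example.com/prices", "<p>Widget $10</p>")
second = page_changed("https://example.com/prices", "<p>Widget $9</p>")
# first is False (baseline), second is True (the price changed)
```

A real deployment would fetch the pages on a schedule and normalize the HTML before hashing, so cosmetic changes don't trigger false alarms.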
The team will also establish a small Human Intelligence operation. They will assess customers' buying patterns on the ground. They'll also act as mystery shoppers for other brands. This will allow them to establish competitors' metrics like top sellers, customer volume estimates, average customer ticket, etc.
All the intel will be stored in an isolated and encrypted computer network. Access to it will require specific gear to prevent unauthorized access. This isolation will minimize any leaks if hackers compromise the corporate system.
Filter systems. The team will filter each feed so it can highlight, through statistical models, potential leads. The filters will target competitors, the brand, employees, countries, cities, and potential competitors.
Prediction models. The team will build prediction models that can use the filtered data to assess potential risks to the organization.
The team will create periodic reports to send to the local heads. Each report will contain the following:
Current geopolitical situation. A brief on the existing domestic situation highlighting the top news and potential threats.
Potential relocation opportunities. Based on the analysis, possible new areas that can lower rental costs while maintaining a competitive location.
Potential new locations. Analysis of likely new stores to open in the area.
Price analysis. Price fluctuation of similar products in the country.
Demography analysis. A study of the evolution of the target customer within the country, e.g., the evolution of tourism in the region or city: top nationalities, top expenditures, etc.
New competitors. Prediction of potential new competitors looking to enter the country or city.
Competitor's strategies and statistics. Research on the top competitors with typical customer behavior, average ticket, customer type, etc.
The operation is an evolving one. The more information gathered, the more Intelligence can be generated. At first, the Intelligence team will push information to each local player. The local players will also operate as sensors that determine what is useful for their organization. With time, the team will start incorporating new reports and intelligence based on the unfolding needs of each local team.
Conclusions on Corporate Intelligence
As I commented before, Intelligence operations aren't just for big corporations anymore. Global competition and massive data gathering are forcing organizations to be smarter. If your rivals are making informed decisions and you're not, you'll lose and become irrelevant.
The irony is, companies are already digitizing their businesses and incorporating more and more data. This hunger for data is already leading them to build comprehensive collection processes. Some are already using it to create competitive Artificial Intelligence systems. Why not use it for Intelligence purposes too?
Leadership must enforce caution when building their Intel operations. Not everything is acceptable. Some methods are illegal or borderline unethical. It's imperative that organizations establish a clear ethical code around their Intelligence efforts. This will prevent unnecessary investigations or legal complications.
There is an information war happening as we speak. Those that don't arm themselves will become its victims. In the best-case scenario, they'll see an erosion of their market share. In the worst-case scenario, they'll be wiped out of the market. Don't wait. Start building the necessary Intelligence capabilities now.
We live in the age of scale. Everything has to be scalable. Everything has to accelerate. It seems that if your business, division or team isn't achieving rapid growth, it's not successful.
Scale, though, isn't always creating more opportunities. It induces an effect called aggregation: the more prominent a business is, the more people flock to it; the more information it gets, the better it gets; the better it gets, the larger it grows. And so on.
For businesses to compete with aggregators, they need scale. Without scale, it's hard to make enough money to sustain operations. But scale depends upon two things: automation and data. Automation comes from both artificial intelligence systems and crowdsourced mechanisms; data is likewise a mix of machine-generated and user-generated content.
The question is, who controls the generated content? What happens when the amount of information exceeds human oversight? Can we trust an algorithm to vet what content exists and what doesn't?
Scale and content
Most companies are investing heavily in scalability. They're increasing their capacity, their infrastructure and their quality of service. But hidden amid this growth frenzy, content assessment and security are being abandoned.
"Over the course of this year, we have invested significant resources to increase trading capacity on our platform and maintain availability of our service. We have increased the size of our support team by 640% and launched phone support in September. We have also invested heavily in our infrastructure and have increased the number of transactions we are processing during peak hours by over 40x."
But as these services grow, content quality and security assurance are becoming critical. Facebook is under fire over unsupervised ad purchases and filter bubbles. YouTube is getting hell for its lack of content control, especially around children's content. Users are accusing Twitter of becoming the home of trolls, Nazis, and armies of soulless botnets. The FCC is being questioned over the authenticity of comments submitted to its Net Neutrality proceeding.
All these aggregator companies struggle with content. They feed on it, but their scale is so massive that it's impossible for them to control its flow anymore. Most problematic of all, they still don't know how to fix it.
"A filter bubble is a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles."
Facebook is trying to ease the situation. The truth, though, is that Facebook, by design, creates filter bubbles.
"A bridging weak tie in a web context is a link to a source of information that you might not normally look at, you may not agree with, and challenges your ideas. Facebook and Google algorithms do the opposite: They show things we will like and agree with, so they are basically erasing our weak, bridging ties, at least in our digital networks."
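The erasure of bridging ties that quote describes can be sketched with a toy simulation. Everything here is a made-up illustration (the topics, the weights, the 1.5x boost), not any real platform's algorithm: an engagement-optimizing feed keeps boosting whatever the user clicks, and the feed's diversity, measured as Shannon entropy, collapses.

```python
import math

def entropy_bits(weights):
    """Shannon entropy of the feed's topic distribution, in bits."""
    total = sum(weights.values())
    probs = [w / total for w in weights.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A hypothetical user starts with a perfectly diverse feed.
feed = {"politics": 1.0, "science": 1.0, "sports": 1.0, "arts": 1.0}
start = entropy_bits(feed)  # 2.0 bits: maximum diversity over 4 topics

for _ in range(20):
    # The user clicks the top-ranked topic; the feed boosts it further.
    top = max(feed, key=feed.get)
    feed[top] *= 1.5

end = entropy_bits(feed)  # near zero: the bubble has closed
print(f"diversity: {start:.2f} bits -> {end:.2f} bits")
```

Real feed algorithms are vastly more complex, but the feedback loop, engagement in, reinforcement out, is the structural reason the bubbles emerge by design.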
YouTube is another example of out-of-control content. Their case is compelling because they're mixing human moderation with Deep Learning aid. Human operators train the neural network, and the Deep Learning system extends its reach to all the platform's content. The results, while impressive, have also generated unintended consequences.
“The thing that sucks is YouTube doesn’t tell you why it was de-monetized,” said Sam Sheffer, a 27-year-old whose career as a YouTuber began just a few months ago. “They link you to some arbitrary set of rules, and you have no idea why you were de-monetized other than the fact that you are.”
The general pattern is always the same. Due to scale, content gets out of control. Automated content infests the networks. People cry out, and the operators harden the filters. Because of the immense volume, humans alone can't operate these filters manually, so operators design new algorithms to aid them in filtering and controlling the content.
These machine-augmented moderation systems do censor plenty of subversive content, content that shouldn't be there in the first place. But they also have unexpected effects: the diversity of content is suppressed, and only the safest, most conservative material survives. Worst of all, these systems can't explain their decisions. When questioned by the platform's users, operators are unable to say why the system censored a given piece of content.
Ethics, diversity, and open-mindedness aren't a black-and-white equation. Your upbringing, your education, your culture and your personal experiences all matter. All these biases will creep into AI-assisted moderation systems, and we need to be vigilant about it.
Build content moderation from the start
Learning from past mistakes has always been critical. In the age of exponential scalability, it's more crucial than ever. There isn't much margin for error: a small slip, innocuous at a small scale, will grow into a choking issue as the system scales.
There are valuable lessons that newcomers can learn from the current aggregators.
Don't sacrifice content quality in pursuit of rapid growth. Eradicating questionable content once it's part of the larger system is painful and damaging.
Establish a clear content policy from day one. There has to be a clear set of rules people can follow. It's impossible to be objective, but at the very least, be transparent about the guidelines.
Be straightforward about how the organization enforces the policy. Users should know how the system assesses whether a piece of content has infringed the platform's policy.
Be impartial. It can't be that some users, due to their status or name, get to upend the rules of the platform. The recent banning of women on Facebook is a good example of what not to do.
Set up moderators early on. Moderators should raise problematic issues that the current policy doesn't address.
Under no circumstance allow moderators to make decisions that aren't objectively supported by the policy.
Update the policy on an ongoing basis. It's impossible to capture all the nuances of social convention, so keep the guidelines alive. The policy is a growing organism, like a newborn learning the rules of engagement.
Implement self-policing mechanisms in the platform from day one. You will need them. No matter how good your moderators are, you need a system that lets users bring attention to specific issues.
Build abuse detectors. As your platform grows, rogue elements will try to abuse it. You need ways of detecting these behaviors from day one. It's tempting to delay this until you've grown, but by then, the damage might be too widespread. The Twitter bots and the FCC Net Neutrality comments are good cautionary tales.
Review the output of your abuse detectors regularly. These systems are autonomous and will make mistakes. You can't build them and forget about them.
Make sure your automated systems enforce new changes to the policy promptly. A delay between the two can be problematic.
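To make the last few guidelines concrete, here is a minimal sketch of an abuse detector that combines behavioral signals with the self-policing signal (user reports). Every field name and threshold here is a hypothetical illustration, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    duplicate_ratio: float   # share of posts that are near-duplicates
    reports_received: int    # the self-policing signal from users

def abuse_score(a: Account) -> float:
    """Sum of simple spam signals; all thresholds are illustrative."""
    score = 0.0
    if a.posts_per_day > 100:      # inhumanly high posting rate
        score += 1.0
    if a.duplicate_ratio > 0.8:    # mostly copy-pasted content
        score += 1.0
    if a.reports_received > 10:    # many users flagged this account
        score += 1.0
    return score

def flag_for_review(a: Account) -> bool:
    # Never auto-ban: route suspicious accounts to a human moderator.
    return abuse_score(a) >= 2.0
```

Note that the sketch only flags accounts for human review; in line with the guidelines above, the automated layer surfaces candidates while the policy-backed decision stays with moderators.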
There isn't a perfect recipe for humans. We are complex systems, and it's impossible to plan for everything. Nonetheless, most people forgo essential quality assurance for the riches of rapid growth.
The consequences of not doing this are dire. Advertisers flee and revenues go down. Content creators flee, traffic plummets, and market share erodes. Revenues go down even more.
PS: As a side note, I wonder how feasible it would be to create a system like AlphaZero that uses reinforcement learning to devise a real-time policy that changes and adapts.
The problem, though, is that fake news isn't only manipulating political events. Its influence extends to governments, to strategic decisions like Net Neutrality, and even to industry-wide warfare.
Anatomy of fake news
Several aspects of fake news have changed in recent years, turning it into a terrifying threat to society. Fake news needs three components to work.
Thanks to the Internet, it's never been easier to create content. Anyone can start an online publication and dump their ruminations.
Fake content isn't limited to political agendas. It can touch virtually anything: human-trafficking claims against an enemy, or lies about how a company treats its employees.
Distribution: The propaganda machine
Fake news content needs to be widely circulated, and it's important to deliver it to the right people at the right moment. Although Facebook is getting all the press lately, there is another suspect present at every single fake news incident: Twitter.
Although propaganda efforts engage many different delivery channels, Twitter's core design is the perfect distribution engine. It's anonymous, it's in real-time, it allows for easy targeting and, for the most part, it's unpoliced.
Facebook, on the other hand, restricts distribution through its algorithm (the News Feed), focuses on identifiable profiles and is slightly better policed than Twitter.
Although the latter is harder to exploit, both are extensively used for information manipulation.
Email, forums, websites, chats, you name it. Anywhere there is a strategic audience, interested parties will target it.
It's important to mention that the target audiences don't need to be large. Political misinformation, for example, tends to target sizeable audiences. Other forms of manipulation, like attacks on brands, products or business deals, might not call for a massive audience, just the right one.
Scale: The unstoppable tsunami
The previous two elements of fake news have existed for decades. There have always been hidden agendas, and there have always been ways to reach an audience, be it through a forum, a book or a newspaper.
“Nothing can now be believed which is seen in a newspaper. Truth itself becomes suspicious by being put into that polluted vehicle. The real extent of this state of misinformation is known only to those who are in situations to confront facts within their knowledge with the lies of the day […] I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors.”
Nonetheless, the capacity to produce fabrications and distribute them to a broad audience has never been as great as it is today.
In other words, the scale of misinformation we can attain today is orders of magnitude greater. Content automation enables us to create more content than ever before; Deep Learning systems can copy, summarise or even create content, at scale.
Hand in hand with this scale go the distribution capabilities of networks like Facebook or Twitter. Never in the history of humankind have we experienced such colossal aggregation platforms.
The combination of these two facts makes the scale of information manipulation tremendous.
Fake news in the age of Artificial Intelligence
To accomplish scale, there needs to be a certain degree of automation. Automatic content creation is one part. Autonomous distribution and amplification, though, is the cornerstone.
Propaganda automates distribution through bots: computer scripts, posing as human users, that automatically spread fake news on social networks. Sometimes a human operates a bot; other times it runs autonomously. These human-bot hybrids are called cyborgs.
Twitter is, by far, the largest breeding ground for bots. The social network's design is perfect for them: it exposes an API, which enables the automation of basic operations, and it allows the creation of anonymous accounts at scale.
Bots, though, don't operate in isolation. Bot owners cluster their creations to form swarms of bots that run in a coordinated way. These hives are called botnets.
"Twitter bots can pose a series of threats to cyberspace security. For example, they can send a large amount of spam tweets to other users; they can create fake treading topics; they can manipulate public opinion; they can launch a so-called astroturfing attack where they orchestrate false ‘grass roots’ campaigns to create a fake sense of agreement among Twitter users; and they can contaminate the data from Twitter’s streaming API that so many research works have been based on; they have even been linked to election disruption."
Smaller botnets comprise 30-40 bots. Bigger botnets might be as extensive as 350,000 bots (Jan. 2017). The latest discovered botnet, called Bursty, implicates 500,000 fake Twitter accounts (Sep. 2017). Depending on how conspicuous and aggressive these bots are, they can tweet between 72 and 300 times a day. That gives a throughput of 36 million tweets per day at the lower end.
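Those throughput figures are easy to verify with a quick back-of-the-envelope calculation using the numbers cited above:

```python
# Back-of-the-envelope botnet throughput, using the figures cited above.
bots = 500_000                   # Bursty botnet estimate (Sep. 2017)
low_rate, high_rate = 72, 300    # tweets per bot per day

low = bots * low_rate            # lower-end daily throughput
high = bots * high_rate          # upper-end daily throughput
print(f"{low:,} to {high:,} tweets per day")
```

At the conservative rate, that's the 36 million tweets per day mentioned above; at the aggressive rate, it climbs past 100 million.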
The problem is so critical that in 2015 DARPA (The US Defense Advanced Research Projects Agency) organized the first ever Twitter bot challenge. The goal was to upgrade global cyber-defenses against fake news on Twitter.
Since then, bot-detection technology has been improving, but it's not enough. The recent discoveries of the Star Wars and Bursty botnets confirm it.
The most worrisome aspect of it is that, despite all the efforts to detect bots and fake news amplification nodes, there isn't an easy way to stop them.
Botnet neutralization starts with detecting their activity, and this first step is already complicated. Tweetstorms are identified quickly, but subtler techniques are much harder to spot. Once fraudulent news activity is detected, the second step is to uncover the botnet behind it. Some bots are obvious; others are very sophisticated.
"Social bots can search the Web for information and media to fill their profiles, and post collected material at predetermined times, emulating the human temporal signature of content production and consumption—including circadian patterns of daily activity and temporal spikes of information generation."
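One way to operationalize that "temporal signature" idea is to measure how regular an account's posting intervals are. The sketch below, with made-up timestamps and an arbitrary threshold, flags accounts whose inter-post gaps are suspiciously uniform; as the quote notes, sophisticated bots defeat exactly this kind of naive check by emulating human burstiness.

```python
import statistics

def interval_cv(timestamps):
    """Coefficient of variation of the gaps between consecutive posts.
    Humans post in irregular bursts (high CV); crude scripts post on a
    timer (CV near zero)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

def looks_scripted(timestamps, threshold=0.1):
    # threshold is an illustrative assumption, not a validated cutoff
    return interval_cv(timestamps) < threshold

# Made-up post times, in seconds since some epoch:
bot_like = [0, 600, 1200, 1800, 2400]   # exactly every 10 minutes
human_like = [0, 120, 900, 1000, 4000]  # bursty and irregular
```

A real detector would combine many such features (content similarity, follower graphs, client metadata), precisely because any single signal is trivial to evade.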
The last step is perhaps the hardest. Once we've identified part of the botnet, we have to disrupt it. As most detections happen outside of Twitter, the only way to stop the botnet is by reporting it to the company. According to their policy,
"Some of the factors that we take into account when determining what conduct is considered to be spamming include: […] If a large number of people have blocked you in response to high volumes of untargeted, unsolicited, or duplicative content or engagements from your account;
If a large number of spam complaints have been filed against you;"
Therefore, it's up to Twitter to decide when to take down such accounts. The important fact, though, is that the time between detecting fake news activity and disrupting the botnet might be quite long. Each phase might take days or even weeks. In a week, a regular, not-too-aggressive botnet (~100 bots) might push out somewhere between 50,000 and 100,000 tweets. That's enough to take over and disrupt a conversation, a hashtag or a Twitter trend.
Future of fake news
Although the press has devoted much time to discussing the impact of fake news on politics, I feel that is also a diversion.
Information warfare is being waged, as we speak, to influence major strategic decisions. Such powerful botnets can attack a country, but they can and will subvert any organization that lies in their path.
It's not farfetched to think that the current backlash against the big technology companies isn't, to an extent, amplified by the fake news echo chamber.
It's easy to plant disinformation as long as it's what the audience wants to believe. Giants like Facebook are the enemy now, so any content bashing them will find massive virality.
Today it's Facebook; tomorrow it could be Bayer, Unilever, Maersk or your organization.
Cybersecurity investment is on the rise, but still, I don't know of any company that's deploying bot hunters and botnet disruptors. The only way to fight scale and automation is with scale and automation.
Cyberwarfare isn't for governments anymore. Companies need to invest in cyber defenses and be able to disrupt fake news attacks in real time.
According to the Forbes list, out of the top 10 largest companies in the world, four are Chinese, five are American, and one is Japanese.
If you narrow the scope to tech-only companies, China has seven companies in the top 20. Two of them, Tencent and Alibaba, among the top six.
The China that most people imagine has nothing to do with the technology superpower China is now. Andrew Ng, former Chief Scientist at Baidu, hits the nail on the head when he states,
“China has a fairly deep awareness of what’s happening in the English-speaking world, but the opposite is not true.”
This asymmetry is helping China fly under the radar. Most organizations are so focused on the Silicon Valley dream that they're missing the elephant in the room.
Education is a critical aspect of any country, especially when we're speaking of innovation. Historically, China's educational levels have been subpar compared with the rest of the world. That hasn't been the case for a while now. The truth is, China's universities are already outperforming many of their international peers.
While institutions like Stanford still hold on to their perch atop the global rankings, universities like Peking University are closing in. Stanford outranks them in specific scores but lags in others, like technology transfer.
By comparison, it's worth noting that there are precisely zero European universities among the top 30 (excluding the United Kingdom, given Brexit).
China's educational institutions still have a pending subject: attracting foreign talent. The country is trying to fix the lack of an international crowd by applying a mixture of strategies with varying degrees of success.
But better universities aren't the only reason for China's ascent to the innovation Olympus. In 2006, Hu Jintao, General Secretary of the Communist Party of China, and Wen Jiabao, Premier of the People's Republic of China, declared their intention to transform China into an 'innovation-oriented' nation.
These declarations brought forward the term 'Indigenous innovation.' It refers to the capacity to produce innovative products and services from within a national context.
To achieve such a lofty goal, they knew they needed better local knowledge than they had. Improving the university system was strategic to this end, but it wasn't enough.
At the time, Chinese researchers and professors lacked knowledge in critical fields. To reduce the gap, they decided to bring foreign experts to the mainland through what's called the 1000 Talents Program.
“The way the government is putting money in is getting smarter and smarter,” says Ming Lei, one of the co-founders of Baidu and now co-director at Peking University’s AI Innovation Center. “Before they just gave money to research projects or big SOEs or universities. But now they are more likely to give it to a private company, to one that is more active and can produce the products and services.”
Enter the Chinese startup scene.
China's startup talent
As with Chinese education, Chinese startups have for years been looked down upon due to their lack of competitiveness. Local startups grew mostly as American copycats. Despite the negative connotations, these companies brought a wealth of knowledge to their entrepreneurs: they learned how to build products, and how to do it fast.
“The velocity of work is much faster in China than in most of Silicon Valley,” says Ng. “When you spot a business opportunity in China, the window of time you have to respond is usually very short—shorter in China than in the United States.”
China might have started as the land of the copycats, but it quickly evolved and started developing its own innovations. New Chinese startups emerged that not only served the local market's needs but did so at a scale never seen in the US.
"Chinese companies experience both much larger scale than anything seen before in the US and no holds barred domestic competition."
Such has been the evolution of the Chinese startup ecosystem that their products and services are starting to outperform their American peers.
“Weibo is a better product than Twitter, same for Taobao and eBay, WeChat and Facebook Messenger. Better features, more robust business model.” Today, Chinese companies are coming up with innovative products not seen in America such as customized news or distance learning using “underpaid American teachers” to teach English. We are now entering the age of copying from China, says Lee.
All this was happening while Internet and mobile penetration were increasing. In a way, China skipped an innovation step and went directly to mobile.
This leap has created some unique mobile behaviors that are giving a massive edge to Chinese companies.
"The data gap between the US and China is “dramatically larger” than the actual gap between the respective populations or the number of active mobile users. Chinese use their phones to pay for goods 50 times more often than Americans, he says, and orders for food delivery are ten times greater than in the US."
China's innovation efforts have squarely targeted the development of AI and Deep Learning technologies. Nonetheless, attaining AI expertise isn't easy. Investing in AI demands spending on the three building blocks that make it possible: hardware, data, and talent.
People know China for its hardware production. Even so, expertise in the design of high-tech semiconductors has remained elusive. If China wanted to up its game, it needed to increase its knowledge in the space.
Access to massive amounts of data is paramount for AI. And data is one thing China has in excess. With an Internet population of 731 million users (2.5x that of the US) and very lax privacy regulations, they're well equipped to train their systems with large swaths of information.
“When it comes to government data, the US doesn’t match what China collects on its citizens at all,” says James Lewis, senior fellow at the Center for Strategic and International Studies. “They have a big sandbox to play in and a lot of toys and good people.”
Data and hardware aren't enough; you need people to man the algorithms. The government started doubling down on AI research money to increase the number of skilled AI and Deep Learning researchers. In contrast, the Trump administration began slashing the 2017 National Science Foundation budget by 11.2%. The effect has been dramatic.
In October 2016, the US National Science and Technology Council released a paper titled "The National Artificial Intelligence Research and Development Strategic Plan" (PDF). The document noted that China had surpassed the US, for the first time, in the number of peer-reviewed publications mentioning Deep Learning. It also set out the first-ever US Artificial Intelligence R&D strategic plan.
The rise of Chinese AI researchers has been felt worldwide.
“When Rao [Subbarao Kambhampati, current president of the Association of the Advancement of Artificial Intelligence, AAAI] first started seeing Chinese researchers at international AI meetings, he recalls they were usually from Tsinghua and Peking University, considered the MIT and Harvard of China. Now, he sees papers from researchers all over the country, not just the most elite schools. Machine learning—which includes deep learning—has been an especially popular topic lately. “The number of people who got interested in applied machine learning has tremendously increased across China,” says Rao.
"In 2016 China increased its output of AI-related papers by almost 20 per cent compared with the previous year, while EU and US output dropped. […] However, the quality of fundamental research remains a problem. Although China leads the world in quantity of AI research, it lags behind the EU in terms of number of AI papers in the top 5 per cent of most cited research — but still overtook the US in this metric last year."
One of the unspoken advantages of many Chinese researchers is that they have access to the best of both worlds,
“Chinese researchers usually speak English, so they have the benefit of access to all the work disseminated in English. The English-speaking community, on the other hand, is much less likely to have access to work within the Chinese AI community.”
"The plan includes formulation of laws, regulations, and ethical norms on AI, as well as mechanisms for safety and supervision. The plan seeks to mitigate likely negative externalities, such as job losses, associated with AI, while fully leveraging the opportunities."
“We were told by the secretary of the Air Force, ‘Your tech is awesome, we should put it everywhere,’” he said. “No one followed up.” […] American military officials have “figured out a very good way to give $10 billion to Raytheon,” he said. “But to give a start-up $1 million to develop a proof of concept? That’s still very, very hard.”
On the other hand, China is trying to make it easier for foreign talent to come and work with them. To that end, China's big three, Baidu, Alibaba and Tencent (BAT), have been opening AI- and Deep Learning-focused research centers on the West Coast.
Underestimating China is easy. For many years it's been the land of the cheap, mediocre copycat. Chinese culture is foreign to most Westerners, plagued with idiosyncrasies that cultured Western institutions have deemed inferior or wrong. The fact that few outside of China speak or read Chinese doesn't help. We disregard and downplay whatever is different or unknown to us.
But the truth is, it's becoming increasingly hard to ignore the fact that China is on the verge of becoming the world's technological leader.
While Chinese universities still have a low rate of international participation, that will change fast. It's a matter of time before foreign students start flocking to China, looking for the next Stanford.
Meanwhile, more and more companies are turning to China for funding and customers. The US and Europe are lagging behind in technological adoption. Robotics, AI-based systems, automated education, Quantum computing or smart mobility are all happening in China, not in the US. The market is in China, the funding is in China, and the regulation is in China.
Let me start this post by saying that I'm a real believer. Virtual Reality (VR) is a technology that will disrupt how we live our lives. The problem is, it's not there yet.
The short answer is that the trade-off between the usefulness of the content (benefit) and the accessibility of the technology (cost) isn't there yet. Let's walk through the VR value chain and analyze each part.
There is some fantastic VR content out there, but it's not the norm. There are three main problems with VR content. The first one is common to every new medium: content transposition, i.e., creators simply porting over content designed for the old medium.
The second issue is the lack of VR developers. There are two sides to what we now call Virtual Reality. On the one hand, we have what are called 360º experiences: movies with a 360-degree field of view. Producers record the content with dedicated 360º cameras, and the footage is then "stitched" together to create an immersive experience. On the other hand, you have CGI-based material: experiences created entirely with Unity, Unreal, Blender and the like, content that resembles the traditional game-development process.
While development for VR might look similar, it's not the same. 360º experiences allow for multiple storylines, content needs to be "stitched," and directors require expensive hardware. The same applies to CGI-based content: the possibilities of the medium make it different.
New tools are being developed that use the VR space itself to design 3D content. I think that's the way to go, and so far, the learning curve is less steep than the traditional hardcore Unity route. That said, these tools are still in their infancy.
The truth is, great VR developers are scarce, just like good AI experts. It's hard to train someone, and the return on the investment is dubious. VR sales aren't there yet, and some studios are even cutting down on VR investments. It's hard to predict whether getting into VR is a good or bad investment, which puts a brake on getting more people into the field.
The third difficulty with VR content is what I call usefulness. For the industry to take off, there needs to be a clear set of benefits. Most of the people with CGI expertise come from the entertainment industry (game studios and moviemakers), which is one of the reasons the content is heavily tilted towards the entertainment sector.
This bias towards entertainment already narrows VR's usefulness. If you compare the benefits VR entertainment offers against its ease of use, we can see we're not there yet.
VR entertainment, be it games or VR movies like Melita (disclaimer: I know the makers), is mind-blowing. The question is, is the experience so good that it offsets the cost of having it? The cost isn't just economic; there's also the cost of setting up the system. The more devices and sensors, the more time setup requires, and the more expensive it is for the user. Time is money.
The most cost-effective devices (smartphone-powered headsets) aren't costly ($10 – $150), but they deliver a poor substitute for a 2D movie or game.
The mid-range devices are way more expensive ($350 – $500), and they depend not only on the headset but also on external controllers. On top of that, the experiences, while better, aren't astounding. At least not enough to make people come in droves.
Top-of-the-line gadgets can easily hit the $1,000 mark and demand external controllers, external cameras, and a dedicated gaming PC. The delivered experiences are astonishing, but it's a steep price and setup.
Beyond entertainment, few categories are mature enough to compete with regular apps. Microsoft might be one of the best positioned to expand into other niches. While HoloLens isn't widespread due to its lofty price, they're pushing the technology beyond entertainment. Their new Mixed Reality platform is quite promising.
On top of the problems with content, the distribution of content is a mess. The word that comes to mind is "confusing." Finding content isn't easy. Each hardware manufacturer uses a different platform, trying, in vain, to achieve vertical integration.
The strategy makes sense when you own the hardware and you're already terrific at that part of the value chain. The problem is, none of the current players is. None has a distinct hardware advantage, which means there is no reason to flock to their distribution platform.
If anything, I would say that SteamVR is the biggest winner. It not only carries the VR experiences for the HTC Vive and Oculus but Microsoft's Mixed Reality ones too.
The problem with Steam is that its primary focus is gaming, and most of the current SteamVR experiences are games. Time will tell if they can outgrow their gaming origins and become the de facto VR content platform.
Hardware fragmentation is also a problem right now. While there aren't that many headset options, namely Oculus, HTC, and Samsung, the space is heating up. The recent entry of Microsoft into the VR race is increasing the number of hardware options too (Dell, Asus, Lenovo or Acer).
Brand fragmentation is on the way, but even more critical is the fragmentation the technology itself is generating. VR headset capabilities are changing every 4-5 months; a headset bought now will underperform in less than a year. This is problematic not only for users but because it creates a content-legacy problem. A quick look at Google's Daydream store shows many VR apps with compatibility issues.
Last but not least, the hardware isn't there yet. We've gone from low frame rates and dizziness problems to better frame rates and enhanced tracking with external sensors.
Still, the experience faces three significant obstacles. The first one is The Cord. The fact that you need to have the headset tethered to something is a real hindrance. It gets in the way of mobility, just when everything is becoming mobile.
Luckily, the cord is disappearing with the new standalone headsets by Oculus (Oculus Go) and HTC (HTC Vive Focus, HTC Link).
The second major obstacle is the need for external sensors to track the user's movement. Oculus had a camera, the HTC Vive introduced external infrared sensors, and finally, the HTC Vive Focus features onboard sensors. This allows for an authentic "standalone" experience with what's called 6DoF (six degrees of freedom).
The last major hurdle, though, is still the computational requirements. The HTC Vive Focus allows for a standalone experience, but there is a catch: you can't run the most power-hungry VR experiences. For those, you still need to rig the device to a computer.
What's the ideal? Devices that are both mobile and capable of running the top VR experiences, where the real value is delivered. Anything below that threshold will make it hard for the mainstream to buy in.
Virtual Reality will change our world. I do not doubt it. I've been trying headsets for a while now. It always blows my mind. Still, there are some significant issues in several parts of the VR value chain that need to be resolved first.
The ecosystem is still very confusing. The focus now is on getting the technology to a mass-market-ready point. The technology works, but it still requires a substantial investment in hardware, time, and money. That needs to change.
Due to that focus, everything else suffers. Every single content platform suffers from a lack of clarity, confusing messaging, or bad UX. The iPhone took over the market not only due to its superior hardware but due to its superior usability. The VR space needs a simplification of its messaging, hardware, and platforms.
The VR industry still needs a little longer to mature. The problem, though, is that the investment is required now. Most of the companies investing in VR today should be prepared to keep doing so for the foreseeable future. There are still two or three more years of desert to cross before we start seeing enough adoption to recover some of those investments.
One more thing. There is a prominent absentee in all of this: Apple. So far they've been sitting on the sidelines, arguing the technology isn't there yet. After their recent iPhone X release, I have to wonder whether they have something in store, despite Cook's recent statements on the matter.