When the current tech talent isn’t enough

Technology is becoming ubiquitous in every company, but not all companies use it the same way. Some organizations use technology to support their processes. Others make technology their core business.

Using technology merely to support your business isn't enough anymore. The market is demanding more complex products and services, products that require sophisticated approaches most companies can't provide. Only advanced technology can deliver them.

Digital Transformation isn't a new trend; it's the cost of doing business, no matter your industry, your size, or your market. Failing to move technology to the core of your operations will kill your business within the next few years.

But digital transformation assumes there is enough technical talent in the market. There isn't. New entrants need to fight not only the technology corporates but also a massive drought of talent.

Digital transformation alone isn't enough either. Digitizing your product catalog won't cut it anymore. Switching to a mobile site won't make you sell more. Despite what many business owners think, the use of Artificial Intelligence, Machine Learning algorithms, and data aggregation is becoming critical for survival.

The problem is, these cutting-edge methods demand out-of-the-ordinary talent. Being a Computer Science graduate isn't enough.

Machine Learning and Deep Learning methods need a multi-disciplinary approach, something most current engineers aren't good at. They require sophisticated mathematical techniques paired with probabilistic thinking.

The use and adoption of new technology move faster than our capacity to train people through four- to five-year degrees.

Source: Assessing and Responding to the Growth of Computer Science Undergraduate Enrollments, a report by the National Academies Press.

Keep up to date and grow your knowledge.
Subscribe to our newsletter.

Where is the talent?

The question is, how are we mitigating this? The answer is, we aren't. If you aren't one of the big technology companies, you are at a massive disadvantage. Not only will you find it hard to attract talent without exciting, billion-person-impact projects; you'll have a hard time matching their salaries too.

Source: 2017 State Of Global Tech salaries

The top technology organizations are having a devastating effect on the market. To keep expanding their systems, they're hiring experts at an ever-increasing rate. They're quite literally draining the talent pool, pulling in not just new and recent graduates, but university professors too.

This is having a dire effect on the incumbent players. Not only will they struggle to find talent; the talent they do attract will be second-rate. Even worse, the exodus of professors will hinder the ability of many institutions to produce top-rate graduates in the future.

With more than half of new CS Ph.D.s drawn to opportunities in industry, hiring and retaining CS faculty is currently an acute challenge that limits institutions’ abilities to respond to increasing CS enrollments.

Source: Assessing and Responding to the Growth of Computer Science Undergraduate Enrollments, a report by the National Academies Press.

Many are turning to Massive Open Online Courses (MOOCs) as the solution. If we can retrain talent from other specialties and do it fast and cheap, we might be able to offset some of the demand.

The reality, though, is different. MOOCs aren't reaching the scale that the industry needs. Quite the opposite: completion rates for MOOCs hover between 7% and 10%. Despite all the hype, it seems that the open-to-all approach isn't yielding the rates the industry expects. In some instances, completion rates are even worse than at traditional higher-education institutions.

In 2013, some MOOC providers started pivoting to different approaches. What's clear is that free, on-demand courses don't work. People need incentives. Paid certifications and semi-synchronous, instructor-led courses are crucial for better completion rates.

Some providers are partnering with existing higher-education institutions to deliver MOOC-based degrees.

Source: Coursera Launches Two New Masters Degrees, Plans to Offer Up to 20 Degree Programs

Others, like Udacity, are approaching the problem by involving the technology industry instead, creating Nanodegrees aligned with the needs of the industry.

Udacity is tuning its Nanodegrees to the most exciting fields of technology right now. Its Self-Driving Car and Flying Car Nanodegrees, for example, are drawing massive success (18,000 Nanodegree graduates, four times as many as in 2016).

The key to their success is twofold. They're teaching applied Artificial Intelligence use cases instead of the dry mathematical approach taken by traditional universities. They're also enrolling the industry's top experts to teach the classes and showcase their work: the same experts the big tech giants are poaching from university R&D labs.

Technology organizations are getting involved in these Nanodegrees, both with talent and money. Their goal is to hire the graduates of such Nanodegrees to populate their labs. Companies like IBM Watson, Google, and Didi Chuxing are betting on this approach.

Other companies, like WeWork, the global coworking company, are buying boot-camp schools outright, so they can offer a new source of talent to their customers. It seems that the industry is investing heavily in alternative talent pools.

The question is, for all those incumbents out there, what exactly are they doing to foster talent?

As education moves from open to scarce, limited, and expensive, how can we retrain minorities and other at-risk groups? Ignoring this rift will only widen the chasm between the technology elites and the rest of the population.

The gap between the capacity of technology corporations to amass talent and everyone else's is widening every month. The more it swells, the harder it will be for other industries to compete with them. And while technology corporations might seem confined to their industry, they're expanding into and owning adjacent sectors at a rapid pace.

Incumbents should invest in building alternative talent pools for their own purposes. They should embrace challenging projects that appeal to potential candidates.

Despite all this, the truth is that automation is accelerating the rate at which we destroy jobs. While automation also creates new positions, the speed imbalance between the two keeps growing.

The problem is so acute that some organizations are, paradoxically, investing in even more automation. Two reasons drive this: it's harder than ever to find the talent they need, and the complexity of new products requires a super-specialized workforce, one that's impossible to grow and train at the industry's pace.

What are top corporations doing? They're training their AIs to do these jobs. Train an AI once, replicate it ad infinitum. This is precisely what Foxconn is doing with its micro robotic arms, and what Google's DeepMind is demonstrating:

Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play.


It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher.
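
To make the self-play idea concrete, here's a minimal sketch. This is an illustration only, not AlphaGo Zero's actual architecture (which pairs deep neural networks with Monte Carlo tree search): a tabular value-learning agent that teaches itself tic-tac-toe purely by playing games against itself, starting from random play.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# V maps a board state to its estimated value for the player who just moved.
V = defaultdict(float)

def choose(board, player, epsilon):
    """Epsilon-greedy: mostly pick the move leading to the best-valued state."""
    if random.random() < epsilon:
        return random.choice(moves(board))
    return max(moves(board),
               key=lambda m: V["".join(board[:m] + [player] + board[m + 1:])])

def self_play_game(epsilon=0.2, alpha=0.1):
    """Play one game against itself, then back up the outcome into V."""
    board, player = ["."] * 9, "X"
    history = []  # (state, player who produced it)
    while True:
        board[choose(board, player, epsilon)] = player
        history.append(("".join(board), player))
        w = winner(board)
        if w or not moves(board):
            for s, p in history:  # reward +1 win, -1 loss, 0 draw
                target = 0.0 if w is None else (1.0 if p == w else -1.0)
                V[s] += alpha * (target - V[s])
            return w
        player = "O" if player == "X" else "X"

random.seed(0)
results = [self_play_game() for _ in range(5000)]
```

The agent starts from random play, and game by game its value table steers it toward stronger moves. AlphaGo Zero applies the same self-teaching loop, with a neural network and tree search standing in for the lookup table.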

I wouldn't be surprised if, as I mentioned in the Quantum Computing article, more companies turn to AI-based employees instead of training humans.

The development of such algorithms isn’t trivial and requires extensive mathematical knowledge. Something that isn’t common. […] As a side note, I wonder if the current Deep Learning models can’t be applied to the task of developing new Quantum Algorithms.


It's hard to predict how this will end. What's true is that the situation is grim. Access to advanced training is expensive and limited. If you're not investing in securing your access to talent, it might soon be too late for you.

It's worrisome that there are no structured plans to retrain labor displaced by increased automation in the workplace. If organizations fail to drastically upgrade their workforce, they'll face significant problems. They won't be able to grow and compete with other offerings. Technology-based aggregators will surpass them and asphyxiate their market share.

How is your organization upgrading its talent? One-time HR workshops don't cut it anymore. How are you enhancing your own skills? And more importantly, how are you preparing your children for the future?


Apple’s 3D scanner will change everything

In September 2017, Apple announced the iPhone X. A new feature, called Face ID, allows the phone to be unlocked through facial recognition. This single feature will have profound implications across other industries.

Apple dubbed the camera TrueDepth. This sensor allows the phone not only to perceive 2D images but to gather depth information and form a 3D map.

It’s, in a nutshell, a 3D scanner, right in your palm. It’s a Kinect embedded in your phone. And I bring up Kinect because it’s the same technology: in November 2013, Apple acquired PrimeSense, the Israeli 3D-sensing company behind Kinect’s technology, for 360 million dollars.

How Kinect works


PrimeSense’s technology uses what’s called structured light 3D scanning. It’s one of the three main techniques employed to do 3D scanning and depth sensing.

Structured light 3D scanning is the perfect method to embed in a phone. It doesn’t yield a massive sensing range (between 40 centimeters and 3.5 meters), but it provides the highest depth accuracy.
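
The geometry behind structured light is plain triangulation: a projector casts a known dot pattern, a camera a few centimeters away observes it, and each dot's horizontal shift (disparity) encodes its depth. Here's a minimal sketch, using made-up focal-length and baseline values rather than the iPhone X's actual specifications:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation relation: depth z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Nearby dots shift a lot, distant dots barely at all (hypothetical values):
near = depth_from_disparity(focal_px=600, baseline_m=0.05, disparity_px=60)
far = depth_from_disparity(focal_px=600, baseline_m=0.05, disparity_px=10)
```

This inverse relation between disparity and depth is also why accuracy degrades with distance, and why the usable range tops out at a few meters.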

Apple’s 3D scanner patented technology

Depth accuracy is critical for Apple. The sensor is the cornerstone of Face ID, their facial authentication system (PDF). If you’re going to use faces to unlock your life, you’d better be sure accuracy is high enough to avoid face fraud.


Is this new?

This technology, though, has been around for a while. PrimeSense pioneered the first commercial depth-sensing camera with Kinect in November 2010.

Two years later, in early 2012, Intel started developing its own depth-sensing technology, first called Perceptual Computing and later renamed Intel RealSense.

In September 2013, Occipital launched its Structure Sensor campaign on Kickstarter. It raised 1.3 million dollars, making it one of the top campaigns of the day.

But despite the field heating up, uses of the technology remained either desktop-bound or gadget-bound.

In 2010, PrimeSense was already trying to miniaturize their sensor so it could run on a smartphone. It would take them seven years (and Apple’s resources) to finally be able to deliver on that promise in the form of the iPhone X.

The feat is quite spectacular. Apple managed to fit the Kinect into a smartphone while keeping the energy-hungry sensors in check, beating everyone, including Intel and Qualcomm, to market.


What will this entail

Several technological and behavioral changes are converging in the field. On one side, we see a massive improvement in computer vision systems. Deep Learning algorithms are pushing the performance of such systems toward human-expert parity. In turn, such systems are now available as cloud-based commodities.

At the same time, games like Pokémon Go are building core Augmented Reality (AR) behaviors in users. Now, more than ever, people are comfortable using their phones to merge reality with AR.

On top of that, the current departure from text-based interfaces is beginning to change user behavior. Voice-only is turning into a reality, and it’s a matter of time until video-only becomes the norm too.

Having a depth sensor in a phone changes everything. What before took specialized hardware is now accessible everywhere. What before fixed us to a specific location is now mobile.

The convergence of user behavior, increased reliance on computer vision systems, and mobile 3D-sensing technology is a killer combination.

The exciting thing is, Apple will turn depth-sensing technology into a commodity. Apple’s closed TrueDepth ecosystem, though, will limit what many developers can build. Those developers will, in turn, look at more powerful sensors, like lidar or Qualcomm’s new depth sensors, catalyzing the whole ecosystem.

In other words, Apple is putting depth sensors on the table for everyone to admire. They’re doing that, not with technology, but with a killer application of the technology, Face ID. They’re showing the way to more powerful apps.


But can we build on top of this?

It’s hard to predict what this combo will produce. Here are some ideas, but I’m sure we’ll see some surprising apps soon enough.

Photo editing and avatar galore

I’ll start with the obvious. Your Instagram feed and stories will become better than ever. Photos taken with the iPhone X will be able to render different depth-of-field effects. TrueDepth will also enable users to create hyper-realistic masks and image gestures.

Gesture-based controls

While these interfaces have been around for a long time, Apple just moved them to the world’s platform: mobile. It’s a matter of time before we see such apps cropping up in our smart TVs and other surfaces.

Security and biometrics

Apple has already demonstrated face-based authentication. I expect it will be massively adopted everywhere, from banks to airport controls to office and home access systems.

This technology could also aid KYC (Know Your Customer) systems, fraud prevention, and speedier identity checks. Another attractive field of application is forensics, something I first thought about after the Boston bombing of 2013.

Could you stitch together all the user-generated video of that day? Could you create a reliable 3D space investigators could use to analyze what happened? Depth-sensing cameras everywhere will make this a reality.

Navigational enhancements

Depth sensing is also critical in several navigational domains, from Augmented Reality (AR) and Virtual Reality (VR) all the way to the mapping needs of Autonomous Vehicles. A portable device that can enhance 3D maps could be a huge benefit for many self-driving car companies.

On top of that, layer on the rise of drone-based logistics and its complicated navigational issues. Right now many systems use lidar sensors, but it’s easy to imagine how they’d incorporate depth sensors.

For example, they could be used to identify where the recipient is located and verify their identity.

Drones – Photo by Ricardo Gomez Angel on Unsplash


People tracking for ads

People-tracking technologies are already used by authorities for “security” purposes. With a small twist, depth-sensing technology could also do customer tracking and efficient ad delivery in the physical world. Adtech is going to have a field day with this tech.

3D scanner

This is an obvious one. I would only add that Apple made 3D scanning not only portable but ubiquitous. It’s a matter of time before we see it used in many different environments, including active maintenance, Industrial IoT, and the real-estate industry.

Predictive health

One of the spaces this technology will revolutionize is eHealth and predictive medicine. Users will be able to scan physical maladies and send the results to their doctors.

In the same vein, it will also affect the fitness space, allowing for weight and muscle mass tracking.

Fitness – Photo by Scott Webb on Unsplash


Product detection

Depth sensing will have immense repercussions for computer vision systems. It allows them to add a third dimension (depth) and speed up object recognition.

This will bring real image tracking and detection to our phones. We should expect better real-time product detection (and buying), and improvements in fashion-related products.
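
One reason depth helps detectors: with the standard pinhole camera model, each pixel plus its depth reading back-projects into a real 3D point, so an object's true size and position come almost for free. A sketch with hypothetical camera intrinsics (fx, fy, cx, cy are example values, not any real device's calibration):

```python
def backproject(u, v, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with depth z into camera-space (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the image center, 1 meter away, lies on the optical axis:
center = backproject(320, 240, 1.0)
# A pixel 600 columns to the right at the same depth sits 1 meter to the side:
offset = backproject(920, 240, 1.0)
```

Running this over every pixel of a depth frame yields a point cloud, which is exactly the extra signal that lets a detector separate a product from its background by geometry rather than color alone.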

Mobile journalism

Last but not least, I’m intrigued by how journalists will use this technology. In the same way forensic teams might employ these systems, journalism can also benefit from them.

Mobile journalism (MoJo) is already an emerging trend but could be significantly enhanced by the use of 3D videos and depth recreations.


What should you be doing

We are on the verge of seeing an explosion of apps using this technology. Apple already has ARKit on the market. The Android ecosystem is moving to adopt ARCore, finally delivering on the promise of Project Tango.

Any organization should devote some time to thinking about how this technology can bring a new product to life. From Real Estate to Construction to Media, depth sensors are going to change how we interface with physical information.

Even if you don’t work with physical information, it’s worth thinking about how this technology enables us to bridge both realities.

This space is going to move fast. Now that Apple has opened the floodgates, the whole Android ecosystem will follow. I suspect that in less than two to three years we’ll see a robust set of apps in this space.

“According to the new note seen by MacRumors, inquiries by Android vendors into 3D-sensing tech have at least tripled since Apple unveiled its TrueDepth.”

Following the app ecosystem, we’ll see a crop of devices embedding depth sensor technology beyond the phone, and it will eventually be all around us.

If you enjoyed this post, please share. And don’t forget to subscribe to our weekly newsletter and to follow us on Twitter!

Should you be thinking of Quantum Computing?

The short answer is: it depends. If your organization deals with Deep Learning, Machine Learning, complex simulations, or optimizations, you should care. Quantum computing is one of those technologies we get hyped about, look into, frown at in disappointment, and then dismiss. The truth, though, is that you shouldn’t dismiss it. Not now.

In theory, Quantum Computing enables companies to run hard (exponentially scaling) problems orders of magnitude faster than current technologies. I say in theory because, in most cases, the mathematical algorithms aren’t there yet. That said, this is changing, and fast.

And when I say fast, I mean exponentially fast. A few weeks ago, Microsoft released its Quantum Computing toolkit. IBM released something similar last year, called IBM Quantum Experience (IBM Q), becoming the first company to offer Universal Quantum Computing in the cloud.

The news caught my attention. It surprised me that more and more technology companies are releasing Quantum simulators. I wondered: isn’t this technology far from being useful? The truth is, it is and it isn’t. So let me separate two things.

Quantum Computers

On one side, you have the Quantum Computer itself, the hardware. The speed of innovation on the hardware side is impressive. Right now there are close to nine or ten different approaches to building a Quantum Computer. Some are very recent, like the flip-flop qubit proposed by the University of New South Wales in Australia. Others are improvements over current technologies, like the loop-based technique from the University of Tokyo.

Hardware is still evolving. It reminds me of the early days of digital computers, with each company outperforming the others’ architectures. The significant difference, in this case, is the speed of innovation. The acceleration of the space will bring forward a viable (as in 1,000 to 4,000 qubits) Universal Quantum Computer within the next few years, not more.

It’s easy to dismiss the technology while it remains subpar next to traditional computing. There is an ongoing debate about how much faster Quantum Computers can operate, a debate that, so far, Quantum has been losing. I don’t expect that to remain the case for long, though.

Image: IBM’s Quantum x2000 chip

Quantum Algorithms

On the other hand, you’ve got Quantum Algorithms. This is the software abstraction that runs on top of the Quantum Computers.

Writing Quantum Algorithms is nothing like current programming. It’s the comeback of assembly language, but on steroids; it’s a trip down Universal-Turing-Machine memory lane.

Quantum Computing requires a complete rewrite of the underlying math of any classical algorithm. Not all algorithms are suitable for Quantum; Quantum developers need to craft new mathematical devices to make them workable. And when I say Quantum developers, I mean hardcore mathematicians and physicists.
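
A glimpse of what that rewrite looks like in practice: quantum programs manipulate vectors of complex amplitudes with linear algebra, not variables with imperative statements. This toy single-qubit simulator (an illustration only, not IBM’s or Microsoft’s actual toolkits) applies one Hadamard gate to put a qubit into equal superposition:

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix into a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# The Hadamard gate, the workhorse for creating superposition.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1.0, 0.0]            # qubit starts in |0>
state = apply_gate(H, state)  # now an equal superposition of |0> and |1>
probs = [abs(a) ** 2 for a in state]  # measurement probabilities
```

Every extra qubit doubles the length of the state vector, which is both why classical simulation hits a wall and why algorithms must be rethought from the math up.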

It all comes down to developing the right Quantum algorithm, something that isn’t easy or achievable by many. Here, though, is where the exciting space lies. Most technology leaders are investing in building their own Quantum Computers; meanwhile, startups are focusing on developing the right algorithms for potential customers. One example is the Vancouver-based 1QBit.

In 2014, two Singularity University alumni, Landon Downs (President) and Andrew Fursman (CEO), co-founded 1QBit. Their goal? To bring the right Quantum algorithms to bear on intractable problems. Their clients? Financial institutions like Dow Jones, pharma companies, technology moguls like Fujitsu, AI-heavy companies, and more.

Their focus is on developing the Quantum algorithms to solve expensive computational problems. Developing these takes time and effort, which is why it’s so important to start doing it now.

Both IBM and Microsoft are encouraging developers to play with their Quantum languages for a reason: there aren’t enough people qualified to be Quantum developers, and the need is becoming very real.

Quantum for what?

Three critical spaces are driving the field. The obvious one is cryptography. Our current infrastructure’s security relies on public-key cryptography, which rests on one of the toughest mathematical problems: factoring large numbers into primes.

In 1994, Peter Shor, an American professor of Applied Mathematics at MIT, developed a new factoring algorithm, now called Shor’s Algorithm. It takes advantage of the way Quantum Computing works, achieving considerable speedups. It wasn’t until 2001 that someone attempted to run it on a real Quantum Computer. Fast-forward to 2014, and scientists had already factored a six-digit number.
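
The trick Shor exploited is that factoring N reduces to finding the period r of f(x) = a^x mod N. The quantum computer only accelerates the period-finding step; the reduction itself is classical and can be sketched with brute force (illustrative only, since brute-force period finding is precisely the exponential part the quantum machine replaces):

```python
import random
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), found by brute force."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n):
    """Shor's classical reduction: turn a period into a factor of n."""
    while True:
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g, n // g  # lucky: a already shares a factor with n
        r = find_period(a, n)
        # An even period with a^(r/2) != -1 (mod n) yields a factor.
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            p = gcd(pow(a, r // 2, n) - 1, n)
            if 1 < p < n:
                return p, n // p

random.seed(1)
p, q = factor_via_period(15)  # recovers the factors 3 and 5, in some order
```

Swap the brute-force `find_period` for quantum period finding and you have Shor's algorithm, which is exactly why RSA-style cryptography is nervous about it.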

While still some years away, everyone is expecting a breakthrough soon. Such is the pace that the National Institute of Standards and Technology (NIST), the organization in charge of validating our most-used cryptographic algorithms, is already talking about post-quantum cryptography (PDF).

But crypto, while important, is just the tip of the iceberg. Artificial Intelligence, and more specifically Machine Learning and Deep Learning algorithms, are becoming ubiquitous too. These algorithms need not only massive amounts of data but tremendous computational speed. Such is the need that the industry is fine-tuning its chip designs to supply ever-faster training capacity to customers.

The quest for fast Deep Learning training is pushing investment in Quantum Computers too. It’s not about who uses AI anymore. It’s about who can re-train their models faster.

So far, the inroads into Quantum Deep Learning have been modest. The underlying mathematics behind most Artificial Neural Networks doesn’t play well with Quantum Computation. This is changing, though, and quickly.

Last but not least, optimization problems, for example in the logistics and operations industries, will also benefit. Calculating the perfect route to transport goods at the least cost is still an expensive problem for classical computers. Traditional optimizations exist, but they’re sub-optimal. As more companies move into e-commerce or ride-sharing services, being able to slash logistics costs is becoming critical.
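
To see why this is expensive, here's a brute-force route optimizer over four hypothetical stops with made-up distances. An exact search examines (n-1)! orderings: trivial at 4 stops, but roughly 87 billion at 15, which is the combinatorial explosion quantum-style optimizers are hoped to tame.

```python
from itertools import permutations

def best_route(dist):
    """Exact shortest round trip starting and ending at stop 0."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for order in permutations(range(1, n)):  # (n-1)! candidate orderings
        tour = (0,) + order + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Symmetric distance matrix for 4 hypothetical stops.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
cost, route = best_route(dist)
```

Real fleets use heuristics that give good-but-not-optimal tours; the quantum promise is closing that gap without the factorial blow-up.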

If we add Autonomous Vehicles (AVs) on top of this, the picture starts becoming clear. AVs require both faster Deep Learning algorithms and better-optimized routes, two problems Quantum Computers should be able to assist with within a few years.


Quantum Computing isn’t for everyone. It’s only suitable for certain mathematical problems. For those that are suitable, it will allow faster and more powerful computations. While the hardware isn’t there yet, it’s evolving at an exponential rate. The bottleneck isn’t the hardware per se, but the capacity to develop the right Quantum Algorithms. Developing such algorithms isn’t trivial and requires extensive mathematical knowledge, something that isn’t common.

Those organizations that start training their people in this space and start focusing on their own Industry Quantum Algorithms will gain a massive competitive advantage during the next five to ten years.

As a side note, I wonder if the current Deep Learning models can’t be applied to the task of developing new Quantum Algorithms. Just a final thought to get your mind reeling.
