Quietly cooking at the edges of the deep learning hype - hype that’s seen hundreds of “AI” companies solving some of the world's most useful problems, like lead scoring - has been the next era of automation. Robotics, a problem previously insurmountable because it marries real-world physics with software that must operate in that world, is briskly becoming solvable under specific constraints. A market opportunity exists for the entrepreneur who understands these constraints and builds a company that obeys these necessary, but by no means sufficient, parameters. The three guardrails can be summarized as: reduction, meticulousness, and repeatability.
Reduction: reduce surprise
“Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is.”
With any learning problem, the most important step is reducing the variability in the inputs, which lets the mapping to outputs be learned quickly. This is deep neural networks' greatest asset: overcoming the curse of dimensionality (how they manage to do so is mostly explained as: ¯\_(ツ)_/¯). When choosing a task, it is preferable that the environment has no impact on the task at hand - the more of its environment a robot has to understand, the harder it will be to accomplish its goal. The push for autonomous vehicles demonstrates this problem firsthand: despite billions of dollars and millions of engineering hours, the problem has yet to be suitably solved, mostly because the environmental state space is too large.
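To make the state-space point concrete, here is a back-of-the-envelope sketch in Python. The dimension counts and discretization resolution are illustrative assumptions, not measurements of any real system; the only point is how fast coverage requirements explode.

```python
# Rough count of distinct situations a policy must cover when each
# observed dimension is discretized into `resolution` bins.
# Dimension counts below are illustrative assumptions, not measurements.

def state_space_size(num_dimensions: int, resolution: int = 10) -> int:
    """Number of grid cells needed to cover the input space at this resolution."""
    return resolution ** num_dimensions

# A fixed workcell: only a handful of pose/force dimensions actually vary.
workcell = state_space_size(num_dimensions=6)    # 10**6 cells

# An open road: weather, lighting, other agents, geometry, signage, ...
open_road = state_space_size(num_dimensions=20)  # 10**20 cells

print(f"workcell cells:  {workcell:,}")
print(f"open road cells: {open_road:,}")
print(f"ratio:           {open_road / workcell:.1e}")  # ~1e14x more to cover
```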
One may imagine that an early practical deployment will be replacing slow, intricate labor on an assembly line. The environment is static and the inputs are the same for each task. The latent notions a robotic arm must learn are how the material behaves when the arm interacts with it, how to plan its motions, and how to measure the progress of its task. Recent papers indicate that a team focused on such a constrained space can successfully accomplish this goal.
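A minimal sketch of how those three latent notions might be wired into one perceive-plan-act loop; every name here is a hypothetical placeholder for illustration, not a real robotics API.

```python
# Hypothetical decomposition of a fixed-workcell assembly task into the
# three learned components named above. Placeholder names, not a real API.
from typing import Any, Callable

class AssemblyCell:
    """Fixed workcell: same part, same fixture, same lighting every cycle."""

    def __init__(
        self,
        material_model: Callable[[Any], Any],        # how the material responds to contact
        motion_planner: Callable[[Any, Any], Any],   # how to move the arm given that response
        progress_estimator: Callable[[Any], float],  # how far along the task is (0..1)
        actuator: Callable[[Any], Any],              # executes a trajectory, returns new observation
    ):
        self.material_model = material_model
        self.motion_planner = motion_planner
        self.progress_estimator = progress_estimator
        self.actuator = actuator

    def run_cycle(self, observation: Any) -> None:
        """One task cycle: estimate progress, predict, plan, act, repeat until done."""
        while self.progress_estimator(observation) < 1.0:
            predicted_response = self.material_model(observation)
            trajectory = self.motion_planner(observation, predicted_response)
            observation = self.actuator(trajectory)
```

Because the environment never changes, only these three components need to be learned - nothing in the loop has to model the wider world.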
Meticulousness: labor intensity due to copious small actions
Surviving as a robotics startup in the early years will require maximizing the return on your effort. Hardware is expensive, sales cycles are always longer than you want, and fine-tuning algorithms to specific use cases is slow and computationally costly. High precision, speed, hazard potential, and tediousness cover most of the dimensions that drive labor costs, whether because they shrink the labor supply or reduce its efficiency. Try to automate a task that doesn’t maximize some combination of these factors and you risk never gaining market traction - the fixed costs of your solution will not beat simply purchasing labor, or the margin will be too small for a factory manager to care.
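One way to make that screening explicit is a crude scoring heuristic over those four dimensions. The factor weights, task names, and ratings below are illustrative assumptions, not calibrated data.

```python
# Crude task-screening heuristic: rate each task 0 (irrelevant) to 1 (extreme)
# on the four labor-cost factors, then rank by weighted sum.
# All numbers are illustrative assumptions.

TASK_FACTORS = ["precision", "speed", "hazard", "tedium"]

def labor_pain_score(task, weights=None):
    """Score 0..1: how strongly these factors inflate the cost of human labor."""
    weights = weights or {f: 1.0 / len(TASK_FACTORS) for f in TASK_FACTORS}
    return sum(weights[f] * task.get(f, 0.0) for f in TASK_FACTORS)

candidates = {
    "deburring cast metal parts": {"precision": 0.7, "speed": 0.5, "hazard": 0.8, "tedium": 0.9},
    "packing irregular produce":  {"precision": 0.3, "speed": 0.8, "hazard": 0.2, "tedium": 0.9},
    "arranging retail displays":  {"precision": 0.2, "speed": 0.2, "hazard": 0.1, "tedium": 0.4},
}

for name, task in sorted(candidates.items(), key=lambda kv: -labor_pain_score(kv[1])):
    print(f"{labor_pain_score(task):.2f}  {name}")
```

A task that scores low across the board is exactly the one where buying labor stays cheaper than buying your robot.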
Repeatability: over and over and ov–
The most compelling reason to move to robotics is instantaneous scale, both in the number of “workers” and in their capacity to work. Too many startups focus on niche applications, usually drawn in by higher absolute margins. This is the wrong direction. Once your deployment is finalized, scale is what will make or break you. You can develop robotics to carve marble, and that’s really frickin cool. But your fixed costs are the same, and you want to capture the largest swath of economic value you can. By browsing U.S. Bureau of Labor Statistics data you can quickly identify the markets that are largest in size and least productive (in the economic sense). Textiles and food production seem ripe for the taking: they both offer large market potential, are extremely low in productivity, and have low task variability.
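The screening logic might look like the sketch below. The figures are placeholders, not actual BLS numbers; the only point is the ranking rule - big market, low productivity.

```python
# Rank industries by market size divided by labor productivity.
# The values are placeholders for illustration; substitute real BLS series
# before trusting the ranking.

industries = {
    # name: (market size, output per labor hour), in arbitrary placeholder units
    "textiles":        (100.0, 1.0),
    "food production": (300.0, 1.5),
    "marble carving":  (0.5, 0.8),
}

def opportunity(size, productivity):
    """Bigger market and lower productivity imply a larger automation opportunity."""
    return size / productivity

ranked = sorted(industries.items(), key=lambda kv: -opportunity(*kv[1]))
for name, (size, prod) in ranked:
    print(f"{name:16s} opportunity={opportunity(size, prod):7.1f}")
```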
In summary:
Finding the target industry is straightforward if you ask a set of questions: