Low-Hanging Fruit in Automating Biology

Building the digital bridge for biology


Progress in biology has been suffering from a lack of engineering, and I'm working on fixing that. When I say engineering, I mean Engineering™: building fault-tolerant systems that lay foundations of abstraction for the recursive building of more systems. Current biological methods are the antithesis of engineering. Where (most) programmers never have to concern themselves with the movement of electrons, biologists must, wasting time, effort, and progress as they wrestle with an abstraction layer that is too granular.

In drug discovery and development we care about moving a system from one state to another. At the end of the day, it's not the mechanism that's important, but the result. Many have lost sight of this, to their detriment. For example, in an organism (the system) suffering from illness (the state), we would like interventions that push the system to health. Success is measured by our ability to do so, not by the intellectually stimulating addition of new nodes to our map of the system.

There are many ways of trying to discover and/or build interventions that achieve this, but because we lack exact knowledge of the system, we must test a proposed intervention on a system and attempt to measure the effect it has on the system's state. We understand it so poorly that, ethically, we can't test on the desired system (humans), so we test on systems (mice, monkeys, etc.) that maybe, fingers crossed, tell us what the intervention will do.

Assuming you want to live a longer, healthier life (a position not shared by the dull and dreary), we need a way to perform this cycle rapidly. There are four variables to consider, sketched roughly in the snippet after this list:

  1. The speed with which you can administer an intervention and measure its effect
  2. The number of interventions you can try simultaneously
  3. The quality of your measurements
  4. The quality of your system
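
To make the trade-offs concrete, here is a toy back-of-envelope model of how these four variables combine. The function, the variable names, and the example numbers are all illustrative assumptions of mine, not measurements: the point is only that learning rate scales with parallelism and cycle speed, discounted by how much you can trust the measurement and the model system.

```python
# Toy model of the intervention-test cycle. All names and numbers are illustrative.

def informative_results_per_year(
    cycle_days: float,           # 1. time to administer an intervention and measure its effect
    parallel_arms: int,          # 2. interventions you can run simultaneously
    measurement_quality: float,  # 3. fraction of measurements you can actually trust (0-1)
    system_fidelity: float,      # 4. fraction of results that translate to the target system (0-1)
) -> float:
    cycles_per_year = 365.0 / cycle_days
    raw_results = cycles_per_year * parallel_arms
    return raw_results * measurement_quality * system_fidelity

# A slow study: one quarter per cycle, a handful of arms at a time.
status_quo = informative_results_per_year(90, 8, 0.6, 0.3)
# The same biology with faster cycles and 10x the parallelism.
automated = informative_results_per_year(30, 80, 0.8, 0.3)
print(status_quo, automated, automated / status_quo)
```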

The progress we've made on all four is close to zero. In vivo experiments look identical to those of a century ago. Scale is the same. Quality may have degraded. And while there are new systems (e.g. organoids, custom mouse models), their ability to generalize to humans has been unimpressive, unknown, or an outright failure. Human bias plagues all four, and costs from incumbents of a bygone era continue to reach spectacular heights.

This would be an uninspiring moan if hope weren't on the horizon. Even five years ago, building a computational layer on top of nature would have been impossible. To build a bridge between the digital and the physical requires being able to reason about and manipulate the real world. Precision hardware has dropped in price precipitously over the last decade (shout out to the 3D printer community), and machine learning has conquered much of image and signal processing. Motion control is still being assailed, and it is beginning to crack. We have all we need today to start picking the low-hanging fruit, and continued progress promises the ability to pick from the top of the tree.

Transistor Bio is working on picking this fruit. We're building the platform that makes biology programmable like our digital computers. The end user experiences an API: classes to instantiate, functions to call, and data returned. On our side, we're building that bridge. While others are focused on cloud labs and in vitro experiments, we're tackling in vivo. Robotics to house, maintain, and scale colonies of model organisms. Cognition to administer interventions and take measurements. Fusion of both to control nature.
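
As a sketch of what "biology as an API" could feel like: you instantiate a colony as a class, call functions to dose and measure, and get data back. This is hypothetical pseudocode of mine under those assumptions, not Transistor's actual interface; every class and method name below is invented for illustration.

```python
# Hypothetical sketch of a "biology as an API" interface.
# None of these classes or methods are Transistor's real API; they illustrate the idea.
from dataclasses import dataclass

@dataclass
class Measurement:
    subject_id: str
    assay: str
    value: float

class MouseColony:
    """Stand-in for a remotely housed, robotically maintained colony."""

    def __init__(self, strain: str, n: int):
        self.strain = strain
        self.subjects = [f"{strain}-{i}" for i in range(n)]

    def dose(self, compound: str, mg_per_kg: float) -> None:
        # In a real platform this would dispatch hardware; here it's a stub.
        print(f"dosing {len(self.subjects)} mice with {compound} at {mg_per_kg} mg/kg")

    def measure(self, assay: str) -> list[Measurement]:
        # Placeholder values; a real platform would return instrument data.
        return [Measurement(s, assay, 0.0) for s in self.subjects]

colony = MouseColony(strain="C57BL/6", n=24)
colony.dose("compound-X", mg_per_kg=10.0)
weights = colony.measure("body_weight_g")
```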

It's going to be a journey, but the destination promises a step change in our relationship with the natural world. You're invited to come aboard. Read more about Transistor here (robot videos included!) and email me here. We're hiring for any and all positions. If you think you have a useful skill set for this problem, I want to talk to you!