
The tree of life forms a concise snapshot of the history of, and linkages between, all life forms on Earth. Another way to look at the tree is to imagine what the world would have been like if history had played out differently.

If the evolutionary game were run again, how would the same fundamental building blocks have combined, given different selective pressures and exogenous shocks?

This research question exists at the intersection of evolutionary biology, engineering, and design, and over the last few years I have begun to ponder it in the context of robotics. We (my research group and colleagues from around the world) are seeking to answer this question with the aim of challenging existing notions of what robots are, how they are designed, built, and controlled, and how they interact with their environment.

We ran a workshop on this topic at the 2024 IEEE International Conference on Robotics and Automation (ICRA) in Yokohama; the workshop site, “(Re) Designing the Tree of Robotic Life: A game of alternative timelines”, is at this link. As a follow-up, we are now inviting papers on the topic for a special issue. Papers submitted to this collection will explore scenarios in which the shifting tides of evolutionary history have produced a new (robotic) tree of life, and their goal should be to propose radically new robot concepts suited to this reality. Submit your paper here.

Some of my musings on open questions in this thought-space are below:

How might robotics have developed under different evolutionary paradigms?

Consider a world where robotics emerged not from manufacturing needs but from ecological restoration. In this timeline, the first autonomous machines were designed to repair damaged ecosystems, leading to a completely different evolutionary trajectory. These robots would have evolved under selection pressures favouring long-term environmental stability over short-term productivity. We might see robots that photosynthesise, that decompose and recycle materials, that form symbiotic relationships with living organisms—essentially, robots that became part of the biosphere rather than operating outside it.

Or imagine robotics developing under a paradigm where individual capability mattered less than collective intelligence. In this scenario, we’d see the emergence of what one may call “superorganism robots”—vast distributed systems where no single unit is intelligent, but the collective exhibits emergent problem-solving capabilities far beyond any individual component. These systems might have evolved through mechanisms analogous to eusociality in insects, with specialised castes, division of labour, and collective decision-making that emerges from simple local rules.
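The idea that collective decision-making can emerge from simple local rules is easy to demonstrate concretely. Below is a minimal sketch (my own toy construction, not any specific published model): agents on a ring each hold a binary opinion and, at every synchronous step, simply adopt the majority view of themselves and their two neighbours. Noisy opinions settle into coherent blocks with no agent ever seeing more than its immediate neighbourhood.

```python
def majority_step(opinions):
    """One synchronous update: each agent adopts the majority opinion of
    itself and its two ring neighbours -- a purely local rule."""
    n = len(opinions)
    return [1 if opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def run(opinions, steps):
    for _ in range(steps):
        opinions = majority_step(opinions)
    return opinions

# Noisy initial opinions are smoothed into stable coherent blocks with no
# central coordinator -- each agent only ever polls its two neighbours.
print(run([0, 1, 0, 0, 1, 1, 1, 0, 1, 0], 5))  # [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
```

No single unit here is intelligent, and the stable block structure is a property of the collective, not of any rule an individual agent follows.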

Another fascinating alternative: what if robotics had evolved under constraints of extreme resource scarcity? Rather than the current trajectory of increasing computational power and material consumption, we might have seen robots that optimise for minimal energy use, that harvest ambient energy from their environment, that self-repair using locally available materials. These robots would be more like extremophiles—thriving in conditions where current robots would fail, but potentially less capable in resource-rich environments. BeamBots are an example of this type of thinking.

Perhaps most radically, consider a paradigm where robots evolved not as tools but as companions or even as a new form of life. In this timeline, the selection pressures would favour emotional intelligence, social bonding, and long-term relationship building over task efficiency. We might see robots that age, that form memories and attachments, that experience something analogous to growth and development. These robots wouldn’t be designed for obsolescence but for lifelong relationships, fundamentally changing our relationship with technology.

 

What is a robot? How are robots built and controlled, and how do they interact with their environments? How could they be different? Could they be better?

The question “What is a robot?” reveals more about our assumptions than about any fundamental truth. We’ve inherited a definition shaped by industrial automation: a robot is a machine that performs tasks autonomously. But this definition is arbitrary, born from a specific historical context. What if we instead defined a robot as any system that maintains its own goals while adapting to environmental changes? This broader definition would include biological organisms, ecosystems, and even certain social systems—challenging the boundary between natural and artificial.
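The broadened definition above, a system that maintains its own goals while adapting to environmental changes, can be made concrete with a few lines of feedback. This sketch (a toy proportional controller of my own construction, with made-up numbers) shows the minimal case: the environment perturbs a state every step, and the system nudges it back toward a setpoint.

```python
def homeostat(state, setpoint, disturbance, gain=0.5, steps=40):
    """A minimal goal-maintaining system: each step the environment
    perturbs the state, then the system applies a proportional
    correction back toward its goal. Under the broadened definition,
    this loop -- like a thermostat or a cell regulating pH -- already
    qualifies as 'robotic' behaviour."""
    for _ in range(steps):
        state += disturbance                 # environmental change
        state += gain * (setpoint - state)   # adaptive corrective response
    return state
```

One instructive wrinkle: under a constant disturbance, a purely proportional correction settles at a steady offset from the goal (here, `homeostat(0.0, 1.0, 0.2)` converges to 1.2 rather than 1.0), which is exactly the kind of "good enough" equilibrium biological regulators exhibit.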

Current robots are built from rigid materials because we inherited manufacturing techniques from the automotive and aerospace industries. But consider robots built from living materials: mycelial networks that grow and adapt, bacterial colonies that compute through chemical signalling, or hybrid systems where silicon and cells form integrated circuits. These aren’t science fiction—we’re already seeing early prototypes. A robot could be a tree that moves, a slime mould that solves mazes, or a coral reef that responds to environmental data.

Control architectures are equally constrained by historical accident. We use centralised or hierarchical control because that’s how factories and militaries are organised. But nature shows us alternatives: the distributed intelligence of a flock of starlings, the emergent behaviour of ant colonies, the self-organisation of cellular slime moulds. Imagine robots controlled not by algorithms but by chemical gradients, not by processors but by phase transitions, not by programs but by evolutionary dynamics. These systems might be less predictable but more robust, less optimised but more adaptable.
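Control by gradients rather than programs is simpler than it sounds. The sketch below (a hypothetical scenario with an invented "chemical source" field) is a greedy hill-climber in the spirit of bacterial chemotaxis: the entire controller is "move to the strongest neighbouring signal", with no plan and no world model.

```python
def follow_gradient(field, x, y, steps):
    """Greedy ascent on a locally sampled scalar field: at each step the
    agent moves to whichever neighbouring cell (including its own) has
    the strongest signal. No map, no planner -- just local sensing."""
    for _ in range(steps):
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        x, y = max(neighbours, key=lambda p: field(*p))
    return x, y

# Hypothetical chemical source at (5, 3); signal falls off with distance.
source = (5, 3)
field = lambda px, py: -((px - source[0]) ** 2 + (py - source[1]) ** 2)
print(follow_gradient(field, 0, 0, 20))  # (5, 3)
```

The trade-off flagged in the text shows up immediately: such a controller is robust and model-free, but it will happily get stuck on any local maximum of the field.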

Interaction with environments is perhaps the most limiting aspect of current robotics. We design robots to operate in environments we control, with predefined interfaces and predictable conditions. But what if robots were designed to thrive in chaos? Consider robots that adapt their morphology in real-time, growing new appendages when needed, shedding unnecessary parts, changing their fundamental structure based on environmental demands. Or robots that don’t just sense their environment but become part of it—robots that modify their surroundings to create new possibilities, that co-evolve with their habitats, that blur the line between agent and environment.

Could they be better? The question assumes we know what “better” means. Better at what? Efficiency? Resilience? Beauty? Meaningfulness? Current robots excel at efficiency in controlled environments but fail catastrophically when conditions change. Biological systems trade efficiency for robustness, specialisation for adaptability. Perhaps “better” robots would be those that can’t be optimised for a single metric but instead maintain multiple capabilities, that can fail gracefully, that can learn and evolve rather than just execute programs. The best robot might be one that surprises us, that exceeds its specifications, that becomes something we didn’t design it to be.

 

Can we better define the “Phylogenetic Tree of Robotic Life” as it is currently, and also consider its previously unimagined, “could-have-been” (or, perhaps, “yet-to-be”) branches?

Absolutely. The current robotic tree has a clear root: the industrial manipulator arm, born from the marriage of numerical control and hydraulic actuators in the 1950s. From this single ancestor, we can trace distinct lineages. The industrial lineage gave rise to assembly robots, welding robots, painting robots—all sharing the same fundamental architecture: rigid links, precise positioning, repeatable tasks. The mobile lineage emerged later, splitting into ground vehicles (from AGVs to autonomous cars), aerial vehicles (from early drones to sophisticated quadcopters), and aquatic vehicles. The humanoid lineage represents a fascinating case of convergent evolution—multiple independent attempts to replicate human form, each with different underlying technologies.

But this tree is incomplete. We’re missing entire phyla. Consider the “soft robotics” branch that’s only now emerging—what if this had been the dominant form from the beginning? We might have a completely different tree where rigid robots are the oddity, where compliance and adaptability are the default, where robots are more like octopuses than machines. Or the “swarm robotics” branch: what if we’d invested in collective intelligence from the start rather than individual capability? We might have cities maintained by clouds of micro-robots, each individually simple but collectively capable of complex construction, repair, and adaptation.

The “could-have-been” branches are particularly fascinating. What if cybernetics had won over artificial intelligence in the 1960s? We might have robots that are fundamentally about feedback and control rather than representation and planning—robots that don’t “think” but “behave,” that don’t model the world but respond to it directly. What if biomimetics had been the primary design philosophy right from the beginning? We might have robots that grow, that reproduce, that evolve—not as metaphors but as actual mechanisms.

The “yet-to-be” branches are where imagination becomes crucial. I envision a branch of “ecological robots”—machines designed not for human tasks but for ecosystem functions, that become part of natural cycles. Or “temporal robots”—systems that exist across multiple timescales simultaneously, that make decisions based on geological rather than human timeframes. Or “quantum robots”—not just using quantum computing but existing in quantum superposition, exploring multiple possibilities simultaneously until observation collapses them into a single reality.

Perhaps most intriguing are the branches that represent fundamental paradigm shifts. What if robots evolved not from machines but from information? We might see robots that are pure software, that exist in virtual spaces, that can instantiate themselves in physical form when needed but aren’t bound to it. Or robots that evolved from social systems rather than mechanical ones—emergent intelligences that arise from human-robot interactions, that exist in the space between individuals rather than as individual entities. Alien Earth does a great job of imagining alternate forms of humanoids: synths, cyborgs, hybrids; it also explores some creative thinking about alternate phyla systems of interaction and hierarchy in novel alien lifeforms. 

Mapping these alternative trees isn’t just a fun after-dinner chat or academic exercise—it reveals the contingency of our current path. Every branch represents a choice, a constraint, a historical accident. By exploring the unmapped territories, we can see not just what robots are, but what they could become.

 

Can the history of biological evolution inform future engineering design?

Not just inform—transform. Biological evolution has solved problems that engineering has barely begun to address. Consider the problem of self-repair: biological systems heal themselves continuously, from molecular damage repair to tissue regeneration. We’re only now beginning to develop self-healing materials, but biology has been doing this for billions of years. The mechanisms are there: DNA repair enzymes, immune systems, regenerative capabilities. We can learn not just the principles but the actual mechanisms.

Take the problem of distributed control. A single cell coordinates thousands of simultaneous processes without a central processor. A brain processes information through billions of neurons, each making local decisions, yet producing coherent behaviour. Our current approach—centralised processors with hierarchical control—is fundamentally different. What if we designed robots with truly distributed intelligence, where each component makes autonomous decisions based on local information, and global behaviour emerges from local interactions? This isn’t just a different architecture—it’s a different philosophy of control.
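One classic illustration of global behaviour emerging from local decisions is distributed averaging consensus. In this sketch (a hypothetical four-node sensor ring with made-up readings), each node repeatedly replaces its value with the mean of itself and its neighbours; the network converges on the global average even though no node ever computes it.

```python
def consensus_round(values, neighbours):
    """One round of distributed averaging: each node replaces its value
    with the mean of itself and its neighbours, using only local
    information."""
    return [(values[i] + sum(values[j] for j in neighbours[i]))
            / (1 + len(neighbours[i]))
            for i in range(len(values))]

# Hypothetical 4-node ring network with very different initial readings.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
vals = [0.0, 0.0, 12.0, 0.0]
for _ in range(50):
    vals = consensus_round(vals, ring)
# All nodes converge to the global mean (3.0) without any central processor.
```

Because every node's update weights are symmetric, the global mean is conserved at each round, which is why the agreement point is exactly the average of the initial readings.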

Biological evolution has also solved the problem of operating in uncertainty. Living systems don’t have perfect sensors or complete models of their environment. Instead, they use strategies like redundancy (multiple sensors, backup systems), robustness (graceful degradation rather than catastrophic failure), and exploration (trying multiple approaches simultaneously). Our robots, by contrast, often fail completely when conditions deviate from expectations. We could design robots that, like biological systems, maintain functionality even when components fail, that adapt to unexpected conditions, that explore their environment rather than assuming they know it.
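Redundancy-based graceful degradation has a very direct engineering analogue: vote over redundant sensors instead of trusting one. The sketch below (invented readings, hypothetical rangefinders) uses a median vote, so any minority of dead or wildly wrong sensors is simply outvoted rather than crashing the system.

```python
def fused_reading(sensors):
    """Median-vote fusion over redundant sensors. 'None' marks a dead
    sensor; outliers and failures are outvoted as long as most sensors
    still work, so the estimate degrades gracefully rather than failing
    catastrophically."""
    working = sorted(s for s in sensors if s is not None)
    mid = len(working) // 2
    return working[mid] if len(working) % 2 else (working[mid - 1] + working[mid]) / 2

# Six redundant rangefinders: one dead, one returning garbage.
print(fused_reading([2.1, 2.0, None, 99.9, 2.2, 2.1]))  # 2.1
```

A mean would have been dragged far off by the 99.9 reading; the median is the cheap, biological-flavoured choice of robustness over optimality.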

Perhaps most importantly, biological evolution shows us how to balance competing objectives. Engineering often optimises for a single metric—speed, efficiency, precision. But biological systems must balance multiple objectives simultaneously: energy efficiency and performance, growth and maintenance, exploration and exploitation. Trade-offs. This approach leads to solutions that are “good enough” across many dimensions rather than optimal in one. For robots operating in complex, changing environments, this multi-objective approach might be more valuable than single-metric optimisation.
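The multi-objective alternative to single-metric optimisation is usually formalised as Pareto dominance: keep every design that no other design beats on all objectives at once. This sketch (with invented speed/robustness scores for hypothetical designs) computes that set of defensible trade-offs.

```python
def dominates(a, b):
    """a Pareto-dominates b if a is at least as good on every objective
    and strictly better on at least one (higher is better here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the designs no other design dominates: the set of
    'good enough across many dimensions' trade-offs, rather than a
    single optimum on one metric."""
    return [d for d in designs if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical (speed, robustness) scores for four candidate designs.
designs = [(9, 2), (7, 5), (4, 8), (3, 3)]
print(pareto_front(designs))  # [(9, 2), (7, 5), (4, 8)]
```

Note that the fast-but-fragile and slow-but-robust designs both survive; only the design that loses on every axis is discarded. That is the evolutionary logic in miniature.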

But we can go beyond principles to specific mechanisms. Consider how evolution has solved locomotion: from the undulating motion of snakes to the flight of birds to the swimming of fish. Each represents millions of years of optimisation for specific environments. We can study not just the final forms but the evolutionary paths that led to them, understanding not just what works but why it works and how it emerged. This gives us not just solutions but solution strategies.

Or consider the evolution of intelligence itself. Biological intelligence didn’t emerge from a single breakthrough but from a series of incremental innovations: nervous systems, centralised processing, learning mechanisms, social cognition. Each built on previous capabilities. We might design robot intelligence the same way—not as a single system but as a series of layered capabilities, each enabling the next, each solving a specific problem that made the next innovation possible.

The most radical possibility: what if we don’t just learn from biological evolution but actually use it? Evolutionary robotics already exists—using genetic algorithms to evolve robot designs. But we could go further: robots that evolve in real-time, that adapt their morphology and behaviour through mechanisms analogous to biological evolution, that form ecosystems where different robot “species” co-evolve. These wouldn’t be robots designed by humans but robots that design themselves through evolutionary processes, potentially discovering solutions we never would have imagined.
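The genetic-algorithm core of evolutionary robotics fits in a few lines. The sketch below is a deliberately minimal truncation-selection GA of my own construction, with a toy fitness function standing in for what, in real evolutionary robotics, would be a physics-simulator evaluation of a gait or morphology.

```python
import random

def evolve(fitness, genome_len=4, pop_size=30, generations=60, seed=1):
    """Minimal elitist GA: keep the better half of the population each
    generation and fill the rest with Gaussian-mutated copies of it.
    Selection pressure alone discovers good genomes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # survivors (elitism)
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for a simulated gait evaluation: fitness peaks when every
# parameter equals 0.5. A real system would score a simulated robot here.
fitness = lambda genome: -sum((g - 0.5) ** 2 for g in genome)
best = evolve(fitness)
```

Swapping the toy fitness for a call into a simulator is the entire conceptual distance between this sketch and real-time morphological evolution; everything else is a matter of scale.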

Biological evolution also teaches us about failure. Most evolutionary experiments fail. Most mutations are harmful. Most new species go extinct. But this failure is productive—it’s how evolution explores possibilities, how it finds unexpected solutions. Our engineering culture often treats failure as something to avoid, but evolution shows us that failure is essential for innovation. Perhaps we should design robots that are meant to fail in interesting ways, that explore failure modes as part of their design process, that use failure as a source of information rather than just a problem to solve.

 

How can engineering design be augmented with elements of unconstrained, artistic freedom?

Art and engineering have been artificially separated, but they’re fundamentally the same activity: creating something that didn’t exist before. The difference is that engineering asks “Will it work?” while art asks “Is it meaningful?” Combining both questions—”Will it work and is it meaningful?”—produces something neither discipline could create alone.

Consider what happens when we design robots not just for function but for expression. A robot that moves not efficiently but beautifully—what does that teach us about motion? A robot that makes decisions not optimally but poetically—what does that reveal about choice? A robot that exists not to serve but to provoke—what does that show us about purpose? These aren’t just aesthetic exercises; they’re experiments in possibility, explorations of what robots could be beyond their current limitations.

Artistic freedom in robotics means questioning fundamental assumptions. Why must robots be useful? Why must they be efficient? Why must they be predictable? Art gives us permission to explore the useless, the inefficient, the unpredictable. A robot that does nothing but watch the sunset. A robot that moves in ways that waste energy but create beauty. A robot that behaves randomly, that surprises itself. These explorations might seem frivolous, but they reveal possibilities that functional thinking would never discover.

Think about how different artistic movements could transform robotic design. Surrealism: robots that operate on dream logic, that make connections through association rather than causality, that exist in impossible spaces. Minimalism: robots stripped to their essence, where every component serves multiple purposes, where simplicity becomes a form of complexity. Expressionism: robots that externalise internal states, that make emotions visible, that communicate through movement and form rather than just function.

Or consider specific artistic practices. Performance art: robots that exist only in the moment of interaction, that are defined by their relationship with observers rather than their internal structure. Installation art: robots that are inseparable from their environment, that create spaces rather than just operate within them. Conceptual art: robots that exist primarily as ideas, that challenge our definitions, that make us question what we think we know.

Artistic freedom also means embracing failure and imperfection. Engineering seeks to eliminate bugs; art often finds beauty in them. A robot that glitches in interesting ways, that produces unexpected behaviours, that fails gracefully—these aren’t just technical challenges but aesthetic opportunities. The Japanese concept of wabi-sabi—finding beauty in imperfection, impermanence, and incompleteness—could transform how we think about robotic design. What if robots were meant to age, to wear, to change in ways we can’t predict?

Perhaps most radically, artistic freedom means designing robots that challenge us rather than serve us. A robot that asks difficult questions. A robot that refuses to obey. A robot that has its own agenda. (Alien Earth, Short Circuit, Bicentennial Man…) These aren’t just thought experiments but real possibilities. What would it mean to create a robot with artistic agency, that makes creative decisions, that produces work that surprises even its creators? This would require rethinking not just design but our relationship with the things we create.

The intersection of art and engineering also reveals new forms of collaboration. Artists and engineers speak different languages, have different values, ask different questions. But in that difference lies possibility. An artist might ask “What if this robot could feel?” and an engineer might respond “Here’s how we could implement that.” The result isn’t just a robot that simulates emotion but a robot that challenges our understanding of what emotion is, what feeling means, what it is to be alive.

Ultimately, artistic freedom in robotics means recognising that robots are cultural objects, not just technical ones. They exist in a social context, they carry meanings, they participate in conversations about technology, humanity, and the future. By bringing artistic sensibilities to robotic design, we can create robots that don’t just do things but mean things, that don’t just function but communicate, that don’t just serve but transform. The most important robot might not be the most useful one but the one that makes us think, that makes us feel, that makes us question what we thought we knew about machines, about intelligence, about ourselves.