In this live essay I add some more details and reflections on a panel discussion, hosted by Edinburgh Futures Institute, among five experts on what it means to ‘live well’ with robots. The conversation explored the technical, moral, social and economic dimensions of the oft-promised ‘robot revolution’.

Watch the panel discussion here.

Propositions – TL;DR

  • The robotics revolution is already here. It is invisible because it happens in mundane contexts, and we miss it by focusing on dramatic embodied physical forms rather than recognising the autonomous systems already around us.
  • Robots are embodied systems that couple artificial intelligence with physical intelligence as integrated wholes; enabling safe interaction through soft robotics makes failures manageable rather than catastrophic.
  • Humanoid robots are lazy design work that distracts from real utility; we must focus on what jobs robots should perform rather than distracting ourselves with questions about consciousness or intimate relationships.
  • We are morally and ethically compelled to deploy robots for dangerous, dull, and dirty jobs; without governance ensuring displaced workers have meaningful alternatives, robots will amplify existing inequalities by benefiting those who already hold economic power.
  • Today’s decisions about robotic development will shape technological pathways for future generations, making careful governance essential to avoid lock-in effects that constrain long-term possibilities for human flourishing within environmental limits.

 

Will human beings ever embrace robots as a daily part of our social and economic lives?

Our lives already embrace robots, though often in ways we do not notice. The robotics revolution is here, but it happens in mundane contexts, not the dramatic scenarios of science fiction. Robots work in factories on repetitive tasks, in warehouses handling fulfillment, and in millions of homes as vacuum cleaners. Humans are social animals, and as such we form emotional bonds with these machines. These aspects of robot design and human-robot interaction were the main focus of the panel discussion, hosted at the Edinburgh Futures Institute, which informed this essay.

I see robots as embodied systems that couple artificial intelligence with physical intelligence; the two are fundamentally integrated, not separate, and this interplay is what defines a robot. Autonomous systems are around us all the time, but we take a biased approach, focusing only on embodied physical forms. The forms robots actually take are just not terribly interesting to the general public, which is why we miss the revolution happening around us.

Significant challenges remain. Trust matters: if people do not trust robots, they will not adopt them. I see a gap between expectations, fed by science fiction narratives, and reality. We are far from general-purpose robot butlers that can do everything. The real question is not whether robots will be embraced, but which robots will make it into daily life, and how business models, societal forces, and user needs, not just technological capability, will shape them.

 

What would it mean to live well with robots?

Living well with robots means moving away from the fantasy of all-purpose humanoids toward task-specific robots designed for particular domains. Rather than creating robots that can do everything, we should define specific tasks robots are good at and design robots for those purposes. This system-level specification requires co-creation with users and stakeholders, treating safety, ethics, moral questions, confidence, and trust as community decisions.

I frame living well through the lens of doughnut economics, which balances environmental limits with social foundations for human flourishing. Robots should help us stay within environmental limits while providing a social foundation on which people can live. This means robots that address real problems: agriculture, where we can increase productivity while reducing pesticide reliance; dangerous jobs like nuclear decommissioning and offshore platform work; healthcare; and environmental monitoring. My vision is for assistive technologies that enhance human capabilities rather than replace humans, where we are improved by the technology rather than replaced by it.

My specific research area focuses on softness and soft robotics as a new way in which robots can come into people’s lives. This approach changes the proximity of machines and people and how we close the gap between the two. Soft robotics makes interactions safe: when these systems fail, they do so in predictable, bounded ways, so the consequences are manageable. The approach is application-dependent, but there is a common theme in how one designs systems for interaction – specifically, for safe interaction. These systems are more than just artificial intelligence; they include physical form and embodiment. At the personal level, soft robotics is all about proximity and interaction. At the outer edge, it is about how robots can change the economics of labor and where they can physically be, providing safety and changing society on a much grander scale.

I approach problems by looking to evolutionary biology and nature. Biological systems have been around for some 3.7 billion years, and a great many systems have evolved in that time. I like to explore the interplay between the technologies that people work on in our research labs, where we provide options, and the defined problems that exist in the marketplace, which we need to solve in some way. The intersection between those two – options and problems – is important. We need to keep enough energy (money and enthusiasm) in the ecosystem to develop new technologies, and we need to make sure those new technologies are coupled with solving real and important problems.

Living well requires us to be cautious about how robots are directed, developed, and deployed. My goal is finding technologies that work well in specific domains and which are also adaptable: systems that enhance our lives rather than creating dependency or distraction.

 

How can we ensure that robot design and development is driven by and aligned with our fundamental human needs, such as security and social connection?

Ensuring that robots align with human needs requires fundamental shifts in how we approach design and development.

  • Co-creation and participatory design: Start with problems, not solutions. We need to go and talk to people whilst holding a blank sheet of paper and ask, “What are the problems that you have? Let’s not worry about the technology to begin with.” This means true co-creation, where end users, like care home residents, are asked what they want, not just what technologists think they need. When you put technology into the hands of designers, artists, or people who democratise access to it, so that people can stack technologies together and build with them, you get really interesting outcomes that you do not expect. Engineers are used to designing things to a specification: you do customer discovery, you come up with the specification, and you design something that meets it. I think the most interesting interactions will come where you do not really know what is going to emerge.
  • Cultural sensitivity: Robot designs that work in one culture, like South Korea, may be completely off-putting in another, like Scandinavia. Cultural considerations must be built into the design process from the beginning.
  • Transparency and avoiding deception: Deception is the main ethical barrier. People should never be deceived into thinking they are interacting with a human. Transparency standards are being developed with five levels, from basic user manuals to interactive systems that can explain their decisions, following frameworks like the IEEE standard for transparency in autonomous systems. These standards are essential for designing with trust and transparency at the fore.
  • Governance and regulation: We should learn from successful regulatory models like aviation, with type certification and shared safety information, rather than the laissez-faire approach of the internet era. This includes thinking about liability, risk, and accountability frameworks for embodied robotic systems. For people in oil and gas who want to use robots for particular jobs, one of the limiting factors in deployment, from a hardware company’s perspective, is who takes on the risk and the liability when something goes wrong: when somebody is killed, something is damaged, or something gets broken. There is hugely underexplored research to be done on the laws governing embodied robotic systems and what a legal framework would ultimately look like. Much of the systems thinking should be about risk and liability, and those two things have not really been thrashed out, especially not in terms of the business models behind hardware embodied robotics companies.
  • Business model considerations: The business model shapes the technology. We need to think about who pays and who benefits, ensuring that economic incentives align with solving real problems rather than just monetising user interactions. Consumer electronics is not the only model. We should consider approaches like aviation regulation, where safety and shared information matter more than rapid deployment.
  • Avoiding technological sticking plasters: Care home robots illustrate the issue. While care robots may have appropriate applications, we must distinguish between augmenting care and replacing human relationships because of economic pressures. Too often we try to put a technological sticking plaster over a socioeconomic problem, because that is the way tech people solve problems; we need to change the discussion to bring in a much wider group of people. If the only way you can perform your job and go about your daily business is to have a robot look after your grandmother, then there is something wrong with the economy, not necessarily a technological problem. We could throw a lot of money at solving that problem, but the question is always whether we are solving the right problem.

 

Where are robots already transforming our lives in ways we may not recognise, and which of our ideas about robots remain mere science fiction fantasies?

Robots are already transforming our lives in largely invisible ways. The robotics revolution is here, but we do not see it because it is not interesting. Robots are doing boring things: building cars, handling warehouse fulfillment, packing shopping bags. These mundane applications have real economic impacts, displacing workers and changing labor markets, but they do not make for compelling narratives.

The most common domestic robot is the vacuum cleaner. Millions exist worldwide, and people form emotional attachments to them. In industrial settings, robots perform dangerous, dull, and dirty jobs in factories, warehouses, and specialised environments.

Some science fiction fantasies should, I believe, remain distant. We are quite far from the general-purpose humanoid servant: the all-singing, all-dancing humanoid robot with full cognitive capabilities that can do everything. Sex robots, however, are just around the corner, though these are essentially sex dolls with minimal animatronics and AI, not the sophisticated companions of fiction. The robots will be ready far before humans and human societies are ready for the robots. Much of Western society is innately dystopian and destructive, and the development of humanoid sex robots is the epitome of that dystopia.

I routinely challenge the notion of a robot as a hard white plastic humanoid device. That is the main direction robotics has taken, and it is the image that fills the public consciousness: when we ask people “What is a robot?”, that is what they answer with. I think these white plastic humanoid robots represent extraordinarily unimaginative thinking about the futures of robots: what we could do with them, what they could look like, what their embodiments could be, how they are built. Personally, I do not think we should make humanoid robots. It is lazy design work, and I do not think humans are ready for humanoids. The main case for humanoid robots is interaction with environments designed for people – hands that can operate handles, that sort of approach. Fine; we need to design robotic systems that interact with our everyday objects, but we can do that without making humanoid robots. I think that thought-leaders and tech-innovators should focus on what we want robots to do, what jobs we want them to perform, and where we want them to be, rather than getting mired in questions about robot consciousness or the possibility of intimate relationships. The primary driver for those conversations is the lazy choice of a bilaterally symmetric humanoid, a choice that distracts from the real utility robotics could deliver on the wider-ranging problems we face as a society.

Science fiction narratives, often dystopian, shape expectations, but I believe the reality is that most impactful robots will be specialised tools rather than human-like companions. Robotics is not a single technology; it is a series of technological solutions to a range of different problems. It brings in artificial intelligence, mechatronics, and software development – it has many different facets. The exciting developments happen where AI and robotics intersect, giving robots vision, language capabilities, and the ability to interact in our world, but these are still far from the general intelligence we read about in science fiction. When a disembodied piece of software generates incorrect outputs, that is a very different circumstance from when a robot that is taking care of a cat produces errors: some consequences are physical and cannot be undone with Ctrl-Z. That is one of the key things about the embodiment of autonomous systems, and it is why I enjoy the narratives we are building around soft robotics: it is about making interactions safe.

 

Where can tomorrow’s robots make the world a better place, and most importantly, better for whom?

Robots can make the world better by addressing critical problems, but who benefits depends on how we design, deploy, and govern them.

  • Positive applications: Agriculture, where we can increase productivity while reducing pesticide reliance. Dangerous jobs like nuclear decommissioning and offshore platform inspection, jobs that put workers at risk; in decommissioning legacy nuclear waste, the work is performed by people close to the end of life, who can go and take their whole lifetime dose of radiation to remove pieces of nuclear waste. This is precisely the kind of work we are morally compelled to eliminate through robotic deployment. Healthcare and rehabilitation, including post-stroke rehabilitation and assistive technologies for independent living. Environmental monitoring: tracking pollution, monitoring fragile ecosystems, climate change mitigation. Workplace accessibility, where offshore platform robots can bring jobs onshore, making them accessible to women, people with disabilities, and those who need to be near family. I have worked on robots for post-stroke rehabilitation, where how rapidly a stroke patient can be provided with physiotherapy determines their clinical outcome: the more rapid the onset of physical rehabilitation, the better. Trust is not something we tend to look at explicitly in my research, because we are very much a physical-sciences engineering group, but the trust discussions centre on what the choice is, what the alternative is. If your choices are to use a robot for rehabilitation, because it will likely result in a much better clinical outcome, or to do nothing, then you are goaded into moving down that trust line of “I believe that it will do me some good and I will use it” or “What is the worst that could happen? It’s no worse than me not using it.” The question is couched in the nuances of the specific application area.
For the work we have done in nuclear decommissioning, the question of trust is very different from the question of trust in robots for stroke rehabilitation. It is difficult to give an overarching answer because robotics is not a discipline. Robotics is a series of technological solutions to a range of different problems.
  • Critical concerns about who benefits: Without careful governance, robots could amplify existing inequalities. The people who win and lose will be the same kinds of people, except the gap will be larger. There is a risk that economic benefits accrue to those who can afford the technology: the global North, wealthy elites. Job displacement may affect vulnerable workers without adequate retraining or alternative opportunities. Policing and surveillance applications could disproportionately harm marginalised communities. Environmental costs, like e-waste and material extraction, may be externalised to poorer regions.
  • The moral imperative: I believe that we are morally and ethically compelled to deploy robots for dangerous, dull, and dirty jobs rather than have low-paid labor perform them. However, this must be balanced with ensuring displaced workers have meaningful alternatives and that the transition does not simply create new forms of exploitation.

These are not just technological questions but socioeconomic ones requiring community decisions about governance, regulation, and economic models. I believe success depends on ensuring robots serve broader social goals rather than solely commercial interests, and that the voices of those who will be most affected are included in design and deployment decisions. The decisions we make today about robotic development will shape not only immediate outcomes but also the technological pathways available to future generations, making careful governance essential to avoid lock-in effects that constrain long-term possibilities.