
Machines with Rights? Why the Law (Still) Believes in Humans

Why the Question of Legal Personality for AI is Relevant at All

Artificial intelligence is now more than just a technological buzzword – it makes decisions, controls machines, generates content, and increasingly changes processes that were once reserved for humans. This raises a question that was long confined to science fiction: Should autonomous AI systems or robots be legally recognized as their own entities – as legal persons?

This is not a purely philosophical exercise. As AI systems become more autonomous, it becomes harder to attribute their actions clearly to humans. Who is liable if an autonomous vehicle makes an error that no one could foresee? Or if an AI system enters into contracts whose specific content no person knows?

What is a Legal Person – and How Could AI Be Part of It?

Current law distinguishes between natural persons (humans) and legal persons (e.g., companies or associations). Legal persons are artificially created legal entities that enable representation of collective interests or structuring of complex economic activities.

This shows that legal personality is not tied to biological characteristics but can be created by legislation where it appears functionally or legally expedient. In some cases, this status has even been extended to non-human entities – such as the Whanganui River in New Zealand, which was legally recognized as a person.

Switzerland takes another approach: since a legal reform that took effect in 2003, animals are no longer treated as mere objects but occupy a legal category of their own (Art. 641a ZGB). Although they do not possess legal personality and cannot be parties to litigation, Swiss law acknowledges their special ethical status. This shows that intermediate forms are possible – beyond the classic dichotomy of person and thing.

This raises the question: Could an artificial intelligence also become a legal entity – a holder of rights and duties – if it acts autonomously and participates in legal transactions permanently? And: What legal and ethical consequences would that have?

Status Quo: AI is Not a Legal Entity Worldwide

Currently, AI is not recognized as a legal person anywhere in the world. Neither in the EU, the USA, China, nor in international law. Instead, it is legally considered a product or tool. For damages caused by AI, its developers, manufacturers, owners, or users are liable – not the AI itself.

This model has worked so far but reaches its limits when systems become so complex and self-learning that human responsibility becomes difficult to prove. Nonetheless, the legal consensus is clear: no legal personality of its own for AI, but stronger regulation and liability of human actors.

Europe: The Failed Idea of the "Electronic Person"

The debate was particularly intense in the EU: In 2017, the European Parliament proposed creating an electronic person for advanced autonomous systems – primarily to clarify liability.

The reaction was fierce: over 150 experts warned in an open letter against a dangerous fiction that could only serve to obscure human responsibility. Ethical objections were also raised: machines cannot be bearers of rights or moral responsibility.

Since then, the EU has clearly positioned itself against this idea. Instead, it relies on rules such as the AI Act, the revised Product Liability Directive, and the clear allocation of responsibility to humans and companies.

USA and China: No Legal Status for AI – But Different Approaches to Handling It

Neither in the USA nor in China is artificial intelligence given its own legal status. The rule is: only humans – or legal persons created by humans – can be bearers of rights and duties. AI is instead viewed as a tool, operated by humans who remain accountable for it. Legal experts refer to this as an "anthropocentric" understanding of the law – a law that centers on humans.

In the USA, AI is addressed mainly through existing legal concepts such as product liability, contract law, or negligence. Patent law draws the same clear line: only humans can be named as inventors – not AI systems.

China follows the same fundamental principle but takes a significantly more regulatory approach. Responsibility likewise rests clearly with the provider or user of the AI. At the same time, China is rapidly creating new rules – for example, for algorithmic recommendation systems and generative AI. The goal is to control risks and give companies clear guidelines.

Legal Philosophical Perspectives: What Constitutes a Person in Law?

Ultimately, it comes down to a fundamental question: Who or what can be a legal entity at all?

The classical approach states: Only rational, morally competent beings – i.e., humans – or constructs controlled by them (e.g., companies) can hold rights and duties.

On the other hand, some voices argue: If legal personality is a useful fiction, it could also be applied to AI – for example, to create clear liability structures or allow AI systems to operate in the market.

A compromise proposal suggests: A limited, functional legal personality for particularly autonomous AI – for instance, with its own liability fund, without civil rights or moral status. But even this is legally and ethically controversial.

Outlook: No AI Status, but More Regulation

In the short and medium term, AI is not expected to receive the status of a legal person. Concerns predominate – legal (e.g., potential liability gaps), ethical (a possible dilution of the human-centered understanding of law), and societal (questions of trust, legitimacy, and acceptance).

Instead, the global focus is on the following areas of action:

  • Liability rules that also legally address autonomous systems
  • Insurance obligations to cover damages by AI
  • Technical standards for safety, transparency, and traceability
  • Human-centered control, e.g., through governance, audit, and transparency mechanisms

In the long term, this assessment could change – for example, if AI systems were to develop consciousness-like or intentional properties. In such a case, law would have to rethink subject status, responsibility, and deserving protection – not only functionally but also normatively.

Conclusion

The discussion about the legal status of artificial intelligence is not a marginal topic but touches the foundations of our legal order. The law currently treats AI as a tool, not as an acting subject; yet the more responsibility machines take on in real decision-making processes, the more intensely we must examine the limits of our legal categories of person, responsibility, and attribution.

This debate is not a technical detail – it represents one of the central future issues in the interplay of law, technology, and society.