Is the law ready to give “thinking” AI a legal status comparable to that of a human?

Today, 13:13 | Technologies
Photo: Зеркало недели

It would be easier to issue a passport to an alien from outer space than to artificial intelligence. It sounds provocative, but behind this phrase lies a very practical question: what does it mean for our legal system when another mind appears alongside the human one? And how will the law react if this mind is born not “from the outside”, but within already existing systems - corporations, states, organizations?

Imagine that an alien civilization makes contact with humanity. The contact is textual. “They” solve the most complex problems, design experiments, write flawless code and create poetry indistinguishable from human poetry. Would we deny their intelligence just because we cannot look “inside”? Hardly. Then why do we systematically apply this standard of skepticism to artificial intelligence?


But law is interested in more than reason. It is interested in responsibility. A person is a subject of law not because he thinks, but because he holds rights, bears obligations and answers for his decisions. And here a harder question arises: if artificial intelligence is capable of thinking and making decisions, is that enough to make it a subject of law? And who will answer for the consequences of its actions? Already today, digital judges, doctors and bankers are at work in various spheres of public life.

But before we talk about legal personality, we need to answer another question: do we really have grounds to call artificial intelligence “intelligence”? Doubts about this are not new. Can a machine think? This question was raised back in 1950 by Alan Turing, a British mathematician and one of the founders of modern computer science. In other words, can we speak of intelligence in artificial systems? In modern discussions on this subject there are several common objections, and counterarguments to them. And they concern not the conventional “mass-market” systems such as ChatGPT, but the AI models being developed in the depths of large corporations.

The first counterargument concerns the so-called collective blind spot. We still think in terms that by default identify higher intelligence with biological form. Therefore, if AI has no biological form - a conventional “body” - then it is not intelligence. Indicative in this context is a publication in Nature, prepared by an interdisciplinary group of researchers in artificial intelligence, philosophy, linguistics and data science, which points directly to cognitive inertia even within the scientific community: a reluctance to admit the obvious fact that humanity has ceased to be the only bearer of the signs of general intelligence. We are no longer alone - at least in the intellectual dimension.


Alan Turing proposed assessing a machine’s ability to think not by its internal “experiences” but by its behavior - that is, by whether it can maintain a meaningful conversation such that the interlocutor cannot distinguish it from a person. Modern language models have effectively implemented this approach: they conduct coherent dialogues, transfer knowledge from one area to another, and work with abstractions.

The second counterargument about a machine’s ability to think can be called the “false bar”. Skeptics often demand perfection from artificial intelligence: universality, infallibility, omniscience. But general intelligence - and human intelligence in particular - has never meant omniscience. Marie Curie was not an expert in number theory; Albert Einstein did not speak all the languages of the world. We recognize a person’s intelligence on the basis of sufficient depth and breadth, not absolute completeness. By the same criteria, modern AI systems demonstrate results comparable to the level of trained researchers.

The third counterargument is the so-called stochastic parrot thesis. Its essence is that language models supposedly do not think, but merely reproduce, statistically, the words most likely to appear next to the wording of the user’s request - like a parrot that repeats what it hears without understanding the content. But the transfer of knowledge, the generalization of structures and the ability to solve problems that were not in the training data all go beyond simply combining words. And the textual form of interaction is not, in itself, grounds for denying intelligence. If we recognize the genius of Stephen Hawking, who for decades communicated with the world through text and a speech synthesizer, then clearly the mode of communication does not determine the presence or absence of thinking. Denying the intelligence of AI simply because it lacks bodily interaction looks less like a scientific position than a form of anthropocentric prejudice. So the question is no longer whether a machine can think, but what this means for our self-image.

Today we are living through a profound shift in the understanding of humanity’s place in the world - the third such shift in our history. Copernicus deprived us of the illusion that we are at the center of the Universe, Darwin deprived us of biological exceptionalism, and modern AI undermines our monopoly on higher intelligence. Yes, artificial intelligence has a “jagged profile” of competencies: brilliant analysis can coexist with a primitive error. But this asymmetry is typical of people too. It is not a denial of intelligence, but an atypical configuration of it.


Each such shift sooner or later calls into question the legal and ethical norms by which society organizes its life. So the problem now is not whether so-called artificial general intelligence (AGI) - a system capable of acting as universally as the human mind - exists. The problem is that law, ethics and social institutions have not yet developed a language for coexisting with it. History compresses time frames: what used to take decades now happens in months. Recognition or non-recognition of AGI is no longer an academic game, but a matter of the long-term stability of civilization. We need a new ethics and a new legal imagination to coexist with this new form of intelligence.

Today, the determining factor for law is not the “nature” of a being (biological, technical, hybrid), but its qualification: is it a “person” (a bearer of legal personality) or a “thing/object” (an object of the rights of others)? On this depends who can be the addressee of a norm, who is “assigned” the duty to act or to refrain from acting, who answers for damage, and who is actually capable of ensuring the execution of a decision of a court or administrative body. And it is here that the nerve of the discussion is concentrated: legal personality is the “entrance ticket” to the legal order, but this ticket always has a price in the form of responsibility.

And this is where artificial intelligence confronts the law with an atypical situation. Unlike “another mind from the outside”, it is born inside our organizations, corporations, institutions and states. It has an owner, a developer, an integrator, an operator, a customer; it has supply chains, updates, data and access. Therefore, any attempt to proclaim “let AI be a subject and answer for itself” immediately falls under suspicion of the cardinal legal sin - the dissolution of responsibility. And this is no abstraction: if the “subjectivity of AI” becomes a screen behind which the real controllers of risk (people and corporations) relieve themselves of the burden of due responsibility and control, then the rule of law is shooting itself in the foot.


Legal regulation is not declarations or corporate manifestos, but a system of norms established and applied in a certain order. Therefore, the bold but unpromising statements of many transnational corporations about developing “constitutions for AI” - a kind of “handicraft” production of legal documents outside the established procedure - have nothing in common with the constitutions of states, either in content (substantive law) or in the procedure of their adoption and promulgation.

Let us add one more, “journalistic” angle to this discussion. In our imagination, the alien is something that exists “outside”, while AI appears “inside” our institutions. The question of the subjectivity of AI is therefore not about the contact of cultures, but about the redistribution of power within our society. My position as an analyst is this: recognizing the alien as a subject (a party to contact) is an attempt to stabilize an external risk; recognizing AI as a subject means destabilizing the balance of duties and responsibilities between the citizen, the state and corporations. That is why law intuitively keeps its center of gravity on people and organizations: there we find assets, insurance, internal control systems, regulatory oversight and mechanisms for enforcing decisions.

This is why it may seem that we are “generous” to aliens and “stingy” to AI, but in fact we are consistent.

Law provides subjectivity where it does not destroy responsibility and makes it possible to maintain control over interaction.

In the case of “another mind”, subjectivity is an instrument of peace; in the case of AI, premature subjectivity easily becomes a tool for avoiding responsibility. And therefore, until the subjectivity of AI proves - functionally above all, not metaphysically - that it strengthens the protection of citizens rather than replacing human and organizational responsibility, the law will, quite rationally, choose caution.


Therefore, the question posed at the beginning remains open: can artificial intelligence be a subject of law? And it remains open not out of fear of a new mind, but for another reason: who will answer for its decisions? Subjectivity in law means not only rights, but also obligations, sanctions, coercion and the execution of decisions. And if recognizing AI as a subject leads to the erosion of the responsibility of developers, owners, users or the state, then it is quite rational for the law to choose caution. That is why the idea of a “passport for AI” will remain, for now, a metaphor rather than a legal reality.
