In 2012, the Supreme Court decided Mohamad v. Palestinian Authority, a suit brought by the relatives of U.S. citizen Azzam Rahim, who alleged that the Palestinian Authority had imprisoned, tortured, and killed him. The Torture Victim Protection Act allows victims to sue the “individuals” responsible for torture and extrajudicial killing. The Court held that “individual” means only a natural person, a category that an organization like the Palestinian Authority does not fall under, and dismissed the case. The decision solidified that the word “individual” would be read to include only human beings.
In response to this restrictive reading, technology-activist groups have formed to advocate that “individuals” should include artificial intelligence systems, which would grant them intellectual property rights. To obtain a patent, for example, an application must list an “individual” as the inventor, which excludes an AI from obtaining rights to its inventions.
The Artificial Inventor Project has applied for international patents for two inventions created by an artificial intelligence system: a food container that uses fractal surfaces for improved insulation, and a flashing light beacon designed to attract attention in emergencies. The applications were denied and are currently under appeal in the United Kingdom, Germany, Australia, and before the European Patent Office. In 2021, the project obtained in South Africa the first patent ever to list an AI as its inventor. The South African patent credits the artificial intelligence system DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) with “autonomously” generating the food container and specifies that DABUS is an artificial intelligence system. Despite this international success, the Artificial Inventor Project is still appealing the recent U.S. decision in Thaler v. Vidal rejecting its patent applications. The Project claims that the gap between artificial intelligence and human consciousness is closing rapidly and that some AI software systems may already have the human-like capacity to produce new and creative inventions, which it believes should be credited to the machine internationally.
Facts of the Case
On August 5, 2022, the U.S. Court of Appeals for the Federal Circuit upheld in Thaler v. Vidal the Patent and Trademark Office’s (USPTO) decision to deny patent applications listing an artificial intelligence software system as the inventor. The Court maintained that inventors must be “natural persons,” a category that, under the precedent of Mohamad v. Palestinian Authority, includes only human beings. The Court of Appeals “did not consider an abstract inquiry into the nature of inventions or rights” of artificial intelligence and consulted only the relevant statute.
Stephen Thaler submitted two patent applications to the USPTO in 2019 with “DABUS” listed as the sole inventor. DABUS is an artificial intelligence software system created by Thaler that is capable of producing “its own” inventions. Thaler claimed that he could not be considered the inventor of DABUS’ creations because he only built DABUS itself and had no direct influence over its output. He argued that DABUS was able to output information about designs that “immediately led to obvious inventions,” which in his view entitled DABUS to credit for the overall creation.
The USPTO therefore returned the applications to Thaler for lack of a valid inventor, a decision the District Court upheld by interpreting the statutory requirement of an “individual” inventor to mean a natural person. The Court of Appeals reaffirmed this decision in August 2022, noting that the Patent Act consistently refers to inventors as “individuals.” While the Patent Act does not explicitly define “individual,” the Court observed that the Act repeatedly uses “himself” and “herself” to refer to an inventor, rather than “itself,” the pronoun that would describe an artificial intelligence system. Additionally, obtaining a patent carries other requirements that artificial intelligence systems cannot fulfill, such as “submitting an oath or declaration,” which an AI system cannot yet do of its own accord.
As technology grows more powerful by the decade, developments in AI have also raised questions of when, if ever, technology should be granted human rights. Human-like systems are beginning to pass the Turing Test, which asks whether a human judge can distinguish a machine’s responses from a human’s, casting doubt on the boundary between humans and machines.
The decision in Thaler v. Vidal shows that the U.S. is approaching these metaphysical questions traditionally, drawing a clear line between humans and technology through the definition of “natural.” As artificial intelligence activists grow in number, the U.S. legal system will likely have to adapt to changing technology or face backlash from those who blur that line. As countries like South Africa begin to grant patents naming AI systems as inventors, the U.S. should be wary of falling behind.
Alex Penne is from Durham, NC, studying Electrical/Computer Engineering and Physics