Are intellectual property laws ready for AI in the UK?
The concept of Artificial Intelligence (or AI) has been around for a while. In fact, many of us use or take advantage of it without even knowing. For example, when you look up a route on your sat nav, it automatically considers 'real time' construction, accident or maintenance delays to present (and maintain) the quickest route. Or how about the way apps tell you how much your Uber ride is going to cost… or where your driver is at any given time? To be honest, it's not even a terribly modern concept: autopilot functionality capable of actually flying a plane has been in operation since 1914, and is now standard on commercial airliners.
But bear in mind how AI works: by assessing data and applying an artificial form of reasoning, communication, learning, sensing and understanding to make a recommendation, or even to action something. This opens up a number of legal questions we might not yet be ready for.
The legal questions we don't yet have answers for...
It's easy to conceive of challenges for those drafting contracts relating to legal 'risk' and liability. For example, what about the healthcare organisation that uses AI to analyse huge volumes of patient data (as well as data from other sources) to understand symptoms and suggest treatment options? Or autonomous vehicles navigating their way through busy areas while avoiding pedestrians and other vehicles? But what about intellectual property? If something using AI 'does what it does' and comes out with a new invention… without any human involvement… surely that would make it the legal 'inventor'? Right? But under UK patent law, an inventor is defined as a 'person'. So how does this work if the inventor is a computer? Furthermore, as things currently stand, if a person discloses their invention to the state, they are given a 'patent bargain': a 20-year monopoly on that invention. But AI allows for developments at a far faster rate and with relatively little 'hard work'. Is it right, therefore, that an AI should have a 20-year monopoly for its invention?
In reality, we are not yet at a truly autonomous state when it comes to AI. In the majority of cases a human has had, or will have had, some involvement before anything becomes 'real'. This, however, presents questions of its own. If an AI 'invents' something… is the 'inventor' the human who wrote the software code and created the algorithms that allowed the AI to 'invent', or the person who asked the question or who 'uses' the particular AI?
Similarly, we must ask what happens if the output is already protected by copyright. The music industry is - perhaps unsurprisingly - slightly ahead of the game here. Under UK copyright law, the person "by whom the arrangements necessary for the creation of the work are undertaken" is considered to be the legal author of computer-generated literary, dramatic, musical or artistic works. But we have to be very careful, once again, about the level of human involvement. The output can only be considered "computer-generated" if it is "a work generated in circumstances such that there is no human author of the work". This would mean, therefore, that under current UK copyright law the software programmer would most likely be the author and first owner of a copyright work generated entirely by an AI.
But the reality is rarely this simple. In the majority of AI projects, teams of programmers across multiple companies have worked on the design of the algorithms and on determining and providing the data sets to be analysed. That points to multiple, joint owners.
One final question we must ask is whether it's even right that the output of an AI should be protected by copyright at all. Copyright is designed to protect "original" literary, dramatic, musical and artistic works, where "originality" means that 'skill, labour and judgment' have been expended. Do we feel that AI applies skill and labour, and thereby deserves protection? Does the skill and hard work that went into writing the algorithm count?
What about IP infringement?
Of course, making a decision on this will only make a difference if we also have the legal infrastructure to hold an AI accountable for infringement. While most AI still involves humans in some way (designing, implementing, testing, using, and so on), it should still be possible to hold that "ultimate person(s)" accountable under current laws. At the point at which AI becomes fully autonomous, however… then we will have a problem. It may not be so easy to work out whether an AI's decision to infringe or create IP is ultimately attributable to a human.
So… in the meantime? If you're involved in AI? First, as always, talk to an expert. Second, don't wait for litigation to 'make a decision' for you. Make sure that any new agreements relating to the development and use of AI clearly state which party (or parties) should own any protectable IP resulting from the AI.