Through the Black Mirror: "Joan Is Awful" and AI Malpractice
In a recently published article, AI and Security: Ensuring That Opportunities Outweigh the Threats, IDTechEx discussed the importance of ownership and culpability when deploying AI tools, especially in the context of creative works. This question of accountability, and the potentially insidious use of artificial intelligence in the creation of intellectual property, is a theme touched on in "Joan Is Awful", the first episode of the new Black Mirror season. For those not already in the know, Black Mirror is a speculative fiction anthology series created by Charlie Brooker. Premiering in 2011 and now in its sixth season, Black Mirror runs the gamut of existential subject matter, from questions of ethics and morality (the good of the many against the good of the self) to the potential consequences of unchecked and unregulated scientific advancement.
"Joan Is Awful" follows Joan - played by Annie Murphy - a manager who sits below the board at a tech company and has to make one of her employees redundant despite the ramifications of this on the company's recent green initiative pledge. We also see her texting with her ex-boyfriend, visiting a therapist, and finally sitting down with her fiancé to watch a show on Streamberry, this season's in-universe analog of Netflix.
Italy's temporary ban of ChatGPT in April 2023 could be a sign of things to come as AI tools develop and are put to use across a greater number of applications, ultimately affecting more people, especially where creative output is concerned. Source: IDTechEx
Spoilers ahead. Skip this section if you are sensitive to spoilers.
They come across a new show, the titular Joan Is Awful, and begin to watch. Everything we have seen of Joan's day thus far is dramatized in the show, with Salma Hayek playing Joan in the Streamberry adaptation. Joan (the one we know) is completely taken aback, unable to fathom how her life has been so deeply invaded. The dramatization also skews her personality, exaggerating her negative traits: she is shown as callous towards the employee she makes redundant, whereas in reality she felt largely powerless in the decision and expressed a modicum of pity. Frighteningly for Joan, the show is not restricted to her own account: anyone can watch it, and she is ostracized by friends and family, as well as fired from her job (since the dramatization of her life is deemed a breach of her NDA).
Joan ultimately seeks legal advice, only for her lawyer to inform her that she consented to Streamberry's Terms and Conditions, which grant the company the right to use and dramatize any and all aspects of her life, including her name. As the lawyer admits at the start of their conversation, "I'm as shocked as you are". Joan then changes tack and proposes suing Salma Hayek for portraying her. Again, the lawyer rules this out: only Salma Hayek's likeness is used, as the entire show is CGI.
Joan is left with no legal recourse. Then, in a stroke of genius born of utter desperation, she finds a way to get Salma Hayek personally invested: she defecates in a church during a wedding ceremony, knowing the event will be re-enacted on the show. The real Salma Hayek (who is, after all, effectively playing herself) understandably takes issue with this and talks to her own lawyer about suing Streamberry. But she, too, has licensed her image to the company, allowing it to take such liberties.
Spoilers end here.
Joan and Salma, canvas and paint alike, have absolutely no ownership over how they are portrayed. And it is this question of ownership that will be more frequently asked in connection with AI tools as they develop and draw on a more diverse range of data sets.
While the case against Joan and Salma may appear legally watertight within the Black Mirror episode, the use of generative AI to create content, even in its current, comparatively limited form, still raises the important question of ownership, and it is a question that current IP law does not robustly answer. Patent law generally considers the inventor to be the first owner of an invention. In the case of AI, who invents? The human writes the (initial) prompt, but it is the AI tool that creates the output. An AI may also be used to prompt other AI tools, so AI can act as both prompter and creator. Other parties must also be considered, such as the developers of the AI tool and the owners of the data comprising the datasets used to train it.
Italy's temporary ban of ChatGPT could well be a sign of things to come. As AI tools grow more advanced, and the content they can generate more sophisticated, the approach taken by the Italian Garante could (and, most would probably agree, should) be adopted by all data protection agencies to ensure that personal data used to train such algorithms cannot be misused.
Because no one wants to be Joan.
IDTechEx forecasts that the global AI chips market will grow to US$257.6 billion by 2033. The report covers the global AI chips market across eight industry verticals, with granular 10-year forecasts in seven categories (including geography, chip architecture, and application). In addition to revenue forecasts, the costs at each stage of the supply chain (design, manufacture, assembly, test and packaging, and operation) are quantified for a leading-edge AI chip; rigorous calculations and a customizable template for customer use are provided, as are analyses comparing the costs of leading- and trailing-edge node chips.
IDTechEx's latest report, "AI Chips 2023-2033", addresses the major questions, challenges, and opportunities facing the AI chip value chain. For further understanding of the markets, players, technologies, opportunities, and challenges, please refer to the report.