AI and ownership

I recently came across a social media post commenting on an AI-generated picture of a group of smiling people, largely women. It was quite innocuous as an image, but the ire of one commenter was drawn from the usual place – AI is trained on real images without consent.

A little like:

accountxyz: hope the people you all use in prompts start suing the heck out of all of you

Okay. Most AI images are generated by models trained on massive banks of largely unattributed images, which would make that quite the class action.

Better though (if we want to get social-warrior about it – which I’m down for): how about some lawsuits against the predatory media, fashion and entertainment industry bodies, organisations and individuals that have destroyed young women’s lives and careers for profit? For that matter, what about accountability for the individuals who have openly joked about their dehumanising treatment and abuse of women? Let’s not leave men out – the hazing and humiliation of young men in those industries is also a known blight in which, by our silence, we are complicit.

Whilst that may be somewhat off topic, a person’s right to own their own identity is heavily impacted by data protection and its regulation in the current era. Legislation tends to move at a glacial pace compared to digital technology. However, legislation has been uncharacteristically nimble at times, as seen in the EU data reforms, and in business with Cloudflare and others moving to take control of bot crawling. The use of AI to increase the speed and volume of trials for combating COVID pushed reforms for bringing pharmaceuticals to market at a rate that was previously implausible.

On a different tack … how about the artists who have been plagiarised – but not compensated – by models being trained on their work?

The digital era has invented some models for mass distribution and monetisation that could be applied to the training and prompting of AI outputs. The Spotify model is probably the most familiar example. These models arose on the digital frontier, in a time when torrenting ruled and artists’ control of their IP was under threat. It’s been said that this led to better control for the studios than for the artists. From the Wild West of torrent downloads, more was gained technologically than was redressed ethically. A more ethical distribution of the equity is always a goal.

Philanthropically, there is potential in a model where the incredible profit margins of AI start-ups and established tech giants are “encouraged” to be farmed back into art and social programs. NGOs, NFPs, government and community-led organisations are better positioned to lead the redress of societal imbalance, and best avoid the conflict of interest in distributing that gain. It isn’t direct compensation for artist IP; rather, it’s a longer-term sector and societal solution, working more as a redistribution of individual wealth gained from the common wealth.

That being said, there are options other than “Ban the AI”. Artificial intelligence is past cancel culture. What that means is we need to ask ourselves as individuals, as communities and as a society: what are the potential risks, what are the possible mitigations of those risks, and what are the opportunities – and for whom? This is a long game. The more we move together as a society, bottom to top, shoulder to shoulder, the better our future becomes. The biggest opportunity is to build our ethics into our solutions. And now is precisely the time.

Don’t let your AI run with a bad crowd.

I’ve been getting into using AI in my day-to-day a lot over the past six months. Deciphering cantankerous customers’ meanings. Writing the start of a long letter when I don’t know where to begin. And sometimes creating a logo… well, I gave it a try with a clear set of prompts – “logo for a book club, an illustration of an open book, simple line art” – and this is what I received.

Well, I don’t know where it learns those kinds of words, but it certainly isn’t from me. I daresay it’s picking those things up from that Unstable Diffusion crowd. Frankly, I’m not even mad; I’m disappointed.

Similarly, there are days when writing prompts feels like the AI is just being wilful. “What do you mean you don’t know what an infinity symbol is? You drew the Kardashians coming out of a clown car a minute ago just fine!” I really have to resist the temptation to push my prompt, weighting it up and up until all hell breaks loose and tortured content starts flying about the screen as we wrestle with concepts that are really beyond both of us.

So yeah, I really need to take better care of my AI. We’re better off working from the same playbook.

What my AI thinks it looks like when we are fighting