Ethical questions surrounding the role of Artificial Intelligence (AI) in our lives are echoing ever more loudly and persistently through the threads of public debate. Should computers drive our cars? Should computers create art? Should computers decide what we read, eat, see, buy and so on?
Recently, the use of AI to recreate the voice of the late Anthony Bourdain in Roadrunner, the documentary about his life, stirred controversy, throwing the question of how, and when, AI should be used in the media sharply into focus.
Transparency in the use of AI
Filmmaker Morgan Neville used AI technology to stitch together fragments of Bourdain’s actual voice to create the illusion of him speaking – reading his own words. Since the phrases Neville generated are genuinely attributable to Bourdain, the issue is not one of veracity but disclosure.
By failing to disclose his use of AI, the filmmaker undermined the credibility of his work, leaving his audience (at least those aware of the controversy) in the uncomfortable position of not knowing when they are hearing Bourdain's actual voice and when a computer-generated construct. In short, the viewer is no longer sure what is real, or 'true'.
This sense of unease at not knowing where the tentacles of AI end and reality begins is profound and unnerving.
The term ‘deep fake’ says it all. Digital technology can create virtual worlds where our senses are tricked into mistaking a computerized version of life for the real thing, an experience that can be as exhilarating as it is disorientating.
Deep fakes are so convincing they challenge our understanding of reality and truth, blurring the line between fact and fiction. As AI technology has evolved, the seemingly reliable analogue world of physical media (sound recordings, photographs and film) has become ever more fragile. The specter of the deep fake now hovers like an ominous shadow over the media and in our consciousness.
For now, it feels as though AI is still confined to the periphery of photography.
Major players like Amazon and Microsoft are already using powerful AI to analyse and auto-tag images, telling us what and who is being portrayed. As fun as it may be on social media, AI-driven facial recognition is also turbo-charging the power of the surveillance state.
As much as one may pine for the halcyon days when a photograph was more rigidly defined by the celluloid on which it was recorded, the digital genie, and the realities of ever more powerful AI technology, cannot be put back in their box.
It is for photographers to harness the power of AI creatively, while adhering to ethical boundaries for its use, especially in an editorial context.
As software and algorithms penetrate deeper into the creative process, we run the risk of clever software supplanting our human sensibilities and the skills long associated with creating great photography.
How long before AI starts not only tagging your photos, but pre-editing them, even applying the filters it calculates will best enhance a given scene? Through machine learning, AI systems grow ever smarter and more capable as they are trained on more data.
For now, the challenges AI poses to photography are more ethical than existential. If you intend to use AI tools to modify an image in ways that subvert reality, then disclosure is essential. Without this ethical boundary, we risk inhabiting a chaotic photographic universe where the line between fact and fabrication is forever erased, and the documentary photograph loses its standing as a reliable witness.
Written by Yvan Cohen
To read more helpful articles on photography, check out our blog page.
Join our growing photographer community at LightRocket and get powerful archive management and website building tools for free!