The Self Portrait as a Drowned Man by Hippolyte Bayard was staged and taken out of frustration at not receiving recognition as an inventor of photography, although it seems it was Bayard himself who decided to wait until after Daguerre’s announcement of the invention of photography (probably on external advice).

Bayard’s paper procedure was most likely indeed earlier than Daguerre’s copper plate, both using silver, but the silver solution was difficult to wash out of the paper (to fix the image) and in many cases the images faded over time. Another issue was that variation in the paper had a massive influence on the end result, as if there were multiple truths, depending on the paper alone.

 

Bayard, unlike Daguerre, worked on paper. While Daguerre used a copper plate with a thin silver coating sensitised with iodine as his light-sensitive material to be exposed in the camera obscura, Bayard used paper soaked in silver nitrate and salt. This meant that the chemical reaction producing the image occurred within the fabric of the paper, rather than on the surface of a coating. As a result, papers of different composition and density created different versions of the same “reality”, pointing to the contingency of visual perception. (Sapir, 1994)

Thus the paper process failed to produce photographs that had the scientific clarity, focus and uniformity needed for the geometrization of space which was a central strategy of Realist visuality. As such, it could not serve the ideology of Enlightenment and Cartesian knowledge, which purported to use photography as “evidence” for its myth of transparent, observable meaning. Instead, the paper-process photographs, such as Bayard’s, come across as baroque allegories, removed from meaning by the screen of the medium, inextricably interlacing appearance and being. In this way Bayard’s Self-Portrait further undermines its own claim to authenticity in the representation of death. (Sapir, 1994)

Nevertheless, it is a rather intriguing story, probably full of politics, publicity tricks and timing in a global race towards the birth of photography.

The discussion on image manipulation is not new, but it is getting ever more refined and more widely spread, and perhaps more important. The digital process has made the line thinner between enhancing an image and manipulating the situation or observation; the sketch below illustrates the difference. Software like Photoshop, GIMP, Pixlr, Affinity and many more, but also new, powerful mobile apps, make it a breeze for everyone to manipulate an image in all its facets: from colour to content, from sky and weather to people, from face to gestures.
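To make that distinction concrete, here is a minimal, illustrative Python/numpy sketch of my own (not taken from any of the tools mentioned above). The random arrays merely stand in for a real photograph and a stock sky: the first operation only adjusts tone, the second replaces part of the observed scene.

```python
import numpy as np

# Stand-in for a photograph: an 8-bit RGB image (random values, purely for illustration).
rng = np.random.default_rng(seed=0)
photo = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

# 1) Enhancement: a global brightness/contrast tweak.
#    The scene content is untouched; only its tonal rendering changes.
enhanced = np.clip(photo.astype(np.float32) * 1.15 + 10, 0, 255).astype(np.uint8)

# 2) Manipulation: replace a region of the scene (say, the sky) with foreign pixels.
#    The observation itself is altered, not just its rendering.
fake_sky = rng.integers(120, 200, size=(160, 640, 3), dtype=np.uint8)  # stand-in stock sky
manipulated = photo.copy()
manipulated[:160, :, :] = fake_sky  # paste foreign content over the top rows

print(enhanced.shape, manipulated.shape)  # both (480, 640, 3): same format, different truth
```

Both results are ordinary image arrays; nothing in the file format distinguishes the tonal tweak from the content swap, which is exactly why the ethical line is so hard to draw technically.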

Manipulation in the commercial environment seems largely accepted: colouring food, smoothing skin, replacing the sky above your house-for-sale. Topaz software does this with one click, choosing from dozens of stock skies. But when it continues into virtual products in a virtual surrounding, it is no longer manipulation but fully synthetic imagery. Virtual/digital 3D cars in a virtual showroom (Relaycars.com, 2020), used to test new models even before production or physical modelling (there used to be clay models in the old days), are now common practice.

Looking at it from an art perspective, the (digital) photocollage has matured into a genre of its own, and lesser manipulation seems equally acceptable, as long as it is transparently revealed that the image has been manipulated.

It becomes ethically disturbing when manipulation slides into the social and public domain. Over the past year the phrase “fake news” has devolved into an irritating joke, but real fake news, intended to misinform or mislead socially, economically or politically, has never been easier to produce, never more widely spread and never more dangerous than it is today.

One of the more interesting examples of ongoing technology is StyleGAN, which is freely available under the Nvidia Source Code License-NC but has only one specific goal: flawless, automated image generation. The results are spectacular, and (relatively) spectacularly simple to achieve.

Abstract: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent vectors to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably detect if an image is generated by a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality. (NVlabs, 2020)
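To give a feel for what “unconditional generative image modelling” means in practice, here is a toy, self-contained Python/numpy sketch of the general idea: a random latent vector is mapped to an intermediate “style” vector and then to pixels, with no input photograph at all. The two stand-in networks below are random and untrained; the real StyleGAN2 generator in the NVlabs/stylegan2 repository is a deep, trained network with its own API, and this sketch only mimics the flow of data, not its quality.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
Z_DIM, W_DIM, HEIGHT, WIDTH = 512, 512, 64, 64   # latent sizes and output resolution (toy values)

# Random, untrained stand-ins for StyleGAN's trained mapping and synthesis networks.
mapping = rng.normal(scale=0.05, size=(Z_DIM, W_DIM))
synthesis = rng.normal(scale=0.05, size=(W_DIM, HEIGHT * WIDTH * 3))

z = rng.normal(size=Z_DIM)                        # sample a latent vector (pure noise)
w = np.tanh(z @ mapping)                          # intermediate "style" vector
pixels = 1.0 / (1.0 + np.exp(-(w @ synthesis)))   # squash to the [0, 1] range
image = (pixels.reshape(HEIGHT, WIDTH, 3) * 255).astype(np.uint8)

print(image.shape)  # (64, 64, 3): a fully synthetic image, no camera involved
```

With trained networks in place of these random matrices, every fresh latent vector yields a new, photorealistic face of a person who never existed, which is what makes the technology both spectacular and unsettling.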

Such software can be used both for the creation of deep-fake images and, thanks to the easier inversion mentioned in the abstract, as a tool to determine whether an image was generated by a particular network.

The discussion also covers how to prevent unwanted image manipulation (one could argue about what counts as unwanted), or at least how to determine, either instantly or after investigation, that an image has been manipulated. Digital watermarking is one such possible solution; a simplistic sketch of the idea follows below. The use of artificial intelligence in camera-embedded applications (Informatics from Technology Networks, n.d.) seems the only viable short-term technology capable of offering some form of protection against today’s highly professional, deep-fake levels of manipulation. My guess is that, as with all digital security issues, it will be a long and fierce battle.
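As an illustration of the watermarking idea, here is a simplistic least-significant-bit sketch in Python/numpy. It is my own toy example, not the AI-based scheme from the cited article: a hidden bit pattern is embedded in the pixels, and any later change to those pixels breaks it.

```python
import numpy as np

def embed_watermark(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide a binary mark (0/1, same height/width as the image) in the LSB of the red channel."""
    out = image.copy()
    out[..., 0] = (out[..., 0] & 0xFE) | mark   # clear the least significant bit, then set it to the mark bit
    return out

def extract_watermark(image: np.ndarray) -> np.ndarray:
    """Read the least significant bit of the red channel back out."""
    return image[..., 0] & 0x01

rng = np.random.default_rng(seed=1)
photo = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)   # stand-in photo
mark = rng.integers(0, 2, size=(4, 4), dtype=np.uint8)         # the hidden pattern

stamped = embed_watermark(photo, mark)
print(np.array_equal(extract_watermark(stamped), mark))         # True: untouched image, mark intact

stamped[0, 0, 0] ^= 1                                           # a single-pixel "manipulation"
print(np.array_equal(extract_watermark(stamped), mark))         # False: the tampering is detectable
```

A scheme this naive is destroyed by ordinary recompression or resizing as well as by malicious edits, which is presumably why the camera-embedded approaches in the article above turn to AI for watermarks robust enough to survive normal processing.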

Image manipulation and fake news seem intertwined. The European Commission, in their (our?) struggle against misleading and socio-politically harmful information, defined the problem not as fake news as such, but as the misleading intention. They therefore arrived at the following definition of misleading information, including, but not limited to, photography/images: “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit. The risk of harm includes threats to democratic political processes and values, which can specifically target a variety of sectors, such as health, science, education, finance and more.” (European Commission, 2018)

I think I can agree with that, and that is probably where my ethical boundaries lie, although I am never sure with image manipulation: it has so many variables and varieties that there may not be a generic ethical rule, and there is definitely no end to further technical and creative development, with new software, education and growing applications. It seems, at the very least, to be an everlasting ethical discussion.

 

One of the more intriguing image manipulations, perhaps because it is so refined, comes from the YouTube channel “Ctrl Shift Face”, specifically the video in which Bill Hader channels Tom Cruise in an almost creepy way (Ctrl Shift Face, 2020):

New image manipulation combines artificial intelligence and learning mechanisms to create entirely new synthetic images from existing material. The facial overlay mimics Tom Cruise so perfectly that even when you see the change happening, you still do not fully trust your eyes. This goes further than anything you can do with Photoshop, the latter being the default response to almost every image that is in some way special, newsworthy or outstanding: “oh… must be photoshopped.” This is how manipulation becomes the default assumption.

 

Bibliography/Reference list

Ctrl Shift Face (2020). Bill Hader channels Tom Cruise [DeepFake]. [online] YouTube. Available at: https://www.youtube.com/watch?v=VWrhRBb-1Ig [Accessed 7 Jun. 2020].

European Commission. (2018). Final report of the High Level Expert Group on Fake News and Online Disinformation. [online] Available at: https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation [Accessed 7 Jun. 2020].

Informatics from Technology Networks. (n.d.). AI Watermarks Could Outsmart “Deep Fake” Photos. [online] Available at: https://www.technologynetworks.com/informatics/news/ai-watermarks-could-outsmart-deep-fake-photos-320017 [Accessed 7 Jun. 2020].

NVlabs (2020). NVlabs/stylegan2. [online] GitHub. Available at: https://github.com/NVlabs/stylegan2 [Accessed 7 Jun. 2020].

Rathenau.nl. (2019). Desinformatie in Nederland | Rathenau Instituut. [online] Available at: https://www.rathenau.nl/nl/digitale-samenleving/desinformatie-nederland [Accessed 7 Jun. 2020].

Sapir, M. (1994). The Impossible Photograph: Hippolyte Bayard’s Self-Portrait as a Drowned Man. MFS Modern Fiction Studies, [online] 40(3), pp.619–629. Available at: https://muse.jhu.edu/article/20909 [Accessed 6 Jun. 2020].

Tero Karras FI (2020). A Style-Based Generator Architecture for Generative Adversarial Networks. [online] YouTube. Available at: https://www.youtube.com/watch?v=kSLJriaOumA&feature=youtu.be [Accessed 7 Jun. 2020].

The J. Paul Getty in Los Angeles. (2020). Hippolyte Bayard (French, 1801 – 1887) (Getty Museum). [online] Available at: http://www.getty.edu/art/collection/artists/1840/hippolyte-bayard-french-1801-1887/ [Accessed 6 Jun. 2020].