DESCRIPTION: Periphery, 2015. Interactive application using live data from Instagram, Flickr and Google search sources. Installed as a continuously updating projected work.

It has long been my view that more photographs do not necessarily help answer questions about photography. Rather than adding more photographs into the world, I believe, echoing Burgin, it would be better to attempt clarity of understanding of the ones already here.

If theory were to express certain empirical facts about practice, then, I reasoned, my practice would need to withstand scrutiny of its purpose through those same theories. My focus was on shared practice: a practice of participants and of relational networks. The network is the specific influence for “Periphery,” as the work embraces shared and random Internet data.

I began by asking: what is an image? I reasoned that if the image is not only visual, it must be an amalgam of different things. Images are thus created from connections, from relationships and, in the context of the Internet, from other people’s digital information or data.

Periphery is a dynamic, changing set of projects. The current version discussed here, “Periphery vision – china clay in associative data,” is a computer-based application. It takes as its starting point a connection to Instagram. The application searches the Instagram API for images tagged with the term #chinaclay. This search returns the images to Periphery, where they are displayed in the first column. If a new image tagged #chinaclay is uploaded to Instagram and the webpage is refreshed, the work will change accordingly. “Periphery” therefore requires an Internet connection to Instagram in order to exist. All the images in the first column were taken by users of Instagram; while those users may not know their work is being used, their images are all publicly available through their individual Instagram feeds. The only reason these images appear in the work is their meta-data or, as it is termed on Instagram, their tagging with a particular keyword.
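To illustrate the mechanics of this first step, the sketch below shows one way such a tag query might be written. It assumes the Instagram tag endpoint that was available around the time the work was made (since retired) and a placeholder ACCESS_TOKEN; it is a paraphrase of the process, not the work’s own code.

```python
# Minimal sketch of the column-one fetch: query an Instagram tag endpoint
# for media tagged #chinaclay and collect image URLs, captions and tags.
# The endpoint and ACCESS_TOKEN are illustrative assumptions; the v1 tag
# API in use around 2015 has since been retired.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical credential
TAG = "chinaclay"
ENDPOINT = f"https://api.instagram.com/v1/tags/{TAG}/media/recent"

def fetch_tagged_images(limit=20):
    """Return a list of (image_url, caption, tags) tuples for the tag."""
    response = requests.get(
        ENDPOINT, params={"access_token": ACCESS_TOKEN, "count": limit}
    )
    response.raise_for_status()
    results = []
    for item in response.json().get("data", []):
        url = item["images"]["standard_resolution"]["url"]
        caption = (item.get("caption") or {}).get("text", "")
        tags = item.get("tags", [])
        results.append((url, caption, tags))
    return results

if __name__ == "__main__":
    for url, caption, tags in fetch_tagged_images():
        print(url, tags)
```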

The second column of the work interprets the loaded Instagram images, converting the colours of each image into a palette of ten colours. This column is a graphical representation of the colour data contained in each of the searched-for Instagram images: each image is represented only as colour, without any perceptual realism of form or shape. It is created by an interaction of the code I have written with the downloaded images.
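A minimal sketch of this colour reduction, assuming Pillow’s quantisation rather than whatever method the installed work actually uses:

```python
# Sketch of the column-two colour reduction: quantise each downloaded image
# to ten representative colours and return them as simple RGB swatches.
# This is a paraphrase of the idea, not the work's own code.
from io import BytesIO

import requests
from PIL import Image

def ten_colour_palette(image_url):
    """Return ten (R, G, B) swatches summarising the image's colours."""
    data = requests.get(image_url, timeout=10).content
    img = Image.open(BytesIO(data)).convert("RGB")
    # Median-cut quantisation reduces the image to its ten dominant colours.
    reduced = img.quantize(colors=10)
    flat = reduced.getpalette()[: 10 * 3]
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
```

Only the ten swatches survive this step; the form and content of the photograph are discarded, which is the point of the column.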

The third column contains the comments and tags that have been attached to the Instagram images. It will contain the tag #chinaclay but also any additional tags or comments associated with the images, written by users of Instagram who have seen or interacted with them. This column represents another layer of image: the meta-data of image. Unlike purely technical meta-data, this is social data, written comments about the images and bespoke associated tags. The column does not delineate the text, so the meta-data and comments from each image run into one another.
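Conceptually, this pooling of social meta-data might be sketched as follows, building on the hypothetical fetch_tagged_images() above; the undelineated joining of captions and tags is deliberate:

```python
# Sketch of column three: pool the captions and tags attached to the
# column-one images into one undelineated stream of social meta-data.
# Assumes the (url, caption, tags) tuples from the earlier sketch.

def pooled_social_text(items):
    """Join every caption word and tag into a single run of text."""
    words = []
    for _url, caption, tags in items:
        words.extend(caption.split())
        words.extend(f"#{t}" for t in tags)
    return " ".join(words)
```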

The fourth column is created by selecting a random word from the third column and carrying out a search on another popular image site, Flickr. In essence, this column repeats the image search of column one, but with a word randomly drawn from the tags and comments gathered in column three, and on a different image site. By column four we are quite removed from the original search for #chinaclay: the bound associations of columns one, two and three are now randomly connected through the selection of a single word that may have been written in the comments or tags.
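A sketch of this hand-over from column three to Flickr, using the flickr.photos.search method; the API_KEY and the exact fields read from the response are assumptions. The titles it returns are what column five then displays:

```python
# Sketch of column four: pick one word at random from the pooled
# column-three text and run it through Flickr's photo search.
# API_KEY is a placeholder; the response fields used here are assumptions.
import random

import requests

API_KEY = "YOUR_FLICKR_API_KEY"  # hypothetical credential
FLICKR_REST = "https://api.flickr.com/services/rest/"

def flickr_search_from_text(pooled_text, per_page=10):
    """Search Flickr for a randomly chosen word from the pooled text."""
    words = pooled_text.split() or ["chinaclay"]  # fall back to the seed tag
    word = random.choice(words).lstrip("#")
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": word,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    }
    photos = requests.get(FLICKR_REST, params=params).json()["photos"]["photo"]
    titles = [p.get("title", "") for p in photos]
    return word, titles
```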

Column five, like column three, is textual: it contains the titles of the images returned by the Flickr search. Depending on how these have been entered by Flickr users, they may include hashtags relating to the images as well as some descriptive text.

The final column, column six, takes a random word from column five (the image titles) and performs a Google word search. It includes results from a general web search and from news sources available via the web.
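As a final sketch, the step from Flickr titles to Google results might look like the following; Google’s Custom Search JSON API stands in here for whatever search endpoint the installed version used, and the key and engine IDs are placeholders:

```python
# Sketch of column six: take a random word from the Flickr titles and run
# both a web and a news search. GOOGLE_KEY and the two engine IDs (one
# general, one assumed to be configured for news sites) are placeholders,
# not the work's actual configuration.
import random

import requests

GOOGLE_KEY = "YOUR_GOOGLE_API_KEY"      # hypothetical credential
WEB_CX = "YOUR_WEB_SEARCH_ENGINE_ID"    # hypothetical: general web engine
NEWS_CX = "YOUR_NEWS_SEARCH_ENGINE_ID"  # hypothetical: engine limited to news sites
CSE_URL = "https://www.googleapis.com/customsearch/v1"

def google_results_from_titles(titles):
    """Search the web and news engines for a random word from the titles."""
    words = " ".join(titles).split()
    word = random.choice(words) if words else "chinaclay"
    results = {}
    for label, cx in (("web", WEB_CX), ("news", NEWS_CX)):
        resp = requests.get(CSE_URL, params={"key": GOOGLE_KEY, "cx": cx, "q": word})
        items = resp.json().get("items", [])
        results[label] = [(i.get("title", ""), i.get("link", "")) for i in items]
    return word, results
```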

This work represents a shift toward what I now describe as an ‘anticipated image’: an image created in order to be shared, broadcast, networked and linked with other information. It is a future-oriented image, linked by algorithms, code, associations and randomness. As a form it represents the ‘de-presentification’ of lived experience, being the embodiment of coded, connected outcomes. It is dynamic and always changing, and yet it is simultaneously created from pre-existing linked forms.

We currently experience digital images as more amenable to recombination and fragmentation, and to being encountered through associations and connections. Semiotic approaches to signification are no longer the most appropriate tools for describing and explaining such images. In Reading the Figural (2001), D.N. Rodowick suggests that a linguistic reading of images is both interrupted and disrupted by the different spatiotemporal organisation of contemporary forms of representation. His account of the figural treats image and text as discursive in a non-linear, non-uniform and discontinuous sense. For Rodowick, the figural is not a combination of image and text; it is an interstitial space located between them that conforms to the properties of each but can be reduced to neither one nor the other. In the digital age, a common property of image and text is computer code, from which both are shaped. Computer code is organised by instructions and procedures within software that are algorithmic in their structure, and these processes largely determine the location and form of images.

When Victor Burgin (2009) remarked that photographic images are perceived environmentally, he described their dissemination across different realms and how they are experienced as heterogeneous rather than unified objects. For Burgin, image fragments coalesce through differing mediated, virtual spaces (such as the Internet) and mix with the personal fantasies and memories of the viewer. Images are therefore never one single thing located in one single place. This perspective on what images are and where they are located is pertinent to networked digital images, which mutate and reform continuously. The networked, digital image is the expression of the “interlacing of physical and algorithmic attributes, aesthetic and political forms, which characterise the age of information capitalism” (Rubinstein, Golding & Fisher, 2013: 8). In this way, visual representation is no longer the solid ground of the image. Instead, images move beyond representation, becoming forces that structure a reality rather than document it. Taking these arguments, ‘Periphery’ presents image as an always shifting, incomplete relation between information and data. The inherently flexible work of the image is carried out under the guise of endless pleasure and enjoyment, of the obligation to photograph, to share, to annotate, to comment and to interact within a network of human and object relations.

If the figural binds a network of image and text into a new form, then the underpinning organisation of computer code and algorithmic manipulation expresses how the force of the figural can be fashioned. How software interacts with algorithms and data structures is, as Lev Manovich describes, the “software medium” (2013: 207). ‘Medium’ here describes a technique defined by the material or methods used: a medium is a “combination of particular techniques for generation, editing and accessing content” (2013: 335). The properties of a media object, Manovich argues, are not defined solely by its format or file type, for example image or text, but also by the software medium that accesses it. An image or a text could therefore be considered a data structure made visible or accessible through a software medium. The software medium organises data into a familiar or recognisable form, but it may also combine it with other data (meta-data) in differing ways.

Periphery makes no attempt to visualise abstract data, which I argue would be a fundamentally representational project. Instead, it organises and builds relationships between the data structures of image and text in order to demonstrate a new conceptual instrument, one in which the visual is seen as incidental or peripheral. Images are not purely visual, nor are they purely perceptual objects; I argue they are always relational: they are formed from, and create, new relationships. What this work expresses is that a key characteristic of networked images is that they are organised around associations and framed by their repeating or random discontinuities rather than by their claim to being ‘pictures of something or other.’ Furthermore, if software explicitly configures and structures the images and text we encounter, then it must simultaneously be generating new coordinates for these descriptions of the world.

This work situates photography at the heart of an image-for-image process. Image within Periphery is not about showing something, as we might usually expect when looking at photographs; image here begins with a hashtag. What the images show is unimportant (hence the colour-palette breakdown of each image); instead, the agency of photography here conceals power structures that sustain the labour of photography, masked by creativity and enjoyment. If the conditions of photography are pre-configured in ‘Periphery,’ it is because image-making is not about looking, nor about what we see, but about the circumstances that make looking an act that sustains the photographic image as a relational, commodity form.