Piccadilly Blur - an explanatory guide

[Demo GIF]

I am taking this opportunity to provide a brief walkthrough of the Piccadilly Blur project. From a business perspective it might seem questionable to publish a step-by-step guide to the very craft my work trades on. However, I have given it a lot of thought, and considering the initial reception of my work, I believe this extra context will only deepen the affinity people can form with these images.

PHOTOGRAPHY

  1. Using a wide-angle lens for a long exposure, with a tripod and an ND filter, to capture the two key elements of this composition: the static architecture of Piccadilly Circus, and the city buzz drawn by the moving traffic and people. The long exposure compresses the dynamic component into ghosting movements, which end up covering a larger area of the photo than the static objects otherwise would; their information is more diluted, which brings some mystery into the forefront of the image.
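The exposure arithmetic behind this is simple: an ND filter cuts the incoming light by a fixed number of stops, and the shutter time must double once per stop to compensate. Below is a minimal sketch of that calculation; the metered exposure and filter strengths are illustrative, not the actual settings used for this shot.

```python
def nd_exposure(base_shutter_s: float, nd_stops: int) -> float:
    """Shutter time needed behind an ND filter: exposure doubles per stop."""
    return base_shutter_s * (2 ** nd_stops)

# Illustrative values only: a metered 1/30 s daylight exposure.
base = 1 / 30
for stops in (6, 10):
    print(f"{stops}-stop ND: {nd_exposure(base, stops):.1f} s")
# 6-stop ND: 2.1 s; 10-stop ND: 34.1 s, long enough to smear traffic
# and pedestrians into ghosts while the buildings stay sharp.
```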

FEATURE EXTRACTION

  1. Using several basic information-extraction algorithms (such as depth estimation and edge detection), I create feature maps that act as a wireframe when I later feed them as constraints to the AI generative model.
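To make this stage concrete, here is a minimal sketch using OpenCV's Canny detector for the edge map and the MiDaS model (loaded from torch hub) for monocular depth. The post does not name the actual tools, so the library choices and the filename are my assumptions.

```python
import cv2
import numpy as np
import torch

img = cv2.imread("piccadilly_blur.jpg")  # hypothetical filename

# Edge map: blur lightly first, so long-exposure grain does not
# register as spurious edges.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 100, 200)
cv2.imwrite("edges.png", edges)

# Depth map: MiDaS small, an assumed model choice.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(rgb)).squeeze().numpy()
depth = cv2.resize(depth, (img.shape[1], img.shape[0]))
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype(np.uint8)
cv2.imwrite("depth.png", depth)
```

Both maps come out as greyscale images at the photo's resolution, which makes them easy to inspect and to hand downstream as conditioning inputs.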

MODEL EXPERIMENTS

  1. Prompt design: I think through the main traits I want to add to my photograph. I choose the predominant colour scheme, the architectural styles I want mashed up, and the photography style I want the picture to represent; then, based on a few initial iterations with different seeds, I work out some negative prompts to balance out unwanted outputs.
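To illustrate the seed iteration, here is a hedged sketch using Hugging Face diffusers with a Canny ControlNet, so the edge map from the previous stage acts as the constraint. The checkpoint, ControlNet, prompts, and seeds are all assumptions for illustration; the post does not name its generative stack.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Assumed models: any SD 1.5-compatible checkpoint plus a Canny ControlNet.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)

edge_map = load_image("edges.png")  # the wireframe from feature extraction
prompt = ("Piccadilly Circus at dusk, art deco architecture, "
          "teal and amber palette, long-exposure fine-art photograph")
negative = "text, watermark, deformed faces, oversaturated colours"

# Sweep a handful of fixed seeds: each run is reproducible, so a
# promising variant can be revisited and refined later.
for seed in (7, 42, 1234, 2024):
    gen = torch.Generator().manual_seed(seed)
    out = pipe(prompt, image=edge_map, negative_prompt=negative,
               generator=gen, num_inference_steps=30).images[0]
    out.save(f"variant_seed{seed}.png")
```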

At this stage I end up with numerous variants of the image, each with different strengths and weaknesses. It is like a conversation between me and the model: in some variants it comes up with really detailed architectural lines for some of the buildings; in other attempts it produces a beautiful sky; in some images I find interesting interpretations of the motion-blur mystery supplied by my photography. I make a selection of a few images and note their corresponding strong points.

IMAGE EDITING

  1. I then proceed with an initial retouch using plain old image manipulation: objects that have lost their meaning are manually repaired.

AI UPSCALE

  1. Upscale the candidate images to 4x higher resolution (from the already decent-sized model output). This is where each of the 4-5 candidate images containing desirable elements is fed on its own to the model. At this stage I give the model only enough freedom to make the small objects make sense as pixels are added, but not enough to change the actual high-level composition of the photo.
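As a sketch of what such a constrained upscale can look like, the snippet below assumes the Stable Diffusion x4 upscaler from diffusers; the post does not say which upscaler is actually used, and the candidate filenames are hypothetical. The low noise_level is the leash: the model may invent plausible pixel-level detail, but cannot rework the composition.

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# Hypothetical filenames for the retouched candidate images.
for name in ("candidate_1", "candidate_2"):
    low_res = load_image(f"{name}.png")
    up = pipe(prompt="sharp architectural detail, fine-art photograph",
              image=low_res,
              noise_level=20).images[0]  # low noise = little creative freedom
    up.save(f"{name}_4x.png")
```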

IMAGE EDITING

  1. Final retouch: again a very involved stage of this project. I take my time to merge the 4-5 candidate photos into a single final artwork. Some elements are reused as they are from the initial photograph; others are manually drawn using my pencil and tablet to bring back some of the realism, since the effect I am after requires grounding in reality to relay that feeling of familiarity, especially for people living in the city who know the places and landmarks referenced.

As a closing note: I have thus sacrificed some of the mystery of "Piccadilly Blur", hopefully in exchange for some credibility, the fact that there actually is a human mind drawing the lines. As my skills get more polished and my understanding of how the model comprehends my multi-medium inputs deepens, the method is bound to evolve. For further works I will experiment with different, more innovative variations of the stages above, and I am unlikely to write about how future works were created (unless people are really interested, in which case please do engage via email or social media and let me know!).
