I was included in Walls in Online Places Vol.2 as part of the UAL Postgraduate Community, and wrote briefly on my practice and the piece involved, Syn.
After a lengthy discussion with a filmmaker friend of mine, I was invited to take part in a film anthology. The brief asked for a repeating 16-second source clip to be edited or remixed to create alternative narratives. I chose a segment of my REPUBLIC project and distorted it progressively with each repetition until the footage became unintelligible.
“The Transdisciplinary Studio”, Alex Coles, ISBN: 9781934105962
“Novacene: The Coming Age of Hyperintelligence”, James Lovelock, ISBN: 024139936X
“Letter from a Region in My Mind”, James Baldwin, The New Yorker, 17 November 1962, https://www.newyorker.com/magazine/1962/11/17/letter-from-a-region-in-my-mind
“FOVO: A new 3D rendering technique based on human vision”, Robert Pepperell, 27 May 2020, https://www.gamasutra.com/blogs/RobertPepperell/20200527/363615/FOVO_A_new_3D_rendering_technique_based_on_human_vision.php
“Live Simulations”, Ian Cheng, ISBN: 3959050151
“Chelsea in Active Worlds”, Cargo Collective, http://cargocollective.com/manetas/filter/Life/VIRTUAL-WORLDS-CHELSEA-IN-ACTIVE-WORLDS
“Generating High-Resolution Fashion Model Images Wearing Custom Outfits”, Gökhan Yildirim, Nikolay Jetchev, Roland Vollgraf, Urs Bergmann, Zalando Research, https://arxiv.org/pdf/1908.08847.pdf
“Virtual reality in neuroscience research and therapy”, Corey Bohil, Bradley Alicea, Frank Biocca, https://www.researchgate.net/publication/51765068_Virtual_reality_in_neuroscience_research_and_therapy
“The Treasury’s tweet shows slavery is still misunderstood”, David Olusoga, https://www.theguardian.com/commentisfree/2018/feb/12/treasury-tweet-slavery-compensate-slave-owners
“Kafka’s jovial side revealed”, Kate Connolly, https://www.theguardian.com/books/booksblog/2008/jul/03/post26
“Facial recognition can’t tell black and brown people apart – but the police are using it anyway”, Moya Lothian-McLean, https://gal-dem.com/facial-recognition-racism-uk-inaccurate-met-police/
“Perlin Noise – The Nature of Code”, The Coding Train, https://www.youtube.com/watch?v=8ZEMLCnn8v0
“How we visualise viruses”, Chris Mugan, https://wellcomecollection.org/articles/XwbnOBAAACEABdXJ
“I built a brain decoder”, Rose Eveleth, https://www.bbc.com/future/article/20140717-i-can-read-your-mind
“Crochet Coral Reef”, Margaret Wertheim, https://www.margaretwertheim.com/crochet-coral-reef
“Technomancy”, James Paige, http://james.hamsterrepublic.com/technomancy/
“Duty Free Art: Art in the Age of Planetary Civil War”, Hito Steyerl, ISBN: 1786632438
I recently came across FOVO, a new mode of image rendering based on the structure of visual perception. I had always found digital perspectives seen through screens quite lacking in realism, with visual distortion that depends on where things sit in the frame. Having taken time to study modes of perception with regard to image making, I had wondered how graphics engines could be used to create richer perspectives, and, more importantly, how our perception of these images can be altered by different ways of presenting visual information. Artists have been using such techniques for hundreds of years, and it is encouraging and exciting to see modern graphics artists engaging with this deep history of linear perspective and projection.
FOVO bases image rendering on how human eyes sense light. A camera captures light on a flat sensor plane, whereas the retina is a curved, roughly hemispherical surface. And despite the vast number of neurons the brain devotes to vision, human sight is neither uniformly high-resolution like a camera's nor truly flat. I have experimented with perceptual distortion throughout this unit but have found the results too digital, in the sense that distorting virtual spaces only makes them feel more virtual and further disconnects the observer from the space. FOVO is quite new, but it paves the way for better perception of digital spaces and may allow artistic manipulation of how these spaces are perceived in ways that are not currently possible.
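FOVO's own algorithm is proprietary, but the flat-plane limitation it responds to can be shown with a little arithmetic. Standard rectilinear (pinhole) projection maps a visual angle θ to an image-plane distance x = f·tan(θ), so equal steps of visual angle occupy ever-larger spans of screen space towards the edges – the stretching familiar from wide-FOV game renders. A small sketch (my own illustration, not FOVO):

```python
import math

# Rectilinear (pinhole) projection: a visual angle theta lands at
# image-plane distance x = f * tan(theta). Equal 10-degree steps of
# visual angle occupy ever-larger spans of screen space near the edge.
def rectilinear(theta_deg, focal=1.0):
    return focal * math.tan(math.radians(theta_deg))

for theta in range(0, 80, 10):
    span = rectilinear(theta + 10) - rectilinear(theta)
    print(f"{theta:2d}-{theta + 10:2d} degrees -> {span:.2f} units of image plane")
```

The 70–80 degree band takes roughly sixteen times the screen space of the 0–10 degree band, which is why flat projections of wide fields of view feel so distorted.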
I have always been fascinated by noise and static, particularly the “visual noise” on the old television sets in my house as a kid. As I grew older, real static started to disappear, surviving only as a visual effect on computers used for distorted or nostalgic references. I started using these effects myself, but never really asked how they were made, and as a result I underestimated their importance and intricacy.
“Perlin noise”, an algorithm for generating smooth gradients of visual noise (static), became important as I branched into programming and attempted to generate worlds that create themselves – infinite landscapes with sprawling terrain. What was fascinating about Perlin noise was that relatively dull-looking static could be used to create these terrains. Further modification and classification in code could yield detailed terrains complete with different biome types such as water, grass, mountains and clouds, and even more complex objects like trees or animals.
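The algorithm itself is compact: random gradient vectors sit at the corners of an integer grid, and each sample point blends the corners' influence with a smoothing curve. A minimal 2D sketch of my own (production engines typically use Ken Perlin's fixed permutation table rather than a dictionary of random gradients, but the idea is the same):

```python
import math
import random

random.seed(1)
GRADS = {}

def gradient(ix, iy):
    # One fixed random unit vector per integer lattice corner.
    if (ix, iy) not in GRADS:
        a = random.uniform(0, 2 * math.pi)
        GRADS[(ix, iy)] = (math.cos(a), math.sin(a))
    return GRADS[(ix, iy)]

def fade(t):
    # Perlin's smoothing curve: 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + t * (b - a)

def perlin(x, y):
    # Blend the dot products of the four surrounding corner gradients
    # with the offset from each corner; output is roughly in [-1, 1].
    x0, y0 = math.floor(x), math.floor(y)
    dx, dy = x - x0, y - y0
    def dot(ix, iy):
        gx, gy = gradient(ix, iy)
        return gx * (x - ix) + gy * (y - iy)
    top = lerp(dot(x0, y0), dot(x0 + 1, y0), fade(dx))
    bot = lerp(dot(x0, y0 + 1), dot(x0 + 1, y0 + 1), fade(dx))
    return lerp(top, bot, fade(dy))
```

Sampling `perlin(x * 0.1, y * 0.1)` over a grid gives a smooth heightmap; smaller step sizes give gentler terrain.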
What was also interesting about Perlin noise was its many uses – for example, generating static and applying it to 3D meshes (meshes being basic objects that are later turned into more complex ones). The code maps a spectrum from black (0) to white (1) and deforms the mesh height accordingly. Because the noise varies smoothly but unpredictably, randomised terrain can be created from it. With further coding, additional colours can form more diverse environments such as water and grassland.
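The "further coding" step is essentially bucketing: each height value in the black-to-white range is assigned a terrain type. A minimal sketch, with thresholds that are purely illustrative:

```python
# Minimal biome classification: a height value in [0, 1] (black = 0,
# white = 1, as in a noise heightmap) is bucketed into a terrain type.
# The thresholds are illustrative choices, not fixed rules.
def biome(height):
    if height < 0.3:
        return "water"
    if height < 0.6:
        return "grass"
    if height < 0.85:
        return "mountain"
    return "snow"

row = [0.1, 0.35, 0.7, 0.9]
print([biome(h) for h in row])  # ['water', 'grass', 'mountain', 'snow']
```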
Before Perlin noise was created, randomly generated virtual objects or 2D textures were very jagged, inorganic and machinelike, even if used for simulating organic things. The best visualisation of this I could find was this simple graph by The Coding Train (Perlin noise frequency being at the bottom).
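That jagged-versus-organic contrast can also be shown numerically: raw random samples jump independently at every step, while noise built by interpolating between sparse random anchors (value noise, a simpler cousin of Perlin noise – my own stand-in here, not The Coding Train's code) drifts smoothly:

```python
import random

random.seed(0)

# Raw random samples: every step jumps independently (jagged).
raw = [random.random() for _ in range(101)]

# Value noise: random anchors every 10 steps, linearly interpolated
# between them -- a simpler cousin of Perlin noise.
anchors = [random.random() for _ in range(12)]
def value_noise(i):
    a, t = divmod(i, 10)
    t /= 10
    return anchors[a] * (1 - t) + anchors[a + 1] * t

smooth = [value_noise(i) for i in range(101)]

# Average jump between neighbouring samples, as a jaggedness measure.
def jump(xs):
    return sum(abs(b - a) for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

print(f"raw: {jump(raw):.3f}  smooth: {jump(smooth):.3f}")
```

The smooth sequence's average step is roughly an order of magnitude smaller, which is exactly the machinelike-versus-organic difference visible in the graph.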
The reason I have been researching this is not terrain creation itself, but to learn how to create an unpredictable setting based on my own direct creations, and to couple this with some form of AI. This type of noise can also be used to create endless textures, and I would be interested in applying that to my work. Instead of generating something from scratch, I would begin with an image or 3D object I'd created, even a physical painting or sculpture, and then apply this technique in a virtual space to see what results I could yield.
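One way this could work in practice is a noise-driven warp: keep an existing image and let a smooth offset function decide which nearby pixel each position samples from. A rough sketch of that idea, using a tiny synthetic gradient standing in for one of my own images and a few summed sine waves as an illustrative stand-in for proper Perlin noise:

```python
import math
import random

random.seed(2)

# Tiny synthetic grayscale "image" (values 0..1) standing in for an
# imported image or scanned painting.
SIZE = 16
image = [[(x + y) / (2 * SIZE - 2) for x in range(SIZE)] for y in range(SIZE)]

# Smooth pseudo-noise offset built from a few random sine waves --
# an illustrative stand-in for Perlin noise proper.
waves = [(random.uniform(0.2, 0.8), random.uniform(0, 6.28)) for _ in range(3)]
def offset(x, y):
    return sum(math.sin(f * (x + y) + p) for f, p in waves)

def warp(img, strength=2.0):
    # Each output pixel samples the input at a noise-shifted position,
    # clamped to the image bounds.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = min(w - 1, max(0, int(x + strength * offset(x, y))))
            sy = min(h - 1, max(0, int(y + strength * offset(y, x))))
            out[y][x] = img[sy][sx]
    return out

warped = warp(image)
```

The same sampling trick would apply to a mesh's vertices in a 3D engine, displacing an existing sculpture rather than generating terrain from nothing.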
I was included in a brief article where Chelsea MAFA artists talk about working in lockdown: