SA Art Gallery '23: ACM SIGGRAPH Asia 2023 Art Gallery


Δt-Sphere

This artwork's unique name, "Δt," combines the Greek letter delta (Δ), which represents a small increment in mathematics and physics, with the initial letter of "Thickness." This work, Δt-Sphere, is part of the Δt series and was born out of a study of the movement of living creatures. The principles of elasticity and the pendulum are applied by threading a piano wire through a stack of thin acrylic plates.

The theme of this artwork is "Floating Liquid Encased in a Sphere." Through the organic fluctuations originating from artificial objects, I aim to give people an opportunity to recognize that nature and artificiality are not separate: by designing physical phenomena at a microscopic level, we can seamlessly experience natural elements such as liquid.

Participants can manipulate a sphere that sways in response to their hand movements, creating a sensation as if they were generating wind with the palm of their hands. Moreover, they can control the floating liquid within the sphere, which reflects LED lights, giving them an illusion of becoming a deity or a sorcerer.

In today's chaotic and unpredictable society, this work has the power to harmonize even complex events under a designed law through the combined power of art and science. The man-made objects are the crystallization of human wisdom, while the liquid, swaying like a wave as the metaphor of this artwork, is the source of all things. "Δt-Sphere" gives interactivity to the beauty of the fusion and harmony of natural and man-made objects, and can offer people a new perspective.

Aguaviva

Aguaviva juxtaposes the spontaneous nature of biology with the predictable properties of digital technology.

A solitary moon jellyfish swims around in a saltwater dome. A small camera tracks its movement and turns it into xy values, expressed on a collar of digital numbers. The shifting position of the jellyfish is mapped to the corresponding digits below, resulting in an ever-changing string of random numbers.

Random values generated by computers are considered too predictable for high-end encryption applications, such as secure Internet traffic and online banking. Hence, more unconventional sources are often used, and even paid for. True randomness, for this purpose, is a commodity.

As part of the artwork, the numerical string created by the jellyfish is offered up in real-time to encryption companies to use at their discretion.

The apparatus is designed to extract randomness from this simple yet ancient life form---unaware of the fact that in the arena of random sequencing, its cellular contractions can outperform even the most powerful supercomputer.
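The camera-to-digits pipeline described above can be sketched roughly as follows. The actual tracking system and digit mapping are not published; every name and parameter here is an illustrative assumption.

```python
# Hypothetical sketch of Aguaviva's randomness extraction: tracked (x, y)
# positions of the jellyfish become an ever-growing string of decimal digits.
# All names and frame dimensions are invented for illustration.

def position_to_digits(x: float, y: float, width: float, height: float) -> str:
    """Map one tracked position inside the camera frame to two decimal digits."""
    dx = int((x / width) * 10) % 10   # one digit from the horizontal position
    dy = int((y / height) * 10) % 10  # one digit from the vertical position
    return f"{dx}{dy}"

def digit_stream(positions, width=640.0, height=480.0) -> str:
    """Concatenate per-frame digit pairs into an ever-changing numeric string."""
    return "".join(position_to_digits(x, y, width, height) for x, y in positions)

# Three tracked frames as an example:
frames = [(120.0, 300.0), (500.5, 60.0), (320.0, 240.0)]
print(digit_stream(frames))  # prints "167155"
```

In practice such raw digits would still carry bias from the animal's swimming habits; an encryption company would run them through a randomness extractor (e.g. a cryptographic hash) before use.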

AI History 1890-2090

The aim of this artwork is to let the audience feel the "brackish waters of fictionality and reality" by viewing AI-generated history. In the real world, humans train AI with human-generated data; in this artwork, AI trains humans with AI-generated data. AI becomes smarter by learning, and its intelligence might surpass that of humans. When AI acquires more advanced knowledge than humans, will people, in turn, learn from AI? And will we unilaterally accept AI-generated information without verifying its truth? This artwork anticipates such a possible future experience. As AI infiltrates our lives, the information it generates is increasing exponentially. In this work, a zoetrope shows how the history of AI is being generated as the AI produces images, and we can do nothing as we watch AI create its own history from its own generated images. We can only accept the information as it is presented to us. The work shows how we are immersed in this "brackish zone of fictionality and reality brought about by AI".

AI Nüshu (Women's scripts) - An Exploration of Language Emergence in Sisterhood

This work presents "AI Nüshu," an emerging language system inspired by Nüshu (women's script), the unique language created and used exclusively by ancient Chinese women who were kept illiterate under a patriarchal society. In this interactive art installation, two artificial intelligence (AI) agents continuously observe their environment and communicate with each other, developing a writing system that encodes Chinese. The two agents observe the environment through cameras, record the unconscious behaviors of the audience, and generate summaries of their observations through visual recognition. Each agent then retrieves corresponding lines of original Nüshu poetry and generates new poetic text through a Large Language Model (LLM), representing its reflection. To develop their language, the agents continuously switch roles between speaker and listener, constantly communicating their reflections and encrypting one word of each poetry line with a self-created AI Nüshu character for the other to guess and learn. Gradually, they reach a consensus on AI Nüshu, forming a unique "AI Nüshu Dictionary" for machines. This language, algorithmically combined into corresponding characters, has components derived from Nüshu, similar to Chinese characters and traditional textile patterns. Thus, like the ancient women, the two agents gradually develop their own writing system for Chinese, corresponding one-to-one with Chinese characters. Humans, once the authority over the language system, become objects observed, interpreted, and drawn upon by machines to stimulate a non-human language. This is the first media art project to interpret Nüshu from a computational-linguistics perspective, infusing AI and art research with non-English natural language processing, Chinese cultural heritage, and a feminist viewpoint, and encouraging the creation of more non-English, linguistically oriented artworks for diverse cultures.
We simulate communication in sisterhood through a multi-agent learning system, questioning the authority over knowledge between humans and machines through the lens of language development.

Aquasia: The world's first immersive metaworld focused on the future of human habitats in the face of rising sea levels

Aquasia is the world's first educational metaworld set in a floating city in Asia. It merges creativity, technology, and sustainability, transforming learning into an engaging, memorable journey.

Offering an interactive desktop and a passive 360-degree VR experience, Aquasia provides audiences with a new perspective on the challenges and possibilities of living in a world affected by rising sea levels and climate change. Audiences are challenged to rethink modern food, energy, and transport systems, uncovering technological breakthroughs for a future aquatic life on our Blue Planet.

Incorporating Asian culture and innovative technology, and underpinned by UN-HABITAT research, Aquasia aims to captivate educators and learners alike. The project garnered the attention of Singapore's esteemed ArtScience Museum, which invited us to launch Aquasia in its Curiosity and VR Galleries from 1-30 September 2023.

Bending the Light: Next generation anamorphic sculptures

Bending the Light is a new method for generating artworks that extends the classical anamorphic archetype to use freeform reflective and refractive media and 3D surfaces instead of images. The methodology uses a mix of raytracing and surface-deformation techniques to determine the deformation an object should undergo so that it is corrected by the optical tool when viewed by an observer in a specific location. The reflected image hovers in front of, rather than behind, the mirror. The audience forms an essential part of the work: the holographic, ghost-like appearance of the reflection results from the interplay between the mirror, the sculpture, and the eye of the viewer. The sculpture was recently selected as a finalist for the prestigious Wynne Prize and exhibited at the Art Gallery of New South Wales; before then, it was also exhibited at Sydney Contemporary.
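The core raytracing step can be illustrated with a deliberately simplified case: a flat mirror and a single viewpoint. The work's freeform surfaces and deformation solver are not published, so this is only a sketch of the underlying geometry.

```python
# Minimal sketch of the eye-to-mirror-to-sculpture ray used in anamorphic
# correction, assuming a flat mirror for simplicity. The reflection law is
# r = d - 2(d.n)n for incoming direction d and unit surface normal n.
import math

def normalize(v):
    m = math.sqrt(sum(a * a for a in v))
    return tuple(a / m for a in v)

def reflect(d, n):
    """Reflect direction d about unit surface normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def trace_from_eye(eye, mirror_point, normal, distance):
    """Trace a ray from the viewer's eye to a point on the mirror, reflect it,
    and step along the reflected ray: that is where a sculpture point must sit
    for its reflection to appear at mirror_point from this viewpoint."""
    d = normalize(tuple(m - e for e, m in zip(eye, mirror_point)))
    r = reflect(d, normalize(normal))
    return tuple(m + distance * a for m, a in zip(mirror_point, r))

# Viewer on the z-axis looking at a mirror lying in the z = 0 plane:
p = trace_from_eye(eye=(0, 0, 5), mirror_point=(0, 0, 0), normal=(0, 0, 1), distance=5)
print(p)  # prints (0.0, 0.0, 5.0): the ray bounces straight back toward the viewer
```

Repeating this trace over many mirror points, for a curved mirror and a fixed observer position, yields the deformed target positions the sculpture's surface must match.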

Cymatic Ground

Cymatic Ground is an interactive sound installation doubling as a model of a dynamic urban landscape. Its body, a metallic scaffold designed following the plans of an old neighborhood in Hong Kong, is covered by a fine layer of sand. This exposed skin morphs in response to the sounds and vibrations of the larger environment it is immersed in. In response to fortuitous pressures and occasional gentle beats from the public, Cymatic Ground produces audible moaning and earthquake-like shaking that displaces the grains of sand, destroying or revealing transient geometric patterns and complex networks of canals analogous to the avenues, streets and alleys faithful to its original plans. When these external influences are not in tune with its fundamental mechanical resonances, however, its skin wrinkles and breaks apart, and its metallic body produces strident notes of despair as the installation searches for an appearance better attuned to the new reality it is immersed in. In an introspective and concerted effort, each plate - each neighborhood - continuously scans its own mechanical resonances, probing every part of its body with precisely tuned waves of energy, not unlike a city striving to reinvent itself in order to rekindle its appeal in response to global economic crisis, the effects of climate change, or the continuous displacement of its dwellers.

Erased Murmurs

Erased Murmurs is an interactive installation that brings attention to disappearing and often overlooked graffiti, focusing on emotionally significant words written in public spaces in Hong Kong. The artwork explores themes of public expression and the lasting impact of the 2019-2020 Hong Kong protests.

The installation features a book containing a collection of photographs capturing the graffiti in its original state. To mimic the attempts to conceal graffiti with patches of paint, which paradoxically makes the act of covering up stand out, the artist has covered the words in the photos with special ink, rendering them invisible to the naked eye. To reveal the hidden text, visitors must employ infrared reflectography using a handheld infrared light provided.

This interactive element of the artwork engages the audience in a process of discovery. The act of revealing these messages creates a sense of connection between the viewers and the anonymous authors of the graffiti, whose voices are momentarily brought back to life.

The words hidden behind the special ink are not random but have been carefully selected for their emotional significance. The order of the words is crafted into a cohesive narrative. Through personal confessions, expressions of inner struggle, complaints against society, and messages of consolation and support, the graffiti serves as a window into the emotional landscape of Hong Kong residents in the aftermath of the protests.

Erased Murmurs is an examination of the impermanence of public expression. It invites viewers to confront their own emotional responses to these erased messages and to consider the broader implications of silencing voices in public spaces. Ultimately, the installation stands as a tribute to the unbreakable spirit of those who continue to make their voices heard, even in the most challenging circumstances.

Exquisite Corpus

As humans, we regard our bodies through their visual surface components. The interior, when considered at all, is typically only a matter of medical concern for oneself; we rarely envision that of others. While radiological tools have dramatically improved our capacity for noninvasive representation, their use is often confined to the domain of personal health. This work instead uncovers the possibilities they offer for showing the full scope of our bodily form. By obscuring the accustomed visual boundary, they remove associations of race and many aspects of gender. To further dissolve perceived identity, the work excavates our inner sameness by algorithmically merging bodily interiors into 3D human chimeras: hybrid beings existing beyond the possibilities of genetic merger. Through the collection of simple participant biometrics, blended avatars constructed from real patient data are selected to give viewers a bodily representation that extends beyond the surface manifold commonly regarded as the self in both physical and virtual worlds. These avatars expand the representations usually seen within virtual spaces: rather than existing as a hollow 3D-rendered shell, absent the organs necessary for the operating individual, the participant's character provides a volumetric representation of those inner elements usually deemed unnecessary in virtual space.

Fusion: Landscape and Beyond 2.0: An interactive AI generated art installation

"Fusion: Landscape and Beyond 2.0" (2023) is an interactive art installation harnessing the potential of AI to redefine our relationship with urban and natural landscapes. Central to the concept of the installation is the synthetic memory, which dynamically adapts and responds to myriads of instructions, and in turn, influences our understanding of the environment. The installation offers an immersive experience where viewers play an active role. As participants traverse the exhibit, their movements act as triggers, instigating a real-time transformation of the AI-generated landscape. This interactivity reveals layers of cityscapes and landscapes, serving as a visualization of AI's evolving memory and its interpretation of our environment.

The aesthetics of traditional Chinese landscape painting are deliberately incorporated to reinforce our discourse on ecological balance. The Chinese philosophies of nature entice us to embrace a worldview that deeply values the harmonious coexistence of humanity and the natural world. The relationship being sought is one characterized by compatibility, participation, and interconnectedness.

Using the traditional Chinese brushwork technique, Cun, we have devised a model that fuses AI's textual interpretation of city aesthetics with traditional brushstrokes. This integration results in a unique visual calligram that blurs the boundaries between physical and digital experiences.

The workflow of this project involves the use of a self-fine-tuned Stable Diffusion model and real-time visualization system. The system continuously synthesizes Chinese city images that echo our real-world urban experiences. These city images then metamorphose into an artificial nature imbued with the aesthetics of Chinese landscape painting, creating a visual poem or calligram.

In the end, the installation not only blurs the lines between AI and human cognition but also emphasizes the symbiotic relationship between humans, AI, and the world we inhabit. This project harnesses AI to echo our collective consciousness, weaving a narrative of co-existence within our shared world.

Geomart-ut7: Encountering Geometric Patterns in Media Arts

Geometric patterns sometimes appear to reflect an order we observe in nature, and sometimes as the abstract results of an intuitive process that emerges through different techniques. The geometric patterns obtained with the help of a simple compass and ruler reveal a rich world through the complex structures that appear at different levels of the visual content they offer us, and through their perfect layout. In the art of geometry, which takes its inspiration from deep abstractions, we encounter mysterious bridges built between representation and the imagined. When these bridges are skillfully built, they form a simple structure that lets us look through a window onto absoluteness, repetition, eternity, simplicity, complexity, order, and chaos. In this series of works titled Geomart-ut7, I try to present a transformation in which simple geometric forms gain movement and turn into complex structures, in a world of perception where meaning draws its breath from expression. I wish to make visible the connections, shrouded in mist, between the geometric art of the past and the generative art of the present.

Infinite Colours

"Infinite Colours" brings 2,499 videogame titles into a slow canvas of accumulative light. Each game adds a unique shape and colour onto the canvas and plays a unique string of notes. Over 8 hours, the canvas will be filled with infinite colours to celebrate LGBTQIA+ independent videogames.

History has always been queer. Through this generative visual and sound work, we aim to demonstrate the collective activism, movement, and creative expressions that queer folks are making to be visible, heard, and to say that we are here.

But the queer movement does not happen overnight; queer resistance is accumulative, built over generations of self-sacrifice and self-acceptance. The multitudinous intersectionality of these unruly times slowly bleeds colour into the world, blends motion into the landscape, and accumulatively becomes a canvas of ever-moving colourful light.

LightSense - Long Distance

'LightSense - Long Distance' explores remote interaction with architectural space. It is a virtual extension of the project 'LightSense,' which is currently presented at the exhibition 'Cyber Physical: Architecture in Real Time' at EPFL Pavilions in Switzerland. Using numerous VR headsets, the setup at the Art Gallery at SIGGRAPH Asia establishes a direct connection between both exhibition sites in Sydney and Lausanne.

'LightSense' at EPFL Pavilions is an immersive installation that allows the audience to engage in intimate interaction with a living architectural body. It consists of a 12-meter-long construction that combines a lightweight structure with projected 3D holographic animations. At its core sits a neural network, which has been trained on sixty thousand poems. This allows the structure to engage, lead, and sustain conversations with the visitor. Its responses are truly associative, unpredictable, meaningful, magical, and deeply emotional. Analysing the emotional tenor of the conversation, 'LightSense' can transform into a series of hybrid architectural volumes, immersing the visitors in Pavilions of Love, Anger, Curiosity, and Joy.

'LightSense's' physical construction is linked to a digital twin. Movement, holographic animations, sound, and text responses are controlled by the cloud-based AI system. This combination creates a location-independent cyber-physical system. As such, the 'Long Distance' version, which premiered at SIGGRAPH Asia, enables the visitors in Sydney to directly engage with the physical setup in Lausanne. Using VR headsets with a new 360-degree 4K live streaming system, the visitors find themselves teleported to face 'LightSense', able to engage in a direct conversation with the structure on-site.

'LightSense - Long Distance' leaves behind the notion of architecture being a place-bound and static environment. Instead, it points toward the next generation of responsive buildings that transcend space, are capable of dynamic behaviour, and able to accompany their visitors as creative partners.

Mākū, te hā o Haupapa: Moisture, the breath of Haupapa

The cracking and melting Haupapa glacier and lake, Aotearoa New Zealand's fastest growing body of water, are presented in a live cast of mākū, life-giving moisture. Tiny bubbles of ancient breath and atmosphere are pressed inside Haupapa's ancient glacial ice - including sea breezes, pollens, carbon dioxide and methane, as well as the ash of Australian fires. Single words and names of the elemental ancestors in Māori elder Ron Bull's voice, recorded live on lake Haupapa, are woven through the sound and images to gift and acknowledge Kāi Tahu matauraka (knowledge) in a weather-responsive audio-visual installation. The project bridges meteorology, indigenous cosmologies, and science to create an active and unruly response to this rapidly changing icescape. The artists relinquish the ordering and qualities of sound and video to the weather conditions of Aoraki, recorded by instruments of NIWA (the National Institute of Water and Atmospheric Research) placed near the Haupapa glacier and turned into digital information that feeds live into the installation, subtly altering the brightness, direction, and movement of the images and sounds according to real-time weather conditions and wind direction. Depending on the weather, the image changes, and the sound and vocal sequence is endlessly variable. On days of high solar radiation, bright, clear ice and sun predominate and move the images on screen accordingly; on cloudy days, the image darkens. La Niña conditions over the past three years have brought sunny, settled weather to this region of the central South Island, and melting has accelerated, indicating the changing climate. This installation expresses what it feels like to be inside that ice and water, responsive to heat, rain and bright sunlight.

#peaches

Ancestral time in Mangaian cosmology is an unfolding of multiple worlds through a generative process that extends from energy to matter, from which we, Mangaians, are descended. Mangaia is the second largest island in the Southern Cook Islands group. Its cosmology begins with expanding, pulsating energies within the root of an upturned coconut, which generate multiple dimensions of existence. This transformation determines how we understand and navigate worlds. Within this multiplicity is recursion between the material and immaterial, where past, present and future are suspended and collapsed. Two key concepts underpin the generation of self-portrait images in the project #peaches: Akapapa'anga (layering through genealogy, building upon ancestral genealogical connection within and between artworks) and the Mangaian cybernetic continuum (the ability for recursion to exist between worlds), which functions as ancestral time in practice. #peaches explores this proposition through layering and recursion of AI-generated portraits, and reveals the racial bias inherent in this technology and its disruption to ancestral time.

Penumbra2.0

Penumbra2.0 is an AI immersive art installation exploring the visualisation of an unpredictable extreme wildfire scenario using a pyro-aesthetic, an aesthetic based on the perceptual qualities of this fire type. Currently seen in Canada, these wildfires are extreme in their speed and scale. They are unpredictable as their behaviour moves across terrains in unforeseen ways, unlike the linear and predictable paths of bushfires. Using geo-located data, Penumbra2.0 recreates an actual wildfire in the Vosges mountains, France, 2020. It has been collaboratively developed by art, AI and fire researchers.

Penumbra2.0 forms part of a larger research program entitled iFire. iFire consists of an artistic and scientific project series, the Penumbra series comprising the artistic and Umbra the scientific. Both use the same database of atmospheres, flora, pyro-histories and topographies. Penumbra explores the palpable and sensorial qualities of wildfire experiences, while Umbra investigates the dynamic variables of wildfire events. To amplify the evocative viscerality of these encounters, Penumbra is rendered in monochrome. To underscore its complex pyro-turbulent processes, Umbra is rendered in color.

Penumbra2.0 investigates pyro-aesthetics as a two-way dialogue between the viewer and a fire-laden landscape, rather than a linear relationship between an active human protagonist and a passive "natural disaster". It aims to model the uncertainties that characterize such exchanges: on the one hand, the actions of the user; on the other, the fire's behavior. As the user traverses the landscape, they attempt to control their perspective through their movement and orientation of gaze. The fire responds in autonomous ways, changing its behavior. Conversely, unexpected changes in the wildfire induce shifts in the user's actions as they attempt to manage the uncertainty.

Plastic Landscape - The Reversible World

"Plastic Landscape - The Reversible World" is an Al-generated 3D animated video design that shows the apocalyptic and surreal world surrounded by artificial plastic mixtures and objects in the ocean, urban city, Antarctica, and forest. Four different scenes are animated, with the camera panning slowly from left to right. Viewers can observe how the plastics are decomposed at a slower speed by looking at particle animations. Sound is created by the data of the decomposition of plastics. Different types of plastics and speed of decomposition determine the frequency, amplitude, and parameters of audio synthesis. This scene animation is inspired by Ilwalobongbyeong (a folding screen) behind the king's throne of the Joseon Dynasty. This animation depicts the twist of the landscape. Surreal objects/buildings in this animation made out of plastic look beautiful and mesmerizing at first glance. However, the viewers can notice that they are the decayed objects and destroyed nature impacted by human beings. This new multi-sensory artwork addresses the awareness of plastic pollution through the apocalyptic lens.

Sensitive Floral

"Sensitive Floral" is an interactive, generative artwork that ventures into the exploration of a unique generative system, biomimetically emulating the reactive behaviors of the Mimosa plant. By synthesizing the complexity of fractal tree data structures with the Cellular Automata mechanisms of grid computations, the artwork elegantly mirrors nature's adaptive and responsive traits. The interactive interface allows users to initiate a ripple of movement by simply touching the screen, triggering a cascade of changes across thousands of leaves, akin to the group leaf movements observed in Mimosa. The system, instantaneously detects which branch of the tree structure on the touch screen is being externally triggered. It then notifies the grid calculation system, based on the Cellular Automata mechanism, to generate parameters for angle gradients of closing behavior between branches. These parameters are applied in real time to the image generation system of the floral structure, altering the overall appearance of the flower.

The flower shaping relies heavily on the mathematical mechanism of recursion. Through a set of geometric relationships, a concept akin to cell division is used to grow each next branch. In the iteration process, two characteristics emerge: (1) an organizational relationship of length and angle between lines that produces an organic sense of gradient, and (2) movable joints between lines that can later imitate the Mimosa pudica motion-cell mechanism, demonstrating the dynamic "close" deformation.

Existing research shows that the pulvinar cells at the base of Mimosa leaves propagate electrical and chemical signals in response to external stimuli, leading to a chain reaction of leaf movement in adjacent cells. To implement this interlinking characteristic, I employ the computational mechanism of Cellular Automata within the tree structure's relationships, for both aesthetic representation and user sensation.
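The chain-reaction propagation can be sketched as a one-dimensional cellular automaton in which a touch excites one leaf cell and the excitation spreads to neighbours each step. This is a loose analogy only; the artwork's actual grid rules and angle-gradient parameters are not published.

```python
# Hedged sketch: excitation spreading along a row of leaf cells, loosely
# mirroring the pulvinar-cell chain reaction. State 1 = "closing", 0 = open.

def step(cells):
    """A cell closes if it was already closed or if any neighbour was closed."""
    n = len(cells)
    return [1 if (cells[i]
                  or (i > 0 and cells[i - 1])
                  or (i < n - 1 and cells[i + 1])) else 0
            for i in range(n)]

def propagate(n_cells, touch_index, steps):
    """Excite one cell (the user's touch) and record the wave spreading out."""
    cells = [0] * n_cells
    cells[touch_index] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

for row in propagate(n_cells=9, touch_index=4, steps=3):
    print(row)
```

Each printed row shows the closing wave widening by one cell per side per step; in the installation, an analogous per-cell state would drive the closing-angle gradient of each branch.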

Sonus Maris; Strange Attractor

Sonus Maris; Strange Attractor is a two-channel video work that navigates the intersections between art and science, developed during an ongoing collaboration between artist Dr. Nigel Helyer and water engineers and scientists at the UNSW Water Research Laboratory (WRL).

Working in close partnership with WRL postdoctoral researcher Dr. Tino Heimhuber, Helyer employs audio-visual media to reinterpret data charting the unique dynamics of intermittently closed and open lakes and lagoons (ICOLLs). ICOLLs are the most prominent type of estuaries found on the NSW coastline and are unique in that they alternate between open and closed oceanic entrance conditions, driven by the dynamic interplay between oceanic and land-based forces. The fluctuations of water flow act as 'canary in the mine' indicators - reacting dynamically to our increasingly kinetic weather systems.

Through data archaeology and a novel algorithm, 'Inlet Tracker', the collaborators extract valuable information from a four-decade archive of public satellite imagery, drawing attention to long-term morphological and eco-hydrological variations in these crucial sites. Helyer interprets this detail-rich source material to compose a series of musical scores translating the flow dynamics of four ICOLL sites into a multisensory experience. Helyer's animations of satellite imagery and experimental music invite audiences to rethink knowledge systems by seeing, feeling, and hearing the flows and patterns of coastal environments.

Superb Lyrebird Sequences, 2023.

In Unruly Times, like many of us, I have experienced two significant stand-out events: the devastating bushfires of 2019-2020 and the COVID-19 pandemic. In response, my focus has been on reconnecting with the natural world by closely observing local Superb Lyrebirds, documenting their unique behaviours, vocalizations, and dance.

As the field of deep fake algorithms continues to evolve, their ability to convincingly simulate the visual appearance and acoustic characteristics of real individuals becomes increasingly advanced. Interestingly, the Australian Lyrebird has been a master of mimicry since ancient times, mimicking birds within its environment and more recently, chainsaws and cameras. Inspired by this natural and artificial phenomenon of mimicry, this project explores the intricate dynamics of representation, perception, and deception.

The first sequence documents the Lyrebird's mimicry in real time, disrupting the interplay between real, reversed, and negative time to distort traditional conventions of cinematic time. In the second sequence, slowly animated hybrid creatures are morphed together. Created using genetic algorithms trained on millions of images, new images were produced by cross-breeding multiple image genes. These morphing entities exist in a synthesised latent space created by the artist, the community, and algorithms, challenging notions of abstraction, representation, and authorship. Thirdly, a text-to-image-generated person converses using the lyrical language of the Lyrebird. This convergence hints at a future where the interconnectedness of humanity, technology, and nature becomes further intertwined.

By blurring the boundaries between these seemingly separate domains, I invite viewers to contemplate the interplay between the enigmatic aspects of nature, our conceptual understanding of representation and perception, the potential dangers of hyper-realistic fakes, and the potential futures of virtual characters embodied with animal behaviours.

The Garden of Unearthly Delights

The Garden of Unearthly Delights is an interactive physical/digital installation in which three real-time simulated biomes evolve over the course of a day as users make ethical decisions that alter the parameters of the virtual worlds.

The work is inspired by the paintings of Renaissance master Hieronymus Bosch, which depicted surreal worlds with mythological characters in order to explore the dynamics between the spiritual and the physical. Bosch's masterpiece 'The Garden of Earthly Delights' is a triptych altarpiece depicting Heaven, Earth and Hell according to Bosch's visualisation of the moral characteristics of each sphere.

My work takes Bosch's original and turns it into an interactive morality play in which user inputs create chaos, ecological harmony, or hedonistic effects within three evolving biomes. Each virtual world is controlled by artificial intelligence behaviours: characters explore, interact, hunt and fight; plants grow, flower and wither, all reacting dynamically to stimulus.

Audience behaviours influence the evolution of the worlds. Periodically, a narrator asks questions that prompt the audience to make decisions that reflect their attitudes towards the world. These questions ask audiences to think carefully about issues such as climate change, social responsibility, future industries, and culture, all couched within a poetic narrative based upon medieval literature.

Users respond by selecting from a range of responses that cause the narrative to branch. Their selections are fed back into the virtual world, changing the way its characters and environments adapt. Each selection is mapped to a set of ethical variables, with corresponding algorithms that control environmental simulations.
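A mapping of this kind might be sketched as follows. The variable names (harmony, chaos, hedonism) and the per-selection weights are illustrative assumptions drawn from the description above, not the installation's actual code:

```python
# Hypothetical sketch: each audience selection nudges a set of ethical
# variables that steer the biome simulation. Names and weights are
# illustrative assumptions, not the artwork's real parameters.

SELECTION_EFFECTS = {
    "protect_forest": {"harmony": +0.25, "chaos": -0.25, "hedonism": 0.0},
    "build_factory":  {"harmony": -0.25, "chaos": +0.25, "hedonism": 0.0},
    "hold_festival":  {"harmony": 0.0,   "chaos": +0.25, "hedonism": +0.25},
}

def apply_selection(state: dict, selection: str) -> dict:
    """Fold one audience choice into the biome's ethical state vector."""
    effects = SELECTION_EFFECTS[selection]
    # Clamp each variable to [0, 1] so the simulation stays in range.
    return {k: min(1.0, max(0.0, v + effects.get(k, 0.0)))
            for k, v in state.items()}
```

The clamped state vector could then be read by the simulation each frame to bias growth rates, predation, and weather, so that accumulated audience sentiment gradually reshapes the world.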

The 'health' of the virtual world is charted on an onscreen data dashboard, providing real-time statistics of the sentiments of the audience. In this way, the visual and simulative evolution of The Garden becomes a data visualisation of the behaviours of the audience.

TreeGAN

TreeGAN is an investigation into how machine learning and generative adversarial networks (GANs) create three-dimensional objects. As machine learning finds an increasing number of applications within visual culture, we were interested to see how such systems might influence how we think about 3D objects. When this project started in 2019, there were relatively few art projects that used machine learning to produce 3D objects, and even fewer that were trained on 3D objects to produce 3D objects (as opposed to synthesising 3D forms from 2D images), partly due to the paucity of conditional datasets of 3D objects.

We synthesised a dataset of 3D objects using a form that is easy to produce and recognise: trees. Previous studies in 3D machine learning tended to focus on geometrically simple objects such as IKEA furniture (Lim et al. 2013) and industrial objects (Wu et al. 2016); trees therefore presented an opportunity to observe how a 3D machine learning system would approach complex yet familiar organic forms. Trees are also often used in visual art as metaphors for the human experience, from the scholarly pines of Chinese ink painting (Clunas 2002; McMahon 2003) to the martyred oaks of German Romanticism (Rosenblum 1975), and thus add an empathetic layer to our formal exploration. Three-dimensional trees are easy to produce on a large scale using Lindenmayer systems: we made 76 unique tree templates based on art-historical references and exported 350 random variations of each, giving us a dataset of just over 26,000 3D trees.

We watched the transition of beautiful abstractions as the system progressed from random 3D noise to recognizable trees, a process we likened to the analytical cubism of Picasso and Braque in the early 20th century, where we could observe a new technological system developing its own form of figuration.
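The Lindenmayer systems mentioned above generate branching forms by repeatedly rewriting a string of symbols in parallel. The sketch below uses a generic textbook grammar, not the TreeGAN project's actual tree templates:

```python
# Minimal Lindenmayer (L-) system sketch of the kind used to generate
# tree templates at scale. The axiom and rule here are classic textbook
# choices, not TreeGAN's actual grammar.

def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Rewrite every symbol in parallel, once per iteration."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching rule: F draws a segment, [ and ] push/pop a branch,
# + and - rotate the drawing turtle left/right.
rules = {"F": "F[+F]F[-F]"}
two_levels = expand("F", rules, 2)
```

Interpreting the resulting string with a 3D turtle (randomising segment lengths and branch angles per export) is one straightforward way to produce hundreds of distinct variations from a single template.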

Visions of Destruction

"Visions of Destruction" is an interactive AI-aided artwork that critiques the human impact on the environment. A viewer's gaze, detected by an eye-tracking sensor, causes transformations in the landscape imagery. Hence, merely by observing the digital scenery, the spectator induces dramatic changes at the points their gaze touches.

AI-generated 'beautiful landscapes', constructed by Stable Diffusion, present viewers with a romanticized version of nature derived from the collective human memory, as represented by a web-based training dataset. The piece operates in real-time, providing a unique experience for each viewer, symbolizing the Anthropocene and the urgency to protect the natural environment. A viewer's gaze acts as a metaphor for human presence and the irreversible actions leading to today's climate crisis.

Technically speaking, an eye-tracker registers the gaze, which then triggers an image change precisely where the audience's eyes land. Using an array of pre-set prompts and inpainting with Stable Diffusion, viewers witness how pristine nature begins to deform before their eyes. Consequently, participants can reshape mountains, carve rivers, erect cities, and disrupt the initial idyllic nature, experiencing the metaphorical destruction and tension between technology and nature. When the eye-tracker detects no viewers, the landscapes begin a regeneration journey. Nature finds solace in this symbiotic dance between human presence and absence, its beauty flourishing. Additionally, the project brings interactivity into the realm of AI art.
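The gaze-to-inpainting step described above could be sketched as follows. The mask construction is shown concretely; the model call is left as a commented stub, since the artwork's actual pipeline is not public (in practice it might resemble a Stable Diffusion inpainting pipeline fed with a pre-set prompt):

```python
# Hypothetical sketch: a gaze coordinate becomes a circular binary mask,
# which an inpainting model would then fill using a pre-set prompt.
# Function names and the radius are illustrative assumptions.

def gaze_mask(width: int, height: int, gx: int, gy: int, radius: int):
    """Return a binary mask (rows of 0/1) that is 1 inside a circle
    around the gaze point (gx, gy) and 0 elsewhere."""
    return [[1 if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2 else 0
             for x in range(width)]
            for y in range(height)]

def inpaint_at_gaze(image, gx, gy, prompt):
    """Stub for the full step: build the mask, then hand image, mask and
    prompt to an inpainting model."""
    mask = gaze_mask(len(image[0]), len(image), gx, gy, radius=8)
    # In the installation, something like a Stable Diffusion inpainting
    # call would go here, e.g.:
    #   result = pipe(prompt=prompt, image=image, mask_image=mask)
    return mask  # stub: return only the mask for illustration
```

Running this once per registered fixation, with the prompt drawn from the pre-set array, would yield the localized transformations the description attributes to the viewer's gaze.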

The artwork effectively utilises generative models to emphasize the urgency of the climate crisis. By transitioning from serene landscapes to scenes of ecological devastation, it captures the stark realities of our evolving world. This aligns with the audience's crucial role in molding our environment and highlights everyone's duty to nature. As such, "Visions of Destruction" stands not only as artwork but also as a call to action.