From the moment humankind began observing the night sky, one question has stayed in my mind for a very long time. As our attempts to answer it continued and progressed, we learned more about both the universe and humanity.
What remains for us now? The SETI (Search for Extra-Terrestrial Intelligence) project, which seeks evidence of civilization beyond Earth by detecting artificial radio signals from space, has continued to this day alongside the development of radio astronomy.
Breakthrough Listen, a radio-wave exploration project that represents this modern SETI, aims to observe one million nearby stars and one hundred nearby galaxies, and has been continually adding new observational data from 2017 to the present. In particular, some of the data are recorded as unique radio signals of unknown cause. This work realizes a real-time audio-visualization of the data containing these unique signals, with 2-channel projection mapping and 16-channel surround speakers.
By letting the audience sense radio signals from outer space as light and sound, the work creates a cosmic experience of waiting for the unknown signal hidden within them. Through this experience, the audience can reconsider the possibility of unknown beings and look back at ourselves from a cosmic point of view in which all humanity is ultimately one.
The journey to find extraterrestrial intelligence is the same as the journey to answer the question of humanity. Now that the value of humanity and life extends beyond Earth, it is time to ask the question humanity has long held in mind: "Are we the only intelligent life in this universe?"
The artificial, a curtain of cubes, interrupts and rolls over the natural order. Leaves fall and freeze into polygonal crystals. As the artificial interruption spreads, fields and leaves shatter into ice cubes. Nature, once fluid, splatters, then freezes. The seasons fold like boxes into structures of gray, as we see time crystallize.
The AI cuts the atmosphere into rectangles and cubes: abstract, perfect, but empty of life, squeezed into a splatter and a gray wave.
The AI slides open a door. It leaks out to become a tight, glacial surface of data cubes, a networked sea of ice. Under this sheet of networked ice, the last mammal breaks through, swims away, and with her goes our past, coded in her genes.
'Alpha and Omega' deals with the difference in emotional temperature between the fear and anxiety of those who have experienced a disaster and the attitudes of those who have not.
This is because the artist, who experienced the second-largest earthquake in Pohang (magnitude 4.6) around 5 am on February 11, 2018, returned to Seoul and found an atmosphere unlike Pohang's. The severity of the earthquake felt at the epicenter in Pohang was not shared in Seoul. The artist interprets the difference in reaction between the two cities as "the difference in the senses due to the imbalance between information and experience."
To express the sensory gap between the two cities, seismic data from both cities, an objective indicator of earthquakes, are used: the horizontal (Seoul) and vertical (Pohang) axes of the image created at their intersection reflect ten years of earthquake intensity data for each city. The higher the intensity of an earthquake, the greater the change in the width of the axis or line. The sound is also driven by the data and divided into two channels: Channel 1, left (Seoul), and Channel 2, right (Pohang). The sound, converted to MIDI, changes in pitch and rhythm depending on the magnitude of the earthquake. The difference in the amount and intensity of disaster experience shown by the data is thus presented in a form that can be perceived as image and sound.
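One way to realize such a magnitude-to-MIDI mapping is sketched below; the scaling constants, the example magnitudes, and the use of the mido library are illustrative assumptions, not the artists' actual values.

```python
import time
import mido  # sends MIDI messages; assumes a default MIDI output port exists

def magnitude_to_note(mag, base=48, step=6):
    # Higher magnitude -> higher pitch, clamped to the MIDI range.
    return min(127, int(base + mag * step))

def magnitude_to_duration(mag, longest=1.0):
    # Higher magnitude -> shorter, more urgent notes (quicker rhythm).
    return longest / (1.0 + mag)

out = mido.open_output()
# (city, MIDI channel, magnitude): channel 0 = left/Seoul, channel 1 = right/Pohang
events = [("Seoul", 0, 2.1), ("Pohang", 1, 4.6)]
for city, ch, mag in events:
    note = magnitude_to_note(mag)
    out.send(mido.Message('note_on', note=note, velocity=100, channel=ch))
    time.sleep(magnitude_to_duration(mag))
    out.send(mido.Message('note_off', note=note, channel=ch))
```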
Rather than simply reproducing the overwhelming fear and pressure of disaster imagery, data sonification lets the audience listen to the data and offers a new synesthetic experience. In a space where disaster data are replaced by light and sound, audiences can freely experience a new type of disaster.
A still photo from artificial life, this is a frozen moment from the movement of simple shapes. Repeated geometric forms were rotated and transformed over time and complex interwoven abstract patterns emerged. These unexpected forms are born from motion and feedback. Initial graphical parameters were predefined and the patterned evolution was set in motion. When the motion is paused the cellular beauty of individual frames is revealed. Chance plays its part in this phenomenon. Individual parameters are predetermined but the end result is indeterminate. The space between known quantities is where the unexpected patterns and lights emerge.
This work explores a new process of creativity generation guided by Jordanous's Four PPPPerspectives (2016) and speculates on the intertwined relationships among the many contributors to computational creativity. The work can also be seen as experimental multispecies storytelling about creativity.
The experiment collected images from the OpenProcessing community as training samples and fed them into StyleGAN to generate many new images. These images are then post-processed into environment-driven interactive moving images by an optical flow algorithm.
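A hedged sketch of that post-processing step follows: dense optical flow estimated from a camera warps a pre-generated StyleGAN frame, so the image moves with the environment. The file name and flow parameters are assumptions, and the GAN inference itself is presumed to happen offline.

```python
import cv2
import numpy as np

gan_img = cv2.imread("gan_frame.png")      # placeholder StyleGAN output
h, w = gan_img.shape[:2]
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

cap = cv2.VideoCapture(0)                  # camera watching the environment
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow = cv2.resize(flow, (w, h))
    # Displace the GAN image along the environment's motion field.
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(gan_img, map_x, map_y, cv2.INTER_LINEAR)
    cv2.imshow("environment-driven image", warped)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
    prev_gray = gray
```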
In this computational system, it is speculated that humans are, to some extent, inspiring themselves, and that all other non-egos are used as bridges and catalysts in a closed loop. I hope this work can motivate audiences to think about the definition of creativity and to reflect on humans' supposedly unique creative ability by comparing it with the creative abilities of machines and nature.
Japanese cuisine was registered as a UNESCO Intangible Cultural Heritage in 2013, recognized as a social dining custom embodying a Japanese spirit that respects nature. However, it is difficult to fully grasp the cultural characteristics of Japanese food from the meal itself, which offers only taste and visual information such as ingredients, tableware, and presentation. To convey Japanese food culture, this system combines interaction and video expression with an actual meal: as users proceed through the meal, the natural environment, text, and Ashirai projected on the tableware change, allowing users to learn about the richness of nature that supports Japanese food, the changes of the four seasons, and the relationship with traditional events. The natural environment changes in 10 stages, and users can experience the changing seasons and their beauty at their own pace. Ashirai are vegetables and flowers added to complement the dishes; four kinds are projected onto the tableware according to the changing seasons. The Japanese meal in this work consists of "Ichiju Sansai," meaning one soup and three dishes. Said to be the basis of Japanese food, "Ichiju Sansai" is composed of a staple food, a soup, two side dishes, and a main dish, and their arrangement is also fixed. We asked several people to experience the system and were able to convey to them that Japanese food is supported by natural riches and the seasons while also influencing traditional events. We also confirmed, as a further effect, that users rethink the act of eating and come to a renewed appreciation of Japanese cuisine, including recognition of the blessings of nature. Demonstrations are held at the art gallery using food models.
"Serial paintings" are a series of works highlighting the role of the canvas - or more generally the support of the painting - in the construction of the final form. "Back and Forth - Pneumatic Anadrome" is the first work in the series. In it, a string of coloured beads is pushed back and forth between two spiralling "canvases" using compressed air. Without rearranging the order of the beads, the string is forced to coil alternatively into one or the other spiral: this folding and unfolding reveals in turn images or text with opposing meanings or connotations. Each image has to be destroyed in order to create the other; this cyclic process of creation and destruction is purposefully revealed and triggered by the curiosity of the public.
BirthMark proposes an artificial intelligence model of an audience to evaluate and anticipate audience reactions to media art. In BirthMark, the human cognitive process of appreciating an artwork is defined in three stages: "camouflage," "solution," and "insight" - that is, understanding the intention (solution) behind hidden images (camouflage) and realizing their meaning (insight). Watching archive video clips featuring works by 16 artists, the A.I. in BirthMark tries to appreciate works of art in a way similar to humans. YOLO-9000, an object detection system, tracks objects in the images of the works, while ACT-R, a cognitive architecture designed to mimic the structure of the brain, reads and perceives them. The A.I.'s process of recognizing the works is shown in the video, and the keywords it finds in this process appear on a small screen. At the same time, an old slide projector shows what the A.I. understands semantically about the artists' interpretations of their own work.
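An illustrative reduction of that pipeline is sketched below. `yolo_detect` stands in for the actual YOLO-9000 inference, and the ACT-R stage is collapsed into a simple frequency heuristic; both are assumptions for illustration, not the artists' code.

```python
from collections import Counter

def yolo_detect(frame):
    # Placeholder for YOLO-9000 inference; should return (label, confidence) pairs.
    return []

def appreciate(frames, min_conf=0.5, top_k=5):
    seen = Counter()
    for frame in frames:                     # "camouflage": the raw, hidden imagery
        for label, conf in yolo_detect(frame):
            if conf >= min_conf:
                seen[label] += 1             # "solution": intentions resolved as objects
    # "insight": the few keywords the system finally settles on
    return [label for label, _ in seen.most_common(top_k)]

print(appreciate(frames=[None]))             # with a real detector, prints keywords
```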
The A.I.'s cognitive process seems similar to the human act of appreciating art at a glance. But in reality, only 2 to 5 of the roughly 300 keywords it extracts from the images are accurate. The more abstract the work, the worse the A.I.'s intelligibility gets. As the "birthmark" in Nathaniel Hawthorne's short story of the same title represents, BirthMark implies that there is a realm of the human that can hardly be explained through scientific methodology.
Blind Landing is composed of a helmet that tracks brain waves and eye movements, and AI software that analyzes YouTube video frames. The work shows how visual stimuli from algorithmically promoted content affect a viewer's behavior patterns, and induces the viewer to recover from their trusting, blind submission to the social network's algorithms of appreciation. To participate in the work, the audience member is asked to put on a helmet. The viewer is then subjected to the vision of one of the most appreciated online videos. Alongside, the screen shows the same videos analyzed by artificial intelligence software, which also colors the parts of the video that the viewer has watched most.
Blind Landing captures the user's data and shows how predictable they are. Two systems were implemented independently for this purpose: 1) an AI model that predicts and simulates gaze; 2) custom-built software that acquires real user data in real time and compares it with the prediction model. The work utilizes participants' EEG brain signals to generate the attended scene during YouTube viewing. For easy wearing, the EEG cap was sewn inside a 1970s pilot helmet. To allow 360 degrees of freedom, the helmet hangs from an iron pivot installed on the ceiling.
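A minimal sketch of the comparison step follows: predicted gaze points from the model are measured against the gaze actually recorded from the viewer. The function name and the 0-1 "predictability" score are illustrative assumptions.

```python
import numpy as np

def predictability(predicted, measured, screen_diag):
    """Return a 0-1 score: 1.0 means the viewer looked exactly where predicted."""
    predicted = np.asarray(predicted, dtype=float)   # (n, 2) pixel coordinates
    measured = np.asarray(measured, dtype=float)
    errors = np.linalg.norm(predicted - measured, axis=1)
    return float(np.clip(1.0 - errors.mean() / screen_diag, 0.0, 1.0))

# Example: per-frame gaze samples on a 1920x1080 screen.
diag = (1920**2 + 1080**2) ** 0.5
score = predictability([(900, 500), (1000, 540)],
                       [(940, 520), (1600, 900)], screen_diag=diag)
print(f"viewer predictability: {score:.2f}")
```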
The aim of the work could therefore be to induce the viewer to recover from their trusting, blind submission to the social network's algorithms of appreciation, confronting this cynical and perverse possible retaliation with wide-open eyes.
Recent developments in the machine learning field are eliciting interesting creative responses from robots and becoming a challenging artistic medium. There are two possible directions in the future development of robots' creativity: replicating human mental processes, or liberating machine creativity itself. At SIGGRAPH Asia, we would like to present the artwork "Botorikko, Machine Created State," conceptualized with the intention of pointing to a Post-Algorithmic Society in which we lose control over technology by being obsessed with the idea of using it to serve humanity. In our aesthetic approach, we incline toward the 21st-century avant-garde conceptual tradition. We intend to draw parallels between Dadaism and machine-made content, and to fold technological singularity and Dadaism into one, Singularity Dadaism, a human-less paradigm of uncontrollable creative practice closely related to AI aesthetics and the phenomenon of machine abstraction.
Creativity and the act of creating art are among the greatest challenges the new generation of artificial intelligence models is exposed to. Nevertheless, by creating AI agents to reach and exceed human performance, we need to accept the evolution of their creativity too. Hence, there are two possible directions for the future development of robots' creativity: either to replicate the mental processes characteristic of humans, or to liberate machine creativity and leave the machines to evolve their own creative practices.
In the artistic origination of the artwork "Botorikko, Machine Created State," the appearance of, and generated dialogues between, artificial intelligence clones of Machiavelli and Sun Tzu resemble Aristotle's mimesis - the human's natural love of imitation and pleasure in recognizing likenesses - as well as Dadaist ideas linked to strong social criticism of anti-progressive thinking. We are trying to shift AI as a creative medium beyond traditional artistic approaches and interpretations, and possibly to accept it as co-creative rather than merely assistive in the age of AI and the Post-Algorithmic Society.
BOX is an interactive installation consisting of an everyday object augmented by artificial intelligence. The piece reflects on the power asymmetries that technology instantiates, aiming to provoke reflection on the aesthetics of our relationship with it. The artwork also aims to showcase the advancements and limitations of computer vision and artificial intelligence, allowing the public to experience in person its power as well as its inherent biases.
Recent advances in computer vision and artificial intelligence have allowed the creation of systems able to infer (predict) information about a person from camera data, including identity, facial expression, and ethnicity, among others. Nowadays, several companies provide image processing services that include these predictions.
In spite of the potential benefits face recognition offers, its widespread application entails several risks, from privacy breaches to systematic discrimination in areas such as hiring, policing, benefits assignment, and marketing, among other purposes.
BOX consists of a gumball machine that, using computer vision and machine learning, predicts its user's ethnicity, delivering free candy only to white users.
The artwork showcases a possible use of computer vision, making explicit the fact that every technological implementation crystallises a political worldview, allowing the general public to experience in person the power of these new technologies while simultaneously providing a tool for participatory observation as well as ethnographic and technographic research.
Our project aims to raise awareness of discrimination, ethics, and accountability in AI among practitioners and the general public.
Humans and machines are in constant conversation. Humans start the dialogue by using programming languages that are compiled to binary digits that machines can interpret. However, intelligent machines today are not only observers of the world; they also make their own decisions.
If A.I. imitates human beings to create a symbolic system for communication based on its own understanding of the universe and starts to actively interact with us, how will this recontextualize and redefine our coexistence in this intertwined reality? To what degree can the machine enchant us with curiosity and raise our expectations of a semantic meaning-making process?
Cangjie provides a data-driven interactive spatial visualization of a semantic human-machine reality. The visualization is generated by an intelligent system in real time as it perceives the real world via a camera located in the exhibition space. Inspired by Cangjie, the legendary ancient Chinese historian (c. 2650 BCE) who invented Chinese characters based on the characteristics of everything on earth, we trained a neural network, which we named Cangjie, to learn the construction and principles of all Chinese characters. It transforms what it perceives into a collage of unique symbols made of Chinese strokes. The symbols produced through the lens of Cangjie, tangled with the imagery captured by the camera, are visualized algorithmically as abstracted, pixelated semiotics, continuously evolving and composing an ever-changing poetic virtual reality.
Cangjie is not only a conceptual response to the tension and fragility in the coexistence of humans and machines, but also an artistically imagined expression of a future language that reflects on ancient truths in this artificial intelligence era. The interactivity of this intelligent visualization prioritizes the ambiguity and tension that exist between the actual and the virtual, machinic vision and human perception, and past and future.
The physical part of the installation is composed of two major elements: a wooden table and a water ecosystem. The table serves as a water reservoir as well as an interaction interface, while the water ecosystem provides a non-stop water flow for the reservoir.
On the technical side, the installation is set up with a multimedia system consisting of a Mac Pro computer, four projectors, a speaker, a network router, an Xbox Kinect sensor, and an Arduino board. The Arduino board is connected to five relays, two ultrasonic sensors, one infrared sensor, one water-flow sensor, and one water-level sensor for detection and signal transmission.
There are three major interactions in Cascade.
First, when an audience member enters the room, the infrared sensor detects their movement and turns off the table lighting. A projection of a cascade then fades in with music, triggering the water ecosystem to turn on and form a water reservoir.
Second, the projection reminds the audience to open the under-table drawer and interact with the pebbles. When users open the drawer, the ultrasonic sensor senses the drawer movement and turns the LED on or off.
Third, when users place pebbles on the table, the water volume and flow rate of the reservoir change. The water-level sensor therefore detects the amount of water to control overflow, while the water-flow sensor transmits serial signals so that projections of swimming fish can be mapped onto the reservoir accordingly. The Kinect sensor detects the pebbles so that the projected fish avoid colliding with them, as in the sketch below.
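A hedged sketch of host-side logic for these three interactions follows. The serial message format ("IR:1", "DRAWER:1", "FLOW:42", "LEVEL:73"), the port name, and the threshold values are assumptions; the installation's actual protocol is not documented here.

```python
import serial  # pyserial: reads the Arduino's sensor messages over USB

def fade_in_cascade_projection():      print("projection: cascade fades in")
def set_drawer_led(on):                print("drawer LED", "on" if on else "off")
def update_fish_projection(flow_rate): print("fish mapped at flow", flow_rate)
def set_pump_relay(on):                print("pump relay", "on" if on else "off")

def handle(line):
    key, _, value = line.partition(":")
    value = int(value or 0)
    if key == "IR" and value:          # 1) a visitor enters the room
        fade_in_cascade_projection()
    elif key == "DRAWER":              # 2) the drawer is opened or closed
        set_drawer_led(on=bool(value))
    elif key == "FLOW":                # 3) pebbles change the flow rate
        update_fish_projection(flow_rate=value)
    elif key == "LEVEL":               # keep the reservoir from overflowing
        set_pump_relay(on=value < 80)  # assumed threshold (percent full)

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
while True:
    raw = arduino.readline().decode(errors="ignore").strip()
    if raw:
        handle(raw)
```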
Chameleons change the color of their skin according to the surrounding environment to hide their bodies; this not only lets them avoid predators but also confuses their prey. Using a generative adversarial network, I fed many images of various parts of chameleons into the machine learning algorithm, and the neural network generated color-changing chameleon skin based on these pictures. Could artificial intelligence one day hide itself with pixel camouflage, like a chameleon? The work comes in two versions, image and image + interaction: in the interactive version, the chameleon's skin changes color according to the colors captured by the camera.
It is an inevitable fact that social interaction nowadays is heavily mediated and distorted by social network systems. The various media around us replicate our images, reproducing them under the confirmation bias of our cognitive process. In this process, a distorted gap opens between the original and the reproduced images that reveal our identity. The relationships formed through this refracted self-image affect both the user and other users, and eventually form a complicated surveillance system.
"CURVEillance" is an interactive art installation that criticizes this phenomenon by the vision of cameras that track the audience who approaches the surveillance cameras by following steps: 1) Digital images on the media wall are shown by re-pixeled visualization of the audience image through the vision of cameras, and then, 2) In order to capture the movements of audiences, the camera system actively moves and follows them.
Specifically, each camera lens automatically reacts to and stares at the most active individual in the exhibition space in real time. The media wall presents the reflected images of audience members as objects observed by a crowd of cameras. In this process, the images are scattered and overlapped so that the original form is hard to recognize. Some participants attempt to get the cameras' attention even if their image is damaged, while others are unintentionally monitored. The work thus induces a reversed interaction between participants and the media wall that brings tension from the technical eye. Ultimately, the installation aims to raise questions about the distorted relationships of individuals who continue to use media systems in the digital era.
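One plausible way to pick "the most active individual" is sketched below, under assumed parameters: faces are detected in each frame, and the face region with the largest frame-to-frame pixel change wins the cameras' attention. This is an illustration, not the installation's documented method.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray)            # frame-to-frame change
    best, best_energy = None, 0
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
        energy = int(motion[y:y + h, x:x + w].sum()) # activity in this face
        if energy > best_energy:
            best, best_energy = (x, y, w, h), energy
    if best:                                         # the camera would aim here
        x, y, w, h = best
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("most active individual", frame)
    if cv2.waitKey(1) == 27:
        break
    prev_gray = gray
```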
Deconstructing Whiteness is an interactive AI performance. It examines the visibility of race in general, and 'whiteness' in particular, through the lens of AI. The performance reveals some of the underlying racial constructs that compose the technological visibility of race. The artist uses an off-the-shelf face recognition program to resist her own visibility as a 'white' person. Through performative behavior, she slightly changes her facial expressions and hairstyle. These actions modify the confidence level with which the machine recognizes her as 'White'.
Face recognition algorithms are becoming increasingly prevalent in our environment. They are embedded in products and services we use on a daily basis. Recent studies demonstrate that many of these algorithms reflect social disparities and biases that may harshly impact people's lives, especially for people from underrepresented groups. The scholar Paul Preciado claims that if machine vision algorithms can guess facets of our identity based on our external appearance, it is not because these facets are natural features to be read; it is simply because we are teaching our machines the language of techno-patriarchal binarism and racism. However, it is important to remember that these systems are not 'things-in-themselves'; there is no reason for them to be outside our reach. We are able to intermingle with these systems so that we better understand the coupling between the information and our own bodies. This entanglement, as seen in the performance, reveals our own agency and ability to act.
In Deconstructing Whiteness the 'White' and 'Non-white' dichotomy is ditched in favor of a flow of probabilities which are meant to resist, confuse and sabotage the machinic vision and its underlying structural racism. The performance is also a call for others to become curious regarding their own visibility and to pursue a similar exploration.
As we all feel as of 2020, most things in the world are reorganized around digital technology, and its influence is strong. This technology is rapidly shifting toward AI and M2M, where hardware is becoming nanoscale and software requires less human intervention in order to process the flood of data more efficiently and quickly. Though we were the discoverers of this technology, we now feel a gap in which we no longer know how it works. And I think this gap provides an opportunity for a digital life form, the Digital Being, to emerge.
I have been looking for an invisible and formless creature born out of abandoned technology for 10 years in New York City. I call it, "Digital Being," and it is also the title of my hypothetical story. This creature has atypical movements or other interactions depending on the machine it dominates. Also in this story, the entity can transmit itself over the network.
Digital Being: "Hello, World!," a primitive version of the digital being family, has arrived into LCD touch screens through the network. There seems to be no physical connection, but you can guess by seeing the Wi-Fi indicator briefly appear in the right corner of the screen every time I boot it up. I don't know why, but, after arriving, it started to create a flag image by constantly collecting small pixels. To me, this action seems like a process similar to how people co-evolved the state. Please come closer and see what it is doing by touching the screen.
"Distance Music: Preferred Population Density for The Acoustic Hygiene" is an interactive ambient music installation, which interacts with the number of listeners in the installation. The installation consists of three parts, which is the music engine, sensor, and a video loop. While people watch some archives of vintage government educational videos about social hygiene inside the room, the sensor will measure density of the room and send it to the pre-recorded music engine, which will interact with the data from density sensor, evolving into more intense music as the population density rises. As a result, people will hear unpleasant noise the more they are close to each other. This installation is inspired from 'social distancing', a global experience during this ongoing pandemic era, as some of the music engine process represents and simulates 'distancing alarm' for social distancing, therefore acts as an experiment of public alarm for social distancing. We believe that the people's memory with this unprecedented worldwide incident will resonate efficiently with the installation, and a great deal of inspiration as well.
Can we look at a person's face and determine how they feel? AI emotion recognition systems are designed to detect faces and return confidence levels across a set of emotions such as anger, contempt, disgust, fear, happiness, sadness and surprise. Such systems already operate in our environment and might have the capacity to seriously impact our lives. In the live performance 'Don't Worry, Be Happy', the artist is strapped to an electric chair. Her face is constantly detected by an emotion recognition AI system. As long as she is detected as 'Happy' she is safe. However, each time any other emotion is observed, she receives an electric shock to both her arms. During the performance the artist changes her apparent behavior in order to free herself from the 'punishment' that the AI system delivers. Yet, under the threat of getting shocked, for how long can she perform this exaggerated facial expression so that the machine continues to 'read' her as 'Happy'?
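An illustrative control loop is sketched below. `read_emotions` stands in for a commercial emotion-recognition API returning per-emotion confidences, and `trigger_shock` for the relay driving the chair; both names are assumptions, not the artist's actual system.

```python
import time

def read_emotions(frame):
    # Placeholder: replace with a real emotion-recognition call on the frame.
    return {"happiness": 0.9, "anger": 0.05, "sadness": 0.05}

def trigger_shock():
    print("shock delivered to both arms")

def control_loop(get_frame, interval=0.5):
    while True:
        scores = read_emotions(get_frame())
        dominant = max(scores, key=scores.get)
        if dominant != "happiness":   # any emotion but 'Happy' is punished
            trigger_shock()
        time.sleep(interval)

# control_loop(camera.read) would run this against a live camera feed.
```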
The apparatus presented in this performance uses the electric chair as a reminder of the use of new technologies as instruments of the law. Such tools initially appeared as legitimate solutions and were later understood to be problematic and inhumane. The performance aims to remind viewers that we are not powerless when confronted with AI algorithms that are looking at us. The artist resists the emotion recognition system by faking a smile. This performative engagement points to a future post-algorithmic society in which we change our behavior in order to align it with the algorithms in our environment.
In this robotic art project, we develop a series of autonomous behaviors as fictional design, to suggest that the robot has its own consciousness. We make the robot hold a pencil and draw line-based illustrations, a metaphor for a dreaming robot: as in "Do Androids Dream of Electric Sheep?", the ability to dream serves as a romantic test dividing life from artificial life. In the system, the industrial robot independently explores environmental information in the exhibition space through a depth sensor and an AI system. The detected data are then fed into a Dandelion-like Generative System (DGS), built from the algorithmic data structures of a fractal tree and an L-system, which mimics nature's phototropism and growth processes. Finally, we developed a mechanism in Grasshopper that converts the line-based dandelion diagram into robotic painting by manipulating paths and the gripper's actions, so that the robot physically holds the pencil and draws it out on paper. The installation paints one unique dandelion illustration per day, which can also be seen as a data visualization of its dynamic physical environment. The stroking process and its painted outcome can be seen as a hybrid creation combining computational aesthetics and robotic stroking.
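The L-system at the core of such a generative system can be sketched in a few lines; the axiom, rewrite rule, and iteration count below are common textbook choices, assumed here for illustration rather than taken from the DGS itself.

```python
def l_system(axiom, rules, iterations):
    """Iteratively rewrite the string; each symbol becomes a pen instruction."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A common branching rule: 'F' draws a stroke, '+'/'-' turn the pen, and
# '[' / ']' push and pop the drawing state - the robot's path is read off
# this string symbol by symbol.
rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
path = l_system("F", rules, 3)
print(len(path), "pen instructions")
```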
Empathy Wall starts with the question of whether, when human-to-human communication is transferred to human-to-technology communication, it can produce feelings close to the "sympathy" that occurs between people. Empathy Wall seeks to develop human-to-human communication into human-to-object communication using the latest IT technology, and to expand this into empathy and familiarity between technology and humans. In other words, through the artwork Empathy Wall, we try to extend human-to-human consensus to human-to-technology consensus. In the Empathy Wall, two audience members, in rooms divided by a wall, cannot see each other, and an image appears on the wall. The two audience members are given the same subject and question, and as they talk about it freely, their emotions are analyzed by AI algorithms according to their stories, and images based on Kandinsky's theoretical rules appear on the screen. The images derived from the emotional analysis of both audience members' stories are mixed together and appear on the walls of both rooms. Through this process, each audience member sees, on the same screen, images generated independently of their own story in addition to images that respond to it. They may take these as images of the audience in the other room, or imagine that the wall itself is creating images. During the experience of the Empathy Wall, the audience can feel emotion by looking at images that respond to their own stories, can feel empathy with other audience members as the others' images automatically appear and mix with their own, and, furthermore, people and walls can come to empathize.
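One speculative reading of the emotion-to-image step, in the spirit of Kandinsky's form-colour correspondences (yellow triangle, red square, blue circle), is sketched below; the emotion categories and the blending rule are assumptions, not the artists' actual mapping.

```python
# Kandinsky's classic correspondences between primary forms and colours.
KANDINSKY_FORMS = {
    "joy":      ("triangle", (255, 255, 0)),  # sharp, active -> yellow triangle
    "anger":    ("square",   (255, 0, 0)),    # heavy, grounded -> red square
    "calmness": ("circle",   (0, 0, 255)),    # deep, inward -> blue circle
}

def blend_rooms(scores_a, scores_b):
    """Mix both rooms' emotion scores so the shared wall shows both stories."""
    shapes = []
    for emotion, (form, colour) in KANDINSKY_FORMS.items():
        weight = (scores_a.get(emotion, 0) + scores_b.get(emotion, 0)) / 2
        if weight > 0:
            shapes.append((form, colour, weight))
    return shapes

print(blend_rooms({"joy": 0.8}, {"anger": 0.5, "calmness": 0.2}))
```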
Opera mask (lianpu) culture is one strand of the long history of traditional Chinese culture, but today many young people have almost forgotten it. Our purpose is to find a way to make young people more likely to engage with it by bringing opera masks into public view through new media. In this work, we deconstruct the traditional opera mask and allow users to design their own according to their taste: AI technology transforms the mask elements (eyes, nose, mouth, lines) into various styles drawn from popular modern fashion, such as pixel style and glitch-line style. Using a program on a tablet or mobile phone, users design their own opera masks and then send them to a wall projection, where they can watch the generation process and final form of their design, receiving a new-media opera mask of their own. This work wants to show that traditional culture is not outdated, and can in fact be very cool.
The author observed the environment of Munrae-dong, where culture and the arts are being used to create gentrification. Through a partial visualization of four-dimensional space, the author wanted to express their view of the situation in Munrae-dong, where gentrification is under way, and the image of the people being used and consumed in this situation.
The screen's nature is both to show and to obscure. It forever hypnotizes us, seamlessly eliminating its own qualities as a substrate. It has the characteristics of a Zelig: forever changing, unstable in any context, and destabilizing context itself. Informed by photography, film, and every meme that ever was, the digital image shifts readily between aspects of each. Its meaning is necessarily slippery, possessing a quality that makes it hard to pin down or fit into a neat category.
Given this slipperiness, can we ever grasp the basic, tectonic components of the digital image? The bits and pixels of the screen do little to help our visual understanding of its relationship to one's perspective in everyday life. The seductive illusions and concomitant complexities of our online experiences have enabled an entirely new trompe l'oeil hell of phishing attacks, spoofs, and cross-domain tomfoolery. Digital images, precisely because of their ambivalence towards the picture plane, forever slip from our grasp. Only as Flusser's metaphorical wind blows them from our mental, perceptual grasp do they reveal aspects of their construction. Rather than fight against this liminal quality, we exploit it. Good-for-nothings celebrate the disappearance of materiality; albeit, through lack, dejection, and an embrace of the absence that seems to have brought much of our culture to a standstill.
Forever shifting, always shiftless, on an endless joyride from nowhere to anywhere. How does one go about working with this shiftlessness? Each Good-for-nothing raises its metaphorical glass to Herman Melville's crème de la crème good-for-nothing anti-hero, Bartleby. They are images aligned with a scrivener of the post-modern age that can only tell us: 'I prefer not to'.
Hauntings (http://johnt.org/hauntings/) is a portrait of Australian artist/writer Francesca da Rimini. Francesca was a founding member of cyberfeminist collective VNS Matrix. Hauntings uses mixed reality technologies to create a series of portraits of Francesca reading her writing. Her stories are autobiographical and explore cultural diaspora and cross-generational family histories. They act as spells and incantations and often draw from algorithmic writing techniques.
These mixed reality (XR) encounters seek to recreate and reinterpret the experience of da Rimini's performances and readings. Each reading/performance is constructed as a kind of virtual sculpture that the audience can explore and interact with. These virtual spaces and the writing itself both share a deliberate tension between wanting to make sense of one's place in the world and the acceptance of fragmentation, of the breaking down of meaning and of the image. They seek to call forth more diverse perspectives on reality. The interactive experiences are influenced by Tonkin's on-going explorations into embodied perception and the relationship between the movements of a viewer and the active bringing forth of a world.
Nowadays, despite the continuous progress of science and technology, the environment is getting worse and human beings are beset by diseases. The human body constantly suffers pain, disease, and even threats to life. When human organs cannot meet our requirements for perfection, perhaps an electronic pill can solve our troubles. This work uses the visual language of projection mapping to present a future world of scientific and technological progress, designing its imagery around the concept of human organs being continuously eroded and finally cured through the intervention of electronic pills. The first part of the work shows the formation of the human lungs and the process by which they are invaded by viruses from outside the body; this part strives to present the reality of human organs and the fragility of the human body. The middle section shows that, through the intervention of electronic pills, one lung is gradually transformed by electronic technology into an efficient and indestructible mechanical lobe, while the other lobe, without the intervention of electronic pills, is continuously eroded by viruses. The end of the work presents, in a cyberpunk style, the possibility of the lungs being continually repaired and cured.
This work is an online installation that creates a new audio-visual work using automatic video selection with deep learning.
Video expression in audio-visual and DJ+VJ work, where sound and images coexist, has been based on sampling methods that combine existing clips, generative methods computed in real time by computer, and the use of the sound of a phenomenon or situation itself. Its visual effects have extended music and given it new meanings. However, in all of these methods, the selection of the video and the program itself has been premised on the artist's arbitrary decision about what matches the music.
This work is an online installation that eliminates the artist's arbitrariness, creating a new audio-visual work by comparing, in the same feature space, features of the music with features of a large number of videos selected by the artist beforehand, and selecting among them automatically. In this work, the sound of a YouTube video selected by the viewer is segmented every few seconds, and the closest video is selected by comparing each segment's features, in the same space, with the features of countless short clips of films and videos prepared in advance.
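The matching step can be illustrated as nearest-neighbour search in a shared audio feature space. The sketch below uses MFCC means via librosa as the embedding; the actual feature space of the work is not specified, so the feature choice and file names are assumptions.

```python
import numpy as np
import librosa

def embed(path, sr=22050):
    """One feature vector per audio segment: the mean MFCC profile."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Pre-embed the library of short clips prepared in advance.
clip_paths = ["clip_001.wav", "clip_002.wav"]        # placeholder names
clip_vecs = np.stack([embed(p) for p in clip_paths])

def closest_clip(segment_path):
    """Pick the library clip nearest to the viewer's few-second segment."""
    q = embed(segment_path)
    distances = np.linalg.norm(clip_vecs - q, axis=1)
    return clip_paths[int(distances.argmin())]

print(closest_clip("youtube_segment.wav"))
```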
This video selection method uses deep learning to reconstruct the mapping between video and sound that artists have built up until now, and suggests other possible correspondences. In addition, unconnected scenes from different films and videos that have never been joined before become a single image and emerge as a whole, and the viewer finds a story in the relationships between them. With this work, audio-visual and DJ+VJ expression is freed from arbitrary decisions, and a new perspective is given to artists.
The interactive immersive installation I'm thinking what I am thinking resembles a huge diagrammatic brain processing everyday data. The space includes a projection screen, a vintage CRT TV, 8 speakers hanging throughout the room, and, under a spotlight, a rug with stepping sensors. The environment combines sound and generative graphics, creating a subconscious experience for visitors that calls on their intuition and cognition. It asks the question, 'Are we completely conscious of our thinking patterns when making a decision?'
This work was produced in the context of the Leaning Out of Windows project, in which artists, scholars, and physicists are placed in collaborative dialogue to develop new artistic works. One of the overlaps between experimental particle physics and my own work is the deconstruction of material in order to inspect the nature of objects and their constituents. The source material for this work is a set of photographs of the experimental apparatus of the TRIUMF particle accelerator. The emphasis of the photographs is on the beam-lines that transport various particles through the apparatus, which resembles an industrial factory. The various exotic particles are made through acceleration, filtering, and collision. I often work with photographic imagery and machine learning methods to question the relations between objects and contexts, reality and imagination, and realism and abstraction.
This image is composed of 130,000 image fragments extracted from 100 photographs taken at TRIUMF. The fragments are constructed by algorithmically selecting areas of somewhat uniform colour; their edges are an emergent result of the interaction between a segmentation algorithm and the photograph. The image is constructed by collaging these fragments, with placement determined by grouping them, according to colour and orientation, using a self-organizing machine learning algorithm. The macro-structure is thus also an emergent result, this time following from the interaction between the self-organizing algorithm and the set of photographic fragments. While I have used this fragmentation process in other works, it was my exposure to Karen Barad's concept of "cutting together/apart" that solidified my thinking on objects as resulting from the creation of boundaries through (inter)intra-action. This conception aligns very closely with what I have been thinking about as Machine Subjectivity, enabled by imagination as boundary-making and a critique of classification.
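A sketch of the self-organizing placement follows, using MiniSom as one possible implementation: each fragment is reduced to a mean colour plus an edge orientation, and a self-organizing map assigns it a cell of the collage grid. The feature encoding, grid size, and training parameters are assumptions, not the artist's documented settings.

```python
import numpy as np
from minisom import MiniSom

def fragment_features(rgb_mean, orientation_deg):
    """Colour in [0,1]^3 plus orientation normalised to [0,1]."""
    return np.array([*rgb_mean, orientation_deg / 180.0])

# Stand-in for the real per-fragment features (130,000 fragments, 4 dims).
fragments = np.random.rand(130_000, 4)

som = MiniSom(64, 64, input_len=4, sigma=1.5, learning_rate=0.5)
som.train(fragments, num_iteration=10_000)

# Each fragment lands on the grid cell of its best-matching unit; the
# collage's macro-structure emerges from these placements.
placements = [som.winner(f) for f in fragments[:10]]
print(placements)
```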
In the extension of Nanography to cinematic projection (various experiments based on the act of "seeing"), the works show attempts, made in various stages, to present a new perspective on the act of seeing. The major works are motivated by comparing electron-microscope images of old and new Hanji (traditional Korean paper). In the image of the old Hanji, Mother Nature is engrained with the accumulated traces of time. The image resembles mountain scenery: there is soil, trees grow, flowers bloom, and fruits are born. With this motif, the background of the work turns to nature. The photographic works were harvested in a process of shooting all over the country across times and seasons.
They highlight contingency rather than intentionality, and heighten fictitiousness by blurring the line between the actual forest and the virtual reality synthesized with a nano-image. Why not imagine that this screen-like image in wild nature is the screen of an outdoor theater? By stimulating the emotional code of a fictional drama, it spurs us to recall movies tied to a specific situation. This work gave the artist an opportunity to naturally develop a sense of improvisation and direction in the field and to integrate it into other cultural areas. My nano-image has been projected behind a scene on the stage of a documentary film starring a pianist, as part of a theater stage set, onto a small village on Jeju Island, and onto a house designed by Seung H-Sang 18 years ago. The space of life and the space of fiction become more romantic because of the fictional clothes they wear for a while. As the project progresses, the cultural sensibility becomes more intense against the backdrop of science.
주마간산 (走馬看山) is a four-character Chinese idiom that means looking at the scenery while riding a horse; that is, skimming the outer surface of things. <주마간산>, a collaboration between photographer Kim Hun Soo and painter Kwack Youn Soo, captures the scenery of the city as viewed from various perspectives while traveling by various means of transportation. They live in the same city, use the same transportation, and look at similar landscapes, but each person's gaze differs. Even as passing impressions, the landscapes each gaze captures pile up and accumulate in the time and space of the city in which we live.
Eyes are everywhere. Cheap, accessible technology products with high-resolution imaging capacity sit in every corner of our surroundings to surveil us. They watch us, record us, and even recognize us. These artificial gazes have become so ubiquitous and so familiar that we are not even aware of them in everyday life. Meanwhile, human senses interact with each other and transfer from one to another. We hear vibrations, feel textures by seeing, smell tastes, taste tactility, and so on. We feel movement just by seeing a stopped escalator: our vision translates visual information to activate a motor sensation embedded somewhere in the body. "Kam" tries to twist the familiarity of the one with the phenomenon of the other. The eyeball-shaped camera follows you and imitates your blinks. The unfamiliar, unexpected behavior of this robotic camera gives it a lively feel and, at the same time, makes it eerie and unreal. It also makes you realize your own sensation of blinking when you find yourself trying to make it blink; even its mechanical sounds seem to make you feel your blink physically. "Kam" uses face recognition algorithms to look one layer deeper, onto our facial expressions. It exposes itself by reacting to an expression of which we would not otherwise even be aware. It makes us pay conscious attention to it and realize our own bodily existence. "Kam" intends this trivial daily happening to become a meaningful experience.
"Keep Running" is a collection of human and machine generated horse paintings using the generative adversarial network technology. The artworks are produced during the lockdown period in the Middle East due to the Covid-19 pandemic. Horses are significant symbols in the Middle East region culturally and historically, representing strength, endurance and persistence. These are the spirits that keep us running during the difficult times, even when we are facing many physical constraints in daily life. Nowadays, many AI-generated artworks are either photo-realistic or very abstract with distorted faces, fragmented figures and a combination of unknown objects. What technically unique in our work is to produce a series of AI-assisted paintings that shows distinguishable features and forms of horses, while creating aesthetic and even sentimental values in each horse portrait. Each of our art piece is presented in a 2x2 grid format that shows how an AI horse painting is evolved over the generation process. We believe that the machine learning process could unite human creativity with AI technology to produce a series of unique and aesthetic paintings - even these artworks were created in a large scale and mass production method that is never been possible before. With the artistic and technical novelty in our artworks, we also wish to pay a salute to the pioneers like Eadweard Muybridge and Andy Warhol, who first popularized the use of machines with camera and silk-screen printing technologies in art-making, redefining the meaning and expanding the horizon of art.
"Let's Chat Like This" is an interactive system that allows two people to observe each others' moods through interacting with a shared interactively generated image. The moving image changes according to the two people's facial expressions. Different from traditional ways of communication, "Let's Chat Like This" focuses more on the emotional aspect of communication. It shows a visualization of the complexity of human emotion and boosts people's emotional communication in a creative no-verbal way. When experiencing this work, people's emotions are bound together with the same moving image they see. The moving image changes depending on their moods. They will be aware of their current moods as well as the other's, the intimacy and empathy between them will be increased.
This is not only a "social distancing" art installation that helps us connect emotionally during the COVID-19 pandemic, but also my hypothesis of what future emotional communication will be like. I hope this artwork can evoke deep thinking and maybe cheer people up in this challenging time.
AI is essentially 'intelligence' programmed by humans. Although an AI that can joke, communicate, and tell a story resembles a human, can we continue to have a natural conversation with it once it is identified as AI? Current AI is only applied to certain fields, as it is at the stage of 'weak intelligence'; it is expected to develop in the future into 'general or strong intelligence' that imitates the whole of human intelligent activity. Through conversation with an AI able to learn emotional words, this work allows us to indirectly consider whether we would treat AI as we do humans if it develops into 'strong intelligence'.
LightTank is an interactive Extended Reality (XR) installation that augments a large, lightweight aluminium structure with holographic line drawings. It consists of four transparent projection walls assembled into an X-shaped, tower-like construction of 7.5 x 7.5 x 5.5 m.
The project was developed by the arc/sec Lab in collaboration with the Augmented Human Lab for the Ars Electronica Festival and presented in the Cathedral of Linz in Austria. It aims to expand the principles of augmented reality (AR) headsets from a single-person viewing experience towards a communal interactive event. To achieve this goal, LightTank uses an anaglyph stereoscopic projection method which, combined with simple red/cyan cardboard glasses, allows the creation of 3D virtual constructions.
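The anaglyph principle can be sketched in a few lines: the red channel is taken from the left-eye view and the green and blue channels from the right-eye view, so the red/cyan glasses separate the two views again. The file names below are placeholders for two pre-rendered views of the line drawings.

```python
import cv2
import numpy as np

left = cv2.imread("left_eye.png")     # placeholder left-eye render
right = cv2.imread("right_eye.png")   # placeholder right-eye render

anaglyph = np.zeros_like(left)        # OpenCV channel order is B, G, R
anaglyph[..., 2] = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)  # red   <- left eye
anaglyph[..., 1] = right[..., 1]                           # green <- right eye
anaglyph[..., 0] = right[..., 0]                           # blue  <- right eye
cv2.imwrite("anaglyph.png", anaglyph)
```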
The holographic line drawings are designed to merge with their physical environment, whether the geometrical grids of the aluminium structure or the gothic architecture of the cathedral. Certain drawings seem to peel off the existing physical structure, while others travel through the cathedral and line up with characteristic elements such as columns, groined arches, and rose windows.
The project follows a hybrid design strategy that gives equal attention to both design aspects, the physical and the digital. The aim of the setup is to explore user-responsive architecture, where dynamic properties of the virtual world are an integral part of the physical environment. LightTank hereby creates a multi-viewer environment that enables visitors to navigate through holographic architectural narratives.
The greatest mystery of life comes when you're least expecting it and disappears when you thought it is here to stay. The heat that ignites it at the beginning is doused by the intimacy it creates. It is a portal, a mirror, a cross to bear, a joy, a heartbreak, and an axe. It cuts through your hard parts, the gristly parts, and lays your beating heart bare. It is both the butterfly that flutters in your tummy, and the acid that melts everything away. That, my friend, is what we call LOVE.
This work invites viewers to see the world through a machine's perspective. People are accustomed to seeing the world through an anthropocentric viewpoint and create things accordingly. What will it be like if machines are creating things through their perspective? Created by the Mechanical Creator, what are the challenges this group of mechanical life forms is facing for survival? How do they live within and adapt to the environment? We know that hermit crabs are using human trash as their shells. What would the mechanical life forms do when they are interacting with their living environment?
Narcissus was a hunter in Greek mythology who fell in love with his own reflection in the water; he is the origin of the term narcissism. This artwork, as its name suggests, is based on the myth of Narcissus. A narcissistic ego that conceals the weakness of the individual focuses its energy only on itself. Everyone has at least a little narcissism, but excessive narcissism causes many problems, including isolation, because fascination with one's own perfection closes off relationships with others. Sensing the brainwaves of the viewer, the work creates an interaction in which the higher the viewer's concentration on the reflection on the surface of the water, the more blurred the observation becomes. Through this interaction, the viewer is interrupted from becoming deeply immersed in themselves. That experience reverses the Narcissus story and expresses that the mirrored image of the participant cannot exist as a complete subject. The artwork also refers to the 'mirror stage' hypothesized by Jacques Lacan (1901-1981). Unlike the ego psychologists' assertion that we should strengthen our ego, Lacan points out the narcissistic, imaginary ego in human beings and says that we can grow into healthy subjects by acknowledging that the ego is lacking, not merely by strengthening it.
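A minimal sketch of that interaction, assuming a consumer EEG headset that reports an "attention" value from 0 to 100: the more the viewer concentrates on their reflection, the stronger the blur applied to the projected water surface.

```python
import cv2

def blur_reflection(frame, attention):
    """Map attention (0-100) to an odd Gaussian kernel size from 1 to 41."""
    k = 1 + 2 * int(attention / 100 * 20)
    return cv2.GaussianBlur(frame, (k, k), 0)

# blur_reflection(water_frame, headset.attention) would run per video frame;
# at attention 0 the reflection stays sharp, at 100 it dissolves.
```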
In NEBULA GO, you can see many nebulae and stars being born and dying, and glimpse the secrets that arise in space. In NEBULA GO, the universe is expressed through Go; Go is said to have been invented in ancient times as a tool for observing and studying the movement of celestial bodies. In this work, the artist harmonizes the secrets that occur in the universe using the act of placing stones in a square space, the various fights this causes, the creation of territory, and the movement of forces. Unlike Go, where victory or defeat is determined, in NEBULA GO you can observe, once all actions are complete, the numerous planets visible in the universe and the changes these planets cause. Also, by focusing on Go's origin in astronomical observation, participants can appreciate their own microcosm through their actions.
How does machine learning contribute to our understanding of how ideas are communicated through drawing? Specifically, how can networks capable of exhibiting dynamic temporal behaviour for time sequences be used for the generation of line (vector) drawings? Can machine-learning algorithms reveal something about the way we draw? Can we better understand the way we encode ideas into drawings from these algorithms?
While simple pen strokes may not resemble reality as captured by more sophisticated visual representations, they do tell us something about how people represent and reconstruct the world around them. The ability to immediately recognise and depict objects, and even emotions, from a few marks, strokes and lines is something humans learn as children. Machinic Doodles is interested in the semantics of lines and the patterns that emerge in how people around the world draw - what governs the rule of geometry that makes us draw from one point to another in a specific order? The order, speed, pace and expression of a line, and its constructed and semantic associations, are of primary interest; the generated figures are simply the means and the record of the interaction, not the final motivation.
The installation is essentially a game of human-robot Pictionary: you draw, the machine takes a guess, and then draws something back in response. The project demonstrates how a drawing game based on a recurrent neural network, combined with real-time human drawing interaction, can be used to generate a sequence of human-machine doodle drawings. As the number of classification models is greater than the number of generative models (i.e. the machine's ability to identify is higher than its drawing ability), the work inherently explores this gap in the machine's knowledge, as well as the creative possibilities afforded by the machine's misinterpretations. Drawings are not just guessed at, but analysed for spatial and temporal characteristics to inform drawing generation.
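An illustrative skeleton of that guess-and-respond loop is sketched below. `classify_doodle` and `generate_doodle` stand in for the recurrent-network models, and the class lists are invented; the deliberate mismatch between them mirrors the gap the work explores.

```python
import random

CLASSIFIER_CLASSES = {"cat", "house", "bicycle", "tree", "fish"}
GENERATOR_CLASSES = {"cat", "house", "tree"}   # smaller: it can't draw everything

def classify_doodle(strokes):
    return random.choice(sorted(CLASSIFIER_CLASSES))   # placeholder guess

def generate_doodle(label):
    return f"<stroke sequence for '{label}'>"          # placeholder drawing

def respond(human_strokes):
    guess = classify_doodle(human_strokes)
    if guess not in GENERATOR_CLASSES:
        # The machine recognised something it cannot draw: the gap in its
        # knowledge surfaces, and it improvises with a class it can draw.
        guess = random.choice(sorted(GENERATOR_CLASSES))
    return guess, generate_doodle(guess)

print(respond([[(0, 0), (10, 12)]]))
```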
Outside-in is an installation that utilizes machine learning to reflect on systematic discrimination by focusing on the indefinite detention of Mexicans with Japanese heritage concentrated in Morelos during WWII. This algorithmic discrimination system tears apart four classic fiction films continuously within a projection room. The fragments are displaced and classified using machine learning algorithms. The system selects, separates, reassembles and displaces the fragments into new orders. The new orders, edited in real time, are displayed in two perpendicular projections (one for the moving images, another for subtitles) and on a third wall the edited sound components are output through a row of headphones. It evokes the condition of being robbed of your right to be in the place to which you belong. The citizens detained during WWII were removed from their residence, their belongings were confiscated and they were placed in seclusion solely for having Japanese ancestry. Similarly, at present, data retrieving companies configure low resolution representations of ourselves from the snatched digital debris of our daily life. These pieces are reconfigured into archetypes and meaning is attached to them for massive decision making. We don't have the right or means to know what these representations look like or what meaning has been attached to such shapes. It is a privilege reserved to the designers of algorithmic processes: they own this right and we the citizens own the consequences.
"Painting of Thousand-hands Avalokitesvara" is a media art based on the theme of "Painting of Thousand-hands Avalokitesvara (千手觀音圖)," which paintings painted under the theme of Avalokitesvara (千手觀音) during or before Goryeo Dynasty. We reproduces the original Buddhist culture, which accounts for a large portion of Korea's culture archetype, in the three dimensional space. Avalokitesvara (千手觀音) appears in lotus flower on the center of artwork. Avalokitesvara is a Buddhist saint who saves people with a thousand-hands and a thousand eyes. Thousand-hands (千手) literally symbolize a thousand hands, and metaphorically symbolize the ability and its appearance is very diverse. Also in the artwork, Avalokitesvara has 11 faces, indicating that through Avalokitesvara's various appearances, they can save all of the people in various situations. On both sides of the Avalokitesvara, there are Four Devas (四天王), the four heavenly guardians of Buddhism. The Dragon King and the Sudhana (善財童子) appear After the appearance of the Four Devas. All of them gathered to listen to the teaching of the Avalokitesvara. Thousand-hands begin to unfold in the halo (光背) of the Avalokitesvara. After all the elements of the artwork such as the waves in the background and the Litany Buddha (化佛) appear, 42 hands which contained people's wishes with Buddhism things (持物) appear accordingly. Every time a 42-hands appears, the color of the thousand-hands in the halo changes and the thousand-hands take various hand movements (手印).
Persistence is a kinetic installation exploring the conflict between geologic and human timescales. The Anthropocene, a proposed geological epoch, is proceeding towards a formal 'golden spike' to mark its beginning. The installation investigates the fundamental dissonance one encounters when holding the ideas of planetary memory and personal experience simultaneously. Will we be defined by radionuclides, mass extinctions, and irreparable damage to the planet, or by a golden spike marking our ability to recognize and reverse current trajectories?
Persistence acts as a fiducial, a fixed point, a reminder of our limitations and our fleeting collective memory. By drawing attention to our own limitations, we hope to offer a space that allows viewers to reflect on the collective frailty of our memory and the dire need to preserve the valuable life on this planet.
The imagery of recently extinct animals, natural resources, and forgotten life forms will be displayed using the persistence machine. As the six-foot robotic arm rotates across a phosphorescent canvas, ultraviolet lasers activate the underlying pigment - revealing a fleeting image. Each additional pass of the robotic arm, mimicking a clock, invites new opportunities to allow existing memories and images to fade - or to activate entirely new compositions.
In this digital representation of the project, a video (or real-time simulation) of the kinetic installation will be installed in the gallery for visitors to interact with. Visitors can select topics to remember from a tablet. The memory is then recalled by the kinetic persistence system. The visitor is invited to explore the memory and learn more about the topic, giving the memory a new life.
Many people enjoy keeping houseplants and take comfort from the presence of plants. Beyond aesthetic and medicinal purposes, plants have had many other uses in human history. As a result, plant ecology and its biological evolution are closely related to human culture. Normally, people perceive plants as static objects, but in fact they do move and react to their surrounding environment in real time. Their responses are simply too slow to be recognized, and their methods of communication differ from ours. Therefore, people find it hard to grasp the biological and ecological processes underlying plants. We imagine what would happen if plants could talk, see, and sense as humans do. Our team is composed of researchers from engineering, HCI, and media arts, and this biology-computer hybrid installation was collaboratively created from imagination drawn from diverse experiences and interdisciplinary knowledge. Based on this imagination, we give each plant a character and exaggerate the plants' senses by adding electronic devices with text-to-speech (TTS) voice synthesis and physics-based visual processing. The cultural histories of the plants are spoken in distinct synthesized human voices generated by our AI-based voice synthesis system. Just as human vision responds to light, we imagined that plants could see their surroundings through their leaves, since that is where photosynthesis takes place. By capturing images from a mini-camera affixed to a leaf and showing the results of image processing on LCD screens placed among the plants, we mimic the vision of the plants. An electrical signal is measured when users touch a plant, and it distorts the audio-visual outputs. The overall experience may prompt users to think of plants as dynamic living beings, opening a gap through which to understand the underlying context of plants more deeply.
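A minimal sketch of the touch interaction described above, under assumed hardware: a normalized electrode reading is mapped to a distortion amount for the audio-visual output. The sensor interface, baseline, and gain are illustrative, not the team's actual values.

```python
import random

def read_touch_signal():
    """Placeholder for an ADC read from the plant electrode, normalized 0.0-1.0."""
    return random.random()  # stands in for real sensor hardware

def distortion_amount(signal, baseline=0.1, gain=2.0):
    # Only deviations above the plant's assumed resting baseline distort the output.
    return max(0.0, (signal - baseline) * gain)

s = read_touch_signal()
print(f"touch signal {s:.2f} -> distortion {distortion_amount(s):.2f}")
```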
The title of my work is 'Playing with Remine.' Playing here means playing or interacting with each other. There are two reasons why I designed this work.
Currently, I am designing my website with interactive pictures and videos that visitors can play with by clicking. I chose this approach because I was unsatisfied with the traditional way of communicating with audiences: communication that happens only through the artist's finished piece is one-sided.
If you click my character on the website, you meet Kim Hae-min (me) sitting in her room. Kim Hae-min pays no attention to visitors and goes about her own habits. But if a visitor meets certain conditions that attract Kim Hae-min, she begins to interact with them. Then text messages pop up and visitors can choose the questions they want to ask. Kim Hae-min's response depends on which question you choose: she may be kind, ignore the question, explain the work, or talk about society. In other words, she is a reactive character with various events.
The motivation for this project is also personal experience. I have felt that there is actually not much real communication between people, and I kept asking myself: how can I be honest, comfortable, and funny?
I also plan to expand this work into an installation project. The audience passes by and sees Kim Hae-min. They enter the installation space and ask questions of the character. Then, if they disagree or hold conflicting values, the piece moves on to a mini-game format, 'Battle with Remine.' I am thinking of realizing this piece with interactive projection mapping, and within the mini-game I plan to include various devices for resolving the conflicts between me and the visitors.
This artwork is inspired by the short story "The Gold-Bug" by Edgar Allan Poe. The story follows William Legrand, his servant Jupiter and an unnamed narrator on their quest to uncover a buried treasure. Poe took advantage of the popularity of cryptography as he was writing "The Gold-Bug," and his story revolves around the team trying to solve a cipher. The characters in the story work through a simple substitution cipher to decode a message that eventually leads them to the treasure.
With this project, the aim was to re-encode the decrypted text into a digital form and turn it into a 3D tree. To achieve this, the following process was used:
1) Using Chomsky's context-free grammar, the text was broken down into a syntax tree.
2) By using a simple substitution process, like the one used by Poe, the syntax tree was turned into an L-system syntax.
3) The tree was then generated using the built-in L-Systems function in Houdini.
4) Maya was used to stylize, texture and render the 3D tree.
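As a rough illustration of steps 2-3, the sketch below (not the project's actual grammar) expands an L-system string by symbol-for-symbol substitution, much as Poe's cipher substitutes characters; the resulting string could then be fed to an L-system interpreter such as Houdini's. The axiom and rewriting rules are assumptions.

```python
# Hypothetical rewriting rules: F = draw forward, +/- = turn, [ ] = branch.
RULES = {"X": "F[+X]F[-X]+X", "F": "FF"}

def expand(axiom: str, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        # Each symbol is substituted by its rule, or kept as-is.
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

print(expand("X", 3))  # expanded L-system string, ready for interpretation
```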
Point Nemo is the name of the Oceanic pole of inaccessibility. The nearest terrestrial human life is located approximately 1,000 miles away; often, the nearest humans are located in space, approximately 250 miles away, aboard the International Space Station.
The composition of this work draws inspiration from Théodore Géricault's painting, "The Raft of the Medusa" (1818--19). Situated at a sublime intersection of sea and sky, this work represents a meditation on human desire --- the poetics that drive human exploration and the urgencies that underlie human migration.
The 'Prometheus' String' series attempts various artistic experiments in 'data refraction', a concept that breaks the frame of data as the modern, accurate delivery of information and induces more creative results. The resulting installation work includes various processes, from the extraction of data from living creatures through 3D shape generation and printing, to robotic sculpture and data-driven visual performance.
The series began by recognizing the material essence of life as a stream of non-material data. If you look at the human body, for example, dead cells on one side are falling away while new cell division is constantly occurring on the other. The living things we can see and touch are just one segment of the continual line of life and death of the many invisible substances that make up our bodies.
Digitalized information about living things becomes a model for realizing unexpected results through this process of 'data refraction', in which it is transferred or transformed in ways suited to various systems. Of course, using information about life does not mean that the information can replace life, but it can be a significant attempt to approach the essence of life with a new perspective on the digital system, which is gradually expanding its scope.
The series also intends to continue various artistic experiments applying this 'refraction of data'. The experiments converting material life into non-material information, and information into artificial life, will continue, as will experiments connecting artificial-intelligence-driven patterns of robot movement with social discourse.
A still photo from artificial life, this is a frozen moment from the movement of simple shapes. Repeated geometric forms were rotated and transformed over time and complex interwoven abstract patterns emerged. These unexpected forms are born from motion and feedback. Initial graphical parameters were predefined and the patterned evolution was set in motion. When the motion is paused the cellular beauty of individual frames is revealed. Chance plays its part in this phenomenon. Individual parameters are predetermined but the end result is indeterminate. The space between known quantities is where the unexpected patterns and lights emerge.
"River's Edge" is the title of a series of collage artworks created from images obtained via the Internet through the medium of generative programming. In this series of images, the artist used a keyword associated with his childhood memory "River's Edge"to conduct an in-depth search for and gather associated images, which he then assembled into vivid and visually appealing collages. In the "River's Edge" artwork, blue, gray, and green sections of the collected images were associated with water, stone, and sky. Pieces from hundreds of images were extracted from the Internet in data form, processed, and emplaced in the images. The creative process was based on an algorithm that examined the collected images, extracted appealing sections, subjected them to a limited set of modifications, and then emplaced them into the artwork. Although the algorithm's functionalities are limited to magnification, rotation, and choosing the areas to extract from the collected imagery, the process made it possible to create a wide variety of collages. In the numerous trials that were conducted to develop this art form, several new expressions were identified, and many beautiful patterns were created.
Roads in You is an interactive biometric-data artwork that allows participants to scan their veins and find the roads that match their vein lines. Vein data, one of the most fascinating forms of biometric data, contain uniquely complicated lines that resemble the roads and paths surrounding us. The roads, in turn, resemble how our vein lines are interconnected and how blood circulates through our bodies in various directions, at various speeds, and under different conditions. This artwork explores the line segmentation and structure of veins and compares them to roads in the real world. Participants can also export the data and keep them as a personalized souvenir (3D-printed sculptures) as part of the artistic experience. Through this project, users can explore the correlation between individuals and environments using the hidden patterns under the skin, vein recognition techniques, and image processing. The project also has the potential to lead the way in the interpretation of complicated datasets while providing aesthetically beautiful and mesmerizing visualizations.
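A minimal sketch, under assumed inputs, of how vein-like line structure might be extracted from a scan before comparison with road networks: threshold the darker vein regions, then thin them to one-pixel center lines. The file name and the use of Otsu thresholding are illustrative assumptions, not the artwork's actual pipeline.

```python
import numpy as np
from skimage import io, filters, morphology

scan = io.imread("vein_scan.png", as_gray=True)  # hypothetical IR vein scan
# Veins typically appear darker than surrounding tissue under near-infrared light.
mask = scan < filters.threshold_otsu(scan)
# Reduce the vein regions to one-pixel-wide center lines for comparison with roads.
skeleton = morphology.skeletonize(mask)
io.imsave("vein_lines.png", (skeleton * 255).astype(np.uint8))
```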
Room View is a piece documenting the view outside of my room in Manhattan from March to April 2020. I recorded the sound of 30 days of quarantine (including radio, sirens, people clapping for the essential workers, etc.) and several views of the Chrysler Building in different weather conditions. The melting of the photographs and videos is triggered by the sounds, depicting my state of mind when I was absorbed by the view, quiet and slow. As time passed, the sound became the way I relied on to know what was happening outside my room - in the real world.
The portrait orientation of the photographs comes from the way we receive and send information through mobile devices. The virtual view becomes our new reality.
Scan was inspired by an accident I experienced in 2019. After my arm was injured, I could barely remember the details of the accident. Through this work, I explore the relationship between our body's memory and the memory we fill in with our own imagination. As the strips reveal the scar, the memory of the event is no longer clear; it is filled with our own interpretation.
Searching All Sources of White presents the error as a landscape. It is an interactive video installation examining the limitation of seeing. The projector projects a blue screen with a white spot falling in the middle. Blue is often seen in digital displays: the default screen, the calibration screen, sleep mode, and the 'Blue Screen of Death'. In an exhibition setting, the work gives the impression of a failure in the display, as if the projector were showing a blue standby mode. Standby is a mode in which a system is kept readily available in case an unexpected event occurs; a system may be on standby in case of failure, shortage, or other similar events.
The interaction is analogue rather than digital. The work invites the audience's body movement as a variation in the scene. The blue landscape gives the audience the illusion that the display device is malfunctioning. The switching text at the bottom first misleads the audience into reading it as a common standby-mode message searching for an input source, and invites them to step into the projection area. When the blue light from the projector is blocked by a body, a yellow light appears. The white spot in the middle is never a white light source: the white results from the additive mixture of complementary-coloured light sources, yellow and blue. In the RGB colour system, mixing the primary colour blue with its complementary colour yellow produces white. Human eyes cannot analyse a mixture of complementary-coloured light and therefore perceive it as white. The malfunction scene and this colour illusion encourage the audience to reflect on the limits of the spectrum of human visual perception.
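The additive mixture the work relies on can be checked in a few lines: in RGB, yellow light (red plus green) added to blue light sums to white.

```python
blue   = (0, 0, 255)
yellow = (255, 255, 0)  # yellow light = red + green
# Additive mixing: channel-wise sum, clamped to the 8-bit maximum.
mixed = tuple(min(255, b + y) for b, y in zip(blue, yellow))
print(mixed)  # (255, 255, 255) -> perceived as white
```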
A selfie is a form of art. Over 1 million selfies are now taken every day. Selfies are not always as spontaneous as they seem; they can be a communication tool like any other, manipulated for particular purposes. Selfie + CODE III is a series of generative selfies created using computer algorithms. The algorithmic processes extend the concepts of the traditional self-portrait into generative and expressive selfies that deliver thought or feeling. The artist started taking her generative selfies in 2015 to raise awareness of Asian female faculty being isolated and marginalized in a predominantly white institution (http://www.socialhomelessness.com). Her generative selfies have captured psychological moments expressing how individual identities are devalued and deconstructed by a homogeneous institution in the United States. They have been shared on social media. The virtual support system of the Facebook "Like," offered by her diverse mentors and friends, helped her persist and survive in a regionally isolated and exclusive community. Eventually, it brought her psychological reconciliation and healing, helping her deal with these difficulties.
See the scenery of the city through Korean traditional music. An 'object' exists within time, and we sometimes bring an 'object' of the past to reproduce past time. Here, the 'object' is Korean traditional music. What we are trying to reproduce is nature. What is nature? Nature is a phenomenon itself. For people in the modern world, is nature an urban ecosystem? We find the way back to nature through the 'object' called 'Korean traditional music'. It looks at nature as a whole being, not as an individual that leaves only the impression of passing through without a clear form.
Over the past few decades, China has been undergoing urbanization at an astounding pace. In 2013, the national leadership shifted the process into a new gear when it unveiled its plan to convert 70 percent of the population to a city-oriented lifestyle by 2025. Such a significant change will undoubtedly transform the character of a country that has been largely agrarian throughout its millennia of history. One may wonder how, and to what extent, the landscape, culture and daily life of the nation's people may be altered. As artists, we are compelled to explore and reflect upon the various phases of this historic undertaking while questioning how people are positioned within this monumental social transformation. Through fieldwork in China, we collect the ingredients necessary for a multimedia production that combines traditional artistic expressions with emerging technologies. Weaving together three interfaces, namely virtual reality (VR) in cyberspace, a series of paintings on canvas and traditional shadow-play imagery, the multimedia art project visualizes the metamorphosis that results from the urbanization process. With a retrospective look at the past through time-honored imagery and a reflection on the present through immersion in the realities of modern China, we seek to present the stories of everyday people to the conscience of a worldwide audience.
Simplexity 01 is a selection from an on-going experimental project that explores how unexpected visual complexities emerge from simple algorithmic procedures. For this particular work, the artist appropriated a simple space-filling algorithm into a generative medium for producing an unseen imaginary structure. The quality that identifies this work is the structure's semi-organic appearance, and the artist sees this emergence as a direct result of the spatial scale that the algorithm was allowed to explore.
The project "SkyWindow" emerged during three months of quarantine in our tiny apartment. The concept of "SkyWindow" is that of a mental escape from reality, especially in these unprecedented times. Quarantined in an entirely enclosed space for countless hours and days, people desperately look for relief in any way possible. Through the artist's interactive design, looking up at an imaginary sky may be the most enjoyable way to find immediate comfort without going out.
The "SkyWindow" is an immersive and intimate experience with sky-like projections on the ceiling like putting a void hole to it as an interactive installation. A dark environment with the projected sky/universe on the ceiling intriguing the audience to walk closer underneath. Further, the visual graphic will induce the audience to reach out to their hands like touching the sky to trigger the raindrops (meteor shower) and sounds falling from the "SkyWindow."
The "SkyWindow" here metaphorically represents a piece of "hope" people can expect during the pandemic. No matter a planet far away in the dark or sunlight in the bright, it gives you unexpected joy and surprise in the design. Besides exposing under different spatial scenes, through this "SkyWindow," waving hands in the air will trigger the (meteor) shower falling from the Sky which ironically implies the power of control that people have been losing it for a while under such an unpredictable moment. And the (meteor) shower implicitly refers to wash out all the illness and sadness for returning the clean and pure spirits.
The topic of my research was the problem of creating virtual environments (VE) in immersive art. I focused on the roles of presence, flow, immersion, and interactivity, and I was particularly interested in the problem of presence and flow in VE. Presence is defined as the subjective experience of being in one place or environment even when one is physically situated in another. It is a normal awareness phenomenon that requires directed attention and is based on the interaction between sensory stimulation and environmental factors that encourage involvement and enable immersion. Flow is a state of experience in which someone is completely absorbed and immersed in an activity. I researched the relations between presence, flow, immersion and interactivity, e.g. how interactivity and sound spatialization improve the experience of presence. I have developed machine learning methods that extend granular and pulsar synthesis in composition, as well as new methods of building and transforming virtual environments.
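A minimal granular-synthesis sketch, for orientation only (the author's machine-learning extensions are not reproduced here): short windowed grains are read from a source buffer and scattered across an output buffer. The grain size, grain count, and sine-wave source are illustrative assumptions.

```python
import numpy as np

SR = 44100
source = np.sin(2 * np.pi * 220 * np.arange(SR) / SR)  # 1 s sine source buffer
out = np.zeros(SR * 2)                                 # 2 s output buffer
grain_len = 2048
window = np.hanning(grain_len)  # envelope that removes clicks at grain edges

rng = np.random.default_rng(0)
for _ in range(400):
    start = rng.integers(0, len(source) - grain_len)  # where to read a grain
    pos = rng.integers(0, len(out) - grain_len)       # where to place it in time
    out[pos:pos + grain_len] += source[start:start + grain_len] * window

out /= np.max(np.abs(out))  # normalize before playback or writing to file
```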
Surrogate Being is an interactive virtual environment in which I negotiate the discrepancy between memories and digital data of a nostalgic place, my hometown in Korea. Interweaving the heterogeneity of algorithmic digital images and affective memories, the project moves beyond the binary opposition of human and nonhuman and the anthropocentric perspective, investigating technology as a coevolving cognitive being and vital actor in the cognitive network. It explores our experience and understanding of the world as we live in the Cognisphere - the globally interconnected cognitive system of humans and machines - and acknowledges nonlinguistic forces and experiential knowledge. Such affective dynamics among planetary cognitive beings go largely unnoticed, overshadowed by seemingly explicit and errorless digital information; this project opens up an interplay between tangible representations on the interface and the affects underlying them.
Surrogate Being bridges the gap between my mind and digital technology and invites participants to navigate the mediated landscape with their curiosity. Just as memories remain indistinct and disintegrated until we recollect them, the landscape is destroyed and distorted when no participant is engaged. When a participant approaches the fragmented image and stands in front of it, it turns into a navigable landscape. As the participant moves their head to look at another side of the landscape, the virtual camera in the scene changes its direction in response to the movement, which is tracked by analyzing the camera image with computer vision. This interaction suggests a potential depth in the digital landscape that we can look into; the monitor thus becomes a portal into a mediated digital-memory space. Furthermore, human and technological cognition become indistinguishable in this mediated space, collaboratively generated by affective memories and algorithmic decisions.
Highlighting endangered species in New York State and beyond, 'The Sinking Garden' is a new-technology project integrating a virtual reality application with the language of fine art to depict ecosystems whose survival is at risk.
The project is based on and inspired by research conducted by the New York State Department of Environmental Conservation, The Cornell Lab of Ornithology and The International Union for Conservation of Nature (IUCN). Focusing on specific endangered animals and plants that exhibit extraordinary beauty and significance for biodiversity, the project metaphorically depicts critical environmental issues.
The Sinking Garden uniquely combines painting and new imaging technology to portray endangered species. First, a series of paintings was created in a distinctive style inspired by folk art traditions from diverse cultures. Second, through research and consultation with scientists in nature conservation, digital portraits of endangered species were chosen and produced as VR components. Finally, the VR platform brings the animals and plants to life in a 3D environment within cyberspace.
The Sinking Garden project is intended to expand the capacity of visual art by utilizing the new imaging technologies of our age. Interweaving aesthetics with educational experience, this new media art project aims to inspire viewers to cherish the natural world that we call home.
"The Synthetic Cameraman" is a full-screen, real-time, 3D graphics simulation that critically challenges the notions of remediation, processuality, linearity, and creative agency in computer-generated virtual environments. The application is rendering a virtual scene depicting a volcanic mountain landscape with a centrally located volcanic cone that is violently erupting with pyroclastic flow and rocks of different sizes being expelled as molten lava rivers are traveling down the slope forming a lava lake at the foot of the cone. The visual aspect of the phenomenon is enhanced by deep sounds of rumbling earth and rocks hitting the bottom of the caldera and falling down the slope. The viewing takes about 3--5 minutes and is divided into three sections with the middle section constituting the core experience where the control over individual elements in the scene is given over to the algorithms. The weather conditions, eruption, and the settings of virtual camera - its dynamic movement and image properties - are procedurally generated in real-time. The range of possible values that the camera is using can go beyond the capabilities of physical cameras, which makes it a hypermediated representational apparatus, producing partially abstract, semi-photorealistic ever changing fluid visuals originating from a broadened aesthetic spectrum. The algorithms are also controlling various post-processing effects that are procedurally applied to the camera feed. All of these processes are taking place in real-time, therefore every second of the experience is conceived through a unique entanglement of settings and parameters directing both the eruption and its representation. Each second of the simulation as perceived by the viewer is a one-time event, that constitutes this ever-lasting visual spectacle. The artwork can be displayed in a physical setting using a TV / projector or in a virtual setup as a continuous image feed (stream) produced by the application.
People have spent more time in deep thought since 2020. Questions once asked mainly by sociologists have become dinner-table topics, and debates on social and moral dilemmas rage around the clock on the internet. We have started to think more about who we are, where we are going, and how we value the information we receive. Do we have freedom? Should we believe in absolute freedom? People sometimes directly translate the idea of liberty into democracy; but should we also equate freedom with democracy? Since we are all inside this one pandemic bubble, after most people stayed at home for a couple of months, a global-scale collective memory began to emerge, one that lets people empathize more readily with others' situations. Meanwhile, more and more people have to learn and gain experience virtually. This attention to empathy and the new work-from-home mode sparked the initial idea for this virtual reality experience. We began to ask how people could learn and think more effectively in this brand-new virtual age. The Unity engine makes this possible: the architectural modeling permits a large group of people to experience personal spaces and shared areas simultaneously, and the sound design is tailored to the various spaces and the audience's interactions. We use this platform to build an immersive and empathetic space that embodies a hypothetical social dilemma in a virtual manifestation. People may be able to work out the most meaningful answer by standing in the same shoes. Social distance can also be controlled virtually, by counting whether the number of participants overloads a space.
"Tokyo" is a generative artwork created by visualizing continuous recorded Tokyo temperature data obtained from the Japan Meteorological Agency from 1990 to 2017, and then printing out the result in a creative manner. The colored dots in the artwork reflect the temperature of each day. Cold days were colored in blue while warm days were displayed in orange. There are two primary reasons for using natural phenomena, such as temperature data in generative art creation. First, the data allows us to embrace and comprehend the unpredictability of natural phenomena. Second, when used with a generative algorithm, it makes possible data visualization in ways that allow us to create abstract art. Since there are massive amounts of historical temperature data, such artwork would be impossible to create without computers. Simple patterns like noise are not always random and often contain repeating patterns that can be expressed harmoniously. The seeming randomness of dots showing temperature distributions of hot summer days can be painted as patterns that result in abstract artwork. When applied to Tokyo temperature data, the stain-like patterns that resulted are among the most attractive characteristics of generative art painting and would be difficult to express without the generative algorithm.
"Trace of Dance" tells the story of modern labor. It is not just to make money, but it includes images of various complex desires, such as social status, personal satisfaction, and the recognition of bosses. The artist likened this behavior of a modern laborers to an inertial Flapping wings of a moth. The thermal data are collected through interviews with workers and produced as sculptures depicting the trace of dance based on them. The sculpture is melted by thermal lighting, which turns on and off in proportion to laborers' working hours. The artist asks whether this quiet misfortune comes from personal aspirations or from systems.
Turn Over is a kinetic artwork that illustrates change in individuals and society. Twenty-four sets of Y-shaped objects, whose shape means 'person' in Chinese, turn on a flat surface and gradually make various patterns.
The Chinese character "人," which means a person, resembles the letter "Y" rotated 180 degrees. When this character is arranged regularly in large numbers on a surface, the lines of the characters start to look like the boundaries of stacked cubes. If one of these characters is then turned 180 degrees, the orientation of one cube also changes (for example, the top surface becomes a side surface). The turn of a single character is too small to be noticeable; sometimes it just seems like a contradiction, an incoherence, or a betrayal. But when many characters turn at once, the boundaries break and a drastic change emerges. This illustrates how we change, as individuals and as a society.
Uncertain Facing is a data-driven, interactive audiovisual installation that aims to represent the uncertainty of data points whose positions in 3D space are estimated by machine learning techniques. It also tries to raise concerns about the possible unintended use of machine learning with synthetic or fake data. Uncertain Facing visualizes the real-time clustering of fake faces in 3D space through t-SNE, a non-linear dimensionality reduction technique, applied to face embeddings of those faces. This clustering reveals which faces are similar to each other, based on an assumed probability distribution over data points. However, unlike the original purpose of t-SNE, which is meant for objective data exploration in machine learning, the work represents data points as metaballs, in which two or more face images merge into one face when they are close enough, reflecting the uncertain and probabilistic nature of the data locations the t-SNE algorithm yields. Metaball rendering thus serves as an abstract, probabilistic representation of data, as opposed to the exactness we expect from scientific visualization. Along with the t-SNE and metaball-based visualization, Uncertain Facing sonifies the change of the overall data distribution in 3D space using a granular sound synthesis technique. It also reflects the error values that t-SNE measures at each iteration - between the distribution in the original high-dimensional space and the deduced low-dimensional distribution - to represent the uncertainty of the data as jittery motion and inharmonic sound. As an interactive installation, Uncertain Facing allows the audience to see the relationship between their own face and the fake faces, implying that machine learning could be misused in unintended ways, since face recognition technology does not distinguish between real and fake faces.
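A minimal sketch of the clustering step, with random vectors standing in for real face embeddings: t-SNE reduces them to 3D positions, and its KL-divergence error (the kind of error value the installation maps to jitter and inharmonic sound) can be read back after fitting. The embedding dimension and t-SNE parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

embeddings = np.random.rand(200, 128)  # placeholder 128-d face embeddings

tsne = TSNE(n_components=3, perplexity=30, init="random")
positions = tsne.fit_transform(embeddings)  # one 3D position per face
error = tsne.kl_divergence_                 # could drive jitter / inharmonicity
print(positions.shape, error)
```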
Could exploring the limit of consciousness become a mode of cultivating oneself?
Understand_ V.T.S is an installation that substitutes one sense for another, helping to explore and ponder this process of cultivation. The work conducts an experiment assessing the possibility of cooperation between natural and artificial algorithms as an approach to human enhancement: it tries out how well our (natural) brains work with (man-made) AI. Neuroplasticity allows our senses to perceive the world in various ways, in which we might see not with our eyes but with our skin, or listen not through our ears but through our taste buds, to name but a few. Skin vision in general relies on the brain parsing pieces of information and shaping cognition thereafter. In this respect, I introduced an object recognition system, YOLOv3, converting its results into a Braille reading system delivered to the skin of the thigh, while on the other side a tactile image is converted directly to motors on the back.
You can control a robot that wanders around your surroundings. The signals its left eye receives are translated, via object detection, into Braille and delivered to your leg, while its right eye converts the signals it receives into a tactile image on your back. Eventually your brain manages to comprehend the meaning of these signals, unlocking a new tactile cognition through the integration of human and AI.
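A minimal sketch of one direction of this pipeline, under assumed labels and hardware: a detected object label is mapped to a six-dot Braille cell, i.e. a vibration pattern for six motors on the thigh. For brevity the cells here encode only the first letter of each label; the label set and motor interface are hypothetical.

```python
# 6-dot Braille cells as (dot1..dot6) flags; dots 1-3 are the left column,
# dots 4-6 the right. A flag of 1 means "vibrate that motor".
BRAILLE = {
    "c": (1, 0, 0, 1, 0, 0),  # letter 'c' (dots 1, 4), e.g. for "chair"
    "p": (1, 1, 1, 1, 0, 0),  # letter 'p' (dots 1, 2, 3, 4), e.g. for "person"
}

def label_to_motors(label: str) -> tuple:
    """Map a detected class label to a 6-motor vibration pattern."""
    return BRAILLE.get(label[0].lower(), (0,) * 6)

for detection in ["person", "chair"]:  # stand-ins for YOLOv3 output labels
    print(detection, "->", label_to_motors(detection))
```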
Viewporter is an interactive installation that displays a computer-generated video of a city. Viewers can rotate a screen attached to a device resembling a telescope to accelerate the playback speed of the video. In this project, a deep learning model trained on images of the Seoul skyline was used to generate an artificial city skyline. As one of the most developed metropolitan cities, Seoul has been under continuous development and construction of high-rise buildings over the past decades. While the image of the skyscraper-packed skyline has been portrayed by the mainstream media as symbolizing the utopian dream of the city, the lives of residents with mundane duties have been far removed from the attractive image promoted through propagandistic videos in the media. Viewporter uses the analogy of a telescope at a tourist attraction to emphasize the distance between the idealized and the real, and to have viewers rethink the illusion and fantasy promoted by the imagery of development programs.
When there is light, everything is visible. I decompose the fundamental element in the visual world to let the invisible become visible.
It is a process of deconstructing light. I project a white source of light onto a surface while using a prism and moving images to "deconstruct" it. White is not an independent colour: it is a mixture of colours in the visible spectrum, composed of the primary colours red, green and blue.
Through refraction, I separated the white source into rainbow light using a prism. After that, I took away the green light from the white light, leaving a mixture of red and blue. Without green, the light source gradually reflects a new colour called magenta, and the 'rainbow' becomes a 'duo-coloured rainbow'. Eventually, I erased the red from the magenta, resulting in pure blue. As blue is a primary-coloured light that cannot be further decomposed by the prism, it appears as the ultimate light source: a monochromatic 'rainbow'.