“Narrative Design”

I’m featured in Bonni Rambatan’s “Narrative Design” podcast/comic series (Episode 4)!

“Behind the Minds of Great Storytellers – A Conversation in Audio and Comics”

“I have always loved talking to people who create stories — those who study human interaction so deeply as to develop a strong sense of empathy for the weak. These are artists, writers, journalists, game designers, but also researchers publishing their findings, or fans writing their next piece of fic.

For lack of a better term, I call these people “storytellers”. Or, if you’d prefer, narrative designers — not in the game design sense, but in a much broader sense of helping us all shape the narratives we tell ourselves about ourselves.” – Bonni Rambatan. 

Drawing by Bonni Rambatan

'Future Flesh' - Group Exhibition

I will be exhibiting at the 223 Gallery from the 20th to the 29th of March!

I will only be performing live for the Private View on the 20th (RSVP only: info@jackspencerashworth.com). Please come along if you can! I look forward to seeing you there.

Please click on the link below to view the Facebook event page:



Exhibition at the Watermans Riverside Gallery

I will be showing video and photography work from the ‘An Evolution’ series at the Watermans Riverside Gallery.

Open daily (1pm – 9pm), Friday 18 January – Wednesday 20 February 2013

Please click on the following link for more information: 


Review of LUPA16 Performance

Please click on the following link to read the review of the performance I did for LUPA16 in London City Nights, written by David C James: http://londoncitynights.blogspot.co.uk/2013/02/lupa-16-behind-james-campbell-house.html




I will be performing at LUPA-16 in Bethnal Green on 15/02/13. The performances will take place from 8pm to 9pm.

Please click on the following link for more details: http://www.facebook.com/events/470345473023904/?fref=ts

“LUPA (LOCK UP PERFORMANCE ART) – A performance series curated by Aaron Williamson (2011–2012), Jordan McKenzie, Kate Mahony and Rachel Dowle (2011–2013). LUPA ENDS IN JUNE 2013 WITH LUPA-20

A grim little lock-up garage, the size of a single car, on a Housing Estate in deepest Bethnal Green is the venue for a series of ‘pop-up’ performances to be staged once-a-month on a Friday night. The audience assembles in the Car Park and the event ‘pops-up’ promptly at 8pm, finishes at 9.”- LUPA


Art and Chat in Brighton

On 30/01/13, I will be giving a talk about my work for Art and Chat in Brighton as part of the Brighton Digital Festival. The talks will take place from 18:00 to 21:30. Please click on the following link for more details: http://digihub.org.uk/


Location: NEO Bankside, Pavilion A, 50 Holland Street, London, SE1 9FU

Date: 19th of October, Time: 10:00–12:00

Making of Chapter I- The Beginning

First Collage_Snap Shot of Animation in Processing

 First Collage

I was inspired by Lucas Cranach’s Renaissance painting ‘Adam and Eve’ (1526) to produce an animation of falling apples in Processing. The image above was the first collage I made, which later became my first live performance from the ‘An Evolution’ series (final project). The link below is the animation I projected onto a wall in the live performance:

Please click on the following link to view my Processing Sketch and Source Code.

For this Processing sketch, I was inspired by and used some of the code from Daniel Shiffman’s raindrop-catching game, where you catch raindrops with a circle by moving your mouse around. This game can be found in Daniel Shiffman’s book and website ‘Learning Processing,’ which was a great help to me, with clear and detailed tutorials on how to work with Processing.
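As a rough illustration of the idea (this is not the original sketch or Shiffman’s code; the class, names and numbers here are my own, made up for the example), the core falling-and-catching logic looks something like this in Java:

```java
// Minimal model of one falling "apple", loosely after the raindrop-catching
// idea: it falls a fixed number of pixels per frame, wraps back to the top
// when it leaves the canvas, and counts as "caught" when the mouse-controlled
// circle overlaps it. Illustrative only.
class Apple {
    float x, y;   // position on a notional canvas
    float speed;  // pixels fallen per frame

    Apple(float x) {
        this.x = x;
        this.y = 0;
        this.speed = 4;
    }

    // Advance one animation frame; wrap to the top when off-screen,
    // so the projected animation loops continuously.
    void update(float canvasHeight) {
        y += speed;
        if (y > canvasHeight) {
            y = 0;
        }
    }

    // True when the mouse-controlled circle of the given radius overlaps the apple.
    boolean caught(float mouseX, float mouseY, float radius) {
        float dx = x - mouseX, dy = y - mouseY;
        return dx * dx + dy * dy < radius * radius;
    }
}

public class FallingApples {
    public static void main(String[] args) {
        Apple a = new Apple(100);
        for (int frame = 0; frame < 10; frame++) a.update(360);
        System.out.println(a.y);                   // 40.0 after 10 frames at speed 4
        System.out.println(a.caught(100, 40, 20)); // true: the circle sits on the apple
    }
}
```

In the real Processing sketch the `update` and `caught` steps would run inside `draw()` for every apple on screen.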

Main image of my character, done in Photoshop:


I wanted to bring my character to life through my performances, so I decided to make a costume and perform as her. I bought up to 50 Barbies from the 99p store and started using their heads to form my character’s mask.


Making of the Costume

Once I got the Barbie doll heads, I positioned them on a mannequin’s head to see what the mask would look like (see first image above). I then decided to buy a unisex Lycra/Spandex flesh-coloured Zentai suit online and thought about stitching the Barbie heads onto it. I soon found out that this was very difficult to do, so I decided to use Velcro in the same colour as the suit (see second image). The third and fourth images show the final result of my costume, with the Barbie doll heads attached to the suit.


The following sketches, also done in Photoshop and Processing, were the other ideas I had for the series of performances I wanted to do. In total I had five digital collages. In the end, instead of making all five come to life as performances, I used three, because sadly I did not have enough time or money to produce all five. I would, however, like to turn these sketches into performances in the future.

One of the ideas I had was to perform in a junk store called ‘Aladin’s Cave’ as my character (see images below). It would have been really interesting for me to perform and stand among the other mannequins and objects. I would have invited the audience to walk around the space and see if they could notice whether I was a living person or not.

Another performance which I did not get a chance to make was ‘The Vitruvian Woman’ (see images below). For this piece, I was influenced by Leonardo da Vinci’s famous ‘The Vitruvian Man’ (1487), based on the ideal proportions of the human body. Personally, I do not think there is such a thing as ‘ideal proportions of a human body,’ so to question the ‘ideal’ and ‘perfection,’ I made a collage/animation in Photoshop and Processing (see below) of a character whose hands and feet do not exactly touch the circle around her. In the animation, the circle and the square around the female figure change colour, indicating a change of mood. My plan was to make a sculpture out of this and perform with it as my character.
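As a rough sketch of how a colour change like this can be driven (illustrative only, not the original Processing code; the step size is made up), the hue can simply be stepped around the colour wheel each frame:

```java
// Illustrative only: step a hue value around the colour wheel a little each
// frame, so the circle and square gradually shift colour (a change of 'mood').
// In Processing this hue would be passed to fill() in HSB colour mode.
public class MoodColour {
    private float hue = 0; // current hue in degrees, 0-360

    // Advance the hue by `step` degrees, wrapping at 360.
    public float nextHue(float step) {
        hue = (hue + step) % 360;
        return hue;
    }

    public static void main(String[] args) {
        MoodColour m = new MoodColour();
        for (int frame = 0; frame < 4; frame++) {
            System.out.println(m.nextHue(90)); // 90.0, 180.0, 270.0, 0.0
        }
    }
}
```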

Please click on the following link to see my Processing Sketch and Source Code.


Originally, I wanted to do my first performance in the Ben Pimlott Seminar Room at Goldsmiths. I was not, however, allowed to hang apples from the ceiling or cover the floor with grass. These were the images I had produced to indicate what my performance was going to look like.


Photos and Diagrams of the first Space

I put together the images in Photoshop. A projection of the animated falling apples would have been projected onto the wall and real apples would have been hung from the ceiling. The last image shows the other side of the room, where the audience would have entered. This is also where a camera would have been positioned to film the performance.

Instead of using the Ben Pimlott Seminar room, I was able to do my first performance (Chapter I) in another space within the university. Please scroll down to view the images of the new space:


The Setting up Process

By the entrance of the space, a black curtain was hung, which the audience had to walk through (image 5). In addition, they were told to take off their shoes by the door before entering the space. About 30 apples were hung from the ceiling with small metal hooks and fishing wire. To film the performance, 3 HD cameras were used (image 7), and a short-throw projector was hung from the ceiling to project the animated apples and tree (my Processing sketch). I ended up with 6 hours’ worth of footage, which I edited down to 5 minutes in iMovie and Final Cut Pro X.

The performance was streamed live on the internet from my channel: http://www.ustream.tv/channel/ipek6. Before doing these performances, I did not know about this website. Ustream.tv is a great broadcasting website where people can broadcast anything from their homes, schools, other countries, etc. By becoming a member and creating my own channel, I was able to stream my performances live for my friends and family abroad. The great thing about it is that it is free!

To view the photos and video of my first live performance, Chapter I- The Beginning, please click here.

I edited the photographs in Adobe Photoshop Lightroom 4 and the video in Final Cut Pro X and iMovie.

The audio you hear in the video was recorded by me in Green Park and Hyde Park, then edited in GarageBand and Audacity. This was the audio track that was also played in the live performance.

Making of Chapter II- In the Flash of an Eye

Similar to the first project (Chapter I), I started my second project by creating another collage in Photoshop, which I later animated in Processing. This was a sketch I produced before putting on the second live performance, Chapter II- In the Flash of an Eye. It references Man Ray’s photographic work ‘Coat Stand’ (1920).

Please click on the following link to view my Processing Sketch and Source Code

If you click on the background of the image with your mouse, another image should flash!
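The click-to-flash behaviour can be modelled as a tiny state machine (a hedged Java sketch of the idea, not my original Processing code; the frame counts are made up for the example): a mouse press swaps in a second image for a fixed number of frames, then the sketch reverts to the base image.

```java
// Tiny state machine for a click-triggered "flash": a mouse press swaps in
// a second image for a fixed number of frames, then reverts. In Processing,
// mousePressed() would set the state and draw() would call tick() to decide
// which image to show. Illustrative only.
public class FlashState {
    private int framesLeft = 0;
    private final int flashFrames;

    public FlashState(int flashFrames) { this.flashFrames = flashFrames; }

    // Called when the background is clicked.
    public void mousePressed() { framesLeft = flashFrames; }

    // Called once per draw() frame; returns true while the flash image shows.
    public boolean tick() {
        if (framesLeft > 0) { framesLeft--; return true; }
        return false;
    }

    public static void main(String[] args) {
        FlashState f = new FlashState(3);
        f.mousePressed();
        System.out.println(f.tick()); // true  (flash frame 1)
        System.out.println(f.tick()); // true  (flash frame 2)
        System.out.println(f.tick()); // true  (flash frame 3)
        System.out.println(f.tick()); // false (back to the base image)
    }
}
```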

For the live performance, I wanted to make the flashing in the background of the sketch come to life. To do this, I decided to work with a digital camera and an arduino. I wanted to make a digital camera detect motion and then take photographs of the audience in the space.

Since I had not worked with PIR sensors before, I started researching them. Online, I found a very useful tutorial on what a PIR sensor is and how it works: http://www.instructables.com/id/PIR-Motion-Sensor-Tutorial/. PIR sensors sense motion: they can detect people moving into or out of the sensor’s range. The first thing I did was buy a PIR sensor, which I got from rapidonline.com. The following PDF contains the specifications of the PIR sensor I bought: the module structure, dimensions, detection area, electrical characteristics (sensitivity), etc.: http://www.rapidonline.com/pdf/61-1462.pdf

Because the detection area of the PIR sensor was too large, it would trigger the digital camera to take a photo whenever I moved anywhere near it. In the end, I covered its Fresnel lens with black tape and made a little slit in the middle, so that it could still detect motion, but only from a certain point/area (see image below).

To help me get started on my circuit, I used the open-source software Fritzing to make a circuit diagram:

Thanks to my friend, I was able to borrow her Canon 50D camera for this project. I also needed to buy a remote shutter release for the Canon 40D/50D/7D (an RS-80N3 equivalent), which I got from eBay. In the diagram above, the white circle with the 3 pins represents the N3 connector (see image below).

Once I received the remote shutter from eBay, I took it apart and soldered its red, white and yellow wires onto a soldering board, to connect it up to my Arduino. The N3 connector (the end of the remote shutter release cable above) was plugged into the camera.

Here are some images of me putting the circuit together.

Please scroll down to view my code for this project:

@Ipek Koprulu
@Date 26/06/12

“Sketch to control camera shutter based on motion detection”

In order to complete the code for my second project, I looked at and used Martin Schmitt’s (mschmitt) code from:



const int PIRsensor = 12;            // green
const int ledPin = 10;               // white
const int shutterSwitch = 8;         // red
const int shutter_press_time = 1000; // how long the shutter shall be pressed (ms)

int motion_detector = 0; // set to 1 when someone is in front of the sensor
int shutter_state = 0;   // 1 while the shutter is being pressed
long shutter_enabled = millis(); // timestamp used as a counter, instead of delay()

void setup() {
  pinMode(ledPin, OUTPUT);        // output pin for fault-checking motion
  pinMode(shutterSwitch, OUTPUT); // output pin for the shutter (remote switch)
  pinMode(PIRsensor, INPUT);      // input: PIR sensor
  digitalWrite(PIRsensor, HIGH);  // enable the pull-up on the PIR sensor pin
  Serial.begin(9600);             // open the port to confirm detection on the computer
}

void loop() {
  int sensorState = digitalRead(PIRsensor); // read the PIR state (movement detection)

  // The sensor goes LOW when motion is detected
  if (sensorState == LOW) {
    motion_detector = 1;
    shutter_enabled = millis(); // remember when motion was last seen
  } else {
    motion_detector = 0;
  }

  // Shutter should be turned on: motion detected while the shutter is off
  if ((shutter_state == 0) && (motion_detector == 1)) {
    digitalWrite(ledPin, HIGH);
    digitalWrite(shutterSwitch, HIGH);
    shutter_state = 1;
  }

  // Shutter should be turned off: no motion, and the press time has elapsed
  if ((shutter_state == 1) && (motion_detector == 0)) {
    if (millis() - shutter_enabled > shutter_press_time) {
      digitalWrite(ledPin, LOW);
      digitalWrite(shutterSwitch, LOW);
      shutter_state = 0;
    }
  }
}

The following videos show how my camera works. Please watch!


The following video is a small piece I did with the camera. When you watch it, it looks like a still image, a photograph, but if you keep watching, you will see the camera flash in slow motion. For me, the sound it creates is powerful. These videos were edited in iMovie and Final Cut Pro X.



The following images are of the second space I wanted to perform in for my Chapter II project. I have also included a sketch of what I wanted my performance to look like within the space. I had managed to book the room, but due to last-minute complications I was given another room to show my work in. The following images were done in Photoshop.

New and Final Space – where I performed Chapter II – In the Flash of an Eye

I made the suit of my character from cardboard, painted with white and black acrylic. I used black silk fabric for the collar of the suit, white and black buttons for the shirt, and a bow tie. I later glued the suit onto a coat stand. I wanted the suit to look very similar to my collage above.

The following photographs were taken by the Canon 50D camera during the live performance. As explained above, the camera was connected to an Arduino and a PIR sensor.

In the performance, I also had analogue cameras placed on a table. The table was covered with a large black cloth and the cameras were numbered. The audience members used these cameras to take photographs of me and of each other, while the digital Canon 50D took photographs of us from the other side of the room. It was a game of watching and being watched: ‘the digital vs the analogue.’ Furthermore, a laptop was hidden under the table for live streaming. This performance was also watched live on my ustream channel: http://www.ustream.tv/channel/ipek6

*Agfa Synchro Box (made in Germany 1951) http://www.flickriver.com/photos/tags/agfasynchrobox/interesting/

*Nikon F55 (2002-2006) http://camera-wiki.org/wiki/Nikon_F55_(N55)

*Olympus OM-1 (1972) http://en.wikipedia.org/wiki/Olympus_OM-1

*Minolta X-300 (1984) (film used for the camera without a flash: Kodak Ultramax 400, 36 exp. colour print film) http://www.kameramuseum.de/0-fotokameras/minolta/kb-slr-ana/minolta-x-300.jpg


*Canon EOS 1000F (1992, year of release) (film used: 200 36 exp. 35 mm colour print film) http://www.okazii.ro/aparate-foto-film/canon/aparat-foto-canon-eos-1000f-a41617249


*Canon Mega Zoom 105 (released in 1991, 1994?) (film used: 200 36 exp. 35 mm colour print film) http://www.flickr.com/photos/8045576@N05/2349521248

*Two disposable cameras from Boots: http://www.boots.com/en/Boots-Essentials-single-use-camera_1209317/ 

The performance was recorded by five HD cameras. One camera was positioned on each side of the room (4 cameras) and one was handheld. In the end, I had 8 hours’ worth of footage, which I cut down to 11 minutes. The video was edited in Final Cut Pro X.


Making of Chapter III- Movography

This is my final collage, entitled ‘Movography,’ also done in Photoshop and Processing. It was inspired by French performance artist Orlan’s live surgical performance ‘Omnipresence’ (1993). I decided to name this piece Movography because it involves photography (something still) as well as a movie, a video (something that moves). I aim to make this piece come to life through my last live performance, which I will be doing in the Saint James church at Goldsmiths.

In the video below, you can watch me playing ‘Movography’ in Processing. Next to it, you can read my code:


Please click on the link to watch a better-quality version of this video on Vimeo: Screen Recording of Movography Animation in Processing


Code to Processing Sketch Above:

import processing.video.*;

PImage body;
Movie myMovie;

void setup() {
  size(420, 695, P2D);
  body = loadImage("head_is_in_tv2.jpg"); // load the background image
  myMovie = new Movie(this, "eyec.mov");  // load the video
  myMovie.loop();                         // play it in a loop
}

void movieEvent(Movie myMovie) {
  myMovie.read(); // read each new frame as it becomes available
}

void draw() {
  image(body, 0, 0, 420, 695);       // draw the background image at a fixed size
  image(myMovie, 130, 550, 150, 99); // draw the video, sized and positioned
}



My space can be seen on the left, in pink



For my final performance, since I had already worked with Processing and the Arduino for my previous performances, I wanted to work with Max/MSP. I audited some Max/MSP classes this year, and by using some of the things I learned and the patches we made in class, I came up with the following:

In one of my Max/MSP classes, I managed to create a video that moves and is controlled by sound. I liked the video it produced, but I did not like the sound. I had filmed a close-up of my eye, and by changing the jit.gl.gridshape object to a sphere in my Max patch (see below), I managed to film a really round version of my eye, zooming in and out to the beat. (Please click on the images below to see the Max patches at full size.)

I liked the video I had managed to make of my circular eye zooming in and out of the screen, but I wanted to change it visually and mix it with some other images. I started looking at some Max/MSP tutorials and found the spatial mapping and simple mixing tutorials particularly useful. I decided to combine parts of the two tutorials, then record and save what I produced (see Max patch below).

After I produced my video, it was time to work on the audio. Since Chapter III is going to be my final piece, I decided to mix the audio I made for both Chapter I and II into a single track with the following patch:

While I was playing with Max/MSP, I decided to record what I was hearing (by going to Extras >> Quickrecord). This track, however, was still not enough; I wanted to add something more to it. In the end, I mixed in some of the audio recordings from the audience members I interviewed about Chapter I and II. In the interviews, the audience spoke about their opinions and experiences of my two live and mediated performances. By mixing in the interviews, my final track will give the viewers at the Nowhere exhibition a hint of what my work is about and how it was experienced. It is supposed to represent a summary of my work within the final chapter. In the exhibition, I decided not to label my work because I am more interested in hints and subtlety, which I tried to achieve through my audio.

The following Max patch is the final patch I used to produce and record my audio for the video/performance:

In the end I produced a 20-minute audio track:


I later added this audio track to the video I created in Max/MSP and edited everything in Final Cut Pro X, which I then exported and burned onto a DVD. This DVD/video will be played on an analogue TV in my final performance. (See video below.)

 Making of the Mask for my character (see final collage/animation above)  

Scrim Across Face

I started experimenting with my mask by using Modroc (see image above). I soon realised that Modroc was going to be too heavy and hard to work with for what I wanted to do. I wanted my mask to be ‘flesh-like,’ so instead I decided to use liquid latex. I got a plastic mask from the 99p store and started putting the latex on it. It took many layers for it to become thick, but in the end I believe it definitely looked like skin, especially from a distance. In Chapter III, the neutral latex mask and the loss of the Barbie doll heads are meant to represent a loss of identity for my character. The character in Chapter I and II, who confronted the audience by approaching them, is transformed into something else. The character in the final performance covers her eyes with the mask, allowing them to be revealed only digitally on the TV screen. Instead of directly approaching the audience like before, the character ‘gives herself’ to the viewers, standing still and allowing herself to be watched and approached instead.



I originally wanted to mount two large monitors on opposite walls in the space. However, since I had no access to any, I got two smaller monitors instead. This is why my setup is not identical to my mock-up sketches above. I placed the two smaller monitors, playing videos of the Chapter I and II performances, opposite the photographs of Chapter I and II: video vs photography documentation of performance = ‘Chapter III Movography.’