This project is about discovering the possibilities of touch. Touch is one of the most basic and fundamental ways that we gain knowledge of the world: the comforting touch of a friend, another's kiss, the stability of the ground beneath our feet, the softness of a warm shirt. When we touch, we connect, we feel, we are for a brief moment one with another object. Each fingertip has hundreds of receptors; our bodies are built for touching. Thanks to rapid advances in digital technology, we now live in a world where distance is immaterial, but this digital world is an audiovisual one, and touch has been left behind. Touch grounds us firmly in physical reality and in the present moment: we can only touch physical objects within our reach. How can we make touch more relevant in the future we are heading toward? How can we touch the untouchable?

I ran many different experiments in this project. One of the most interesting tried to create the feeling of touch in a VR experience. I used Unity and an Arduino to link the real world with the VR world: when you rotate a shape in the real world, the block in the virtual world follows. I prepared three different shapes, then asked each user to name what they were holding and what they were seeing. When the shape shown in the VR world did not match what the hand was feeling in reality, testers took 20% longer to answer correctly. This shows how strongly the visual and tactile senses influence each other, and how easy it is to create illusions: as long as the object you touch moves in the same way as the object you see, you feel as though you are touching the thing you observe.
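The real-to-virtual link can be sketched as follows. This is an illustration in Python rather than the project's actual Unity/Arduino code; the "yaw,pitch,roll" serial message format and the smoothing constant are assumptions, not documented details of the project.

```python
# Illustrative sketch (not the project's actual code): the Arduino streams
# the physical shape's orientation over serial as "yaw,pitch,roll" lines,
# and the host parses each line so the virtual block can copy the rotation.
# The message format is an assumption for illustration.

def parse_rotation(line):
    """Parse one serial line like 'yaw,pitch,roll' into a tuple of floats.

    Returns None for malformed lines so a glitchy serial read
    doesn't make the virtual block jump.
    """
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        return tuple(float(p) for p in parts)
    except ValueError:
        return None


def smooth(prev, new, alpha=0.2):
    """Exponential smoothing to hide serial jitter before the rotation
    is applied to the virtual object."""
    return tuple(p + alpha * (n - p) for p, n in zip(prev, new))


if __name__ == "__main__":
    # Example: a few lines as they might arrive over serial,
    # including one corrupted read that is silently skipped.
    angles = (0.0, 0.0, 0.0)
    for line in ["10.0,0.0,0.0", "garbage", "20.0,0.0,0.0"]:
        parsed = parse_rotation(line)
        if parsed is not None:
            angles = smooth(angles, parsed)
    print(angles)
```

In the real setup the smoothed angles would be applied to the virtual block's transform each frame, so the on-screen object tracks the one in the tester's hand.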

The biggest challenge in this project was the computer vision code that processes the real world and translates the shapes to the 49 solenoids, following real-world movements in real time; any lag would have broken the effect. An interesting additional hardware issue was managing the power draw and magnetic interference of so many high-power solenoids in close proximity to each other and to the microcontrollers.
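One common way to keep the power problem manageable is to map each of the 49 cells to a driver channel and fire the solenoids in small batches, so that many high-current coils never switch on in the same instant. The round-robin mapping and batch size below are assumptions for illustration, not the project's documented wiring.

```python
# Sketch of the kind of bookkeeping a solenoid driver needs. The exact
# grouping used in the project is not documented here; the round-robin
# split across four driver boards is an assumption for illustration.

BOARDS = 4
CELLS = 49

def cell_to_board(cell):
    """Map a cell index 0..48 to a (board, channel) pair by round-robin,
    which spreads active cells evenly across the boards."""
    return cell % BOARDS, cell // BOARDS

def fire_batches(active_cells, batch_size=8):
    """Split the active cells into batches so the peak simultaneous
    current draw stays bounded."""
    cells = sorted(active_cells)
    return [cells[i:i + batch_size] for i in range(0, len(cells), batch_size)]
```

Driving the batches a few milliseconds apart keeps each pulse short enough that the pattern still reads as simultaneous to the fingertips.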

I used a Processing sketch to process the real-time image from the webcam, analysing each pixel in the image to target the moving object. Because of the constrained budget and time we were limited to 7 rows × 7 columns = 49 solenoids, but this was quite sufficient to create a good effect. The screen was divided into 49 cells, and I constructed four circuit boards, each driving a separate group of solenoids, to make power management and debugging easier.
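The pipeline described above can be reconstructed roughly as follows, here in Python with NumPy rather than the original Processing sketch: difference two grayscale frames to find moving pixels, then pool the result into a 7 × 7 grid, one cell per solenoid. The threshold values are assumptions for illustration.

```python
# Illustrative reconstruction of the image pipeline (not the original
# Processing code): per-pixel frame differencing finds motion, and the
# moving pixels are pooled into a 7x7 grid of cells, one per solenoid.
import numpy as np

GRID = 7  # 7 rows x 7 columns = 49 solenoids

def motion_grid(prev_frame, frame, pixel_thresh=30, cell_thresh=0.2):
    """Return a 7x7 boolean array: True where a cell saw enough motion.

    prev_frame, frame: 2D uint8 grayscale arrays of equal shape.
    pixel_thresh: per-pixel brightness change needed to count as moving.
    cell_thresh: fraction of moving pixels needed to activate a cell.
    """
    # Widen to int16 before subtracting so uint8 arithmetic can't wrap.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > pixel_thresh
    h, w = moving.shape
    grid = np.zeros((GRID, GRID), dtype=bool)
    for r in range(GRID):
        for c in range(GRID):
            # Slice out this cell's pixels; integer division lets the
            # edge cells absorb any remainder when the frame size isn't
            # divisible by 7.
            cell = moving[r * h // GRID:(r + 1) * h // GRID,
                          c * w // GRID:(c + 1) * w // GRID]
            grid[r, c] = cell.mean() > cell_thresh
    return grid
```

Each True cell in the output would then energise the matching solenoid, so the pin array follows the moving object seen by the webcam.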

Developed in collaboration with David Densheck, Sam Roots and Nathan Chang, and shown in the Work in Progress show at the Royal College of Art.