Note to self: maybe create a prototype image to show our idea.
Create a real-time live performance. Use input from a webcam or some other input device (ideally a Leap Motion; you can borrow one from the Library) to create a real-time, live graphical experience.
BENCHMARKING & CASE STUDIES
Our main inspiration is one of the biggest social media platforms, Snapchat, which gained success through its fun, engaging lenses. One of Snapchat's gimmicks is that when you record a video, you can pin a sticker to an item; this is done with motion tracking. The technology comes from a Ukrainian start-up called Looksery, which Snapchat acquired in September 2015 for $150 million, making it the biggest acquisition in Ukrainian history.
These applications use pixel data from the camera to identify objects and interpret 3D space. Computer vision is how Facebook knows who is in your photos, how self-driving cars avoid running over people, and how you can give yourself a flower-crown filter. The first step is detection: locating areas of contrast between light and dark parts of the image. The Viola-Jones algorithm is the classic face-detection tool. It works by repeatedly scanning the image data and calculating the difference between the greyscale pixel values under white boxes and under black boxes: for instance, the bridge of the nose is usually lighter than the surrounding areas on both sides, the eye sockets are darker than the forehead, and the middle of the forehead is lighter than its sides. These are crude tests for facial features, but if enough matches are found in one area of the image, the algorithm concludes there is a face there. It will not work if your head is tilted or turned sideways, but it is very accurate for frontal faces; it is how digital cameras have been drawing boxes around faces for years.
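The light-box-minus-dark-box comparison at the heart of Viola-Jones can be sketched as a Haar-like feature computed from an integral image. This is only a minimal illustration of the idea, not Snapchat's or OpenCV's implementation; the synthetic image and box positions below are our own toy example.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w box at (y, x), in just four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Haar-like feature: left ("white") half minus right ("black") half.
    A large value means the left half is much brighter, e.g. an edge."""
    half = w // 2
    return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, half)

# Synthetic 6x6 greyscale patch: bright left half, dark right half,
# like the light/dark contrast around the bridge of the nose.
img = np.hstack([np.full((6, 3), 200.0), np.full((6, 3), 50.0)])
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 6, 6))  # (200 - 50) * 18 = 2700.0
```

The integral image is why the detector can afford to scan thousands of boxes per frame: each box sum costs four array lookups regardless of box size.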
Figure 1. Manually marked facial features. Figure 2. 3D face mask. Figure 3. Mask tracking the face frame by frame.
It detects facial features with an active shape model: a statistical model of face shape trained by people manually marking the borders of facial features on thousands of sample images, as shown in Figure 1. The algorithm takes an average face from that data and aligns it to the image from your phone's camera, scaling and rotating it according to where your face is located. The model analyses the pixel data around each point, looking for edges defined by darkness and lightness, and it can correct and smooth each point by taking into account the locations of all the other points. Those points are used as coordinates to create a mesh: a 3D mask, as shown in Figure 2, that can move, rotate, and scale along with your face as the video data comes in, frame by frame, as shown in Figure 3. With that, they can deform the mask to change your face shape, change your eye colour, add accessories, and trigger animations when you open your mouth or raise your eyebrows. Snapchat can also swap your face with a friend's, although that involves a lot more data. The technology is not new; what is new is the ability to run it in real time on a mobile device, and that level of processing speed is a recent development.
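The "take an average face and align it" step can be sketched as a least-squares similarity fit (scale, rotation, translation) between the mean shape and the detected points. This is one simplified piece of the active-shape-model pipeline, not Looksery's actual code; the function name and the toy four-point "face" are our own assumptions.

```python
import numpy as np

def align_shape(mean_shape, target):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping mean_shape onto target.
    Both inputs are (n, 2) arrays with one landmark per row."""
    mu_m = mean_shape.mean(axis=0)
    mu_t = target.mean(axis=0)
    A = mean_shape - mu_m            # centred model landmarks
    B = target - mu_t                # centred detected landmarks
    M = A.T @ B                      # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T                   # optimal rotation
    if np.linalg.det(R) < 0:         # guard against a reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    s = S.sum() / (A * A).sum()      # optimal scale
    t = mu_t - s * (R @ mu_m)        # optimal translation
    return s, R, t

# Toy "average face": four landmarks; the target is the same shape
# scaled 2x, rotated 90 degrees, and shifted. Alignment recovers this.
mean_shape = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
target = 2.0 * mean_shape @ R_true.T + np.array([5.0, 3.0])
s, R, t = align_shape(mean_shape, target)
print(round(s, 3))  # 2.0
```

In the full model this alignment is re-run for every video frame, followed by the local edge search around each landmark described above.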
(Outline the visual, sound and interaction design. This section should contain drawings, images, videos and/or audio to help show how your proposed system will look and behave.)
As mentioned in the project concept above, we will generate an interactive, real-time live performance from the movements of users, integrating those movements with abstract visualizations inspired by electric wires and thermal imaging, driven by camera motion tracking.
Input device
This project will use a camera or webcam to track the face or body movements of participants.
Figure 1. Sample of the webcam
Drawing on the electric-wire and thermal inspirations (abstract visualization), we combine users' live movement with abstract visuals generated from their performance in real time.
Figure 2. Electric wire inspiration
Figure 3. Thermal inspiration
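One simple way to drive the visualization from camera movement is frame differencing: subtract consecutive greyscale frames and treat large per-pixel changes as motion. This is a minimal sketch of the tracking idea, not our final algorithm; it uses synthetic frames so it runs without a webcam, and the function names and threshold are our own assumptions.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Per-pixel motion: True where the greyscale value changed a lot
    between two consecutive frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_energy(mask):
    """Fraction of pixels that moved. This scalar could drive, e.g.,
    the intensity of the electric-wire / thermal visualization."""
    return mask.mean()

# Synthetic frames: a bright 10x10 "hand" moves 5 pixels to the right
# on a 100x100 dark background.
prev_frame = np.zeros((100, 100), dtype=np.uint8)
prev_frame[45:55, 20:30] = 255
frame = np.zeros((100, 100), dtype=np.uint8)
frame[45:55, 25:35] = 255

mask = motion_mask(prev_frame, frame)
print(motion_energy(mask))  # 100 changed pixels out of 10000
```

In the live piece, `prev_frame` and `frame` would be consecutive webcam frames, and the motion mask itself could seed where the wire or heat effects are drawn.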
Sample of outcome
Initially, we will shape the interactive output around the inspirations, technological approach, and tracking algorithm described above. A rough design of the project outcome is shown below.
Figure 4. Sample of outcome