
The selected project area is a live, real-time performance: interaction with an abstract visualization driven by the movements of participants, captured via webcam.
Our concept is to create a real-time abstract visualization based on the user's movement as seen by a webcam. The idea is to combine motion tracking and detection with abstract coloring of the visuals to create an imitation of thermal imaging, using a palette of several colors.
In a thermogram's coloring scheme, brighter colors such as red, orange, and yellow indicate warmer temperatures, where more heat and infrared radiation is emitted, while cool tones such as purple, blue, and black indicate cooler temperatures, where less is emitted.
We will use these colors to create a thermogram effect. The colors will depend on the contrast of the user's image, which serves as a guide: darker areas will receive cooler tones and lighter areas warmer tones. While we cannot produce an actual thermogram, we aim for the closest imitation we can achieve. An example thermogram is shown in Figure 1, and a sketch of this color mapping follows it.

Figure 1. Thermogram
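
To make the mapping concrete, the following is a minimal sketch of how this imitation could be prototyped in Python with OpenCV. The webcam index and the choice of the stock JET palette are our assumptions, not fixed design decisions.

import cv2

cap = cv2.VideoCapture(0)  # assumed default webcam at index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Brightness stands in for heat: dark areas map to the cool end
    # of the palette, bright areas to the warm end.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thermal = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
    cv2.imshow("thermogram imitation", thermal)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

Any stock palette whose low end is cool-toned and whose high end is warm-toned would serve the same purpose.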


BENCHMARKING & CASE STUDIES

Our main influence is one of the biggest social media platforms, Snapchat, which gained success through its fun, engaging lenses. One of Snapchat's features is that, when you take a video, you can pin a sticker to an item; this is done with motion tracking. The technology comes from a Ukrainian startup called Looksery, which Snapchat acquired in September 2015 for $150 million, making it the biggest acquisition in Ukrainian history.
These applications use pixel data from the camera to identify objects and interpret 3D space. Computer vision, broadly, is how social media knows who is in a user's pictures, how self-driving cars avoid running over people, and how a person can give themselves a flower-crown filter. The first step is detection: locating areas of contrast between light and dark parts of the image. The Viola-Jones algorithm is the facial detection tool. It works by repeatedly scanning through the image data, calculating the difference between the grayscale pixel values underneath white boxes and black boxes. For instance, the bridge of the nose is usually lighter than the surrounding areas on both sides, the eye sockets are darker than the forehead, and the middle of the forehead is lighter than its sides. These are crude tests for facial features, but if enough matches are found in one area of the image, the algorithm concludes there is a face there. It fails if the person is tilted or facing sideways, but it is very accurate for frontal faces, and it is how digital cameras have been placing boxes around faces for years.
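
The following is a minimal sketch of Viola-Jones detection using the frontal-face Haar cascade bundled with the opencv-python package; the input filename and the detectMultiScale parameters are illustrative assumptions.

import cv2

# Load the frontal-face Haar cascade that ships with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("portrait.jpg")  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Scan the grayscale image at multiple scales; each window that passes
# enough of the light/dark box tests is reported as a face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", frame)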

Figure 2. Facial features Figure 3. 3D mask Figure 4. Frames

The way it detects facial features is with an active shape model: a statistical model of a face that has been trained by people manually marking the edges of facial features, as shown in Figure 2, on thousands of sample images. The algorithm takes an average face from that data and aligns it to the image from the phone's camera, scaling and rotating it according to where the face is located. The model analyzes the pixel data around each point, looking for edges defined by darkness and lightness, and it can correct and smooth individual points by taking into account the locations of all the other points. Those points are then used as coordinates to create a mesh: a 3D mask, as shown in Figure 3, that can move, rotate, and scale along with the face as video data comes in for every frame, as shown in Figure 4. With all of that, the app can deform the mask to change a person's face shape, change their eye color, add accessories, and trigger animations when they open their mouth or raise their eyebrows. Snapchat can also swap your face with a friend's, although that involves much more data. The technology is not new; what is new is the ability to run it in real time on a mobile device, a level of processing speed that is a recent development. The overall algorithm is illustrated in Figure 5, and a landmark-fitting sketch follows it.

Figure 5. Algorithm
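
As an illustration of the landmark-fitting step, the following is a minimal sketch using dlib's 68-point shape predictor, which works in the spirit of the active shape model described above. It is not Looksery's actual pipeline; the model file shape_predictor_68_face_landmarks.dat is distributed separately by dlib and must be downloaded first.

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# The 68-point landmark model is distributed separately by dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("portrait.jpg")  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)  # fit the 68 landmark points
    # These points are the coordinates from which a mesh is built.
    for i in range(68):
        cv2.circle(frame, (shape.part(i).x, shape.part(i).y),
                   2, (0, 255, 0), -1)

cv2.imwrite("landmarks.jpg", frame)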

DESIGN

As described in the concept above, we will generate interactive media from a live, real-time performance: all movements of the users, captured by camera-based motion tracking, are integrated with abstract visualizations inspired by electric wires and thermal imaging.

Input device

This project will use a camera or webcam to track the face or body movement of participants; a sample device is shown in Figure 6, and a minimal motion-detection sketch follows it.

Figure 6. Sample of the webcam
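
One simple way to obtain the movement signal this input stage needs is frame differencing, which highlights only the pixels that changed between consecutive frames. The sketch below is an assumption about our approach, not a finalized design; the threshold value in particular would need tuning.

import cv2

cap = cv2.VideoCapture(0)  # assumed webcam index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Per-pixel change between consecutive frames marks movement.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("motion", motion)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()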

Inspiration

Drawing on electric wire and thermal imagery as abstract visualization, we integrate that inspiration with the live movement of the users, presenting them with an abstract visualization based on their live performance. The electric wire and thermal inspirations are illustrated in Figures 7 and 8 respectively; a sketch combining motion detection with the thermal coloring follows Figure 8.

Figure 7. Electric wire inspiration

Figure 8. Thermal inspiration
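
Building on the two earlier sketches, the following combines them into the kind of output we are after: the thermal coloring appears only where the motion mask reports movement, so the visualization follows the participant's live performance. All parameter values remain assumptions.

import cv2

cap = cv2.VideoCapture(0)  # assumed webcam index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    thermal = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
    # Keep thermal colors only where movement occurred; black elsewhere.
    out = cv2.bitwise_and(thermal, thermal, mask=mask)
    cv2.imshow("abstract visualization", out)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()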

Sample of outcome

Initially, we will shape the interactive media output around the inspirations, technological approach, and tracking algorithm described above. A rough design of the project outcome is shown in Figure 9.

Figure 9. Sample of outcome
