In class we were shown how the Xbox 360 Kinect could be used to help make our projects more interactive or more efficient. In the demonstration, the sensor tracked the motion of the user's hand and represented it with a green square on screen. It did this using a mixture of depth and infrared sensing.
The software used is provided by Microsoft (the creators of the Xbox and the Kinect): the Kinect SDK, version 1.8. They have since released a later version (2.0), but it is specifically designed for the Xbox One Kinect sensor and is not compatible with the Xbox 360 Kinect sensor.
Being the programmer in the group, I began experimenting with this device. My original problem was that I didn't realize that the updated SDK 2.0 software wasn't compatible with the old sensor. After trying numerous websites to get it working, I decided to Google whether the new software worked with the old hardware, and thankfully I did, because it immediately told me that it wasn't compatible.
Once I had downloaded the SDK, I needed a library of content to use with my Processing app. In a PowerPoint slideshow by Kieran Nolan on the DkIT Moodle, I was given a link to a helpful site for setting up the Kinect. This gave me a library called SimpleOpenNI. After linking this to my library in Processing as well, I was ready to go. Using a few basic demonstrations online, I could see myself on the screen through the Kinect in multiple different formats, including heat, depth and regular.
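To get a feel for what the depth view is doing under the hood, here is a minimal sketch of how a raw depth reading could be mapped to a grayscale brightness for display. The 0–2047 range is an assumption based on the Xbox 360 Kinect's 11-bit raw depth values, and the mapping rule is just an illustration; SimpleOpenNI handles all of this for you.

```java
// Illustrative sketch: mapping raw Kinect depth readings to grayscale
// brightness, roughly what a depth-image view does. The 0-2047 range
// (11-bit raw depth) is an assumption for this example.
public class DepthView {
    static final int MAX_RAW = 2047; // assumed 11-bit raw depth ceiling

    // Nearer objects appear brighter, farther objects darker;
    // an invalid or missing reading is drawn as black.
    static int toGray(int rawDepth) {
        if (rawDepth <= 0 || rawDepth > MAX_RAW) return 0;
        return 255 - (rawDepth * 255 / MAX_RAW);
    }

    public static void main(String[] args) {
        System.out.println(toGray(100));  // close to the sensor: bright
        System.out.println(toGray(2000)); // far from the sensor: dark
    }
}
```

In a real Processing sketch you would run a mapping like this over every pixel of the depth frame each time the sensor updates.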
I plan on continuing my research into this area, as I can see it being helpful for my project. It would definitely make the project more interactive and interesting.
Today in class we looked at a program called ‘reacTIVision’. It uses your computer’s webcam to create a digital, squared-off black and white image of what the webcam sees. I found the app cool in the way it works, and impressively quick. We tried to use it with the ‘Processing’ app but could not get it to work, since the computers had been updated.
However, from looking at examples online I could see the potential this had and how (if needed or applicable) it could improve my group project by making it more physically interactive.
The program can also work by scanning certain images, called fiducial markers, that act like a barcode to the webcam and can be programmed to different effects. For example, in a video I watched, a project used a helmet with a marker on the top of the user’s head. The camera was on the ceiling, looking down and scanning the code. The helmet was used to track the user’s location, and depending on the location, the system would activate and change the levels of different sounds the user would hear, essentially immersing them in a virtual space.
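The location-based sound idea could be sketched like this. The zone positions and the linear falloff rule here are assumptions for illustration; the actual project presumably fed reacTIVision's tracked marker position (reported as normalised coordinates over its TUIO protocol) into something similar.

```java
// Illustrative sketch: fading a sound's volume based on how close a
// tracked marker is to that sound's "zone". Coordinates are treated as
// normalised 0..1 values, as reacTIVision's TUIO messages report them;
// the linear falloff rule is an assumption for this example.
public class SoundZones {
    // Volume is 1.0 at the zone centre and fades linearly to 0.0
    // once the marker is more than `radius` away from it.
    static double volume(double markerX, double markerY,
                         double zoneX, double zoneY, double radius) {
        double dx = markerX - zoneX;
        double dy = markerY - zoneY;
        double dist = Math.sqrt(dx * dx + dy * dy);
        return Math.max(0.0, 1.0 - dist / radius);
    }

    public static void main(String[] args) {
        // Marker standing right on a zone: full volume.
        System.out.println(volume(0.5, 0.5, 0.5, 0.5, 0.3));
        // Marker far from the zone: silent.
        System.out.println(volume(0.0, 0.0, 0.9, 0.9, 0.3));
    }
}
```

Running one of these calculations per sound zone, every frame, would give the gradual cross-fade effect described in the video.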
I own an Xbox and a Kinect and always wondered how it worked. Seeing previous projects from other years, some of which used tools like the Kinect, really helped me realize the basic steps needed to make the motion-tracking aspect of my project more realistic.
So for the group project we have to make an interactive multimedia project based around the theme of Independence. Originally I thought (like many others in the class) of the 1916 Rising or the American War of Independence. I loved history in school and found both of those eras really interesting, but so as not to let myself down by jumping on the first idea and regretting it later, I tried to find other ideas that I might prefer.
After much thought about other ways independence affects us or is present, I realized that I kept wanting to use history. So in class we were split into groups depending on our different interests in the direction our project should go. I became part of a group with three others: Shauna, Julia and Jordan. We all share the same interest in history across multiple eras of independence, in how they are connected, and in the historic figures we associate with them, e.g. Martin Luther King and Michael Collins.
We liked the idea of projections to make the user feel more immersed in the experience, and of physical objects too. An early idea was of sculptures of historic scenes. I really like the idea of the user being in a room and being able to explore it, discovering different aspects of the project, such as a projection of the Storming of the Bastille with a miniature model below it, then another projection in another corner, and so on.
As for the roles in the group, none of us wanted to be the programmer, including me; from previous years doing programming, I understand the dislike towards it. We decided that we will all help with the programming, though we will have to agree on a lead programmer. It may not be as bad as it sounds, because we all seem to want our project to be mostly visually driven and, thankfully, not too technical on the programming side of things.
In class a few days ago we were split into groups of 3–4 people and given a set of ‘Makey Makeys’. These are like adjustable controllers for the average person: for example, you can connect a few of the wires to a few bananas and use the circuit board to connect them to the computer.
When our group first tried it out with a single banana (we only had the one available), we were immediately impressed. We connected the banana to the up arrow key on the board, then opened the game ‘Super Mario’ in a browser. This meant that each time we wanted Mario to jump over or onto an object, we had to hit the banana. It was pretty cool, and ideas immediately started flowing through my head about cooler controllers than a banana that could be made for the group project.
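The reason this works with an unmodified browser game is that the Makey Makey shows up to the computer as an ordinary keyboard, so touching the banana simply delivers a normal up-arrow key event. A program of our own could map those key codes to whatever actions the project needs; the key codes below are real `java.awt.event.KeyEvent` constants, but the action names are made up for this sketch.

```java
import java.awt.event.KeyEvent;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: the Makey Makey appears as a standard keyboard,
// so a banana wired to the up-arrow pad sends an up-arrow key event.
// This maps incoming key codes to project actions; the action names
// ("jump", "fire") are hypothetical examples.
public class BananaController {
    static final Map<Integer, String> ACTIONS = new HashMap<>();
    static {
        ACTIONS.put(KeyEvent.VK_UP, "jump");    // banana on the up-arrow pad
        ACTIONS.put(KeyEvent.VK_SPACE, "fire"); // e.g. a second banana
    }

    static String actionFor(int keyCode) {
        return ACTIONS.getOrDefault(keyCode, "none");
    }

    public static void main(String[] args) {
        // Touching the up-arrow banana triggers the "jump" action.
        System.out.println(actionFor(KeyEvent.VK_UP));
    }
}
```

In a Processing sketch the same lookup would sit inside the `keyPressed()` handler, so any object wired to the board becomes a button.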
We watched a YouTube video from the company behind the Makey Makey. The possibilities of this device seemed endless. There were examples of people playing a dance game with the wires connected to buckets of water, a pencil-drawn controller on a piece of paper, and a dog Skyping its owner whenever it left its bed.
I think this could help my group project a lot, especially if it ends up being a project operated through buttons or physical contact (basically, as long as it is not just a projection).