Seymour Kunis Media Core

Gary Kunis '73 named the Seymour Kunis Media Core in honor of his father with a gift to the Watson College Equipment Endowment.

The Seymour Kunis Media Core provides facilities for research in multimedia, including multimedia security, multimedia forensics, biometrics, steganography and steganalysis, immersive displays, and virtual and augmented reality.

This core allows researchers to test the security of watermarking and data-hiding algorithms, develop and test new digital forensic techniques, and design novel immersive environments that will shape future trends in human-computer interaction.

Contact: 

Lijun Yin
Associate Professor, Computer Science
lijun@cs.binghamton.edu

Scott Craver
Associate Professor, Electrical and Computer Engineering
scraver@binghamton.edu

Grants and Awards

  • NSF, "Analyzing Face Expressions in 3D Space"
  • RI/AFOSR, "Processing Time-varying 3D Geometry"
  • Air Force Research Lab (AFRL), "Real-time Eye Tracking and Hand Tracking for Human-Computer Interaction"
  • NYSTAR James D. Watson Investigator Program, "Developing a 3D Face Modeling, Analysis and Recognition System"
  • Presidential Early Career Award for Scientists and Engineers (PECASE), "Towards a General Theory of Counterdeception"
  • AFOSR Young Investigator Award, "Identification of Secret Algorithms using Oracle Attacks"
  • AFOSR, "Estimation of Information Hiding Algorithms and Parameters"

Research Projects

AINT Steganographic Operating System

AINT, a steganographic operating system, is a virtual OS that is encrypted and scattered within ordinary files like PDF documents. When a user types a common terminal command, AINT unpacks, decrypts and runs itself, creating a local web server that serves a Web 2.0 virtual desktop to a web browser in private browsing mode. This allows the user to view and edit hidden files, and run hidden applications. The AINT architecture has the advantage that there is no suspicious software footprint that would suggest the existence of hidden files.
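For illustration only (this is not the AINT implementation), the sketch below shows the general pattern under a few assumptions: the hidden volume is a single encrypted blob appended after the carrier PDF's final %%EOF marker, the key is derived from a passphrase, and the decrypted content is plain HTML. The carrier file name and passphrase are hypothetical.

    import base64, hashlib, http.server, socketserver
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    MARKER = b"%%EOF"  # assumed convention: payload appended after the PDF's final EOF marker

    def extract_payload(carrier_path):
        """Return whatever bytes follow the last %%EOF marker in the carrier PDF."""
        data = open(carrier_path, "rb").read()
        return data[data.rfind(MARKER) + len(MARKER):].strip()

    def decrypt_payload(token, passphrase):
        """Derive a symmetric key from the passphrase and decrypt the hidden blob."""
        key = base64.urlsafe_b64encode(hashlib.sha256(passphrase.encode()).digest())
        return Fernet(key).decrypt(token)

    def serve_hidden_desktop(html, port=8000):
        """Serve the decrypted content to a browser on localhost only."""
        class Handler(http.server.BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(html)
        with socketserver.TCPServer(("127.0.0.1", port), Handler) as srv:
            srv.serve_forever()

    if __name__ == "__main__":
        blob = extract_payload("report.pdf")               # hypothetical carrier file
        html = decrypt_payload(blob, "example passphrase")
        serve_hidden_desktop(html)                         # browse to http://127.0.0.1:8000

In the real system the hidden volume is a full virtual OS rather than a single page, and the server presents a Web 2.0 desktop to a browser in private browsing mode.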

Bacterial biofilm formation and 3D structure

An ongoing Howard Hughes Medical Institute (HHMI) project with the Biology Department, featuring one undergraduate and one PhD student from each department, is using the media core to analyze microscope images of bacterial biofilms. The purpose of the project is to use image and signal processing techniques to better understand both the process of biofilm formation and the 3D structure of biofilms.

Eye and face tracking with an active camera

New eye-gaze tracking technology has been developed that does not require infrared cameras. Its distinguishing feature is that the user's 3D eyeball is estimated, along with the fovea and iris positions, to construct a viewing vector. The technology can be used in many applications, including attention estimation, fatigue detection, reading and viewing analysis, and eye-based computer interaction. This work has been supported by the Air Force Research Lab and the National Science Foundation. Three papers have been published from this work, and a patent has been filed through the Technology Transfer Office.
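The geometry behind the viewing vector can be sketched in a few lines of linear algebra. The coordinates below are hypothetical; the actual system estimates the 3D eyeball, fovea, and iris positions from camera images rather than taking them as given.

    import numpy as np

    def viewing_vector(fovea, iris):
        """Unit vector from the estimated fovea position through the iris center."""
        v = iris - fovea
        return v / np.linalg.norm(v)

    def point_of_regard(eye_point, gaze_dir, screen_point, screen_normal):
        """Intersect the gaze ray with a planar screen to find where the user is looking."""
        t = np.dot(screen_point - eye_point, screen_normal) / np.dot(gaze_dir, screen_normal)
        return eye_point + t * gaze_dir

    # Hypothetical positions in camera coordinates (metres), with the screen in the z = 0 plane.
    fovea = np.array([0.032, -0.001, 0.562])
    iris  = np.array([0.028,  0.002, 0.538])
    gaze = viewing_vector(fovea, iris)
    print("gaze direction:", gaze)
    print("point of regard:", point_of_regard(iris, gaze, np.zeros(3), np.array([0.0, 0.0, 1.0])))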

Head gesture tracking and interaction for face recognition

An active camera locates a person and automatically focuses (zooms) on the person's face for feature tracking. The 3D head pose is also estimated and tracked in real time. Head gestures are used to control and interact with machines for operations such as real-time head painting. Two papers have been published from this work.
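A common way to obtain 3D head pose from tracked facial features is a perspective-n-point fit. The sketch below uses OpenCV's solvePnP with hypothetical landmark positions and camera intrinsics; it illustrates the idea rather than the lab's actual tracker.

    import numpy as np
    import cv2

    # Hypothetical 3D landmarks on a generic head model (mm): nose tip, chin,
    # left/right eye corners, left/right mouth corners (image y axis points down).
    model_pts = np.float32([[0, 0, 0], [0, 63, -12], [-43, -32, -26], [43, -32, -26],
                            [-28, 28, -24], [28, 28, -24]])
    # Hypothetical pixel locations of the same landmarks in the current video frame.
    image_pts = np.float32([[320, 240], [320, 326], [260, 195], [380, 195],
                            [281, 279], [359, 279]])
    K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed camera intrinsics

    ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
    print("head rotation (axis-angle):", rvec.ravel())
    print("head position (mm):", tvec.ravel())
    # A nod or shake gesture can be detected by thresholding how rvec changes between frames.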

Eye and hand pointing tracking in a smart room setup

A finger-pointing system will be demonstrated. The 3D hand and finger are tracked using two regular cameras, and the finger-pointing direction controls button clicking as a natural gesture interaction intended to replace the mouse and keyboard. Applications include the next generation of human-computer interfaces and, ultimately, uses such as finger pointing as a TV remote control. This work was supported by the Air Force Research Lab, and two publications have been generated. A patent has also been filed for this work through the BU Technology Transfer Office.
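Recovering the 3D fingertip from two ordinary cameras amounts to stereo triangulation. The sketch below uses OpenCV's triangulatePoints with made-up projection matrices and pixel coordinates to show how a pointing direction could be formed from two reconstructed points; the real system also has to detect the hand and fingers in each image.

    import numpy as np
    import cv2

    def triangulate(P1, P2, px1, px2):
        """Recover a 3D point from its pixel coordinates in two calibrated cameras."""
        X = cv2.triangulatePoints(P1, P2, px1.reshape(2, 1), px2.reshape(2, 1))
        return (X[:3] / X[3]).ravel()

    # Hypothetical 3x4 projection matrices from a stereo calibration
    # (normalized cameras, second camera 0.1 m to the right of the first).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # Hypothetical pixel coordinates of the fingertip and base knuckle in each view.
    tip  = triangulate(P1, P2, np.array([0.12, 0.05]), np.array([0.04, 0.05]))
    base = triangulate(P1, P2, np.array([0.10, 0.08]), np.array([0.00, 0.08]))

    pointing = (tip - base) / np.linalg.norm(tip - base)
    print("3D pointing direction:", pointing)
    # Intersecting this ray with the display plane gives the on-screen point to "click".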

Face and facial expression recognition with adaptive avatar behavior generation

This demo showcases the newly developed facial expression recognition system. Human facial expressions (e.g., happiness, anger, sadness, surprise) are tracked and estimated in real time. The estimated expressions are used to control an avatar for expression mimicking and expression response. On the near horizon is enabling computers to recognize the user's emotional state. Automating the analysis of rapid facial behavior could help us understand human emotion; it could help design a new generation of human-computer interfaces, teach parents about their babies, reveal patients' moods to doctors and even root out liars.

There are many potential applications, including healthcare (diagnosis, pain management, communication and assistive living), security, the military, education (e.g., monitoring and analyzing students' performance in class and using advanced technology to improve learning and teaching), training, entertainment and gaming, and retail and advertising.

This work has been supported by NSF and NYSTAR. A technology disclosure has been filed with the BU Technology Transfer Office. Ten publications have been generated in top-notch conferences and journals.

3-D Mask Illusion

This is the work of a senior design team from the 2011-12 academic year. Video footage of a face is projected onto a concave face mask, creating the illusion of a face that follows the viewer.

The student team built a vacuum-forming machine, and wrote software to distort face images so that selected facial features are precisely projected upon corresponding features of a face mask. They also conducted research to analyze the cause of the face mask illusion, and its effectiveness at different ranges for both animated faces and still images.
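The feature-alignment step can be illustrated with a simple planar warp: estimate a homography from facial landmarks in the source frame to the positions those landmarks should occupy in projector space, then warp each frame before projection. The landmark coordinates, file names, and projector resolution below are made up, and the team's software may well have used a different (e.g., non-planar) distortion model.

    import numpy as np
    import cv2

    # Hypothetical pixel coordinates of four landmarks (eye corners, nose tip, chin)
    # in the source frame and in projector space once aligned with the physical mask.
    src_pts = np.float32([[210, 180], [430, 180], [320, 320], [320, 470]])
    dst_pts = np.float32([[250, 200], [390, 200], [320, 330], [320, 450]])

    H, _ = cv2.findHomography(src_pts, dst_pts)            # planar mapping: frame -> projector

    frame = cv2.imread("face_frame.png")                   # hypothetical input frame
    warped = cv2.warpPerspective(frame, H, (1024, 768))    # image as the projector should emit it
    cv2.imwrite("projector_frame.png", warped)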
Applications include interactive kiosks and possibly general-purpose back-projection displays.

Laboratory Features

Display Wall / Innovation Display

This display incorporates six projectors to produce a single large-screen display oriented toward passers-by. It uses a Microsoft Kinect device to sense people and respond to gestures. Future projects incorporating the display wall include improving gesture recognition and expanding the gesture interface to the computer; experimenting with projection materials to reduce differences in brightness; and producing gesture-based educational tools.

Movable carpet squares

Allow for the demarcation of project space within the Core

Plastic bubble covers

Students can leave unattended projects safely in place on the floor

Configurable curtains

Allow for the creation of varied-sized "rooms" within the Core

Overhead theater grid

Access to power and data in the ceiling