Image & Acoustics Signals Analysis Student Research Projects

All FRI students present their research posters at campus events. Each cohort presents two posters: the first is a pre-proposal developed in the fall first-year Research Methods Seminar, and the second reports results from the research proposed in the spring first-year course and conducted in the fall second-year course. Both posters are presented at a public session at the annual FRI Research Day in December, where each student team has the opportunity to discuss its research results with other faculty and students.

Image & Acoustics Signals Analysis Research Themes

Cohort 10 2023-24 in Fall 2023, as first-year students (pre-proposal stage)

  • A GAN-tastic transformation of visual landscapes
  • Future video frame prediction
  • Improving accuracy of AI-generated LULC maps
  • Towards creating a comprehensive deepfake dataset
  • Using social media to track changes over time

Cohort 9 2022-23 in Fall 2023, as second-year students (research results)

  • Automatic strike zone detection in baseball 
  • Black-box adversarial face transformation network
  • Investigation of racial bias in facial recognition algorithms
  • Unveiling the digital masquerade: Techniques in deepfake detection & sourcing

Cohort 9 2022-23 in Fall 2022, as first-year students (pre-proposal stage)

  • AI creating racially biased housing valuations due to unrepresentative datasets
  • Detecting misinformation about COVID-19 on Twitter using an SVM
  • Examining the security of facial recognition algorithms with adversarial attacks
  • Improving object detection for video inpainting
  • Strike zone detection based on a CNN

Cohort 8 2021-22 in Fall 2022, as second-year students (research results)

  • CopyMarth: replicating player behavior in Super Smash Brothers Melee
  • Generating NeRF-based high-fidelity head portraits
  • Text-to-image building facade synthesis
  • Transferring 2D garments onto 3D models using clothing segmentation
  • Utilizing deepfakes to anonymize children online

Cohort 8 2021-22 in Fall 2021, as first-year students (pre-proposal stage)

  • A.I. replication of human playstyles in Super Smash Bros
  • Building facade dataset synthesis
  • Generating 3D deepfakes using 2D deepfake computational methods
  • Using key point mapping and KP-VTON to estimate the fit of clothing on human models
  • Utilizing deepfake technology to protect children

Cohort 7 2020-21 in Fall 2021, as second-year students (research results)

  • Auto-generating commentary for esports
  • Deepfake video detection by analyzing facial landmark locations
  • Evaluating the impact of external stimuli in human-robot interaction
  • Integrating deep Q-learning with learning from demonstration to solve Atari games
  • Machine learning and mahjong: an exploration of AI and human-robot interaction (HRI)
  • Using facial mimicry to determine power and status differentials in group meetings over virtual platforms

Cohort 7 2020-21 in Fall 2020, as first-year students (pre-proposal stage)

  • Evaluation of deepfake detection methods to protect online identities
  • Examining the effects of facial mimicry and smiling in group conversation
  • Generating live esports commentary
  • Inferring human intention using dialogue and gestures in robot manipulation
  • Mahjong through machine learning: a comparison of reinforcement learning and supervised learning for human-robot interaction
  • Solving Atari games using machine learning

Cohort 6 2019-20 in Fall 2020, as second-year students (research results)

  • A benchmark dataset for bluff detection in poker videos using facial analysis
  • Human motion synthesis to generate dance movements using neural networks
  • Randomized and realistic 3D avatar generation using generative adversarial networks
  • Robot navigation and object detection
  • Simulation of a robotic arm to assemble a tower of unique objects

Cohort 6 2019-20 in Fall 2019, as first-year students (pre-proposal stage)

  • Bluff Detection in Poker using Micro-Expressions
  • Controlling Robot using Gestures for Delivery Tasks
  • Evaluating the Factors that Affect Trust in Human-Robot Interaction
  • Human Motion Synthesis to Generate Dance Movements using Neural Networks
  • Randomized and Realistic 3D Avatar Generation using Generative Adversarial Networks

Cohort 5 2018-19 in Fall 2019, as second-year students (research results)

  • A Simplified Approach for Falsified Video Detection
  • Assigning Autonomous 2D Navigation Goals Utilizing Eye Gaze
  • Fatigue Detection Model Using Deep and Auxiliary Facial Feature Analysis
  • Find the Litter: a semi-supervised machine learning method for automatic litter detection
  • Multimodality-Based Facial Expression Recognition

Cohort 5 2018-19 in Fall 2018, as first-year students (pre-proposal stage)

  • Effect of Gaze-Pattern-Based Stimuli on Alzheimer's Patients
  • Patient-Robot Interaction Through Affective Analysis Using a CNN
  • Targeted Lip Reading for Security Purposes Using a CNN
  • The Effects of Cosmetics on the Accuracy of Deep Learning Pixel-Based Facial-Recognition Algorithms
  • Using Convolutional Neural Networks to Detect False Emotions

Cohort 4 2017-18 in Fall 2018, as second-year students (research results)

  • Extracting and Applying Gaze Data for Gaze Pattern Identification
  • Holistic Identification of Scene Text with a General Image Classification CNN
  • Sorting Recyclable Waste to Prevent Contamination Using a Convolutional Neural Network

Cohort 4 2017-18 in Fall 2017, as first-year students (pre-proposal stage)

  • Interpreting American Sign Language With Microsoft Kinect
  • Identifying Recyclable Plastic Bottles from Aerial Drone Images Using a CNN
  • Autonomous First Responder
  • Improving Haar Algorithms for Facial Recognition
  • Aerial Image Processing for Autonomous Robot Navigation

Cohort 3 2016-17 in Fall 2017, as second-year students (research results)

  • 3D Object Detection for Visual Impairment
  • Using Artificial Occlusion to Facilitate Low-Resource Facial Recognition on Occluded Images
  • Gaze Patterns of Location Recognition/Non-Recognition
  • Facing Kinect Sensors to Differentiate Biological Gender Unobtrusively through Gait Detection

Cohort 3 2016-17 in Fall 2016, as first-year students (pre-proposal stage)

  • Autism classification through gait
  • 3D object detection and classification for the sight impaired
  • Recognizing occluded faces with deep learning
  • Gaze estimation for scene recognition

Cohort 2 2015-16 in Fall 2016, as second-year students (research results)

  • Autism classification through gaze
  • Gesture recognition for automatic sign language interpretation
  • Pose estimation for automated control

Cohort 2 2015-16 in Fall 2015, as first-year students (pre-proposal stage)

  • Identifying siblings through facial recognition software
  • Counting people in crowds using RGB-D cameras
  • Enhancing speech recognition with lip reading
  • Computer translation of child speech
  • Fingerprints are the best passwords