Generally, my research lies at the intersection of Human-Computer Interaction (HCI) and Machine Learning (ML). In particular, I investigate how ML can be used to enhance existing interaction techniques and create novel ones for devices such as wearables, smartphones and tablets. In the past, I investigated an interaction technique that uses the back and sides of a device to sense and model users’ grip patterns, with the aim of improving touchscreen target acquisition, identifying users and detecting cognitive errors.
Recently, my interests have shifted towards computer vision, particularly image processing. I am currently working with my PhD student to investigate deep learning techniques for estimating tree species from high-resolution canopy UAV imagery. I am also working with my students on detecting cancerous cells in computed tomography (CT) scan images. Besides ML and deep learning, I also work in a number of classic HCI areas, such as interaction techniques (particularly touch gestures) and usability testing.