I’m Bhanuka Gamage, a final-year PhD candidate in the Inclusive Technologies Lab at Monash University. My research sits at the intersection of Augmented Reality, Human–AI Interaction, and Accessibility, where I design context-aware AR tools to empower people with Cerebral Visual Impairment (CVI).
Alongside academia, I've spent over eight years in industry as a Senior Machine Learning Engineer, building and scaling AI solutions, leading cross-functional teams, and delivering user-focused products. Most recently, I led the development of a smart feeding system for pets at ilume, combining trackers and smart bowls to help fight obesity in dogs. I've since wrapped up my work there to focus full-time on completing my PhD.
When I’m not in the lab, I’m usually deep in the Victorian High Country on my mountain bike—chasing trails and fresh air.
The best way to reach me is via LinkedIn.
Bhanuka Gamage, Nicola McDowell, Dijana Kovacic, Leona Holloway, Thanh-Toan Do, Nicholas Price, Arthur Lowery, Kim Marriott
ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'25)
Through co-design with people who have Cerebral Visual Impairment, this research explores smart glasses that use extended reality to support environmental perception, demonstrating how such assistive technology can aid everyday navigation and visual processing for people with brain-based vision impairments.
Bhanuka Gamage, Leona Holloway, Nicola McDowell, Thanh-Toan Do, Nicholas Price, Arthur Lowery, Kim Marriott
ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'24)
A comprehensive review of vision-based assistive technology for people with Cerebral Visual Impairment. We survey smart glasses and AI-powered solutions, identify gaps in current technologies, and present insights from our focus study to guide future assistive technology development for people with CVI.
Bhanuka Gamage, Thanh-Toan Do, Nicholas Seow Chiang Price, Arthur Lowery, Kim Marriott
ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'23)
An investigation into what blind and low-vision people want from assistive smart devices such as smart glasses. Combining a literature analysis with a focus study, we identify key requirements for assistive technology that addresses the real needs of people with vision impairments.
I’ve just rolled out a revamped site with featured publications, CVI research updates, and news—check it out!
Presented my research on AR-powered Apple Vision Pro solutions designed to help individuals with Cerebral Visual Impairment read text and interact with their environment.
Successfully completed my pre-submission milestone—the final of Monash’s three major PhD checkpoints.
Published "Broadening Our View: Assistive Technology for Cerebral Visual Impairment" at CHI '24 Extended Abstracts, highlighting the need for more assistive technology research focused on CVI.
Presented "AI-Enabled Smart Glasses for People with Severe Vision Impairments" at the ASSETS 2023 Doctoral Consortium in New York.
Most outstanding undergraduate academic performance in BCompSci (Honours).
Most outstanding graduate in BCompSci (Honours) for April 2021 graduation.
Won Gold Medal for “BaitRadar – a scalable browser extension for clickbait detection on YouTube using AI technology.”
Awarded to high-achieving students across the university.
Highest marks in MCD4700 across all Monash campuses.
Highest mark in MCD4290 across all Monash campuses.
Awarded to the top-ranked student in the Engineering – IT stream.
Selected for the ASSETS 2023 Doctoral Consortium in New York as one of only three international PhD researchers; awarded a travel grant.
Awarded scholarship for PhD in Computer Science (valued at ~AUD $200,000).
Awarded scholarship for PhD stipend (valued at ~AUD $175,000).
Awarded for excellence in college basketball.
Awarded for excellence in college basketball.
25% scholarship for college colours in basketball.
Finalist in the Tertiary Education category at the 20th APICTA competition.
A novel computer-implemented method for predicting whether a video link is clickbait using deep learning. The model is trained on six attributes of the video link: its title, thumbnail, tags, audio transcript, comments, and statistics. These attributes are fed into a deep learning network with a separate sub-network for each one: the title, tags, audio transcript, and comments each pass through an embedding layer followed by a long short-term memory layer, while the thumbnail is processed by a convolutional neural network. Since each sub-network handles a different modality, their outputs are merged through an average operator, and the deep learning model determines the weight of the video link as clickbait.
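To make the architecture concrete, here is a minimal PyTorch sketch of the kind of multimodal network the abstract describes. It is an illustration, not the actual BaitRadar implementation: the vocabulary size, embedding and hidden dimensions, the CNN layout, and the dense branch for numeric statistics are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class TextBranch(nn.Module):
    """Embedding + LSTM sub-network, used for title, tags, transcript and comments."""

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]                            # final hidden state: (batch, hidden_dim)


class ClickbaitModel(nn.Module):
    """One sub-network per attribute, merged through an average operator."""

    def __init__(self, out_dim=64, n_stats=5):
        super().__init__()
        # four embedding+LSTM branches, one per textual attribute
        self.title = TextBranch(hidden_dim=out_dim)
        self.tags = TextBranch(hidden_dim=out_dim)
        self.transcript = TextBranch(hidden_dim=out_dim)
        self.comments = TextBranch(hidden_dim=out_dim)
        # convolutional sub-network for the thumbnail image
        self.thumb = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )
        # dense branch for numeric statistics (views, likes, ...); the abstract
        # does not say how statistics are encoded, so this is an assumption
        self.stats = nn.Sequential(nn.Linear(n_stats, out_dim), nn.ReLU())
        self.head = nn.Linear(out_dim, 1)

    def forward(self, title, tags, transcript, comments, thumbnail, stats):
        branches = [
            self.title(title), self.tags(tags),
            self.transcript(transcript), self.comments(comments),
            self.thumb(thumbnail), self.stats(stats),
        ]
        # merge the six modality outputs with an element-wise average
        merged = torch.stack(branches).mean(dim=0)
        # weight of the video link as clickbait, squashed to [0, 1]
        return torch.sigmoid(self.head(merged))
```

One consequence of merging with an element-wise average is that every sub-network must emit vectors of the same size, which is why all branches above share out_dim.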