Human Machine Lab

The Human Machine Lab is an interdisciplinary research laboratory with an overarching yet practical ambition: computer systems must be designed around human needs and capabilities while maintaining human-level intelligence. Its projects span fields including human-computer interaction, usable security, privacy, and artificial intelligence. Three recent research programs are highlighted below:

Cybersecurity
(Led by Dr. Miguel Vargas Martin)

This research program analyzes the memorability of system-assigned passwords at the time of creation by studying the brain waves generated when a user sees a password for the first time. It also investigates authentication techniques that leverage implicit learning phenomena from psychology.

Data Privacy in Companion Robots and Smart Toys
(Led by Dr. Patrick Hung)

This program studies a privacy protection framework for companion robots and smart toys. A companion (social) robot is a device consisting of a physical humanoid robot component that connects through a network infrastructure to Web services that enhance traditional robot functionality. In this context, a smart toy is defined as a device consisting of a physical toy component that connects to one or more toy computing services to facilitate gameplay in the cloud. The objective of this research is to build a theoretical and technical data privacy protection engine for culture-aware robots and smart toys that enables users to stay in control of their privacy by specifying their privacy preferences for Human-Robot Interaction (HRI).
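To illustrate the idea of user-specified privacy preferences, the sketch below shows a minimal, hypothetical deny-by-default preference check. The `Request` type, the preference dictionary layout, and the data-type and purpose names are all illustrative assumptions, not the lab's actual engine or policy language.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A hypothetical data-collection request from a toy or robot."""
    data_type: str  # e.g. "audio", "video", "location" (assumed categories)
    purpose: str    # e.g. "gameplay", "analytics" (assumed purposes)

def is_allowed(request: Request, preferences: dict) -> bool:
    """Permit a request only if the user's preferences explicitly allow
    this data type for this purpose; everything else is denied."""
    allowed_purposes = preferences.get(request.data_type, set())
    return request.purpose in allowed_purposes

# A user permits audio collection for gameplay only, and nothing else.
prefs = {"audio": {"gameplay"}, "location": set()}
print(is_allowed(Request("audio", "gameplay"), prefs))   # True
print(is_allowed(Request("audio", "analytics"), prefs))  # False
print(is_allowed(Request("video", "gameplay"), prefs))   # False
```

Deny-by-default keeps the user in control: any data type or purpose not listed in the preferences is refused, rather than silently permitted.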

Intelligent Decision Systems
(Led by Dr. Amirali Abari)

This program designs, explores, and extends the capabilities of intelligent decision systems that aim to ease, assist, or automate human decision making. A central task in any intelligent decision system (e.g., a recommender system) is learning the user preferences upon which decisions are ultimately made. Our approach to preference learning requires as little explicit information as possible from users, which makes intelligent decision systems practical and helps preserve individuals' privacy. This approach is feasible because of the increasing availability of user behaviour data generated through online social networks, e-marketplaces, and other Web and mobile applications. We are currently developing probabilistic models and machine learning algorithms for preference learning.
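As a toy illustration of learning preferences from implicit behaviour rather than explicit ratings, the sketch below fits a standard Bradley-Terry model to pairwise "chose A over B" observations with gradient ascent. This is a generic textbook technique used here only as an example; the function, data, and hyperparameters are assumptions, not the program's actual models.

```python
import math

def learn_preferences(comparisons, n_items, lr=0.1, epochs=200):
    """Fit per-item utility scores from (winner, loser) pairs under a
    Bradley-Terry model, maximizing log-likelihood by gradient ascent."""
    u = [0.0] * n_items
    for _ in range(epochs):
        for winner, loser in comparisons:
            # P(winner preferred to loser) = sigmoid(u[winner] - u[loser])
            p = 1.0 / (1.0 + math.exp(u[loser] - u[winner]))
            # Gradient step pushes the two utilities apart when p < 1
            u[winner] += lr * (1.0 - p)
            u[loser] -= lr * (1.0 - p)
    return u

# Implicit feedback: item 0 was chosen over 1 and 2, and 1 over 2.
comparisons = [(0, 1), (0, 2), (1, 2)]
scores = learn_preferences(comparisons, n_items=3)
ranking = sorted(range(3), key=lambda i: -scores[i])
print(ranking)  # [0, 1, 2]
```

The only input is observed choices, consistent with the goal of asking users for as little explicit information as possible.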
