The project explored the use of assistive robots in smart home environments, with the intent of applying them to home-based assistive care and healthcare applications.
Initial user research was required to understand human-robot interaction preferences in smart home environments.
Phase 1: Robot Task @ Smart Homes
Recruitment & Screening
Target users were recruited through TechSage, a research institute at Georgia Tech. Potential participants were reached through emails, text messages, and phone calls.
To screen for qualified participants from the pool, a screener survey was distributed both online (completed by participants directly) and over the phone (read aloud by me and filled in based on participants' answers).
📝 Check out the Screener Survey→
Focus Group/ Think Tank
To understand people's perspectives on having an assistive robotic agent at home, and how robots could be integrated with smart home devices, focus groups were conducted to collect user opinions through group discussions and a brainstorming activity.
🧩 The springboard brainstorming method was used.
Participants were given the statement "If I (or my family member) _____, I wish the robot _____" to help them brainstorm robot tasks in smart homes.
To accommodate COVID health guidelines and concerns, focus groups were conducted both in person and online via video conferencing.
🔎 3 sessions x 4-5 people/session were conducted.
- Ice-breaker and warm up discussion
- Brainstorm #1: general home scenario
- Brainstorm #2: assistive population home scenario
- Summarize and end study
7 different scenarios were used in each focus group session to collect as many distinct tasks as possible.
Focus Group Activity
210 tasks were collected and coded by task category, drawing on 214 IW ("I wish") statements and tasks mentioned in the warm-up discussions. Tasks were coded at three levels:
- Primary task category:
Chores, cognitive assistance, emotional support, entertainment, health monitoring, security and safety
- Secondary task action:
Information management, fetch object, emergency notification, task reminder, etc.
- Tertiary task description:
Clean and wipe, detect supply shortage and auto-refill, turn off devices, etc.
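The three-level coding scheme above can be sketched as a simple data structure. A minimal sketch in Python, where the category names come from the study but the example tasks, their pairings across levels, and the class layout are illustrative assumptions rather than the actual codebook:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class CodedTask:
    """One coded task, labeled at all three levels."""
    primary: str    # task category, e.g. "Chores"
    secondary: str  # task action, e.g. "fetch object"
    tertiary: str   # task description, e.g. "clean and wipe"


# Hypothetical coded tasks (pairings across levels are assumed).
tasks = [
    CodedTask("Chores", "cleaning", "clean and wipe"),
    CodedTask("Security and safety", "emergency notification",
              "turn off devices"),
    CodedTask("Cognitive assistance", "task reminder",
              "detect supply shortage and auto-refill"),
]

# Tally tasks by primary category, as one might when summarizing codes.
by_category = Counter(t.primary for t in tasks)
```

Keeping the levels as separate fields makes it easy to aggregate the collected tasks at any granularity when summarizing the focus group data.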
📝 Check out the Focus Group Outline→
Phase 2: Human Commands For Robots
After coding tasks based on both human preferences and robot capabilities, key tasks were selected for collecting users' verbal command utterances to support the robot's natural language processing.
Participants were recruited from Phase 1's participant pool.
Unlike a traditional interview, the purpose of each session was to collect the verbal wordings and sentences people use for target daily tasks, to help train the robot to translate natural-language queries into known system specifications.
🤔 We wanted to understand how people request that the robot complete certain tasks when smart home devices send out status alerts or ask for actions.
To collect verbal commands, we aimed to capture the keywords (slots) and sentence structures (utterances) that people use when sending requests through a voice UI.
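To make the slot/utterance distinction concrete, here is a minimal sketch of how a collected utterance might be annotated with slots for NLU training. The slot names, patterns, and example utterance are hypothetical illustrations, not data or tooling from the study:

```python
import re

# A hypothetical collected utterance.
utterance = "Robot, please turn off the kitchen lights"

# Assumed slot vocabularies for illustration only.
slot_patterns = {
    "action": r"\b(turn off|turn on|fetch|remind)\b",
    "device": r"\b(lights|thermostat|oven)\b",
    "location": r"\b(kitchen|bedroom|living room)\b",
}

# Extract the first match for each slot from the utterance.
slots = {}
for name, pattern in slot_patterns.items():
    match = re.search(pattern, utterance.lower())
    if match:
        slots[name] = match.group(0)

# slots -> {"action": "turn off", "device": "lights", "location": "kitchen"}
```

The full utterance supplies the sentence structure, while the extracted slots carry the task-specific keywords; collecting both lets the system map many phrasings onto the same known system command.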
🔎 12 participants were interviewed.
5 scenarios were given to help participants visualize human-robot interaction cases, and each participant was prompted to verbalize how they would request the task, through which device, and their expectations for possible failures.
Sample Scenario Prompts
Utterances for each scenario were collected and organized by action type and task.
📝 Check out Interview Outline→