Growing up in the 1980s, Hae Won Park enjoyed watching Astro Boy, a cartoon series about a schoolboy robot with extraordinary strength, the ability to fly, and a well of mainly positive human emotions. "He was portrayed as a friend, the children's friend," Park said of the show’s main character. Park is an Amazon Visiting Academic (AVA) with Amazon’s Lab126, the company’s research and development center.
Then the Star Wars movies burst into Park’s consciousness, firing her young imagination with their robots and androids.
"They're very expressive, very emotionally engaged, very social, and able to build really tight relationships with humans,” Park said of R2-D2, C-3PO, and Co. “I decided … I was interested in building something that could become a companion.”

A robot with personality

Park has pursued that goal throughout her journey in higher education, most recently as a research scientist at the Massachusetts Institute of Technology’s (MIT) Personal Robots Group, where she develops “interactive social machines” that can be personalized to meet the needs and preferences of their users in a relational way.
"Amazon has put a lot of effort and emphasis on ethical practices for developing technology and protecting people’s privacy. Providing transparency around how the information is used is essential for building the trust and confidence consumers need to make the most of these social devices."

Dr. Hae Won Park

Amazon Visiting Academic with Amazon’s Lab126
Now on sabbatical from MIT, Park is working with Amazon, helping to make interactions with Amazon’s first household robot more seamless and meaningful. As an AVA, Park has joined a roster of distinguished academics specializing in economics, applied mathematics, and computer science. These thought leaders are embedded with Amazon teams, working to help solve specific technical problems while advising colleagues on methodology and lines of inquiry.
Park was drawn to the company by consumer reaction to Astro, which was introduced in 2021.
“I could see from customer feedback that people really want to build relationships with the robot,” she said. “To provide that customer experience, the robot has to have a very rich character that’s transparent and easy to understand.
“[So] I’m working on bringing Astro users a closer relationship with the robot,” she continued. “We've heard from customers that they want a richer relationship with their home robots—and we are committed to working on the features that our customers want. Ultimately, we aim to build technology that adapts to the users' wants and needs, not vice versa.”

The user’s pet

Park said she is interested in “human flourishing”—the beneficial results of interaction with consumer robots that “push people to develop themselves.” As she sees it, it’s key that consumer robots “become sort of like pets,” with personalities that can teach and encourage, as well as serve.
Part of that bond involves trust and privacy. Park said that Astro users should feel confident that—despite the robot’s cameras and microphones—their privacy is respected.
“It’s a sensor-rich system,” she explained. “But Amazon has put a lot of effort and emphasis on ethical practices for developing technology and protecting people’s privacy. Providing transparency around how the information is used is essential for building the trust and confidence consumers need to make the most of these social devices."

Readable robots

At MIT, and now at Amazon, Park and her colleagues develop theories about mental processes and translate them into computational models that can be embedded in social machines. Drawing on inputs such as users’ speech, gestures, facial expressions, and intonation, these models are designed to help the machines make appropriate decisions based on their role, the context of the interaction, the information they need to deliver, and to whom. Over time, and with repetition and exploration, these social machines can learn from natural feedback and build relationships with users.
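To make the idea concrete, here is a minimal, hypothetical sketch of such a loop: multimodal cues feed a simple decision policy, and user feedback nudges the robot toward the responses a particular person prefers. The classes, functions, and cue names below are illustrative only, not Park’s or Amazon’s actual code.

# Illustrative sketch only: a toy loop showing how multimodal social cues
# might feed a decision policy that adapts from user feedback.
# All names here are hypothetical, not drawn from Park's or Amazon's systems.
from dataclasses import dataclass, field

@dataclass
class SocialCues:
    """A snapshot of signals a social robot might perceive."""
    speech: str       # transcribed utterance
    gesture: str      # e.g. "wave", "point", "none"
    expression: str   # e.g. "smile", "frown", "neutral"
    intonation: str   # e.g. "rising", "flat"

@dataclass
class UserModel:
    """Per-user preferences the robot refines over repeated interactions."""
    preferred_responses: dict = field(default_factory=dict)

    def update(self, cue_key: str, response: str, feedback: float) -> None:
        # Positive feedback reinforces the chosen response for this cue.
        scores = self.preferred_responses.setdefault(cue_key, {})
        scores[response] = scores.get(response, 0.0) + feedback

def choose_response(cues: SocialCues, user: UserModel) -> str:
    """Pick a response: a learned preference if one exists, else a simple default."""
    cue_key = f"{cues.expression}:{cues.intonation}"
    learned = user.preferred_responses.get(cue_key)
    if learned:
        return max(learned, key=learned.get)
    if cues.expression == "smile":
        return "nod_and_greet"
    if cues.intonation == "rising":
        return "offer_help"
    return "acknowledge"

if __name__ == "__main__":
    user = UserModel()
    cues = SocialCues("good morning", "wave", "smile", "rising")
    response = choose_response(cues, user)
    print("robot response:", response)
    # The user reacts warmly, so the robot reinforces this choice for next time.
    user.update("smile:rising", response, feedback=1.0)

The point of the sketch is the shape of the loop, not the specifics: perception produces cues, a policy maps cues to behavior, and repeated feedback personalizes that mapping to an individual user over time.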
In the big picture, Park believes character and personality are some of the most important traits of a consumer robot. Unlike single-task-oriented industrial robots, consumer-friendly “embodied agents” have to convey meaning that’s in some sense “readable” to users.
Robots also have to translate human cues into appropriate and reliable reactions, and cope with change. To study that, Park has organized her work into three categories of inquiry: how the robot perceives and responds to social cues, how it learns and becomes personalized to its users over time, and how it engages with users in one-on-one situations.
In particular, Park is drawn to robots that could support childhood education, encourage behavior change, and assist with eldercare. Traits that go beyond task fulfillment, such as personality and even cuteness, are essential to providing this support.

This robot is more than its character

The impact of cuteness on the human brain is profound. In 1949, zoologist Konrad Lorenz posited that baby-like features such as big eyes, prominent craniums, and receding chins make adults feel positively engaged with the possessors of such traits. In theory, this response benefits humankind because it prompts adults to protect and nourish infants and young children when they are at their most vulnerable.
But cuteness alone is not enough to cement lasting relationships. For Park, engagement with a home robot like Astro goes beyond big eyes or tiny chins. It’s also linked to perceptions of responsiveness and growth. Just as a baby will squeal with delight when seeing a parent, a robot’s nod, grin, or raised eyebrow bolsters the user’s willingness to engage with the machine on a long-term basis. Those responses elicit the feeling that the machine, slowly but surely, is learning about you—and growing its behavior around you.
In essence, the robot rewards the user with recognizably appropriate responses.
“When you generate the right cues in the device by understanding how humans generally react to a growing relationship, it generates a different level of engagement in people,” Park said. “They're curious about what the device can do and how it will adapt to them, and they get excited when the agents generate expressions that make them kind of like animated characters, only in real life—and as your companion robot.”