Will Robots Find a Place in the Smart Home?

For developers and engineers, emerging “social robot” applications are still a work in progress.

Fig. 2: Maytronics’ Dolphin Nautilus CC Plus service robot uses AI and a sophisticated navigation system to quickly clean pools. The system learns the layout of a pool and automatically remembers the location of obstacles, performing its tasks efficiently. Image courtesy of Maytronics.


What is a domestic robot? What roles will these machines play in the evolving smart home ecosystem? What can current technology do, and what tasks lie beyond its reach?

To answer these questions and better understand these robots, engineers should start by dismissing most of their preconceptions about these machines. These robots are works in progress. In fact, developers have only just begun to define the robot’s role in the smart home. As for appearance, there are probably as many form factors as there are design teams.

Although the designers’ imagination plays a part in defining these robots, more influential is the emergence of several powerful technologies. The almost steady stream of advances in sensing, artificial intelligence (AI) and voice and facial recognition are not only shaping designers’ ideas of what these machines will be (Fig. 1), but they are also constantly forcing engineers to recalibrate benchmarks of the robots’ capabilities and limitations.

Fig. 1: The temi home robot combines advanced sensing, Amazon’s Alexa virtual assistant technology, and AI-enabled facial and voice recognition to deliver a mobile smart home hub that promises to take human-machine interaction to a new level. Image courtesy of temi USA.

Multiple Personalities

One vision of this technology divides the robots into two types of systems: service robots and social robots. In both cases, these machines can be part of the drive toward greater automation in the home.

Within the context of the smart home, service robots provide utility functions on behalf of their owners (Fig. 2). Current examples include iRobot’s Roomba, Husqvarna’s Automower robot lawn mower and Maytronics’ Dolphin Nautilus CC Plus robotic pool cleaner.

Smart home technology developers, however, have grander plans, in which they foresee robots growing in complexity and value. They anticipate that these machines will eventually assume such household tasks as cooking and serving meals.

Social robots, on the other hand, will be the soul of home systems, providing owners with some level of empathetic interaction (Fig. 3).

Fig. 3: Intuition Robotics’ ElliQ robot aims to provide older adults living alone with social companionship via multimodal interaction. This bot intuitively engages users, based on contextual understanding of its operating environment and emotional awareness of its user. ElliQ responds to voice commands and offers proactive notifications and advice, such as medication reminders. Image courtesy of Intuition Robotics.

“[These robots] will provide a single interface of interaction for the owner, using multi-modality, context and behavior models,” says Dor Skuler, CEO of Intuition Robotics. “They will anticipate their needs, proactively suggest utility based on context, get to know how and when we like to get things done and direct the service robots accordingly.”

An interface and control unit can take the form of an overall system hub. “We have found that robots make the most sense as smart home hubs—intelligent machines not only capable of controlling existing smart home technologies, but also having the ability to predict the environmental needs of its owner,” says Joe Taveras, marketing specialist for temi USA.

As smart home hubs, social robots also bridge the gap between smart home cyber systems and their owners.

“We need someone to tell our ‘workers’ what to do,” says Nir Frenkel, head of engineering at temi USA. “This is where digital assistants come into play. You can consider digital assistants as a translator of human speech to computer speech.”
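The “translator” role Frenkel describes can be caricatured as a mapping from free-form speech to a structured device command. The sketch below is a deliberately simplified, keyword-based version (the intents and device names are invented for illustration; real digital assistants use trained language models, not hand-written rules):

```python
# Hypothetical sketch of "human speech to computer speech" translation:
# map a spoken phrase to a structured (device, attribute, value) command.
# Real assistants use trained NLP models rather than keyword rules.

INTENT_RULES = {
    "turn on": ("power", "on"),
    "turn off": ("power", "off"),
    "dim": ("brightness", "low"),
}

DEVICES = {"light", "thermostat", "fan"}

def parse_command(utterance: str):
    """Return (device, attribute, value), or None if not understood."""
    text = utterance.lower()
    for phrase, (attribute, value) in INTENT_RULES.items():
        if phrase in text:
            for device in DEVICES:
                if device in text:
                    return (device, attribute, value)
    return None

print(parse_command("Please turn on the light"))  # ('light', 'power', 'on')
print(parse_command("dim the fan a bit"))         # ('fan', 'brightness', 'low')
```

The returned tuple is the “computer speech”: a structured command a smart home hub could route to the right service robot or device.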

A Richer Mix of Functionality

The incorporation of digital assistant technology positions social robots to assume a unique set of roles in the smart home.

The development of natural language processing algorithms, far-field microphone arrays, echo and noise cancellation, and voice recognition is essential for many of the functions these robots perform, but one crucial ingredient is still missing.

Add mobility to the mix, and you have what promises to be a winning combination. According to the ABI Research report “Smart Home Robotics,” articulation and mobility promise to open the door for broader use of robots in the home. Adding robotic functions to existing voice-control front-end devices affirms activation and engagement through physical movement or simulated facial expressions.

Now integrate these mobile machines with the other smart home technologies, and you take home automation to a whole new level.

“Smart home devices like thermostats, cameras, motion detectors and so on have zero or almost zero use when an ‘activator’ is not present,” says Frenkel. “Most ‘activators’ we have in our houses are smartphones, smart speakers and smart displays. By integrating robots into our homes, we are adding another dimension to our smart home: movement, or more to the point, autonomous movement. Being able to move stuff around your house [open a door or flick a light switch on or off] with voice activation is an amazing step up for home and business automation. Even though this technology is in its diapers, this is a start of something amazing.”

The Robot’s Senses

Engineers developing robots for the smart home must contend with a dynamic and sometimes unpredictable environment. One way design teams plan to meet the challenges presented by this family of applications is through extensive use of sensor technologies. Passive sensors, such as cameras and microphones, are more affordable than ever. As a result, they have made their way into most smart devices and robots.

These passive technologies provide a foundation for robot sensing, but designers increasingly are turning to more sophisticated and complex technologies. For instance, the market offers specialized active sensors—such as radar, LiDAR, time-of-flight cameras and rangefinders—that produce accurate readings of specific metrics.

These sensors produce large amounts of unstructured data, which has historically been difficult to convert into usable information. The emergence of new technologies, however, has provided a way around this obstacle, opening the door to a broader assortment of sensing technologies.

“When these sensors are paired with the latest advances in compute capabilities, such as NVIDIA’s Jetson product line and cutting-edge AI, such as deep neural networks, it becomes possible to structure the information so as to extract troves of useful features,” says Carlos Asmat, lead robotics engineer at Piaggio Fast Forward. “Given the increasing abundance of computational power, it becomes more advantageous to use sensors that can capture as much information as possible while remaining affordable.”
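As a toy illustration of turning raw sensor readings into structured information a robot can act on, the sketch below folds simulated LiDAR-style (angle, range) readings into a coarse occupancy grid. All the numbers and grid dimensions here are invented; production systems fuse many sensors through the deep networks Asmat describes:

```python
import math

# Hypothetical sketch: convert raw (angle, range) readings into a coarse
# 2-D occupancy grid -- one simple way "unstructured" sensor data becomes
# structured information a robot can plan against.

GRID_SIZE = 10   # 10 x 10 cells
CELL_M = 0.5     # each cell is 0.5 m square; the robot sits at grid center

def build_grid(readings):
    """readings: iterable of (angle_rad, range_m); returns occupied cells."""
    occupied = set()
    for angle, dist in readings:
        x = dist * math.cos(angle)   # project the hit point into x/y space
        y = dist * math.sin(angle)
        col = int(x / CELL_M) + GRID_SIZE // 2
        row = int(y / CELL_M) + GRID_SIZE // 2
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            occupied.add((row, col))
    return occupied

# A wall roughly 1 m ahead of the robot shows up as occupied cells.
scan = [(0.0, 1.0), (0.1, 1.0), (-0.1, 1.0)]
print(build_grid(scan))
```

Even this crude grid is already “structured” enough to support obstacle avoidance; richer features (object identity, user location) come from layering learned models on top.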

Despite these advances, sensing technologies currently supporting smart home robots represent an early, transitional stage of development. Designers and users alike can expect steady change in this sector.

“As technology matures, we can add more and more senses to the robot—many senses that humans don’t have—and allow the decision-making and cognitive algorithms to be smarter and more accurate,” says Skuler. “Recent examples include radar that can see through walls, emotion recognition, dangerous gas detection, infrared vision, gyroscopes and thermal vision.”

Building Spatial and Contextual Awareness

A rich repertoire of sensing capabilities aims to give machines a spatial and contextual awareness of their operating environment that rivals, if not surpasses, that of the humans living there.

“People share a mental model of their environment,” says Asmat. “For instance, we all have a concept of what is the interior as opposed to the exterior of our homes, the state of our appliances and devices, colors and shapes, to name a few. If robots want to better interact with people, they need to share this understanding of the environment and communicate in terms the user can relate to and understand. Achieving this results in more intuitive interfaces, allowing users to leverage their knowledge of familiar concepts and objects to interact with novel devices.”

Techniques such as object recognition using neural networks, simultaneous localization and mapping, age and gender detection, emotion recognition and natural language processing play key roles in implementing these intuitive capabilities.

On a more basic level, robots must be able to navigate and operate in the home. Developers agree that improving a robot’s ability to navigate and understand its direct environment—providing sensing technologies that enable obstacle avoidance and the ability to distinguish one room from another—is crucial for establishing the value of domestic robots (Fig. 4). Achieving this objective, however, remains daunting.

Fig. 4: The temi robot’s ROBOX navigation system incorporates LiDAR, depth and RGB cameras, an inertial measurement unit, and proximity and time-of-flight linear sensors. This instrumentation allows the robots to perform 3D mapping, user detection and tracking, obstacle avoidance and path planning. Image courtesy of temi USA.
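At its simplest, the obstacle avoidance and path planning described above reduce to search over a map like the one a robot builds from its sensors. The sketch below runs a breadth-first planner over a tiny hand-made grid (the map and coordinates are invented for illustration; real navigation stacks plan over maps produced by SLAM):

```python
from collections import deque

# Toy sketch of grid-based path planning with obstacle avoidance.
# 0 = free cell, 1 = obstacle. A real robot plans over a map built
# by SLAM, not a hard-coded grid like this one.
GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    """Breadth-first search; returns a list of cells, or None if unreachable."""
    rows, cols = len(GRID), len(GRID[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

print(shortest_path((0, 0), (3, 3)))
```

The hard part in a home is not the search itself but keeping the map current as furniture, pets and people move, which is why the sensing suite in Fig. 4 is so extensive.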

“When designing robots, we always compare it to the human body—depth sensors and cameras instead of eyes, microphones for ears, motors for muscles and wires for veins and nerves,” says Frenkel. “That being said, even with these artificial body parts, it is very hard to teach a robot that a door is a door, [and] a bedroom is a place where humans sleep.”

Robots, however, have one big advantage over humans. They are constantly connected to the world’s greatest data source: the internet. Using the internet and other Internet of Things (IoT) devices, robots can gain a sense of spatial awareness that human eyes, ears and nerves cannot detect.

Robots Get Smart

Armed with a growing assortment of sensing technologies, robots can gather a broad variety of information, such as user motion, appliance usage and house demographics.

Add to that the information and context gleaned from vocal expression, body language and facial expressions. For smart home robot designers, the challenge has been converting this data into functions like voice and facial recognition, spatial and contextual awareness, and empathetic interaction. A relatively new technology, however, is providing engineers with the means to solve this problem.

“AI and machine learning tools help aggregate and extract meaning from the rich sensor information,” says Asmat. “By analyzing the user’s commands in a fuller context, the user’s intentions can be more accurately and readily assessed. As such, a larger communication throughput is achieved between the user and the machines without requiring more effort from the user.”

The process of extracting meaning from sensor data has been enhanced by a field of AI called cognitive agents, or cognitive AI, which provides a useful model for robot-human interaction.

“Instead of being a one-direction, command-based, interaction, these agents know how and when to proactively initiate interactions with the user,” says Skuler. “These agents sense the environment using input sensors such as voice and facial recognition, understand the context and make dynamic cognitive decisions in real time to determine what is the best course of action.”
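The sense-and-decide loop Skuler describes can be caricatured as a function from sensed context to a proactive action. The rules and context fields below are entirely hypothetical, chosen to echo the ElliQ medication-reminder example; production agents learn such policies rather than hard-coding them:

```python
# Hypothetical sketch of a proactive cognitive agent: it senses context,
# then decides whether to initiate an interaction -- rather than waiting
# for a one-direction user command.

def decide_action(context):
    """Map sensed context to the agent's next action (rule-based caricature)."""
    if context.get("medication_due") and context.get("user_present"):
        return "remind_medication"
    if context.get("hours_since_interaction", 0) > 6 and context.get("user_present"):
        return "suggest_conversation"
    if not context.get("user_present"):
        return "stay_idle"
    return "await_command"

print(decide_action({"user_present": True, "medication_due": True}))
```

Note that two of the four outcomes are agent-initiated; that proactivity, grounded in context, is what separates a cognitive agent from a command-and-response assistant.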

As a result, developers have been able to impart robots with the ability to generate personalized experiences. This has helped robots better meet consumers’ expectations.

Smart Home Robots Today and Tomorrow

Any appraisal of domestic robots’ prospects has to begin by emphasizing that these machines are still in the early stages of development. Practically speaking, this early stage shapes both what developers are trying to do with the technology and the importance of cost and consumer acceptance.

“The current generation of personal robots is more of an exploration of the effects of mobility and physical expressivity than well-established consumer products,” contends Asmat. “How their added features benefit or detract from a virtual assistant is for the consumer to decide. So far, it seems that most consumers prefer the reduced cost of a purely virtual assistant, and they perceive the benefits of the added physical expressivity as a luxury addition.”

As the technology evolves, however, the level of robots’ physical interaction with the real world will likely become more significant and begin to outweigh the cost of the required hardware.

“Based on the trajectory of technological innovation in the robotics space, we can predict that smart home applications will greatly involve the ability of the robot to effectively physically manipulate aspects of an environment,” says Taveras. “This will, however, require significant advances in sensor technologies.”

At that point, most developers expect the roles of smart home robots to evolve beyond what we can currently comprehend. Advances in this space will only add to the value of the robots.



About the Author

Tom Kevan

Tom Kevan is a freelance writer/editor specializing in engineering and communications technology.
