
Automate your home with Kasa

Designing the Kasa core automation system from scratch

ABOUT MY ROLE

Before I joined the project, people at TP-Link had already started imagining and drafting solutions for this concept. I took what past designers and design interns had delivered and tested different solutions to understand the mindset of potential users. After finding the right direction, I narrowed down the scope and delivered a simple but easily scalable solution for final implementation.

OVERVIEW

A small step with big potential for the Kasa ecosystem

Automatic rules are a great feature, but they can be complicated for users to set up, and users sometimes encounter unexpected device behavior. Either what users set up differs from what the backend actually does, or users don't clearly understand the system and make the wrong decision. We started by surveying existing solutions in the market and then thought about what simple solutions we could offer. Like ordering food at Chipotle, users can create a “Smart Action” that guides them through setting up their automatic rules step by step. We called this design a “kids menu” because it presents a comprehensive yet simple configuration process. The feature lets users' devices work automatically at a scheduled time, in response to a sudden event, or under a particular scenario.

BEFORE STARTING

Why is automation important to users?

Imagine you have a bunch of smart devices; at times you will want them to interact with each other. With voice control like Amazon Echo, users can easily ask their smart devices to change to a preferred state right away. However, voice control doesn't address some scenarios: users may not be physically at home, they may want to maintain a continuous home state, or they may forget to react when something has already happened. Smart Action is an automatic rule that helps users reach those goals; it handles daily routines automatically without any effort from users. The primary focus is the security use case, later extending to entertainment and other scenarios. The ultimate goal is to expand the concept into a home assistant or housekeeper for users.

Automation can provide value to smart home users

EXISTING SOLUTIONS

What solutions have people already built in the market?

Many solutions already exist in the market. The basic concept revolves around “if something happens” then “do these actions.” Where they differ is in how they guide users to set up a rule successfully and whether users can understand and predict the rule they created.

 
 

Yonomi:
Trigger + Condition + Action 

Yonomi focuses on a “Connect with Life” concept to help users automate their smart devices. The app uses “when” (main event) and “only if” (conditions) to clearly specify what makes a routine happen. When users set up a new routine, they first see all the elements they can insert, which gives them an overview of the configuration.

 
 

Stringify:
Element Selection

Stringify has the same focus as Yonomi: it helps users connect all their physical devices and digital things in one place. When users start setting up a flow, they first pick the elements they want and then build the logic around them. The app uses a visual canvas to help users understand the relationships among their smart devices and other native elements.

 
 

Wink Robot:
If + Then 

Wink is a smart home platform that aims to control connected products from users' favorite brands in one app. The Wink Robot concept is similar to Yonomi's, except the app uses only “If” rather than separating trigger and condition to specify how the automation happens. More information can be found on its website.

 
 

Thington:
Chat-bot with step-by-step

Thington highlights its ability to connect smart home devices, though the app goes beyond that concept. It leads users through the whole process purely by chatting; the chatbot is its key feature, giving users a conversational experience throughout the app.

 

FIRST ITERATION

Can people differentiate trigger and condition?

The product requirements document from our product managers specified the details of triggers, conditions, and actions in the initial stage. This model is similar to Yonomi's.

 

“I want to do X [Action] whenever Y [Trigger] occurs

but only if Z [Conditions] are met”

 
 

Triggers

Triggers are things that happen at a moment in time.

Conditions

Conditions are states that persist for a period of time. The system can test them to be true or false.

Actions

Actions are instantaneous. They are the outputs of a rule. Actions get the stuff done that the user wants done.
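To make the model concrete, the trigger/condition/action relationship can be sketched as a small data structure. This is an illustrative sketch only, not Kasa's actual implementation; every name here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    """Hypothetical automation rule: one trigger, plus conditions and actions."""
    trigger: str                                                        # momentary event name
    conditions: List[Callable[[], bool]] = field(default_factory=list)  # states tested true/false
    actions: List[Callable[[], None]] = field(default_factory=list)     # outputs of the rule

    def on_event(self, event: str) -> bool:
        """Fire the actions only if the event matches the trigger
        and every condition currently holds."""
        if event != self.trigger:
            return False
        if not all(check() for check in self.conditions):
            return False
        for act in self.actions:
            act()
        return True

# Example: turn on the porch light when motion is detected, but only after sunset.
log = []
rule = Rule(
    trigger="motion_detected",
    conditions=[lambda: True],                 # stand-in for "is it after sunset?"
    actions=[lambda: log.append("porch_light_on")],
)
rule.on_event("door_opened")                   # wrong trigger: nothing happens
rule.on_event("motion_detected")               # fires: log now holds "porch_light_on"
```

The sketch mirrors the definitions above: the trigger is a momentary event, conditions are state checks that gate the rule, and actions are the instantaneous outputs.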

 
 
However, our team debated whether users can clearly classify triggers versus conditions. Although we can differentiate the two as a moment versus a period of time, users might have to think hard during setup.

 

Examples of Triggers

Examples of Conditions

 
 

Overview vs. step-by-step?

There was a discussion about how to guide users through configuration. Showing overview setup options, as Yonomi and Wink Robot do, helps users understand the relationships among selected elements and edit components directly. However, it can feel a bit geeky, especially for novice users.

On the other hand, a step-by-step process is friendlier at guiding users to complete the configuration successfully, but it might take longer. Also, deciding which element users should pick first is another challenge, because users have different mental models when they think about their automatic rules.

In the overview style, users might see too much information, which hinders their setup

With step-by-step guidance, users might spend more time on configuration

 
 
 

Can people differentiate manual control and automation?

The other challenge we faced was the distinction between manual control and automation. Initially we considered merging the two concepts into one setting. Before adding automatic rules to our ecosystem, we had the “Scene” function – manual group control – which lets users tap a button to set their devices to a pre-defined status. Once we announced the new automation function, we wondered whether users could understand the difference. The follow-up question was whether we needed to design separate setup interfaces or give users the same entry point for both configurations. One finding from Yonomi is that the app always shows the complete configuration of automation rules to users; users can then set up a “Favorite” that applies only the “Actions” section, without its condition and trigger.

Kasa users might not be able to differentiate the current Scene feature from the new Automation feature

 

TESTING

We tested internally to learn users' mental models

To understand whether users could easily grasp the new automation system and configure it without problems, we tested several designs internally with six participants using the InVision app. The following were our findings.

 
 

Users' reading behavior might affect how the interface should display information

During the tests, we asked participants how they process information, and we found a pattern based on their information-processing style. People who rely mainly on glanced information prefer fewer action buttons and a more programmatic way to configure settings: an overview display lets them pick up the important information to start with and reduces the effort of reading every piece of content. In contrast, people who like to read everything prefer more instructions on the page because they are afraid of tapping something wrong; step-by-step works better for this type of user.

Detailed readers prefer the step-by-step flow; speed readers prefer the overview version

Another finding was that people who preferred the overall view over step-by-step did so because they didn't have a strong sense of how many steps the setup would require. Seeing the overall view in advance might ease their nerves during configuration.

 
 
 

Merging trigger and condition might be better

Based on the results, participants could not really identify the primary trigger in their imagined scenarios. If we asked them to set up a trigger and a condition separately, they spent more time thinking about what to configure. Most participants tended toward an If-Then model when setting up their automatic rules.

Participants spent more time differentiating trigger settings from condition settings

 
 
 

Lowering the learning curve for understanding manual control and automation

At first, we assumed our internal testers would understand the difference between manual control and automation because they were working on implementing those features, but they were still a little confused by the concepts. However, that does not mean merging the two concepts into one setup flow would reduce the complexity. The focus became finding solutions that let users understand the difference and set up the two functions separately based on their needs.

Helping users understand the current Scene feature and the new Automation feature is our next step

 
 

OUR CHALLENGE

The initial scope was too large to implement

Our initial plan was to design a fully customizable automation setup flow, meaning users could mix time, device, and presence triggers and conditions to support most of their use cases. However, building that flexible infrastructure would take more time than we had. After many discussions with our product manager, technical lead, and other stakeholders, we decided to narrow the scope by creating pre-defined automation setups for the first release. The next question was how to make our pre-defined automations simple while still satisfying users' basic needs.

 
 

How to leverage the existing system – Scene?

As mentioned before, we had already released “Scene” for our Kasa users. Although we initially planned to merge Scene and automation into a single, simpler version, that would have forced a rebuild of the whole system and infrastructure. Also, more and more users were using Scene to manually control groups of devices; if we changed the system dramatically, people might be confused and complain about the change. Therefore, we decided to keep the current Scene and add automation as a separate function with its own setup.

 

FINAL ITERATION

How to simplify implementation but still satisfy our users’ needs?

Based on our user data, more than 70% of Kasa users own an Amazon Echo. We then made two assumptions about our target users: first, they are not innovators or early adopters with abundant DIY experience; second, they strongly prefer a simple way to control and set up their devices. Therefore, we chose step-by-step configuration as the experience for automation settings, which also aligns with how people evaluate our app: easy to use and set up. That doesn't mean we gave up on showing an overview, though. Our final design shows how many steps users must complete and highlights the current step. After an automation is configured, users don't need to go through the whole flow again to change a setting; when they tap an existing automation, we show its parameters and let them edit just the one they want to change.
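The hybrid flow described above (step-by-step guidance plus an overview of all steps, with single-parameter editing afterwards) can be sketched in a few lines. This is a hypothetical illustration; the step names and data shape are invented for the example.

```python
# Hypothetical sketch of a step-by-step setup flow that still shows an overview:
# all steps are listed, the current one is highlighted, and any saved setting
# can be edited individually without re-running the whole flow.

STEPS = ["Choose trigger device", "Set schedule", "Pick actions", "Review"]

def render_overview(current: int) -> str:
    """Render every step, marking the highlighted (current) one with '>'."""
    lines = []
    for i, name in enumerate(STEPS):
        marker = ">" if i == current else " "
        lines.append(f"{marker} Step {i + 1}: {name}")
    return "\n".join(lines)

def edit_setting(automation: dict, key: str, value) -> dict:
    """Change one saved parameter, leaving the rest of the automation intact."""
    updated = dict(automation)
    updated[key] = value
    return updated

# A user on step 2 still sees all four steps at once.
overview = render_overview(1)

# Later, the user edits only the trigger time of a saved automation.
saved = {"trigger": "sunset", "action": "lights_on"}
saved = edit_setting(saved, "trigger", "10:00 PM")
```

Showing the full step list while highlighting the current one addresses both reader types found in testing: speed readers get the overview, detailed readers get guidance.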

 
 
 

What simple use cases should we provide?

To simplify the feature for urgent implementation, we decided to deliver a template-based design for the first release. Determining which use cases to support was something our product team needed to think through. We currently propose the following three use cases, with our reasons for selecting them.

 

Schedule a Scene to apply at a certain time

From our product reviews, we found that many users use the schedule function to turn an individual smart plug or light bulb on or off at a certain time. Since we had already released the Scene feature, we thought it would be useful to let them apply a scene (turning multiple devices on or off at once) at a scheduled time, including sunrise and sunset.

 

Control lighting with a sensor

Smart Action currently works only on the SR20, which is compatible with many Zigbee and Z-Wave devices. Most security sensors use those technologies, so a smart action for security needs is essential. Users don't necessarily have to turn on a light: we can activate any electronic device plugged into a smart plug. However, if they choose to turn on a light, they can also specify a period of time after which the light turns off automatically.

 

Set an Auto-off timer

This smart action is actually extracted from the second one. We believed an auto-off timer would help people save energy, and it can be a better alternative to our current device timer, since we found that few users use that function.
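The three use cases above amount to a template-based design: each template fixes the shape of the rule, and the user only fills in the blanks. The sketch below illustrates that idea; all template names and fields are hypothetical, not Kasa's real schema.

```python
# Hypothetical template definitions for three pre-defined smart actions.
# Each template pins the trigger type and lists which fields the user fills in,
# which is what keeps the setup flow simple and pre-defined.

TEMPLATES = {
    "scheduled_scene": {
        "trigger": "time",           # clock time, sunrise, or sunset
        "user_fills": ["scene", "time"],
    },
    "sensor_lighting": {
        "trigger": "sensor_event",   # e.g. a motion or contact sensor event
        "user_fills": ["sensor", "device", "auto_off_delay"],
    },
    "auto_off_timer": {
        "trigger": "device_on",
        "user_fills": ["device", "delay"],
    },
}

def build_action(template_name: str, **params) -> dict:
    """Create a smart action from a template, rejecting fields the template
    does not support (the template, not the user, defines the rule's shape)."""
    template = TEMPLATES[template_name]
    unknown = set(params) - set(template["user_fills"])
    if unknown:
        raise ValueError(f"Unsupported fields for this template: {unknown}")
    return {"type": template_name, "trigger": template["trigger"], **params}

# Example: schedule the "Movie Night" scene to apply at sunset.
action = build_action("scheduled_scene", scene="Movie Night", time="sunset")
```

Constraining users to a fixed set of fields per template is what trades flexibility for a simpler implementation, as described in the scoping discussion above.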

 

CURRENT RELEASE

Introduce Smart Actions

Smart Action is the feature we ultimately created to make users' lives simpler through automatic control. Check out the following prototypes to see how users set up their smart actions.

 
 

Prototype with Wireframes

 
 
 

Real Interface

 

 

FUTURE SCALABILITY

What capabilities might users look for in the future?

Although we defined several frequent use cases for our target users, expert smart home users might not be satisfied, since they typically look for a fully customizable solution. Our next step is to support a flexible way for end users to set up whatever they want; we can limit the number of devices and other parameters to balance flexibility against implementation cost.

 

TAKEAWAYS

What did I learn from this project?

Narrow Down the Scope

At the beginning of the project, we tried to deliver a fully customized solution that could fit every use case we expected. However, we faced many uncertainties and discovered technical constraints once implementation started, so we decided to narrow the scope. This helped us finish the MVP features before the deadline and left more room to deal with scalability later.