Device User Experience

REDMOND, Wash. — Dec. 1, 2011 — We’ve already talked about some of the challenges with getting a device “connected,” and about how trends in silicon, hardware and software create new hurdles for developers bringing smart, connected devices to market. In this article we will focus on the device user experience.

User experiences have evolved over time, from command-line input, through 2-D graphical interfaces with simple input/output devices, to devices that support voice, touch, multitouch and gesture input, and 3-D user interfaces. Users expect their device user experiences to be responsive, animated, fluid, immersive and intuitive. Note that we’re talking about the user’s experience, not just the user interface: The overall experience obviously includes the user interface (what the user sees), but it also includes how the user interacts with the device, the device’s navigation model (using input and output devices), and how the user configures settings and preferences and connects to other devices.

Mike Hall, principal software architect in the Windows Embedded Business at Microsoft.

The first view a user has of a new device is probably the device’s shell or application launcher (note that some devices boot to a single application, which could be thought of as the shell). The application or shell can be divided into two discrete technology blocks: the user interface (what the user sees) and the application logic (the glue, or business logic, of the application). Separating user experience from underlying logic is not new; the Model-View-Controller concept has been around for some time, and can be seen clearly in programming models such as the Microsoft Foundation Classes, which divide an application into a series of discrete classes (Application, Document, View, Frame and so on), and Silverlight, which cleanly separates user interface design from underlying application development to the point where the designer and developer tool chains also stand alone.
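
As a minimal illustration of that separation (not tied to any particular framework; the class names here are hypothetical), the view can be reduced to an abstract contract that the application logic drives without knowing how anything is drawn:

```cpp
#include <iostream>

// Model: holds application state, knows nothing about presentation.
struct ThermostatModel {
    double temperatureC;
};

// View: an abstract contract the logic codes against. A concrete
// implementation might render with MFC, Silverlight or a custom shell.
struct IThermostatView {
    virtual ~IThermostatView() {}
    virtual void ShowTemperature(double celsius) = 0;
};

// Controller: the "glue" logic. It updates the model and tells the
// view what to present, but never creates UI elements itself.
class ThermostatController {
public:
    ThermostatController(ThermostatModel& model, IThermostatView& view)
        : m_model(model), m_view(view) {}

    void OnTemperatureRead(double celsius) {
        m_model.temperatureC = celsius;
        m_view.ShowTemperature(m_model.temperatureC);
    }

private:
    ThermostatModel&  m_model;
    IThermostatView&  m_view;
};

// A trivial console view, standing in for whatever the designer builds.
struct ConsoleView : IThermostatView {
    virtual void ShowTemperature(double celsius) {
        std::cout << "Temperature: " << celsius << " C" << std::endl;
    }
};

int main() {
    ThermostatModel model = { 21.0 };
    ConsoleView view;
    ThermostatController controller(model, view);
    controller.OnTemperatureRead(22.5);  // logic drives the view through the contract
    return 0;
}
```

The same controller works unchanged whether the view is a console stub or a designer-built screen, which is exactly the property that lets the designer and developer tool chains stand alone.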

By cleanly separating user interface design from application code development, we can create two parallel development paths, one for the user experience and one for the underlying application/business logic. This lets designers work on the look, behavior, brand and emotional connection of the user experience, while software developers focus on the underlying functionality of the application or shell: connectivity (including working with Web services), deployment, security and, of course, interaction with the user interface layer. Note that the software developer wouldn’t create or display user interface elements directly, but would use elements of the user experience that have been developed and exposed by the designers.

Enabling designers to work independently from the software development process requires a set of tools that lets the designer create user interface elements in a form that software developers can easily consume, and that, as designers iterate on the design, makes it easy for developers to pick up those changes and updates.

Let’s use Windows Embedded Compact 7 as an example of how designers and developers can work independently, and at the same time “work together,” to design and implement an immersive user experience. Designers would use Expression Blend to create a Silverlight project. The project defines the user experience without the designer needing to worry about the underlying code — the designer focuses on the look and feel of the experience: timelines, events, animations and UI resources. The developer is primarily interested in the events raised by the user experience, and in the callbacks the user interface exposes that the developer can use to provide information to the end user. For example, the designer may create an animation to be displayed while a time-consuming operation is taking place (say, pulling content from a Web service or parsing data to be displayed to the user). The developer simply needs to know that the animation exists, and then call into it. The developer would use tools such as Visual Studio and Platform Builder to integrate the designer-generated project into his or her embedded design.
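
A rough sketch of that developer-side step is shown below, using the Silverlight for Windows Embedded C++ interfaces (IXRApplication, IXRVisualHost, IXRStoryboard). Treat the exact signatures and header names as approximations to be checked against the Compact 7 documentation, and the “LoadingAnimation” storyboard name and XAML path as hypothetical values agreed on with the designer:

```cpp
#include <windows.h>
#include <XamlRuntime.h>   // Silverlight for Windows Embedded (XAML runtime); header name approximate
#include <XRPtr.h>         // XRPtr<> smart pointers

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPWSTR, int)
{
    // Initialize the XAML runtime and get the singleton application object.
    if (!XamlRuntimeInitialize())
        return -1;

    XRPtr<IXRApplication> app;
    GetXRApplicationInstance(&app);

    // Load the designer-authored XAML exported from Expression Blend (hypothetical path).
    XRXamlSource xamlSource;
    xamlSource.SetFile(L"\\Windows\\MainPage.xaml");

    XRWindowCreateParams wp = {0};
    wp.Style = WS_OVERLAPPED;

    XRPtr<IXRVisualHost> host;
    app->CreateHostFromXaml(&xamlSource, &wp, &host);

    // The developer only needs to know the storyboard's name; the
    // animation itself was built entirely by the designer.
    XRPtr<IXRFrameworkElement> root;
    host->GetRootElement(&root);

    XRPtr<IXRStoryboard> loading;
    root->FindName(L"LoadingAnimation", &loading);
    if (loading)
        loading->Begin();   // show the "busy" animation while data is fetched

    // ... kick off the long-running work here, and stop the storyboard when it completes ...

    UINT exitCode = 0;
    host->StartDialog(&exitCode);   // run the UI until the host is closed

    XamlRuntimeUninitialize();
    return exitCode;
}
```

Note that no UI is created in code: the C++ side simply looks up named elements the designer exposed and calls into them, so the designer can keep iterating on the XAML without breaking the developer’s build.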

For many devices, user experience and connectivity are tightly coupled. A device that relies on connectivity to pull data from Web services isn’t particularly useful when there isn’t a data connection, so the ability for a device to cache data locally can be important for partially connected devices. We’ve already described that a clean separation of user experience (designer work) and the underlying application code (developer work) is important to provide the rich, immersive experience that end users demand. It’s also important for developers to consider how their code and the user experience work together: Synchronous coding methods can block the user experience and make a device appear to be nonresponsive, which wouldn’t be a great end-user experience. It’s better for a developer to consider using asynchronous coding methods to keep user experiences responsive and “alive.” This is especially important as more of the applications and services that we use move to the cloud.
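
As one framework-agnostic sketch of that asynchronous pattern, the example below uses plain Win32 threading and window messages (both available on Windows Embedded Compact): the long-running fetch runs on a worker thread and only the finished result is posted back to the UI thread. The WM_APP_DATA_READY message and the SlowFetchFromWebService helper are hypothetical stand-ins:

```cpp
#include <windows.h>

// Hypothetical application-defined message: "data is ready".
#define WM_APP_DATA_READY  (WM_APP + 1)

// Stands in for a slow call into a Web service (hypothetical helper).
static DWORD SlowFetchFromWebService()
{
    Sleep(3000);   // simulates network latency and parsing
    return 42;     // the "data"
}

struct FetchContext
{
    HWND hwndUi;   // window to notify when the data arrives
};

// Worker thread: does the blocking work off the UI thread.
static DWORD WINAPI FetchThread(LPVOID param)
{
    FetchContext* ctx = static_cast<FetchContext*>(param);
    DWORD result = SlowFetchFromWebService();

    // Marshal the result back to the UI thread; PostMessage does not block.
    PostMessage(ctx->hwndUi, WM_APP_DATA_READY, 0, (LPARAM)result);

    delete ctx;
    return 0;
}

// Called from the UI thread (for example, from a button handler).
// It returns immediately, so the message loop keeps pumping and the
// shell stays responsive and animated while the fetch is in flight.
void BeginFetch(HWND hwndUi)
{
    FetchContext* ctx = new FetchContext;
    ctx->hwndUi = hwndUi;

    HANDLE hThread = CreateThread(NULL, 0, FetchThread, ctx, 0, NULL);
    if (hThread)
        CloseHandle(hThread);   // fire and forget; the thread cleans up after itself
    else
        delete ctx;             // could not start the worker: report the error instead
}

// In the window procedure, WM_APP_DATA_READY is where the UI gets updated:
// stop the "loading" animation and display the value passed in lParam.
```

The same idea applies whatever the UI layer is: the handler that the user triggers returns right away, and the user experience is updated only when the result comes back.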

Mike Hall is a principal software architect in the Windows Embedded Business at Microsoft, working with Windows Embedded Compact and Windows Embedded Standard.

Hall has over 30 years of industry experience and has been working at Microsoft for over 15 years — originally in Developer Support, focusing on C/C++, MFC, COM, device driver development, Win32, MASM and Windows CE operating system development, and then as a systems engineer in the Embedded Devices Group before taking on his current software architect role. Mike also pens a blog covering Windows Embedded developments.
