REDMOND, Wash. — Nov. 1, 2010 — In the previous article we discussed how to enable connectivity for embedded devices and the pros and cons of wired, wireless and cellular networking, but we didn't discuss what the underlying device was or how trends in hardware and software affect device developers. This article focuses on hardware, software and emerging technology trends, and on what they mean for you, the embedded device developer.
During the past 50 years, computing has advanced from mainframes through Internet-connected desktop computers to smart connected devices. Embedded hardware has undergone a similar transformation: from simple 4-bit, 8-bit and 16-bit devices to 32-bit single-core and multicore silicon and, for some classes of embedded devices, 64-bit multicore silicon. Alongside the multicore trend comes connectivity, which lets a device participate in a distributed embedded system rather than operate as a single stand-alone device. Programming a single-threaded process on a single-core CPU is fairly straightforward; developing multithreaded code is more involved. Go a step further, to multithreaded code that runs on multicore machines and load-balances work across the cores, and a developer's job becomes more complicated still. If that weren't challenging enough, layer on the ability for your device to be part of a distributed embedded system, and life for the embedded developer looks quite complex.
Emerging Technology Trends
There are similar trends in the software space. A number of years ago, most software was written in assembler. This was immensely time-consuming; printouts contained roughly six characters per line on 132-column fanfold listing paper, and the result was hard for someone else to pick up, debug and decipher. The move from assembler to higher-level languages such as C and C++ dramatically improved developer productivity, and thankfully made the source easier to read. Productivity came not only from a higher-level programming language but also from the associated runtime libraries. A good example is the ease of developing a Windows application with the Microsoft Foundation Classes compared with raw Win32 programming. The evolution of programming languages and frameworks continues with C# and the Microsoft .NET Framework. These frameworks provide rapid application development and an array of helper classes that take most of the heavy lifting away from the developer, allowing developers to focus on the code that makes their application unique rather than on the underlying primitives and plumbing their application requires.
The next trend centers on user experiences (note that I didn't say user interfaces). There have been many advances in this arena, notably the support for Silverlight in the Windows Embedded Compact 7 CTP, which enhances and improves user interaction and experience. A user experience can span a number of input and output modalities. Roll back 30 years and the primary user experience was a command prompt and keyboard. Roll back 20 years and graphical user interfaces with keyboard and mouse were the order of the day. Today, users expect a more immersive experience, with input and output modalities that match the device type and usage scenarios. Some devices may work well with 2-D graphics, keyboard and mouse; others may use speech for input and output; still others may use 3-D hardware-accelerated graphics along with touch, gesture and multitouch.
The key message to take away is that developers need to choose the appropriate language and framework to build their device experience. This can include anything from dealing with multicore or distributed systems programming to interacting with cloud-based services and building immersive user experiences.