Updating software (components) on constrained edge devices as well as on more powerful controllers and gateways is a common requirement in most IoT scenarios.
Software update capability is a prerequisite for a secure IoT: it gives IoT projects a fighting chance against the Pandora’s box they opened the moment their devices got connected. From that moment on, the devices are at the forefront of IT security threats which, historically, many embedded software developers never had to face. These days, shipping, say, a Linux-powered, web-connected device that will never receive a security update during its lifetime is simply reckless.
A more charming argument for software updates is that they enable agile development of hardware and hardware-related components. Concepts such as the minimum viable product can be applied to devices, as not all features need to be ready at manufacturing time. Changes on the cloud side of the IoT solution can be applied to devices at runtime, as well.
Sometimes software updating is a business model in its own right, as devices are much more attractive to the customer if they are updateable. In other words, consumers know that they not only get a fixed set of features, but can also expect to benefit from future product updates as well. In addition, new revenue streams may arise from the potential monetization of feature extensions (e.g. apps) without any need to design, manufacture, and ship a new device (revision).
There are various options for connecting a device to a cloud service. From an architectural perspective, it must be decided whether devices should connect directly to the software update service or indirectly through a device connectivity layer (e.g. Bosch IoT Hub, AWS IoT, IBM Watson IoT, Azure IoT Hub, etc.), which itself could also be an IoT solution service. I am a big believer in the direct approach myself, and my product Bosch IoT Rollouts actually supports both. I will explain why below.
So let’s get started: direct connectivity gives an IoT solution a separation of concerns, with a distinct channel for software updates alongside the channel the solution itself uses for device event streams and commands.
This is an interesting approach for two reasons: first, it makes it much easier to keep the software update channel’s API stable if you don’t have to bother with all the business requirements of the other channel. We should not forget that there are scenarios in the IoT in which connected devices might go for extended periods without contact with the backend. In some cases it might be years, especially between manufacture and initial connectivity. Keeping a transport layer stable for that amount of time is easy, but that is certainly not the case for the business layer. This is especially true in the IoT, where many cloud solutions are still in the early phases of maturity.
Second, having a separate channel also allows you to have a separation of business and update functionality on the device itself. Especially in a complex stack (e.g. on an IoT gateway), do you really want to risk a potentially broken stack having to update itself to fix the problem? And can it be guaranteed that it will always be able to do that? Imagine a scenario where you have an OSGi runtime on your gateway with one bundle that causes out-of-memory exceptions and your software update client runs in the same runtime. It could be very difficult to predict the outcome.
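One way to reduce that risk is to run the update agent as its own OS process, supervised independently of the business stack. The sketch below shows only the restart-decision logic such a supervisor might use; the class and parameter names are my own invention, not a Bosch IoT Rollouts API. The idea: if the business process keeps crashing, stop restarting it and leave recovery to the still-healthy update agent.

```python
class RestartPolicy:
    """Restart-decision logic for a supervisor that keeps the update
    agent in a separate process from the business stack. If the business
    process crashes repeatedly, we stop restarting it and leave recovery
    to the (independent) update agent instead."""

    def __init__(self, max_restarts=3, window=60.0):
        self.max_restarts = max_restarts  # allowed crashes per window
        self.window = window              # sliding window in seconds
        self._crashes = []                # timestamps of recent crashes

    def should_restart(self, now):
        """Record a crash at time `now`; return True if the process
        should be restarted, False if it has crashed too often."""
        self._crashes = [t for t in self._crashes if now - t < self.window]
        self._crashes.append(now)
        return len(self._crashes) <= self.max_restarts
```

On a real gateway this would be paired with an init system or watchdog (e.g. systemd units) so that the updater and the business runtime fail independently rather than sharing one OSGi runtime.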
However, the separation comes at a price: two channels usually mean greater implementation effort on the device side, and in some scenarios they may also eat into your traffic budget or battery life.
The second option is to combine the use cases in a single channel. We call this indirect integration with the software update service, as the device is actually connected to the cloud connectivity layer, which then has to split the solution traffic from the update traffic in the cloud.
This has the great benefit of a simplified connectivity architecture. It also allows for leveraging general-purpose device management protocol standards (e.g. LWM2M, OMA-DM, TR-069), which usually cover software updates only as one subsection. In addition, it allows the use of proprietary protocols defined by the device (manufacturer) itself.
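In the indirect model, the connectivity layer has to demultiplex the two kinds of traffic itself. A minimal sketch of such a split, assuming a hypothetical topic naming scheme (neither the topics nor the service names come from a real product):

```python
def route_device_message(topic):
    """Split inbound device traffic in the cloud: anything under the
    update sub-topic goes to the software update service, everything
    else to the IoT solution backend. Topic layout is hypothetical."""
    parts = topic.split("/")
    # e.g. "devices/gw-42/update/status" vs. "devices/gw-42/telemetry"
    if len(parts) >= 3 and parts[0] == "devices" and parts[2] == "update":
        return "update-service"
    return "solution-backend"
```

This routing is exactly the extra complexity that lives in the cloud rather than on the device: every new message type has to be classified here before either backend sees it.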
At the end of the day, the IoT solution engineer has a choice to make: separation of concerns vs. simplicity. With our Bosch IoT Rollouts, we have decided to support both options, and we have customers using both. Direct connectivity turned out to be much easier to maintain for the IoT solution, while indirect connectivity adds a lot of complexity to the overall architecture.
However, most IoT engineers address software updates very late in a project, as in most cases updating is not part of the core business function, and by the time they get to it, they are reluctant to make further changes to their architecture. As a result, most solutions take the indirect approach, potentially suffering the consequences after go-live.
The second decision for cloud-based software updates in the IoT relates to the protocol. Should I go with a standard device management protocol or design a custom one? Many IoT solutions these days favor MQTT with a custom protocol on top. In addition, many of the IoT connectivity layers on the market also offer a proprietary protocol on top of HTTP, MQTT, or AMQP.
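To make the custom-protocol option concrete, here is a sketch of what a minimal update command on top of MQTT could look like. The topic layout and JSON fields are invented for illustration; they are not taken from any standard or product.

```python
import json

# Hypothetical topic layout for a custom update protocol over MQTT.
CMD_TOPIC = "devices/{device_id}/update/cmd"        # cloud -> device
STATUS_TOPIC = "devices/{device_id}/update/status"  # device -> cloud

def encode_update_cmd(url, version, sha256):
    """Build the JSON payload telling a device to fetch an artifact."""
    return json.dumps({"type": "update", "url": url,
                       "version": version, "sha256": sha256})

def decode_update_cmd(payload):
    """Parse and validate an update command on the device side."""
    msg = json.loads(payload)
    if msg.get("type") != "update":
        raise ValueError("not an update command")
    missing = [f for f in ("url", "version", "sha256") if f not in msg]
    if missing:
        raise ValueError("missing fields: %s" % ", ".join(missing))
    return msg
```

The checksum lets the device verify the artifact before installing it; a real protocol would additionally have to cover download resumption, status reporting, and rollback, which is where standards start to earn their keep.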
I personally believe that some of the standards have value and should at least be considered. OMA-DM v2 looks promising and we have had some experience with LWM2M, as well. As always, standards offer a good framework, but they usually come with a set of constraints; especially in the early stages of an IoT project, this can add a lot of complexity. However, a good standard that covers software updates will allow the IoT solution to have software updates as a function without the need to write even one line of code if both the device and the software update service support it off-the-shelf.
Last but not least, there is the question of device authentication. This is, of course, a general question for every IoT solution, but especially on the direct integration path you have to decide whether the same authentication mechanism can be used for software updates and for the IoT solution channel. I usually argue for using the same mechanism. This is easy to implement with asymmetric authentication (e.g. X.509 certificates); Bosch IoT Rollouts supports this for its Direct Device Integration API, as do most of the connectivity layers typically used in the IoT. If asymmetric cryptography is not an option (which is often the case with constrained embedded devices), I would recommend a central (symmetric) key store that can be used by the different channels.
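Such a central symmetric key store can be sketched as follows. The class and method names are hypothetical, but the pattern is standard: one key per device, an HMAC over each message, and verification logic shared by both channels.

```python
import hmac
import hashlib

class DeviceKeyStore:
    """Central store of per-device symmetric keys that both the update
    channel and the solution channel can use to authenticate traffic."""

    def __init__(self):
        self._keys = {}

    def register(self, device_id, key):
        """key: bytes, provisioned e.g. at manufacturing time."""
        self._keys[device_id] = key

    def sign(self, device_id, payload):
        """HMAC-SHA256 over the payload with the device's key."""
        return hmac.new(self._keys[device_id], payload,
                        hashlib.sha256).hexdigest()

    def verify(self, device_id, payload, mac):
        """Constant-time comparison, usable by either channel."""
        return hmac.compare_digest(self.sign(device_id, payload), mac)
```

Because both channels authenticate against the same store, the device only needs to hold a single credential, which matters on constrained hardware.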
As pointed out above, there are choices that have to be made and questions to be answered. Unfortunately, the IoT is not yet in a state where we have found one architecture that fits at least the majority of scenarios. The good news, however, is that there are options and that they work.