At Techolution, we have implemented asset management and monitoring solutions leveraging popular cloud IoT platforms, including GCP IoT Core and Azure IoT. The implementations were successful, but we faced challenges that affected rollout time, and we realized we needed an edge management platform. This article describes the challenges we experienced and how we solved them:
- Code deployment: Most of our solutions involved edge gateway based deployments. For firmware deployments, enterprises generally hand the OS image with code to the manufacturer (OEM), or they install the code manually on premises (usually for proofs of concept or very low volume pilots). While handing off to OEMs is easy, deployment technicians still needed to make customer- or site-specific changes. Deploying hardware for IoT solutions was therefore time consuming, and the cost increased proportionately with the number of sites. Beyond the human labour cost, the operational cost of bringing labour to the site and keeping them there for the required number of days made big rollouts challenging.
- Upgrades: IoT solution providers generally handle firmware upgrades as OTA (over the air) updates, or a more refined version of them. This works, but for major deployments with many dependencies it was always a challenge. Moving to containerized solutions solves part of the problem, but the lack of coherent, proven solutions for handling upgrades effectively at both cloud and edge, compatibility with IoT platforms, and the ability to handle IoT-specific requirements remained challenges.
- Management: As the size and scale of a deployment grows, it is essential to know the health of the fleet beyond its online or offline status. With each edge possibly having its own configuration, context-based logic, etc., ensuring that upgrades do not cause service disruptions and that rollbacks are reliable was part of the challenge.
As we decided to revamp the architecture and solution, we realized the following capabilities were needed to make sure the entire deployment delivered more value to the customer:
- Data processing at Edge, along with a flexible and scalable data model
- AI Invocation at edge, along with ability to choose and deploy required model version and applications
- No code solution for data acquisition from underlying connected devices over different protocols
- Ability to extend the solution to microcontrollers or low-powered machines running different real-time operating systems (RTOS)
Given these problems and needs, we explored the available options. Edge orchestration platforms like K3s were available, but they expect a full-fledged Kubernetes cluster at the edge, which was not feasible for our gateways. While ioFog seemed to handle the need, there was a gap in edge capabilities such as data pipelines.
Sensing the gap, we adopted a hybrid approach: we addressed the edge capabilities and pipeline management as a core activity and chose an edge platform that could support it along with other basic features. We chose the ioFog controller framework and created a separate edge management platform with the following features in addition to ioFog's defaults:
- Customized workflow around deployment of packages and applications to include approval workflow and rollback strategy
- Scheduling the updates for a future time
- Framework agnostic edge management services to be adaptable to other edge frameworks in the future if needed
- Revamped UI to identify different edge locations, their health and to provide more intuitive deployment
- Integration with multiple public and private container repositories
- Automated environment setup
- Integration with Anthos for managing application deployments on data centres acting as Edge locations
- Deployment of non-containerized applications and programs
- Customized edge agent to handle native application deployment
- Handling OTA upgrades to controller-based edge devices
- Capability to handle AI/ML service deployment
- Ability to capture AI/ML accuracy at Edge and ability to choose the relevant model version
- Additional pipeline features
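To make the approval-and-rollback workflow concrete, here is a minimal sketch of how a deployment request might move through states. The state names and transitions are illustrative assumptions, not the platform's actual API:

```python
from enum import Enum, auto

class State(Enum):
    PENDING_APPROVAL = auto()
    SCHEDULED = auto()
    DEPLOYING = auto()
    DEPLOYED = auto()
    ROLLED_BACK = auto()
    REJECTED = auto()

# Allowed transitions for a deployment request (hypothetical sketch).
TRANSITIONS = {
    State.PENDING_APPROVAL: {State.SCHEDULED, State.REJECTED},
    State.SCHEDULED: {State.DEPLOYING},
    State.DEPLOYING: {State.DEPLOYED, State.ROLLED_BACK},
    State.DEPLOYED: {State.ROLLED_BACK},  # rollback after a bad release
}

class DeploymentRequest:
    def __init__(self, package: str, version: str):
        self.package, self.version = package, version
        self.state = State.PENDING_APPROVAL

    def advance(self, new_state: State) -> State:
        """Move to new_state only if the workflow allows it."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        return self.state

req = DeploymentRequest("data-pipeline", "1.4.2")
req.advance(State.SCHEDULED)    # approver signs off; update scheduled for later
req.advance(State.DEPLOYING)
req.advance(State.ROLLED_BACK)  # health check failed; revert to prior version
```

Encoding transitions explicitly keeps scheduled updates from bypassing approval and makes rollback a first-class state rather than an ad hoc redeploy.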
For the edge layer, we created separate agent software to account for the following capabilities:
- Configurable UI driven scalable data pipeline
- Out-of-the-box support to perform analytical operations
- Ability to invoke services running in the edge location
- Ability to run AI/ML services at edge
- Real-time monitoring of AI performance accuracy and communication to the cloud edge platform
- No-code data acquisition from multiple underlying connected devices via standard protocols
- Support for standard industry communication protocols
- Framework to extend to new protocols in the future
- Automatic registration and provisioning with Google IoT Core and IoT gateways
- Real-time communication of Edge health and KPIs (CPU, memory, storage, network IO) to cloud
- Out of the box communication of telemetry and decision to Google IoT Core and other IoT Gateways
- Offline data storage and sync
- Caching at edge
- More features in development
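As an illustration of the offline storage and sync capability, here is a minimal store-and-forward sketch using SQLite as the local buffer. The class and method names are hypothetical, not the agent's real API:

```python
import json
import sqlite3
import time

class OfflineBuffer:
    """Buffer telemetry locally while the uplink is down; drain it when online."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (ts REAL, payload TEXT)")

    def store(self, payload: dict) -> None:
        self.db.execute("INSERT INTO outbox VALUES (?, ?)",
                        (time.time(), json.dumps(payload)))
        self.db.commit()

    def sync(self, publish) -> int:
        """Publish buffered rows in order; delete only what was acknowledged."""
        rows = self.db.execute(
            "SELECT rowid, payload FROM outbox ORDER BY ts").fetchall()
        sent = 0
        for rowid, payload in rows:
            if not publish(json.loads(payload)):  # e.g. an MQTT publish upstream
                break                             # still offline; retry later
            self.db.execute("DELETE FROM outbox WHERE rowid = ?", (rowid,))
            sent += 1
        self.db.commit()
        return sent

buf = OfflineBuffer()
buf.store({"cpu": 41.5, "mem": 62.0})   # gateway offline: rows accumulate
buf.store({"cpu": 39.9, "mem": 61.2})
delivered = []
buf.sync(lambda msg: delivered.append(msg) or True)  # uplink restored
```

Deleting a row only after the publish callback acknowledges it means a crash mid-sync re-sends rather than loses telemetry, which is the usual trade-off for edge buffering.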
Our Deployment Approach
As we developed the platform, we formulated a strategy to implement a tower monitoring solution for a telecom provider in Africa. After developing the accompanying edge capabilities, testing them, and piloting rollouts on our internal assets, we executed the second phase of the project: a deployment spanning around 30 sites in Africa, with the following deployment and rollout approach:
- Identified the data pipeline requirements and modelled it on the GUI for edge data pipeline software
- Converted the edge data pipeline to a container image
- Changed the existing data acquisition code to use the new edge data acquisition capability, making it a configurable native application
- Created a separate configuration file (JSON) for data acquisition at each location, based on the variable number of connected devices at each site
- Pushed the data pipeline Docker image to the container repository
- Uploaded the native data acquisition application and its configurations to GCP storage
- Registered the containerized pipeline and the native data acquisition application in the edge controller platform via the UI (API and CLI are also available)
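To make the per-site configuration step concrete, here is a sketch of generating the data acquisition JSON for each location. The site names, device fields, and schema below are hypothetical; the real configuration format is the platform's own:

```python
import json

# Hypothetical per-site device inventory; real sites vary in connected devices.
SITES = {
    "tower-001": [
        {"id": "genset-1", "protocol": "modbus", "poll_seconds": 30},
        {"id": "door-sensor-1", "protocol": "gpio", "poll_seconds": 5},
    ],
    "tower-002": [
        {"id": "genset-1", "protocol": "modbus", "poll_seconds": 30},
    ],
}

def site_config(site_id: str) -> dict:
    """Build one data-acquisition config per location (serialized to JSON
    and uploaded to GCP storage alongside the native application)."""
    return {"site": site_id, "devices": SITES[site_id]}

# One JSON document per site, ready for upload.
configs = {site: json.dumps(site_config(site), indent=2) for site in SITES}
```

Keeping the device inventory in configuration rather than code is what lets one native application binary serve every site.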
The edge gateways were shipped with just the edge controller agent. As each gateway was deployed at a tower (i.e. the deployment location) and came online, the edge agent registered itself with the edge platform and Google IoT Core using secure keys.
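For context on the Google IoT Core side of that registration: a device authenticates by presenting a short-lived JWT signed with its private key (RS256 or ES256), with the GCP project ID as the audience. Below is a sketch of just the claims the agent would construct; the project ID is a placeholder, and the actual signing would use a JWT library together with the device key provisioned before shipping:

```python
import datetime

def iot_core_jwt_claims(project_id: str, ttl_minutes: int = 20) -> dict:
    """Claims for a Google IoT Core device JWT. IoT Core caps token
    lifetime at 24 hours, so the agent refreshes short-lived tokens."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "iat": int(now.timestamp()),
        "exp": int((now + datetime.timedelta(minutes=ttl_minutes)).timestamp()),
        "aud": project_id,  # audience must be the GCP project ID
    }

claims = iot_core_jwt_claims("my-gcp-project")
# The signed token is then presented as the MQTT password when connecting
# to the IoT Core MQTT bridge; the username field is ignored by IoT Core.
```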
As each edge came online in the platform, a support team deployed the applications onto the gateways through the edge management platform UI from their remote locations. Deployment teams could simultaneously work on other deployment activities without worrying about software upgrades.
Benefits to the Customer
- We were able to deploy more sites in a short time frame as the efficiency of the deployment team increased. This also limited the number of onsite visit days required of customer engineers during the pandemic.
- Executive leadership saw quicker rollouts, making ROI easier to justify to the board of directors.
- As the edge was able to store the data locally, the UI and services layer of the Asset management platform was deployed at edge as well. This gave engineers a detailed and transparent view of the site without accessing the cloud layer of the asset management platform.
- Since our edge management component was able to deploy and manage any containerized solution, the customer was able to easily move non-critical workloads (e.g. visitor management, local customer feedback systems) from the cloud to their local edge locations. This resulted in significant cloud runtime cost savings.
If you are working on an edge cloud deployment, you may have experienced challenges around code deployments, upgrades, and management. We have solved those with our edge management platform and deployment approach. Fill out the form if you would like to talk with an engineer about how we can help: