Welcome to KubeEdge’s documentation!

Welcome to KubeEdge

KubeEdge is an open source system for extending native containerized application orchestration capabilities to hosts at Edge.

Why KubeEdge?

Learn about KubeEdge and the KubeEdge Mission here

First Steps

To get the most out of KubeEdge, start by reviewing a few introductory topics:

Before you get started

Code of Conduct

Please make sure to read and observe our Code of Conduct.

Community Expectations

KubeEdge is a community-driven project that strives to promote a healthy, friendly and productive environment. The goal of the community is to develop a cloud native edge computing platform, built on top of Kubernetes, that manages edge nodes and devices at scale and demonstrates resiliency and reliability in offline scenarios. Building a platform at such scale requires the support of a community with similar aspirations.

  • See Community Membership for a list of various community roles. With gradual contributions, one can move up in the chain.

Getting started

  • Fork the repository on GitHub
  • Read the setup for build instructions.

Your First Contribution

We will help you contribute in different areas, such as filing issues, developing features, fixing critical bugs, and getting your work reviewed and merged.

If you have questions about the development process, feel free to jump into our Slack Channel or join our mailing list.

Find something to work on

We are always in need of help, be it fixing documentation, reporting bugs or writing some code. Look at places where you feel best coding practices aren’t followed, code refactoring is needed or tests are missing. Here is how you get started.

Find a good first topic

There are multiple repositories within the KubeEdge organization. Each repository has beginner-friendly issues that provide a good first issue. For example, kubeedge/kubeedge has help wanted and good first issue labels for issues that should not need deep knowledge of the system. We can help new contributors who wish to work on such issues.

Another good way to contribute is to find a documentation improvement, such as a missing/broken link. Please see Contributing below for the workflow.

Work on an issue

When you are willing to take on an issue, you can assign it to yourself. Just reply with /assign or /assign @yourself on the issue; the robot will then assign the issue to you and your name will appear in the Assignees list.

File an Issue

While we encourage everyone to contribute code, it is also appreciated when someone reports an issue. Issues should be filed under the appropriate KubeEdge sub-repository.

Example: a KubeEdge issue should be opened in kubeedge/kubeedge.

Please follow the prompted submission guidelines while opening an issue.

Contributor Workflow

Please do not ever hesitate to ask a question or send a pull request.

This is a rough outline of what a contributor’s workflow looks like:

  • Create a topic branch from where to base the contribution. This is usually master.
  • Make commits of logical units.
  • Make sure commit messages are in the proper format (see below).
  • Push changes in a topic branch to a personal fork of the repository.
  • Submit a pull request to kubeedge/kubeedge.
  • The PR must receive approval from two maintainers.

Creating Pull Requests

Pull requests are often called simply “PRs”. KubeEdge generally follows the standard GitHub pull request process.

In addition to the above process, a bot will begin applying structured labels to your PR.

The bot may also make some helpful suggestions for commands to run in your PR to facilitate review. These /command options can be entered in comments to trigger auto-labeling and notifications. Refer to its command reference documentation.

Code Review

To make it easier for your PR to receive reviews, consider the reviewers will need you to:

  • follow good coding guidelines.
  • write good commit messages.
  • break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue.
  • label PRs with appropriate reviewers: to do this read the messages the bot sends you to guide you through the PR process.

Format of the commit message

We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why.

scripts: add test codes for metamanager

this adds some unit tests to improve code coverage for metamanager

Fixes #12

The format can be described more formally as follows:

<subsystem>: <what changed>
<BLANK LINE>
<why this change was made>
<BLANK LINE>
<footer>

The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.

Note: if your pull request isn’t getting enough attention, you can reach out on Slack to get help finding reviewers.

Testing

There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test:

  • Unit: These confirm that a particular function behaves as intended. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run locally by any developer.
  • Integration: These tests cover interactions of package components or interactions between KubeEdge components and Kubernetes control plane components like API server. An example would be testing whether the device controller is able to create config maps when device CRDs are created in the API server.
  • End-to-end (“e2e”): These are broad tests of overall system behavior and coherence. The e2e tests are in kubeedge e2e.

Continuous integration will run these tests on PRs.

Roadmap

This document defines a high level roadmap for KubeEdge development.

The milestones defined in GitHub represent the most up-to-date plans.

KubeEdge 1.1 is our current stable branch. The roadmap below outlines new features that will be added to KubeEdge.

2019 Q4 Roadmap

  • Support HA for cloudcore
  • Support exec&logs API for edge application
  • Support reliable message delivery from cloud to edge.
  • Add protobuf support for data exchange format between cloud and edge
  • Finish scalability test and publish report
  • Support managing clusters at edge from cloud (aka. EdgeSite)
  • Enhance performance and reliability of KubeEdge infrastructure.
  • Support ingress at edge.
  • Upgrade Kubernetes dependencies in vendor to v1.16.
  • Improve contributor experience by defining project governance policies, release process, membership rules etc.
  • Improve the performance and e2e tests with more metrics and scenarios.
  • Improve KubeEdge installation experience
  • Add more docs and move docs out of main repo

Future

  • Support edge-cloud communication using edgemesh.
  • Istio-based service mesh across Edge and Cloud where micro-services can communicate freely in the mesh.
  • Enable function as a service at the Edge.
  • Support more types of device protocols such as OPC-UA, Zigbee.
  • Evaluate and enable much larger scale Edge clusters with thousands of Edge nodes and millions of devices.
  • Enable intelligent scheduling of applications to large scale Edge clusters.
  • Data management with support for ingestion of telemetry data and analytics at the edge.
  • Security at the edge.
  • Support for monitoring at the edge.
  • Evaluate gRPC for cloud to edge communication.

Support

If you need support, start with the troubleshooting guide, and work your way through the process that we’ve outlined.

Community

Slack channel:

We use Slack for public discussions. To chat with us or the rest of the community, join us in the KubeEdge Slack team channel #general. To sign up, use our Slack inviter link here.

Mailing List

Please sign up on our mailing list

KubeEdge Community Membership

Note: This document keeps changing based on the status and feedback of the KubeEdge community.

This document gives a brief overview of the KubeEdge community roles with the requirements and responsibilities associated with them.

Role | Requirements | Responsibilities | Privileges
Member | Sponsored by 2 approvers; active in the community; contributed to KubeEdge | Welcome and guide new contributors | Member of the KubeEdge GitHub organization
Approver | Sponsored by 2 maintainers; good experience and knowledge of the domain; actively contributed to code and reviews | Review and approve contributions from community members | Write access to specific packages in the relevant repository
Maintainer | Sponsored by 2 owners; has shown good technical judgement in feature design/development and PR review | Participate in release planning and feature development/maintenance | Top-level write access to the relevant repository; name entry in the Maintainers file of the repository
Owner | Sponsored by 3 owners; helps drive the overall KubeEdge project | Drive the overall technical roadmap of the project and set priorities of activities in release planning | Admin access to the KubeEdge GitHub organization

Note: All KubeEdge community members must follow the KubeEdge Code of Conduct.

Member

Members are active participants in the community who contribute by authoring PRs, reviewing issues/PRs, or participating in community discussions on Slack or the mailing list.

Requirements

  • Sponsor from 2 approvers
  • Enabled two-factor authentication on their GitHub account
  • Actively contributed to the community. Contributions may include, but are not limited to:
    • Authoring PRs
    • Reviewing issues/PRs authored by other community members
    • Participating in community discussions on Slack or the mailing list
    • Participating in KubeEdge community meetings

Responsibilities and privileges

  • Member of the KubeEdge GitHub organization
  • Can be assigned to issues and PRs and community members can also request their review
  • Participate in assigned issues and PRs
  • Welcome new contributors
  • Guide new contributors to relevant docs/files
  • Help/Motivate new members in contributing to KubeEdge

Approver

Approvers are active members who have good experience and knowledge of the domain. They have actively participated in the issue/PR reviews and have identified relevant issues during review.

Requirements

  • Sponsor from 2 maintainers
  • Member for at least 2 months
  • Have reviewed a good number of PRs
  • Have good codebase knowledge

Responsibilities and Privileges

  • Review code to maintain/improve code quality
  • Acknowledge and work on review requests from community members
  • May approve code contributions for acceptance related to relevant expertise
  • Have ‘write access’ to specific packages inside a repo, enforced via bot
  • Continue to contribute and guide other community members to contribute in KubeEdge project

Maintainer

Maintainers are approvers who have shown good technical judgement in feature design and development. They have overall knowledge of the project and its features.

Requirements

  • Sponsor from 2 owners
  • Approver for at least 2 months
  • Nominated by a project owner
  • Good technical judgement in feature design/development

Responsibilities and privileges

  • Participate in release planning
  • Maintain project code quality
  • Ensure API compatibility with forward/backward versions based on feature graduation criteria
  • Analyze and propose new features/enhancements in KubeEdge project
  • Demonstrate sound technical judgement
  • Mentor contributors and approvers
  • Have top-level write access to the relevant repository (able to click the Merge PR button when a manual merge is necessary)
  • Name entry in Maintainers file of the repository
  • Participate & Drive design/development of multiple features

Owner

Owners are maintainers who have helped drive the overall project direction. They have a deep understanding of KubeEdge and related domains and facilitate broad agreement in release planning.

Requirements

  • Sponsor from 3 owners
  • Maintainer for at least 2 months
  • Nominated by a project owner
  • Not opposed by any project owner
  • Helped in driving the overall project

Responsibilities and Privileges

  • Make technical decisions for the overall project
  • Drive the overall technical roadmap of the project
  • Set priorities of activities in release planning
  • Guide and mentor all other community members
  • Ensure all community members are following Code of Conduct
  • Although given admin access to all repositories, make sure all PRs are properly reviewed and merged
  • May get admin access to relevant repository based on requirement
  • Participate & Drive design/development of multiple features

Note: These roles apply only to the KubeEdge GitHub organization and repositories. KubeEdge currently doesn’t have a formal process for review and acceptance into these roles; we will come up with a process soon.

Setup using Release package

Prerequisites

  • Install docker

  • Install kubeadm/kubectl

  • Creating cluster with kubeadm

  • KubeEdge supports an HTTPS connection to the Kubernetes apiserver.

    Enter the path to the kubeconfig file in controller.yaml:

    controller:
      kube:
        ...
        kubeconfig: "path_to_kubeconfig_file" #Enter path to kubeconfig file to enable https connection to k8s apiserver
    
  • (Optional) KubeEdge also supports an insecure HTTP connection to the Kubernetes apiserver for testing and debugging. Follow the steps below to enable the HTTP port in the Kubernetes apiserver.

    vi /etc/kubernetes/manifests/kube-apiserver.yaml
    # Add the following flags in spec: containers: -command section
    - --insecure-port=8080
    - --insecure-bind-address=0.0.0.0
    

    Enter the master address in controller.yaml

    controller:
      kube:
        ...
        master: "http://127.0.0.1:8080" #Note if master and kubeconfig are both set, master will override any value in kubeconfig.
    

Cloud VM

Note: Execute the commands below as the root user.

VERSION="v0.3.0"
OS="linux"
ARCH="amd64"
curl -L "https://github.com/kubeedge/kubeedge/releases/download/${VERSION}/kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz" --output kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz && tar -xf kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz  -C /etc

Generate Certificates

A root CA certificate and a cert/key pair are required for a KubeEdge setup. The same cert/key pair can be used on both the cloud and the edge side.

wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/tools/certgen.sh
# make script executable
chmod +x certgen.sh
bash -x ./certgen.sh genCertAndKey edge

NOTE: The certs and keys will be generated in /etc/kubeedge/ca and /etc/kubeedge/certs respectively.

  • The paths to the generated certificates should be updated in /etc/kubeedge/cloud/conf/controller.yaml. Please update the correct paths for the following:
    • cloudhub.ca
    • cloudhub.cert
    • cloudhub.key
  • Create DeviceModel and Device CRDs.
    wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
    kubectl create -f devices_v1alpha1_devicemodel.yaml
    wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
    kubectl create -f devices_v1alpha1_device.yaml
  • Create the ClusterObjectSync and ObjectSync CRDs, which are used for reliable message delivery.
      wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
      kubectl create -f cluster_objectsync_v1alpha1.yaml
      wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
      kubectl create -f objectsync_v1alpha1.yaml
  • Run cloud
    cd /etc/kubeedge/cloud
    # run cloudcore
    # `conf/` should be in the same directory where cloudcore resides
    # verify the configurations before running cloud(cloudcore)
    ./cloudcore

Edge VM

Prerequisites

Configuring MQTT mode

The Edge part of KubeEdge uses MQTT for communication between deviceTwin and devices. KubeEdge supports 3 MQTT modes:

  1. internalMqttMode: internal mqtt broker is enabled.
  2. bothMqttMode: internal as well as external broker are enabled.
  3. externalMqttMode: only external broker is enabled.

Use the mode field in edge.yaml to select the desired mode.

To use KubeEdge in both (internal and external) or external MQTT mode, make sure that mosquitto or emqx edge is installed on the edge node as an MQTT broker.

  • We have provided a sample node.json to add a node in Kubernetes. Please make sure the edge node is added in Kubernetes. Run the steps below to add the edge node:

    wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/node.json
    # Modify the node.json file and change metadata.name to the name of the edge node
    kubectl apply -f node.json
  • Modify the /etc/kubeedge/edge/conf/edge.yaml configuration file
    • Replace edgehub.websocket.certfile and edgehub.websocket.keyfile with your own certificate path
    • Update the IP address of the master in the websocket.url field.
    • Replace edge-node with the edge node name in edge.yaml for the following fields:
      • websocket:URL
      • controller:node-id
      • edged:hostname-override
    • Configure the desired container runtime in /etc/kubeedge/edge/conf/edge.yaml configuration file
    • Specify the runtime type to be used as either docker or remote (for all CRI-based runtimes, including containerd). If this parameter is not specified, the docker runtime will be used by default.
      • runtime-type:docker or runtime-type:remote
    • Additionally, specify the following parameters for remote/CRI-based runtimes:
      • remote-runtime-endpoint:/var/run/containerd/containerd.sock
      • remote-image-endpoint:/var/run/containerd/containerd.sock
      • runtime-request-timeout: 2
      • podsandbox-image: k8s.gcr.io/pause
      • kubelet-root-dir: /var/run/kubelet/
  • Run edge

    cd /etc/kubeedge/edge
    # run edgecore
    # `conf/` should be in the same directory where edgecore resides
    # verify the configurations before running edge (edgecore)
    ./edgecore
    # or
    nohup ./edgecore > edgecore.log 2>&1 &
Note: To run edgecore on ARM-based processors, follow the above steps as mentioned for the Edge VM, but download the ARM release package:

    VERSION="v0.3.0"
    OS="linux"
    ARCH="arm"
    curl -L "https://github.com/kubeedge/kubeedge/releases/download/${VERSION}/kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz" --output kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz && tar -xf kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz -C /etc
  • Monitoring container status
    • If the container runtime configured to manage containers is containerd, then the following commands can be used to inspect container status and list images:
      • sudo ctr --namespace k8s.io containers ls
      • sudo ctr --namespace k8s.io images ls
      • sudo crictl exec -ti <container-id> /bin/bash

NOTE: Copy the kubeedge folder from the cloud VM to the edge VM.

On the cloud VM:

scp -r /etc/kubeedge root@edgeip:/etc

Reporting bugs

If any part of the kubeedge project has bugs or documentation mistakes, please let us know by opening an issue. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.

To make the bug report accurate and easy to understand, please try to create bug reports that are:

  • Specific. Include as many details as possible: which version, what environment, what configuration, etc. If the bug is related to running the KubeEdge server, please attach the KubeEdge log (the starting log with the KubeEdge configuration is especially important).
  • Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce; please include the steps that might lead to the problem.
  • Isolated. Please try to isolate and reproduce the bug with minimum dependencies. Fixing a bug is significantly slowed down when too many dependencies are involved in a report.
  • Unique. Do not duplicate an existing bug report.
  • Scoped. One bug per report. Do not follow up with another bug inside one report.

We might ask for further information to locate a bug. A duplicated bug report will be closed.

What is KubeEdge

KubeEdge is an open source system extending native containerized application orchestration and device management to hosts at the Edge. It is built upon Kubernetes and provides core infrastructure support for networking, application deployment and metadata synchronization between cloud and edge. It also supports MQTT and allows developers to author custom logic and enable resource-constrained device communication at the Edge. KubeEdge consists of a cloud part and an edge part, both of which are open source.

Advantages

The main advantages of KubeEdge include:

  • Edge Computing

    With business logic running at the Edge, much larger volumes of data can be secured & processed locally where the data is produced. This reduces the network bandwidth requirements and consumption between Edge and Cloud. This increases responsiveness, decreases costs, and protects customers’ data privacy.

  • Simplified development

    Developers can write regular http or mqtt based applications, containerize these, and run them anywhere - either at the Edge or in the Cloud - whichever is more appropriate.

  • Kubernetes-native support

    With KubeEdge, users can orchestrate apps, manage devices and monitor app and device status on Edge nodes just as they would in a traditional Kubernetes cluster in the Cloud.

  • Abundant applications

    It is easy to get and deploy existing complicated machine learning, image recognition, event processing and other high level applications to the Edge.

Components

KubeEdge is composed of these components:

  • Edged: an agent that runs on edge nodes and manages containerized applications.
  • EdgeHub: a web socket client responsible for interacting with Cloud Service for edge computing (like Edge Controller as in the KubeEdge Architecture). This includes syncing cloud-side resource updates to the edge and reporting edge-side host and device status changes to the cloud.
  • CloudHub: A web socket server responsible for watching changes at the cloud side, caching and sending messages to EdgeHub.
  • EdgeController: an extended Kubernetes controller which manages edge node and pod metadata so that the data can be targeted to a specific edge node.
  • EventBus: an MQTT client to interact with MQTT servers (mosquitto), offering publish and subscribe capabilities to other components.
  • DeviceTwin: responsible for storing device status and syncing device status to the cloud. It also provides query interfaces for applications.
  • MetaManager: the message processor between edged and edgehub. It is also responsible for storing/retrieving metadata to/from a lightweight database (SQLite).

Architecture

Figure: KubeEdge Architecture

Getting involved

There are many ways to contribute to KubeEdge, and we welcome contributions!

Read the contributor’s guide to get started on the code.

Beehive

Beehive Overview

Beehive is a messaging framework, based on Go channels, for communication between modules of KubeEdge. A module registered with Beehive can communicate with another Beehive module if it knows the name under which that module is registered, or the name of that module’s group. Beehive supports the following module operations:

  1. Add Module
  2. Add Module to a group
  3. CleanUp (remove a module from beehive core and all groups)

Beehive supports the following message operations:

  1. Send to a module/group
  2. Receive by a module
  3. Send Sync to a module/group
  4. Send Response to a sync message

Message Format

A message has 3 parts:

  1. Header:
    1. ID: message ID (string)
    2. ParentID: if it is a response to a sync message then parentID exists (string)
    3. TimeStamp: time when message was generated (int)
    4. Sync: flag to indicate if message is of type sync (bool)
  2. Route:
    1. Source: origin of message (string)
    2. Group: the group to which the message has to be broadcasted (string)
    3. Operation: what’s the operation on the resource (string)
    4. Resource: the resource to operate on (string)
  3. Content: content of the message (interface{})
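
Put together, the layout can be sketched as a Go struct. This is a simplified illustration based on the field descriptions above, not the exact types from the KubeEdge source:

    // Sketch of the beehive message layout described above (simplified).
    package beehive

    type MessageHeader struct {
        ID        string // message ID
        ParentID  string // set only on responses to sync messages
        Timestamp int64  // time when the message was generated
        Sync      bool   // true if the message is a sync message
    }

    type MessageRoute struct {
        Source    string // origin of the message
        Group     string // group to which the message is broadcast
        Operation string // operation on the resource
        Resource  string // resource to operate on
    }

    type Message struct {
        Header  MessageHeader
        Route   MessageRoute
        Content interface{} // message payload
    }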

Register Module

  1. On starting edgecore, each module tries to register itself with the beehive core.
  2. Beehive core maintains a map named modules, with the module name as key and the implementation of the module interface as value.
  3. When a module tries to register itself, beehive core checks the already-loaded modules.yaml config file to see whether the module is enabled. If it is enabled, the module is added to the modules map; otherwise it is added to the disabled-modules map.
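
The registration check can be sketched as follows, assuming a Module interface with a Name() method and an enabled set loaded from modules.yaml; the names are illustrative, not the exact KubeEdge API:

    package beehive

    // Module is the interface every beehive module implements (illustrative).
    type Module interface {
        Name() string
        Start()
    }

    var (
        modules         = map[string]Module{} // enabled modules
        disabledModules = map[string]Module{} // registered but disabled modules
    )

    // Register adds a module to the enabled or disabled map, depending on
    // whether modules.yaml lists it as enabled.
    func Register(m Module, enabled map[string]bool) {
        if enabled[m.Name()] {
            modules[m.Name()] = m
        } else {
            disabledModules[m.Name()] = m
        }
    }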

Channel Context Structure Fields

(Important for understanding beehive operations)

  1. channels: a map from module name (string) to the message channel (chan) used to send messages to that module.
  2. chsLock: lock for the channels map.
  3. typeChannels: a map from group name (string) to a nested map that maps each module name in the group to that module’s message channel.
  4. typeChsLock: lock for the typeChannels map.
  5. anonChannels: a map from parentID (string) to the message channel used to send the response for a sync message.
  6. anonChsLock: lock for the anonChannels map.
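
Taken together, the context can be pictured as the Go struct below; a sketch matching the field descriptions above, not necessarily the real definition:

    package beehive

    import "sync"

    // ChannelContext sketches the channel context structure described above.
    type ChannelContext struct {
        channels     map[string]chan Message            // module name -> message channel
        chsLock      sync.RWMutex                       // guards channels
        typeChannels map[string]map[string]chan Message // group -> module name -> channel
        typeChsLock  sync.RWMutex                       // guards typeChannels
        anonChannels map[string]chan Message            // parentID -> response channel for sync messages
        anonChsLock  sync.RWMutex                       // guards anonChannels
    }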

Module Operations

Add Module

  1. Add module operation first creates a new channel of message type.
  2. Then the module name(key) and its channel(value) is added in the channels map of channel context structure.
  3. E.g., add the edged module:
coreContext.Addmodule("edged")

Add Module to Group

  1. addModuleGroup first gets the channel of a module from the channels map.
  2. Then the module and its channel is added in the typeChannels map where key is the group and in the value is a map in which (key is module name and value is the channel).
  3. E.g., add edged to the edged group (the first "edged" is the module name, the second is the group name):
coreContext.AddModuleGroup("edged", "edged")

CleanUp

  1. CleanUp deletes the module from channels map and deletes the module from all groups(typeChannels map).
  2. Then the channel associated with the module is closed.
  3. E.g., clean up the edged module:
coreContext.CleanUp("edged")

Message Operations

Send to a Module

  1. Send gets the channel of a module from channels map.
  2. Then the message is put on the channel.
  3. E.g., send a message to edged:
coreContext.Send("edged", message)

Send to a Group

  1. SendToGroup gets all modules(map) from the typeChannels map.
  2. Then it iterates over the map and sends the message on the channels of all modules in the map.
  3. E.g., send a message to all modules in the edged group:
coreContext.SendToGroup("edged", message)

Receive by a Module

  1. Receive gets the channel of a module from channels map.
  2. Then it waits for a message to arrive on that channel and returns the message. Error is returned if there is any.
  3. E.g., receive a message for the edged module:
msg, err := coreContext.Receive("edged")

SendSync to a Module

  1. SendSync takes 3 parameters: the module, the message and the timeout duration.
  2. SendSync first gets the channel of the module from the channels map.
  3. Then the message is put on the channel.
  4. Then a new message channel is created and added to the anonChannels map with the messageID as key.
  5. Then it waits, until the timeout, for the response message to be received on the anonChannel it created.
  6. If the response is received before the timeout, it is returned with a nil error; otherwise a timeout error is returned.
  7. E.g., send sync to edged with a timeout duration of 60 seconds:
response, err := coreContext.SendSync("edged",message,60*time.Second)
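
A condensed sketch of steps 1-6, continuing the ChannelContext sketch from earlier (locking omitted for brevity):

    package beehive

    import (
        "fmt"
        "time"
    )

    // SendSync sends msg to a module and waits up to timeout for a response.
    func (ctx *ChannelContext) SendSync(module string, msg Message, timeout time.Duration) (Message, error) {
        ch, ok := ctx.channels[module]
        if !ok {
            return Message{}, fmt.Errorf("module %q not registered", module)
        }
        ch <- msg // put the message on the module's channel

        anon := make(chan Message, 1) // response channel keyed by message ID
        ctx.anonChannels[msg.Header.ID] = anon
        defer delete(ctx.anonChannels, msg.Header.ID)

        select {
        case resp := <-anon: // response arrived before the deadline
            return resp, nil
        case <-time.After(timeout): // timed out waiting for the response
            return Message{}, fmt.Errorf("timeout waiting for response to %s", msg.Header.ID)
        }
    }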

SendSync to a Group

  1. Get the list of modules from typeChannels map for the group.
  2. Create a channel of message with size equal to the number of modules in that group and put in anonChannels map as value with key as messageID.
  3. Send the message on channels of all the modules.
  4. Wait until the timeout. If the number of messages on the anonChannel equals the number of modules in the group, check whether all the messages in the channel have parentID = messageID; if not, return an error, else return a nil error.
  5. If the timeout is reached, return a timeout error.
  6. E.g., send a sync message to the edged group with a timeout duration of 60 seconds:
err := coreContext.SendToGroupSync("edged",message,60*time.Second)

SendResp to a sync message

  1. SendResp is used to send a response to a sync message.
  2. The messageID of the original message must be set as the parentID of the response message.
  3. When SendResp is called, it checks whether a channel exists in anonChannels for the parentID of the response message.
  4. If the channel exists, the response message is sent on that channel.
  5. Otherwise an error is logged.
coreContext.SendResp(respMessage)

EdgeD

Overview

EdgeD is an edge node module which manages the pod lifecycle. It helps users deploy containerized workloads or applications at the edge node. Those workloads can perform any operation, from simple telemetry data manipulation to analytics or ML inference. Using the kubectl command line interface on the cloud side, users can issue commands to launch the workloads.

The Docker container runtime is currently supported for container and image management. Support for other runtimes, such as containerd, will be added in the future.

Many modules work in tandem to achieve edged’s functionalities.

Fig 1: EdgeD Functionalities

Pod Management

It handles pod addition, deletion and modification. It also tracks the health of the pods using the pod status manager and PLEG. Its primary jobs are as follows:

  • Receives and handles pod addition/deletion/modification messages from metamanager.
  • Handles separate worker queues for pod addition and deletion.
  • Handles worker routines to check worker queues to do pod operations.
  • Keeps separate cache for config map and secrets respectively.
  • Regular cleanup of orphaned pods

Fig 2: Pod Addition Flow

Fig 3: Pod Deletion Flow

Fig 4: Pod Update Flow

Pod Lifecycle Event Generator

This module helps edged monitor pod status. Every second, using liveness and readiness probes, it updates the information with the pod status manager for every pod.

Fig 5: PLEG at EdgeD

CRI for edged

Container Runtime Interface (CRI) is a plugin interface which enables edged to use a wide variety of container runtimes without recompiling, and to support multiple runtimes such as docker, containerd and cri-o.

Why CRI for edge?

Currently KubeEdge’s edged supports only the docker runtime, using the legacy dockertools.

  • CRI support for multiple container runtimes in KubeEdge is needed due to the factors mentioned below:
    • Include CRI support as in the Kubernetes kubelet to support containerd, cri-o etc.
    • Continue docker runtime support using legacy dockertools until CRI support for it is available, i.e. support for the docker runtime using dockershim is not considered in edged
    • Support lightweight container runtimes on resource-constrained edge nodes which are unable to run the existing docker runtime
    • Support multiple container runtimes like docker, containerd, cri-o etc. on the edge node
    • Support for the corresponding CNI with pause container and IP will be considered later
    • Customers can run a lightweight container runtime on resource-constrained edge nodes that cannot run the existing docker runtime
    • Customers have the option to choose from multiple container runtimes on their edge platform

Fig 6: CRI at EdgeD

Secret Management

At edged, Secrets are handled separately. For operations on them, such as addition, deletion and modification, there is a separate set of config messages and interfaces. Using these interfaces, secrets are updated in the cache store. The flow diagram below explains the message flow.

Fig 7: Secret Message Handling at EdgeD

Edged also uses the MetaClient module to fetch secrets from Metamanager (if available there), or else from the cloud. Whenever edged queries a new secret that Metamanager doesn’t have, the request is forwarded to the cloud. Before sending the response containing the secret, Metamanager stores a copy and then sends it to edged. Subsequent queries for the same secret key are then answered by Metamanager itself, reducing the response delay. The flow diagrams below show how a secret is fetched from Metamanager and the cloud, and how a secret is saved in Metamanager.

Fig 8: Query Secret by EdgeD

Probe Management

Probe management creates two probes, one for readiness and one for liveness, for pods to monitor their containers. The readiness probe helps by monitoring when the pod has reached the running state. The liveness probe helps in monitoring the health of pods, whether they are up or down. As explained earlier, the PLEG module uses its services.

ConfigMap Management

At edged, ConfigMaps are also handled separately. For operations on them, such as addition, deletion and modification, there is a separate set of config messages and interfaces. Using these interfaces, ConfigMaps are updated in the cache store. The flow diagram below explains the message flow.

Fig 9: ConfigMap Message Handling at EdgeD

Edged also uses the MetaClient module to fetch ConfigMaps from Metamanager (if available there), or else from the cloud. Whenever edged queries a new ConfigMap that Metamanager doesn’t have, the request is forwarded to the cloud. Before sending the response containing the ConfigMap, Metamanager stores a copy and then sends it to edged. Subsequent queries for the same ConfigMap key are then answered by Metamanager itself, reducing the response delay. The flow diagrams below show how a ConfigMap is fetched from Metamanager and the cloud, and how a ConfigMap is saved in Metamanager.

Fig 10: Query Configmaps by EdgeD

Container GC

Container garbage collection is an edged routine which wakes up every minute, collecting and removing dead containers according to the specified container GC policy. The policy takes three user-definable variables: MinAge is the minimum age at which a container can be garbage collected (zero for no limit); MaxPerPodContainer is the maximum number of dead containers any single pod (UID, container name) pair is allowed to have (less than zero for no limit); MaxContainers is the maximum number of total dead containers (again, less than zero for no limit). Generally, the oldest containers are removed first.
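
As an illustrative Go struct, with field names following the prose above (the actual types in edged may differ):

    package edged

    import "time"

    // ContainerGCPolicy sketches the three user-defined GC variables.
    type ContainerGCPolicy struct {
        MinAge             time.Duration // minimum age before a dead container may be collected; zero means no limit
        MaxPerPodContainer int           // max dead containers per (UID, container name) pair; <0 means no limit
        MaxContainers      int           // max total dead containers; <0 means no limit
    }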

Image GC

Image garbage collection is an edged routine which wakes up every 5 seconds and collects information about disk usage based on the policy in use. The policy takes two factors into consideration, HighThresholdPercent and LowThresholdPercent. Disk usage above the high threshold triggers garbage collection, which attempts to delete unused images until the low threshold is met. Least recently used images are deleted first.
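
The thresholds and the trigger check can be sketched the same way; again an illustration, not the exact edged types:

    package edged

    // ImageGCPolicy sketches the two thresholds described above.
    type ImageGCPolicy struct {
        HighThresholdPercent int // disk usage above this triggers image GC
        LowThresholdPercent  int // GC frees unused images until usage drops below this
    }

    // needsImageGC reports whether garbage collection should run for the
    // given disk usage percentage.
    func needsImageGC(diskUsagePercent int, p ImageGCPolicy) bool {
        return diskUsagePercent >= p.HighThresholdPercent
    }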

Status Manager

Status manager is an independent edge routine which collects pod statuses every 10 seconds and forwards this information to the cloud using the metaclient interface.

Fig 11: Status Manager Flow

Volume Management

Volume manager runs as an edge routine which works out which volumes are to be attached/mounted/unmounted/detached based on the pods scheduled on the edge node.

Before a pod starts, all the volumes referenced in its spec are attached and mounted; until then, the flow is blocked, along with other operations.

MetaClient

MetaClient is edged’s interface to Metamanager. It helps the edge fetch ConfigMap and Secret details from Metamanager or the cloud. It also sends sync messages, node status and pod status through Metamanager towards the cloud.

EventBus

Overview

Eventbus acts as an interface for sending/receiving messages on MQTT topics.

It supports 3 modes:

  • internalMqttMode
  • externalMqttMode
  • bothMqttMode

Topic

eventbus subscribes to the following topics:

- $hw/events/upload/#
- SYS/dis/upload_records
- SYS/dis/upload_records/+
- $hw/event/node/+/membership/get
- $hw/event/node/+/membership/get/+
- $hw/events/device/+/state/update
- $hw/events/device/+/state/update/+
- $hw/event/device/+/twin/+

Note: topic wildcards

wildcard | Description
#        | Must be the last character in the topic; matches the current tree and all subtrees.
+        | Matches exactly one item in the topic tree.
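
For example, a mapper or a test client can exercise one of these topics with any MQTT client. Below is a minimal sketch using the Eclipse Paho Go client, assuming a broker at tcp://127.0.0.1:1883 and a hypothetical device ID dev01:

    package main

    import (
        "fmt"

        mqtt "github.com/eclipse/paho.mqtt.golang"
    )

    func main() {
        opts := mqtt.NewClientOptions().AddBroker("tcp://127.0.0.1:1883").SetClientID("demo")
        client := mqtt.NewClient(opts)
        if tok := client.Connect(); tok.Wait() && tok.Error() != nil {
            panic(tok.Error())
        }
        // Publish a device state update on a topic eventbus subscribes to.
        // "dev01" is a hypothetical device ID.
        topic := "$hw/events/device/dev01/state/update"
        client.Publish(topic, 0, false, `{"state":"online"}`).Wait()
        fmt.Println("published to", topic)
    }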

Flow chart

1. eventbus sends messages from external client

2. eventbus sends response messages to external client

MetaManager

Overview

MetaManager is the message processor between edged and edgehub. It’s also responsible for storing/retrieving metadata to/from a lightweight database(SQLite).

Metamanager receives different types of messages based on the operations listed below:

  • Insert
  • Update
  • Delete
  • Query
  • Response
  • NodeConnection
  • MetaSync

Insert Operation

Insert operation messages are received via the cloud when new objects are created. An example could be a new user application pod created/deployed through the cloud.

Figure: Insert Operation

The insert operation request is received via the cloud by edgehub. It dispatches the request to metamanager, which saves the message in the local database. Metamanager then sends an asynchronous message to edged. Edged processes the insert request, e.g. by starting the pod, and populates the response in the message. Metamanager inspects the message, extracts the response and sends it back to edgehub, which sends it back to the cloud.

Update Operation

Update operations can happen on objects at the cloud/edge.

The update message flow is similar to the insert operation. Additionally, metamanager checks whether the resource being updated has changed locally. Only if there is a delta is the update stored locally; the message is then passed to edged and the response sent back to the cloud.

Figure: Update Operation

Delete Operation

Delete operations are triggered when objects like pods are deleted from the cloud.

Figure: Delete Operation

Query Operation

Query operations let you query for metadata either locally at the edge or for some remote resources like ConfigMaps/Secrets from the cloud. Edged queries this metadata from metamanager, which handles local/remote query processing and returns the response to edged. A message resource can be broken into 3 parts (resKey, resType, resId) based on the separator ‘/’.

Figure: Query Operation
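
A small sketch of that split, using a hypothetical resource string:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Hypothetical message resource in resKey/resType/resId form.
        resource := "default/pod/nginx-123"
        parts := strings.SplitN(resource, "/", 3)
        resKey, resType, resId := parts[0], parts[1], parts[2]
        fmt.Println(resKey, resType, resId) // default pod nginx-123
    }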

Response Operation

Responses are returned for any operations performed at the cloud/edge. Previous operations showed the response flow either from the cloud or locally at the edge.

NodeConnection Operation

NodeConnection operation messages are received from edgeHub to give information about the cloud connection status. metamanager tracks this state in-memory and uses it in certain operations like remote query to the cloud.

MetaSync Operation

MetaSync operation messages are periodically sent by metamanager to sync the status of the pods running on the edge node. The sync interval is configurable in conf/edge.yaml ( defaults to 60 seconds ).

meta:
    sync:
        podstatus:
            interval: 60 #seconds

Edgehub

Overview

Edge hub is responsible for interacting with the CloudHub component present in the cloud. It can connect to CloudHub using either a websocket connection or the QUIC protocol. It supports functions such as syncing cloud-side resource updates and reporting edge-side host and device status changes.

It acts as the communication link between the edge and the cloud. It forwards the messages received from the cloud to the corresponding module at the edge and vice-versa.

The main functions performed by edgehub are :-

  • Keep Alive
  • Publish Client Info
  • Route to Cloud
  • Route to Edge

Keep Alive

A keep-alive message or heartbeat is sent to cloudHub after every heartbeatPeriod.

Publish Client Info

  • The main responsibility of publish client info is to inform the other groups or modules regarding the status of connection to the cloud.
  • It sends a beehive message to all groups (namely metaGroup, twinGroup and busGroup), informing them whether cloud is connected or disconnected.

Route To Cloud

The main responsibility of route to cloud is to receive from the other modules (through beehive framework), all the messages that are to be sent to the cloud, and send them to cloudHub through the websocket connection.

The major steps involved in this process are as follows :-

  1. Continuously receive messages from beehive Context

  2. Send that message to cloudHub

  3. If the message received is a sync message then:

    3.1 It creates a map[string]chan entry, with the messageID of the message as key, on which the response received on the syncChannel will be delivered.

    3.2 It waits for one heartbeat period to receive a response on the channel it created; if no response is received within that time, it times out.

    3.3 The response received on the channel is sent back to the module using the SendResponse() function.
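
A condensed sketch of steps 3.1-3.3, with a stand-in Message type and a configurable heartbeat period (names are illustrative, not the exact KubeEdge API):

    package main

    import (
        "fmt"
        "time"
    )

    // Message stands in for the beehive message type sketched earlier.
    type Message struct{ ID string }

    // syncKeeper maps a sync message's ID to the channel its response arrives on.
    var syncKeeper = map[string]chan Message{}

    // waitForSyncResponse registers a response channel for msgID and waits
    // one heartbeat period for route-to-edge to deliver the response.
    func waitForSyncResponse(msgID string, heartbeat time.Duration) (Message, error) {
        ch := make(chan Message, 1)
        syncKeeper[msgID] = ch // step 3.1: channel keyed by the message ID
        defer delete(syncKeeper, msgID)

        select {
        case resp := <-ch: // step 3.3: response handed back to the sender
            return resp, nil
        case <-time.After(heartbeat): // step 3.2: time out after one heartbeat period
            return Message{}, fmt.Errorf("sync response for %s timed out", msgID)
        }
    }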

Figure: Route to Cloud

Route To Edge

The main responsibility of route to edge is to receive messages from the cloud (through the websocket connection) and send them to the required groups through the beehive framework.

The major steps involved in this process are as follows :-

  • Receive message from cloudHub
  • Check whether the route group of the message is found.
  • Check if it is a response to a SendSync() function.
  • If it is not a response message then the message is sent to the required group
  • If it is a response message then the message is sent to the syncKeep channel

Figure: Route to Edge

Usage

EdgeHub can be configured to communicate in two ways as mentioned below:

  • Through websocket protocol: Click here for details.
  • Through QUIC protocol: Click here for details.

DeviceTwin

Overview

DeviceTwin module is responsible for storing device status, dealing with device attributes, handling device twin operations, creating a membership between the edge device and edge node, syncing device status to the cloud and syncing the device twin information between edge and cloud. It also provides query interfaces for applications. Device twin consists of four sub modules (namely membership module, communication module, device module and device twin module) to perform the responsibilities of device twin module.

Operations Performed By Device Twin Controller

The following are the functions performed by device twin controller :-

  • Sync metadata to/from db (SQLite)
  • Register and Start Sub Modules
  • Distribute message to Sub Modules
  • Health Check

Sync Metadata to/from db (SQLite)

For all devices managed by the edge node, the device twin performs the below operations :-

  • It checks whether the device is present in the device twin context (the list of devices is stored inside the device twin context); if not, it adds a mutex for the device to the context.
  • Query device from database
  • Query device attribute from database
  • Query device twin from database
  • Combine the device, device attribute and device twin data into a single structure and store it in the device twin context.

Register and Start Sub Modules

Registers the four device twin modules and starts them as separate goroutines.

Distribute Message To Sub Modules

  1. Continuously listen for any device twin message in the beehive framework.
  2. Send the received message to the communication module of device twin.
  3. Classify the message according to the message source, i.e. whether the message is from eventBus, edgeManager or edgeHub, and fill the action module map of the module (ActionModuleMap is a map from action to module).
  4. Send the message to the required device twin module.

Health Check

The device twin controller periodically (every 60 s) sends ping messages to its submodules. Each submodule updates the timestamp for itself in a map once it receives a ping. The controller checks whether the timestamp for a module is more than 2 minutes old and restarts the submodule if it is.
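
A minimal sketch of this ping/timestamp pattern, using the 60-second interval and 2-minute staleness threshold from the prose (names are illustrative):

    package main

    import (
        "sync"
        "time"
    )

    // lastSeen records, per submodule, when it last answered a ping.
    var lastSeen sync.Map // module name -> time.Time

    // healthCheck pings submodules every 60 s and restarts any whose last
    // recorded heartbeat is more than 2 minutes old.
    func healthCheck(moduleNames []string, ping func(name string), restart func(name string)) {
        for range time.Tick(60 * time.Second) {
            for _, name := range moduleNames {
                ping(name) // the submodule updates lastSeen when it receives the ping
                if v, ok := lastSeen.Load(name); ok && time.Since(v.(time.Time)) > 2*time.Minute {
                    restart(name) // unresponsive for too long; restart it
                }
            }
        }
    }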

Modules

DeviceTwin consists of four modules, namely :-

  • Membership Module
  • Twin Module
  • Communication Module
  • Device Module

Membership Module

The main responsibility of the membership module is to provide membership to the new devices added through the cloud to the edge node. This module binds the newly added devices to the edge node and creates a membership between the edge node and the edge devices.

The major functions performed by this module are:-

  1. Initialize action callback map which is a map[string]Callback that contains the callback functions that can be performed
  2. Receive the messages sent to membership module
  3. For each message the action message is read and the corresponding function is called
  4. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller

The following are the action callbacks which can be performed by the membership module :-

  • dealMembershipGet
  • dealMembershipUpdated
  • dealMembershipDetail

dealMembershipGet: dealMembershipGet() gets the information about the devices associated with the particular edge node, from the cache.

  • The eventbus first receives a message on its subscribed topic (membership-get topic).

  • This message arrives at the devicetwin controller, which further sends the message to membership module.

  • The membership module gets the devices associated with the edge node from the cache (context) and sends the information to the communication module. It also handles errors that may arise while performing the aforementioned process and sends the error to the communication module instead of device details.

  • The communication module sends the information to the eventbus component which further publishes the result on the specified MQTT topic (get membership result topic).

    Figure: Membership Get()

dealMembershipUpdated: dealMembershipUpdated() updates the membership details of the node. It adds newly added devices to the edge group, removes recently removed devices from the edge group, and updates device details that have been altered or updated.

  • The edgehub module receives the membership update message from the cloud and forwards the message to devicetwin controller which further forwards it to the membership module.

  • The membership module adds devices that are newly added, removes devices that have been recently deleted and also updates the devices that were already existing in the database as well as in the cache.

  • After updating the details of the devices a message is sent to the communication module of the device twin, which sends the message to eventbus module to be published on the given MQTT topic.

    Figure: Membership Update

dealMembershipDetail: dealMembershipDetail() provides the membership details of the edge node, providing information about the devices associated with the edge node, after removing the membership details of recently removed devices.

  • The eventbus module receives the message that arrives on the subscribed topic; the message is then forwarded to the devicetwin controller, which further forwards it to the membership module.

  • The membership module adds devices that are mentioned in the message and removes devices that are not present in the cache.

  • After updating the details of the devices a message is sent to the communication module of the device twin.

    Figure: Membership Detail

Twin Module

The main responsibility of the twin module is to deal with all the device twin related operations. It can perform operations like device twin update, device twin get and device twin sync-to-cloud.

The major functions performed by this module are:-

  1. Initialize action callback map (which is a map of action(string) to the callback function that performs the requested action)
  2. Receive the messages sent to twin module
  3. For each message the action message is read and the corresponding function is called
  4. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller

The following are the action callbacks which can be performed by the twin module :-

  • dealTwinUpdate
  • dealTwinGet
  • dealTwinSync

dealTwinUpdate: dealTwinUpdate() updates the device twin information for a particular device.

  • The devicetwin update message can be received either by the edgehub module from the cloud or from the MQTT broker through the eventbus component (a mapper will publish a message on the device twin update topic).

  • The message is then sent to the device twin controller from where it is sent to the device twin module.

  • The twin module updates the twin value in the database and sends the update result message to the communication module.

  • The communication module will in turn send the publish message to the MQTT broker through the eventbus.

    Figure: Device Twin Update

dealTwinGet: dealTwinGet() provides the device twin information for a particular device.

  • The eventbus component receives the message that arrives on the subscribed twin get topic and forwards the message to devicetwin controller, which further sends the message to twin module.

  • The twin module gets the devicetwin-related information for the particular device and sends it to the communication module. It also handles errors that arise when the device is not found or an internal problem occurs.

  • The communication module sends the information to the eventbus component, which publishes the result on the specified topic.

    Figure: Device Twin Get

dealTwinSync: dealTwinSync() syncs the device twin information to the cloud.

  • The eventbus module receives the message on the subscribed twin cloud sync topic.
  • This message is then sent to the devicetwin controller from where it is sent to the twin module.
  • The twin module then syncs the twin information present in the database and sends the synced twin results to the communication module.
  • The communication module further sends the information to edgehub component which will in turn send the updates to the cloud through the websocket connection.
  • This function also performs operations like publishing the updated twin details document, delta of the device twin as well as the update result (in case there is some error) to a specified topic through the communication module, which sends the data to edgehub, which will send it to eventbus which publishes on the MQTT broker.

Figure: Sync to Cloud

Communication Module

The main responsibility of communication module is to ensure the communication functionality between device twin and the other components.

The major functions performed by this module are:-

  1. Initialize action callback map which is a map[string]Callback that contains the callback functions that can be performed
  2. Receive the messages sent to communication module
  3. For each message the action message is read and the corresponding function is called
  4. Confirm whether the actions specified in the message are completed or not, if the action is not completed then redo the action
  5. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller

The following are the action callbacks which can be performed by the communication module :-

  • dealSendToCloud
  • dealSendToEdge
  • dealLifeCycle
  • dealConfirm

dealSendToCloud: dealSendToCloud() is used to send data to the cloudHub component. This function first ensures that the cloud is connected, then sends the message to the edgeHub module (through the beehive framework), which in turn will forward the message to the cloud (through the websocket connection).

dealSendToEdge: dealSendToEdge() is used to send data to the other modules present at the edge. This function sends the message received to the edgeHub module using beehive framework. The edgeHub module after receiving the message will send it to the required recipient.

dealLifeCycle: dealLifeCycle() checks whether the cloud is connected and the twin state is disconnected; if so, it changes the state to connected and sends the node details to edgehub. If the cloud is disconnected, it sets the twin state to disconnected.

dealConfirm: dealConfirm() is used to confirm an event. It checks whether the message type is correct and then deletes the ID from the confirm map.

Device Module

The main responsibility of the device module is to perform the device related operations like dealing with device state updates and device attribute updates.

The major functions performed by this module are :-

  1. Initialize action callback map (which is a map of action(string) to the callback function that performs the requested action)
  2. Receive the messages sent to device module
  3. For each message the action message is read and the corresponding function is called
  4. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller

The following are the action callbacks which can be performed by the device module :-

  • dealDeviceUpdated
  • dealDeviceStateUpdate

dealDeviceUpdated: dealDeviceUpdated() deals with the operations to be performed when a device attribute update is encountered. It applies the changes to the device attributes, such as addition, update and deletion of attributes, in the database. It also sends the result of the device attribute update to the eventbus component for publishing.

  • A device attribute update is initiated from the cloud, which sends the update to edgehub.
  • The edgehub component sends the message to the device twin controller which forwards the message to the device module.
  • The device module updates the device attribute details in the database, after which it sends the result of the device attribute update, through the communicate module of devicetwin, to the eventbus component for publishing. The eventbus component further publishes the result on the specified topic.

Figure: Device Update

dealDeviceStateUpdate: dealDeviceStateUpdate() deals with the operations to be performed when a device status update is encountered. It updates the state of the device as well as the last online time of the device in the database. It also sends the update state result, through the communication module, to the cloud through the edgehub module and to the eventbus module which in turn publishes the result on the specified topic of the MQTT broker.

  • A device state update is initiated by publishing a message on the specified topic, which is subscribed to by the eventbus component.

  • The eventbus component sends the message to the device twin controller which forwards the message to the device module.

  • The device module updates the state of the device as well as the last online time of the device in the database.

  • The device module then sends the result of the device state update to the eventbus component and edgehub component through the communicate module of devicetwin. The eventbus component further publishes the result on the specified topic, while the edgehub component sends the device status update to the cloud.

    _images/device-state-update.pngDevice State Update

Tables

DeviceTwin module creates three tables in the database, namely :-

  • Device Table
  • Device Attribute Table
  • Device Twin Table

Device Table

Device table contains the data regarding the devices added to a particular edge node. The following are the columns present in the device table :

Column Name   Description
ID            The id assigned to the device
Name          The name of the device
Description   The description of the device
State         The state of the device
LastOnline    When the device was last online

Operations Performed :-

The following are the operations that can be performed on this data :-

  • Save Device: Inserts a device in the device table
  • Delete Device By ID: Deletes a device by its ID from the device table
  • Update Device Field: Updates a single field in the device table
  • Update Device Fields: Updates multiple fields in the device table
  • Query Device: Queries a device from the device table
  • Query Device All: Displays all the devices present in the device table
  • Update Device Multi: Updates multiple columns of multiple devices in the device table
  • Add Device Trans: Inserts the device, device attribute and device twin in a single transaction; if any of these operations fails, the earlier insertions are rolled back (see the sketch after this list)
  • Delete Device Trans: Deletes the device, device attribute and device twin in a single transaction; if any of these operations fails, the earlier deletions are rolled back
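
Given below is a minimal sketch of the transactional insert semantics, assuming the beego ORM used by edgecore; the row values are illustrative and the model structs would need to be registered with the ORM beforehand:

    import "github.com/astaxie/beego/orm"

    // addDeviceTrans inserts the device, attribute and twin rows together:
    // all three commit, or any failure rolls the earlier insertions back.
    func addDeviceTrans(device, attr, twin interface{}) error {
        o := orm.NewOrm()
        if err := o.Begin(); err != nil {
            return err
        }
        for _, row := range []interface{}{device, attr, twin} {
            if _, err := o.Insert(row); err != nil {
                o.Rollback() // roll back the earlier insertions
                return err
            }
        }
        return o.Commit()
    }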

Device Attribute Table

Device attribute table contains the data regarding the device attributes associated with a particular device in the edge node. The following are the columns present in the device attribute table :

Column Name   Description
ID            The id assigned to the device attribute
DeviceID      The device id of the device associated with this attribute
Name          The name of the device attribute
Description   The description of the device attribute
Value         The value of the device attribute
Optional      Whether the device attribute is optional or not
AttrType      The type of the attribute that is referred to
Metadata      The metadata associated with the device attribute

Operations Performed :-

The following are the operations that can be performed on this data :

  • Save Device Attr: Inserts a device attribute in the device attribute table
  • Delete Device Attr By ID: Deletes a device attribute by its ID from the device attribute table
  • Delete Device Attr: Deletes a device attribute from the device attribute table by filtering based on device id and device name
  • Update Device Attr Field: Updates a single field in the device attribute table
  • Update Device Attr Fields: Updates multiple fields in the device attribute table
  • Query Device Attr: Queries a device attribute from the device attribute table
  • Update Device Attr Multi: Updates multiple columns of multiple device attributes in the device attribute table
  • Delete Device Attr Trans: Inserts device attributes, deletes device attributes and updates device attributes in a single transaction.

Device Twin Table

Device twin table contains the data related to the device twin associated with a particular device in the edge node. The following are the columns present in the device twin table :

Column Name       Description
ID                The id assigned to the device twin
DeviceID          The device id of the device associated with this device twin
Name              The name of the device twin
Description       The description of the device twin
Expected          The expected value of the device
Actual            The actual value of the device
ExpectedMeta      The metadata associated with the expected value of the device
ActualMeta        The metadata associated with the actual value of the device
ExpectedVersion   The version of the expected value of the device
ActualVersion     The version of the actual value of the device
Optional          Whether the device twin is optional or not
AttrType          The type of the attribute that is referred to
Metadata          The metadata associated with the device twin

Operations Performed :-

The following are the operations that can be performed on this data :-

  • Save Device Twin: Inserts a device twin in the device twin table
  • Delete Device Twin By Device ID: Deletes a device twin by its device ID from the device twin table
  • Delete Device Twin: Deletes a device twin from the device twin table by filtering based on device id and device name
  • Update Device Twin Field: Updates a single field in the device twin table
  • Update Device Twin Fields: Updates multiple fields in the device twin table
  • Query Device Twin: Queries a device twin from the device twin table
  • Update Device Twin Multi: Updates multiple columns of multiple device twins in the device twin table
  • Delete Device Twin Trans: Inserts device twins, deletes device twins and updates device twins in a single transaction.

Edge Controller

Edge Controller Overview

EdgeController is the bridge between the Kubernetes API server and edgecore.

Operations Performed By Edge Controller

The following are the functions performed by Edge controller :-

  • Downstream Controller: Syncs add/update/delete events from the K8s API server to edgecore
  • Upstream Controller: Watches and syncs the status of resources and events (node, pod and configmap) to the K8s API server, and also subscribes to messages from edgecore
  • Controller Manager: Creates a manager interface, which implements events for managing ConfigmapManager, LocationCache and podManager

Downstream Controller:

Syncs add/update/delete events to the edge

  • Downstream controller: Watches the K8s API server and sends updates to edgecore via cloudHub
  • Syncs (pod, configmap, secret) add/update/delete events to the edge via cloudHub
  • Creates the respective manager (pod, configmap, secret) for handling events by calling the manager interface
  • Locates which node a configmap or secret should be sent to

_images/DownstreamController.pngDownstream Controller

Upstream Controller:

Watches and syncs the status of resources and events

  • UpstreamController receives messages from edgecore and syncs the updates to the K8s API server

  • Creates stop channels to dispatch and stop event handling for pods, configMaps, node and secrets

  • Creates message channels to update NodeStatus, PodStatus, Secret and ConfigMap related events

  • Gets PodCondition information like Ready, Initialized, PodScheduled and Unschedulable

  • Below is the information for PodCondition

    • Ready: PodReady means the pod is able to service requests and should be added to the load balancing pools of all matching services
    • PodScheduled: Represents the status of the scheduling process for this pod
    • Unschedulable: The scheduler cannot schedule the pod right now, for example due to insufficient resources in the cluster
    • Initialized: All init containers in the pod have started successfully
    • ContainersReady: Indicates whether all containers in the pod are ready
  • Below is the information for PodStatus

    • PodPhase: Current condition of the pod
    • Conditions: Details indicating why the pod is in this condition
    • HostIP: IP address of the host to which the pod is assigned
    • PodIP: IP address allocated to the pod
    • QOSClass: Assigned to the pod based on resource requirements

    _images/UpstreamController.pngUpstream Controller

Controller Manager:

Creates the manager interface and implements ConfigmapManager, LocationCache and podManager

  • Manager defines the interface of a manager; ConfigmapManager, podManager and secretManager implement it
  • Manages the OnAdd, OnUpdate and OnDelete events, which are synced to the respective edge node from the K8s API server
  • Creates an eventManager (configMaps, pod, secrets) which starts a CommonResourceEventHandler, a NewListWatch and a new shared informer for each event, to sync add/update/delete events (pod, configmap, secret) to edgecore via cloudHub
  • Below is the list of handlers created by the controller manager; a minimal sketch of the pattern follows this list
    • CommonResourceEventHandler: NewCommonResourceEventHandler creates the CommonResourceEventHandler used by the configmap and pod managers
    • NewListWatch: Creates a new ListWatch from the specified client, resource, namespace and field selector
    • NewSharedInformer: Creates a new instance for the ListWatcher
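
Given below is a minimal sketch of this ListWatch + shared informer + event handler pattern using client-go; it is illustrative, not KubeEdge's actual controller code, and the kubeconfig path is an assumption:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // NewListWatch: watch pods in all namespaces with no field selector
        lw := cache.NewListWatchFromClient(client.CoreV1().RESTClient(), "pods", "", fields.Everything())

        // NewSharedInformer wraps the ListWatcher; 0 disables periodic resync
        informer := cache.NewSharedInformer(lw, &v1.Pod{}, 0)

        // The handler receives the OnAdd/OnUpdate/OnDelete events described above
        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc:    func(obj interface{}) { fmt.Println("add:", obj.(*v1.Pod).Name) },
            UpdateFunc: func(oldObj, newObj interface{}) { fmt.Println("update:", newObj.(*v1.Pod).Name) },
            DeleteFunc: func(obj interface{}) { fmt.Println("delete") },
        })

        informer.Run(make(chan struct{})) // blocks; events are dispatched to the handler
    }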

CloudHub

CloudHub Overview

CloudHub is one module of cloudcore and is the mediator between the Controllers and the edge side. It supports both websocket-based connections and QUIC protocol access at the same time; edgehub can choose either protocol to access cloudhub. CloudHub's function is to enable the communication between the edge and the Controllers.

The connection to the edge (through the EdgeHub module) is done over an HTTP connection upgraded to websocket. For internal communication it directly communicates with the Controllers. All requests sent to CloudHub are context objects, which are stored in a channelQ along with channels of event objects mapped to each nodeID.

The main functions performed by CloudHub are :-

  • Get message context and create ChannelQ for events
  • Create http connection over websocket
  • Serve websocket connection
  • Read message from edge
  • Write message to edge
  • Publish message to Controller

Get message context and create ChannelQ for events:

The context object is stored in a channelQ. A channel is created for every nodeID, and each message is converted to an event object. The event object is then passed through the channel.
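
Given below is a minimal sketch of such a per-node channel queue; the Event type and buffer size are illustrative, not CloudHub's actual definitions:

    import "sync"

    // Event is an illustrative stand-in for CloudHub's event object.
    type Event struct {
        NodeID  string
        Payload []byte
    }

    // ChannelQ maps each nodeID to the channel its events flow through.
    type ChannelQ struct {
        mu       sync.Mutex
        channels map[string]chan Event
    }

    // GetChannel returns the channel for a node, creating it on first use.
    func (q *ChannelQ) GetChannel(nodeID string) chan Event {
        q.mu.Lock()
        defer q.mu.Unlock()
        if q.channels == nil {
            q.channels = make(map[string]chan Event)
        }
        ch, ok := q.channels[nodeID]
        if !ok {
            ch = make(chan Event, 128) // buffered per-node queue
            q.channels[nodeID] = ch
        }
        return ch
    }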

Create http connection over websocket:

  • TLS certificates are loaded through the path provided in the context object
  • The HTTP server is started with the TLS configuration
  • The HTTP connection is then upgraded to a websocket connection, yielding a conn object
  • The ServeConn function then serves all the incoming connections (a minimal sketch follows this list)
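
Given below is a minimal sketch of the upgrade-and-serve flow using the gorilla/websocket package; the cert/key paths and port are illustrative, and KubeEdge's actual implementation differs in detail:

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{}

    func serveConn(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil) // yields the conn object
        if err != nil {
            log.Println("upgrade failed:", err)
            return
        }
        defer conn.Close()
        // per-node read/write loops would start here
    }

    func main() {
        http.HandleFunc("/", serveConn)
        // the TLS cert/key paths come from the context object in CloudHub
        log.Fatal(http.ListenAndServeTLS(":10000",
            "/etc/kubeedge/certs/edge.crt", "/etc/kubeedge/certs/edge.key", nil))
    }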

Read message from edge:

  • First a deadline is set for the keepalive interval
  • Then the JSON message is read from the connection (sketched after this list)
  • After that the Message Router details are set
  • The message is then converted to an event object for cloud internal communication
  • In the end the event is published to the Controllers
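
Given below is a minimal sketch of the first two read steps, again assuming a gorilla/websocket connection; the message type is illustrative and the event conversion is elided:

    import (
        "time"

        "github.com/gorilla/websocket"
    )

    // readFromEdge sets the keepalive deadline and reads one JSON message.
    func readFromEdge(conn *websocket.Conn, keepalive time.Duration) (map[string]interface{}, error) {
        // first a deadline is set for the keepalive interval
        if err := conn.SetReadDeadline(time.Now().Add(keepalive)); err != nil {
            return nil, err
        }
        // then the JSON message is read from the connection
        var msg map[string]interface{}
        if err := conn.ReadJSON(&msg); err != nil {
            return nil, err // the node is treated as disconnected
        }
        return msg, nil
    }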

Write Message to Edge:

  • First all event objects are received for the given nodeID
  • The existence of the same request and the liveness of the node are checked
  • The event object is converted to the message structure
  • A write deadline is set, then the message is passed to the websocket connection

Publish Message to Controllers:

  • A default message with timestamp, clientID and event type is sent to the controller every time a request is made to the websocket connection
  • If the node gets disconnected, an error is thrown and an event describing the node failure is published to the controller.

Usage

The CloudHub can be configured in three ways as mentioned below :

  • Start the websocket server only: Click here to see the details.
  • Start the quic server only: Click here to see the details.
  • Start the websocket and quic server at the same time: Click here to see the details

Device Controller

Device Controller Overview

The device controller is the cloud component of KubeEdge which is responsible for device management. Device management in KubeEdge is implemented by making use of Kubernetes Custom Resource Definitions (CRDs) to describe device metadata/status and device controller to synchronize these device updates between edge and cloud. The device controller starts two separate goroutines called upstream controller and downstream controller. These are not separate controllers as such but named here for clarity.

The device controller makes use of device model and device instance to implement device management :

  • Device Model: A device model describes the device properties exposed by the device and property visitors to access these properties. A device model is like a reusable template using which many devices can be created and managed. Details on device model definition can be found here.
  • Device Instance: A device instance represents an actual device object. It is like an instantiation of the device model and references properties defined in the model. The device spec is static while the device status contains dynamically changing data like the desired state of a device property and the state reported by the device. Details on device instance definition can be found here.

Note: Sample device model and device instance for a few protocols can be found at $GOPATH/src/github.com/kubeedge/kubeedge/build/crd-samples/devices

_images/device-crd-model.pngDevice Model

Operations Performed By Device Controller

The following are the functions performed by the device controller :-

  • Downstream Controller: Synchronize the device updates from the cloud to the edge node, by watching on K8S API server
  • Upstream Controller: Synchronize the device updates from the edge node to the cloud using device twin component

Upstream Controller:

The upstream controller watches for updates from the edge node and applies these updates against the API server in the cloud. Updates are categorized below along with the possible actions that the upstream controller can take:

  • Device Twin Reported State Updated: The controller patches the reported state of the device twin property in the cloud.

Device Upstream Controller

Syncing Reported Device Twin Property Update From Edge To Cloud

The mapper watches devices for updates and reports them to the event bus via the MQTT broker. The event bus sends the reported state of the device to the device twin, which stores it locally and then syncs the updates to the cloud. The device controller watches for device updates from the edge (via the cloudhub) and updates the reported state in the cloud.

_images/device-updates-edge-cloud.pngDevice Updates Edge To Cloud
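
Given below is a hedged sketch of how such a patch could be issued with client-go's dynamic client; the group/version/resource follow the devices CRD created during setup, while the namespace, device name and patch body are illustrative assumptions:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
    )

    // patchReportedTwin patches the reported state of one twin property.
    func patchReportedTwin(dyn dynamic.Interface) error {
        gvr := schema.GroupVersionResource{
            Group: "devices.kubeedge.io", Version: "v1alpha1", Resource: "devices",
        }
        // illustrative merge patch: set the reported value of "temperature"
        patch := []byte(`{"status":{"twins":[{"propertyName":"temperature","reported":{"value":"25"}}]}}`)
        _, err := dyn.Resource(gvr).Namespace("default").Patch(
            context.TODO(), "sensor-tag01", types.MergePatchType, patch, metav1.PatchOptions{})
        return err
    }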

Downstream Controller:

The downstream controller watches for device updates against the K8S API server. Updates are categorized below along with the possible actions that the downstream controller can take:

  • New Device Model Created: NA
  • New Device Created: The controller creates a new config map to store the device properties and visitors defined in the device model associated with the device. This config map is stored in etcd. The existing config map sync mechanism in the edge controller is used to sync the config map to the edge. The mapper application running in a container can get the updated config map and use the property and visitor metadata to access the device. The device controller additionally reports the device twin metadata updates to the edge node.
  • Device Node Membership Updated: The device controller sends a membership update event to the edge node.
  • Device Twin Desired State Updated: The device controller sends a twin update event to the edge node.
  • Device Deleted: The controller sends the device twin delete event to delete all device twins associated with the device. It also deletes config maps associated with the device and this delete event is synced to the edge. The mapper application effectively stops operating on the device.

_images/device-downstream-controller.pngDevice Downstream Controller

The idea behind using a config map to store device properties and visitors is that these metadata are only required by the mapper applications running on the edge node in order to connect to the device and collect data. Mappers, if run as containers, can load these properties as config maps. Any additions, deletions or updates to properties, visitors etc. in the cloud are watched by the downstream controller, and the config maps are updated in etcd. If the mapper wants to discover what properties a device supports, it can get the model information from the device instance. Also, it can get the protocol information to connect to the device from the device instance. Once it has access to the device model, it can get the properties supported by the device. In order to access a property, the mapper needs to get the corresponding visitor information. This can be retrieved from the propertyVisitors list. Finally, using the visitorConfig, the mapper can read/write the data associated with the property.
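
Given below is a minimal sketch of a mapper loading those properties with client-go; the config map name, namespace and data key are assumptions for illustration:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // loadDeviceProfile fetches the property/visitor metadata synced to the edge.
    func loadDeviceProfile(client kubernetes.Interface) (string, error) {
        cm, err := client.CoreV1().ConfigMaps("default").Get(
            context.TODO(), "device-profile-config-edge-node", metav1.GetOptions{})
        if err != nil {
            return "", err
        }
        // the mapper parses this to find properties and their visitors
        return cm.Data["deviceProfile.json"], nil
    }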

Syncing Desired Device Twin Property Update From Cloud To Edge

_images/device-updates-cloud-edge.pngDevice Updates Cloud To Edge

The device controller watches device updates in the cloud and relays them to the edge node. These updates are stored locally by the device twin. The mapper gets these updates via the MQTT broker and operates on the device based on the updates.

EdgeSite: Standalone Cluster at edge

Abstract

In Edge computing, there are scenarios where customers would like to have a whole cluster installed at edge location. As a result, admins/users can leverage the local control plane to implement management functionalities and take advantages of all edge computing’s benefits.

EdgeSite helps running lightweight clusters at edge.

Motivation

There are scenarios where users need to run a standalone Kubernetes cluster at the edge to get full control and to improve offline scheduling capability. There are two such scenarios:

  • The edge cluster is in CDN instead of the user’s site

    CDN sites are usually spread widely around the world, and their network connectivity and quality cannot be guaranteed. Moreover, applications deployed at the CDN edge usually do not need to interact with the center. Those who deploy edge clusters on CDN resources need to make sure the cluster is workable without a connection to the central cloud, not only for the deployed applications but also for the scheduling capabilities, so that the CDN edge is manageable regardless of its connection to any center.

  • Users need to deploy an edge environment with limited resources that runs offline most of the time

    In some IoT scenarios, users need to deploy a fully controlled edge environment that runs offline.

For these use cases, a standalone, fully controlled, lightweight edge cluster is required. By integrating KubeEdge and standard Kubernetes, EdgeSite enables customers to run an efficient Kubernetes cluster for Edge/IoT computing.

Assumptions

Here we assume a cluster is deployed at edge location including the management control plane. For the management control plane to manage some scale of edge worker nodes, the hosting master node needs to have sufficient resources.

The assumptions are

  1. The EdgeSite cluster master node has no less than 2 CPUs and no less than 1GB of memory
  2. If high availability is required, 2-3 master nodes are needed at different edge locations
  3. The same Kubernetes security (authN and authZ) mechanisms are used to ensure the secure handshake between master and worker nodes
  4. The same K8s HA mechanism is to be used to enable HA

Architecture Design

_images/EdgeSite_arch.PNGEdgeSite Architecture

Advantages

With the integration, the following can be enabled

  1. Full control of Kubernetes cluster at edge
  2. Lightweight control plane and agent
  3. Edge worker node autonomy in case of network disconnection/reconnection
  4. All benefits of edge computing including latency, data locality, etc.

Getting Started

Setup

_images/EdgeSite_Setup.PNGEdgeSite Setup

Steps for K8S (API server) Cluster

  • Install docker

  • Install kubeadm/kubectl

  • Creating cluster with kubeadm

  • KubeEdge supports https connection to Kubernetes apiserver.

    Enter the path to kubeconfig file in controller.yaml

    controller:
      kube:
        ...
        kubeconfig: "path_to_kubeconfig_file" #Enter path to kubeconfig file to enable https connection to k8s apiserver
    
  • (Optional) KubeEdge also supports insecure http connection to the Kubernetes apiserver for testing and debugging. Please follow the below steps to enable the http port in the Kubernetes apiserver.

    vi /etc/kubernetes/manifests/kube-apiserver.yaml
    # Add the following flags in spec: containers: -command section
    - --insecure-port=8080
    - --insecure-bind-address=0.0.0.0
    

    Enter the master address in controller.yaml

    controller:
      kube:
        ...
        master: "http://127.0.0.1:8080" #Note if master and kubeconfig are both set, master will override any value in kubeconfig.
    

Steps for EdgeSite

Getting EdgeSite Binary
Using Source code
  • Clone KubeEdge (EdgeSite) code

    git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
    
  • Build EdgeSite

    cd $GOPATH/src/github.com/kubeedge/kubeedge/edgesite
    make
    
Download Release packages

TBA

Configuring EdgeSite

Modify the edgeSite.yaml configuration file with the IP address of the K8s API server

  • Configure K8S (API Server)

    Replace localhost at controller.kube.master with the IP address

    controller:
      kube:
        master: http://localhost:8080
        ...
    
  • Add EdgeSite (Worker) Node ID/name

    Replace edge-node with a unique edge id/name in the below fields :

    • controller.kube.node-id
    • controller.edged.hostname-override
    controller:
      kube:
        ...
        node-id: edge-node
        node-name: edge-node
        ...
      edged:
        ...
        hostname-override: edge-node
        ...
    
  • Configure MQTT (Optional)

    The Edge part of KubeEdge uses MQTT for communication between deviceTwin and devices. KubeEdge supports 3 MQTT modes:

    1. internalMqttMode: internal mqtt broker is enabled. (Default)
    2. bothMqttMode: internal as well as external broker are enabled.
    3. externalMqttMode: only external broker is enabled.

    Use mode field in edgeSite.yaml to select the desired mode.

    mqtt:
      ...
      mode: 0 # 0: internal mqtt broker enable only. 1: internal and external mqtt broker enable. 2: external mqtt broker enable only.
      ...
    

    To use KubeEdge in double mqtt or external mode, you need to make sure that mosquitto or emqx edge is installed on the edge node as an MQTT Broker.

Run EdgeSite
  # run edgesite
  # `conf/` should be in the same directory as the cloned KubeEdge repository
  # verify the configurations before running edgesite
  ./edgesite
  # or
  nohup ./edgesite > edgesite.log 2>&1 &

Note: Please run edgesite as a user with root permission.

Deploy EdgeSite (Worker) Node to K8S Cluster

We have provided a sample node.json to add a node in kubernetes. Please make sure the edgesite (worker) node is added to the k8s api-server. Run the below steps:

  • Modify node.json

    Replace edge-node in the node.json file with the id/name of the edgesite node. The ID/name should be the same as used earlier while updating edgeSite.yaml

      {
        "metadata": {
          "name": "edge-node"
        }
      }
    
  • Add node in K8S API server

    In the console execute the below command

      kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json
    
  • Check node status

    Use the below command to check the edgesite node status.

      kubectl get nodes
    
      NAME         STATUS     ROLES    AGE     VERSION
      testing123   Ready      <none>   6s      0.3.0-beta.0
    

    Observe the edgesite node is in Ready state

Deploy Application

Try out a sample application deployment by following below steps.

kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml

Note: Currently, for an edgesite node, we must use hostPort in the Pod container spec so that the pod comes up normally, otherwise the pod will stay in ContainerCreating status forever. The hostPort must be equal to the containerPort and cannot be 0.

Then you can use below command to check if the application is normally running.

  kubectl get pods

Bluetooth Mapper

Introduction

Mapper is an application that is used to connect and control devices. This is an implementation of mapper for bluetooth protocol. The aim is to create an application through which users can easily operate devices using bluetooth protocol for communication to the KubeEdge platform. The user is required to provide the mapper with the information required to control their device through the configuration file. These can be changed at runtime by providing the input through the MQTT broker.

Running the mapper

  1. Please ensure that bluetooth service of your device is ON

  2. Set ‘bluetooth=true’ label for the node (This label is a prerequisite for the scheduler to schedule bluetooth_mapper pod on the node)

    kubectl label nodes <name-of-node> bluetooth=true
    
  3. Build and deploy the mapper by following the steps given below.

Building the bluetooth mapper

cd $GOPATH/src/github.com/kubeedge/kubeedge/device/bluetooth_mapper
make bluetooth_mapper_image
docker tag bluetooth_mapper:v1.0 <your_dockerhub_username>/bluetooth_mapper:v1.0
docker push <your_dockerhub_username>/bluetooth_mapper:v1.0

Note: Before trying to push the docker image to the remote repository please ensure that you have signed into docker from your node; if not, please type the following command to sign in
docker login
# Please enter your username and password when prompted

Deploying bluetooth mapper application

cd $GOPATH/src/github.com/kubeedge/kubeedge/device/bluetooth_mapper
    
# Please enter the following details in the deployment.yaml :-
#    1. Replace <edge_node_name> with the name of your edge node at spec.template.spec.volumes.configMap.name
#    2. Replace <your_dockerhub_username> with your dockerhub username at spec.template.spec.containers.image

kubectl create -f deployment.yaml

Modules

The bluetooth mapper consists of the following five major modules :-

  1. Action Manager
  2. Scheduler
  3. Watcher
  4. Controller
  5. Data Converter

Action Manager

A bluetooth device can be controlled by setting a specific value in physical register(s) of a device and readings can be acquired by getting the value from specific register(s). We can define an Action as a group of read/write operations on a device. A device may support multiple such actions. The registers are identified by characteristic values which are exposed by the device through entities called characteristic-uuids. Each of these actions should be supplied through config-file to action manager or at runtime through MQTT. The values specified initially through the configuration file can be modified at runtime through MQTT. Given below is a guide to provide input to action manager through the configuration file.

action-manager:
   actions:          # Multiple actions can be added
     - name: <name of the action>
       perform-immediately: <true/false>
       device-property-name: <property-name defined in the device model>
     - .......
       .......
  1. Multiple actions can be added in the action manager module. Each of these actions can either be executed by the action manager itself or invoked by other modules of the mapper, like the scheduler and the watcher.
  2. The name of each action should be unique; it is by this name that other modules like the scheduler or watcher specify which action to perform.
  3. The perform-immediately field tells the action manager whether it is supposed to perform the action immediately or not; if it is set to true, the action manager performs the action once.
  4. Each action is associated with a device-property-name, which is the property-name defined in the device CRD, which in turn contains the implementation details required by the action.

Scheduler

Scheduler is a component which can perform an action or a set of actions at regular intervals of time. It makes use of the actions previously defined in the action manager module; it has to be ensured that an action is defined before any schedule that uses it executes, otherwise it leads to an error. The schedule can be configured to run for a specified number of times or to run infinitely. The scheduler is an optional module and need not be specified if not required by the user. The user can provide input to the scheduler through the configuration file or through MQTT at runtime. The values specified initially by the user through the configuration file can be modified at runtime through MQTT. Given below is a guide to provide input to the scheduler through the configuration file.

      scheduler:
        schedules:
          - name: <name of schedule>
            interval: <time in milliseconds>
            occurrence-limit: <number of times to be executed>            # if it is 0, then the event will execute infinitely
            actions:
              - <action name>
              - <action name>
          - ......
            ......
  1. Multiple schedules can be defined by the user by providing an array as input through the configuration file.
  2. Name specifies the name of the schedule to be executed; each schedule must have a unique name, as it is used as a means of identification by the scheduler.
  3. Interval refers to the time interval at which the schedule is meant to be repeated. The user is expected to provide the input in milliseconds.
  4. Occurrence-limit refers to the number of times the action(s) is supposed to occur. If the user wants the schedule to run infinitely, it can be set to 0 or the field can be skipped (a minimal runner is sketched after this list).
  5. Actions refer to the action names which are supposed to be executed in the schedule. The actions will be executed in the same order in which they are mentioned here.
  6. The user is expected to provide the names of the actions to be performed in the schedule, in the same order that they are to be executed.
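
Given below is a minimal sketch of these schedule semantics in Go; the action type is illustrative and error handling is elided:

    import "time"

    // runSchedule runs the listed actions every interval, up to
    // occurrenceLimit times; 0 means the schedule repeats forever.
    func runSchedule(interval time.Duration, occurrenceLimit int, actions []func()) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for count := 0; occurrenceLimit == 0 || count < occurrenceLimit; count++ {
            <-ticker.C
            for _, action := range actions {
                action() // actions execute in the order they are listed
            }
        }
    }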

Watcher

The following are the main responsibilities of the watcher component:

a) To scan for bluetooth devices and connect to the correct device once it is Online/In-Range.

b) To keep a watch on the expected state of the twin-attributes of the device and perform the action(s) needed to make the actual state equal to the expected state.

c) To report the actual state of the twin attributes back to the cloud.

The watcher is an optional component and need not be defined or used by the user if not necessary. The input to the watcher can be provided through the configuration file or through mqtt at runtime. The values that are defined through the configuration file can be changed at runtime through MQTT. Given below is a guide to provide input to the watcher through the configuration file.

      watcher:
        device-twin-attributes :
        - device-property-name: <name of attribute>
          actions:        # Multiple actions can be added
          - <action name>
          - <action name>
        - ......
          ......
  1. Device-property-name refers to the device twin attribute name that was given when creating the device. It is using this name that the watcher watches for any change in expected state.
  2. Actions refers to a list of action names; these are the actions used to bring the actual state to the expected state.
  3. The names of the actions being provided must have been defined using the action manager before the mapper begins execution. Also the action names should be mentioned in the same order in which they have to be executed.

Controller

The controller module is responsible for exposing MQTT APIs to perform CRUD operations on the watcher, scheduler and action manager. The controller is also responsible for starting the other modules: action manager, watcher and scheduler. The controller first connects the MQTT client to the broker (using the mqtt configurations specified in the configuration file). It then initiates the watcher, which connects to the device (based on the configurations provided in the configuration file) and runs in parallel. After this it starts the action manager, which executes all the actions that have been enabled in it, after which the scheduler is started to run in parallel as well. Given below is a guide to provide input to the controller through the configuration file.

      mqtt:
        mode: 0       # 0 -internal mqtt broker  1 - external mqtt broker
        server: tcp://127.0.0.1:1883 # external mqtt broker url.
        internal-server: tcp://127.0.0.1:1884 # internal mqtt broker url.
      device-model-name: <device_model_name>

Usage

Configuration File

The user can give the configurations specific to the bluetooth device using configurations provided in the configuration file present at $GOPATH/src/github.com/kubeedge/kubeedge/device/bluetooth_mapper/configuration/config.yaml. The details provided in the configuration file are used by action-manager module, scheduler module, watcher module, the data-converter module and the controller.

Example: Given below are the instructions using which users can create their own configuration file for their device.

     mqtt:
       mode: 0       # 0 -internal mqtt broker  1 - external mqtt broker
       server: tcp://127.0.0.1:1883 # external mqtt broker url.
       internal-server: tcp://127.0.0.1:1884 # internal mqtt broker url.
     device-model-name: <device_model_name>        #deviceID received while registering device with the cloud
     action-manager:
       actions:          # Multiple actions can be added
       - name: <name of the action>
         perform-immediately: <true/false>
         device-property-name: <property-name defined in the device model>
       - .......
         .......
     scheduler:
       schedules:
       - name: <name of schedule>
         interval: <time in milliseconds>
         occurrence-limit: <number of times to be executed>            # if it is 0, then the event will execute infinitely
         actions:
         - <action name>
         - <action name>
         - ......
       - ......
     watcher:
       device-twin-attributes :
       - device-property-name: <name of attribute>
         actions:        # Multiple actions can be added
         - <action name>
         - <action name>
         - ......
       - ......

Runtime Configuration Modifications

The configuration of the mapper as well as triggering of the modules of the mapper can be done during runtime. The user can do this by publishing messages on the respective MQTT topics of each module. Please note that we have to use the same MQTT broker that is being used by the mapper i.e. if the mapper is using the internal MQTT broker then the messages have to be published on the internal MQTT broker and if the mapper is using the external MQTT broker then the messages have to be published on the external MQTT broker.

The following properties can be changed at runtime by publishing messages on MQTT topics of the MQTT broker:

  • Watcher
  • Action Manager
  • Scheduler
Watcher

The user can add or update the watcher properties of the mapper at runtime. It will overwrite the existing watcher configuration (if one exists).

Topic: $ke/device/bluetooth-mapper/< deviceID >/watcher/create

Message:

         {
          "device-twin-attributes": [
            {
              "device-property-name": "IOControl",
              "actions": [                     # List of names of actions to be performed (actions should have been defined before watching)
                "IOConfigurationInitialize",
                "IODataInitialize",
                "IOConfiguration",
                "IOData"
              ]
            }
          ]
        }
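
Given below is a hedged sketch of publishing this message with the Eclipse paho Go client; the broker URL, the deviceID in the topic and the payload file are illustrative:

    package main

    import (
        "os"

        mqtt "github.com/eclipse/paho.mqtt.golang"
    )

    func main() {
        // connect to the mapper's broker (here, the internal one)
        opts := mqtt.NewClientOptions().AddBroker("tcp://127.0.0.1:1884")
        client := mqtt.NewClient(opts)
        if token := client.Connect(); token.Wait() && token.Error() != nil {
            panic(token.Error())
        }
        payload, err := os.ReadFile("watcher.json") // the JSON message shown above
        if err != nil {
            panic(err)
        }
        topic := "$ke/device/bluetooth-mapper/mydevice00/watcher/create"
        if token := client.Publish(topic, 0, false, payload); token.Wait() && token.Error() != nil {
            panic(token.Error())
        }
        client.Disconnect(250)
    }
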
Action Manager

In the action manager module the user can perform two types of operations at runtime:

  1. The user can add or update the actions to be performed on the bluetooth device.
  2. The user can delete the actions that were previously defined for the bluetooth device.

Action Add

The user can add a set of actions to be performed by the mapper. If an action with the same name as one of the actions in the list already exists, it is updated; otherwise the action is added to the existing set of actions.

Topic: $ke/device/bluetooth-mapper/< deviceID >/action-manager/create

Message:

    [
      {
        "name": "IRTemperatureConfiguration",          # name of action
        "perform-immediately": true,                   # whether the action is to performed immediately or not
        "device-property-name": "temperature-enable"   #property-name defined in the device model
      },
      {
        "name": "IRTemperatureData",
        "perform-immediately": true,
        "device-property-name": "temperature"          #property-name defined in the device model
      }
    ]
Action Delete

The users can delete a set of actions that were previously defined for the device. If the action mentioned in the list does not exist then it returns an error message.

Topic: $ke/device/bluetooth-mapper/< deviceID >/action-manager/delete

Message:

    [
      {
        "name": "IRTemperatureConfiguration"        #name of action to be deleted
      },
      {
        "name": "IRTemperatureData"
      },
      {
        "name": "IOConfigurationInitialize"
      },
      {
        "name": "IOConfiguration"
      }
    ]
Scheduler

In the scheduler module the user can perform two types of operations at runtime:

  1. The user can add or update the schedules to be performed on the bluetooth device.
  2. The user can delete the schedules that were previously defined for the bluetooth device.

Schedule Add

The user can add a set of schedules to be performed by the mapper. If a schedule with the same name as one of the schedules in the list already exists, it is updated; otherwise the schedule is added to the existing set of schedules.

Topic: $ke/device/bluetooth-mapper/< deviceID >/scheduler/create

Message:

[
  {
    "name": "temperature",            # name of schedule
    "interval": 3000,           # frequency of the actions to be executed (in milliseconds)
    "occurrence-limit": 25,         # Maximum number of times the event is to be executed, if not given then it runs infinitely 
    "actions": [                          # List of names of actions to be performed (actions should have been defined before execution of schedule) 
      "IRTemperatureConfiguration",
      "IRTemperatureData"
    ]
  }
]
Schedule Delete

The users can delete a set of schedules that were previously defined for the device. If the schedule mentioned in the list does not exist then it returns an error message.

Topic: $ke/device/bluetooth-mapper/< deviceID >/scheduler/delete

Message:

    [
      {
        "name": "temperature"                  #name of schedule to be deleted
      }
    ]

Modbus Mapper

Introduction

Mapper is an application that is used to connect and control devices. This is an implementation of mapper for Modbus protocol. The aim is to create an application through which users can easily operate devices using ModbusTCP/ModbusRTU protocol for communication to the KubeEdge platform. The user is required to provide the mapper with the information required to control their device through the dpl configuration file. These can be changed at runtime by updating configmap.

Running the mapper

  1. Please ensure that Modbus device is connected to your edge node

  2. Set ‘modbus=true’ label for the node (This label is a prerequisite for the scheduler to schedule modbus_mapper pod on the node)

    kubectl label nodes <name-of-node> modbus=true
    
  3. Build and deploy the mapper by following the steps given below.

Building the modbus mapper

cd $GOPATH/src/github.com/kubeedge/kubeedge/device/modbus_mapper
make # or `make modbus_mapper`
docker tag modbus_mapper:v1.0 <your_dockerhub_username>/modbus_mapper:v1.0
docker push <your_dockerhub_username>/modbus_mapper:v1.0

Note: Before trying to push the docker image to the remote repository please ensure that you have signed into docker from your node; if not, please type the following command to sign in
docker login
# Please enter your username and password when prompted

Deploying modbus mapper application

cd $GOPATH/src/github.com/kubeedge/kubeedge/device/modbus_mapper

# Please enter the following details in the deployment.yaml :-
#    1. Replace <edge_node_name> with the name of your edge node at spec.template.spec.volumes.configMap.name
#    2. Replace <your_dockerhub_username> with your dockerhub username at spec.template.spec.containers.image

kubectl create -f deployment.yaml

Modules

The modbus mapper consists of the following four major modules :-

  1. Controller
  2. Modbus Manager
  3. Devicetwin Manager
  4. File Watcher

Controller

The main entry is index.js. The controller module is responsible for subscribing to the edge MQTT devicetwin topic and performing check/modify operations on connected modbus devices. The controller is also responsible for loading the configuration and starting the other modules. The controller first connects the MQTT client to the broker to receive messages with expected devicetwin values (using the mqtt configurations in conf.json); it then connects to the devices and checks all the properties of the devices every 2 seconds (based on the dpl configuration provided in the configuration file), while the file watcher runs in parallel to check whether the dpl configuration file has changed.

Modbus Manager

Modbus Manager is a component which can perform a read or write action on a modbus device. The following are the main responsibilities of this component:

a) When the controller receives a message with an expected devicetwin value, Modbus Manager connects to the device and changes the registers to make the actual state equal to the expected state.

b) When the controller checks all the properties of the devices, Modbus Manager connects to the device and reads the actual values in the registers according to the dpl configuration.

Devicetwin Manager

Devicetwin Manager is a component which transfers the edge devicetwin messages. The following are the main responsibilities of this component:

a) To receive the edge devicetwin message from the edge mqtt broker and parse the message.

b) To report the actual value of the device properties in devicetwin format to the cloud.

File Watcher

File Watcher is a component which loads the dpl and mqtt configuration from the configuration files. The following are the main responsibilities of this component:

a) To monitor the dpl configuration file. If this file changes, the file watcher reloads the dpl configuration to the mapper.

b) To load the dpl and mqtt configuration when the mapper starts for the first time.

Pre-requisites

For best understanding of the guides, it’s useful to have some knowledge of the following systems:

Setup KubeEdge from sourcecode

Abstract

KubeEdge is composed of cloud and edge parts. It is built upon Kubernetes and provides core infrastructure support for networking, application deployment and metadata synchronization between cloud and edge. So if we want to set up KubeEdge, we need to set up a Kubernetes cluster, the cloud side and the edge side.

  • On the cloud side, we need to install docker, a kubernetes cluster and cloudcore.
  • On the edge side, we need to install docker, an mqtt broker and edgecore.

Prerequisites

Edge side

Note:

  • Do not install kubelet and kube-proxy on the edge side
  • If you use kubeadm to install kubernetes, the kubeadm init command cannot be followed by the --experimental-upload-certs or --upload-certs flag

Run KubeEdge

Setup cloud side

Clone KubeEdge
git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
cd $GOPATH/src/github.com/kubeedge/kubeedge
Generate Certificates

A RootCA certificate and a cert/key pair are required for a KubeEdge setup. The same cert/key pair can be used on both cloud and edge.

$GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh genCertAndKey edge

The cert/key will be generated in /etc/kubeedge/ca and /etc/kubeedge/certs respectively, so this command should be run as root or a user who has access to those directories. We need to copy these files to the corresponding edge side server directory.

Run as a binary
  • Firstly, make sure gcc is already installed on your host. You can verify it via:

    gcc --version
    
  • Build cloudcore

    cd $GOPATH/src/github.com/kubeedge/kubeedge/
    make all WHAT=cloudcore
    
  • Create DeviceModel and Device CRDs.

    cd $GOPATH/src/github.com/kubeedge/kubeedge/build/crds/devices
    kubectl create -f devices_v1alpha1_devicemodel.yaml
    kubectl create -f devices_v1alpha1_device.yaml
    
  • Create ClusterObjectSync and ObjectSync CRDs, which are used in reliable message delivery.

    cd $GOPATH/src/github.com/kubeedge/kubeedge/build/crds/reliablesyncs
    kubectl create -f cluster_objectsync_v1alpha1.yaml
    kubectl create -f objectsync_v1alpha1.yaml
    
  • Copy cloudcore binary

    cd $GOPATH/src/github.com/kubeedge/kubeedge/cloud
    mkdir -p ~/cmd
    cp cloudcore ~/cmd/
    

    Note: the ~/cmd/ dir is an example; in the following steps we continue to use ~/cmd/ as the binary startup directory. You can move the cloudcore or edgecore binary anywhere.

  • Create and set cloudcore config file

    # the default configuration file path is '/etc/kubeedge/config/cloudcore.yaml'
    # also you can specify it anywhere with '--config'
    mkdir -p /etc/kubeedge/config/ 
    
    # create a minimal configuration with command `~/cmd/cloudcore --minconfig`
    # or a full configuration with command `~/cmd/cloudcore --defaultconfig`
    ~/cmd/cloudcore --minconfig > /etc/kubeedge/config/cloudcore.yaml 
    vim /etc/kubeedge/config/cloudcore.yaml 
    

    verify the configurations before running cloudcore

    apiVersion: cloudcore.config.kubeedge.io/v1alpha1
    kind: CloudCore
    kubeAPIConfig:
      kubeConfig: /root/.kube/config #Enter absolute path to kubeconfig file to enable https connection to k8s apiserver,if master and kubeconfig are both set, master will override any value in kubeconfig.
      master: "" # kube-apiserver address (such as:http://localhost:8080)
    modules:
      cloudhub:
        nodeLimit: 10
        tlsCAFile: /etc/kubeedge/ca/rootCA.crt
        tlsCertFile: /etc/kubeedge/certs/edge.crt
        tlsPrivateKeyFile: /etc/kubeedge/certs/edge.key
        unixsocket:
          address: unix:///var/lib/kubeedge/kubeedge.sock # unix domain socket address
          enable: true # enable unix domain socket protocol
        websocket:
          address: 0.0.0.0
          enable: true # enable websocket protocol
          port: 10000 # open port for websocket server
    

    cloudcore uses an https connection to the Kubernetes apiserver by default, so you should make sure kubeAPIConfig.kubeConfig exists; if master and kubeConfig are both set, master will override any value in the kubeconfig. Check whether the cert files for modules.cloudhub.tlsCAFile, modules.cloudhub.tlsCertFile and modules.cloudhub.tlsPrivateKeyFile exist.

  • Run cloudcore

    cd ~/cmd/
    nohup ./cloudcore &
    
  • Run cloudcore with systemd

    It is also possible to start cloudcore with systemd. If you want, you could use the example systemd-unit-file. The following commands show how to set this up:

    sudo ln build/tools/cloudcore.service /etc/systemd/system/cloudcore.service
    sudo systemctl daemon-reload
    sudo systemctl start cloudcore
    

    Note: Please fix the ExecStart path in cloudcore.service. Do NOT use a relative path, use an absolute path instead.

    If you also want autostart, you have to execute this, too:

    sudo systemctl enable cloudcore
    
  • (Optional) Run admission; this feature is still being evaluated. Please read the docs in install the admission webhook

Deploy the edge node

Edge node can be registered automatically. But if you want to deploy edge node manually, here is an example.

Setup edge side

  • Transfer the certificate files from the cloud side to the edge node, because edgecore uses these certificate files to connect to cloudcore
Clone KubeEdge
git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
cd $GOPATH/src/github.com/kubeedge/kubeedge
Run Edge
Configuring MQTT mode

The Edge part of KubeEdge uses MQTT for communication between deviceTwin and devices. KubeEdge supports 3 MQTT modes:

  1. internalMqttMode: internal mqtt broker is enabled.
  2. bothMqttMode: internal as well as external broker are enabled.
  3. externalMqttMode: only external broker is enabled.

To use KubeEdge in double mqtt or external mode, you need to make sure that mosquitto or emqx edge is installed on the edge node as an MQTT Broker.

Run as a binary
  • Build Edge

    cd $GOPATH/src/github.com/kubeedge/kubeedge
    make all WHAT=edgecore
    

    KubeEdge can also be cross compiled to run on ARM based processors. Please follow the instructions given below or click Cross Compilation for detailed instructions.

    cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
    make edge_cross_build
    

    KubeEdge can also be compiled with a small binary size. Please follow the below steps to build a binary of lesser size:

    apt-get install upx-ucl
    cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
    make edge_small_build
    

    Note: If you are using the smaller version of the binary, it is compressed using upx, so the usual side effects of upx-compressed binaries apply here as well: higher RAM usage, lower performance, the whole program being loaded into memory instead of on demand, and no sharing of memory, which may cause the code to be loaded into memory more than once.

  • Copy edgecore binary

    cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
    mkdir -p ~/cmd
    cp edgecore ~/cmd/
    

    Note: the ~/cmd/ dir is an example here as well, as with cloudcore

  • Create and set edgecore config file

    # the default configuration file path is '/etc/kubeedge/config/edgecore.yaml'
    # also you can specify it anywhere with '--config'
    mkdir -p /etc/kubeedge/config/ 
    
    # create a minimal configuration with command `~/cmd/edgecore --minconfig`
    # or a full configuration with command `~/cmd/edgecore --defaultconfig`
    ~/cmd/edgecore --minconfig > /etc/kubeedge/config/edgecore.yaml 
    vim /etc/kubeedge/config/edgecore.yaml 
    

    verify the configurations before running edgecore

    apiVersion: edgecore.config.kubeedge.io/v1alpha1
    database:
      dataSource: /var/lib/kubeedge/edgecore.db
    kind: EdgeCore
    modules:
      edged:
        cgroupDriver: cgroupfs
        clusterDNS: ""
        clusterDomain: ""
        devicePluginEnabled: false
        dockerAddress: unix:///var/run/docker.sock
        gpuPluginEnabled: false
        hostnameOverride: $your_hostname
        interfaceName: eth0
        nodeIP: $your_ip_address
        podSandboxImage: kubeedge/pause:3.1  # kubeedge/pause:3.1 for x86 arch, kubeedge/pause-arm:3.1 for arm arch, kubeedge/pause-arm64:3.1 for arm64 arch
        remoteImageEndpoint: unix:///var/run/dockershim.sock
        remoteRuntimeEndpoint: unix:///var/run/dockershim.sock
        runtimeType: docker
      edgehub:
        heartbeat: 15  # second
        tlsCaFile: /etc/kubeedge/ca/rootCA.crt
        tlsCertFile: /etc/kubeedge/certs/edge.crt
        tlsPrivateKeyFile: /etc/kubeedge/certs/edge.key
        websocket:
          enable: true
          handshakeTimeout: 30  # second
          readDeadline: 15  # second
          server: 127.0.0.1:10000  # cloudcore address
          writeDeadline: 15  # second
      eventbus:
        mqttMode: 2  # 0: internal mqtt broker enable only. 1: internal and external mqtt broker enable. 2: external mqtt broker
        mqttQOS: 0  # 0: QOSAtMostOnce, 1: QOSAtLeastOnce, 2: QOSExactlyOnce.
        mqttRetain: false  # if the flag set true, server will store the message and can be delivered to future subscribers.
        mqttServerExternal: tcp://127.0.0.1:1883  # external mqtt broker url.
        mqttServerInternal: tcp://127.0.0.1:1884  # internal mqtt broker url.
    
    • Check modules.edged.podSandboxImage
      • kubeedge/pause-arm:3.1 for arm arch
      • kubeedge/pause-arm64:3.1 for arm64 arch
      • kubeedge/pause:3.1 for x86 arch
    • Check whether the cert files for modules.edgehub.tlsCaFile, modules.edgehub.tlsCertFile and modules.edgehub.tlsPrivateKeyFile exist. If those files do not exist, you need to copy them from the cloud side.
    • Check modules.edgehub.websocket.server. It should be your cloudcore IP address.
  • Run edgecore

    # run mosquitto
    mosquitto -d -p 1883
    # or run emqx edge
    # emqx start
    
    cd ~/cmd
    ./edgecore
    # or
    nohup ./edgecore > edgecore.log 2>&1 &
    

    Note: Please run edgecore as a user with root permission.

  • Run edgecore with systemd

    It is also possible to start edgecore with systemd. If you want, you could use the example systemd-unit-file. The following commands show how to set this up:

    sudo ln build/tools/edgecore.service /etc/systemd/system/edgecore.service
    sudo systemctl daemon-reload
    sudo systemctl start edgecore
    

    Note: Please fix the ExecStart path in edgecore.service. Do NOT use a relative path, use an absolute path instead.

    If you also want autostart, you have to execute this, too:

    sudo systemctl enable edgecore
    
Check status

After the Cloud and Edge parts have started, you can use below command to check the edge node status.

kubectl get nodes

Please make sure the status of edge node you created is ready.

Deploy Application on cloud side

Try out a sample application deployment by following below steps.

kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml

Note: Currently, for applications running on edge nodes, we don't support the kubectl logs and kubectl exec commands (they will be supported in a future release); pod-to-pod communication between pods running on edge nodes in the same subnet is supported using edgemesh.

Then you can use below command to check if the application is normally running.

kubectl get pods

Run Tests

Run Edge Unit Tests

make edge_test

To run unit tests of a package individually.

export GOARCHAIUS_CONFIG_PATH=$GOPATH/src/github.com/kubeedge/kubeedge/edge
cd <path to package to be tested>
go test -v

Run Edge Integration Tests

make integrationtest

Details and use cases of integration test framework

Please find the link to the use cases of the integration test framework for KubeEdge.

Getting Started with KubeEdge Installer

Please refer to KubeEdge Installer proposal document for details on the motivation of having KubeEdge Installer. It also explains the functionality of the proposed commands. KubeEdge Installer Doc

Limitation

  • Currently support of KubeEdge installer is available only for Ubuntu OS. CentOS support is in-progress.

Downloading KubeEdge Installer

  1. Go to the KubeEdge Release page and download keadm-$VERSION-$OS-$ARCH.tar.gz.
  2. Untar it at the desired location by executing tar -xvzf keadm-$VERSION-$OS-$ARCH.tar.gz.
  3. A kubeedge folder is created after executing the command.

Building from source

  1. Download the source code:
     git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
  2. cd $GOPATH/src/github.com/kubeedge/kubeedge/keadm
  3. make
  4. The keadm binary is available in the current path

Installing KubeEdge Master Node (on the Cloud) component

Referring to the KubeEdge Installer Doc, the command below installs the KubeEdge cloud component (edge controller) and its pre-requisites. Ports 8080, 6443 and 10000 on your cloud component need to be accessible for your edge nodes.

  • Execute keadm init

Command flags

The optional flags with this command are mentioned below

$  keadm init --help

keadm init command bootstraps KubeEdge's cloud component.
It checks if the pre-requisites are installed already,
If not installed, this command will help in download,
install and execute on the host.

Usage:
  keadm init [flags]

Examples:

keadm init

Flags:
      --docker-version string[="18.06.0"]          Use this key to download and use the required Docker version (default "18.06.0")
  -h, --help                                       help for init
      --kubeedge-version string[="0.3.0-beta.0"]   Use this key to download and use the required KubeEdge version (default "0.3.0-beta.0")
      --kubernetes-version string[="1.14.1"]       Use this key to download and use the required Kubernetes version (default "1.14.1")
  1. --docker-version, if specified with any version > 18.06.0, installs that version on the host. Default is 18.06.0. It is optional.
  2. --kubernetes-version, if specified with any version > 1.14.1, installs that version on the host. Default is 1.14.1. It is optional. It installs kubeadm, kubectl and kubelet on this host.
  3. --kubeedge-version, if specified with any version > 0.2.1, installs that version on the host. Default is 0.3.0-beta.0. It is optional.

The command format is:

keadm init --docker-version=<expected version> --kubernetes-version=<expected version> --kubeedge-version=<expected version>

NOTE: The default Docker and Kubernetes versions listed above are the ones KubeEdge is tested with.
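
For example, pinning the documented defaults explicitly:

keadm init --docker-version=18.06.0 --kubernetes-version=1.14.1 --kubeedge-version=0.3.0-beta.0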

Installing KubeEdge Worker Node (at the Edge) component

Referring to the KubeEdge Installer Doc, the command below installs the KubeEdge edge component (edge core) and its pre-requisites.

  • Execute keadm join <flags>

Command flags

The optional flags for this command are shown in the shell output below:

$  keadm join --help
 
"keadm join" command bootstraps KubeEdge's edge component.
It checks if the pre-requisites are installed already,
If not installed, this command will help in download,
to install the prerequisites.
It will help the edge node to connect to the cloud.

Usage:
  keadm join [flags]

Examples:

keadm join --cloudcoreip=<ip address> --edgenodeid=<unique string as edge identifier>

  - For this command --cloudcoreip flag is a Mandatory flag
  - This command will download and install the default version of pre-requisites and KubeEdge

keadm join --cloudcoreip=10.20.30.40 --edgenodeid=testing123 --kubeedge-version=0.2.1 --k8sserverip=50.60.70.80:8080

  - In case, any option is used in a format like as shown for "--docker-version" or "--docker-version=", without a value
        then default values will be used.
        Also options like "--docker-version", and "--kubeedge-version", version should be in
        format like "18.06.3" and "0.2.1".

Flags:
      --docker-version string[="18.06.0"]          Use this key to download and use the required Docker version (default "18.06.0")
  -e, --cloudcoreip string                         IP address of KubeEdge cloudcore
  -i, --edgenodeid string                          KubeEdge Node unique identification string, If flag not used then the command will generate a unique id on its own
  -h, --help                                       help for join
  -k, --k8sserverip string                         IP:Port address of K8S API-Server
      --kubeedge-version string[="0.3.0-beta.0"]   Use this key to download and use the required KubeEdge version (default "0.3.0-beta.0")
  1. For the --kubeedge-version flag, the functionality is the same as described for keadm init.
  2. -k, --k8sserverip should be in the format IPAddress:Port, where the default port is 8080. Please see the example above.

IMPORTANT NOTE: The KubeEdge version used on the cloud and edge side must be the same.

Reset KubeEdge Master and Worker nodes

Referring to the KubeEdge Installer Doc, the command below stops the KubeEdge processes (cloud or edge, depending on the node). It doesn't uninstall/remove any of the pre-requisites.

  • Execute keadm reset

Command flags

keadm reset --help

keadm reset command can be executed in both cloud and edge node
In master node it shuts down the cloud processes of KubeEdge
In worker node it shuts down the edge processes of KubeEdge

Usage:
  keadm reset [flags]

Examples:

For master node:
keadm reset

For worker node:
keadm reset --k8sserverip 10.20.30.40:8080

Flags:
  -h, --help                 help for reset
  -k, --k8sserverip string   IP:Port address of cloud components host/VM

Simple steps to bring up KubeEdge setup and deploy a pod

NOTE: All the steps below are executed as the root user. To execute as a sudo user, please add sudo in front of all the commands.

1. Deploy KubeEdge CloudCore (With K8s Cluster)

Install tools with a particular version:
keadm init --kubeedge-version=<kubeedge Version>  --kubernetes-version=<kubernetes Version> --docker-version=<Docker version>
Install tools with the default versions:
keadm init --kubeedge-version= --kubernetes-version= --docker-version
or
keadm init

NOTE: On the console output, observe the line below:

kubeadm join 192.168.20.134:6443 --token 2lze16.l06eeqzgdz8sfcvh --discovery-token-ca-cert-hash sha256:1e5c808e1022937474ba264bb54fea42b05eddb9fde2d35c9cad5b83cf5ef9ac

After keadm init, please note the cloud IP from the console output (192.168.20.134 in the line above); the port used later is 8080.

2. Manually copy certs.tgz from cloud host to edge host(s)

On edge host

mkdir -p /etc/kubeedge

On cloud host

cd /etc/kubeedge/
scp -r certs.tgz username@ipEdgevm:/etc/kubeedge

On edge host untar the certs.tgz file

cd /etc/kubeedge
tar -xvzf certs.tgz

3. Deploy KubeEdge edge core

Install tools with a particular version:
keadm join --cloudcoreip=<cloudIP> --edgenodeid=<unique string as edge identifier> --k8sserverip=<cloudIP>:8080 --kubeedge-version=<kubeedge Version> --docker-version=<Docker version>
Install tools with the default versions:
keadm join --cloudcoreip=<cloudIP> --edgenodeid=<unique string as edge identifier> --k8sserverip=<cloudIP>:8080

Sample execution output:

# ./keadm join --cloudcoreip=192.168.20.50 --edgenodeid=testing123 --k8sserverip=192.168.20.50:8080
Same version of docker already installed in this host
Host has mosquit+ already installed and running. Hence skipping the installation steps !!!
Expected or Default KubeEdge version 0.3.0-beta.0 is already downloaded
kubeedge/
kubeedge/edge/
kubeedge/edge/conf/
kubeedge/edge/conf/modules.yaml
kubeedge/edge/conf/logging.yaml
kubeedge/edge/conf/edge.yaml
kubeedge/edge/edgecore
kubeedge/cloud/
kubeedge/cloud/cloudcore
kubeedge/cloud/conf/
kubeedge/cloud/conf/controller.yaml
kubeedge/cloud/conf/modules.yaml
kubeedge/cloud/conf/logging.yaml
kubeedge/version

KubeEdge Edge Node: testing123 successfully add to kube-apiserver, with operation status: 201 Created
Content {"kind":"Node","apiVersion":"v1","metadata":{"name":"testing123","selfLink":"/api/v1/nodes/testing123","uid":"87d8d7a3-7acd-11e9-b86b-286ed488c645","resourceVersion":"3864","creationTimestamp":"2019-05-20T07:04:37Z","labels":{"name":"edge-node"}},"spec":{"taints":[{"key":"node.kubernetes.io/not-ready","effect":"NoSchedule"}]},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"machineID":"","systemUUID":"","bootID":"","kernelVersion":"","osImage":"","containerRuntimeVersion":"","kubeletVersion":"","kubeProxyVersion":"","operatingSystem":"","architecture":""}}}

KubeEdge edge core is running, For logs visit /etc/kubeedge/kubeedge/edge/
#

Note: Cloud IP refers to the IP noted in step 1 from the console output.

4. Edge node status on cloudCore (master node) console

On the cloud host, run:

kubectl get nodes

NAME         STATUS     ROLES    AGE     VERSION
testing123   Ready      <none>   6s      0.3.0-beta.0

Check that the edge node is in the Ready state.

5. Deploy a sample pod from the cloud VM

https://github.com/kubeedge/kubeedge/blob/master/build/deployment.yaml

Copy the deployment.yaml from the link above to the cloud host, then run:

kubectl create -f deployment.yaml
deployment.apps/nginx-deployment created

6. Pod status

Check that the pod is up and in the Running state:

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-d86dfb797-scfzz   1/1     Running   0          44s

Check that the deployment is up and in the running state:

kubectl get deployments

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           63s

Errata

1. If the GPG key for the Docker repo fails to fetch from the key server, please refer to the Docker GPG error fix.

2. After kubeadm init, if you face any errors regarding swap memory and preflight checks, please refer to the Kubernetes preflight error fix.

Cross Compiling KubeEdge

In most cases, when you try to compile the KubeEdge edgecore on a Raspberry Pi or any other device, you may run out of memory. In that case, it is advisable to cross-compile the edgecore binary and transfer it to your edge device.

For ARM Architecture from x86 Architecture

Clone KubeEdge

# Build and run KubeEdge on an ARMv6 target device.

git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
sudo apt-get install gcc-arm-linux-gnueabi
export GOARCH=arm
export GOOS="linux"
export GOARM=6 # Set the appropriate ARM version of your device
export CGO_ENABLED=1
export CC=arm-linux-gnueabi-gcc
make edgecore

If you are compiling the KubeEdge edgecore for a Raspberry Pi, check the Makefile for the edge.

In it, CC has been defined as

export CC=arm-linux-gnueabi-gcc;

However, it is always good to check what the gcc on your Raspberry Pi reports, by running

gcc -v

Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/arm-linux-gnueabihf/6/lto-wrapper
Target: arm-linux-gnueabihf
Configured with: ../src/configure -v --with-pkgversion='Raspbian 6.3.0-18+rpi1+deb9u1' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --program-prefix=arm-linux-gnueabihf- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-armhf/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-armhf --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-armhf --with-arch-directory=arm --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-sjlj-exceptions --with-arch=armv6 --with-fpu=vfp --with-float=hard --enable-checking=release --build=arm-linux-gnueabihf --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf
Thread model: posix
gcc version 6.3.0 20170516 (Raspbian 6.3.0-18+rpi1+deb9u1)

If you see that Target has been defined as

Target: arm-linux-gnueabihf

in that case, export CC as

arm-linux-gnueabihf-gcc rather than arm-linux-gnueabi-gcc

Also, based on the above result, you may have to install

gcc-arm-linux-gnueabi - GNU C cross-compiler for architecture armel

or

gcc-arm-linux-gnueabihf - GNU C cross-compiler for architecture armhf
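
For an armhf target such as a Raspberry Pi running Raspbian, the cross-compile environment might therefore look like the sketch below (set GOARM to match your device):

sudo apt-get install gcc-arm-linux-gnueabihf
export GOARCH=arm
export GOOS="linux"
export GOARM=6
export CGO_ENABLED=1
export CC=arm-linux-gnueabihf-gcc
make edgecore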

Measuring memory footprint of EdgeCore

Why measure the memory footprint

  • This platform is also targeted at lightweight edge computing deployments
  • It should be deployable on devices with limited resources (for example, 256MB RAM)
  • It is required to know how many pods can be deployed while keeping the memory footprint as small as possible

KPIs measured

  • %CPU
  • %Memory
  • Resident Set Size (RSS)

How to test

After deploying and provisioning the KubeEdge cloud and edge components in two VMs (supported and tested on Ubuntu 16.04), start deploying pods from 0 to 100 in steps of 5. Keep capturing the above KPIs after each step using standard Linux ps commands, for example as shown below.
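
One way to capture these KPIs with ps (a sketch; edgecore is the process name of the binary started earlier):

# print %CPU, %MEM and RSS (in KB) of the edgecore process, without headers
ps -C edgecore -o %cpu=,%mem=,rss=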

Test setup

_images/perftestsetup_diagram.PNG

Fig 1: KubeEdge Test Setup

Creating a setup

Requirements
  • The host machine's or VM's resources can mirror the edge device of your choice
  • Resources used for the above setup are 4 CPUs, 8GB RAM and 200GB disk space. The OS is Ubuntu 16.04.
  • The Docker image used to deploy the pods at the edge needs to be created. The steps are:
    1. Go to github.com/kubeedge/kubeedge/edge/hack/memfootprint-test/
    2. Using the Dockerfile available here, create the docker image (perftestimg:v1).
    3. Execute the docker command sudo docker build --tag "perftestimg:v1" . to get the image.
Installation
  • For KubeEdge Cloud and Edge:

    Please follow steps mentioned in KubeEdge README.md

  • For docker image:

  • Deploy a docker registry on the edge or on any VM/host reachable from the edge. Follow the steps mentioned here: https://docs.docker.com/registry/deploying/
  • Create the perftestimg:v1 docker image on the above-mentioned host
  • Then push this image to the docker registry using the docker tag and docker push commands, as in the example below (using the registry URL mentioned above). [Use this image's metadata in the pod deployment yaml]
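
For example, assuming the registry is reachable at myregistry.example.com:5000 (an illustrative address):

sudo docker tag perftestimg:v1 myregistry.example.com:5000/perftestimg:v1
sudo docker push myregistry.example.com:5000/perftestimg:v1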

Steps

  1. Check that the edge node is connected to the cloud. In the cloud console/terminal, execute the command below:
root@ubuntu:~/edge/pod_yamls# kubectl get nodes
NAME                                   STATUS     ROLES    AGE     VERSION
192.168.20.31                          Unknown    <none>   11s
ubuntu                                 NotReady   master   5m22s   v1.14.0
  2. On the cloud, modify the deployment yaml (github.com/kubeedge/kubeedge/edge/hack/memfootprint-test/perftestimg.yaml): set the image name and set spec.replicas to 5
  3. Execute sudo kubectl create -f ./perftestimg.yaml to deploy the first 5 pods on the edge node
  4. Execute sudo kubectl get pods | grep Running | wc to check if all the pods have come to the Running state. Once all pods are running, go to the edge VM
  5. On the edge console, execute ps -aux | grep edgecore. The output will be something like:
USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     102452  1.0  0.5 871704 42784 pts/0    Sl+  17:56   0:00 ./edgecore
root     102779  0.0  0.0  14224   936 pts/2    S+   17:56   0:00 grep --color=auto edge
  6. Collect %CPU, %MEM and RSS from the respective columns and record them
  7. Repeat step 2, this time increasing the replicas by 5
  8. This time execute sudo kubectl apply -f <PATH>/perftestimg.yaml
  9. Repeat steps 4 to 6
  10. Now repeat steps 7 to 9 until the replica count reaches 100

Try KubeEdge with HuaweiCloud (IEF)

Intelligent EdgeFabric (IEF)

Note: HuaweiCloud IEF is currently available only in China.

  1. Create an account in HuaweiCloud.
  2. Go to IEF and create an Edge node.
  3. Download the node configuration file (<node_name>.tar.gz).
  4. Run cd $GOPATH/src/github.com/kubeedge/kubeedge/edge to enter edge directory.
  5. Run bash -x hack/setup_for_IEF.sh /PATH/TO/<node_name>.tar.gz to modify the configuration files in conf/.

MQTT Message Topics

KubeEdge uses MQTT for communication between deviceTwin and devices/apps. EventBus can be started in multiple MQTT modes and acts as an interface for sending/receiving messages on relevant MQTT topics.

The purpose of this document is to describe the topics which KubeEdge uses for communication. Please read the Beehive documentation to understand the message format used by KubeEdge.

Subscribe Topics

On starting EventBus, it subscribes to these 5 topics:

1. "$hw/events/node/+/membership/get"
2. "$hw/events/device/+/state/update"
3. "$hw/events/device/+/twin/+"
4. "$hw/events/upload/#"
5. "SYS/dis/upload_records"

If a message is received on the first 3 topics, it is sent to deviceTwin; otherwise the message is sent to the cloud via edgeHub.

We will focus on the messages expected on the first 3 topics.

  1. "$hw/events/node/+/membership/get": This topics is used to get membership details of a node i.e the devices that are associated with the node. The response of the message is published on "$hw/events/node/+/membership/get/result" topic.
  2. "$hw/events/device/+/state/update”: This topic is used to update the state of the device. + symbol can be replaced with ID of the device whose state is to be updated.
  3. "$hw/events/device/+/twin/+": The two + symbols can be replaced by the deviceID on whose twin the operation is to be performed and any one of(update,cloud_updated,get) respectively.

Following is an explanation of the three suffixes used:

  1. update: this suffix is used to update the twin for the deviceID.
  2. cloud_updated: this suffix is used to sync the twin status between edge and cloud.
  3. get: this suffix is used to get the twin status of a device. The response is published on the "$hw/events/device/+/twin/get/result" topic.
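
As an illustration, with the mosquitto command-line clients and a hypothetical device ID dev01, the twin get flow can be exercised like this (the actual payload must follow the Beehive message format referenced above):

# watch for twin get responses of device "dev01"
mosquitto_sub -t '$hw/events/device/dev01/twin/get/result'
# in another shell, request the twin status
mosquitto_pub -t '$hw/events/device/dev01/twin/get' -m '<message in Beehive format>'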

Unit Test Guide

The purpose of this document is to give introduction about unit tests and to help contributors in writing unit tests.

Unit Test

Read this article for a simple introduction to unit tests and the benefits of unit testing. Go has its own built-in package called testing and a command called go test. For more detailed information on golang's built-in testing package, read this document.

Mocks

The object being tested may have dependencies on other objects. To confine the behavior of the object under test, replace the other objects with mocks that simulate the behavior of the real objects. Read this article for more information on mocks.

GoMock is a mocking framework for the Go programming language. Read the godoc for more information about gomock.

Mocks for an interface can be automatically generated using GoMock's mockgen package.

Note: There is a gomock package in the kubeedge vendor directory without mockgen. Please use the mockgen package of tagged version v1.1.1 of the GoMock github repository to install mockgen and generate mocks. Using a higher version may cause errors/panics during execution of your tests.

Read this article for a short tutorial of usage of gomock and mockgen.

Ginkgo

Ginkgo is one of the most popular frameworks for writing tests in Go.

Read godoc for more information about ginkgo.

See a sample in kubeedge where the Go built-in package testing and gomock are used for writing unit tests.

See a sample in kubeedge where ginkgo is used for testing.

Writing UT using GoMock

Example: metamanager/dao/meta.go

After reading the code of meta.go, we can find that there are 3 interfaces of beego which are used. They are Ormer, QuerySeter and RawSeter.

We need to create fake implementations of these interfaces so that we do not rely on their original implementations and function calls.

Following are the steps for creating fake/mock implementation of Ormer, initializing it and replacing the original with fake.

  1. Create the directory mocks/beego.
  2. Use mockgen to generate a fake implementation of the Ormer interface:
mockgen -destination=mocks/beego/fake_ormer.go -package=beego github.com/astaxie/beego/orm Ormer
  • destination: where you want to create the fake implementation
  • package: package of the created fake implementation file
  • github.com/astaxie/beego/orm: the package where the interface definition lives
  • Ormer: generate mocks for this interface
  3. Initialize the mocks in your test file, e.g. meta_test.go:
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
ormerMock = beego.NewMockOrmer(mockCtrl)
  4. ormerMock is now a fake implementation of the Ormer interface. We can make any function in ormerMock return any value we want.
  5. Replace the real Ormer implementation with the fake one. DBAccess is the variable of type Ormer which we replace with the mock implementation:
dbm.DBAccess = ormerMock
  6. If we want the Insert function of the Ormer interface, which has return types (int64, error), to return (1, nil), it can be done in one line in your test file using gomock:
ormerMock.EXPECT().Insert(gomock.Any()).Return(int64(1), nil).Times(1)

EXPECT(): tells gomock that a function of ormerMock will be called.

Insert(gomock.Any()): expects Insert to be called with any parameter.

Return(int64(1), nil): returns 1 and a nil error.

Times(1): expects Insert to be called exactly once, returning 1 and nil only once.

So whenever Insert is called, it will return 1 and nil, thus removing the dependency on the external implementation.
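
Putting the steps above together, a minimal test sketch could look like the following. The import paths for the generated mocks and for dbm are illustrative; adjust them to where the files live in your tree.

// meta_test.go (a sketch)
package dao

import (
	"testing"

	"github.com/golang/mock/gomock"

	"github.com/kubeedge/kubeedge/edge/mocks/beego"
	"github.com/kubeedge/kubeedge/edge/pkg/common/dbm"
)

func TestInsertMeta(t *testing.T) {
	mockCtrl := gomock.NewController(t)
	defer mockCtrl.Finish()

	ormerMock := beego.NewMockOrmer(mockCtrl)
	// swap the real Ormer for the mock
	dbm.DBAccess = ormerMock

	// Insert must be called exactly once and will return (1, nil)
	ormerMock.EXPECT().Insert(gomock.Any()).Return(int64(1), nil).Times(1)

	// Here we exercise the mock directly to show the stubbed values; a real
	// test would call the dao function under test, which uses
	// dbm.DBAccess.Insert internally.
	id, err := ormerMock.Insert(nil)
	if id != 1 || err != nil {
		t.Errorf("unexpected result: id=%d err=%v", id, err)
	}
}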

Device Management User Guide

KubeEdge supports device management with the help of Kubernetes CRDs and a Device Mapper (explained below) corresponding to the device being used. We currently manage devices from the cloud and synchronize the device updates between edge nodes and cloud, with the help of device controller and device twin modules.

Device Model

A device model describes the device properties exposed by the device and property visitors to access these properties. A device model is like a reusable template using which many devices can be created and managed.

Details on device model definition can be found here.

A sample device model can be found here.

Device Instance

A device instance represents an actual device object. It is like an instantiation of the device model and references properties defined in the model. The device spec is static while the device status contains dynamically changing data like the desired state of a device property and the state reported by the device.

Details on device instance definition can be found here.

A sample device instance can be found here.

Device Mapper

Mapper is an application that is used to connect to and control devices. The following are the responsibilities of the mapper:

  1. Scan and connect to the device.
  2. Report the actual state of twin-attributes of device.
  3. Map the expected state of device-twin to actual state of device-twin.
  4. Collect telemetry data from device.
  5. Convert readings from device to format accepted by KubeEdge.
  6. Schedule actions on the device.
  7. Check health of the device.

A mapper can be specific to a protocol where standards are defined, e.g. Bluetooth, Zigbee, etc., or specific to a device if it uses a custom protocol.

Mapper design details can be found here.

An example of a mapper application created to support the bluetooth protocol can be found here.

Usage of Device CRD

The following are the steps to use the device CRDs:

  1. Create a device model in the cloud node.

            kubectl apply -f <path to device model yaml>
    
  2. Create a device instance in the cloud node.

           kubectl apply -f <path to device instance yaml>
    

    Note: Creating a device instance will also lead to the creation of a config map containing information about the devices required by the mapper applications. The name of the config map is: device-profile-config-<edge node name>. Updates to the config map are handled internally by the device controller.

  3. Run the mapper application corresponding to your protocol.

  4. Edit the status section of the device instance yaml created in step 2 and apply the yaml to change the state of device twin. This change will be reflected at the edge, through the device controller and device twin modules. Based on the updated value of device twin at the edge the mapper will be able to perform its operation on the device.

  5. The reported values of the device twin are updated by the mapper application at the edge, and this data is synced back to the cloud by the device controller. Users can view the updates in the cloud by checking their device instance object.

    Note: Sample device model and device instance for a few protocols can be found at $GOPATH/src/github.com/kubeedge/kubeedge/build/crd-samples/devices 
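
To verify that the config map was generated for your edge node (the node name edge-node below is illustrative):

kubectl get configmap device-profile-config-edge-node -o yaml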

Edgemesh test env config guide

Containerd Support

  • Refer to the Usage guide to prepare the KubeEdge environment.
  • The following steps must be taken to configure the container network if you choose containerd as the container engine.
Note: CNI plugin installation and port mapping configuration are only needed for containerd.

Step 1. Install the CNI plugin

  • Get the CNI plugin source code, version 0.2.0, from github.com, then compile and install it.
Version 0.2.0 is recommended because it has been tested to ensure stability and availability. Make sure to run this in a Go development environment.
# download the cni-plugin-0.2.0
$ wget https://github.com/containernetworking/plugins/archive/v0.2.0.tar.gz
# Extract the tarball
$ tar -zxvf v0.2.0.tar.gz
# Compile the source code
$ cd ./plugins-0.2.0
$ ./build
# install the plugin after './build'
$ mkdir -p /opt/cni/bin
$ cp ./bin/* /opt/cni/bin/
  • Configure the CNI plugin:
$ mkdir -p /etc/cni/net.d/
  • Please make sure docker0 does not exist!
  • The field "bridge" must be "docker0"
  • The field "isGateway" must be true
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
	"cniVersion": "0.2.0",
	"name": "mynet",
	"type": "bridge",
	"bridge": "docker0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
EOF

Step 2. Configure port mapping manually on the node on which the server is running

You can see examples in the next section.
  • Ⅰ. Execute the iptables commands as follows:
$ iptables -t nat -N PORT-MAP
$ iptables -t nat -A PORT-MAP -i docker0 -j RETURN
$ iptables -t nat -A PREROUTING -p tcp -m addrtype --dst-type LOCAL -j PORT-MAP
$ iptables -t nat -A OUTPUT ! -d 127.0.0.0/8 -p tcp -m addrtype --dst-type LOCAL -j PORT-MAP
$ iptables -P FORWARD ACCEPT
  • Ⅱ. Execute the iptables command as follows:

    • portIN is the service port mapped on the host
    • containerIP is the IP of the container; it can be found on the master via kubectl get pod -o wide
    • portOUT is the port the server listens on inside the container
$ iptables -t nat -A PORT-MAP ! -i docker0 -p tcp -m tcp --dport portIN -j DNAT --to-destination containerIP:portOUT
  • If you redeploy the service, you can use the command below to delete the rule, and then perform step Ⅱ again:
 $ iptables -t nat -D PORT-MAP 2
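
For example, the DNAT rule from step Ⅱ with illustrative values, forwarding host port 8080 to a container at 10.22.0.14 listening on port 8080:

$ iptables -t nat -A PORT-MAP ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 10.22.0.14:8080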

Example for Edgemesh test env

_images/edgemesh-test-env-example.png

Edgemesh end to end test guide

Model

_images/model.jpg

  1. a headless service (a service with a selector but with ClusterIP set to None)
  2. one or more pods whose labels match the headless service's selector
  3. so when a client requests a server at <service_name>.<service_namespace>.svc.<cluster>:<port>:
    1. the service's name and namespace are taken from the domain name
    2. the backend pods are queried from metaManager by the service's namespace and name
    3. the load balancer returns a real backend container's hostIP and hostPort

Flow from client to server

_images/endtoend-test-flow.jpg

  1. the client sends a request to the server's domain name
  2. the DNS request is hijacked to edgemesh by iptables, and a fake IP is returned
  3. the request is hijacked to edgemesh by iptables
  4. edgemesh parses the request to get the domain name, protocol, request body and so on
  5. edgemesh load balances:
    1. it gets the service name and namespace from the domain name
    2. it queries the backend pods of the service from metaManager
    3. it chooses a backend based on a strategy
  6. edgemesh transports the request to the server, waits for the server's response, and then responds to the client

How to test end to end

  • create a headless service (no need to specify a port):
apiVersion: v1
kind: Service
metadata:
  name: edgemesh-example-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: whatapp
  • create server deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  labels:
    app: whatapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whatapp
  template:
    metadata:
      labels:
        app: whatapp
    spec:
      nodeSelector:
        name: edge-node
      containers:
      - name: whatapp
        image: docker.io/cloudnativelabs/whats-my-ip:latest
        ports:
        - containerPort: 8080
          hostPort: 8080
  • create the client deployment (please replace the image with your own):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  labels:
    app: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      nodeSelector:
        name: edge-node
      containers:
      - name: client
        image: ${your client image for test}
      initContainers:
      - args:
        - -p
        - "8080"
        - -i
        - "192.168.1.2/24,156.43.2.1/26"
        - -t
        - "12345,5432,8080"
        - -c
        - "9292"
        name: init1
        image: docker.io/kubeedge/edgemesh_init:v1.0.0
        securityContext:
          privileged: true

note: -t is the whitelist; only ports in the whitelist can go out from the client to edgemesh and then to the server

  • client requests the server: exec into the client container and then run the command:
curl http://edgemesh-example-service.default.svc.cluster:8080

You will get a response from the server like: HOSTNAME:server-5c5868b79f-j4td7 IP:10.22.0.14

  • There are two ways to run the curl command to access your service:
  • 1st: use the ctr command to attach to the container, and make sure the curl command is available in the container
$ ctr -n k8s.io c ls
$ ctr -n k8s.io t exec --exec-id 123 <containerID> sh
# if you get the error: curl: (6) Could not resolve host: edgemesh-example-service.default.svc.cluster; Unknown error
# please check that /etc/resolv.conf has a correct config like: nameserver 8.8.8.8
  • 2nd: switch into the pod's network namespace (recommended)
# first get the id; this command returns ids starting with 'cni-'. Make sure the id is related to your pod,
# which you can verify with 'kubectl describe <podName>'
$ ip netns
# then use this id to switch into the network namespace, where you can run curl to access the service
$ ip netns exec <id> bash

FAQs

This page contains a few commonly occurring questions. For further support, please contact us using the support page.