FOCUS: Fog Computing in UAS Software-Defined Mesh Networks

Seçinti G., Trotta A., Mohanti S., Di Felice M., Chowdhury K. R.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, vol.21, no.6, pp.2664-2674, 2020 (Journal Indexed in SCI)

  • Publication Type: Article
  • Volume: 21 Issue: 6
  • Publication Date: 2020
  • DOI Number: 10.1109/TITS.2019.2960305
  • Page Numbers: pp.2664-2674
  • Keywords: Cloud computing, Task analysis, Routing, Computer architecture, Network topology, Edge computing, Unmanned aerial vehicles, Mobile ad hoc networks, Software defined networking, Heuristic algorithms, Coverage, IoT


Unmanned aerial systems (UASs) allow easy deployment, three-dimensional maneuverability, and high reconfigurability, as they can sustain a communication network in the absence of pre-installed infrastructure. The proposed FOg Computing in UAS Software-defined mesh network (FOCUS) paradigm aims to realize an implementable network design that accounts for the practical issues of aerial connectivity and computation. It allocates UASs to the tasks of data forwarding and in-network fog computing while maximizing the number of ground users under UAS coverage. FOCUS improves the utilization of network resources by introducing on-board computation, and it innovates on top of the software-defined networking stack by integrating the capabilities of the network and ground controllers to enable simultaneous orchestration of both UASs and communication flows. The paper makes three main contributions. First, an SDN-based architecture is designed that enables autonomous configuration of computation and communication as well as management of multi-hop aerial links. Second, a global optimization problem for optimal forwarding and computational allocation is formulated using the Open Jackson Network model and solved via a heuristic approach with well-defined complexity. Third, the FOCUS framework is implemented with a full software stack on a small-scale testbed of Intel(R) Aero UASs performing image analysis. Experiments reveal at least a 32% reduction in computation service time compared to traditional centralized computation at the end-server or greedy task allocation schemes within the network.
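As background on the queueing formulation the abstract refers to, the following is a minimal sketch of how mean end-to-end delay is computed in an open Jackson network, where each node (here, a UAS or fog server) is modeled as an M/M/1 queue. The node topology, rates, and routing matrix below are illustrative assumptions, not values from the paper; the paper's actual optimization allocates forwarding and computation on top of such a model.

```python
# Illustrative sketch: mean response time of an open Jackson network.
# Each node is an M/M/1 queue; the traffic equations
#   lam_i = gamma_i + sum_j lam_j * P[j][i]
# are solved by fixed-point iteration, then Little's law gives the
# mean network delay. All numbers below are made up for illustration.

def jackson_mean_delay(gamma, P, mu, iters=1000):
    """gamma[i]: external arrival rate into node i,
       P[i][j]:  routing probability from node i to node j,
       mu[i]:    service rate at node i."""
    n = len(gamma)
    lam = list(gamma)
    # Iterate the traffic equations to a fixed point.
    for _ in range(iters):
        lam = [gamma[i] + sum(lam[j] * P[j][i] for j in range(n))
               for i in range(n)]
    # Mean queue lengths L_i = rho_i / (1 - rho_i), with stability check.
    L = []
    for i in range(n):
        rho = lam[i] / mu[i]
        assert rho < 1, f"node {i} unstable (rho={rho:.2f})"
        L.append(rho / (1 - rho))
    # Little's law: mean delay = total occupancy / total external rate.
    return sum(L) / sum(gamma)

# Hypothetical topology: two relay UASs forwarding all traffic to one
# fog node that serves as the exit of the network.
gamma = [2.0, 1.0, 0.0]      # external packet arrivals (pkts/s)
P = [[0.0, 0.0, 1.0],        # UAS 0 forwards everything to node 2
     [0.0, 0.0, 1.0],        # UAS 1 forwards everything to node 2
     [0.0, 0.0, 0.0]]        # node 2 (fog server) is the exit
mu = [10.0, 10.0, 8.0]       # service rates (pkts/s)
print(f"mean delay: {jackson_mean_delay(gamma, P, mu):.3f} s")
```

Changing the routing matrix P or the allocation of computation (which node terminates a flow) changes the per-node loads and hence the mean delay, which is the quantity an allocation scheme like the one described would seek to minimize.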