The page below contains a public sampling of Alliance work, collected from different stages across the lifetime of the IoBT Alliance.

If you are reading this in conjunction with the RMB review, please refer to the link to the RMB virtual demonstration page provided in the read-ahead material for the latest demo videos.

Video: IoBT Overview

This video provides a quick overview of the IoBT Collaborative Research Alliance, with interviews with performing members of the alliance.


Video: Innovation – TRINITY: Trusted, Resilient, and Interpretable AI Framework

The increasing adoption of artificial intelligence and machine learning in systems, including safety-critical systems, has created a pressing need for scalable techniques that can establish trust in their safe behavior, resilience to adversarial attacks, and interpretability to enable human audits. This tutorial comprises three components: a review of techniques for verification of neural networks, methods for using geometric invariants to defend against adversarial attacks, and techniques for extracting logical symbolic rules by reverse engineering machine learning models. These techniques form the core of TRINITY: the Trusted, Resilient, and Interpretable AI framework.
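The verification component can be illustrated with a minimal interval bound propagation (IBP) sketch: given a box of possible inputs, elementwise lower and upper bounds are propagated through each layer, yielding sound bounds on every output the network can produce. The two-layer network and its weights below are hypothetical, not taken from TRINITY.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer W x + b.

    Splitting W into its positive and negative parts gives tight
    elementwise bounds on the layer output.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so bounds pass through directly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny two-layer network with hypothetical weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

# Bound the outputs over all inputs within +/- 0.1 of a nominal point.
x = np.array([0.5, -0.2, 0.1])
lo, hi = x - 0.1, x + 0.1
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # sound lower/upper bounds on both network outputs
```

If the resulting bounds rule out an unsafe output region, the property is verified for every input in the box; the real techniques reviewed in the tutorial tighten these bounds considerably.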



Video: Innovation – IoBT Unconventional Sensing Modalities

Sensor modality: Sensors measure some form of energy and process it in analytical ways. Modality refers to the raw input used by the sensors, such as sound, pressure, temperature, light, RF, etc. This video demonstrates a technique developed in the IoBT CRA that reconstructs speech using signals from common cell phone motion sensors.



Video: Innovation – Characterizing Properties of Gray Assets

This video illustrates a research project focused on characterizing properties of gray assets. Cameras are ubiquitous public assets in many large cities, and can be important assets for information gathering during urban operations. Such public cameras often have limited metadata describing their attributes. A key missing attribute is the precise location of the camera, with which it is possible to precisely pinpoint the location of events seen in the camera. In this work, we explore the following question: under what conditions is it possible to estimate the location of a camera from a single image taken by the camera? We show that, using a judicious combination of projective geometry, neural networks, and crowd-sourced annotations from human workers, it is possible to position 95% of the images in our test data set to within 12 m. This performance is two orders of magnitude better than PoseNet, a state-of-the-art neural network that, when trained on a large corpus of images in an area, can estimate the pose of a single image. Finally, we show that the camera's inferred position and intrinsic parameters can help design a number of virtual sensors, all of which have less than 10-15% error at high percentiles. (There is no audio in this video.)
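The projective-geometry ingredient can be sketched with a minimal back-projection: once a camera's intrinsics and pose are known, any pixel can be mapped to a world location on the ground plane, which is the basis for virtual sensors of the kind described above. The camera parameters below are hypothetical, not taken from the project.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0.

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation,
    so a world point X maps to the image as K (R X + t).
    """
    # Ray through the pixel, expressed in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t
    # Intersect the ray cam_center + s * ray_world with z = 0.
    s = -cam_center[2] / ray_world[2]
    return cam_center + s * ray_world

# Hypothetical camera 10 m up, looking straight down.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])  # 180 deg about x
t = -R @ np.array([0.0, 0.0, 10.0])  # camera center at (0, 0, 10)
print(pixel_to_ground(320, 240, K, R, t))  # principal ray hits (0, 0, 0)
```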


Video: Innovation – Cross Reality Common Operating Picture

This video presents future concepts for an IoBT-supported Common Operating Picture interface, based on virtual reality.

Video: IoBT MSA Testbed

This video is a short description and demonstration of the IoBT Testbed that is being set up as part of ARL's Multi-purpose Sensing Area (MSA) near White Sands Missile Range.

Video: 360 Action via Pico Clusters

In this video, we show the challenges of, and directions for, finding objects and actions in multi-view 360° video streams processed on pico-clusters, which are becoming strong candidates for edge computing. One challenge is the equirectangular representation of 360° video, which distorts objects and actions. Another is the enormous computation needed to perform object and action detection on lightweight edge devices such as pico-clusters, which are well suited to military settings. Our solution demonstrates action detection in 360° video on pico-clusters with strong accuracy, which can assist in the detection, identification, localization, and tracking of adversarial targets.
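A common remedy for the equirectangular distortion mentioned above is to resample an undistorted rectilinear (perspective) view from the 360° frame before running a detector. The sketch below, with illustrative sizes, computes which equirectangular pixel coordinates such a forward-facing virtual view should sample; it is a standard mapping, not necessarily the Alliance's implementation.

```python
import numpy as np

def rectilinear_rays(fov_deg, size):
    """Unit view rays for a forward-facing virtual perspective camera
    with the given field of view and a size x size image plane."""
    f = (size / 2) / np.tan(np.radians(fov_deg) / 2)
    u, v = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2)
    rays = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def equirect_coords(rays, width, height):
    """Map unit view rays to pixel coordinates in an equirectangular
    frame of the given width and height."""
    x, y, z = rays[..., 0], rays[..., 1], rays[..., 2]
    lon = np.arctan2(x, z)               # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(y, -1, 1))   # latitude in [-pi/2, pi/2]
    px = (lon / (2 * np.pi) + 0.5) * (width - 1)
    py = (lat / np.pi + 0.5) * (height - 1)
    return px, py

# 90-degree virtual view sampled from a 1024x512 equirectangular frame.
rays = rectilinear_rays(90, 64)
px, py = equirect_coords(rays, 1024, 512)
```

Sampling the equirectangular frame at (px, py) (e.g., with bilinear interpolation) yields a locally undistorted view that an off-the-shelf detector can consume.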

Video: Innovation – Boolean Logic For Sensor Placement and Connectivity

This video provides an overview of mission-oriented network synthesis, investigated as part of the IoBT Collaborative Research Alliance, which uses available sensors (both fixed and deployable) and compute elements to achieve mission goals and requirements.


Video: Innovation – Communication-Efficient Optimization & Learning

Large-scale distributed optimization has become increasingly important with the emergence of edge computation architectures such as the federated learning setup, where large amounts of data are massively distributed across tactical devices and systems. A key bottleneck for many such large-scale problems is the communication overhead of exchanging information between devices over bandwidth-limited networks. Existing approaches propose to mitigate these bottlenecks either by using different forms of compression or by iterative mixing of local models. We first propose a novel class of highly communication-efficient operators that employ quantization with explicit sparsification. Furthermore, we incorporate local iterations into our algorithm, which allows the communication to be infrequent and possibly asynchronous, thereby enabling significantly reduced communication.
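A minimal sketch of quantization with explicit sparsification, assuming top-k selection and uniform scalar quantization (the project's actual operators may differ): only the surviving indices, a scale, and a few bits per value need to be communicated, rather than the full gradient.

```python
import numpy as np

def sparsify_quantize(g, k, levels=16):
    """Keep the k largest-magnitude entries of gradient g, then
    uniformly quantize them to a small number of levels.

    Returns (indices, quantized values, scale): the only data that
    needs to be sent over the network.
    """
    idx = np.argsort(np.abs(g))[-k:]            # explicit sparsification
    vals = g[idx]
    scale = np.max(np.abs(vals)) / (levels - 1)
    q = np.round(vals / scale).astype(np.int8)  # few bits per entry
    return idx, q, scale

def reconstruct(idx, q, scale, dim):
    """Rebuild a dense (sparse-valued) gradient on the receiver."""
    g_hat = np.zeros(dim)
    g_hat[idx] = q.astype(np.float64) * scale
    return g_hat

rng = np.random.default_rng(1)
g = rng.normal(size=1000)
idx, q, scale = sparsify_quantize(g, k=50)
g_hat = reconstruct(idx, q, scale, g.size)
# 50 indices plus 50 small ints instead of 1000 full-precision floats.
```

In practice such operators are paired with error feedback (accumulating what was dropped into the next round) and, as the paragraph notes, with infrequent local iterations.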

This presentation highlights how the IoBT CRA also connects industry colleagues and researchers (e.g., from Amazon and Adobe) who are not part of the Alliance to work in collaboration on critical Army problems.


For additional details on the underlying research advancement, you can also view the poster: Communication-Efficient Optimization & Learning

Video: Innovation – Resilient Localization Service

In this experiment, a network of ultra-wideband radios provides localization services for a drone via time-of-flight calculations. One beacon is controlled by an adversary and provides wrong information so as to compromise localization. The video illustrates how an algorithm developed in this project detects the malicious node and still provides the drone with its correct location.
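One simple way to realize this kind of resilience, sketched below under the assumption of a leave-one-out consistency check (not necessarily the project's algorithm), is to localize using subsets of beacons and flag the beacon whose exclusion makes the remaining range measurements mutually consistent. All positions and ranges here are hypothetical.

```python
import numpy as np
from itertools import combinations

def solve_position(beacons, ranges):
    """Least-squares 2-D position from range measurements,
    linearized against the first beacon."""
    b0, r0 = beacons[0], ranges[0]
    A = 2 * (beacons[1:] - b0)
    rhs = (r0**2 - ranges[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

def residual(beacons, ranges, pos):
    return np.max(np.abs(np.linalg.norm(beacons - pos, axis=1) - ranges))

def robust_localize(beacons, ranges, tol=0.5):
    """Try every subset that excludes one beacon; accept the first
    subset whose ranges are mutually consistent and flag the outcast."""
    n = len(beacons)
    for subset in combinations(range(n), n - 1):
        idx = list(subset)
        pos = solve_position(beacons[idx], ranges[idx])
        if residual(beacons[idx], ranges[idx], pos) < tol:
            bad = (set(range(n)) - set(subset)).pop()
            return pos, bad
    return solve_position(beacons, ranges), None

beacons = np.array([[0.0, 0], [10, 0], [0, 10], [10, 10], [5, 12]])
true_pos = np.array([4.0, 3.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
ranges[2] += 4.0          # adversarial beacon injects a bogus range
pos, bad = robust_localize(beacons, ranges)
print(pos, bad)           # recovers ~(4, 3) and flags beacon 2
```

With noisy measurements, the tolerance would be set from the expected ranging error, and multiple simultaneous adversaries require larger excluded subsets.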


Video: Innovation – Distributed Prediction for DNN

This video presents ongoing work under IoBT Task 3.2 on methods for optimally partitioning deep neural network models for execution over heterogeneous IoBT networks as well as a demonstration platform implementing these methods on low-power IoT hardware.

Our approach partitions the network into a prefix, executed on an edge device where data are collected, and a suffix, executed on a server. We optimize the partitioning to trade off latency, accuracy, and energy use while respecting the storage, computation and communication bandwidth constraints of the edge device.
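The prefix/suffix trade-off described above can be sketched as a search over cut points: run the prefix on the edge, ship the activation at the cut over the link, and run the suffix on the server, subject to an edge compute budget. The per-layer profiles, data sizes, and budgets below are hypothetical, not measurements from the project.

```python
# Hypothetical per-layer profile: compute cost on the edge device and
# on the server (ms), and the size of each layer's output (KB).
layers = [
    {"edge_ms": 40, "server_ms": 4, "out_kb": 150},
    {"edge_ms": 60, "server_ms": 5, "out_kb": 80},
    {"edge_ms": 80, "server_ms": 6, "out_kb": 20},
    {"edge_ms": 90, "server_ms": 7, "out_kb": 4},
]
INPUT_KB = 300          # raw sensor data size
LINK_KB_PER_MS = 2.0    # available uplink bandwidth
EDGE_BUDGET_MS = 150    # compute budget on the edge device

def partition_latency(cut):
    """Latency when layers [0, cut) run on the edge (the prefix) and
    layers [cut, n) run on the server (the suffix)."""
    edge = sum(l["edge_ms"] for l in layers[:cut])
    server = sum(l["server_ms"] for l in layers[cut:])
    sent_kb = INPUT_KB if cut == 0 else layers[cut - 1]["out_kb"]
    return edge + sent_kb / LINK_KB_PER_MS + server, edge

best = None
for cut in range(len(layers) + 1):
    total, edge = partition_latency(cut)
    if edge <= EDGE_BUDGET_MS:    # respect the edge compute budget
        if best is None or total < best[1]:
            best = (cut, total)
print(best)  # (best cut point, latency in ms)
```

A real optimizer would also score accuracy and energy at each candidate cut, as the paragraph describes, but the structure of the search is the same.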

Our demonstration system implements our partitioning approach using an IoT node that includes a low-power gray-scale QVGA imager and a commercially available low-power neural network accelerator platform called GAP8.

We train the demonstration system to perform a 20-class object detection task and achieve a throughput of 1 FPS while consuming 70 mA on the IoT node.


Video: Innovation – Risk-Aware Adaptive Distributed Computation Placement on the Tactical Edge

This video provides details on an approach that addresses how to incorporate future knowledge of battlefield risks and dynamics in adaptive placement of computations on the IoBT tactical edge.


If you have difficulties viewing the video, you can view the presented information directly by clicking here while listening: Risk-aware Adaptive Distributed Computation Placement on the Tactical Edge

Video: Innovation – One-Sided Switching Games

The dynamics of adversarial activity in the modern battlefield will demand techniques that can quickly adapt and adjust given environmental and/or enemy dynamics. This is particularly important given that the MDO (IoBT) battlefield will have ubiquitous intelligent devices, all operating on behalf of their local and organizational objectives and missions.


If you have difficulties viewing the video, you can view the presented information directly by clicking here while listening: One-Sided Switching Games

Video: Innovation – High Confidence Generalization for Reinforcement Learning

In reinforcement learning, a decision-support system generates training data by interacting with, or sensing, the world. The system must learn the consequences of its actions through trial and error, rather than being explicitly told the correct action. The ability to generalize with confidence in this context is essential to applications of reinforcement learning in dynamic military mission contexts, particularly given the pervasive and heterogeneous nature of sensing in the IoBT.


If you have difficulties viewing the video, you can view the presented information directly by clicking here while listening: High Confidence Generalization for Reinforcement Learning


Video: Activity Detection with Unconventional Modalities

This video is a poster presentation on detecting activity using non-traditional sensor modalities. It is intuitive how sensors designed with the specific purpose of detecting physical activity, e.g., motion sensors, convey their signals into the cyber/virtual realm. However, in the IoBT, where vast numbers of sensors will organically exist, exploiting available sensors to provide information for which they may not have been designed will be critical. Moreover, under dynamic mission conditions this capability will be essential.


Video: Innovation – Information-Theoretic Insights for Storage-Limited Learning

This video provides a poster presentation from the IoBT Collaborative Research Alliance discussing information-related challenges and related research in addressing the complexities of machine learning in distributed tactical environments, given battlefield constraints.


If you have difficulties viewing the video, you can view the presented information directly by clicking here while listening: Information-Theoretic Insights for Storage-Limited Learning