2019

ID: D1

Title: Advanced network pruning and quantization

Abstract: Pruning and quantization are two basic methodologies to reduce computation time in
both training and inference stages of Deep Neural Networks (DNNs). However, two challenging
problems remain unsolved: (1) Pruned and quantized DNNs have irregular sparse weights and
pseudo-quantized weights. Irregular sparse computation incurs indexing overhead and poor data
locality, and pseudo-quantized weights must be converted back to floating-point precision for
computation. These facts limit the speed gains obtainable from DNN pruning and quantization;
(2) State-of-the-art pruning and quantization methods only compress weights but ignore
activations, which account for the majority of the data involved in DNN computation. To tackle
these problems, we will (1) leverage hardware and system expertise to discover the most
computation-efficient sparsity and quantization patterns and guide pruning and quantization to
learn those patterns, so as to exploit the power of both hardware and algorithms; and
(2) explore pruning and quantization of both weights and activations, so as to convert
computation between sparse weights and dense activations into computation between sparse
weights and sparse activations, and computation between quantized weights and floating-point
activations into computation between quantized weights and quantized activations.
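
For illustration only, the minimal sketch below (not the proposed method) contrasts
pseudo-quantized inference, where quantized weights are converted back to floating point before
the multiply, with fully quantized inference, where both weights and activations stay in
integers. The 8-bit symmetric quantizer, shapes, and random data are illustrative assumptions.

    # Minimal illustration (not the proposed method): pseudo-quantized vs. fully
    # quantized matrix-vector products. Scales and shapes are assumptions.
    import numpy as np

    def quantize(x, n_bits=8):
        """Symmetric uniform quantization: returns integer values and a scale."""
        scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1) + 1e-12
        return np.round(x / scale).astype(np.int32), scale

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 128)).astype(np.float32)   # weights
    a = rng.standard_normal(128).astype(np.float32)         # activations

    Wq, w_scale = quantize(W)
    aq, a_scale = quantize(a)

    # Pseudo-quantized: weights are converted back to float before the multiply,
    # so the arithmetic is still floating point.
    y_pseudo = (Wq * w_scale) @ a

    # Fully quantized: integer multiply-accumulate, one rescale at the end,
    # which is what integer hardware can exploit for speed.
    y_int = (Wq @ aq).astype(np.float32) * (w_scale * a_scale)

    print(np.max(np.abs(y_pseudo - y_int)))   # small quantization-induced gap
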
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Hai Li).

ID: D2

Title: Fine-grained quantization for deep neural networks and its interpretation

Abstract: Many recent works have proven the effectiveness of deep neural network (DNN)
compression in reducing computational cost. However, most of the existing methods and tools
such as TensorFlow Lite simply quantize the whole DNN by assigning the same bit-width to all
the layers. Our previous work accepted in the NIPS 2018 workshop on Compact Deep Neural
Networks with industrial applications demonstrated a systematic approach to optimize the
bit-width for each layer individually. This approach can be further expanded by taking into
account other design factors such as hardware constraints. One open question raised by the
NIPS workshop paper is how to accurately and effectively interpret the sensitivity of the model
accuracy to the quantization of each layer of the DNN. To answer this question, we plan to
develop an automated approach that can read in the model, automatically interpret the
quantization sensitivity of each layer, block, and cell, and then compress the neural network
at different granularities with minimal or no accuracy loss.
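
As a rough illustration of the underlying idea, the sketch below probes per-layer quantization
sensitivity by quantizing one layer at a time and recording the accuracy drop. The `model`
dictionary, the `evaluate` callback, the quantizer, and the bit-width are hypothetical
placeholders, not the proposed automated interpretation tool.

    # Hypothetical per-layer sensitivity scan: quantize one layer at a time and
    # record the accuracy drop relative to the unquantized baseline.
    import copy
    import numpy as np

    def quantize_tensor(w, n_bits):
        """Symmetric uniform quantization of a weight tensor to n_bits."""
        scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1) + 1e-12
        return np.round(w / scale) * scale

    def sensitivity_scan(model, evaluate, n_bits=4):
        """model: dict {layer_name: weight array}; evaluate: accuracy callback."""
        baseline = evaluate(model)
        report = {}
        for name in model:
            probe = copy.deepcopy(model)
            probe[name] = quantize_tensor(probe[name], n_bits)
            report[name] = baseline - evaluate(probe)   # accuracy drop
        # Layers with a small drop tolerate aggressive (low bit-width) quantization.
        return dict(sorted(report.items(), key=lambda kv: kv[1]))
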
Deliverable: 1) Submission to ML or CV conferences or journals; 2) Code and benchmarks used
in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Yiran Chen).

ID: D3

Topic: Versatile, privacy-preserving, and efficient cloud-edge image rendering

Abstract: Traditional mobile-based image rendering systems find it increasingly difficult to
satisfy users’ varied requirements due to the limited versatility of ad-hoc rendering algorithms
and the limited availability of image resources at the edge. Additional image resources can be
collected from other edge devices; doing so, however, raises severe concerns about information
privacy and rendering efficiency in cloud-edge environments. In our project, we propose a
versatile image rendering method based on image hashtag similarity and unsupervised image
decomposition. Two goals are expected to be achieved. The first is to maximally capture hidden
rendering styles in the training data, instead of the single pre-defined style used in
traditional methods. In our method, the hidden rendering styles are automatically discovered by
performing image translation between similar images, where similarity is measured by the
hashtag distance between the images. The second goal is to offer high image rendering
efficiency and to preserve users’ privacy. In our method, an image to be rendered is decomposed
into two components (to enhance privacy), which are sent separately from the edge device to the
cloud, and only one component is processed in the cloud (to improve efficiency) for rendering.
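
As a toy illustration of a two-component decomposition, the sketch below splits an image into a
coarse base layer plus a residual detail layer; the block size and the particular split are
assumptions, and the decomposition used in the actual system may differ.

    # Toy two-component image decomposition (base + detail). The real system's
    # decomposition may differ; this only illustrates that the edge device can
    # send the two parts separately and the cloud can process only one of them.
    import numpy as np

    def decompose(image, k=8):
        """Split an HxW image into a coarse base (block means) and a residual."""
        h, w = image.shape
        base = image.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
        base_up = np.kron(base, np.ones((k, k)))      # upsample back to HxW
        detail = image - base_up                      # residual component
        return base_up, detail

    img = np.random.default_rng(0).random((64, 64))
    base, detail = decompose(img)
    assert np.allclose(base + detail, img)            # lossless recombination
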
Deliverable: 1) A submission to a premium mobile computing conference or journal; 2) Code and
benchmarks used in the experiment.

 

Proposed budget: $100K.

Participated site: Duke (PI: Yiran Chen).

ID: D4

Title: Reinforcement learning for datacenter scheduling

Abstract: Datacenters require sophisticated management frameworks for resource allocation
and job scheduling. Most existing mechanisms and policies are statically optimized and defined,
hindering their ability to respond to system dynamics---users may arrive or depart, jobs may
transition from one computational phase to another. We propose methods in reinforcement
learning to model system conditions and optimize the allocation of server resources. We will
show how reinforcement learning can learn effective policies as the datacenter operates and
adapt quickly to new system conditions and workloads.
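
To make the idea concrete, here is a minimal tabular Q-learning sketch for a toy allocation
decision. The state (a load level), the actions (how many cores to grant), and the reward are
hypothetical stand-ins for the datacenter signals described above, not the proposed method.

    # Minimal tabular Q-learning sketch for a toy resource-allocation decision.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 3          # load levels x core-allocation choices
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def env_step(state, action):
        """Toy environment: reward favors matching allocation to load."""
        reward = -abs(state - action)           # mismatch penalty
        next_state = rng.integers(n_states)     # load changes randomly
        return next_state, reward

    state = 0
    for _ in range(5000):
        if rng.random() < eps:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = env_step(state, action)
        # Standard Q-learning update.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                     - Q[state, action])
        state = next_state

    print(np.argmax(Q, axis=1))   # learned allocation per load level
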
Deliverable: (1) A manuscript submitted to a premier computer systems or architecture venue.
(2) Code and benchmarks used in the experiments.

Proposed budget: $50K.

Participated site: Duke (PI: Benjamin Lee).

ID: D5

Title: Hybrid parallelism and communication methods for decentralized DNN training

Abstract: Decentralized methods were recently proposed to boost the performance of distributed
DNN training. However, the spontaneous communication between workers demands high bandwidth
and thus significantly slows down computing performance. We propose hybrid parallelism, which
combines both data and model parallelism, to reduce the intrinsic communication for partial-sum
accumulation and weight updating. We also propose to develop a hybrid communication scheme in
which communication within a worker group is synchronous and communication between worker
groups is asynchronous, further reducing communication while still maintaining training
accuracy.
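
The schematic sketch below shows only the communication pattern: gradients are averaged
synchronously inside each worker group, while groups exchange parameters only periodically and
mix in each other's stale copies instead of waiting for a global barrier. The toy gradients,
group sizes, and mixing rule are assumptions, not the proposed scheme.

    # Schematic hybrid sync/async communication pattern (toy simulation).
    import numpy as np

    rng = np.random.default_rng(0)
    n_groups, workers_per_group, dim = 2, 4, 8
    params = [np.zeros(dim) for _ in range(n_groups)]   # one replica per group
    stale = [p.copy() for p in params]                  # last exchanged copies
    lr = 0.1

    for step in range(100):
        for g in range(n_groups):
            # Synchronous part: all-reduce (mean) of the group's worker gradients.
            grads = rng.standard_normal((workers_per_group, dim)) + params[g]
            params[g] -= lr * grads.mean(axis=0)
        # Asynchronous part: every few steps, each group mixes in the *stale*
        # parameters of the other group instead of waiting at a global barrier.
        if step % 5 == 0:
            new_stale = [p.copy() for p in params]
            for g in range(n_groups):
                params[g] = 0.5 * (params[g] + stale[1 - g])
            stale = new_stale
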
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Hai Li).

ID: D6

Title: Joint optimization of deep neural network acceleration in speech recognition applications

Abstract: DNN models, especially CTC+LSTM, have been successfully used in speech recognition
and show great performance. To enable better deployment on mobile devices, quantization,
sparsification, and pruning methods are often used to accelerate the execution of speech
recognition models. Pruning is often used to reduce the number of nodes in the CNN part, while
sparsification is often applied to the LSTM part. However, these acceleration methods optimize
only the CNN or the LSTM part separately. Moreover, speeding up one part of the model may
affect the accuracy of the entire speech recognition model. We propose to optimize the CNN and
LSTM parts jointly in order to further improve the overall performance. By finding a way to
balance the optimization of the LSTM and CNN, we can achieve high accuracy while reducing the
execution time as much as possible.
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Hai Li).

ID: D7

Title: Robust DNN model compression for noisy scenarios

Abstract: Traditional DNN model compression methods reduce the number of weights and neurons
via quantization, sparsification, pruning, and compact network design. These methods,
however, usually do not consider noisy application scenarios, especially when multiple noise
sources exist. There are at least two major challenges in improving the resiliency of model
compression methods to noise: 1) the noise distribution is complex and often unknown in
real-world applications; 2) quantitatively analyzing the impact of the noise on the accuracy of
the compressed model is difficult. We propose to track and interpret the changes in the
patterns of DNN decision boundaries induced by model compression under various noisy scenarios.
The regularization applied during model compression will then be adaptively tuned, according to
the captured boundary-change patterns, to enhance the robustness of the DNN under noisy
scenarios.
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Hai Li).

ID: D8

Title: A secure and privacy preserving cloud-based online learning system

Abstract: Traditional cloud-based learning systems require all users to upload their data to an
online server before model learning can start, which demands huge data storage space and a
large amount of data communication from each user, and poses a potential threat of leaking
private user data. The recently proposed federated learning provides a way to train the model
locally with each user's data and then communicate the model parameters to reach a combined
model. Such methods eliminate the threat of privacy leakage, but their performance may be
limited by the data volume and computation resources of each individual user. Here we propose
to divide our model into a public part plus a local part. The local part can be efficiently
deployed on the edge devices held by the users, where it preprocesses the user data to shrink
its size and eliminate privacy concerns while preserving information useful for the learning
task. This information will then be sent to the cloud for training/inference with the public
model. As in federated learning, parameters and gradients of the public and local models will
be exchanged to collaboratively reach a high-performance model.
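
As a conceptual sketch of the public/local split, the toy code below uses linear stand-ins: a
per-user local encoder compresses raw data into a small representation, and only that
representation (plus public-model gradients) leaves the device. The shapes, the linear models,
and the label rule are assumptions for illustration only.

    # Conceptual sketch of the public/local model split (toy linear models).
    import numpy as np

    rng = np.random.default_rng(0)
    raw_dim, code_dim, n_users = 100, 10, 3

    local_encoders = [rng.standard_normal((code_dim, raw_dim)) * 0.1
                      for _ in range(n_users)]        # stays on each device
    public_w = np.zeros(code_dim)                     # shared cloud-side model
    lr = 0.01

    for rnd in range(50):
        for u in range(n_users):
            x = rng.standard_normal((32, raw_dim))    # private raw data
            y = (x.sum(axis=1) > 0).astype(float)     # private labels
            z = x @ local_encoders[u].T               # compact representation
            # Only z and the public-model gradient leave the device.
            pred = z @ public_w
            grad = z.T @ (pred - y) / len(y)
            public_w -= lr * grad
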
Deliverable: 1) One or more submissions to premium ML conferences or journals; 2) A
cloud-based online learning and data storage system implemented on popular cloud computing
platforms (e.g., AWS).

Proposed budget: $50K.

Participated site: Duke (PI: Hai Li).

ID: D9

Title: Customized machine learning estimators on timing and IR drop

Abstract: EDA (Electronic Design Automation) technology has made remarkable progress over the
decades, from attaining merely functionally correct designs of thousand-transistor circuits to
handling multi-million-gate circuits. However, there are two significant challenges in current
EDA methods: 1) traditional tools are largely restricted to their own design stage, which
forces designers to make pessimistic estimations at early stages. Such a policy can lead to a
very long turn-around time to reach the desired QoR (quality of results); 2) many tools rely
heavily on manual tuning, which imposes a stringent requirement on VLSI designers’ experience.
Moreover, major design problems such as IR drop, DRC violations, and negative timing slack
become increasingly critical as technology scales. We rethink these EDA challenges from the
perspective of machine learning (ML): ML methods will be customized to make fast, high-fidelity
predictions for different design goals, including power, timing, and DRC violations. Two
promising research directions are: 1) for early timing analysis before placement, we propose a
graph convolution model that can effectively learn from the gate connection topology; 2) for IR
drop and power analysis, we plan to incorporate timing and spatial information into the
features and customize a CNN model with a maximum structure, which captures the moment with the
most serious power hotspot.
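
For orientation, the sketch below shows a single graph-convolution layer of the standard GCN
form applied to a gate-connection graph; the adjacency matrix, gate features, and weights are
random placeholders rather than a real netlist or the proposed model.

    # Minimal single graph-convolution layer over a gate-connection graph.
    import numpy as np

    def gcn_layer(A, X, W):
        """A: NxN adjacency, X: NxF gate features, W: FxH weights."""
        A_hat = A + np.eye(A.shape[0])               # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt     # symmetric normalization
        return np.maximum(A_norm @ X @ W, 0.0)       # propagate + ReLU

    rng = np.random.default_rng(0)
    n_gates, n_feat, n_hidden = 6, 4, 8
    A = (rng.random((n_gates, n_gates)) < 0.3).astype(float)
    A = np.maximum(A, A.T)                           # undirected connectivity
    X = rng.standard_normal((n_gates, n_feat))       # e.g., gate type, fan-out
    W = rng.standard_normal((n_feat, n_hidden)) * 0.1
    H = gcn_layer(A, X, W)                           # per-gate embeddings
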
Deliverable: 1) A submission to a premium EDA conference or journal; 2) Code used in the
experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Yiran Chen).

ID: D10

Title: Processing-in-memory architecture supporting GAN executions

Abstract: The processing-in-memory (PIM) technique has recently been extensively explored in
the design of DNN accelerators. However, we found that existing solutions are unable to
efficiently support the computational needs of unsupervised Generative Adversarial Network
(GAN) training due to the lack of the following two features: 1) Computation efficiency: GAN
uses a new operator, called transposed convolution, which introduces significant resource
underutilization because it inserts massive numbers of zeros into its input before a
convolution operation; 2) Data traffic: The data-intensive training process of GANs often
incurs structurally heavy data traffic as well as frequent, massive data swaps. We propose a
novel computation deformation technique that can skip zero insertions in transposed convolution
to improve computation efficiency. Moreover, we will explore an efficient training procedure to
reduce on-chip memory accesses, design a flexible dataflow to achieve high data reuse, and
implement dedicated circuits to support the proposed GAN architecture with minimal area and
energy cost.
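
The 1-D illustration below shows why naive transposed convolution wastes work: the input is
upsampled by inserting zeros, so a large fraction of multiply-accumulates hit zero operands that
a zero-skipping (computation deformation) scheme could avoid. The input, kernel, and stride are
toy values, and the proposed technique itself is not shown.

    # 1-D illustration of zero insertion in transposed (de)convolution.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])          # input feature map (1-D)
    k = np.array([0.5, 1.0, 0.5])               # kernel
    stride = 2

    # Naive transposed convolution: insert (stride - 1) zeros between input
    # elements, pad, then run an ordinary convolution.
    upsampled = np.zeros(len(x) * stride - 1)
    upsampled[::stride] = x
    padded = np.pad(upsampled, len(k) - 1)

    outputs, total_macs, zero_macs = [], 0, 0
    for i in range(len(padded) - len(k) + 1):
        window = padded[i:i + len(k)]
        outputs.append(float(window @ k[::-1]))
        total_macs += len(k)
        zero_macs += int(np.count_nonzero(window == 0))   # operand is zero

    print(np.array(outputs))
    print("fraction of MACs hitting inserted zeros or padding:",
          round(zero_macs / total_macs, 2))
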
Deliverable: 1) A submission to a premium Computer Architecture conference or journal; 2) Code
and benchmarks used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Yiran Chen).

ID: D11

Title: Efficient attention-based model designed for mobile devices

Abstract: Recently, attention-based DNN models have achieved state-of-the-art accuracy in
modern Neural Machine Translation (NMT) systems. In these NMT systems, however, the hidden
state of the current target word must be compared with the hidden states of all words in the
source sentence. The resulting large-scale vector-matrix multiplications introduce large memory
consumption and high computation cost, preventing these models from being deployed on
resource-constrained mobile devices. To accelerate vector-matrix multiplication, techniques
such as random weight pruning can be used to obtain sparse weights and thus decrease the total
FLOPs. However, random sparsity rarely leads to practical speedups on general computation units
because of the poor data locality associated with the scattered weight distribution. In this
project, we propose to explore structured sparsity in attention-based NMT models, which prunes
whole rows or columns of the weight matrix to reduce computation cost. We will realize a
prototype of the proposed technique on mobile platforms.
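
The small sketch below illustrates structured pruning by row/column L2 norms: removing whole
rows and columns leaves a genuinely smaller dense matrix that a plain GEMM can exploit, unlike
scattered random sparsity. The keep ratio, matrix size, and norm criterion are illustrative
assumptions only.

    # Structured (row/column) pruning of a weight matrix by L2 norm.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 16))

    def prune_structured(W, keep_ratio=0.5):
        row_norms = np.linalg.norm(W, axis=1)
        col_norms = np.linalg.norm(W, axis=0)
        keep_rows = np.argsort(row_norms)[-int(W.shape[0] * keep_ratio):]
        keep_cols = np.argsort(col_norms)[-int(W.shape[1] * keep_ratio):]
        # The pruned layer is a smaller dense matrix, so a plain GEMM obtains
        # the speedup without any sparse indexing overhead.
        return W[np.sort(keep_rows)][:, np.sort(keep_cols)], keep_rows, keep_cols

    W_small, rows, cols = prune_structured(W)
    print(W.shape, "->", W_small.shape)
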
Deliverable: 1) A submission to a premium ML/NLP conference or journal; 2) Code and
benchmarks used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Yiran Chen).

ID: D12

Title: BW-SVD-DNNs: A block-wise SVD-based method to balance processing pipeline in DNN
accelerators and minimize the inference accuracy loss

Abstract: Deep Neural Networks (DNNs) have achieved phenomenal success in many real-world
applications. However, as the size of DNNs continues to grow, it is difficult to improve energy
efficiency and performance while still maintaining good accuracy. Many techniques, e.g., model
compression and data reuse, have been proposed to reduce the computational cost of DNN
executions and to efficiently deploy large-scale DNNs on various hardware platforms.
Nonetheless, most existing DNN models lack efficient hardware acceleration solutions. One major
problem is that the transmission time of the data (i.e., weights and inputs/features) is O(n)
while the computation time of the data is O(n^2) for a network layer. Balancing the data
transmission time and the computation time is crucial to avoid long system idle times waiting
for incoming data or requiring large storage to buffer the data. To solve this problem, we
propose BW-SVD-DNNs, a Block-Wise Singular Value Decomposition (BW-SVD) method to balance the
processing pipeline in DNN accelerators. An optimal trade-off between the transmission time and
the computation time can be achieved with minimized inference accuracy loss. In particular, we
plan to 1) use the BW-SVD technique to decompose large pieces of data in the DNN models to
balance the computation time and the transmission time; 2) design an efficient acceleration
engine and data control unit according to the characteristics of the data processed by BW-SVD;
and 3) use Group Lasso-based retraining to minimize the impact of the BW-SVD data decomposition
on inference accuracy.
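
The sketch below shows the basic block-wise SVD idea: each block of a weight matrix is replaced
by a rank-r factorization, reducing the data that must be transmitted per block. The block
size, rank, and random matrix are illustrative assumptions; the accelerator-side pipeline
balancing and Group Lasso retraining are not shown.

    # Block-wise low-rank SVD approximation of a weight matrix.
    import numpy as np

    def blockwise_svd(W, block=32, rank=4):
        factors = []
        for i in range(0, W.shape[0], block):
            for j in range(0, W.shape[1], block):
                B = W[i:i + block, j:j + block]
                U, s, Vt = np.linalg.svd(B, full_matrices=False)
                # Keep only the top-`rank` singular triplets for this block.
                factors.append((i, j, U[:, :rank] * s[:rank], Vt[:rank]))
        return factors

    def reconstruct(factors, shape):
        W_hat = np.zeros(shape)
        for i, j, Us, Vt in factors:
            W_hat[i:i + Us.shape[0], j:j + Vt.shape[1]] = Us @ Vt
        return W_hat

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 128))
    f = blockwise_svd(W)
    err = np.linalg.norm(W - reconstruct(f, W.shape)) / np.linalg.norm(W)
    print("relative reconstruction error:", round(err, 3))
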
Deliverable: 1) A submission to a premium ML/Architecture/FPGA conference or journal; 2) Code
and benchmarks used in the experiment.

Proposed budget: $50K.

Participated site: Duke (PI: Yiran Chen).

ID: D13

Title: Systolic processing unit based reconfigurable accelerator for both CNN and LSTM

Abstract: Operations of LSTM usually include vector-matrix multiplications in fully-connected
(fc) layers and element-wise forget/addition gate operations. Although accelerator designs for
convolutional (conv) layers have been extensively studied, there exist two major differences
between the computations of fc layers and conv layers: 1) the ratios between the numbers of
weights and activations in fc and conv layers are very different; and 2) there is no data reuse
of weights and activations in fc layers when the batch size is 1, which is the typical case in
real-time applications. In this project, we propose a reconfigurable accelerator design based on
a systolic processing unit to efficiently support both CNN and LSTM. In particular, the systolic
processing array can flexibly switch between conv mode and fc mode. A programmable fused
on-chip buffer is introduced to adapt to the different ratios between weights and activations in
conv and fc layers. We will also explore the proper quantization and mixed-precision techniques
for the target applications.
Deliverable: 1) A submission to a premium conference or journal on solid-state circuits; 2) a
processing unit design IP for CNN and LSTM.

Proposed budget: $50K.

Participated site: Duke (PI: Hai Li).

ID: D14

Title: Machine learning for datacenter performance analysis

Abstract: Datacenters deliver performance at scale but suffer from performance anomalies and
stragglers, atypically slow tasks that degrade job completion times. Although varied heuristics
and mechanisms have been proposed to mitigate stragglers, they rarely diagnose their root
causes. We propose methods in causal inference to diagnose performance anomalies at
datacenter scale. We will develop these methods for offline diagnosis as well as online
detection and mitigation.
Deliverable: (1) A manuscript submitted to a premier computer systems or architecture venue.
(2) Code and benchmarks used in the experiments.

Proposed budget: $50K.

Participated Site: Duke (PI: Benjamin Lee).

ID: D15

Title: Edge computing - aided Intelligent multi-user augmented reality

Abstract: Modern augmented reality (AR) applications, while already impressive, have multiple
limitations, including excessive energy consumption, restricted multi-user capabilities, and
limited adaptiveness and intelligence. Edge computing, the use of local computing resources to
bring advanced computing capabilities closer to the end users, has the potential to address all
these limitations. In this project, building on our ongoing experiments with Google ARCore,
Microsoft HoloLens, and Magic Leap One AR systems, we will develop techniques for aiding
mobile multi-user AR experiences via persistent edge computing-based applications and
persistent edge-integrated sensors.
Deliverable: 1) A submission to a premium mobile systems conference; 2) An interactive
demonstration at a premier mobile systems conference; 3) Code and data for edge-aided
augmented reality applications.

Proposed budget: $50K.

Participated site: Duke (PI: Maria Gorlatova).

 

ID: N1

Title: Representation power of quantized neural networks: from the complexity bound to the
insight

Abstract: DNN compression is of great importance for implementing DNNs on
resource-constrained platforms. As a popular compression technique, quantization constrains
the number of distinct weight values and thus reduces the number of bits required to represent
and store each weight. While the representation power of conventional neural networks is well
studied, a theoretical analysis of the representation power of quantized neural networks is
still missing. Many interesting questions remain unanswered, for example: 1) are quantized
neural networks more efficient than unquantized neural networks? 2) what is the optimal
bit-width for the parameters in neural networks? To answer these questions, we propose to
investigate the representation power of quantized neural networks, especially in comparison
with unquantized neural networks. Upper and lower bounds on the network size required for
quantized neural networks to approximate any function within a given error bound will be
proved. The derived bounds will then be used to provide early-stage resource estimates and
network design guidelines.
Deliverable: 1) One submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $50K.

Performing site: Notre Dame (PI: Yiyu Shi).

ID: N2

Topic: Intelligent P-QRS-T peak detection for ultra-low power implantable devices

Abstract: T-wave alternans (TWA) detection is a very important clinical method, and it is
ideally achieved in real time with the very limited computation power and battery capacity of
implantable devices such as ICDs. Most of the existing works either need high computation
power or are not accurate enough. The dynamic time warping (DTW) method offers the best
tradeoff between accuracy and power consumption. We propose a method to select the most
efficient settings (for instance, the number of templates for the DTW algorithm) to achieve
the most energy-efficient solution for ECG P-QRS-T peak detection.
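
For reference, the sketch below computes a textbook DTW distance between two 1-D sequences;
this is the core computation whose cost the project wants to tune (for example, via the number
of templates). The sine-wave "beat" and "template" are stand-ins, not real ECG data, and
template selection and the full peak-detection pipeline are outside this sketch.

    # Textbook dynamic time warping (DTW) distance between two 1-D sequences.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    beat = np.sin(np.linspace(0, np.pi, 40))        # stand-in ECG beat
    template = np.sin(np.linspace(0, np.pi, 50))    # stored template
    print(dtw_distance(beat, template))
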
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $100K.

Performing site: Notre Dame (PI: Yiyu Shi).

ID: N3

Title: Guide for hybrid quantization of deconvolution-based generators

Abstract: As spin-offs from conventional convolutional neural networks (CNNs), GANs have
attracted much attention in the fields of reinforcement learning, unsupervised learning and also
semi-supervised learning, but they encounter the same heavy pressure from hardware as
conventional CNNs. Quantization is a popular, efficient and hardware-friendly compression
technique to alleviate this pressure on CNNs. However, based on our observations, directly
applying quantization to all layers may cause deconvolution-based generators in generative
adversarial networks to fail. We propose to investigate the unique properties of deconvolution-based neural
networks and apply hybrid quantization to obtain the best performance-cost trade-off. The
focus will be estimating the relative redundancy in each layer of deconvolution-based
generators. Based on the estimated redundancy, an effective guideline for the hybrid
quantization of deconvolution-based generators will be developed.
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code and benchmarks
used in the experiment.

Proposed budget: $50K.

Performing site: Notre Dame (PI: Yiyu Shi).

ID: N4

Topic: Hardware aware neural architecture search

Abstract: The success of DNNs in a wide range of applications is owed mainly to the invention
of task-specific, well-tailored architectures. The traditional architecture engineering process
relies heavily on human effort and is unavoidably slow, error-prone, and limited in variety.
Recently, growing attention and effort have been devoted to neural architecture search (NAS), a
process that automates the design of DNN architectures. Previous work on NAS, however,
considers only accuracy as the sole objective to direct the search and overlooks performance on
the target hardware. We propose to incorporate hardware specifications such as latency, power,
and resource usage into the NAS system to build a hardware-aware NAS
framework. Given the target hardware and performance for a specific task, a dedicated DNN
architecture can then be generated to meet all the requirements without any human
involvement.
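
As a rough illustration of a hardware-aware search objective, the sketch below scores random
candidate architectures by a proxy accuracy penalized by an estimated latency relative to a
target. The candidate space, both estimators, and the penalty weight are hypothetical
placeholders, not the proposed NAS framework.

    # Hardware-aware search objective sketch: accuracy minus a latency penalty.
    import random

    random.seed(0)
    SPACE = {"depth": [6, 12, 18], "width": [32, 64, 128], "kernel": [3, 5]}

    def estimate_accuracy(arch):      # placeholder proxy, not a trained model
        return (0.6 + 0.02 * SPACE["depth"].index(arch["depth"])
                    + 0.03 * SPACE["width"].index(arch["width"]))

    def estimate_latency_ms(arch):    # placeholder hardware cost model
        return 0.01 * arch["depth"] * arch["width"] * arch["kernel"]

    def score(arch, target_ms=30.0, penalty=0.05):
        over = max(0.0, estimate_latency_ms(arch) - target_ms)
        return estimate_accuracy(arch) - penalty * over

    candidates = [{k: random.choice(v) for k, v in SPACE.items()}
                  for _ in range(50)]
    best = max(candidates, key=score)
    print(best, round(estimate_latency_ms(best), 1), "ms")
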
Deliverable: 1) Two submissions to premium ML conferences or journals; 2) Code and
benchmarks used in the experiment.

Proposed budget: $100K.

Performing site: Notre Dame (PI: Yiyu Shi).

 

ID: N5

Topic: SCNN: A general distribution based statistical convolutional neural network with
application to video object detection

Abstract: Various convolutional neural networks (CNNs) have been developed recently that
achieve accuracy comparable to that of human beings in computer vision tasks such as image
recognition, object detection, and tracking. Most of these networks, however, process a single
image frame at a time and may not fully utilize the temporal and contextual
correlation typically present in multiple channels of the same image or adjacent frames from a
video, thus limiting the achievable throughput. This limitation stems from the fact that existing
CNNs operate on deterministic numbers. We propose a novel statistical convolutional neural
network (SCNN), which extends existing CNN architectures but operates directly on correlated
distributions rather than deterministic numbers. By introducing a parameterized canonical
model to model correlated data and defining corresponding operations as required for CNN
training and inference, we show that SCNN can process multiple frames of correlated images
effectively, hence achieving significant speedup over existing CNN models. We use CNN-based
video object detection as an example to illustrate the usefulness of the proposed SCNN as a
general network model.
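
The toy sketch below conveys the basic idea of operating on distributions instead of
deterministic numbers: a (mean, variance) pair summarizing several correlated frames is pushed
through a linear, convolution-like layer. For simplicity it assumes independent inputs; the
proposed SCNN's parameterized canonical model for correlated data is not shown, and all shapes
and data are placeholders.

    # Pushing a (mean, variance) pair through a linear/convolution-like layer.
    import numpy as np

    def linear_on_gaussian(mean, var, W, b):
        """y = Wx + b with x ~ N(mean, diag(var)); returns mean/var of y."""
        y_mean = W @ mean + b
        y_var = (W ** 2) @ var        # variance of a linear combination
        return y_mean, y_var

    rng = np.random.default_rng(0)
    # Pixel statistics pooled over several correlated frames (toy numbers).
    frames = rng.standard_normal((5, 16)) + np.linspace(0, 1, 16)
    x_mean, x_var = frames.mean(axis=0), frames.var(axis=0)

    W = rng.standard_normal((8, 16)) * 0.2
    b = np.zeros(8)
    y_mean, y_var = linear_on_gaussian(x_mean, x_var, W, b)
    print(y_mean.shape, y_var.shape)
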
Deliverable: 1) A submission to a premium ML conference or journal; 2) Code used in the
experiment.

Proposed budget: $50K.

Performing site: Notre Dame (PI: Yiyu Shi)

 

ID: S1

Title: Attribute-based object localization

Abstract: Despite the recent advances in object detection, it is still a challenging task to localize
a free-form textual phrase in an image. Unlike locating objects over a fixed number of classes,
localizing textual phrases involves a massively larger search space. Thus, along with learning
from the visual cues, it is necessary to develop an understanding of these textual phrases and
their relation to the visual cues in order to reliably reason about the locations described by
the phrases. Spatial attention networks are known to learn this relationship and enable the
language-encoding recurrent networks to focus their gaze on salient objects in the image. Thus,
we propose to utilize spatial attention networks to refine region proposals for the phrases
produced by a Region Proposal Network (RPN) and to localize them through reconstruction.
Utilizing an in-network RPN and attention allows for an independent, self-sufficient model and
interpretable results, respectively.
Deliverable: Code and benchmarks used in the experiments.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Qinru Qiu).

ID: S2

Title: Biologically plausible spike-domain backpropagation for in-hardware learning

Abstract: Asynchronous event-driven computation and communication through spikes enable
massively parallel, extremely energy efficient and highly robust neuromorphic hardware
specialized for spiking neural networks (SNNs). However, the lack of a unified and effective
learning algorithm limits SNNs to shallow networks with low accuracy. While the backpropagation
algorithm, which utilizes gradient descent to train networks, has been successfully used in
Artificial Neural Networks (ANNs), it is neither biologically plausible nor friendly to
neuromorphic implementation. In this project, we propose to develop methods to achieve
backpropagation in spiking neural networks. This will enable error propagation through spiking
neurons in a more biologically plausible way and hence make in-hardware learning feasible on
existing neuromorphic processors.
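
For context, one widely used workaround for backpropagating through the non-differentiable
spike is a surrogate gradient: the forward pass emits a hard spike while the backward pass uses
a smooth surrogate derivative. The sketch below is a generic illustration of that idea, with an
assumed fast-sigmoid surrogate; it is not necessarily the biologically plausible scheme this
project will develop.

    # Generic surrogate-gradient spiking non-linearity (illustrative only).
    import numpy as np

    def spike_forward(v, threshold=1.0):
        return (v >= threshold).astype(float)        # non-differentiable step

    def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
        # Derivative of a fast sigmoid centered at the threshold.
        return beta / (1.0 + beta * np.abs(v - threshold)) ** 2

    v = np.linspace(0.0, 2.0, 9)                     # membrane potentials
    upstream = np.ones_like(v)                       # dL/d(spike)
    grad_v = upstream * spike_surrogate_grad(v)      # dL/d(v) via surrogate
    print(spike_forward(v))
    print(np.round(grad_v, 3))
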
Deliverable: Code and benchmarks used in the experiments.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Qinru Qiu).

ID: S3

Title: Fast prediction for UAV traffic/communication in a metropolitan area

Abstract: With the rapid increase of UAV applications in delivery, surveillance, and rescue
missions, there is an urgent need for UAV traffic management that ensures the safety and
timeliness of the missions. Accurate and fast UAV traffic prediction and resource usage
estimation are key techniques in the traffic management system. They allow us to 1) evaluate
different mission schedules or 4G/5G resource allocation schemes in a short period of time; and
2) adjust mission control policies in real time. A good prediction model should consider not
only the UAV mission information but also environmental information such as weather and 4G/5G
base station distribution. Its complexity is far beyond what traditional analytical approaches
can handle. We propose to solve this problem using machine learning. The model combines a
convolutional neural network (CNN) and a recurrent neural network (RNN). The inputs are
multi-channel time-varying streams, such as weather maps, cellular network usage maps,
geographical constraints, and UAV launching/landing information. The outputs will be a
predicted UAV conflict probability map and a 4G/5G channel congestion map. Compression and
acceleration of the model will also be studied for real-time prediction.
Deliverable: Model and implementation of the UAV traffic/communication prediction
framework. Technical publications.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Qinru Qiu).

ID: S4

Title: FPGA-based Neuromorphic System for Sensor Data Processing

Abstract: The human brain’s computational power, energy efficiency, and ability to process
real-time sensor data are attractive features for the evolving Internet of Things. Inspired by
the architecture of the human brain, brain-inspired computing is based on spiking neural
networks (SNNs). An SNN consists of a network of homogeneous computing elements, i.e., neurons
communicating with spikes. The sparsely distributed asynchronous events, i.e., spikes, allow
for a highly parallel and distributed computing architecture. Thus, neuromorphic hardware is
highly desirable for embedded self-contained applications such as real-time sensor data
processing. However, several challenges prohibit the wide utilization of neuromorphic hardware:
limited weight precision, limited I/O precision, and training difficulty caused by the discrete
nature of SNNs. In this work, we built a flexible and reconfigurable FPGA-based neuromorphic
system for sensor data processing. An innovative workflow is proposed to mitigate the
aforementioned issues. To demonstrate its effectiveness, we trained a neural network to
recognize sign language, and then used the proposed workflow to map it to the neuromorphic
system.
Deliverable: 1) A technical publication in a premium ML conference or journal; 2) Code and
benchmarks used in the experiment; 3) A complete FPGA-based neuromorphic system
implementation.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Qinru Qiu).

ID: S5

Title: Autonomous Waypoint Planning and Trajectory Generation for Multi-rotor UAVs

Abstract: Safe and effective operations for multi-rotor unmanned aerial vehicles (UAVs)
demand obstacle avoidance strategies and advanced trajectory planning and control schemes
for stability and energy efficiency. Solving those problems analytically in one framework is
extremely challenging when the UAV needs to fly a large distance in a complex environment. To
address this challenge, we propose a two-level strategy that ensures a globally optimal
solution. At the higher level, deep reinforcement learning (DRL) is adopted to select a
sequence of waypoints that lead the UAV from its current position to the destination. At the
lower level, an optimal trajectory is generated between each pair of adjacent waypoints. While
the goal of trajectory generation is to maintain the stability of the UAV, the goal of waypoint
planning is to select waypoints with the lowest control-thrust consumption throughout the
entire trip while avoiding collisions with obstacles.
Deliverable: A technical publication in a cyber-physical systems conference or journal, and the
software implementation of the proposed DRL framework.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Qinru Qiu).

ID: S6

Title: Multi-modal Fusion with Non-linear Dependence

Abstract: Fusion of data from multiple sources/sensors has been shown to significantly improve
inference performance. However, since each sensor carries a unique physical trait, sensor
heterogeneity is the first critical challenge for multi-modal fusion. Sensors are said to be
heterogeneous if their respective observation models cannot be described by the same
probability density function. Also, multiple sensor modalities tend to be dependent due to
non-linear cross-modal interactions. This dependence can be non-linear and even more
complex. Copula-based dependence modeling approach is a flexible parametric
characterization of the joint distribution of multivariate sensor observations. It addresses the
sensor heterogeneity and cross-modal non-linear dependence. We propose to design
copula-based optimal fusion rules for multi-modal inference problems.
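
The small sketch below conveys the copula idea: two heterogeneous sensor streams driven by a
common hidden cause are mapped to uniform scores via their empirical ranks, and their
dependence is summarized by a Gaussian-copula correlation that survives the skewed marginal.
The simulated sensors and the choice of a Gaussian copula are assumptions; selecting and
fitting the best copula family for the actual fusion rules is part of the proposed work.

    # Gaussian-copula summary of dependence between heterogeneous sensors.
    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(0)
    latent = rng.standard_normal(2000)                    # common phenomenon
    s1 = latent + 0.5 * rng.standard_normal(2000)         # sensor 1 (Gaussian)
    s2 = np.exp(0.8 * latent + 0.6 * rng.standard_normal(2000))   # sensor 2 (skewed)

    def to_uniform(x):
        return rankdata(x) / (len(x) + 1.0)               # probability scores

    u1, u2 = to_uniform(s1), to_uniform(s2)
    z1, z2 = norm.ppf(u1), norm.ppf(u2)                   # Gaussian scores
    rho = np.corrcoef(z1, z2)[0, 1]                       # copula correlation
    print("raw Pearson:", round(np.corrcoef(s1, s2)[0, 1], 3),
          "copula (rank) correlation:", round(rho, 3))
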
Deliverable: 1) Multiple submissions to statistical signal processing conferences or the TSP
journal; 2) Code and benchmarks used in the experiment.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Pramod Varshney).

ID: S7

Title: Compressive Sensing for multimodal data

Abstract: We consider the problem of sparse signal reconstruction from compressed
measurements when the receiver has multiple measurement vectors. Most of the works in
compressed sensing with multiple measurement vectors assume that sparse signals have
non-zero elements in common. This assumption is valid when multiple sensors observe a
phenomenon with the same signal modality. In this work, we will consider that several sensors
observe the same phenomenon with signals from different modalities. These multimodal
signals do not share joint-sparse representation. However, these signals are expected to be
statistically dependent as they are observed from the same phenomenon. We seek to extend
the concept of multiple measurement vectors that leverages dependence structure among
signals from different modalities. We approach the problem in two different ways. First, we
model heterogeneous dependence among the multiple sparse signals using Copula functions.
Several copula functions model different dependencies among random variables and the one
that best represents the dependencies among multimodal sparse signals should be selected
during the reconstruction of the multiple sparse signals. Second, we will consider learning of
dependent structures among multimodal signals using generative model-based techniques.
Generative models can be used when there are enough data to learn the dependencies among
the sparse signals. We will exploit the learned dependencies during signal reconstruction and
enhance the reconstruction performance.
Deliverable: 1) Multiple submissions to statistical signal processing conferences or the TSP
journal; 2) Code and benchmarks used in the experiment.

Proposed budget: $50K/year for two years.

Performing site: Syracuse (PI: Pramod Varshney).