|dc.description.abstract||Cyber-physical network systems are ubiquitous and play an indispensable role in advancing our modern society, where more and more devices with sensing, computation, communication, and actuation capabilities are connected, revolutionizing the way urban systems operate.
Regardless of the specific application, the central goal is to achieve a desired collective network behavior through the design of admissible distributed decision making algorithms for individual devices, harnessing their sensing, computation, communication, and actuation capabilities.
While various decision making tools have been developed, achieving efficient decision making in a network setting differs in many respects from traditional decision making theory. First of all, the decision making process is distributed in nature. There is not just a single decision maker; instead, each individual node in the network acts as a decision maker, and their decisions collectively shape the global network behavior. Secondly, each node's decision making is constrained by the limited information available to it. What further complicates the problem are the often unreliable communication links and the uncertain, possibly time-varying disturbances. All these challenges call for new studies of decision making theory tailored to network systems.
In this dissertation, we tackle these challenges under three specific problem settings: distributed optimization, distributed voltage control in power networks, and distributed localized policy optimization in multi-agent Markov decision processes.
In the distributed optimization problem, there is a set of agents, each of which has a local convex cost function $f_i(x)$, and the objective is to find the $x$ that minimizes the average of all the functions using only local communication and local computation. In this dissertation, we propose two algorithms, Acc-DGD and Acc-DNGD, which achieve convergence rates that are (partially) analogous to those of centralized gradient descent and centralized Nesterov gradient descent, respectively. As a result, we partially bridge the gap between centralized and distributed optimization. In particular, the second algorithm, Acc-DNGD, attains rates that are strictly better than centralized gradient descent, partially achieving Nesterov-type acceleration in the realm of distributed optimization.
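To make the problem setting concrete, the following minimal sketch simulates the distributed optimization model on a ring of agents with quadratic local costs $f_i(x) = \tfrac{1}{2}(x - b_i)^2$. For illustration it uses a standard gradient-tracking iteration, not the dissertation's Acc-DGD or Acc-DNGD algorithms; the graph, costs, and step size are all assumed for the example.

```python
import numpy as np

# n agents on a ring, each with a local quadratic f_i(x) = 0.5 * (x - b_i)^2.
# The minimizer of the average cost (1/n) * sum_i f_i(x) is mean(b).
n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)

# Doubly stochastic mixing matrix for the ring graph (uniform neighbor weights):
# each agent averages with its two neighbors and itself.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0

grad = lambda x: x - b        # stacked local gradients f_i'(x_i)
x = np.zeros(n)               # each agent's local estimate of the minimizer
y = grad(x)                   # gradient-tracking variable (tracks average gradient)
eta = 0.1                     # step size (assumed)

for _ in range(1000):
    x_next = W @ x - eta * y             # consensus step plus tracked-gradient step
    y = W @ y + grad(x_next) - grad(x)   # update the gradient estimate
    x = x_next

# Every agent's local estimate x[i] approaches the global minimizer mean(b),
# even though no agent ever sees the full cost or communicates globally.
```

Only neighbor-to-neighbor products with `W` appear in the iteration, which is what makes it implementable with purely local communication.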
In the voltage control problem, there is a power distribution network, and the primary goal is to maintain the network's voltages within acceptable limits. In this dissertation, we propose a distributed voltage controller for power distribution networks based on reactive power compensation. The proposed controller can (i) operate in a distributed fashion, where each bus makes its decision based on local voltage measurements and communication with neighboring buses, (ii) always satisfy the reactive power capacity constraints, (iii) drive the voltage magnitudes into an acceptable range, and (iv) minimize an operational cost. We also perform various numerical case studies using the nonlinear power flow model to demonstrate the effectiveness and robustness of the controller.
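The flavor of such a controller can be sketched on a linearized (LinDistFlow-style) feeder model $v = v_0 + Xq$, where each bus runs a projected integral update using only its own voltage measurement. This is a generic illustration, not the dissertation's controller: the sensitivity matrix `X`, voltage band, gains, and capacity limits below are all made up for the example.

```python
import numpy as np

# Linearized feeder model: bus voltages v = v0 + X @ q, where q is the vector
# of reactive power injections. X here is a hypothetical positive definite
# sensitivity matrix for a 4-bus radial feeder.
n = 4
X = np.array([[0.1 * (1 + min(i, j)) for j in range(n)] for i in range(n)])
v0 = np.array([1.06, 1.07, 1.08, 1.09])   # uncontrolled voltages (p.u.), too high
vmin, vmax = 0.95, 1.05                   # acceptable voltage band (assumed)
qmin, qmax = -2.0, 2.0                    # per-bus reactive power capacity (assumed)

q = np.zeros(n)
gamma = 1.0                               # integral gain (assumed)
for _ in range(1000):
    v = v0 + X @ q                        # each bus measures only its own v[i]
    # Deadband error: positive above vmax, negative below vmin, zero inside band.
    viol = np.maximum(v - vmax, 0.0) + np.minimum(v - vmin, 0.0)
    # Projected integral update: purely local, and q never leaves its capacity.
    q = np.clip(q - gamma * viol, qmin, qmax)

# The voltages are steered into [vmin, vmax] while q respects its limits.
```

The projection (`np.clip`) enforces the capacity constraint at every step rather than only asymptotically, mirroring property (ii) above.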
In the multi-agent Markov Decision Process (MDP) problem, there are $n$ agents, and each agent $i$ is associated with a state $s_i$ and an action $a_i$ taking values in finite sets. Though the global state space size and action space size are exponential in $n$, we impose local dependence structures and focus on local policies that depend only on local states. This dissertation provides preliminary results on this problem: we propose a method that finds near-optimal local policies in polynomial time (in $n$) when the dependence structure is a one-directional tree and the local transition probabilities satisfy certain equality and inequality constraints. Simulations show that the results hold under more general assumptions; extending them to more general cases remains future work.||