Show simple item record

dc.contributor.advisor    Li, Na
dc.contributor.author    Qu, Guannan
dc.date.accessioned    2019-12-12T08:00:08Z
dash.embargo.terms    2019-11-01
dc.date.created    2019-05
dc.date.issued    2019-05-15
dc.date.submitted    2019
dc.identifier.citation    Qu, Guannan. 2019. Distributed Decision Making in Cyber-Physical Network Systems. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
dc.identifier.uri    http://nrs.harvard.edu/urn-3:HUL.InstRepos:42029456
dc.description.abstract    Cyber-physical network systems are ubiquitous and play an indispensable role in advancing modern society: more and more devices with sensing, computation, communication, and actuation capabilities are being connected, revolutionizing the way urban systems operate. Regardless of the specific application, the central goal is to achieve a desired collective network behavior by designing admissible distributed decision-making algorithms for the individual devices that harness their sensing, computation, communication, and actuation capabilities. While various decision-making tools have been developed, efficient decision making in a network setting differs from traditional decision-making theory in several respects. First, the decision-making process is distributed in nature: there is no single decision maker; instead, each node in the network acts as a decision maker, and the nodes' decisions collectively shape the global network behavior. Second, each node's decision making is constrained by the limited information available to it. The problem is further complicated by often unreliable communication links and uncertain, even time-varying, disturbances. These challenges call for a new study of decision-making theory tailored to network systems. In this dissertation, we address these challenges in three specific problem settings: distributed optimization, distributed voltage control in power networks, and distributed localized policy optimization in multi-agent Markov decision processes.

In the distributed optimization problem, there is a set of agents, each with a local convex cost function $f_i(x)$, and the objective is to find an $x$ that minimizes the average of all the local functions using only local communication and local computation. We propose two algorithms, Acc-DGD and Acc-DNGD, which achieve rates that are (partially) analogous to centralized gradient descent and centralized Nesterov gradient descent, respectively, thereby partially bridging the gap between centralized and distributed optimization. In particular, the second algorithm, Acc-DNGD, attains rates that are strictly better than centralized gradient descent and partially achieves Nesterov-type acceleration in the realm of distributed optimization.

In the voltage control problem, there is a power distribution network and the primary goal is to keep the network voltage within acceptable limits. We propose a distributed voltage control scheme for power distribution networks based on reactive power compensation. The proposed controller (i) operates in a distributed fashion, with each bus making its decision based on local voltage measurements and communication with neighboring buses, (ii) always satisfies the reactive power capacity constraint, (iii) drives the voltage magnitudes into an acceptable range, and (iv) minimizes an operational cost. We also perform various numerical case studies using the nonlinear power flow model to demonstrate the effectiveness and robustness of the controller.

In the multi-agent Markov decision process (MDP) problem, there are $n$ agents, and each agent $i$ is associated with a state $s_i$ and an action $a_i$ taking values in a finite set. Although the sizes of the global state space and action space are exponential in $n$, we impose local dependence structures and focus on local policies that depend only on local states. This dissertation provides preliminary results on this problem: we propose a method that finds near-optimal local policies in time polynomial in $n$ when the dependence structure is a one-directional tree and the local transition probabilities satisfy certain equality and inequality constraints. Simulations show that the results hold under more general assumptions; extending them to more general cases remains future work.
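
As an illustration of the distributed optimization setting described above, where each agent $i$ holds a local convex cost $f_i(x)$ and the network seeks a minimizer of the average cost through local communication and computation only, the following sketch runs plain consensus-based distributed gradient descent (DGD) on a hypothetical quadratic problem over a ring graph. The data, graph, and step size are made up for illustration; this is the standard baseline method, not the Acc-DGD or Acc-DNGD algorithms proposed in the dissertation.

import numpy as np

# Plain consensus-based distributed gradient descent (DGD) on a ring graph.
# Hypothetical data; illustrative baseline only, not the dissertation's methods.
np.random.seed(0)
n, d = 5, 3                              # number of agents, dimension of x
A = np.random.randn(n, d)                # agent i's cost: f_i(x) = 0.5 * (a_i^T x - b_i)^2
b = np.random.randn(n)

def local_grad(i, x):
    # Gradient of agent i's local quadratic cost at x.
    return A[i] * (A[i] @ x - b[i])

# Doubly stochastic mixing matrix: each agent averages its estimate
# with those of its two ring neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                     # row i is agent i's local estimate of x
eta = 0.05                               # constant step size
for _ in range(1000):
    # Consensus step (mix with neighbors) followed by a local gradient step.
    x = W @ x - eta * np.vstack([local_grad(i, x[i]) for i in range(n)])

x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # minimizer of the average cost
print("max deviation from centralized minimizer:", np.abs(x - x_star).max())

With a constant step size, plain DGD typically settles only in a neighborhood of the true minimizer; the print statement reports the remaining deviation across all agents.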
dc.description.sponsorship    Engineering and Applied Sciences - Applied Math
dc.format.mimetype    application/pdf
dc.language.iso    en
dash.license    LAA
dc.subject    cyber-physical systems
dc.subject    network systems
dc.subject    control theory
dc.subject    optimization theory
dc.subject    distributed control
dc.subject    distributed optimization
dc.subject    Markov decision process
dc.subject    multi-agent systems
dc.title    Distributed Decision Making in Cyber-Physical Network Systems
dc.type    Thesis or Dissertation
dash.depositing.author    Qu, Guannan
dash.embargo.until    2019-11-01
dc.date.available    2019-12-12T08:00:08Z
thesis.degree.date    2019
thesis.degree.grantor    Graduate School of Arts & Sciences
thesis.degree.level    Doctoral
thesis.degree.name    Doctor of Philosophy
dc.contributor.committeeMember    Parkes, David
dc.contributor.committeeMember    Tsitsiklis, John
dc.contributor.committeeMember    Olshevsky, Alex
dc.contributor.committeeMember    Low, Steven
dc.type.material    text
thesis.degree.department    Engineering and Applied Sciences - Applied Math
dash.identifier.vireo
dc.identifier.orcid    0000-0002-5466-3550
dash.author.email    guannan.q@gmail.com

