AI-driven Methods for Network Resilience

Systematic operation and maintenance (O&M) plays a pivotal role in the management of complex multi-asset systems. Regrettably, O&M is frequently disregarded by stakeholders during the design and construction phases, despite compelling evidence that it accounts for over 50% of total lifecycle expenses. Effective O&M requires a holistic perspective on the system's life cycle, involving dynamic monitoring and maintenance strategies for every constituent unit to meet specific requirements. However, the inherent diversity of systems, intricate internal interdependencies, and systemic uncertainty result in exponential complexity, rendering the determination of an optimal O&M strategy a formidable challenge.


In response to these challenges, this research aims to present a digital twin (DT) enabled framework that simulates the O&M decision-making path based on a Markov decision process (MDP) and reinforcement learning (RL) algorithms. Broadly, a DT is a digital replica of physical assets, processes, and complex systems that integrates data analysis to glean insights from diverse data sources and simulate real-world scenarios. The MDP is well suited to O&M in complex systems because it succinctly portrays the degradation process and allows for sequential decisions within high-dimensional state and action spaces. Building on this mathematical foundation, RL methods, an advanced approach for solving MDPs, strive to identify optimal solutions with reduced requirements for prior knowledge of the target system. The extensive data required by RL can be sourced from the DT-based O&M management system.
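To make the MDP framing concrete, the sketch below models a single degrading component as a small maintenance MDP and solves it with value iteration. This is an illustrative toy, not the thesis framework: the four degradation states, the degradation probability, and all costs are assumed for the example, and a real multi-asset system would have far larger state and action spaces (which is where RL comes in).

```python
import numpy as np

# Illustrative single-component maintenance MDP (all numbers assumed).
# States: 0 = as-new, 1 = minor wear, 2 = major wear, 3 = failed.
# Actions: 0 = do nothing, 1 = perform maintenance.
N_STATES, N_ACTIONS, GAMMA = 4, 2, 0.95
P_DEGRADE = 0.3                        # assumed per-step degradation probability
COST_OPERATE = [0.0, 1.0, 3.0, 20.0]   # assumed per-step cost of running in each state
COST_MAINTAIN = 5.0                    # assumed cost of a maintenance intervention

# Transition tensor P[a, s, s'] and immediate-cost matrix C[a, s].
P = np.zeros((N_ACTIONS, N_STATES, N_STATES))
C = np.zeros((N_ACTIONS, N_STATES))
for s in range(N_STATES):
    # Action 0: the component degrades stochastically; the failed state absorbs.
    if s < N_STATES - 1:
        P[0, s, s] = 1.0 - P_DEGRADE
        P[0, s, s + 1] = P_DEGRADE
    else:
        P[0, s, s] = 1.0
    C[0, s] = COST_OPERATE[s]
    # Action 1: maintenance restores the component to as-new.
    P[1, s, 0] = 1.0
    C[1, s] = COST_MAINTAIN + COST_OPERATE[0]

# Value iteration: minimise expected discounted cost.
V = np.zeros(N_STATES)
for _ in range(1000):
    V_new = (C + GAMMA * P @ V).min(axis=0)   # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# Greedy policy w.r.t. the converged values: 0 = keep running, 1 = maintain.
policy = (C + GAMMA * P @ V).argmin(axis=0)
print(policy)   # a condition-based threshold policy emerges
```

With these assumed numbers the optimal policy is threshold-shaped: run the component while wear is minor and maintain once major wear or failure is reached. An RL method such as Q-learning would recover the same kind of policy from simulated transitions alone, without the explicit `P` and `C` models, which is the property exploited when the transition data instead come from a DT.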

 

PhD Student

Longyan Tan

 

Supervisor

Ajith Parkliad
