Engineering Science Lecture Series, Lecture 13 || Learning & Control in Safety-Critical Systems

College of Engineering, Peking University
2022-06-17

Lecture Title:

Learning & Control in Safety-Critical Systems

Lecture Time:

Wednesday, June 22, 2022, 10:00–11:30 a.m. Beijing time

Host:

Professor Jie Song, Associate Dean

Department of Industrial Engineering and Management, College of Engineering, Peking University

Speaker:

Adam Wierman

Professor of Computing and Mathematical Sciences

Director of Information Science and Technology

California Institute of Technology

Speaker Biography

Peking University ES Seminars

Adam Wierman is a Professor in the Department of Computing and Mathematical Sciences at Caltech. He received his Ph.D., M.Sc., and B.Sc. in Computer Science from Carnegie Mellon University and has been a faculty member at Caltech since 2007. Adam’s research strives to make the networked systems that govern our world sustainable and resilient. He is best known for his work spearheading the design of algorithms for sustainable data centers and for his co-authored book “The Fundamentals of Heavy Tails”. He is a recipient of multiple awards, including the ACM Sigmetrics Rising Star award, the ACM Sigmetrics Test of Time award, the IEEE Communications Society William R. Bennett Prize, and multiple teaching awards, and he is a co-author of papers that have received “best paper” awards at a wide variety of conferences across computer science, power engineering, and operations research.

Lecture Abstract

Making use of modern black-box AI tools such as deep reinforcement learning is potentially transformational for safety-critical systems such as data centers, the electricity grid, transportation, and beyond. However, such machine-learned algorithms typically lack formal guarantees on their worst-case performance, stability, or safety, and are typically difficult to deploy in distributed, networked settings. So, while their performance may improve upon traditional approaches in “typical” cases, they may perform arbitrarily worse in scenarios where the training data is unrepresentative, e.g., due to distribution shift, or in situations where global information is unavailable to local controllers. These represent significant drawbacks when considering the use of AI tools in safety-critical networked systems. Thus, a challenging open question emerges: Is it possible to provide guarantees that allow black-box AI tools to be used in safety-critical applications? In this talk, I will provide an overview of a variety of projects from my lab at Caltech that seek to develop robust and localizable tools combining model-free and model-based approaches to yield AI tools with formal guarantees on performance, stability, safety, and sample complexity.

Editor: Liang Ping
