Statistical Theory of Deep Neural Network Models

November 7, 2024 - November 9, 2024

Organizers:

Lizhen Lin
University of Maryland
Vince Lyzinski
University of Maryland
Yun Yang
University of Maryland

As deep learning has achieved breakthrough performance across a variety of application domains, significant effort has been made to understand the theoretical foundations of deep neural network (DNN) models. Statisticians have devoted considerable effort to understanding the statistical foundations of such models, for example by explaining why DNN models outperform classical nonparametric estimators and by accounting for their strong practical performance through the lens of statistical theory. This workshop aims to bring together researchers in the field to discuss recent progress in the statistical theory and foundations of DNN models and to chart possible research directions.



Participants:

  • Arash Amini, University of California, Los Angeles
  • Peter Bartlett, University of California, Berkeley
  • Ismael Castillo, Sorbonne University
  • Minwoo Chae, POSTECH
  • David Dunson, Duke University
  • Francesco Gaffi, University of Maryland
  • Subhashis Ghosal, North Carolina State University
  • Jian Huang, The Hong Kong Polytechnic University
  • Yongdai Kim, Seoul National University
  • Kunwoong Kim, Seoul National University
  • Shivam Kumar, University of Notre Dame
  • Hyeok Kyu Kwon, POSTECH
  • Sophie Langer, University of Twente
  • Kyeongwon Lee, University of Maryland
  • Jeyong Lee, POSTECH
  • Wenjing Liao, Georgia Tech
  • Lu Lu, Yale University
  • Seokhun Park, Seoul National University
  • Rong Tang, Hong Kong University of Science and Technology
  • Matus Telgarsky, New York University
  • Jane-Ling Wang, University of California, Davis
  • Yixin Wang, University of Michigan
  • Yuting Wei, University of Pennsylvania
  • Tong Zhang, University of Illinois Urbana-Champaign
  • Yiqiao Zhong, University of Wisconsin
