HNN-F could be seen as sitting on the fringe of it, with its multiplicative effects that would certainly be an odd modeling choice without a time-varying unobserved components regression in mind. Closely related, Agarwal et al. (2020), O'Neill et al. (2021), and Rügamer et al. (2020) all develop architectures inspired by generalized additive models to enhance interpretability in deep networks for generic tasks. While these articles certainly tackle some of the opacity issues arising from nonparametric nonlinear estimation with deep learning, none address those that are inherent to any non-sparse high-dimensional (even linear) regression, namely, that analyzing partial derivatives of 200 things that typically co-move together unfortunately borders on the meaningless.
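To make the last point concrete, consider a stylized illustration (not in the original text): in a linear regression with two regressors that nearly co-move, say $x_1 \approx x_2$, the fitted value satisfies
$$\hat{y} = \beta_1 x_1 + \beta_2 x_2 \approx (\beta_1 + \delta)\,x_1 + (\beta_2 - \delta)\,x_2 \quad \text{for any } \delta,$$
so the data effectively pin down only the sum $\beta_1 + \beta_2$, while each individual partial derivative $\partial \hat{y}/\partial x_j = \beta_j$ is nearly arbitrary. With 200 co-moving regressors, the same indeterminacy arises along every near-collinear direction, which is why derivative-by-derivative interpretation loses meaning even in the linear, fully parametric case.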