11:20:01 Okay is that we have one neurorrival. That's right. 11:20:08 We have 3 of them. So we have now is going to get her 90 s of I'm I start. 11:20:27 So I originally actually started my academic life in physics. 11:20:30 I did. I did art in physics during the finance department to do work on connects matter, statistical physics, that kind of thing. 11:20:38 Very soon after I joined I started getting more interested in applying these statistical methods to other problems, not the problems that were in the condens matter at that moment. 11:20:48 So I actually ended up doing my Phd on is using some of these statistical methods to understand approximate inferences in human cognition. 11:20:57 So basically trying to explain a bunch of these human judgment and decision-making biases as some sort of optimal use of limited resources surrounded optimality and tractable versions of bounded optimality, using various things like sampling methods, amortization with vaes very sense of 11:21:13 inference, networks this kind of work. So most of the work that I did was actually very core, very much concretely in cognitive psychology. 11:21:19 Most of the papers that I published were all in Chinese play. 11:21:24 Some of these, both in behavioral economics and cognitive psychology, I still graduated with a Phd. 11:21:29 In physics. So I think that goes to say that, like secretly in that heart of hearts, the physics department believes that everything is physics. 11:21:36 So I still. So then, after I finished my Phd. I did a very brief postdoc at Princeton stuff in comparing humans. 11:21:46 So I used to do some of this work. I did start a start of this work during internships in my Ph. D. 11:21:49 As well, but got more interested in machine learning systems because a lot of them, even all of these amortization networks and inference networks, things like that are a lot very relevant, you know, interfacing with a lot of the agency neural networks that were doing a bunch of 11:22:05 stuff but that's my timer. I should go quickly. 11:22:07 So basically china think about so that ended up working a bunch on trying to compare human and machine learning representations both in terms of things like inductive biases, category learning representations both in terms of things like inductive biases, category learning compositionality and things and lately as with everybody else, in industry I 11:22:22 currently so the last 2 years I've been a deepmind in the New York office. 11:22:27 I'm in a deep mine in the New York office. I was briefly in London during my Phd. 11:22:28 But then all of my actions were in the New York office, but there, I've been working on language and language models, as everyone in the world is, and thinking about how they can be a repository for human-like abstractions. 11:22:41 How they sort of can scaffold statistical learning by giving a structure all right, all right, I'm done. 11:22:46 I'm done. I'm done. Thank you. Thank you. 11:22:49 Thank you.