13:36:14 We'll see. And so you'll see pretty quickly that my alternative title to "What is statistical learning?" is "Why I don't know what statistical learning is," just to give you a sense of what we have planned for you today. 13:36:31 So I'm going to give a talk that's supposed to be a half an hour-ish. 13:36:35 It might be slightly longer. 13:36:43 The main purpose of this is just to give a little bit of how I think about some issues of statistical learning, and to some extent why I'm confused about what the right thing to call statistical learning is. We're then going to do a roundtable discussion, 13:36:52 where we'll get a few more views of how some different people from the group, coming from different backgrounds, think about what statistical learning is. 13:37:01 We'll have a break. We'll get cookies if there are any left. 13:37:05 Then we're going to have breakout groups, and so with the breakout groups, I'll show you in a moment what your goal will be in 13:37:14 the breakout group. But then you're going to have presentations from your groups on 13:37:18 essentially some challenge of what is and isn't statistical learning. 13:37:24 So I'm going to have you try to come up with some sort of agreement on what might be a definition of statistical learning that would be good for others, 13:37:31 and then see if you could give some examples of what is not statistical learning, 13:37:37 from the behavioral standpoint, or from the brain-mechanism standpoint, or from the theoretical standpoint. 13:37:42 [Inaudible aside.] 13:37:53 And you know, I think one thing that's important to understand is that there are two types of definitions that are going to be worth discussing today for what statistical learning is. 13:38:07 One is, for each of us, what is convenient for how we do our research, where it could be a selfish definition that doesn't have to be correct for others, 13:38:16 but it's something that helps us think about the problems that we want to think about. 13:38:20 And then there is what the group exercise will be: to try to come up with a definition that would be good for others, 13:38:27 like what we want to put forth to the world. 13:38:31 And I think it's worth understanding that these definitions don't have to be the same; you could still have your personal definition that's useful to you and diverges from whatever group definition you think is useful. 13:38:43 I did try to do some searches on the web to find out what statistical learning is. 13:38:52 One from Google: in cognitive psychology and cognitive neuroscience, 13:38:55 statistical learning refers to the extraction of regularities in how features and objects co-occur in the environment over space and time. That's one that kind of implies 13:39:05 some form of associative learning component, which I think will come up again and again. ChatGPT, 13:39:13 of course, has interesting answers. So the first one was: statistical 13:39:18 learning refers to a set of methods and techniques used to analyze and make predictions or inferences from data, as a branch of machine learning, blah blah blah. And so this is probably what most people think statistical learning is 13:39:30 if they aren't considering psychology. I did ask ChatGPT: 13:39:34 what about statistical learning in the brain?
And it came up with: statistical learning in the brain refers to the brain's ability to extract and learn statistical regularities from the environment; it is a cognitive process that allows us to detect patterns, make predictions, and learn from experience based on statistical information present in the sensory 13:39:50 input. Actually, that ain't bad. And there are all sorts of other things one could follow up on. 13:40:02 I did actually ask who the most important people in statistical learning are, and it gave a bunch of names. 13:40:09 József wasn't there. I asked, you know, what about József Fiser? 13:40:13 It said he does important work as well. I asked about Aaron Seitz. It said there's no evidence that Aaron Seitz has ever done important statistical learning work. 13:40:20 So I don't like ChatGPT anymore. 13:40:26 So, exactly! 13:40:32 So! 13:40:35 Okay, done. So, I haven't tried that. If you ask about perceptual learning, it knows who I am. 13:40:51 But yeah, you can check it out during the talk and see what you find. 13:40:55 And so I thought that for this talk I would just go through some examples of different learning-type work from my past, and how we might think about the extent to which it's statistical learning or not. So briefly: talk a little bit about development, then talk about things that are quote-unquote 13:41:15 statistical learning research, a lot of which is associative, 13:41:20 then my perceptual learning research, and then we'll wrap up. 13:41:23 So, development. This actually goes back to my PhD thesis, where I started off developing computational models of the visual system before the eyes open. 13:41:32 Some of the challenges were basically: how do you reproduce results on critical periods, 13:41:38 where, if you do something like patch an eye during early development, you'll basically get abnormal formation of these ocular dominance columns, but if you wait till the animal is 12 weeks old, 13:41:51 you get normal formation? And the question is, how could we reproduce this within the model? 13:41:59 I worked with Steve Grossberg, so people who know Grossberg's work will recognize these types of diagrams. 13:42:04 And so this was a very complicated dynamical-system model. 13:42:08 The key property of this, really, was that I was trying to understand how you could get maps that are similar across the layers. 13:42:18 So how do you get it so that layer 4 and layer 6 and layer 2/3 all have the same maps, even though the maps, especially in layers 2/3 and 5, form before those layers are connected with each other? 13:42:27 And so the key thing in my model was this cortical subplate, which is a transient cortical layer that is there during early development. 13:42:39 It's connected up through layer 1 down to layer 2/3, and also down to layer 5, and it's basically a standard kind of self-organizing map, where you have lateral inhibition in cortex, 13:42:49 you put in noise that's independent to the two eyes, and you get these nice ocular dominance columns. 13:42:54 You basically find cells that get orientation tuning and that share it across the layers. 13:42:59 Okay, so this is kind of the first question: 13:43:04 on one hand, this seems to be a real, true case of statistical learning, because basically the system is being patterned by the inputs.
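To make the flavor of this concrete, here is a minimal toy sketch of that kind of self-organizing mechanism. This is not Grossberg's model or the subplate model from the thesis; it's a one-dimensional Hebbian map with a Mexican-hat lateral interaction and independent per-eye noise, which is enough to produce ocular-dominance-like segregation:

```python
import numpy as np

# Toy 1-D ocular dominance model (illustrative only; not the actual
# dynamical-system model described in the talk). Each cortical unit has
# one weight per eye; the two eyes deliver independent noise, and a
# Mexican-hat lateral interaction makes neighbors cooperate while more
# distant units compete.

rng = np.random.default_rng(0)
n_units = 120
w = rng.uniform(0.45, 0.55, size=(n_units, 2))  # columns: [left eye, right eye]

x = np.arange(-10, 11)
kernel = np.exp(-x**2 / 4.0) - 0.5 * np.exp(-x**2 / 36.0)  # excite near, inhibit far

def lateral(a):
    # circular convolution approximates lateral connectivity on a ring
    return np.convolve(np.tile(a, 3), kernel, mode="same")[n_units:2 * n_units]

eta = 0.05
for _ in range(5000):
    eye = rng.random(2)                           # independent drive per eye
    drive = w @ eye                               # feedforward response
    act = np.clip(drive + lateral(drive - drive.mean()), 0.0, None)
    w += eta * np.outer(act, eye)                 # Hebbian update
    w /= w.sum(axis=1, keepdims=True)             # competition between the eyes

dominance = w[:, 0] - w[:, 1]
print("".join("L" if d > 0 else "R" for d in dominance))
# typically prints alternating runs of L's and R's: ocular-dominance-like columns
```

Note that, as the talk goes on to say, the segregation here comes as much from the lateral-interaction structure as from the input statistics.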
13:43:14 At the same time, when you think about it, it's actually the structure of the model that gives rise to the results. 13:43:22 There are some patterns of inputs that could give rise to different map formation, but generally you're just putting noise into the model, and as long as it's not correlated between the two eyes, 13:43:33 you're going to get this. And so it's a model that's capable of statistical learning, 13:43:38 but it's not really clear that this is equal to statistical learning as we want to think about it. 13:43:51 So that's one issue to think about: if the system is supposed to be learning from the input, to what extent is it the structure of the model that determines the answer in the system? 13:44:12 Then I've done a lot of studies that were under the kind of cognitive-behavioral framework of statistical learning. 13:44:18 And so some of the definitions I've used in previous talks are about abstracting statistical regularities from the environment: 13:44:26 you basically have these temporal and/or spatial regularities that one could pick out. And I'm going to repeat this associative-learning concept, where a lot of this framework is basically: how do you learn that things go together in space, or how do you learn that things 13:44:43 go together in time? And so one of the early studies I did, this was with Ladan Shams, where we're looking at these pairs. These are kind of standard visual pairs that are seen in sequence, 13:44:56 and then we basically had auditory stimuli that were paired with them, 13:45:01 so there are regularities both within and across each sense. 13:45:05 And so let's see what the sound plays. 13:45:10 Yeah. Let's go back. 13:45:14 So those are a couple of the sounds. And then basically, what we found was that, you know, anything beyond 50% is learning, 13:45:27 and basically people learned the audio-audio pairs, the visual-visual pairs, and the audio-visual pairs, 13:45:37 and the multisensory ones they seem to learn maybe a little bit better than some of the others; 13:45:39 maybe there's more information there. Oh, so basically, you see a pattern of these shapes that are appearing on the screen one at a time. 13:45:53 There's no task at all. You also hear these sounds that are occurring at the same time. There were pairs that were regular, 13:46:05 so in this case these two objects always went together, and there are some that are not regular, and at the end you ask people which things they're familiar with: either a pair that they experienced or a pair that they didn't. 13:46:16 And so we'll see this paradigm again across multiple tasks, 13:46:22 but it's one that has often been used for temporal statistical learning, 13:46:27 to try to understand whether people will learn co-occurrences with different probabilistic structures. 13:46:37 So basically, this quartet: 13:46:45 these two items went together, those two items went together, and then those pairs went with these pairs. 13:46:54 And so you could basically break this grid apart and look to see, 13:46:57 for each of these circumstances, did people learn these ensembles? 13:47:02 Okay, yeah, sorry about that. I had more demos, but then I took them out because my talk's too long.
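As an aside for readers of this transcript, here is a minimal sketch of how this kind of exposure stream is usually constructed and scored. The shape labels and pair structure are hypothetical placeholders, not the study's actual stimuli; the point is that the only structure in the stream is the within-pair transition statistics, and that familiarity is scored against a 50% chance level:

```python
import random

# Hypothetical statistical-learning exposure stream: shapes are grouped
# into fixed pairs, and the stream is a shuffled sequence of those pairs,
# so within-pair transition probability is 1.0 while between-pair
# transitions are much weaker.

random.seed(1)
shapes = list("ABCDEFGH")
random.shuffle(shapes)
pairs = [tuple(shapes[i:i + 2]) for i in range(0, len(shapes), 2)]

def make_stream(n_repeats):
    order = pairs * n_repeats
    random.shuffle(order)                   # pair order is random...
    return [s for p in order for s in p]    # ...but within-pair order is fixed

stream = make_stream(50)

# Test phase: two-alternative familiarity, a real pair vs. a foil that
# never co-occurred. "Anything beyond 50% is learning," i.e., accuracy
# is compared to chance. Counts below are made-up placeholders.
n_correct, n_trials = 61, 80
print(f"familiarity accuracy: {n_correct / n_trials:.0%} (chance = 50%)")
```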
13:47:09 One thing of note is that we also did this where we did not have the multisensory version, so we could just present the auditory stream or just present the visual stream. 13:47:20 And we found that basically, whether the audio pairs and the visual pairs were presented in the multisensory exposure or the unisensory exposure, they were learned the same way. 13:47:34 And so this kind of seemed to suggest that maybe we have multiple systems in the brain that are pulling out these regularities independently from each other. 13:47:41 Perhaps. 13:47:45 Here's a similar study where we're asking whether this learning was implicit. 13:47:53 We did a version of this the same day versus also looking the next day, and essentially you could measure, in this case, the reaction time. 13:48:02 So now we're dealing with triplets. So these are the ones that go with each other, and the reaction time for the second one will be faster than for the first, and the third one faster than the second. 13:48:16 So this is a typical format that you'll see: 13:48:18 you first expose people to these, and then they're supposed to watch for a target 13:48:21 and press a key when they see that target in a sequence. And if you then show them pictures and ask them to match each one to a set of things that might go with it, we basically found that people were at chance in this matching task. And so it's slightly different from 13:48:41 the familiarity task, where you basically say: here's one pair, here's another one, which one's more familiar? 13:48:47 In this case, you actually have to say which of these other objects it goes with, and people were not able to do that task, even though there's nice evidence that they've learned something. 13:49:00 This whole sequence of work is with Ladan Shams. 13:49:06 We started asking this question: is this really associative learning? 13:49:10 And so what we could do is, you see, here's the target task. So after you've learned the sequence, you're shown the target, and you're supposed to press a key when you see the match. The standard paradigm is that if the match is item 1 versus item 2, in 13:49:30 this case people are faster for item 2, both with visual pairs and with auditory pairs. 13:49:36 But what happens if you have the wrong item in the number 2 slot? 13:49:41 So you take a number 2 item, but it's from a different pair. 13:49:48 We actually find that same result. So this is now a little bit of a problem for the associative hypothesis: it seems that the items in this number 2 slot are actually maybe represented better, 13:50:03 even when they're not presented in their learned context. And in fact, we even know this: you could present them in another task framework, and people are better at recognizing them. 13:50:14 And so I'll talk about perceptual learning in a bit, 13:50:17 but this is more how I think about perceptual learning. 13:50:23 The main task is a detection task, so you just press a key 13:50:32 when you see that item, and that item could be the first item of a pair or the second item of a pair; 13:50:40 it could be following its own first item, or it could be following another pair's first item.
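To spell out the implicit measure, here is a toy sketch of how detection reaction times are grouped by an item's position within its learned triplet. The items and RT values are made-up placeholders; learning shows up as faster responses to more predictable positions:

```python
import statistics

# Toy analysis of the implicit RT measure: group detection RTs by the
# target's position in its learned triplet. If the regularities were
# learned, RT(position 1) > RT(position 2) > RT(position 3), because
# later items are predictable from the preceding ones.

triplets = [("A", "B", "C"), ("D", "E", "F")]     # hypothetical structure
position_of = {item: i for t in triplets for i, item in enumerate(t)}

# (target item, reaction time in ms) -- made-up data for illustration
responses = [("A", 512), ("B", 463), ("C", 431),
             ("D", 498), ("E", 455), ("F", 440)]

by_pos = {0: [], 1: [], 2: []}
for item, rt in responses:
    by_pos[position_of[item]].append(rt)

for pos in sorted(by_pos):
    print(f"triplet position {pos + 1}: mean RT = {statistics.mean(by_pos[pos]):.0f} ms")
```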
13:50:49 The question was: isn't that contrary to 13:50:51 the serial reaction time task? 13:50:54 Okay. 13:50:56 Yeah, the contrast between the two. Oh, the grammatical sequences versus the ones that aren't, since you have these results. 13:51:07 Yeah, I mean, I think that it is a different type of learning, in that 13:51:15 basically one is motor pattern learning, and the other is thought to be a perceptual associative learning. 13:51:25 But there are other studies where you basically find that when you show items in the wrong order, you also see effects. 13:51:33 And so it's a little bit more complicated what might be being learned in the visual system than in those serial reaction time tasks. 13:51:50 [Inaudible audience exchange about the second 13:52:12 versus the first 13:52:22 item.] Exactly, it's the first. 13:52:31 Yeah, but I think, I mean, the thing is that if you have the second item in a task where there's only one item, it also shows this advantage. And this is one thing I think we'll have more time to discuss; I kind 13:52:49 of want to get through some of this. You know, for me as a researcher, 13:52:52 this is where there's a conflict: here's a statistical learning paradigm that people use in the literature to talk about a type of learning that's maybe a little bit higher in the brain than some other types of learning, about how we associate things, and now 13:53:11 I'm seeing a violation, where what's being learned doesn't reflect the model of the task. 13:53:23 And as I go through some of these other slides, I'll kind of repeat this theme: there's a concept that we have of what someone is supposed to learn from a task, and then there's what they actually learn, and they're not necessarily going to be the 13:53:37 same. And I think that's an issue with all of our paradigms. 13:53:45 So this is another series of work that I did with Maddie Chalk and Peggy Seriès, where we basically have a probability distribution over a set of motion stimuli that we're going to show. So basically here, 13:54:06 actually, I do have an example of what it looks like. So you have a bunch of dots that are moving, and you have to do two things. 13:54:10 One, you have to estimate what direction they're moving; these are at really low contrast, 13:54:12 so you can barely see them, but you have to estimate the direction. 13:54:17 Are they going to continue playing? Huh? Let's go back. 13:54:22 So you first have the estimation task, and then you have to report: 13:54:29 do you actually see anything or not? This will become important. What we're doing is that we had two angles that were more frequent than the others, and we're basically asking: 13:54:42 how do people learn this probability distribution of the motions that were presented? 13:54:46 We look at the results. So we made the task difficult: the dots became harder and harder to see as you did this, 13:54:56 because we ran a staircase on the contrast of the dots. But what we found afterwards (and this is folded, so that the two peaks of the distribution sit on top of each other) is that people are better at detecting the frequent angle than the others. So this would be 13:55:08 kind of a standard, maybe more perceptual-learning paradigm, but you're learning based upon the statistics of the stimuli. The thing that got interesting, though, was that it's an estimation task, 13:55:22 and for the two angles that were near the frequent one, people would estimate them as being closer to the frequent angle than the others.
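For the record, here is a minimal sketch of the kind of adaptive staircase that keeps the dots near threshold. I'm assuming a standard 2-down/1-up rule, which converges near roughly 71% correct; the study's actual rule and step sizes may have differed:

```python
# Minimal 2-down/1-up staircase on dot contrast (assumed rule; the
# study's exact parameters may differ). Two correct answers in a row
# lower the contrast (harder); one error raises it (easier), so the
# contrast hovers around the observer's threshold.

def run_staircase(respond, contrast=0.5, step=0.05, n_trials=200):
    streak = 0
    history = []
    for _ in range(n_trials):
        if respond(contrast):        # True if direction judged correctly
            streak += 1
            if streak == 2:
                contrast = max(contrast - step, 0.01)
                streak = 0
        else:
            contrast = min(contrast + step, 1.0)
            streak = 0
        history.append(contrast)
    return history

# Example with a simulated observer whose accuracy rises with contrast:
import random
random.seed(0)
levels = run_staircase(lambda c: random.random() < min(0.5 + 2.0 * c, 1.0))
print(f"estimated threshold contrast ~ {sum(levels[-50:]) / 50:.2f}")
```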
13:55:32 The other thing that was interesting is that sometimes we presented nothing, and they still had to do the estimation. And what we found is evidence that they seem to be perhaps hallucinating: when they said they detected something, 13:55:54 they detected it in a distribution that is close to the frequent angles. 13:56:00 But if they said they didn't see it, then their response pattern wasn't showing that at all. 13:56:05 And so here again, there are multiple behavioral components that we're going to estimate from this statistical learning task, 13:56:16 and some of them seem more like perceptual learning, which I'll get into in a moment; others might be closer to what we think of as statistical learning. I'm going to skip this one. 13:56:27 But I guess one question to think about at this point is: with these statistical learning tasks, is the learning statistical learning as we think of it in terms of a model or brain mechanisms? And I think that the answer is probably mixed, in my 13:56:45 view. A lot of my research, and what ChatGPT knows me for, is perceptual learning. 13:56:52 And so here I like a definition by Eleanor Gibson: any relatively permanent and consistent change in the perception of a stimulus array, following practice or experience with this array. And depending on your definition of statistical learning, this could be a subset of statistical learning. But 13:57:09 a lot of people have looked at this as basically: how is the visual system refined to be able to see things in general, 13:57:22 not necessarily the particular statistics of the stimulus? Some of the first studies I did were on subliminal perceptual learning. 13:57:31 So basically, what I do is I test people, where you would see these dots that are moving in different directions, and you choose an arrow indicating which direction it was. 13:57:37 Then the training was that you're supposed to do this 13:57:41 RSVP task. In this case, each of these things was shown for about half a second, and they were supposed to report at the end of the sequence the L and the T, 13:58:00 so the light letters. 13:58:04 Paired with these were subliminally coherent motion displays, so that if you looked at these, they all seemed to be moving randomly; 13:58:11 you couldn't detect that at one point the direction changed. But with the targets there was always one direction; in this case 13:58:15 I'm showing it as up, but every participant was trained with a different direction. 13:58:20 And with the distractors we had all the other directions, so that we now have four directions of motion, 13:58:25 each presented an equal number of times, 13:58:26 one paired with the targets of another task. And then we test people again, and what we found was that for the target-paired direction, 13:58:38 there's an improvement when it is tested suprathreshold. 13:58:43 So we didn't see it at 5% coherence, because people can't do the task, 13:58:46 but at 10% coherence we saw an advantage. 13:58:51 And so we call this task-irrelevant perceptual learning.
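As a concrete illustration of the stimulus, here is a sketch of how a coherent-motion dot display is typically generated: at, say, 10% coherence, 10% of the dots step in the signal direction on each frame while the rest move randomly. The parameters here are generic, not the study's:

```python
import math
import random

# Generic coherent-motion dot display (illustrative parameters, not the
# study's): on each frame, a `coherence` fraction of dots steps in the
# signal direction; the remaining dots step in random directions.

def update_dots(dots, signal_deg, coherence, speed=2.0):
    out = []
    for x, y in dots:
        if random.random() < coherence:
            theta = math.radians(signal_deg)         # signal dot
        else:
            theta = random.uniform(0, 2 * math.pi)   # noise dot
        out.append((x + speed * math.cos(theta), y + speed * math.sin(theta)))
    return out

random.seed(0)
dots = [(random.uniform(0, 400), random.uniform(0, 400)) for _ in range(200)]
dots = update_dots(dots, signal_deg=90, coherence=0.10)  # 10% coherence, upward
```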
We thought this was reinforcement learning. So, just to go one step further with this reinforcement-learning idea: 13:58:59 this is with Dongho Kim, who was a grad student at the time. 13:59:05 I had spent a bunch of time working with monkeys, and I learned that you food- and water-deprive them and get them to do the task by giving drops of water through a tube. And so after this, I did this with students. 13:59:20 They basically, most of the time, just came into the lab before breakfast, 13:59:23 and we tried a bunch of substances; eventually we just went with water, so they basically got drops of water paired with the targets of a task. 13:59:34 And to make it even more complicated, we used the technique called continuous flash suppression: we had a haploscope with basically two monitors, so that each eye was getting input from a different display. 13:59:47 And the displays looked like this. To one eye we basically showed this bright, flickering Mondrian pattern. To the other eye, 13:59:55 if you look really carefully in the center, you might be able to see the stimulus: a low-contrast stimulus that sometimes has an orientation shown in it. 14:00:06 And it's hard to see in general, especially on this display, 14:00:12 but if you have this pattern to your left eye and this pattern to your right eye, the Mondrian is all you're going to see. 14:00:19 And so we did this for, depending on the experiment, 10 days or 20 days. One orientation was paired with a drop of water, another one was not. And basically what we found was that for what we call the trained orientation (these were Gabor patches, which are hard to see even when they're bright), 14:00:39 there was improvement on the post-test, where they're supposed to actually do an identification task, and not so much for the control orientation. And this is also specific to the eye 14:00:56 of training, so it seemed to be fairly low-level. But basically, this is something that I generally think of as maybe not statistical learning, more of a reinforcement learning. 14:01:09 And this kind of gave rise to this model where, if you have an irrelevant feature paired 14:01:16 with a task target, recognizing the task target triggers reinforcement signals that help with learning the task target, 14:01:24 but they also spill over to the irrelevant feature that you don't know 14:01:27 is there. 14:01:31 Is perceptual learning statistical learning? I definitely see perceptual-learning types of results from statistical learning paradigms. 14:01:42 It definitely is the case that you're getting better at discriminating things that are more regular in the environment, especially if they're paired with behaviorally relevant events. 14:01:55 Just to make it a little bit more interesting: 14:02:00 we also did the same paradigm with memory. So here you're shown a series of pictures; 14:02:05 your task is just to report at the end of the trial whether an arrow is pointing to the right or the left. And then we also present an image and ask whether you saw that image in the sequence. And what we find is that, similar to the target pairing for the motion direction, we find a 14:02:26 benefit for the picture that's paired with the arrow presentation, 14:02:31 but only the one that the arrow is pointing towards, not the other one. 14:02:37 So the same type of task-irrelevant perceptual learning paradigm seems to affect memory. Is memory statistical learning? I mean, 14:02:48 it seems closer to a lot of how we think about statistical learning.
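The reward-spillover model described a moment ago can be written as a toy update rule: a reinforcement signal fires when the task target is recognized, and every feature active at that moment, relevant or not, gets strengthened. This is a schematic sketch of the idea, not the published model:

```python
# Toy sketch of reinforcement-gated learning with spillover: when the
# task target is detected, a global reinforcement signal strengthens
# every feature that is active at that moment -- including the paired
# irrelevant feature the observer doesn't know is there.

weights = {"task_target": 0.1, "paired_motion": 0.1, "unpaired_motion": 0.1}
eta = 0.05

def trial(active_features, target_present):
    reinforcement = 1.0 if target_present else 0.0  # gated by target recognition
    for f in active_features:
        weights[f] += eta * reinforcement           # spills over to all active features

for _ in range(100):
    trial({"task_target", "paired_motion"}, target_present=True)   # target trials
    trial({"unpaired_motion"}, target_present=False)               # distractor trials

print(weights)
# paired_motion ends up strengthened alongside the target; unpaired_motion doesn't.
```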
14:02:54 Going back: a lot of perceptual learning tasks 14:02:58 look at specificity. So suppose I have you do a task where you're supposed to indicate whether the item that's angled 45 degrees to the right is white or black, 14:03:15 shown for tenths of a second, at a set of different locations 14:03:21 across the display. So basically, what we did was... 14:03:28 yeah, sorry, I'm trying to give a sense of the paradigms. So yeah, what's your question? 14:03:38 No, this? Say it again? Okay. So basically, the task is: can you detect an item that is tilted to the right, 14:03:49 so 45 degrees, and you just have to report whether it's white or black? 14:03:55 So here it's white. And so you do a lot of trials of this. 14:04:00 But the key thing with this task is that there's a specific set of locations where the target could appear. So what I'm showing is that all the points shown in whatever this color is (magenta, maybe) are where the target could appear, and the cyan points are 14:04:21 where the target doesn't appear, but the distractors do appear. 14:04:25 And so we're basically trying to understand a few things. 14:04:29 One is, if we look at locations, do we find that there's a difference between performance on the task when you're doing it at the trained location versus the post-test, where we could also show targets at the untrained locations? And we find that there's specificity to 14:04:51 the exact location 14:04:53 where this thing is presented: even when you show the same stimulus one degree of visual angle away, people will not perform as well. 14:05:01 We also could test people with a different target, 14:05:07 so the target, instead of being tilted to the right, is tilted to the left. 14:05:12 We could also test the background orientations: the distribution of items in the background could either be more vertically oriented or more horizontally oriented. And the key thing is that for basically every single thing that we look at in terms of specificity, we're finding, here 14:05:31 in thresholds (and lower is better), that the trained target with the trained distractors is better than the untrained target with the trained distractors, which is better than the trained target with the untrained distractors. So you kind of see that you almost get an independent learning 14:05:48 boost for whether you have the trained target, the trained location, and the trained distractors. 14:05:53 And so... 14:05:58 that's the thing. This is a very classic perceptual learning paradigm. 14:06:02 It seems very statistical-learning-ish, because you're really being shaped by statistics of the stimulus that people are not aware of at all, 14:06:13 but we always talk about this in my field as perceptual learning of some low-level thing, not statistical learning. And so, yeah. 14:06:29 Yeah, because it's not associative. But when you think about the system being shaped by the statistics of the input, that version matches super well. 14:06:45 How is it associative? 14:06:54 Well, you actually are better at the untrained orientation at the trained location, 14:07:01 and you're better at the untrained orientation with the trained background 14:07:05 statistics. So it doesn't seem to be associative. 14:07:11 One idea would be that you're shaping the representation. It could be attention:
14:07:26 maybe people are better at attending to those locations. It could be feedforward, where basically neurons in those locations are slightly better at responding to these types of oriented stimuli. 14:07:41 But I think of it more as a representation change, as opposed to associative learning. And actually, that's one of the key things that 14:07:53 I'm kind of concluding from this: perceptual learning often looks 14:08:01 more like representational learning, while statistical learning is often called associative. 14:08:05 But if you think of ChatGPT's definition of statistical learning, it includes both pieces, and I think that, computationally, they both match some intuition of learning statistics. I'll go through this real quick, just because it's fun: why is 14:08:22 perceptual learning so specific? It's typically because we're training with these very simple tasks. 14:08:26 And so I created these video-game trainings (I could talk about this later), 14:08:34 and there are a bunch of things that we did to try to make it so that learning would generalize more. 14:08:40 So their task is basically to blow up the Gabors; the Gabors become harder to see as you perform the task, and we have different orientations, different locations, different spatial frequencies, a lot of things to put in variability. I wanted to basically test people on something completely 14:08:58 different from the lab tasks, so I got the baseball team to do this. 14:09:03 They came into my lab and did 20 or 30 twenty-five-minute sessions, in between the baseball seasons of, I think it was, 2012 and 2013. And basically, after training, they were able to read lower lines on eye charts, but not the untrained players, who were the 14:09:24 pitchers. Their reading speed actually improved. They had fewer strikeouts; 14:09:29 this is comparing the players I trained versus all the other players in the league that had played both those years. 14:09:39 And then you look at sabermetrics, so baseball statistics, which is a whole different type of statistics: you could calculate how many runs people should be able to create based upon their hitting performance, 14:09:51 and we could see that this improved in the Riverside team compared to the Big West League. 14:09:55 You can then use the Pythagorean theorem of baseball and compute 14:09:57 how many games you should have won based upon that, and you could take the difference between the Big West League improvement 14:10:06 versus the Riverside improvement, and it was like an 8% improvement in games. 14:10:10 So. 14:10:14 I'm not sure whether this is statistical learning. If it is, it's very much a different kind. And a lot of perceptual learning is looking at 14:10:27 how we train the visual system to do better, not necessarily on what you're trained on. 14:10:32 So transfer seems less statistical-learning-like, 14:10:37 and I'll be happy to talk about any of these things later.
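For reference, the "Pythagorean theorem of baseball" mentioned above is Bill James's expectation formula: a team's predicted winning percentage is runs scored squared over the sum of runs scored squared and runs allowed squared. A quick sketch with made-up numbers:

```python
# Bill James's Pythagorean expectation: win% ~ RS^2 / (RS^2 + RA^2).
# The run totals below are hypothetical, not the study's data.

def pythagorean_win_pct(runs_scored, runs_allowed, exponent=2.0):
    return runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)

# e.g., training-related extra runs created shift the expected wins:
games = 56
before = pythagorean_win_pct(280, 300)  # hypothetical pre-training season
after = pythagorean_win_pct(321, 300)   # hypothetical season with extra runs created
print(f"expected wins: {before * games:.1f} -> {after * games:.1f}")
```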
14:10:42 This is how I got to start this Brain Game Center that I mentioned, and so we've been creating apps for lots of different domains of how you get transfer. One of the areas is, how do you train working 14:10:57 memory? And I won't go too much into this, but now we're basically saying: here's a circuit in the brain that we think helps us be able to keep multiple things in memory at once. 14:11:09 Can we increase how much stuff that system is able to keep in mind? 14:11:25 I've created a bunch of games that I could show you guys later that, you know, supposedly make this fun. 14:11:25 This is just, you know: if we use what I call the lame, non-gamified version of the task, versus the full immersive game where you're navigating the character, versus one where we just show a picture show, you actually see the most learning for the full game; the picture show actually hurts 14:11:42 your performance compared to the non-gamified version. 14:11:44 But for this talk, the point is: working memory training, 14:11:49 it's kind of like perceptual learning, in that we're trying to improve how a system functions. Is it statistical learning? 14:11:55 It seems less so. And so, going back to these definitions, I still find it convenient to go with Google over ChatGPT. 14:12:13 And for my research, being able to distinguish a type of learning where we're looking at associations between things versus a learning approach where we're trying to change how circuits function, 14:12:29 I find that useful for how I do my work. But if I was coming up with a broad definition of statistical learning, that's probably too narrow. 14:12:37 And so, going back to your charge, which we'll get into soon, I think the goals really are to go through the same exercise that I did in trying to put together this talk. 14:12:48 It's like: why would you want to have a definition of statistical learning that drives research, versus, 14:12:57 if you're trying to define something for a white paper 14:13:01 from this conference, how would we say it so that it includes everybody? 14:13:05 What are some tasks that are not statistical learning? 14:13:11 If we can't come up with any, then we don't have a useful definition. 14:13:16 And then the same thing in terms of neural mechanisms and computational mechanisms: there must be some computational things that are learning that are not statistical learning; 14:13:32 otherwise, it is everything. Same thing with the brain. And with that, we're going to have a little bit of a panel discussion, so that some other people can say their views as well. 14:13:40 So people could maybe stand up for a couple of minutes as we rearrange the room, so we could sit down.