Ep. 30 - Awakening from the Meaning Crisis - Relevance Realization Meets Dynamical Systems Theory

What follows here is a transcription of the above video by John Vervaeke (Sectioning added by MeaningCrisis.co)


Welcome back to Awakening from the Meaning Crisis. This is episode 30. So last time we decided to dig into the central issue of realising what's relevant. And we are following a methodological principle of not using or presupposing relevance, the capacity to realize relevance, in any purported cognitive process or brain process that we're going to use to try and explain that ability. I gave you a series of arguments that we can't use representations to explain relevance because representations crucially presuppose it. And then we took a look at some very interesting empirical evidence that really comports very well with that: the evidence supporting FINSTs and your ability to do Enactive Demonstrative Reference, this Salience Tagging, just making 'stand out' the hereness and the nowness of something.

We then took a look at, and drew a few conclusions about, the 'meaning' we are talking about in meaning in life - that 'connectedness'. That connectedness is ultimately not generated by representations. Again, I'm going to keep saying this: I'm not denying that representations and beliefs at that level can alter and transform what we find relevant. We are talking about the explanation of the phenomenon, not how it is causally affected by other aspects of cognition. We then took a look at the syntactic level, the computational level, and saw arguments that neither inference nor rules can be used to explain the generation of relevance, precisely because they also presuppose it. We looked at trying to deal with relevance in terms of some sort of internal module dedicated to it, and saw that that won't work; it's homuncular, and Relevance Realization needs to be scale invariant or at least multi-scalar. It has to be happening simultaneously in a local and global way, and that points towards something else we noted about any theory that has to account for the self-organization of relevance that is demonstrated in the phenomenon of insight.

So we then saw that a theory has to use explanatory ideas that point to processes that are, at least originally, internal to the Relevance Realization system. I tried to get clear about how not to misunderstand that. What I meant was that the goals that govern relevance realization initially have to be constitutive goals. They cannot be goals built upon representing the environment in a particular way; instead they have to be the constitutive goals that are part of an autopoietic system, a system that is self-organized because it has the goal of preserving and protecting and promoting its own self-organization. That draws deep connections between relevance realization and life, and between relevance realization and being an autopoietic thing. And of course, as I've already mentioned, relevance realization processes have to be multi-scalar, they have to be self-organizing, and they have to be capable of developmental self-transcendence, self-correction, insight, etc.

We noted along the way how this links up with an argument that the Propositional depends on the Procedural, which in turn depends on the Perspectival, which in turn depends on, is grounded in, the Participatory.

Distinguishing Between Theories Of Relevance And Relevance Realisation.

But we hit a roadblock, which I want to now zero in on. I had been treating them as identical, but I'm going to now make a very important theoretical distinction between a theory of relevance and a theory of Relevance Realization. Because what I want to argue is that there cannot be a theory of relevance, at least a scientific theory of relevance, and since we are playing in the arena of science, of scientific explanations, I'm not going to keep repeating that qualification: when I say there could not be a theory of relevance, I mean a scientific theory of relevance. Why not? Well, this has to do with an issue that was originally brought up by Chiappe and Kukla in a commentary in Behavioral and Brain Sciences. Dan Chiappe and I have published work together; we are collaborating right now on a work on telepresence. I recommend you take a look at the work of Dan Chiappe. But they made a point, and I think this point is very well taken; it's a point that goes back to JS Mill, but you can also see an updated version of it in the work of the important philosopher, and philosopher of science, Willard Quine (writes Quine on the board).

So this has to do with how science works. Now of course, the philosophy of science tackles all kinds of controversial claims about 'what is science?' and how science works, but I take it that one thing that is agreed upon is that science works through Inductive Generalisations (writes Inductive Generalisation on the board), or at least tries to generate inductive generalisations. What do I mean by that? In science you study a bunch of things here (draws a little container with vertical lines) and then you make predictions and claims that that will be the case for all of that type of thing. So here I study a bunch of... (back to the diagram) here is a hunk of gold, here is a hunk of gold, here is a hunk of gold... I come up with a set of features or properties... does that generalise to all the instances of gold? And if it does, then I come up with an Inductive Generalisation. I want to get the broadest possible inductive generalizations that I can, because that's how science works. It's trying to give us a powerful way of reliably predicting the world. It's doing other things too; very importantly it's also trying to give us a way of explaining the world — I am not claiming this is.../ I've tried to make it clear, this is not meant to be an exhaustive account of science, it's meant to point to a central practice within science, but a constituent practice nevertheless. If you can't generate inductive generalizations in your purported endeavour, then you don't have a science. This is why pseudo-sciences like astrology fail: precisely because they cannot do inductive generalizations.

You say "OK, great!" So what JS Mill pointed out is that that means we need what's called Systematic Import (writes systematic import on the board). And this is so relevant to what we were talking about last time; even using the word 'import' is really relevant. What that means is that science has to form categories, because that's what I'm trying to do, right? I'm gathering a bunch of things and saying they belong together, they are the same type of thing, they're all instances of gold, they all belong to the category of gold. Science has to form categories that support powerful - meaning as broad as possible - inductive generalizations (draws another container diagram with more vertical stripes and an arrow coming out to 'support powerful inductive generalizations'). To be able to do that is to have systematic import. Now what do I need? Think about reverse engineering this! In order to have reliable and broad inductive generalizations — that's what powerful means — what do I need to be the case here? Well, I need there to be important properties for that category. One thing I need is for the category members to importantly be Homogenous (writes homogenous on the board). There's a sense in which all the members of the category have to share properties (draws a line horizontally through all the vertical lines in the diagram, indicating that they're all sharing properties), right? And it's because they share properties that I can make the inductive generalisation that other instances will also have those important properties (draws two little rectangular boxes with the same horizontal lines, indicating that they are identical instances of the category). That's exactly what I need, because if the members are heterogeneous there's no set of properties through which I can extend the generalisation. They have to be homogenous.

Back To Essence - Gold, White Things & Horses

Now this gets us towards something very important. This gets us towards an idea from Quine, because there's a lot of discussion about this word right now in the culture (writes essence on the board). And I think the discussion is too polarised, and this has to do, again, with a point made by Wittgenstein, but I want to put Wittgenstein and Quine together on this (writes both Wittgenstein and Quine on the board and points at Quine) — Quine being a very important modern philosopher. Because, and this is what some of the critics of essence say, if you remember, according to Aristotle — and we talked about this when we talked about Aristotle — an essence is a set of necessary and sufficient conditions. And what Wittgenstein pointed out, and remember we did this with the example of a game, is that many of our categories don't have essences. There is no set of necessary and sufficient conditions that will pick out all and only games; there's no set of necessary and sufficient conditions that will pick out all and only tables. So many of our categories don't have essences. That was Wittgenstein's point! Now, I don't think you could ever pin Wittgenstein to the claim that no categories have essences, and that's what some people, I think, have concluded: that no categories have essences, that everything is just nominal description! But that's not right because, of course, non-controversially, triangles for example have essences — that's why Aristotle thought many things did — if it has three straight sides, three angles, and it's enclosed, then it's a triangle! That's an essence of being a triangle. Now, that's mathematical. Here's what Quine argued, at least on an interpretation of Quine that I think is philosophically defensible: things like triangles have deductive essences; these are the essences that we can deduce.
But what science discovers are inductive generalizations, and if they’re powerful enough, science gives us the essence of something. The essence of gold is the set of properties that will apply to all instances of gold; all and only instances of gold.

That homogenous set that can generalise is what an inductive essence is. Now, what that means is we shouldn't.../ a couple of ways of talking in the media, or the general culture, should not be so uncritically accepted! Essentialism isn't bad for things that have essences; why would it be!? Essentialism is the mistake of treating a category as if it has an essence when it doesn't. It is a mistake for things like games and tables, precisely because they don't have an essence. It is not a mistake for things like gold, because gold has an essence, inductively; or triangles, because triangles have a deductive essence. It is too simplistic to say everything has an essence, or that nothing does; it cuts both ways. It cuts both ways! There are many things that don't have essences; that's what's right about the critique of essentialism. But it is wrong to conclude from Wittgenstein's point that there are no essences, because it is not a deductive argument that concludes that there are no essences; it only points out that many categories don't have essences. So that means it is possible to do a science when we do what? When we categorize things in such a way that we get this (indicates 'support powerful inductive generalizations' on the board), because when we get this, then we have the essential properties of the thing.

Now, the reverse is also the case; that's what I mean by it cuts both ways. We can't have a scientific explanation of everything. If the category is not homogenous, if it does not support powerful inductive generalizations, if it does not have an inductive essence, we cannot have a science about it. It doesn't mean those things don't exist; it means we cannot scientifically investigate them. So, for example, I can't have a science of white things! Now, are there things that are white? Of course there are. This whiteboard is white. This pen, at least part of it, is white. This piece of paper is white. To say there are white things in this room is to say something true. Notice that: there are truths that are stateable, but the category that I am using — this is JS Mill's example, white things — does not support any inductive generalizations other than that the thing is white. Now don't give me "well, we can have a theory about light and lightness!" We are not talking about a theory about light, we are talking about a theory about white things. Knowing that this (piece of paper) is white, what does it tell me about this pen? So I study this white thing, OK? What do I learn about it other than... oh, nothing!! Other than that it's white! Is there any other important shared... (indicating the whiteboard and the piece of paper) well, no! They're both flat, but this is vertical and this is horizontal...! You see? It doesn't generalise! It doesn't generalise. So it is correct to say that there are many categories that we form for which we could not generate a scientific theory or explanation, precisely because those categories are not homogenous; they don't have an essence.
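The contrast between a category like gold and a category like 'white things' can be made concrete in a small sketch. (This is an editorial toy illustration, not anything from the lecture, and all the property names are hypothetical.) An inductive generalisation can only be as strong as the set of properties shared by every observed member of the category, i.e. the intersection of their property sets:

```python
# Toy illustration: category members represented as sets of properties.
# An inductive generalisation is only as strong as the properties shared
# by ALL observed members -- their intersection.

def shared_properties(members):
    """Return the properties common to every observed member of a category."""
    sets = [set(m) for m in members]
    common = sets[0]
    for s in sets[1:]:
        common &= s
    return common

# A homogeneous category: three observed hunks of gold (hypothetical properties).
gold_samples = [
    {"metallic", "dense", "atomic_number_79", "conducts", "yellow"},
    {"metallic", "dense", "atomic_number_79", "conducts", "yellow"},
    {"metallic", "dense", "atomic_number_79", "conducts", "yellow"},
]

# A heterogeneous category: "white things" share only their whiteness.
white_things = [
    {"white", "flat", "vertical", "rigid"},       # whiteboard
    {"white", "flat", "horizontal", "flexible"},  # sheet of paper
    {"white", "cylindrical", "plastic"},          # pen cap
]

print(shared_properties(gold_samples))  # a rich set -> powerful generalisation
print(shared_properties(white_things))  # only {'white'} -> nothing generalises
```

The numbers of members and the properties are arbitrary; the structural point is that studying one more white thing teaches you nothing about the next one, whereas studying one more gold sample does.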

So notice what that doesn't mean. The fact that I cannot have a scientific theory of it does not mean that white things are made out of ghosts or dead elves or ectoplasmic goo! It licenses no metaphysical weirdness. It just says that the category functions in the sense that I can make true statements about its membership, but it does not function in so far as it supports, through systematic import, powerful inductive generalizations. What else do I need? Well, let's compare the white things, as JS Mill did, to horses. Now, do you see, we depend on the fact that horses seem to have an essence. Now, whether or not they ultimately do, at some sort of species level, is something really argued about in biology — and I'm not trying to be negligent of that, but I'm also not trying to resolve it — what did Mill mean by his example of horses? Well, what he meant was: if I learn a lot about this horse, it will generalise to other horses. It will generalise. So horses are, in really important ways, homogenous. That's why we can have veterinary medicine and things like that: I can learn in terms of horses that have already been studied, and it will generalise well to horses that have not themselves yet been studied. That's fine. What else? ...and I don't mean this to be a pun! I need the category membership to be stable. That doesn't mean 'horses in stables'. What's in the category, the kind of things that are in the category, should be stable. It shouldn't be constantly shifting or changing, because — and this was a point made a long time ago by Plato — if what's in here is constantly shifting (refers back to container diagram) — now, I don't mean particular members, I mean the kind of thing that is in here — then of course I can't do inductive generalisation, because I will get into equivocation. I will get into equivocation.

The word 'gravity' originally had to do with drawing down into the grave, as we mentioned; it had to do with a sort of important seriousness. But now we use that term to describe a physical mode of attraction and interaction. And if I don't notice the change in what goes into my categorisations, I am not making a good inductive generalisation, I am engaged in equivocation. And as I've tried to show you, equivocation is a way in which we make invalid, often ridiculous, arguments. So the category needs to be stable. We also need the properties of the objects to, in some sense, be intrinsic, or at least internal, inherent. This comes from an argument by John Searle. Many objects have properties that are not intrinsic to the object but come from the object's relationship to us; these are attributed properties. So a clear example, a sort of non-controversial example, is something being money. Now again: 'is money real?' Well, a lot of my life is bent around money, so in that sense it seems to be real! But does anything intrinsically possess the property of being money? If I take out some coin or piece of paper, is it intrinsically money? No, it's only money because we all attribute it as being money; we all treat it as money, and that's what makes it money. If we all decided to not treat it as money, it would cease to exist as money! We can't do that with gold. Now, notice what I'm saying: we could all decide that gold is no longer valuable, no longer analogous to money, but we can't all decide that gold no longer possesses its mass or atomic number! We can't do that!

Now, the thing you have to remember is that many things that we think are intrinsic are actually attributed. (Holds up a plastic water bottle) This being a bottle is attributed, because what it means to call it a bottle is the way it is relating to me and my usage of it! If there had never been human beings and this had popped into existence because of some quantum event near a black hole or something, it isn't a bottle! It is an object with a particular mass, a particular structure, but it's not a bottle, because being a bottle is something that it gets in its relationship to me. Now again, did I just show you that everything is an illusion? No! Again, the fact that there are many things that are genuinely relational, genuinely attributed, doesn't mean that I've shown you that everything's false! I've just shown you that you can't do science unless the members of your category are homogenous, stable, and intrinsic, or at least inherent (has listed these on the board), because that's what you need to have powerful inductive generalisation.

OK, let's see something that fails all of these tests: things that happened on Tuesday, OK? Events that happened on Tuesday - Tuesday events! Are there events that happened on Tuesday? Yeah! And there are even events that can happen on multiple Tuesdays! We categorize events in terms of the days. Now, are all the events on a Tuesday homogenous? No! Are all the events on many different Tuesdays homogenous? No! They are very, very different and widely varying. Is it stable - are the things that happen on Tuesday the same every Tuesday? No, that's Groundhog Day, or some kind of horrible Nietzschean hell! And what about 'Tuesdayness', being Tuesday - is that inherent? I mean, is there Tuesday in the room when it's Tuesday? It can't be, because there was a time when we didn't even have calendars. But notice how hard it is to realize that: there's no 'Tuesdayness'! So can I make true statements? "Last Tuesday I went to a movie" - is it true? Yes! Can I do a science of events that happened on Tuesday? No, I can't, because the category doesn't satisfy these criteria (indicates Homogenous, stable and inherent on the board). Does that mean that Tuesday is made out of ectoplasmic goo, that Tuesday events actually take place in a different dimension? No, none of that! None of that! We have to be careful — and this is what we learned from Wittgenstein — we have to be very careful about the grammar of our thoughts, how we are regulating our cognition.

Now, what I want to try and show you is that relevance does not have Systematic Import. Relevant events are like Tuesday events. Here, let me show you: the things that I find relevant - other than me finding them relevant, what do they share in common? I might find this pen relevant, I might find my knee relevant, I might find this air relevant, I might find the fact that it's a particular day in May relevant. Do you see what I'm showing you? The class of things that we find relevant is not homogenous; other than that we find them relevant, there is nothing that they share - it's exactly like the class of white things! What about being stable? So when I find something relevant, do I always find it relevant? "This is relevant to me now..." (holds up a pen) "...will it forever be relevant to me? I will carry it, oh, it is relevant...!" No! Things are not stably relevant - relevant one minute, irrelevant the next! You may say, "well, there are things that are always relevant to me!" Always!? I don't know; it's very hard to find them! Maybe, maybe oxygen? But that's only relevant to me if I want to keep living! And some people commit suicide this way: they suffocate themselves to death, because that was more important to them than oxygen! It's not stable! Is relevance - and here's where I think we'll get into some difficulties, I suppose, with some people - internal or intrinsic to the object? If there had never been human beings or sentient beings, could this have relevance (referring to plastic bottle again)? It doesn't seem that that's at all a plausible intuition! Relevance always seems to be relevance to someone or something! And that, I think, is going to be bound up with the idea that relevance, ultimately, has to be relevance to an autopoietic thing.
Only things that have needs, only things that are self-organized so that they have the constitutive goal of preserving their self-organization - that's what it is to need; I need food because I am self-organized to preserve my own self-organization; food literally "matters" to me (physically indicates that food becomes the matter of his body) - it's hard to see how things could be relevant unless they were in relationship to an autopoietic thing! Relevance is not something for which we can have a scientific theory. I want you to notice what has come along the way: relevance is not intrinsic to anything. There can be no essence to relevance; nothing is essentially relevant. That's the whole point of talking about the problem of essentialism. And relevance is not stable; it's constantly changing!

A Helpful Analogy Of Relevance, Backed Up By Darwin

OK, so what do we do? Well, first of all we add to our set of criteria for a good theory: our theory of relevance realization can't be a theory of relevance detection. I've given you a sustained argument for that! This is not how relevance realization works. It's not detection; it's not that this (holds up a pen) has relevance and I detect its relevance! And you might say, "well, maybe relevance realization is just projective!" I'm going to reply to that too; I think that's also inadequate. In order to see how it's inadequate, and in order to get out of the bind we seem to be getting into, I want to open up the distinction between a theory of relevance and a theory of Relevance Realization with an analogy. It's going to turn out to be a very, I hope, helpful analogy. And this will also, I think, help us to see why relevance is not something we merely project onto the world. This is why I have a sustained criticism against both the empiricists — we just detect it — and the Romantics — we just project it. So let's get into that. What's the analogy that will help develop an argument to show why we neither merely detect it nor merely project it, and help us get out of the bind that we can't have a theory of relevance?

OK, notice something very important; I think this is one of the central insights of Darwin, and we talked about Darwin when we talked about Aristotle and dynamical systems — so if you need to, please go back and look again at video six; I don't want to repeat all those arguments right now, as we built them so that we can use them now. See, before Darwin's time, the people studying the natural world were often clergymen. Darwin himself was thinking about going into the clergy, and that's because people thought that if they studied the natural world they could understand the essence of how things were designed, because if we could get at the essence of how things were designed, how things were fitted to their environment, then of course that would give us some deep insight into the mind of God. That's why clergymen were collecting species and doing all this. But I think one of the insights — and it's not given enough attention in the analysis of the brilliance of Darwin's theory — is to realize that things don't have an essential design! There is no essential design. So consider the notion of evolutionary fitness... Now, there's a problem! There's a technical definition of 'fitness', which means the capacity to survive long enough to be capable of reproduction that will allow the gene pool or species — all of these are kind of controversial terms — to propagate and exist. So if we want to keep that technical definition of 'fitness', then I will instead talk about 'fittedness', and what I mean by 'fittedness' is: what is it about the organism that makes it fit, what is it about the organism that allows it to survive long enough to reproduce? And what I want to argue is that there is no essential design to fittedness.

Some things are fitted precisely because they are big, some because they are small, some because they are hard, some because they are soft, some because they are long-lived, some because they are short-lived, some because they proliferate greatly, others because they take care of a few young. Some are fast, some are slow, some are single-celled, some are multicellular... like, nothing, nothing is shared! And the answer for that, of course, is deep and profound: because the environment is so complex and differentiated and dynamically changing, the niches in which you can fit into the environment in order to promote your survival — autopoietic — are varied and changing. See, this is Darwin's insight: there is no essence to design; there is no essence to fittedness. If you try to come up with a theory of how organisms have their "design" — I'm using this in quotation marks — by trying to determine or derive it from the essence of design, you are doomed, because it doesn't exist. But what Darwin realised is that he didn't need such a theory! He needed a theory about how what is relevant, in this biological sense — a theory about how an organism is fitted — is constantly being designed and redesigned by a dynamic process. See, fittedness is always redefining itself, reconstituting itself. It is something that is constantly within the process of self-organization, because there is no essence, there is no final design to fittedness. Fittedness has to constantly be redesigning itself in a self-organizing fashion so it can constantly pick up on the way in which the world is constantly varying and dynamically changing. There is no essence to fittedness, but I don't need a theory of fittedness. All I need is a theory of how fittedness is constantly being realized in a self-organizing fashion. That's exactly what the theory of evolution is.

Do you remember? There's a feedback cycle in reproduction (draws a circular feedback arrow), and there is a virtual engine, selection and variation (labels the circular feedback arrow with selection and variation), and that virtual engine constantly shapes and regulates how the reproductive cycle (indicates a circular motion through selection and variation around the feedback arrow) feeds back onto itself. And — of course, this is why some religious people get very angry about this process, but notice that this is exactly what we need — there's no intelligent designer to this (indicates circular diagram on board). This is a process that is completely self-organizing. The fittedness of organisms constantly evolves out of, and is constantly evolving towards, other instances of fittedness. Fittedness has no essence; it is not a stable phenomenon. I should not try to give a definition of, a theory of, fittedness; what I have is a theory of the evolution of fittedness. And again, even when I say that, you're tempted to think "what Vervaeke means is there was no fittedness and then there was evolution and it resulted in fittedness!" That is not what Vervaeke is saying! Vervaeke is saying fittedness and the evolution of fittedness are the same thing.

So what Darwin proposed was, of course, the first dynamical systems theory of how fittedness evolves, such that fittedness is ongoing. That's the theory of evolution by natural selection. Now that tells us something that we need. First of all, this is a self-organizing process (indicates the circular feedback diagram on the board, the 'virtual engine'); it is not homuncular; it can generate intelligence without itself being an intelligent process. It's doing a lot of what we need. Here's the analogy I want to propose to you: let's make relevance analogous to biological fittedness. In fact, let's call relevance Cognitive Interactional Fittedness — what I mean by 'interactional' is both in your cognition and in how that cognition is expressing itself in problem-solving (writes these terms on the board). And I don't need a theory of Cognitive Interactional Fittedness; what I need is a theory of how Cognitive Interactional Fittedness evolves.
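The variation/selection engine can be sketched in a few lines. (This is an editorial toy construction under stated assumptions — a one-dimensional 'trait' and a steadily drifting environmental niche — not Darwin's or Vervaeke's formalism.) Notice that the code never defines fittedness in advance; 'fit' is simply whatever currently tracks the moving environment, so fittedness is constantly being redefined by the loop itself:

```python
import random

# Toy variation/selection loop: "fittedness" is never specified in advance.
# The environment drifts, and what counts as fit is whatever matches it NOW.

random.seed(0)  # fixed seed so the sketch is reproducible

def evolve(generations=200, pop_size=30):
    population = [random.uniform(-1, 1) for _ in range(pop_size)]
    environment = 0.0
    for _ in range(generations):
        environment += 0.05                      # the niche itself drifts
        # selection: keep the half that best fits the CURRENT environment
        population.sort(key=lambda x: abs(x - environment))
        survivors = population[: pop_size // 2]
        # variation: offspring are mutated copies of the survivors
        population = survivors + [s + random.gauss(0, 0.1) for s in survivors]
    return environment, population

env, pop = evolve()
mean = sum(pop) / len(pop)
# the population tracks the drifting niche without any designer or fixed target
print(f"environment drifted to {env:.2f}; population mean {mean:.2f}")
```

The design point is that nothing in the loop represents a final design: selection and variation form a virtual engine regulating the reproductive cycle, and tracking the drifting environment simply falls out of that regulation.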

A Theory Of How Cognitive Interactional Fittedness (Relevance) Evolves

My ability to formulate problems, form categories, pick up on conveyance, make inferences, all this stuff... what about that ability? What do I need? I need something that constrains the search space, that constrains how I pay attention. I need systematic constraints. And what are they doing? Those systematic constraints have to regulate a feedback loop. And what's the feedback loop? The feedback loop is my sensory-motor feedback loop: I'm sensing, but I'm also acting, and my acting is integral to my sensing and my sensing is integral to my moving, and so my moving and my sensing are doing this (indicates a rotation gesture with both hands, one over the other) - a sensory-motor loop. I interact with the world, and then that changes how I sense it, and then I interact... and so there's a sensory-motor loop. What if there is a virtual engine, broadly construed, that is regulating that sensory-motor loop so that it is constantly evolving its Cognitive Interactional Fittedness to its environment? It doesn't have to come to any final, essential way of framing the environment; what it's constantly doing is evolving its fittedness — its cognitive fittedness, not just its biological fitness, although I'm going to argue, as many people do, that there's important continuity between those two — it's constantly evolving its cognitive fittedness to the environment. Then what I need is not a theory of relevance; I need a theory of Relevance Realization: how relevance is becoming effective, how it is altering and shaping the sensory-motor loop. I need a dynamical system for the self-organizing evolution of Cognitive Interactional Fittedness. And if I could come up with that, then I would have an account of relevance realization that was not homuncular, and that would be consonant and continuous with how the embodied organ that is responsible for intelligence — the embodied brain — itself evolved. It would plug in very nicely to what we need. Well, what do we need?
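The shape of this proposal can be sketched, too. (Again an editorial toy, with all numbers hypothetical — eight arbitrary 'features' and a scheduled probe standing in for exploration — not Vervaeke's model.) A regulator reweights what the agent attends to based on feedback from its own actions, so the 'framing' of the environment keeps evolving rather than being fixed in advance, and it re-organises when what is relevant changes:

```python
# A deterministic toy sensory-motor loop: a regulator keeps an estimate of
# how much attending to each feature pays off, and re-weights from feedback.

N = 8
weights = [0.0] * N          # attentional weights (estimated payoff per feature)
relevant = 2                 # which feature the world currently rewards

def choose(t, weights):
    """Mostly exploit the current framing, but keep probing alternatives."""
    if t % 10 == 0:
        return t % N         # scheduled exploration step
    return max(range(N), key=lambda i: weights[i])

for t in range(500):
    if t == 250:
        relevant = 6         # the world changes: relevance is not stable
    i = choose(t, weights)
    reward = 1.0 if i == relevant else 0.0
    # feedback regulates the loop: move the chosen weight toward its payoff
    weights[i] = 0.9 * weights[i] + 0.1 * reward

# attention has re-organised around the newly relevant feature
print(max(range(N), key=lambda i: weights[i]))   # prints 6
```

No weight is ever 'the' relevant feature essentially; the loop just keeps evolving its fit between attention and a changing world, which is the structural point of the analogy.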

We need a set of properties, if you remember, that are sub-semantic and sub-syntactic, that ultimately have to ground out in establishing the agent-arena participation. The processes have to be self-organizing, they have to be multi-scale, and they have to originally be grounded in an autopoietic system. Well, what kind of properties are we talking about, then? Well — and this again is deeply analogous to the Darwinian picture — we are talking about Bio-Economical Properties (writes bio-economical on the board). And what do I mean by that? Think, again, of your biology as economic - this is, again, part of Darwin's great insight. Now don't be confused here; a lot of times when people hear 'economic' they hear 'financial economy'. That's not what an economy is! An economy is a self-organizing system that is constantly dealing with the distribution of goods and services, the allocation and use of resources, often in order to further maintain and develop that economy. So your body is a bio-economy. You have valuable resources of time, metabolic energy, processing power — think about how we say "pay attention", by the way — and what you do as an autopoietic thing is: you are organised such that the distribution of those resources serves the constitutive goal — it will serve other goals too, of course — of preserving the bio-economy itself. And the thing about economies is, of course, that they're self-organizing (points to self-organizing). Bio-economic properties are 'bio': they are part of you, they come out of your biology, right? They are not semantic or syntactic properties. Now, we use semantic and syntactic terms to talk about them - let's not keep making that confusion. And they are multi-scaled (points at multi-scaled on the board); economies work locally and globally simultaneously, bottom-up and top-down.
So bio-economic properties are great and that’s good because that comports well with the analogy because Darwin’s theory is ultimately a bio-economic theory.

Efficiency And Resiliency In Bio-Economies

So can we think about what kind of norms are at work in a bio-economy? So here we are dealing with norms, ultimately, of truth (writes truth beside sub-semantic). Here we are dealing, probably, with norms of validity, at least formal validity, in some way (writes validity beside sub-syntactic). When we are here (indicates bio-economic properties) we are not dealing with those kinds of logical, semantic norms - economies are governed by logistical norms (writes ‘logistical norms’ beside bio-economic properties), or at least regulated by logistical norms. I want to try to use the word ‘governing’ for selective constraints (indicates sub-semantic & truth and sub-syntactic & validity) and ‘generating’ for enabling constraints (indicates bio-economic properties and logistical norms) - I apologise if I sometimes flip! Economies are regulated by Logistical Norms. Logistics is the study of the proper disposition and use of your resources. So, [for example], if you are doing a logistical analysis for the military, you are trying to figure out how [your] limited resources [of] food, and ammo, and personnel, and time, and space... how can I best use them to achieve the goals I need to achieve? So what are logistical norms? Well, logistical norms are things like efficiency (writes efficiency on the board) and resiliency (writes resiliency under efficiency on the board). Efficiency and resiliency. We’ll talk about each in more detail. A way of thinking about these is [that] resiliency is basically long-term, broadly applying efficiency. But instead of talking about efficiency and efficiency, which is confusing, we’ll talk about efficiency and resiliency.

So what if — let’s go step-by-step, this is very (does confusing gesture)... — what if relevance realization is this ongoing evolution of our cognitive interactional fittedness? That there is some virtual engine that is regulating the sensory-motor loop, and it is regulating it by regulating the bio-economy, and it’s regulating the bio-economy in terms of logistical norms like efficiency and resiliency? Now all of this, of course, can be described scientifically, mathematically, etc., because, of course, Darwin’s theory is a scientific theory; we can do calculations on these things, etc... (indicates logistical norms, efficiency, resiliency on the board). (wipes board clean.) One more time: the fact that I use science to talk about it does not mean that it exemplifies propositional properties. The properties of my theory and the properties that my theory is about are not the same thing. So what kind of relationship are we talking about? How do we put this notion of self-organization and this notion of the logistical norms governing the bio-economy together? One way of doing this is to think about a multi-scaled way in which your bio-economy is organised to function — at many scales of analysis simultaneously.

Let's take your Autonomic Nervous System as an example. This is not exhaustive; in fact my point is [that] you will find this "strategy," this "design," at many levels of analysis in your biology. I'm only using this as an example. So your [autonomic] nervous system (writes this on the board)… this is the part of your nervous system that is responsible for your level of arousal. That doesn't mean sexual arousal. Arousal means how — and notice how this is logistical — how much of your metabolic resources are being converted into the possibility of action; interaction. So you have a very low level of arousal when you're falling asleep. You have a very high level of arousal when you're running away from a tiger. Now think about this. You need your level of arousal.../ there is no final, perfect design for your level of arousal. There is nothing you should.../ There isn't a level that you should always shoot for! You shouldn't maximise your level of arousal. IF I'M ALWAYS ARGGHHHHH (shouting), that's not good! I'm never going to sleep, I'm never going to heal, right? If I'm just like (half lies down on the counter) always, “Okay, that's it, I’m going to sleep!” That's not good either! And the Canadian solution? “Well, I'll always have a middling level of arousal!” That's not good either, because I can't fall asleep and I can't run away from the tiger!!

So what does your autonomic nervous system do? Well, your autonomic nervous system is divided into two components: there is your sympathetic, and your parasympathetic (writes these both on the board). So your sympathetic system is designed.../ it's really biased. It's designed towards interpreting the world in a way…/ it's biased — notice what I said! Remember, the things that make us adaptive also make us susceptible to self-deception — it's biased, because you can't look at all of the evidence! It's biased towards looking for and interpreting evidence — and I mean 'evidence' non-anthropomorphically — that you should raise your level of arousal. Your parasympathetic system is biased the other way. These are both heuristic ways of processing; they work in terms of biasing the processing of data. So the parasympathetic system is constantly trying to find evidence that you should reduce your level of arousal. So they're opposed in their goals, but here's the thing: they're also interdependent in their function (writes ‘opposed’ and ‘interdependent in their function’ on the board). So the sympathetic nervous system is always trying to arouse you (hooks both hands together vertically with fingers) — this is this hand pulling up — and the parasympathetic system is always trying to pull you down. And as the environment changes, that tug of war shifts your level of arousal around. When you have two systems that are opposed but integrated, you have Opponent Processing (writes opponent processing on the board). Opponent processing means that your level of arousal is constantly evolving, constantly evolving to fit the environment. Is it perfect? No, nothing can be. Any problem-solving machine, in order to be perfect, would have to explore the complete problem space. That's combinatorially explosive; it can't! But what is this? Well, you've seen this before!
Opponent Processing is a powerful way to get optimisation (writes optimisation on the board). Remember when we talked about optimisation when we talked about Plato? You're optimising between systems that are working towards different goals but are integrated in their function. And that way the system constantly self-organises, and thereby evolves its fittedness to the environment.
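As an illustrative sketch (not from the lecture; the update rule and the gain and threat values are invented for demonstration), the opponent processing between the sympathetic and parasympathetic systems can be modelled as two opposed but integrated processes whose tug of war continually re-fits the level of arousal to the environment:

```python
# Toy opponent-processing model of arousal regulation. The sympathetic
# process pushes arousal up in proportion to evidence of threat; the
# parasympathetic process pulls it down when threat evidence is low.
# Neither wins outright: their integrated tug of war tracks the environment.
# The gains and threat values are illustrative assumptions.

def step(arousal, threat, k_up=0.5, k_down=0.3):
    sympathetic = k_up * threat * (1.0 - arousal)        # pull arousal up
    parasympathetic = k_down * (1.0 - threat) * arousal  # pull arousal down
    return arousal + sympathetic - parasympathetic

arousal = 0.2
for _ in range(20):                 # a tiger appears: high threat evidence
    arousal = step(arousal, threat=0.9)
high = arousal
for _ in range(40):                 # the tiger leaves: low threat evidence
    arousal = step(arousal, threat=0.05)
low = arousal
print(round(high, 2), round(low, 2))  # prints: 0.94 0.08
```

Notice that neither process has a fixed target; the equilibrium the system settles toward moves as the threat signal moves, which is the sense in which arousal is "constantly evolving to fit the environment" rather than shooting for one ideal level.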

So the way we can get this, I would argue, is by thinking about how the brain — and I am going to argue, very importantly, the embodied, embedded brain — uses opponent processing in a multi-scale way in order to regulate your bio-economy, your autopoietic bio-economy, so that it is constantly optimising your Cognitive Interactional Fittedness to the environment. Let's think about it this way: let's think if we can get a virtual engine out of efficiency and resiliency (writes these both on the board, efficiency above resiliency), because here's the thing about them: they are in an Opponent Relationship. They pursue... — “pursue”! The problem with language, eh? It's like Nietzsche said: "I fear we are not getting rid of God because we still believe in grammar", right? The problem with language is it makes everything sound like an agent! It makes everything sound like it has intentionality. It makes everything sound like it has intelligence. And of course that's not the case. So bear with me about this! I have to speak anthropomorphically just because that's the way language makes me speak! — Let's use a financial analogy to understand the trade-off relationship between efficiency and resiliency. Not all economies are financial, because the resource that's being disposed of in an economy is not necessarily money. It might be time, etc… Okay, I'm using a financial analogy — or at least a commercial analogy, perhaps, is a better way of putting it — in order to try and get some understanding of how these are in a trade-off relationship.

So you have a business. One of the things you might do is try to make your business more efficient, because - ‘ceteris paribus’ - if your business is more efficient than that person's business, you're going to outcompete them. You're going to survive and they're going to die off - obviously the analogy to evolution. So what do I do? What I do is I try to maximise the ratio between profit and expenditure/cost. Well, we did it.../ we keep thinking of it as the magical solution — we've been doing it since Ronald Reagan, at least. We do massive downsizing. We fire as many people as we can in our business. And that way, what we have is the most profit for the least labor cost. That's surely the answer, right? So notice what efficiency is doing. Notice how efficiency is a selective constraint (writes selective constraint beside efficiency). The problem is if you are “cut to the bone”, if you've "reduced all the fat", if you've got all the efficiencies — and this is the magic word that people often invoke while forgetting the Opponent Relationship to resiliency… See, if I cut my business to the bone like that, what happens if one person is sick? Nobody can pick up the slack, because everyone is working to the max. What happens if there's an unexpected change in the environment, a new threat or a new opportunity? Nobody can take it on, because everybody is worked to the limit. I have no resources by which I can repair, restructure, redesign myself. I don't have any precursors to new ways of organizing, because there is nothing that isn't being fully used. Notice also, if there's no slack in my system — and this is now happening with the way AI is accelerating things — error propagates, massively and quickly. If there's no redundancy, there's no slack in the system, there's no wiggle room, and error just floods the system. You see, if I make the system too efficient, I lose resiliency. I lose the capacity to differentiate, restructure, redesign, repair, exapt new functions out of existing functions, [to] slow down how error propagates through the system. Efficiency and resiliency are in a trade-off relationship.
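The business analogy can be put into a toy model (an editorial illustration; the staffing, wage, and revenue figures are invented): cutting staff maximises profit in calm conditions, but removes the slack needed to survive a shock.

```python
# Toy efficiency/resiliency trade-off. A "lean" firm runs with exactly
# the staff needed for normal operations (maximally efficient); a
# "resilient" firm carries extra staff as slack. Under a shock (illness,
# a sudden threat or opportunity), the lean firm cannot operate at all.
# All figures are illustrative.

def profit(staff, revenue=100.0, wage=8.0, needed=8, shock=False):
    slack = staff - needed
    if staff < needed or (shock and slack < 2):
        return 0.0  # cannot run operations / cannot absorb the shock
    return revenue - staff * wage

lean_profit = profit(staff=8)                         # 36.0: most efficient
resilient_profit = profit(staff=10)                   # 20.0: pays for slack
lean_after_shock = profit(staff=8, shock=True)        # 0.0: no slack, collapses
resilient_after_shock = profit(staff=10, shock=True)  # 20.0: slack absorbs it
```

Ceteris paribus, the lean firm outcompetes the resilient one (36 vs. 20), which is exactly why efficiency gets selected for; but the first unexpected shock inverts the ranking, which is what resiliency buys.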

Now, what resiliency is trying to do is enable you to encounter new things (writes enable beside resiliency), enable you to deal with unexpected situations of damage, or threat, or opportunity. It's enabling. These are in a trade-off relationship: as I gain one, I lose the other. What if I set up a virtual engine in the brain that makes use of this trade-off relationship? It sets up a virtual engine between the selective constraints of efficiency and the enabling constraints of resiliency, and that virtual engine bio-economically, logistically shapes my sensory-motor loop with the environment, so it's constantly evolving its fittedness. We'll take a look at that possibility, and some suggestions on how that might be realized in the brain, in the next lecture.

Thank you very much for your time and attention.

- END -

Episode 30 Notes


Chiappe
Dr. D.L. Chiappe

Kukla
A. Kukla

Chiappe and Kukla
an article/commentary in the journal Behavioral and Brain Sciences

J.S. Mill
John Stuart Mill, usually cited as J. S. Mill, was an English philosopher, political economist, and civil servant. One of the most influential thinkers in the history of classical liberalism, he contributed widely to social theory, political theory, and political economy.

Willard Quine
Willard van Orman Quine was an American philosopher and logician in the analytic tradition, recognized as "one of the most influential philosophers of the twentieth century".

ceteris paribus
Ceteris paribus or caeteris paribus is a Latin phrase meaning "other things equal"; English translations of the phrase include "all other things being equal" or "other things held constant" or "all else unchanged".

Other helpful resources about this episode:
Notes on Bevry
Additional Notes on Bevry
