Welcome back to Awakening from the Meaning Crisis. This is episode 26. Last time we took a look at what's happening in Germany in the period post-Hegel, post-Marx, post-Nietzsche. We took a look at the rise of the pseudo-religious ideologies and all the various other cultural undercurrents and threads and processes of transformation that were gathered together in Germany and then exacerbated and ignited, if you'll allow me a volatile metaphor, by Germany's terrific defeat in the terror that was World War One, and the impact this had on Germany; how all of these features that we saw at work in Germany and in the Meaning Crisis get spun, in Hitler's autodidactic myopia, into a Gnostic nightmare, a titanic pseudo-religious ideology; and how the two great pseudo-religious ideologies of Nazism and Marxism (at least the Stalinesque version in the Soviet Union) come to titanic blows on the Eastern Front at the Battle of Kursk. And then I pointed out that all of this, and the political-ideological battles of the Cold War thereafter, have left us deeply traumatised.
We place no faith in pseudo-religious ideologies, utopian visions to solve the Meaning Crisis. At least many of us don't. We do not see ourselves as capable of the nostalgic return to religion, somehow pretending that all of this history and all the science can be ignored in a kind of fundamentalism - and please note that I'm not equating all religion with fundamentalism. Instead we find ourselves in the middle, in between these, and we're trapped! We can't go back and we can't do a secular alternative to religion. And yet we need something that will systematically create psycho-technologies that transform consciousness, cognition, character and culture in a way that religions have, if we're going to address the meaning crisis and, in fact, the Metacrisis that we're confronting right now in the world today.
And so we're caught in this situation and we pursue various radicalisms. I critiqued the idea that the meaning crisis should be understood, or that we should attempt to solve it, through the clash of political ideologies; that is to fundamentally misframe it because, if you remember, Kierkegaard and Marx and Schopenhauer, all in their different ways, and in ways that we can criticise, are nevertheless pointing to the fact that the participatory and perspectival knowing that is so crucial to responding to losses of meaning and regenerating meaning has been ignored by Hegel. So we can't do this politically. It doesn't mean that politics is irrelevant, but it means that framing and formulating the problem at the political level is to radically misframe it and misformulate it.
And then I proposed to you that instead we turn to an alternative way of trying to reformulate the problem: that we try and get a scientific understanding, as best we can, of the meaning machinery, this machinery that we perspectivally participate within. I'll often say "Meaning Making", but as I'll argue, we don't make meaning the way the Romantics said, and neither do we simply receive meaning from the world the way the Empiricists in the Enlightenment argued. We're going to see that it's neither one of those - that's another dichotomy that we have to transcend. But nevertheless, let's look at this machinery, the machinery of meaning realisation. What are the cognitive processes at work within it? And I propose that we do that from a scientific worldview, precisely because we need that to complement the historical analysis, and because the scientific worldview is part of the problem of the Meaning Crisis itself.
So I propose we take a look at the science of cognition and that means that we take a look at Cognitive Science. I am a cognitive scientist professionally but, as I said from the very beginning of this series, I'm offering you a particular interpretation of how to do Cognitive Science. Not everybody in cognitive science would agree with me, but I think it is a viable and a respectable version of cognitive science that can be argued for. What is that idea? It is the idea that cognitive science is born out of a particular way in which the scientific study of mind has unfolded. And as I indicated last time, it's a way that has actually contributed significantly to a version of the meaning crisis that is deeply personal, deeply into the very guts of our minds and bodies. And this has to do with the idea that this term (writes Mind in the top right of the board) has now become equivocal (writes equivocal below Mind), and we'll come back to that in a minute, because we actually are talking about different things or at least different levels of the reality of mind with different disciplines. Different disciplines that use different vocabularies, different theoretical styles of argumentation, different means of measuring phenomena, different ways of gathering evidence.
So we have the brain that's being talked about by the neuroscientists (writes both on the board, left and right respectively), who talk about patterns of neural activity, using fMRIs, et cetera… we talked about that. And then we have a totally different level at which we're understanding intelligence, the mind, in terms of Information Processing (writes Information Processing above Brain on the left), especially when we're in the project, not of measuring brains, but of trying to make machines that are instances of mind; not just simulations, but bona fide instances. And this is the project of Artificial General Intelligence, projects like machine learning, et cetera (writes AGI and machine learning on the right above neuroscience). And of course, they talk about different things! They don't talk about neurones. They might talk about neural networks, but those aren't the same thing, importantly. They'll talk about algorithms, heuristics, all that sort of stuff. And they don't use fMRIs; they actually make machines and processes. Different ontology, different methodology, et cetera.
Then, of course, we have understanding mind as Behaviour, and that's Psychology (writes behaviour on the left and psychology on the right), and here we talk about things like working memory, problem solving and decision-making, and we do experimentation on human beings and statistical analysis… Again, different ontology, different theoretical vocabulary, different methods of studying the phenomenon, different ways of presenting evidence. And we noted throughout that there are attempts to create hybrids between the various disciplines. So these are the levels of reality here (writes Levels at the top of the list on the left), and these are the disciplines here (writes Disciplines at the top of the list on the right). Above this, as we said, we have Language (writes language above behaviour) because of the tremendously important and special role that language plays with respect to being a medium for mind and a way of communicating mind. And of course, here we have Linguistics (writes linguistics above psychology). We might get psycholinguistics to try and bridge between them, but nevertheless, in linguistics we're talking about things like sentence structures and rules of transformation, and gathering different kinds of evidence, et cetera. And then the networking of minds and brains together through language and behaviour is Culture (writes culture at the top on the left), and that's studied by Anthropology (writes anthropology at the top on the right), which has a very different method. This (anthropology) really emphasises the perspectival and the participatory. So you do participant observation, you write ethnographies.
And of course that (indicates the breakdown of "Levels" and "Disciplines" on the board) helps: one of the advantages of the sciences is that this division helps people to specialise so they can get much more precise analysis. And so this is a good thing. I'm not trying in any way to despise this, but it is problematic in what it carries with it… first of all, it fragments us (circles equivocal) with this term "mind" (circles Mind). It's now become equivocal. What are we talking about when we're talking about our mind and related constructs like ourself? So let's remember what equivocation is. Equivocation is when you fall into confusion precisely because you do not keep track of the meaning of your terms. So here's one of my favourite examples: "Nothing is better than long life and happiness" (written on the board). Kind of something that you'd get in Star Trek, right? Great! …and then I can say to you, "a peanut butter and jelly sandwich is better than nothing" (written below previous statement). Okay, so nothing is better than long life and happiness, and a peanut butter and jelly sandwich is better than nothing, ergo "a peanut butter and jelly sandwich is better than long life and happiness" (draws a line under the previous two statements and writes this)! So you know what you should do! You should eat a peanut butter and jelly sandwich and then commit suicide!!
Now that's ridiculous, and that's the whole point! This should not convince you to eat a peanut butter and jelly sandwich and commit suicide. It's designed to convince you of the ridiculousness of this argument. And why is the argument ridiculous? The argument hinges on this (circles both instances of the use of "nothing"): because I'm using the same term, it sounds like I can make the inference. But of course, although I'm using the same word, I'm not meaning the same thing. This (nothing #1) means 'no thing from the set of things that makes life worth living' is better than long life and happiness. This (nothing #2) means 'no thing from the set of things you should eat'. And those are not equivalent sets; there's no equivalence of reference. If you don't keep track of what your words mean — that's why I'm always so careful to try and give you the meaning of things and the history of the meaning of a term — if you don't get sensitive to the meaning of terms, you'll fall into equivocation (circles equivocation again), which is a disastrous way to try and reason about anything.
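The set-theoretic point can be sketched in code. This is only a toy illustration (the premise records and the helper `shares_middle_term_referent` are my own hypothetical constructions, not from the lecture): a chained inference through a shared middle term is licensed only when the term keeps the same referent in both premises.

```python
# Each premise records which set its "nothing" actually ranges over.
premise1 = {"term": "nothing", "referent": "things that make life worth living"}
premise2 = {"term": "nothing", "referent": "things you should eat"}

def shares_middle_term_referent(p, q):
    """A chained inference through a middle term is only valid if the
    term has the same referent in both premises, not just the same word."""
    return p["term"] == q["term"] and p["referent"] == q["referent"]

# Same word in both premises...
print(premise1["term"] == premise2["term"])             # True
# ...but not the same referent, so the syllogism equivocates:
print(shares_middle_term_referent(premise1, premise2))  # False
```

Tracking referents, not just surface words, is exactly the "keeping track of the meaning of your terms" the lecture demands.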
So what's the point of that? Well, the term mind isn't univocal. What am I talking about? Am I talking about this? Am I talking about this? Am I talking about this? Am I talking about it? (Pointing to all the different levels.) Am I talking about how this and this interact (pointing to two different levels)? We don't know! And the degree to which I'm equivocal, without realising it, is the degree to which my thinking about my mind, and therefore myself, is ridiculous. If you aren't clear about what this word means, how you're using this concept, you will be bullshitting yourself through equivocation. I also pointed out that one of the consequences of this is fragmentation. What do I mean by that? Well, there's an ignorance in this (indicates the levels/disciplines on the board). And again, I am not disparaging these sciences. I love these sciences! I've got education in them and I value them. That is not what is happening here. But one of the things that is missing, that we are ignorant of in this model, is this: these various levels of reality causally impact and constrain each other in very important ways (draws little arrows between the different levels). The individual disciplines don't capture that. That's why there's this constant temptation and need to create hybrids like psycholinguistics.
So how are we going to study the relationship between the levels? We have to get the disciplines to hybridise or, beyond that, to talk to each other, to integrate together in some fashion, if we're going to deal with that ignorance, and that has to be an astute practice. We can't just integrate by equivocating - no integration through equivocation, because that's just bullshitting. What we have to do is have a philosophically astute integration. We need philosophy (writes Philosophy to the right, grouping the disciplines together) precisely because philosophy is the discipline that has us take conceptual care to try and articulate the meaning of our terms, to try and bridge between these different vocabularies, these different ontologies, these different methodologies. That's what philosophy does! Philosophy isn't about sitting around in cafes, smoking cigarettes and saying vaguely obscure things! The discipline that tries to come up with a philosophically astute integration between these disciplines, so that we can avoid equivocation, deal with fragmentation and overcome the ignorance of the causal relationships between the levels… that's Cognitive Science. That's what cognitive science is. That's what people mean when they say cognitive science is an interdisciplinary science.
Now I hope you see that cognitive science is already directly confronting and addressing one of the ways in which the Meaning Crisis is inhabiting us in terms of the way the scientific Worldview — again, I'm not denigrating it — but the way in which the scientific worldview is fragmenting (indicates the levels on the board) us and causing us to fall into ignorance about who and what we are, and to equivocate and bullshit ourselves about who and what we are. Cognitive science is placed to address that by its very nature. And of course it is the science that is going to talk about this meaning generation process, because all of these (the disciplines) are about that in some important way. They're all about how mind makes sense. (Wipes board clean.)
Now I want to argue that there are different ways in which you can understand how you practice cognitive science - how you try and create these bridges between the disciplines. And I'm not presenting them in a neutral fashion; I'm telling you that right from the beginning. I think one of these ways is the best way. One way in which people use this term (often not capitalising it, and pluralising it) is to talk about the "cognitive sciences" (writes this on the board), and all that is is just "Generic Nominalism" (writes this below cognitive sciences). Cognitive Science, or the cognitive sciences, is just a name for the 'genus' that each one of these disciplines belongs to: Anthropology is one of the cognitive sciences, machine learning is one of the cognitive sciences, Neuroscience is one of the cognitive sciences. Generic Nominalism is useless given the concerns I've articulated for you. It's not going to address the equivocation. It is not going to address the fragmentation. It is not going to give us any purchase on addressing the ignorance between the different levels of reality. So I believe we should reject this, although the term is often used this way, at least as the sole meaning of what Cognitive Science is doing. (Wipes board clean again.)
The next thing people do is they take this term "Cognitive Science" and understand it as a kind of Interdisciplinary Eclecticism (both written on the board). Let's say I'm a member of one of the core disciplines; let's say I'm a neuroscientist. What I do is neuroscience, but I read a bit about psychology or perhaps linguistics… and I pick some ideas that are interesting from the other disciplines, and I use them. And sometimes I'll tell some of my linguist or psychologist friends some of the things I'm doing in neuroscience, and some of those ideas might interest them… And the model for this is, kind of — I mean this as an analogy — what we have in 'Interfaith Dialogue'. See, the whole point about interfaith dialogue is that the Christians and the Buddhists talk, and, you know, I find something interesting — let's say I'm a Buddhist — in Christianity, and the Christian side finds something interesting in what I'm doing, and we talk… But we're not trying to radically transform each other. There's no sense in which there's going to be a really significant transformative transfer of insight between us. And we're certainly not trying to make something above and beyond Buddhism and Christianity. It's not like after we do the interfaith dialogue, we're going to have come up with a new religion! Now I use that metaphor precisely because I think it's helpful for understanding.
Now the goal of interfaith dialogue is to enhance mutual understanding, build tolerance and respect, reduce violence… and these are all noble endeavours, so I'm not disparaging this. But notice the problem here. Notice it's not really capturing why people feel so strongly drawn towards creating things like Psycho-linguistics that bridge between the different disciplines. The problem with this is it's either too weak or too strong. This (interdisciplinary eclecticism) can very quickly collapse into no significant bridging between the disciplines. There's some interest, there's some creation of mutual respect, but it's not the case that we're really capturing significant, theoretically important relationships between the different levels by having strong and transformative insights passing between the disciplines. So this will tend to degenerate, or people will realise they need something stronger. They will realise that there is something to this attempt to create the linking disciplines. (Wipes board clean again.)
And that's the third and, I think, therefore best vision of Cognitive Science: the vision of Cognitive Science as Synoptic Integration (writes this on the board). Synoptic Integration is not saying that all the disciplines are saying the same thing. But it's not the simple eclecticism of "well, they're all saying different things, but let's get them all to be friendly and like each other, and they can have some sort of peripheral influence on each other". Synoptic integration is saying, "no, we need to build something right between the disciplines that addresses the equivocation, deals with the fragmentation and fills in the ignorance - that tells us about how the levels are actually causally interacting and constraining each other". That's Synoptic Integration. So what you need to say is, "They're not saying the same thing, but they're not just saying different things either!". You have to create a bridging vocabulary that integrates across the disciplines (writes Bridging Vocabulary below Synoptic Integration).
Now that's a tricky thing to do! But we talked a long time ago about the fact that, since the Upper Palaeolithic transition, we've been training our brains and developing them and inculturating them to get very good at bridging between domains. Even the word 'bridge' exemplifies what I'm referring to, namely metaphor. We use metaphor as a way of bridging between domains. I am not about to make the argument that science is metaphor - that is not what's happening here. But I'm trying to use metaphor as a way of saying that there's already a cognitive ability in us that we can exapt and make use of in Cognitive Science.
Look at how a metaphor works. If I say to you — and Sam is a human being — "Sam is a Pig" (writes this on a freshly cleaned board), notice what you have to do in order to make this work. It looks like I'm creating an identity claim: "Sam is a pig", but of course what makes it a metaphor is that it's not an identity claim! I can't use this as a way of telling you that Sam has pink skin and lives on a farm and is going to be eaten by other human beings someday. Because that's not what I mean. I mean something like he's gluttonous or he's sexually selfish or something like that. So notice what I have to do: I have to keep the two different (left to right down/up line below the two) while also saying how they're importantly the same (right to left up/down line above them). Now, why am I doing that? Because of the difference (writes 'difference' on the line below Sam and Pig). That difference gets me outside of my framing of Sam - here's my framing of Sam (makes a containment gesture over on the left near Sam) - and the difference gets me outside of it (moves viewpoint way over to the right). And I look through this (the Pig end of the simple metaphor framework on the board), I look through, if you'll allow me (holds up his own glasses), the 'framing' of something as a pig, and I use this to look and see something in Sam, a way in which they're identical. So I stepped back, through the difference, but I look through, into the identity (writes 'identity' on the line above Sam and Pig), and I see things in Sam through my 'pig lenses', if you'll allow the metaphor, and it allows me to see and understand Sam in a different way. It alters what I consider salient in Sam; it restructures.
And so metaphor has this duality about it (indicates the anti-clockwise motion around the metaphor on the board: Sam -> difference -> Pig -> identity), and what you want to do is create a metaphor that balances these in an appropriate way. When a metaphor balances these two well, we talk about the metaphor being 'apt' (writes aptness on the board). So notice: if I make the 'identity' relation too strong, if I emphasise this side too much and I say, "Bees are Hornets!", you don't think, "Oh wow! What a great metaphor that is! That's such a wonderful metaphor!" In fact, that's a really crappy metaphor! It doesn't give me enough distance. I can't step back enough and have an insight into bees. It provokes no insight, no insightful transformation of my understanding of bees - it is too close. But if I emphasise the 'difference' too much, something like, "Arguments are chairs! Well, you know, arguments are chairs because they're both human-made structures and we use them on a day-to-day basis and…"! This is a very crappy metaphor precisely because the difference is too great. I've stepped back so much that, if you'll allow me, I'm losing sight of this, and it's not clear, it's vague - what am I supposed to see about arguments through this distant lens of chairs!? 'Apt' is when I get an appropriate balance between the two.
So what I'm looking for, in Cognitive Science, is theoretical constructs, proposed theoretical entities, that get this apt balance, that allow me to keep the differences between the disciplines while also seeing, from and through those distances, relevant identities that allow me to look from neuroscience into Artificial Intelligence in an insightful way. Or to step back from behaviour and psychology, step into linguistics, and then look at psychology - I keep that distance, but I also try and see, "ahh, but what can I see in psychology through the lens of linguistics now?". Of course, the thing about Cognitive Science is that it's not trying to create a single aptness. This (Sam is a Pig) is one bridge - I've just bridged between two domains. Cognitive Science is trying to create constructs that are multi-apt, that bridge between multiple domains, multiple disciplines. And that multi-aptness is really important.
So I'm trying to create constructs with multi-aptness (writes this on the board). They get this balance between identity and difference that affords and provokes insightful transformation of the theorising from one discipline to another, and I start to create an overarching integration. Now this brings up an important question: what's constraining me? How do I judge if I'm doing this well or badly? Well, this has to do, I think, with the notion that we use when we're considering how to create new constructs, new ideas, theoretical entities that will bridge between domains and create lines of inquiry. This is plausibility (writes plausibility below constructs with multi-aptness). Now there are two meanings to this word. One is just a synonym for high probability (writes high probability off plausibility), and that's not the one I mean. There's a different meaning, really brought to the fore by Rescher's work (writes Rescher off plausibility too), but by a lot of other people as well… there's work being done now, some good work in psychology, on plausibility. I'm doing some work with Leonardo Ferraro and Junchul Kim on plausibility. And this other sense of plausibility isn't a synonym for high probability. It's a synonym, not for rational, but for reasonable, in the sense of 'making good sense', and of course, man, that's so relevant to us, right? Making good sense. Deserving to be taken seriously. (Writes 'reasonable', 'making good sense' and 'deserving to be taken seriously' all off Rescher.) This is what plausible means. It's reasonable. It makes good sense. It should be taken seriously.
When we look at the ideas that we're using to make sense of the world - and again, this is building on a lot of people's work; I can't refer to all the people I'm trying to draw together - people make judgments of plausibility all over the place, and they bullshit themselves about this in all kinds of ways. We'll talk about that. So I'm not talking here about a descriptive theory of what people do when they're judging something plausible; I'm talking about a normative theory: what are they doing when they're doing it well? So part of the argument is this: people, of course, really like ideas that are multi-apt. Here's my idea (draws a little box on the board), my thesis, my proposal, some model, whatever kind of construct I'm using. And it's multi-apt in that I can use it to bridge into many different domains (draws four medium length arrows coming out of and diverging from the right side of the box). I can do this insightful connecting and transfer.
I think this notion of multi-aptness - being able to go into many different domains and help us find, formulate and solve problems - is a much better understanding of what scientists are trying to invoke when they say a theory is 'elegant' than just talking about simplicity (writes elegant on the board, below the diverging arrows), precisely because we have no canonical way of working out what scientists mean when they talk about simplicity, whereas it's clear that they do seek constructs that do this. Now here's a problem: Is that good enough (the multi-apt box and arrows construct)? Is that good enough to make a construct plausible? The problem is that, on its own, we're lacking something. We're lacking another thing that we want, and this is a point actually made by Rescher, and it comes out in some of the psychology of plausibility. You see, we also want these constructs to be produced in a certain way. This is how they're used (writes Used above the arrows coming out of the box on the right), but this is how they're produced (writes Produced above the 'idea' box, to the left). This is their forward orientation (on the right); this is their backward orientation (on the left). We want a construct that has been produced by many convergent, independent lines of investigation (draws 4 arrows converging into the left side of the 'idea' box, writing 'convergence' below).
Now, let me show you a concrete example of that. You can see it even in young kids: you prefer information that integrates — think about integration — across multiple senses. So you prefer information that is not just something you can see, but also something you can simultaneously hear. That's why seeing and hearing me right now is better than just seeing me or just hearing me. Now, why is that the case? Well, you see, if I'm getting all of my information through just one channel, there's a very good chance that this thing (idea box) is being produced by bias, by distortion in that one channel. But if I'm getting the same thing produced from multiple independent channels, there's a very good chance that it's not being produced by the bias or distortion in any one of those input channels. There's a very good chance that the relative biases and distortions cancel each other out. So by doing convergence, I get bias reduction (writes bias reduction off of convergence), and man does that matter for overcoming the way in which we bullshit ourselves with salience! So convergence gives me bias reduction, what Rescher calls trustworthiness (writes trustworthiness below bias reduction). Now trustworthiness isn't truth. It's not certainty. This, by the way, is why science likes numbers. Scientists like numbers, not because we're fascists, or because we don't appreciate the artistic beauty of blah, blah, blah, blah… We like numbers because they give us this (indicates the construct on the board). Look (draws 3 tallies on the board): you can see three, you can hear three (claps loudly three times), you can touch three (touches three things). Numbers afford convergence, and they boost trustworthiness. They help to reduce bias.
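The bias-cancellation claim can be illustrated with a toy simulation (the channel count, bias range and noise level here are my own illustrative assumptions, not from the lecture): each of several independent channels carries its own systematic distortion, and averaging across channels reliably lands closer to the true value than relying on any single channel.

```python
import random

random.seed(42)
TRUE_VALUE = 3.0
N_TRIALS = 10_000

def channel_reading(bias):
    """One noisy measurement carrying a systematic bias."""
    return TRUE_VALUE + bias + random.gauss(0, 0.5)

single_err = converged_err = 0.0
for _ in range(N_TRIALS):
    # Each trial: eight independent channels, each with its own fixed bias.
    biases = [random.uniform(-1, 1) for _ in range(8)]
    readings = [channel_reading(b) for b in biases]
    # Error if we trust one channel vs. the convergence of all eight:
    single_err += abs(readings[0] - TRUE_VALUE)
    converged_err += abs(sum(readings) / len(readings) - TRUE_VALUE)

print(f"mean error, one channel:       {single_err / N_TRIALS:.3f}")
print(f"mean error, eight converging:  {converged_err / N_TRIALS:.3f}")
# Independent biases tend to cancel in the average rather than compound.
```

The averaged estimate is not certain or unbiased in any one trial; it is simply more trustworthy, which is exactly the distinction between trustworthiness and truth drawn above.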
And remember, I've tried to argue that the scientific methods of experimentation and observation are methods, psycho-technologies, in which we're trying to reduce bias; to reduce the ways in which we deceive ourselves as we come up with our constructs.
Now think about this: If you had just elegance — that multi-aptness, where you can bridge to many different areas and link lots of stuff together — but it wasn't produced in a trustworthy manner, what would you have? You would have conspiracy theories. That's exactly what conspiracy theories do! They're a form of bullshitting because they're very elegant: "Look, if you will just accept that the British Royal family are lizard reptilian aliens from another dimension, you can explain so much of their political and social and interpersonal behaviour! Just give me this idea…" (elegance-imbalanced construct) "…just give me this and look at what I can do! Look at all the different disparate facts I can link and integrate together! I can give you this synoptic integration…". And you should be saying, "Yeah, but it's all bullshit". It's bullshit because we want this (useful/elegant side), so it's very salient, but it's bullshit precisely because we've lost this (taps production/convergent side of the construct). We don't trust the construct.
What about the opposite? So, the conspiracy theory is farfetched because it gives us lots of this (*1 - draws large divergent arrows) but very, very little convergence (draws small convergent arrows). What about this, where I've got tremendous convergence (*2 - draws large convergent arrows), surely something we'd like, and there's very little insightful integration (draws small divergent arrows)? What's that? Well, if you read scientific journals, any journals, even beyond philosophical ones, you'll see people rejecting this (*2). This up here (*1) is the conspiracy theory, and it's farfetched (writes farfetched beside new diagram *1). But you know what this is (*2)? This is triviality (writes triviality beside new diagram *2), and we reject things that are trivial. Now, to accuse something of being trivial is not to say it's false. It's to say that it has no transformative power (indicates divergent arrows on main construct diagram): it makes no difference. It causes no insight. It affords no integration.
So this (indicates *1 & *2) tells us something. And notice the ways in which we can equivocate on these (excitedly cleans the right side of the board, leaving the diagrams). We can equivocate between these two (draws a little double-ended arrow between *1 & *2). Daniel Dennett talks about this: one of the ways in which we bullshit ourselves he calls the "Deepity". So people say things like "love is only a four letter word" (writes this on the board), and everybody says "Oh, it's very profound…" and you take another drink of your alcohol or whatever! Now notice what's going on here. On one level this is a triviality. Of course love is a four letter word! I've got many different independent memories, different uses; everybody's using the word love this way (indicates all the different converging arrows on *2). Love is a four letter word. At that level, it's trivial. But then I'm supposed to equivocate between this as a word (draws a square around love) and this as a concept for a thing that I care about, because love as a thing has tremendous elegance; it goes into so much of my life. But notice what's happened here: I'm not actually giving you any information or analysis about love as a thing or a concept! I'm pretending to give you multi-aptness, when in fact all I'm doing is giving you triviality. I equivocate. So I say, "love, it's just a four letter word". It's bullshit!
It's bullshit that makes use of, abuses - and this is a powerful way in which we bullshit ourselves - these Deepities, these things that sound deep and are not deep at all, because what we're doing is equivocating. We start with something that's undeniably trivial and then we're equivocating with something that looks like it's elegant and multi-apt. And we bullshit ourselves! We do the reverse too, right? We say something that looks like it's really multi-apt, really controversial, it's going to change everything (draws another box with large (*3) divergent arrows coming out; representing great insight and transformative power). And then we're challenged, “Oh no, but [here's a] great criticism…”, “Oh no, no. I never meant that! I only meant this trivial thing that nobody would possibly object to!” (Draws another box with small (*4) convergent arrows going into it; representing weak convergence.) This is the motte-and-bailey strategy, right? Where you present [an argument] and it looks radical and controversial (*3). And then when you're criticised you withdraw: “No, I was just saying this! (*4)”. It looks like I'm doing [this] (*3), but it's just this (*4)! But then I pretend that I didn't actually change my position, that they're the same thing (points out the two different boxes from *3 & *4). And that is, again, how we bullshit ourselves.
So notice how we can abuse this machinery and bullshit ourselves. Now, the fact that we can abuse it tells us how we can improve it, [and] how it's supposed to be used! Because if we acknowledge that this is abuse (*1 & *2), that the Deepity is an abuse; if we acknowledge that the motte-and-bailey strategy (*3 & *4) of avoiding criticism is an abuse; if we admit that we don't want farfetchedness and triviality, then we can put this all together into a Normative account of what good plausibility is: we have high convergence matched by high elegance. Now, we'll talk a little bit later, when we do some more cognitive science; we want to say something more about this (circles particular attention within the first box of the main construct) in terms of relevance realisation and other things, but we'll come back to that. But when I get a balance between convergence and elegance, then I get something that's highly plausible. I'm making a construct that is trustworthy and powerful and it's affording me a new pattern of intelligibility. This is how I make integrative new patterns of intelligibility in the world. And so this is what you're trying to do in cognitive science. You're always trying to create these constructs that are high in plausibility.
When I get something that is extremely trustworthy, that is balanced with powerful multi-aptness then I think that's what we mean when we say something is profound (writes Profound on the board). It's the exact opposite of the Deepity, the triviality, the conspiracy, the Motte-and-Bailey bullshit. It's profound! Now being profound doesn't mean it's true. Being profound means it's very reasonable and it should be taken very seriously.
I'm going to now try and exemplify this with you. I should mention before I go, that this idea of the balance between this (convergent side) and this (divergent side), I got this from Elijah Millgram's really, really brilliant work on Practical Induction. Practical induction is not induction about how you change your belief. Practical induction is how you change your desires, how you change what you care for. It's deeply relevant to that Socratic project of having a rationality of what you care about. (Wipes entire board clean.)
Okay. So what I want to do is first of all draw this together in this definition… so cognitive science is the discipline that is trying to bring about synoptic integration. And I'm going to say, it's trying to bring about profound synoptic integration. It's trying to create these constructs that bridge — don't reduce — bridge between the disciplines. Profound synoptic integration that addresses equivocation, fragmentation and ignorance - the ignorance of the causal relationships between the levels of reality that we designate by the term 'mind'.
• Cognitive science = “Profound synoptic integration that addresses equivocation, fragmentation and ignorance.”
That's what cognitive science is trying to do. And of course, as I've tried to show you throughout, it is deeply relevant to the Meaning Crisis because it deals with this issue of fragmentation, the confusion we have about ourselves, how we're sort of pulled apart. It deals with, as you've just seen, the meaning making machinery and it deals with, again, as you've just seen, our propensity for bullshitting ourselves and deceiving ourselves.
Now, what I want to do is the cognitive science of “meaning making”. I'm not totally happy with the term (meaning making), because it sounds too romantic to my ears. I'd also want to say “meaning seeking”, but that sounds too empiricist to my ears, like meaning is just out there to be seen and experienced! So I'm going to try a new metaphor, because it also goes with something else I've been saying from the beginning. And this is inspired by the way we talk about the 'cultivation' of wisdom and it's inspired by Heidegger's use of this metaphor: “Meaning Cultivation” (writes this on the board). Because what I'm going to try and argue, and also, you've seen this in the history, is that meaning isn't something we are imposing wilfully on the world. That's a mistake from our history. Meaning isn't something we just find in the world - that's to ignore the scientific revolution. Meaning is something between us and the world, like the way you cultivate a plant - you're doing stuff with the plant, but you're also allowing the plant to unfold. You're cultivating, with the world, meaning between you and the world. So as much as possible, I'm going to try and switch [to] this (underlines Meaning Cultivation). [But] I will fall into habit, because this is the language that we inherited from the cognitive revolution in psychology for talking about meaning making and making sense, where the emphasis is on us making; even though, as I'm going to show you, third-generation cog-sci is much more talking this way (taps Meaning Cultivation) than the romantic notion that we impose or make meaning, or that we simply sense it or find it in the world.
Okay, I want to start doing the cognitive science of this (Meaning Cultivation). I want to take a look at the science of cognition. And I want to try and exemplify what I showed you, what I argued for. I want to try and exemplify Synoptic Integration and the creation of a plausible construct. So the faculty in us that is supposed to be our core cognitive capacity, our core capacity for meaning cultivation and being able to adaptively respond to the world — and this goes all the way back to the Greek heritage — this is the notion of Intelligence. Intelligence is the capacity that makes you a cognitive agent. At least an agent whose cognition is working with meaning as opposed to a living thing that is in some sense — like a plant — responding, maybe in a very sophisticated fashion, but it's just responding in this complex fashion to its environment. Intelligence means you are in some sense, a Cognitive Agent (writes Cognitive Agent on the board, off Intelligence).
Okay. So, what is it that we should…/ how should we frame this? We're going to try to get a purchase on this. So a good way of trying to understand this, the way we try to test for intelligence, is being a General Problem Solver (writes this off of Intelligence too). And this goes back both within the psychometric tradition to people like Binet and Simon, who were trying to measure intelligence, and to people like Newell and Simon — two different Simons, by the way — who were trying to create artificial intelligence, make intelligence: artificial intelligence, artifactual intelligence, right? Not fake intelligence. Both of them point to this idea that when we're trying to measure or make intelligence, we're trying to measure you as a general problem solver, or we're trying to make a machine that is a general problem solver. Now, what does that mean? Okay, so here's a machine (picks up his red cup) that's not a general problem solver. This is good for solving this problem, holding water, and we've talked about this, in such a way that I can use it. It solves a bunch of other problems maybe, but not very many. Not very many. It's very limited in its problem solving capacity. Unlike that cup, you are capable of solving a wide variety of problems in a wide variety of domains. You are a general problem solver.
You can worry about God and how to get a drink. How to go swimming. How to build a houseboat…! The number of domains in which you can operate is vast and within each one of those domains, there are many different kinds of problems you can solve. You are a general problem solver. And that's why, when we measure intelligence, we give you a wide variety of different kinds of tests - to see if you can solve a wide variety of problems across multiple domains. That's how I test to see if you're a general problem solver, and it's exactly what Newell & Simon were trying to make. They in fact called it — we can't use this term now because it now means Global Positioning [System] — but they called it the GPS, the General Problem Solver. The first project to make artificial intelligence, not just computational machines but artificial intelligence, was this project of trying to make the General Problem Solver.
Okay! So you are a general problem solver! Okay, great, that makes sense. (Indicating General Problem Solver on the board…) Now, what I need to do is a couple of things to be…/ I've got to be very careful here! Because people get very…/ (Now moving back to indicate Intelligence on the board…) Of course this is… we'll come back to that. This is such a politically laden term, precisely because we aren't clear about what we're talking about - we're equivocating all over the place and we're ignorant of how this term is applied to different levels of reality. And yet, nevertheless, we bullshit ourselves by finding it salient and rushing into speech and action without the clarity that is needed. We'll see, for example, that this (still indicating Intelligence), being intelligent, is not a synonym for being rational. And what you ultimately should care about is not how intelligent you are, but how rational you can become.
So let's take care here. We mean some capacity you have for solving your problems and learning, and so we want to keep intelligence separate from knowledge (writes knowledge off of intelligence also). Of course having knowledge enables you to do things and in that sense, in a broad sense of activity, it makes you more adaptive to your environment: you can do lots of things, including making claims that other people value for their truth. But if you make these synonymous (intelligence and knowledge), then you can't use this (intelligence) to explain this (knowledge). You can't say the reason why Susan was able to acquire such knowledge is because she's intelligent, because if intelligence means possessing knowledge then all you're saying is Susan possesses knowledge because Susan possesses knowledge, which is non-explanatory! (Wipes knowledge off the board.)
So what we want to ask is “what is it to solve a problem?”. Don't focus on the product, having the answer, getting the knowledge. Focus on the process. The process. And this is going to be a hallmark — we've talked about this before — the hallmark of rationality is valuing the process, not just being fixated on the product, especially the belief, the conclusion. So — we've got to analyse the process — what is it to solve a problem? And this is where the work of Newell & Simon was just so deeply influential. (Wipes board clean.) It's been influential in psychology, computer science, economics. It's just seminal and important work.
Now, as I first try to describe it to you, it's going to strike you as somewhat trivial! But let's go very carefully on what we're going to do. We're going to try and analyse a problem down into four basic features. And we'll end today's episode with that. And at that point, it won't seem like we've got that profound construct that we're looking for. And then I'll need you to wait until next time. But Newell and Simon basically asked, “What is it to have a problem?”. A problem is when there's a difference between the state you're in, which they called your initial state (draws a circle on the left of the board with I.S. in it), and the state you want to be in, which is your goal state (draws a circle on the right with G.S. in it). So I have a problem when there's a significant difference between those. For example, I'm thirsty (I.S.) and I don't want to be thirsty (G.S.), and those aren't the same thing. Things have to change in the world and in me (walks over and takes a drink of water) for the difference between the initial state and the goal state to go away. So part of what I have to do when I solve a problem is this: the system has to be able to represent — we'll have to come back to how we're going to use that term — the initial state and the goal state.
And then there are actions (draws little lines from I.S. circle to the right), operations I can perform that will change the state I'm initially in to some other states (draws new, similar circles at the end of the operation lines). So, for example, part of the problem I had was I'm over here and the glass is over there! So one of the things I can do is walk towards the glass. The cup. Of course, another thing I can do is raise my hand (raises hand straight up, well away from the cup). Now you say, “Well, don't raise your hand. That's stupid!”. Well, wait! There's going to come a point when I do need to raise my hand (demonstrates). So I'm going to need that operation! Okay? So... I think you're starting to see things...! And then what I do is, from here (diagram on the board), perhaps I can do two other operations (adds two more operations to one of the new states circles), from here maybe three, from here maybe only one, and so on and so forth… (adds a network of operations and states to the board, between I.S. and G.S.). I'm not going to draw it all out, right…?
So, so far I've got three things that I'm analysing problem solving into: an Initial State, a Goal State and Operators that can transform one state into another state. (Simple, horizontal flow Search Space Diagram shown on screen (@57:24)) There's one other thing I have. I have what are called Path Constraints. You see, I'm a general problem solver. I don't want to find just any solution. One of the ways I can make lunch for myself is to burn down my house. It will cook my food! That is not a good solution. Not because it doesn't achieve the goal I want in this situation of cooking my food, but because it really reduces me as a general problem solver. I don't want to solve this problem to the detriment of my ability to solve multiple other problems, or I will lose my intelligence. I will lose my capacity as a general problem solver.
So what I have to do is the following… Solving a problem is this: it's to have a sequence of operations (starts to indicate progressing left to right through the operation circles on the board), a sequence of operations that will transform the Initial State into the Goal State while obeying the Path Constraints, preserving me as a General Problem Solver. That's what it is to solve a problem. And this (brackets the whole diagram on the board), this is called the Problem Space, or sometimes the Search Space. Now, what this analysis does for you is it explicates the problem space, and this is where all of the powerful insight of Newell and Simon's work comes out, because once we start to pay attention to properties of this search space, we can see how profound this idea actually turns out to be. And that's what we'll do in our next session together.
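[Editor's illustration] The four features just described — initial state, goal state, operators, path constraints — can be sketched as a search over the problem space. Below is a minimal, hypothetical Python sketch, not Newell & Simon's actual GPS program: the toy "thirsty" domain, the state names, and the operator names are invented for illustration, loosely following the lecture's walk-to-the-cup and burn-down-the-house examples. The path constraint rules out the house-burning "solution" even though it reaches the goal.

```python
from collections import deque

def solve(initial_state, is_goal, operators, path_ok):
    """Breadth-first search over a Newell & Simon style problem space.

    initial_state : the state the solver starts in (I.S.)
    is_goal       : predicate identifying the goal state (G.S.)
    operators     : list of (name, fn) pairs; fn maps a state to a new
                    state, or returns None if it doesn't apply
    path_ok       : path constraint - a predicate over the sequence of
                    operator names applied so far
    Returns a sequence of operator names transforming I.S. into G.S.
    while obeying the path constraints, or None if there is no solution.
    """
    frontier = deque([(initial_state, [])])
    seen = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for name, fn in operators:
            nxt = fn(state)
            if nxt is None or nxt in seen:
                continue
            if not path_ok(path + [name]):   # obey the path constraints
                continue
            seen.add(nxt)
            frontier.append((nxt, path + [name]))
    return None

# Toy "get a drink" domain (all states/operators are made up):
ops = [
    ("walk_to_cup", lambda s: "at_cup" if s == "thirsty" else None),
    ("raise_hand",  lambda s: "holding_cup" if s == "at_cup" else None),
    ("drink",       lambda s: "not_thirsty" if s == "holding_cup" else None),
    ("burn_house",  lambda s: "not_thirsty"),  # reaches the goal, destructively
]
plan = solve("thirsty", lambda s: s == "not_thirsty", ops,
             lambda path: "burn_house" not in path)
print(plan)  # ['walk_to_cup', 'raise_hand', 'drink']
```

Note that without the path constraint the searcher would happily return `['burn_house']` — the shortest route to the goal state — which is exactly the failure mode the lecture warns about: solving one problem at the cost of one's capacity as a general problem solver.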
Thank you very much for your time and attention.
- END -
Nicholas Rescher is a German-American philosopher, polymath, and author, teaching at the University of Pittsburgh. He is the Chairman of the Center for Philosophy of Science and has formerly served as Chairman of the Philosophy Department.
Book alluded to: Plausible Reasoning
Daniel Clement Dennett III is an American philosopher, writer, and cognitive scientist whose research centers on the philosophy of mind, philosophy of science, and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science.
Book mentioned: Intuition Pumps and Other Tools for Thinking
The motte-and-bailey fallacy (named after the motte-and-bailey castle) is a form of argument and an informal fallacy where an arguer conflates two positions which share similarities, one modest and easy to defend (the "motte") and one much more controversial (the "bailey").
Elijah "Lije" Millgram is an American philosopher. He is E. E. Ericksen Professor of Philosophy at the University of Utah. His research specialties include practical reason and moral philosophy. Elijah Millgram received his Ph.D. from Harvard University in 1991.
Book mentioned: Practical Induction
Martin Heidegger was a German philosopher, and a seminal thinker in the Continental tradition of philosophy. He is best known for contributions to phenomenology, hermeneutics, and existentialism.
Binet and Simon
Binet Simon Scale of Intelligence
The Binet-Simon scale was created by the French psychologist Alfred Binet and his student Theodore Simon. ... Binet believed that intelligence is malleable and that intelligence tests would help target kids in need of extra attention to advance their intelligence.
Newell and Simon
Fifty years ago, Newell and Simon (1956) invented a “thinking machine” called the Logic Theorist. ... The Logic Theorist and other cognitive simulations developed by Newell and Simon in the late 1950s had a large impact on the newly developing field of information-processing (or cognitive) psychology.