The brain is a Peirce engine

There comes from Scott Alexander’s blog news of a new unified theory of neural cognition called the “predictive processing model”. Read his review of the book “Surfing Uncertainty” before proceeding further.

This model seems to solve a whole raft of longstanding problems about how the brain does what it does, offer insight into how various neurotransmitters work in cognition, and even into how disorders such as autism can be understood as consequences of very specific processing failures with testable consequences.

Now excuse me while I spike a ball in the end zone and yell “YEEHAA!”. Because, although its framers seem still unaware, the predictive-processing model tends strongly to confirm a set of philosophical positions I’ve been taking (and taking flak for) for many years.

Specifically, under the predictive processing model, the brain is a Peirce engine. “Mind” is what we observe as the epiphenomenon of that engine running – its operating noise, more or less.

The Peirce I’m referring to is Charles Sanders Peirce. In his seminal 1878 paper How to Make Our Ideas Clear he recast “truth” as predictive accuracy, asserting that our only (but sufficient) warrant for believing any theory is the extent to which it successfully anticipates future observations.

This insight was half-buried and corrupted by later analytic philosophy, notably when William James and John Dewey vulgarized “what is predictive” into “what is useful to believe” and invented the whole sorry mess called Pragmatism.

As a result, the incisiveness of Peirce’s insight was largely forgotten for most of a century except by specialists in the philosophy of science, who used it to construct the now commonly accepted explanation of what we mean when we assert that a scientific theory is true or false; Karl Popper’s falsifiability criterion is another not-quite-right approximation of Peirce.

What Peirce tells us is perhaps best expressed in an antinomious way: There is no “Truth”, only prediction and test.

And now it turns out this is what the brain is doing, all the time, at the neural-circuitry level. Endless waves of top-down expectations crashing against endless waves of bottom-up sense data, interpreting predictive failure as unwelcome surprise. Knowledge emerging as a constant Bayesian update of priors at the collision face.

OK, that oversimplifies. What PP actually tells us is that there isn’t any one collision face but many, scattered all through the nervous system, feeding each other. Some in the retina of the eye, for example, where first-stage visual processing is done.
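The collision-and-update loop at any one of those faces is, at bottom, just Bayes’ rule run over and over. Here is a minimal sketch of a single update; the hypotheses and probabilities are invented for illustration, not taken from the book:

```python
import math

# Toy sketch of one "collision face": top-down priors over hypotheses
# meet a bottom-up sense datum, and Bayes' rule updates the priors.
# The hypotheses and probabilities below are invented for illustration.

def bayes_update(priors, likelihoods):
    """Return the posterior P(H|D) given priors P(H) and likelihoods P(D|H)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(unnormalized.values())
    return {h: p / evidence for h, p in unnormalized.items()}

# Top-down expectation: we strongly expect a dog, not a statue.
priors = {"dog": 0.9, "statue": 0.1}
# Bottom-up sense datum: the shape is holding perfectly still.
likelihoods = {"dog": 0.05, "statue": 0.95}  # P(datum | hypothesis)

posterior = bayes_update(priors, likelihoods)
# Predictive failure registers as surprise: the surprisal (in bits)
# of the datum under the prior.
surprise = -math.log2(sum(priors[h] * likelihoods[h] for h in priors))

print(posterior)   # belief flips heavily toward "statue"
print(surprise)
```

Run the posterior back in as the next round’s prior and you have the endless wave-train described above.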

PP boldly says this collision-and-update is not a metaphor and not an approximation of lower-level neural processing of a different kind – it’s the actual computation that the actual meat substrate of your mind is doing in hardware.

I think this is right. It explains so much – and it’s Peirce having the last laugh.

169 thoughts on “The brain is a Peirce engine”

  1. It’s been a long time since I’ve looked at this material, but I believe some of Peirce’s thinking here derives from his “boxing-master” Chauncey Wright. Wright was an early Darwinian and one of the first to describe cognition as a natural-selection-like process involving variation and selection of ideas. One standard study is Madden’s “Chauncey Wright and the Foundations of Pragmatism” (1963):
    https://books.google.com/books?id=vto0AQAAIAAJ

  2. I haven’t followed the whole trail of your thoughts on mind, but from what I have gathered you are a materialist with respect to mind. However, from this post I don’t see how the Peircian model confirms the materialist view of mind. The model certainly is consistent with materialism, but to confirm it (over against some kind of dualism, e.g.), the model would have to be inconsistent with those alternatives or at least more improbable with respect to it than it is with respect to materialism. Am I missing something? Perhaps there’s more in Scott Alexander’s post that makes it clear how it confirms views like yours.

    • >However, from this post I don’t see how the Peircian model confirms the materialist view of mind.

      My position: Dualism is confused bullshit, but PP as a prediction of what goes on in neural computation is neither sufficient nor necessary to demonstrate that.

      If you’re talking some kind of ontological dualism like Descartes, the right answer is “Who gives a shit?” The right question is not what substrate, or how many substrates, the mind is running on, nor whether some of them are “non-material” (whatever that means) – it’s whether we can make causal predictions about the entirety of the mind from the behaviors we can observe and the measurements we can make. “It’s all predictive processing all the way down” is such a prediction.

      If you’re going to claim that some part of a mysterian mind is causally isolated so this isn’t possible, you run headlong into the problem I pointed out in my argument against the autonomy interpretation of free will. If it’s truly causally isolated, it’s necessarily just a random-noise generator from any view confined to the observable universe.

      But without that (useless) premise, the idea of a “non-material” component of mind is no more a bar to theory formation than is the hypothesis of material neural structures we can’t observe with the naked eye. It’s just mysterianizing bullshit to propose that these cases were ever different.

      If you’re talking non-ontological dualism, this is mainly just philosophers masturbating at each other about technical differences in language analyses that they pretend have huge consequences because they need to write more papers or something. Do I sound disrespectful? That’s because I am. “Predicate dualism” vs. “property dualism”, and nineteen other big-endian-vs.-little-endian disputes…give unto me a fucking break.

      That is all just noise. At least ontological dualists are claiming something which, while utterly stupid, would have important consequences if it were true the way they imagine it to be. The “non-ontological dualists” are too lame to even manage that much.

      • I was looking into Ed Feser’s Aristotelo-Thomistic stuff recently. Like you, I tend strongly towards epistemic scepticism, that is, being aware of map-terrain problems. My tentative result so far is that they are actually more sceptical than we give them credit for, and they may actually have something like an immateriality argument worth considering.

        The technical term for their view is hylomorphic dualism (matter-form dualism).

        As a very quick summary, a dog and a stone statue of a dog have the same form but different matter. The form of the dog, the universal/form/essence of dogness, is something they both have, but it is mixed with different matter. And they actually accept that “dogness” considered as itself, the pure form, is a mere abstraction that exists in the mind only, a mental model, clearly in “map” terrain! I did not know this before. This is why I say they are more sceptical than people give them credit for. But the form or universal of “dogness” also exists in the dog and the statue of the dog, though there not as the pure abstract form but inseparably mixed with matter. Matter can only exist in form, there is no formless matter (a shapeless blob or a puddle of goo is considered a form), and in objects form and matter are inseparably mixed. But our minds abstract away the form from matter and create the mental model of pure form, “dogness”.

        The main argument is: how else would we know that when we say that yes it is a dog and yes it is a statue of a dog that this statement is correct, right, true, if there was no real actual similarity? So form exists in the mind as a mental model or map of Pure Similarity abstracted away from matter, while in actual objects similarity (“dogness”) is inseparably mixed with matter.

        And basically you can “make” terrain by taking a 1:1 map (form) and mixing it with soil, rock and grass matter. Is that not how we build things after a blueprint, at least conceptually?

        This actually sounds convincing to me, without any obvious glaring errors. Formerly I thought they were less sceptical, holding that forms are objectively real even in their pure form. According to Feser’s The Last Superstition, they never thought that. Feser also demonstrates the Enlightenment is based on misunderstanding Scholasticism, largely because Scotus and Ockham screwed up Scholasticism and it was not the real deal anymore.

        My point is, I know the Enlightenment is all screwed up, it completely contradicts our modern science, and the simplest thing to do is to check what was there before the Enlightenment. That was Scholasticism, and if its basic ideas have no obvious mistakes it is at least worth considering.

        Now on to immateriality. This is a harder concept. I think Feser and Aquinas probably have multiple levels of immateriality, and one level makes sense, another not.

        Clearly, forms without matter exist. For example, numbers. OK, you are the math genius, I suck at it, but I remember someone, maybe von Neumann, saying 0 is the number of elements of the empty set, 1 is the number of elements in a set that contains the empty set, 2 is the number of elements of a set that contains both, and so on. Amazing how much you can build on literally nothing? Because that third set is still less than vapor… materially…
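        The construction being half-remembered here is von Neumann’s: each natural number is the set of all smaller numbers, built up from the empty set. A minimal sketch (the function name is invented for illustration):

```python
# Von Neumann's construction: 0 is the empty set, and each successor
# is the set of all numbers built so far: n + 1 = n ∪ {n}.
# So 0 = {}, 1 = {0}, 2 = {0, 1}, and each ordinal has exactly n elements.

def von_neumann(n):
    """Return the von Neumann ordinal for n, as a frozenset."""
    current = frozenset()              # 0: the empty set
    for _ in range(n):
        current = current | {current}  # successor: add the set itself
    return current

print(len(von_neumann(0)), len(von_neumann(1)), len(von_neumann(2)))  # 0 1 2
```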

        So anyway 2 or 10 or .. or two or zwei are merely representations of the number, they are not the number. The representations can be written on stone, paper, or an LED screen, and they are not the number. Apparently the number really exists in an abstract way. It is an immaterial thing.

        Anyhow, it seems software code can also be seen as an immaterial object. Because on a screen or in hardcopy it is still the same code. The dog and the statue of the dog are clearly different things, but the code on an LED screen, in RAM, on a hard disk and on paper is the same thing; these are merely its representations.

        So basically every computer that does something useful consists of material objects, hardware, and immaterial objects, software. This level of immateriality is defensible and even obvious.

        I think Feser and the Scholastics also believe in higher levels of immateriality, which are really like ghosts, but that is not really relevant. But to be fair I am not even sure about that, really. After all, they believe in the resurrection of bodies after the Last Judgement because they are aware humans have no ghosts that can live a happy ghost life in an afterlife. But Aquinas even argued angels are pure form; could that really mean angels are like software code? I think they did believe in higher immaterialities, where I cannot follow them.

        But saying software is immaterial and you need it for your computer to do something useful is obvious and correct. After all, if software were not immaterial, we could not even try to argue that videogame piracy is not theft :-)

        BTW the weird thing about materialism and hylomorphism is that in modern science matter does not even matter. Sure, the dog and the statue of the dog are made of different material, but that different material is made of the same protons, neutrons and electrons. Flesh vs. granite are simply different arrangements of the same particles. So today it would be forms all the way down…

        Seriously though, we can understand the whole Scholastic model on the usual computer-based cognition models without going into ghosts. The dog and the statue of the dog really do share a similarity. This similarity really does exist in the mind, abstracted away as a concept, a map, while in the real dog and statue it is the shape of their matter, the arrangement of their atoms. Any computer with good pattern-recognition software gets the similarity. The similarity will then exist in the model in the computer as data in memory. Data is immaterial, as argued above; only representations of it are material. This sounds seriously not wrong. All this immateriality says is that it does not matter whether that data exists as magnetism on the HDD or ink on paper. And indeed it does not.

        • Hm, time to recast the whole thing in predictive terms.

          The most glaring feature of immaterial things is that they can be really easily copied; see the never-ending debates over whether software, music, or movie piracy is theft or not.

          Dogs are not easy to copy. Nor are statues of dogs.

          The abstract idea of dogness, the similarity between the dog and the statue, is a bunch of a data in a computer equipped with pattern-recognition software. (This is pure form, abstracted away from matter.)

          This data is really easily copied. You can publish it and people can pirate it and then you can complain it is your copyright and they are thieves and they can argue they are not thieves because they did not remove the original. But I cannot “pirate” your dog or your statue of the dog without removing the original.

          If I have a copy of the data, the immaterial object, the form, I can use a 3D printer to quite literally mix the form with matter, and recreate your statue of the dog.

          These are the predictions. Immateriality is easy and perfect copiability. Feser is not saying this, but I do, because I am really, really into explaining important things based on the copying arguments from Ruth Millikan: https://dividuals.wordpress.com/2015/12/14/copying-is-everything/

          • I fail to see how any of the preceding involves “mixing forms with matter.” You are arranging matter to correspond to a “form”, if you will, but there is no inherent link between the two.

          • Please try to find a way to make the beliefs you espouse here pay rent.

            A statue of a dog, and an actual dog, do not share dogness. You wouldn’t expect a statue to bark at the mailman, piss on a tree, or eat chocolate and then have diarrhea afterwards. You would from a dog, as that is “essence of dogness”.

            Likewise, having a graphical representation of a subset of some software – remember, screens are limited in size – requires quite a different map than compiling that code into instructions and storing them in-memory. What can fill pages in sourcecode might fit into a small kernel page.

            What’s more, if you managed to wire up a dog, made a snapshot of his brain, and simulated it in a virtual machine somewhere, you’d have to provide (virtual) hardware interfaces for all its limbs, stomach functions, barking-machine / mouth, etc. Otherwise your copy “should fail to boot” – the essence of dog-brain is more than just a copy of that brain’s contents, it’s also that brain’s interfaces and virtualizations thereof.

            Pure form involves more than pattern recognition. Recognizing is one thing; another is acting on what is recognized. A third is updating the pattern-recognition machine with new data; a fourth, updating the set of cached actions; and so forth.

            The problem here is that your analogies are awfully misleading, and you do a terrific job in hiding buried assumptions and flawed conclusions in them.

            The problem with that is, one has to second-guess whether you actively hold these flawed views, and for what reason: do you actually believe that, or do you believe that because holding the flawed beliefs “pays rent”?

            My predictive model fails here.

            • Doesn’t the copying part and the software parallel pay rent? Behavior here is certainly part of the form; if you want to abstract away dogness to make a blueprint for making dogs, or to make convincing simulated dogs in VR (and the whole of the code and data is the form in this sense; remember, we are talking 2500-year-old terminology here, and form does not merely mean shape), you also code for the behavior.

              Yes the statue of the dog lacks that important aspect of the form.

              Imagine something that barks, pisses, eats chocolate and has diarrhea, but looks like Spongebob. Is that a dog? No. Both behavior and shape are important aspects of the form.

              It is seriously easy IMHO. You are looking in a correct direction with the VR simulation. Everything you need there, the code and data, is form in this sense, including behavior. And why are you jumping from visual shape only to brain only? Of course the virtual dog needs a stomach, but not only because otherwise the brain does not work right (why would the brain be so especially important?), but because the stomach is an important feature of the animal itself.

              Anything you can easily copy is form. Any information.

          • Sort-of-aside: Philosophically, that data is not “the idea of a dog”, but “a description of the idea of a dog”, more or less.

            (In software terms, an implementation of the idea of a dog, say, but not a concretization the way an instance-of-dog would be.)

            The idea of a dog per se is a Platonic Form or a Kantian Schema (pick yer poison), existing as a unit, uncopyable because outside of presence itself, etc. – it’s the very concept of dog-qua-dog, which is not data and not copyable; the concept of “two ideas of dog-per-se” is meaningless*.

            In software terms, the Forms are singletons.

            I mean, more, that “just because it’s immaterial doesn’t mean it’s easy and perfectly copyable”, not in principle.

            (That said, note that I’m a materialist in my theory of mind, on simple evidentiary and Ockham’s Razor grounds.)

            (* Assuming we agree on what dog-per-se is; dog-according-to-one-culture and dog-according-to-another would be distinct. But equally, not copies of one another.)

        • Do you have a dog? Have you ever?

          Suggesting a statue of a dog has ‘dogness’ is like suggesting a chalk outline of a corpse at a crime scene has ‘manness’.

          But maybe it’s just me.

          The statue *is* a map.

          • I have a dog. He is a doggy dog. A sort of not-quite platonic ideal of a dog. He pisses on firehydrants, light poles, weeds, and once someone’s shoe.

            He likes to chase bunnies. And squirrels, and cats.

            He likes it A LOT.

            On our walk there are several stone or ceramic bunnies.

            You may think that a stone bunny has no bunnyness, but Shadow cannot be convinced otherwise until the bunny doesn’t run as he approaches.

            Which is kinda funny to watch, because at a certain point there is, in his little doggy brain, a collision between “bunnies run when I get close (model)” and “this bunny isn’t running (reality)” and he sort of flips into a different mode of approach.

            So Shadow does think there is at least *some* bunnyness to the stone bunny.

            • He’s got multiple ongoing map/terrain problems when he talks about duality.

              The abstract idea of dogness is NOT the similarity between the statue and the dog.

              The statue is a map. It’s a representation of dogness, abstracted in a particular way. It captures the 3d physical outline of a dog, but that’s it.

              The character Dog in Half Life 2 is also a map. A representation of dogness. Didn’t look like a ‘dog’ worth a damn, but nevertheless conveyed a great deal of dogness.

              How effective a representation of dogness is an actual dog that just happens to be dead? That dog don’t fetch.

              That form/matter business, is so wrong it leaves me at a loss for words.

      • It’s funny, but I’ve found being brutally reductionist is a very useful heuristic that’s allowed me to see through and avoid a great deal of absurd clever-sounding bullshit that I’ve seen people smarter than me eat up like candy.

        Along the lines of, only an intellectual could believe something so stupid.

  3. A long time ago I read a .signature that said “The only form of intelligence that matters is the capacity to predict — Colin Blakemore”. That struck me as profound.

    • I don’t know if I’m disagreeing or restating, but IMO the only form of intelligence that matters is the capacity to figure out what to do when shit starts going wrong.

      Predicting failure is moderately useful.

      Preventing failure is really useful.

      Reacting well to black swan failures is awesome.

  4. Typo: Top-Town for Top-Down.

    Interesting news, if true (call back to Civil War-era headlines.)

  5. “What Peirce tells us perhaps best expressed in an antinomious way: There is no “Truth”, only prediction and test.”

    I’m quite certain that Peirce would not accept that as an expression of his thought. A man who wrote the following –

    Different minds may set out with the most antagonistic views, but the progress of investigation carries them by a force outside of themselves to one and the same conclusion. This activity of thought by which we are carried, not where we wish, but to a fore-ordained goal, is like the operation of destiny. No modification of the point of view taken, no selection of other facts for study, no natural bent of mind even, can enable a man to escape the predestinate opinion. This great hope is embodied in the conception of truth and reality. The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real. That is the way I would explain reality.

    – could not then agree that “truth” doesn’t exist; for if there were no truth, investigations could not converge as Peirce says they do to one conclusion.

    Nor, if the Stanford Encyclopedia of Philosophy is right about Peirce, would he have agreed that “mind” is no more than an epiphenomenon of a brain constantly updating Bayesian priors. In fact, he held that everything is mind to some degree; inanimate things were mind that had “congealed” into a single fixed thought. Peirce was much closer to Aristotle than to any materialist.

    “Surfing Uncertainty” says, according to Alexander, that predictive processing is the basic operation of nervous systems in general (not just human!) – an extremely interesting theory. But I don’t see how it resolves the mind/body puzzle, or why you’re spiking a football in the end zone (figuratively, anyway.)

    • >I’m quite certain that Peirce would not accept that as an expression of his thought.

      You’re not pointing at a fundamental difference, just a terminological one. I was trying to hint at it by capitalizing “Truth”.

      Yes, Peirce’s passage expresses a conviction that we all live in the same reality – that our experiments necessarily all converge on confirming the same set of prediction generators, i.e. theories. In that sense, yes, truth emerges.

      But the person who wrote “How to Make Our Ideas Clear” would necessarily deny that you can have capital-T Truth, that is, treat your prediction generators as other than contingent and falsifiable.

      • Are you or Peirce more into correspondence or coherence theories of truth? For me, even before reading Scott’s article, the correspondence theory (https://en.wikipedia.org/wiki/Correspondence_theory_of_truth), which says true statements correspond to “raw data”, was obvious bunk: there is no such thing as a “raw datum”; every input is the result of not only fallible observation but also fallible theorization at the microlevel, say, theorizing about which aspect of the potentially available dataset to actually pay attention to or consider important. So truth can only mean coherence with other statements accepted as truths, the whole corpus. I think Quine’s Two Dogmas of Empiricism also made a very strong coherentist case.

        The obvious problem with coherentism is Russell’s critique, namely that both A and not-A will cohere with some other statements.

        My solution is that there are basically statement-clusters, information-clusters, and internally they are coherent. For example, one such cluster is Science. Another is Catholic Theology. A third is the “mind only” school of Buddhism, which even states that everything solid comes from collective pride and everything hot from collective anger. And so on.

        And that means navigating inside the clusters is straightforward enough, when a new scientific hypothesis is proposed, we match it with the existing corpus of accepted theories and accepted data, and based on the match it is either rejected or something else, say, an old theory or old data is ejected from the consensus, based on which one of the two competing ones is more coherent with everything else.

        But it also means navigating between these clusters is not easy and could actually take something sort of a leap of faith.

        I mean, for pragmatic people like me it is easy. Since I tinker with things and solve actual problems, just like the other 99.9% of people who have actual jobs, it is fairly obvious to leap towards the science cluster.

        But one CAN, theoretically, leap outside it. It follows from coherentism. You cannot reject some bits and pieces of science, but you can reject the whole of it, even 2 + 2 = 4. You can become a shaman or something, have a fully mystical worldview. At that point you are not coherent if you ever use a computer or go to a doctor, so this is a very difficult life, but some people really did it. If you are really going to be a mystical hermit living in a desert, eating locusts and wild honey, and never doing anything civilized ever again, then of course you can reject the whole science cluster, because what exactly do you have to lose at that point? You will live a short life followed by a nasty death anyway.

        • From which it would follow that the proposition “Santa Claus leaves gifts for good children on Christmas Eve” is true, because millions of children coherently believe it. (An example due to Bertrand Russell, from his criticism of William James.)

        • >Are you or Peirce more into correspondence or coherence theories of truth?

          I would say that the coherence and correspondence theories of truth are both obvious failures leading to pernicious nonsense. I am in little doubt that if Peirce were familiar with that distinction (which only developed after his death) he would say the same.

          The central virtue of Peircean predictivism is that it allows you to evade committing to either of these piles of crap. They only looked attractive in the first place because people have a persistent habit of trying to jump to ontological conclusions about “reality” before they have a theory of confirmation about what they observe. This is backwards.

            • >Where do you think coherence theory go into nonsense? This is easily how people actually do science.

              Paranoid schizophrenics often have coherent beliefs. It doesn’t do them any good.

          • ESR has very little patience for the correspondence theory of truth. About five years ago I reinvented it after reading a bit of Hofstadter and I proudly emailed him a write-up asking for his thoughts.

            He kicked me like a puppy that’s drug a bird carcass up onto the porch :)

            • >ESR has very little patience for the correspondence theory of truth.

              Yeah, it’s a kind of dumb mistake that can only be made by sufficiently smart people.

              In case anyone doesn’t already know how to blow it up, the problem is with the predicate “corresponds”. What’s our test for it? That is, given a proposition, what is the procedure by which we check that it “corresponds” to reality?

              Unfortunately, most people’s answer reduces to “Because my neurolinguistic prejudices say it’s obvious.” FAIL.

              The Peircean answer is “A proposition corresponds to reality if it appears in a hypothesis that generates confirmed predictions about observables.”

              • That’s not quite complete, is it? Peirce’s rule was “a proposition is true if all who examine it must eventually agree that it’s true”. In principle that works even when the proposition is about unobservable things and doesn’t entail any observable consequences. Whence comes the limit to predictions of observables?

                • >Whence comes the limit to predictions of observables?

                  Peirce, and I, would deny that any prediction not grounding out in observables is meaningful.

                  We only have unobservable terms in our theories in order to make predictions about observables. So, yeah, go ahead and have a theory in which “electrons” are real, but you only get to confirm it by making them do stuff you can measure.

                  See also: invisible pink unicorns.

                  • How about this prediction: “The ratio of a circle’s circumference to its diameter is exactly pi.”

                    Despite appearances, this is not a proposition that grounds out in observables, because circles do not exist in nature. (There are things strongly resembling circles, to be sure, but those are at best polygons with sides too short to distinguish.) How, then, can you see a meaning in it?

                    • All methodologies for testing predictions result in measurements that are approximations, and they all take place as time moves forward. Your application of idealization is misplaced. That only works in abstraction space where measurement can be (in theory) ideal and time direction can be ignored.

                    • >Despite appearances, this is not a proposition that grounds out in observables, because circles do not exist in nature.

                      Er, so what?

                      >How, then, can you see a meaning in it?

                      I can write it as a wff in a system that axiomatizes plane geometry. On that level, the statement predicts the existence of a proof chain in the formal system. Wouldn’t you describe whether one can reach a proof from axioms via valid steps as an observable? Sure, it’s theory-laden as hell, but so is every other percept.

                      But I think that’s not the kind of meaning that interests you. The kind of meaning you want only exists in the presence of some kind of model-object relationship between plane geometry, numbers, and some set of observables. In that case, the prediction to be falsified is that tests on the observables will match tests on the model.

                  • “Peirce, and I, would deny that any prediction not grounding out in observables is meaningful.”

                    How do you feel about the logical positivists? I ask because Carnap wrote a book called “The Logical Structure of the World” which, I think, endorses the same view but from the other direction, starting just with sense data.

                    Or something. I haven’t read it yet and am trying to decide if I should.

                    • >How do you feel about the logical positivists?

                      Maybe Carnap got past the problem, but I remember the LPs as a gang that developed an elaborate correspondence theory of truth, then belatedly noticed they had nothing to ground it in. Oops…

              • “The Peircean answer is “A proposition corresponds to reality if it appears in a hypothesis that generates confirmed predictions about observables.””

                Yup. One way or another I wound up there. I guess I didn’t realize I was a Peircian.

      • That doesn’t separate Peirce from Aristotle or Aquinas, though. “Nothing is in the intellect that was not first in the senses”, and the fallibility of the senses, were both Aristotelian ideas.

        And didn’t James and Dewey depart from Peirce specifically by allowing, where Peirce denied, the possibility that the road of inquiry leads nowhere?

        • >And didn’t James and Dewey depart from Peirce specifically by allowing, where Peirce denied, the possibility that the road of inquiry leads nowhere?

          That wasn’t their fundamental error. Their fundamental error was degrading “what is predictive” to “what is useful”.

          Once you’ve done that, yes, the road of inquiry can lead nowhere.

          • Their fundamental error was degrading “what is predictive” to “what is useful”.

            Which mutates approximately 4.1 picoseconds later into “what is politically convenient to believe.”

            • >Which mutates approximately 4.1 picoseconds later into “what is politically convenient to believe.”

              That is correct, and exactly describes the historical decay process.

              Peirce was an unfortunate fellow. His academic reputation has still not recovered from the vulgar, stupid way James and Dewey garbled his ideas. But if he had only written that one essay On Making Our Ideas Clear, I judge it would still make him the most important philosopher of the last thousand years.

  6. My working definition of “truth” has always been the accurate perception or conception of reality. And since time moves forward, this is always a predictive exercise.

  7. Eric, that was a very nice find.

    What interested me was the degree to which there is some kind of class hierarchy at work, as if the brain maintains some kind of “object oriented” database with top-layer concepts like “medical professionals” tied into bottom-level concepts like “nurse.” This ties into one of my ideas about intelligence, which is that the ability to draw fine distinctions is an important aspect of intelligence. The number of layers and divisions in the brain’s object-oriented database is probably a fair predictor of IQ.

    • I think you may be overfitting your mental model to the textual example.

      The general idea is more like a mesh of probabilities – Markov chains come to mind as an example of this: when node X shows up, these other nodes Y,Z,K,M have higher odds of showing up. No ‘concept’ or ‘object’ with ‘properties’ is necessary for this – only stimuli happening in temporal proximity to one another.
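      A toy sketch of that idea (the stimuli and counts here are invented purely for illustration):

```python
from collections import defaultdict

# A "mesh of probabilities" learned purely from temporal proximity:
# no concepts or objects with properties, only counts of what follows what.
transitions = defaultdict(lambda: defaultdict(int))

def observe(sequence):
    """Count which stimulus tends to follow which."""
    for a, b in zip(sequence, sequence[1:]):
        transitions[a][b] += 1

def predict(stimulus):
    """Rank follow-up stimuli by learned relative frequency."""
    counts = transitions[stimulus]
    total = sum(counts.values())
    return sorted(((b, n / total) for b, n in counts.items()),
                  key=lambda pair: -pair[1])

observe(["doctor", "nurse", "hospital", "doctor", "nurse", "scalpel"])
print(predict("doctor"))  # → [('nurse', 1.0)]
```

      Nothing object-oriented is needed: “nurse” gets predicted after “doctor” only because the two stimuli kept occurring in temporal proximity.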

  8. > …he recast “truth” as predictive accuracy, asserting that our only (but sufficient warrant) for believing any theory is the extent to which it successfully anticipates future observations.

    I’m not sure I completely agree (or, perhaps, I do not completely understand), although it only appears to be a problem on certain edge cases. Imagine, using standard techniques, we prove that a given Turing machine, with access to $LOTS of memory, will halt in $CONFIGURATION after $LARGE number of steps. But, thanks to the Bekenstein bound, a physical analog of such a machine cannot be built in our actual universe.

    I would be inclined to argue that such a proof offers a counterfactual ‘prediction’ – had the fundamental constants of our universe been different (e.g. smaller Planck constant), we would expect a physical analog of the given Turing machine to halt in $CONFIGURATION. But even though the proof’s ‘prediction’ is not one we can ever actually check, I’d say it’s still true or false.

    • You model that ‘alternate’ universe and that Turing machine – your prediction is about the models, and it can check out or not. If you have no models, you have no predictions.

      Your prediction with regards to the actual universe would be according to the model we have: given X and Y constraint, the machine can’t exist. And that can be a good prediction or a bad prediction – and if it’s bad, it means your model is wrong, and the lower level data (the actual machine, in this case, because denying the prediction implies actually building the machine) would cause you to alter your model (either of Turing machines, of this universe, or both).

    • > But even though the proof’s ‘prediction’ is not one we can ever actually check, I’d say it’s still true or false.

      You have two different notions of “proof” and “truth” confused – one relating to demonstration in a formal system vs. one relating to theory confirmation in the observable world. Dig up my essay “The Utility of Mathematics” and read it, then we’ll talk.

      • > You have two different notions of “proof” and “truth” confused

        Yep. I’m surprised I missed that.

  9. Very interesting. I’ve come to the conclusion that our brain (connected to our body) is a very good pattern matching machine. This article describes that pattern matching in more detail, including our sometimes perverse tendency to try to fit things into a pattern.

    I train people in a complicated technical trade that eventually results in a very broad experience, and the process starts with learning to enjoy the confusion of new learning. At one point it becomes a metalearning prediction: I don’t know, but I know I can figure it out.

    We do lots of troubleshooting and the common patterns of failure fit into this. Facing some issue, all the patterns learned through experience are fitted to the incoming data, until something fits. It may or may not be right, but to shake yourself loose from that conclusion requires concrete data that can’t be ignored.

    I regularly have the weird experience of seeing with my hands, ears, the vibrations and smells. I have the predictive pattern of what the thing is doing, from experience, and then can almost see the innards working. I encourage my apprentices to touch and experience everything to get that same feel. And instruments become an extension of my sensory experience, directly getting fed into the bottom up process.

    The part about movement was interesting. I was thinking of sports or any dynamic physical activity; the senses see the ball coming and predict where your racket should be. Practice allows that to be a background process, and the higher-level top-down prediction is used for more complex strategies, putting a spin on the ball, etc. The racket and ball become a physical extension of your mind.

    Fascinating.

    • Have you read Norman Doidge’s books on the plastic brain? He comes from the clinical side, but what he writes about pain and other disorders is quite fascinating. Essentially he describes a series of techniques by which the top down predictive patterns that sometimes perpetuate an illness can be changed through stimulation.

  10. ““Mind” is what we observe as the epiphenomenon of that engine running – its operating noise, more or less.”

    That is, more or less, the position Daniel Dennett has taken for decades. Except, it is not noise, but it is needed to tie together the history of the individual into a timeline that can be interpolated and extrapolated to allow inferences.

    “Endless waves of top-down expectations crashing against endless waves of bottom up sense data, interpreting predictive failure as unwelcome surprise.”

    I do not see what is new here. I was taught 30 years ago that neurons only encode differences over place and time. Also, all sensory systems predict change and encode only the difference. This is beautifully illustrated by eye saccades in the visual system.

    Bottom up/top down has also been used to explain speech understanding for decades. Nothing new here.

    If your point is that there is no Truth, then you are jumping from facts to morals. How the brain works does not tell us whether there is Truth or Not. We need more steps in between before we can reach that conclusion.

    • >I do not see what is new here.

      What’s new here is a set of more specific claims, most notably that predictive modeling with Bayesian updates of priors is what the hardware is doing rather than being an epiphenomenal effect of lower-level processes of a different kind.
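      The kind of update being claimed is, at bottom, ordinary Bayesian conditioning. A minimal sketch (the likelihood numbers are invented for illustration):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    joint = likelihood_h * prior
    return joint / (joint + likelihood_not_h * (1 - prior))

# Three consecutive observations, each 3x likelier under H than under ~H,
# push a neutral prior toward near-certainty.
p = 0.5
for _ in range(3):
    p = bayes_update(p, likelihood_h=0.9, likelihood_not_h=0.3)
print(round(p, 3))  # → 0.964
```

      The PP claim is that something functionally like this loop is what the wetware itself is doing, not merely a description we impose on it afterward.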

      • “most notably that predictive modeling with Bayesian updates of priors is what the hardware is doing rather than being an epiphenomenal effect of lower-level processes of a different kind.”

        This happens at the level of short-term habituation and at the synaptic level (Hebbian learning). This is actually what is implemented in recursive deep learning (neural) networks. Actually, that is why early, fully connected, artificial neural networks were called Bayesian neural networks.

        So you are right about the low level implementation. But I do not really see exactly what is new here.

          • From what I understand (specifically mostly from learning neural networks) the difference is one of why things communicate.

          In the model that is mimicked by NNs, a node receives one or more electrical impulses which, after being weighted, are tested against a threshold. If it reaches the threshold it activates and outputs electrical impulses of its own to one or more other nodes.

          In PP, a node receives predictions from higher abstraction nodes (e.g. the shape detector sends predictions to the edge detector) with a confidence and precision rating. A node receives sense data from lower abstraction nodes (that it sent predictions to) when those predictions don’t adequately predict the sense data. Here, you only receive input when your prediction fails to predict data. An important side note is that a node may decide that low confidence or low precision sense data is trumped by high confidence or high precision predictions and just squelch the alert (e.g. the upper level continues thinking there’s a line there even though the sense data says there is a big hole in the middle of it because of a blind spot).
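          A minimal sketch of that message-passing contrast (the threshold, the squelch rule, and all numbers are my own assumptions, not from the book):

```python
def pp_node(prediction, confidence, sense_data, tolerance=0.1):
    """One PP-style unit: pass a prediction error upward only when the
    mismatch is too large for the current prediction to squelch."""
    error = abs(prediction - sense_data)
    if error <= tolerance:
        return None   # prediction adequate: nothing propagates upward
    if confidence > 0.9:
        return None   # high-confidence prior squelches the alert (blind spot)
    return error      # genuine surprise: report the prediction error upward

print(pp_node(1.0, 0.5, 1.05))  # → None (close enough, smoothed away)
print(pp_node(1.0, 0.95, 2.0))  # → None (squelched despite a big miss)
print(pp_node(1.0, 0.5, 2.0))   # → 1.0 (surprise propagates upward)
```

          Unlike the threshold unit, this node is silent by default; traffic flows only on prediction failure.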

          • @JonCB
            That was indeed a common model of top-down versus bottom-up in, e.g., word recognition. However, a better model seems to be that top-down and bottom-up work in parallel and the first to reach a threshold in confidence “wins”. That would be a logical way of operation in systems that need to do complex identification and recognition tasks under time constraints.

            What does not seem to happen is that top-down information suppresses bottom-up information.

              > What does not seem to happen is that top-down information suppresses bottom-up information.

              Hogwash! There are plenty of examples where your expectations override your perception, to the point of denying the actual evidence of your senses.

              An easy example – the Ames room optical illusion.

              Continuing to do this, in the face of convincing external evidence that your sensorium is faulty or deceived, is a pretty good working definition for insanity.

                Many psychophysical experiments have shown that expectations do not suppress sensory information. The most famous is the cigarette/shigaret non-confusion. Every speaker of English will interpret “I light a shigaret” as meaning “I light a cigarette”. But still, that speaker will clearly note that cigarette has been mis-pronounced.

                There are a number of ambiguous cases where top down information can shift a sensory boundary. Famous example: coat/goat. The /k/ in “how to milk a coat” will shift to be perceived as a /g/ of goat at a lower voicing cue than in “I put on my coat”. But “coat” with a clear /k/ will still be heard as “coat”, whatever the context.

                Something similar happens in the McGurk effect where conflicting visual stimuli can shift the perception between /b/, /d/, and /g/.

                In the Ames room, no stimuli are suppressed. The 2D visual stimuli are there already. What changes is the 3D reconstruction of the 2D retinal image.

                • >Many psychophysical experiments have shown that expectations do not suppress sensory information

                  Maybe in your universe. In this one, that happens quite often. Consider this:

                  [Image: the “Paris in the the spring” sign]

                  Most of the time, humans automatically suppress the second “THE”.

                  • @esr
                    This happens only in reading, not in speech (although people tend to ignore stuttering). The same visual effect can be obtained by writing three “l”s with the correct layout.

                    A very nice example of this type of “confusion” is the ability to read words with scrambled letters (first and last letter correct):
                    I cnduo’t bvleiee taht I culod aulaclty uesdtannrd waht I was rdnaieg.
                    https://www.brainhq.com/brain-resources/brain-teasers/scrambled-text
                    (Schizophrenics do not seem to be able to do this trick.)
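                    The trick is easy to reproduce yourself. A short scrambler (the seed is chosen arbitrarily):

```python
import random

def scramble(word, rng=random.Random(42)):
    """Shuffle a word's interior letters, keeping first and last in place."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

text = "I couldnt believe that I could actually understand what I was reading"
print(" ".join(scramble(w) for w in text.split()))
```

                    Because only the interior letters move, the word-shape and boundary cues the visual system relies on are preserved.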

                    These illusions are based on the fact that the visual system assembles words and sentences by collecting characters and words in groups. But this system is bad at counting and ordering elements. Working memory is also very keen on pruning objects (characters and words) it doesn’t need. In your example, the two “the”s are also placed nicely with respect to the natural groupings of words, so this illusion also invokes the syntax engine of the reader.

                    In short, all these types of “illusions” have in common that the rules of the visual system are not designed to handle reading well (characters into sentences).

                    • In short, all these types of “illusions” have in common that the rules of the visual system are not designed to handle reading well (characters into sentences).

                      One: You put illusion in quotes; what about these is not an illusion?

                      Two: It doesn’t change the fact that high level knowledge of What Should Be is overriding What Is.

                    • @ian
                      “You put illusion in quotes; what about these is not an illusion?”

                      It is different from illusions like the Ames room.

                      “It doesn’t change the fact that high level knowledge of What Should Be is overriding What Is.”

                      I doubt that. In normal speech you regularly remove noise, hesitations, and restarts from the actual sound to get to the utterance. This type of ignoring is simply part of working memory: you forget what you do not need.

                      Vision works differently. When you try to count stones in masonry, your eyes will have difficulty fixating on the correct stones. The effect esr mentioned could also work if you printed it in Chinese, or in meaningless sequences of numbers or letters.

                      What I consider a problem in this type of example is that reading is a very complex acquired skill that takes years of training to master and stretches the demands on the visual system to its boundaries.

                      There are too many complicating factors and too many sensory mismatches to get a clean conclusion that “expectations” suppress “observations”.

                      There are also too many examples from much more natural behavior, e.g., speech and vision, that show that expectations do not suppress observations. These have to be explained too.

                • @Christopher Smith
                  “Winter is a communist”

                  I learn something new here every time.

                  But could you explain how this supposed factoid would be relevant in this discussion?

      • >It looks to me like the PP model in general is basically the kind of model Dennett has been expecting all along.

        That statement can be strengthened. PP is the testable neural-cognition model that Dennett and other physicalists have been groping towards for decades. Absolutely so.

    • >But you also believe in evolved instincts and this model has a problem with them:

      It appears to me that Scott thinks there’s a problem only because he doesn’t understand gene expression and morphogenesis very well.

      Come to that, I’m not an expert on those myself. But I know enough to find the explanatory replies to his query convincing.

  11. >But you also believe in evolved instincts and this model has a problem with them:

    Not at all. An instinct couples a certain stimulus – sweet taste (sugar), salt, a certain shape, smell, or touch – to the reward center in the brain. The prediction learning machinery links these rewards to the actual objects: fruit, honey, popcorn, young women/men.

    The genes change to alter the sensory preferences in such a way that the Bayesian predictors link them to the correct behavior with high probability. This goes wrong when humans create super stimuli like potato chips, candy, and breast implants.

  12. What I’m reading here:

    Consensus is that we’ll never achieve Truth. But we can narrow in on it. So we’ll never know exactly how long my foot is, but we know it’s longer than six inches. And shorter than twenty inches. And later, we know it’s longer than eight inches, and shorter than fifteen inches, and so on.

    Isn’t this Truth, though? Yes, but only in a rather vapid, uninteresting sense; of course we know that the universe is older than five minutes and a baseball is lighter than a tank, but these are all superficial prunings from the graph of possible worlds which could be true. …Well, some prunings might be truly profound (e.g. electricity propagates through pure copper within 10000 S/m of this fast), but in the limit, we’ll never get that graph down to one node. In fact, we’ll never get any interesting subgraph of that graph (e.g. the subgraph of possible worlds ignoring everything but my foot) down to one node.

    Which is somewhat interesting to me, because one of the first things I thought of when reading that it’s predictions all the way down was to try and hack that and predict that wasn’t the case and try to rely on the principle of “predictions are more likely to be false” to get me somewhere exact. Sorry, thank me for playing, but that only holds if my prediction space is actually smaller than its complement, hack better next time.

    On that note, my mind’s eye still perceives leads of interest, but I’m having trouble resolving them for now.
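    The narrowing-in picture above can be put in code (the measurement bounds are invented for illustration):

```python
# Successive observations shrink the interval of worlds still consistent
# with the data, but the interval never collapses to a single point.
lo, hi = 6.0, 20.0  # inches: initial bounds on the foot's length
for obs_lo, obs_hi in [(8.0, 15.0), (9.5, 12.0), (10.1, 10.9)]:
    lo, hi = max(lo, obs_lo), min(hi, obs_hi)
print((lo, hi))  # → (10.1, 10.9): narrower, but still an interval
```

    Every step prunes possible worlds; no step ever yields the one true node.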

    • Truth here is the conclusions reached. We have preconceptions or the top down memories and thought pathways that predict something; It’s August, the sun is shining, it will be hot today.

      During the eclipse last month, the light was similar to the January light in cold weather, and the temperature on a clear August day was cooler. Without knowing about the eclipse, the bottom up senses were signalling to the top down predictions that something was different. A new truth. It was August and clear, but cool and an odd light.

      This happens continuously during our waking hours. This describes how we perceive the world; the predicted things that we saw yesterday are only noticed if there is something different either in them via the bottom up senses or in us via the top down senses; we just met a girl, or something like that and we see the world differently from the top down.

      Truth is describing the constant challenge to our predictions by the incoming information from our senses. Working properly this is a self correcting mechanism within limits.

  13. Meanwhile, I’ll ask the same thing here that I asked on that thread.

    The literature seems to suggest that PP might be linked to actual neurological pathways. We might not know the exact causal nature yet, but one thing people are predicting is that autists have less top-down prediction in certain ways; their predictions are constantly kept very sensitive to sensory data, so they get distracted by their shirt feeling differently today or that program taking slightly longer to work than they remember. If we analyze MRIs and use other tricks in the toolbox, we might discover something physical.

    How much of this sort of predictive processing is happening in other animals? Is it more accurate to suppose that there’s some other important difference? Or is it rather more accurate that they are mainly processing machines with even less prediction mechanism relative to sensory data?

    Suppose PP were controllable via a knob or a cocktail of hormones or the presence of some number of neurons configured in a particular way with a particular cycle of aforesaid hormones, and you could just turn it up or down, possibly requiring a non-trivial amount of tissue fabrication. Could you plug that in to a frog, say, and bump it up and have a frog appear roughly as intelligent as a rabbit? Then a little higher and wind up with a horse’s level? Then higher still to pig? Corvid? Chimp? Human?

  14. @Paul Brinkley
    “How much of this sort of predictive processing is happening in other animals?”

    The same. These predictive processes are built into the neurons and networks at every level. It is just that more synapses (connections) and a higher metabolism means more accurate and more complex predictions.

    “Could you plug that in to a frog, say, and bump it up and have a frog appear roughly as intelligent as a rabbit?”

    Basically, yes. However, a bicycle with a Rolls Royce jet engine will not fly. You need more structural adaptations. Sometimes, retrofitting will not work.

    Growing neural networks require changes in organization. A bigger brain needs better senses to get information, better structured long range connections, better modular organization, faster connections, more energy, better cooling, better insulation against environmental disturbance etc.

    To make a frog as intelligent as a rabbit, it would need a warm blooded body the size of a rabbit and some biochemical and anatomical changes.

    On the other hand, an octopus is basically a slug with an intelligence boosted to mammalian levels (maybe even able to pass the mirror test?), so why not a frog?

  15. This is very nice, but it raises a rather interesting question: if our brains are basically executing finely-tuned Bayesian reasoning, why is our reasoning process so, for want of a better word, crappy?

    I mean, just start with all the cognitive biases that have been documented. You would think that a Bayesian reasoning system wouldn’t fall prey to such; yet there they are. Why the heck did evolution not calibrate the system properly?

    Or just think of humans’ terrible record at integrating evidence to arrive at reasonable scientific hypotheses about the world. People are generically highly confident in matters of religion, politics, economics, and so on; but given how contradictory our beliefs in these areas are, we can’t all be right. Why do vastly different (and therefore mostly false) higher-level beliefs persist in the face of roughly the same stream of lower-level sense data? Wouldn’t a Bayesian mental module, presumably carefully tuned by evolution, be expected to perform much better?

    I have a hypothesis here, and think it might be quite significant for those interested in improving their rationality. But before I offer it, I’m curious what others think. Do you agree that there is a puzzle here? And what resolution do you suggest?

    • >Why the heck did evolution not calibrate the system properly?

      The generally accepted answer is that it is more or less calibrated properly – for an environment that no longer exists.

      Evo-psych people speak of the EEA – the Environment of Evolutionary Adaptedness. Before writing, before cities, before civilization. In that context many of our peculiar cognitive biases make more sense as heuristics for bounding risk.

      We no longer live in that environment, and our genomes have not fully adapted to newer ones. Which change faster than selection can handle, anyway.

      • Well, yes, that almost has to be correct, but it’s a bit generic relative to what I have in mind. My question is more what aspects of our current environment differ from the EEA so as to make us such poor reasoners in certain domains in our current environment, and how this fits in with the specifics of the predictive processing model. And a related question is how to use this insight to improve our reasoning.

        The short version of my answer is that our brains’ shortcomings derive from taking a module designed for navigating through concrete physical space, and using it for abstract theoretical reasoning.

        If your problem is “learn about a physical space and navigate through it”, you quickly acquire high levels of confidence in your basic beliefs about reality, relevant changes in your environment are quite obvious, and there’s a lot of baseline noise to tune out.

        By contrast, for abstract theoretical problems there’s much less certainty possible, and the sort of contradictory evidence you’re likely to encounter is much more subtle.

        The result is that our world-model-updating system tunes out the evidence that contradicts our beliefs because the level of conflict registers as noise, since that’s just what it would be in concrete physical navigation problems.

        Note that this isn’t just EEA vs. modern. It predicts people should be pretty good at dealing with concrete practical everyday questions, and only go insane about more abstract ideological sorts of questions.

        • >My question is more what aspects of our current environment differ from the EEA so as to make us such poor reasoners in certain domains in our current environment, and how this fits in with the specifics of the predictive processing model.

          That is not a simple question. We are only at the beginning of the lines of research that might answer it in detail.

          >The short version of my answer is that our brains’ shortcomings derive from taking a module designed for navigating through concrete physical space, and using it for abstract theoretical reasoning.

          This seems to me like a promising idea.

      • A physical analogue to this is wisdom teeth. Often we have them extracted because the space available on the jaw is less than that needed to accommodate all of them. But we evolved them in an environment without dental care, where by the time we reached adulthood and these teeth erupted, we’d already lost some of our teeth, which left enough room for these new arrivals.

  16. Viewing hack mode in terms of the PP model is intriguing. For the problem focus, surprise is valued over smoothing. For everything else smoothing is valued over surprise.

    Yanking out of hack mode raises lots of alarms as the predictive model has been using smoothed data.

    You have to be a surprise junky to enter hack mode. You are actively seeking surprises.

    • >You have to be a surprise junky to enter hack mode. You are actively seeking surprises.

      Not quite right. I would say you are actively seeking the resolution of surprise – the moment when it all comes together and your priors readjust.

  17. What Peirce tells us [is] perhaps best expressed in an antinomious way: There is no “Truth”, only prediction and test.

    FWIW, the word “antinomious” appears in no dictionary I have access to. I suspect the intended root may be “antonym”, but attempts to track down an existing variant have failed.

    • >I suspect the intended root may be “antonym”, but attempts to track down an existing variant have failed.

      It’s a play on “antinomy” n. a contradiction between two beliefs or conclusions that are in themselves reasonable; a paradox.

      I meant to group my restatement of Peirce with the style of Zen rhetoric that attempts to induce enlightenment through the contemplation of paradoxes. I was aware it was a maneuver that might fail.

      • Thanks. That fits with my other guess, “antinomian”, but I was aware only of the ecclesiastical and related usages. I’m pleased to add “antinomy” to my personal lexicon.

  18. Incidentally, on your / a Peircian account of truth, how do you treat statements that could never in principle produce sense perceptions, but seem to be logically implied by well-validated theories that do produce successful predictions?

    For example, consider a spaceship that leaves Earth at near lightspeed and eventually goes far enough away that (given the expansion of the universe) our future light cones do not intersect; or parallel time paths implied by Everett interpretations of quantum mechanics; or other universes created by eternal inflation models of cosmology. In these examples, our theories imply the existence of entities that we cannot ever interact with in principle. So would you say that statements about these entities cannot be true or false?

    • >So would you say that statements about these entities cannot be true or false?

      Yes. In my (and I think Peirce’s) view, “untestable propositions” is a fundamental category on the level of “falsified propositions” or “confirmed propositions”.

      • This has me thinking.

        As I read it from the thread etc., “truth as predictivity” leans towards being much like Platonic Knowledge.

        (Knowledge per Plato being “justified true belief”.

        Belief is a predicate of having a working prediction model; obviously you believe the outcomes once you’ve demonstrated the predictions work.

        Justification is also a predicate of demonstrated predictivity; what could better justify the belief than that?

        And truth, in the sense Plato uses, which IIRC in this context is mere commonplace correspondence (“the thing claimed is actually so in the real world”) is also there, because, again, the predictions work.)

        Thoughts on that? Super obvious to you and you assumed to us, or you don’t even think of Plato (can’t blame you – I only do because I’m a trained Philosopher), or somewhere in between?

        • >Thoughts on that? Super obvious to you and you assumed to us, or you don’t even think of Plato (can’t blame you – I only do because I’m a trained Philosopher), or somewhere in between?

          Obvious to me. Also I don’t think about Plato much because I consider him the original begetter of the single stupidest persistent error in Western philosophy, the rush to do a super-elaborate ontology before you have any notion of how to test and confirm propositions.

    • It’s more interesting to consider entities that are part of theories we don’t believe, in this connection. For instance, does the luminiferous ether exist? At one point we thought it did, because Maxwell’s equations imply that light is a wave and light waves need a medium to exist in. Then quantum mechanics came along and replaced Maxwell’s equations, and the need for a medium in which light exists went away.

      As a matter of principle, it wasn’t possible to interact with the ether directly, only with the disturbances in it called “light”. Yet in 1890 people would have said the luminiferous ether is real. Forty years later they’d have said it wasn’t, as we do now. The proposition “the luminiferous ether exists” isn’t testable, but that didn’t stop anyone from thinking it was true or false.

      • The proposition “the luminiferous ether exists” isn’t testable

        Michelson-Morley, for the definition of “luminiferous aether” as used at that time.

        • The Michelson-Morley experiment shows that, if there is a luminiferous ether, the Earth is always at rest relative to it. That doesn’t show there isn’t an ether, just that it behaves very oddly. And no, general relativity doesn’t explain the experiment; it assumes the experiment’s result as an axiom.

          The experiments that first raised doubts of the ether’s existence were Einstein’s studies of photoelectricity, because those showed that light was, in some circumstances, particulate; a thing that Maxwell’s equations don’t allow for. But that’s quantum mechanics, not relativity.

          Which is off the present point. I’m asking, in what sense has the luminiferous ether’s existence been falsified?

          • It hasn’t been falsified in the Russell’s-teapot sense, but given that we know the earth is in relative motion to the solar system, the solar system in relation to the galaxy, and so on, the observation that if the aether exists it’s in the same reference frame as the earth makes its existence vanishingly unlikely even before the introduction of a model that makes it unnecessary.

          • >Which is off the present point. I’m asking, in what sense has the luminiferous ether’s existence been falsified?

            It’s no longer a term in a theory with a competitive predictive record.

            In principle, the same thing could happen to…say, “electrons”.

      • >As a matter of principle, it wasn’t possible to interact with the ether directly, only with the disturbances in it called “light”. Yet in 1890 people would have said the luminiferous ether is real.

        You’re making this question appear more difficult than it is.

The reason “luminiferous ether” was considered “real” was that it occurred as a term in a prediction generator that seemed to work. The same is true today of (for example) “dark matter” – unobservable, and can only be deduced to have a role in a predictive theory with low Kolmogorov complexity.
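Kolmogorov complexity is uncomputable in general, but compressed size gives a computable upper bound, which is enough to illustrate the “low-complexity prediction generator” idea. A minimal Python sketch (zlib output length as a crude stand-in compressor; the strings are made-up examples):

```python
import random
import zlib

def complexity_proxy(s: str) -> int:
    """Crude upper bound on Kolmogorov complexity: the length of the
    zlib-compressed encoding of the string."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly regular description compresses far better than noise of equal length.
regular = "ab" * 500
random.seed(42)
noisy = "".join(random.choice("ab") for _ in range(1000))

assert complexity_proxy(regular) < complexity_proxy(noisy)
```

A theory whose description compresses well while still predicting the data is, in this crude sense, the “minimum-complexity” candidate.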

        • I was driving toward a point, which I can now make:

          “Untestable” propositions, or unobservable entities, fall into two radically distinct sets. There are entities which, though unobservable in themselves, are necessary parts of theories which are confirmed by other observations. And there are entities which are not part of such theories, either because they’re parts of theories that make false predictions, or because they’re parts of theories that make no predictions at all (by being consistent with all possible observations.) The former set is normally considered true, and the latter false.

          Therefore “untestable” isn’t a fundamental category, but a conflation of two categories, as different from each other as true predictions are from false predictions.

          • >There are entities which, though unobservable in themselves, are necessary parts of theories which are confirmed by other observations.

            Yes, there are. I respectfully suggest that you are confused.

The fact that entities are unobservable does not mean that propositions about them are untestable. One might say that the existence test for an unobservable is whether it is entailed in the minimum-complexity member of the set of most predictively successful theories. For a contemporary example, think “virtual particle”. For an obsolete one, see “luminiferous ether”.

            >And there are entities which are not part of such theories, either because they’re parts of theories that make false predictions, or because they’re parts of theories that make no predictions at all (by being consistent with all possible observations.) The former set is normally considered true, and the latter false.

            You’re confusing yourself by confounding the existence of entities with the testability of propositions. This is Plato’s fscking put-the-ontological-cart-before-the-confirmational-horse again.

            Entities cannot be “true” or “false”. Truth and falsehood are values of propositional claims.

            For an example of an untestable proposition, consider “Green ideas sleep furiously.” That is untestable because there is no way to unpack those natural-language primitives into an assertion that implies an experiment. This is the fundamental category I am talking about.

            • IMO “colorless green ideas sleep furiously” isn’t a proposition at all, as it fails to mean anything. By an untestable proposition I mean a statement which has a fixed meaning, but can’t be verified or falsified even in principle.

              And if you’re counting “being entailed by a theory that’s strongly confirmed by experience” as “testable”, then I frankly don’t see how there can be an untestable proposition. The only way to say something that’s meaningful but not verifiable is to talk about unobservable entities; that’s why I mentioned them in the first place. (I may be unclear on occasion, but not confused.)

              • >By an untestable proposition I mean a statement which has a fixed meaning, but can’t be verified or falsified even in principle.

                Excellent. You’re making the standard mistakes of a non-stupid person in a sequence I have often seen before. This is usually the last or second-to-last one before they get it.

                To get the rest of the way, consider the following definition: The “meaning” of a proposition about observables is its ensemble of possible confirmation procedures.

                Can you generate any counterexample in which this fails to capture the natural-language sense of “meaning”? If not, why not?

                Can you generate any other definition of “meaning” that this one does not subsume?

                (Do consider both propositions about observables and propositions in a formal system; Euclidean plane geometry will do nicely.)

                • “The “meaning” of a proposition about observables is its ensemble of possible confirmation procedures.”

                  Strike “about observables”, and I don’t see anything wrong with that definition. What’s your point?

                  • >Strike “about observables”, and I don’t see anything wrong with that definition. What’s your point?

The set of untestable propositions is the set of those with empty confirmation ensembles. Thus, untestability coincides with meaninglessness – not just contingently but necessarily.

                    You said: “By an untestable proposition I mean a statement which has a fixed meaning, but can’t be verified or falsified even in principle.” That set is empty. It’s like trying to specify the set of all propositions with confirmation-procedures ensembles that don’t have any confirmation procedures.

This is one view of – one angle on – Peirce’s winning move. This is why he’s the most important philosopher of the last thousand years.

                    (There are a couple of other ways to view the winning move.)

  19. disorders such as autism can be understood as consequences of very specific processing failures with testable consequences

    Autism in all its collection of seemingly disparate presentations is fairly comprehensively explained by turning up synapse gain too high. Nothing else is required, and recent indications that autistic brains are significantly less pruned go most smoothly with a model of “overactive neural connections”.

  20. Our experience of reality is always future-oriented because of sensory latency. This is why our reasoning processes must be predictive in nature. It is only in abstraction space that we can toy with time direction, and that is a relatively new phenomenon that likely post-dates our development of complex language skill.

  21. Not being a terribly big fan of deeply nested comments, let me restate my case on the top level: if you think computation is a good model of the mind, you are not a materialist. You are a hardware-software dualist, where the material part is the hardware.

Information, like the number 3, is not a material thing. It can be represented as 3, III, 100, three, drei, [ [ ], [ [ ] ], [ [ [ ] ] ] ], but none of these are that number; they are all representations of it. Moreover, all of these representations are correct and perfectly accurate, not roughly accurate like the 3D model of a rock. The number itself is basically invisible. And that makes it seriously not like a rock.

    You reply that the number is basically just a concept. Yes, it is. A concept is a thing that lives in our minds. That is the whole point. What is “just” about it? Aren’t minds part of reality? Representations of it live on paper, HDD, screens, and the brain. There is probably a shape of neural links and chemicals in the brain that represent the number. They are still not the number. The number lives in the mind, the representation in the brain. We don’t think of neural links and chemicals when we think of the number three.

This doesn’t mean the mind and the concepts in it are ghosts, supernatural things. The problem is that the Enlightenment, beginning with Descartes, seriously muddled concepts like immateriality and supernaturality; the Medievals had clearer concepts of them, which is what Ed Feser is conveying these days.

AFAIK it merely means information has different characteristics than material objects. Like very good copiability. Or that it can have multiple representations, each entirely correct, none more correct than another – which is not true of a rock, as its only perfect representation is itself.

    This doesn’t mean Platonic forms exist. The first man who said Platonic forms are bullshit was literally Aristotle, and because they are really bullshit most serious Medieval thinkers were Aristoteleans and the few Neoplatonists were weirdo mystics, the hippies of the age.

Aristotle said the essence of triangularity exists in a wooden triangle mixed with matter (you could say, implemented in matter), and that in its pure, unmixed form it exists in the human mind, which abstracted it away after noticing the similarity between various wooden, metal, etc. triangles – but it does not exist in any sort of weird Platonic extradimensional realm. This view isn’t mystical or overly abstract; it seems to be entirely common sense. We really do create concepts and categories by noticing similarities in the world. As far as material objects go, there is really not much more to forms or essences than similarity. There is no form of a perfect bed living in a Platonic realm; there is just the fact that real beds are kind of similar, mostly because they are made for the same purpose for the same species, us. This is what the Medievals believed. We abstract away this similarity and build a mental model in our minds. Do we actually do that? Yes. This model is the pure form or essence, de-mixed from matter. It is information. Data. It can be used to recreate the bed, or to design new beds for the same purpose. And some things, like numbers, exist only as a pure form in the mind, albeit representable in many different ways.

    Natural, physical objects have properties. Information also has properties. It turns out, these properties are wildly different. If you call the bed and the rock natural or material, then it makes sense to call information immaterial and extra- or supernatural. This doesn’t mean ectoplasmic ghost magic. You think it means ghost magic only because the Moderns from Descartes on hopelessly muddled these concepts.

These are absolutely pragmatic things that pay rent. The copiability of information is behind our “is piracy theft?” debates. Representing a number as binary bits is of course hugely important, and it is important to know that this is not less accurate than representing it in the common decimal notation – that we can trust binary computations, that we will NOT receive a letter from the bank saying “sorry, due to the inaccuracy of binary-to-decimal conversion, your real balance is…”. This is entirely pragmatic.

So you are not a materialist. If you were a materialist you would believe that in order to make AI we just need to build extremely powerful hardware and then it will just happen. If you think, as AFAIK everybody does, that to build AI we also need to code, create software, and feed it data/information – while of course brutally powerful hardware will be necessary too – then you are a matter-information dualist. And this is why all this old stuff is interesting: it seems they, too, were. (I have been looking into these ideas for years now and still haven’t become a theist, so have no fears on that front.)

ESR and most people around here have spent their lives working with information and software, not soldering hardware. You already know they are not the same thing and have wildly different properties. Yet when you call yourself a materialist, it sounds like saying only hardware is important and software is entirely reducible to hardware. No, it is actually important, and it is not reducible to it.

    • >if you think computation is a good model of the mind, you are not a materialist. You are a hardware-software dualist, where the material part is the hardware.

      Aha. I think I understand now. You’re not a native English speaker. And don’t understand that the semantic field of “materialist” changes slightly in philosophical discourse. My apologies, Dividualist, I should have noticed sooner and made allowance for the resulting confusion. General Semanticians are supposed to do that.

      Now that I’ve clued in, I will tell you a couple of things:

      1. Your interpretation of the term ‘materialist’ would be considered a bit too literal and narrow in philosophical English. I am not saying this means you’re wrong, just that you’re using a different map for the territory, and this can be expected to create disputes that look substantive but are not.

      2. I don’t describe myself as a “materialist” exactly because of this confusion. In fact many philosophers of mind have abandoned the term in favor of describing themselves as “physicalists”. I have applied that label to myself as a shorthand but don’t find it entirely satisfactory.

      3. People who call themselves “physicalists” are really being…I want to say ontological monists but that’s not quite precise enough either. They are denying that there is an unobservable world that is causally prior to the observable one, and that the mind lives partly in that unobservable world.

      From the point of view of people disputing ontological monism vs ontological dualism or manyism, the distinction between “matter” and “information” inside the brain/mind is not very relevant. OK, they have different copying rules, but they’re a causal unity – if there ain’t no neuron firing, there ain’t no informational activity.

      When people make a big deal about information being “immaterial”, this is usually a sign that they’re going to try to sell you some kind of ontological-dualist fairytale. In reality, the observation that there is pattern in the brain as well as “this crude matter” is completely unproblematic for a physicalist, since what he’s really arguing for is the causal unity of the observable world.

      EDIT: I will add as a relevant point that while a pattern does not behave quite like a normal physical object (yes, Millikan’s copying-is-everything argument has a good deal of punch) there is no such thing as information that is not expressed as a set of physical observables. You can’t divorce information from material reality; there’s nowhere else for it to be.

      • @esr
        “You can’t divorce information from material reality; there’s nowhere else for it to be.”

I think this cannot be stressed enough:
“Information is inevitably tied to a physical representation and therefore to restrictions and possibilities related to the laws of physics and the parts available in the universe.”

        Rolf Landauer
        http://cqi.inf.usi.ch/qic/64_Landauer_The_physical_nature_of_information.pdf

        Moreover, in the brain, there is no separation between “memory/information” and “computation” as in electronic (Von Neumann) computer architecture.

          • >Another way to think of this is . . . did information exist before our species evolved?

            Of course it did, in genomes if nowhere else. I’m not getting why this question is interesting.

          • I think I see the problem. Does information exist without an interpretation.

Instead of information, use the complementary concept of entropy. The first law of thermodynamics tells us that there is energy; the second tells us what this energy will do.

            Information is that what determines what will happen with the available energy. For instance, the information in a genome will direct the metabolic energy generated in the organism to create a specific individual. No interpretation needed.

              • >Are you defining information as an instrument of causality, e.g. a natural force in the universe?

                I don’t have any idea what you think you mean by that question.

                • I’m trying to understand Winter’s comment that “Information is that what determines what will happen with the available energy.” The word “information” is just a human label for something which exists in reality (as opposed to strictly an abstraction), and that reality would still be there in the absence of humans. What are the distinguishing characteristics that make it a differentiated subset of all other real things?

              • @TomA
                ” instrument of causality”

                I do not think this is right way to describe it.

                Information is, sort of, the complement of entropy (more information = less entropy and vice versa). Entropy is a well known aspect of physics. Like water flows down-hill, energy flows “up-entropy”. When you look at the landscape, you see which way is down hill and you can predict which way water will flow. Engineers use this knowledge to handle, e.g., flood risks.

                When you look at the entropy “landscape” you can predict, and manipulate, which way energy will flow. Storing and using information is more or less manipulating entropy to produce work from energy. All these terms, “storing”, “using”, “information”, “producing”, “work”, imply a conscious actor, a mind. But the physics is exactly the same in bacteria and stars. Therefore, we often use “information” and “work” in describing bacteria, plants, or neurons, where we do not have a planning mind.

                • This is backwards; the amount of information required to describe the current state of a system is directly proportional to the amount of entropy in the system, not inversely proportional to it.

                  The description of a perfect crystal lattice is quite simple; add defects, and the information required for an accurate description increases.

                  But it’s an easy mistake to make; and may even be relevant to the debate about predictive power vs. utility.

                  Consider a log going through a sawmill. We may ascribe meaning to the resultant boards, but not to the piles of sawdust.

                  There is more entropy in the sawdust, and it would take more information to perfectly describe the configuration of the sawdust, but in general, it is not useful to do so; as far as humans are concerned, piles of sawdust are quite fungible.
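The point that a disordered configuration needs a longer description can be illustrated with compression – a sketch in Python, using zlib output size as a crude stand-in for descriptive information (the “lattice” strings are made-up examples, not a physical model):

```python
import random
import zlib

def description_length(cells: str) -> int:
    # Compressed size as a stand-in for "information needed to describe" the state.
    return len(zlib.compress(cells.encode("utf-8")))

random.seed(0)
perfect = "A" * 10_000                       # perfect lattice: one site, repeated
defective = list(perfect)
for i in random.sample(range(10_000), 500):  # sprinkle 500 random defects
    defective[i] = random.choice("BCD")
defective = "".join(defective)

# Adding disorder lengthens the description, as argued above.
assert description_length(defective) > description_length(perfect)
```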

Information is what is known, entropy what is not known, about a system. Together they make up everything there is to know about a system. In this sense they are each other’s complement.

                    A perfect crystal at zero kelvin has zero entropy and only very little information. But there is very little to know about the system.

                    • @ Winter – “Information is what is known”

                      This implies an entity capable of ascertaining knowledge. How does this distinction occur in the absence of such an entity?

                    • The last paragraph is perfectly true. The first paragraph is perfectly nonsensical.

                      Oh, sure, it might make some sense if you squint just right and you are considering SNR and trying to receive a particular message. But you have agreed with me that perfect order (the perfect crystal at zero kelvin) has very little information, yet still claim that adding disorder removes information.

                    • @Patrick
                      ” But you have agreed with me that perfect order (the perfect crystal at zero kelvin) has very little information, yet still claim that adding disorder removes information.”

                      Adding disorder requires adding energy. It also increases the total amount of “things to know” in the system.

                      I think you are trying to solve Maxwell’s demon paradox. This has been done, but is not something any one of us can repeat on their own.

IMHO the interesting part is not the amount of entropy; gases (high) and stars (low) are both sort of boring, lacking complexity. Interesting complexity appears with life, and I think life can be understood as an entropy exporter. Plants get low-entropy sunlight; animals eat them and export heat and feces. Localized low-entropy places can be built by exporting entropy.

Yet it seems it is not low entropy itself that is interesting: no matter how hard animals and humans export entropy, we are never as low-entropy as the Sun, where we got all our low-entropy input from in the first place.

It seems it is the process of exporting it, the process called life, that is inherently interesting.

                    Intelligence is probably about being really good at exporting entropy.

      • “You can’t divorce information from material reality; there’s nowhere else for it to be.”

        If that’s correct, in what sense is the Pythagorean theorem true?

        See, that theorem holds exactly only on a surface, or in a space, which has zero curvature; and no such space exists in material reality. We can find spaces that are very close to being flat, and in them the Pythagorean theorem almost holds, but none that are exactly flat – wherever there is matter, there is gravity, and space is curved.

        So how is it true that Euclid proved that the square of a right triangle’s hypotenuse equals the sum of the squares of its other two sides?

        • >If that’s correct, in what sense is the Pythagorean theorem true?

          As a proof in a formal system. That is, you can reach the theorem from the axioms via a sequence of truth-value preserving transformations.

          If you want to apply Pythagoras’s theorem to triangles in the observable world, you need an additional premise that Euclidean plane geometry models the observable system they are in. That is not a formal claim, but rather an empirical one that can be falsified – for example, by summing the angles of the triangle.

          That is, you use the formal model to generate empirical predictions about things you can easily check before you apply it to get a result you can’t easily check. Your “far” result borrows the confirmation strength of the “near” tests you used to validate the model.

Restating: Euclid did not prove that the square of your (observable) right triangle’s hypotenuse equals the sum of the squares of its other two sides – he only proved it for a set of marks in a formal system. It’s up to you to supply the demonstration that your triangle is predictively modeled by the formal system Euclid proved his result in.

          (Note: The above was not just me being an autodidact. Before I was a programmer I was a mathematician – I trained for this exact kind of reasoning.)
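The “near” test described above – summing a triangle’s angles – can be sketched numerically. Girard’s theorem gives the angle sum of a geodesic triangle on a sphere; the Earth radius and the two triangles below are illustrative assumptions, not anything from the thread:

```python
import math

def angle_sum_on_sphere(area: float, radius: float) -> float:
    """Angle sum (radians) of a geodesic triangle on a sphere,
    via Girard's theorem: the excess over pi equals area / radius**2."""
    return math.pi + area / radius**2

R = 6371.0  # Earth's mean radius in km (assumed, for illustration)

# A surveyor's 1 km^2 triangle is Euclidean to well within measurement error...
small = angle_sum_on_sphere(1.0, R)
# ...but an octant triangle (one eighth of the sphere) sums to 270 degrees.
octant = angle_sum_on_sphere(4 * math.pi * R**2 / 8, R)

assert abs(small - math.pi) < 1e-7
assert abs(math.degrees(octant) - 270.0) < 1e-6
```

Whether Euclidean geometry “models” your triangle is thus an empirical question of scale and measurement precision, exactly as the comment argues.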

          • I see a more exact phrasing is needed. What I’m asking is: are theorems of Euclidean geometry true before you try to apply them to material objects? What entities do they refer to, if no such application is being contemplated?

            At this point I’ll quote Peirce again: “The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real.” Now everyone who investigates Euclidean geometry agrees that its theorems are true, from which it follows that the objects they refer to are real. But those objects don’t exist in material reality – only approximations of them do. So where are they?

            • Why are you claiming that material real objects are approximations of Euclidean objects, rather than the other way around?

              (I confess I haven’t grokked the above convo enough to know whether that’s relevant. But it might be.)

              • >Why are you claiming that material real objects are approximations of Euclidean objects, rather than the other way around?

                It could be said the other way around. That wouldn’t matter to the rest of the argument.

Try this on: We say that a formal system A models an observable system B when some specific pairing between parts of A and parts of B turns formal theorems in A into correct predictions of measurements in B.

                Under this definition, either A could be said to approximate B or vice-versa.
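The pairing definition above can be made concrete in a toy sketch – the predicate, the tape-measure data, and the tolerances below are all hypothetical illustrations, not anything from the thread:

```python
import math

def models(predict, measurements, tolerance):
    """Toy version of 'formal system A models observable system B':
    the pairing turns theorems into correct predictions of measurements."""
    return all(abs(predict(*inputs) - observed) <= tolerance
               for inputs, observed in measurements)

# Pairing: formal 'line segments' <-> measured side lengths of drawn triangles.
pythagoras = lambda a, b: math.hypot(a, b)

# Hypothetical tape-measure data: ((leg_a, leg_b), measured_hypotenuse).
data = [((3.0, 4.0), 5.01), ((5.0, 12.0), 12.98), ((8.0, 15.0), 17.02)]

assert models(pythagoras, data, tolerance=0.05)        # models B at this precision
assert not models(pythagoras, data, tolerance=0.001)   # fails at a stricter one
```

Note that under this sketch “A models B” is always relative to a tolerance, which is one way the approximation can be read in either direction.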

            • >But those objects don’t exist in material reality – only approximations of them do. So where are they?

              Inside the productions of formal axiomatic systems. And nowhere else.

              >Now everyone who investigates Euclidean geometry agrees that its theorems are true, from which it follows that the objects they refer to are real.

              What is blocking you is a hidden premise in that statement, one you are unaware of having. You’ve confused truth in an axiomatic system with truth about observables. Because you have done this, your notion of “real” remains extremely muddled.

              This is a very, very common error. You have lots of company, even among bright people. I understand generally why it’s so common – semantic prejudices built into natural language – but I don’t know an instant fix for it. I will just have to keep explaining the central point in different ways until you get it.

              One place to start is to notice that the Pythagorean Theorem is not a truth about observables.

              • So the argument seems to be that it’s impossible for anything to exist outside of or beyond material reality. (If this isn’t what you’re driving at, my apologies for the misunderstanding.)

                But then you introduce this new concept of “within the axioms of a formal system” which is apparently the space in which propositions like Pythagoras’ theorem are true. My question is, whence the formal system? Does Euclidean geometry exist? If not, how can we make true statements about it? If so, where in material reality is this existence anchored?

                • > If so, where in material reality is this existence anchored?

                  Every formal system exists as a collection of representations, the representations themselves being collections of physical states in brains and books and computers, which are recognized as isomorphic by the human minds using them.

                  (This is a slight modernization of an insight due to Willard V. O. Quine.)

                  • It seems wrong to claim that the “existence” of Euclidean geometry (or similar cases, number theory, calculus, whatever) is dependent on the material representations of it that humans have created. Had humanity never evolved and Earth remained covered in prokaryotes, it seems obvious that Pythagoras’ theorem would remain true in the sense that we recognize it to be true; there is no conceivable causal dependency on the existence of humans. Therefore, since the theorem would remain true (although unknown by any sophont), the system surrounding it must also still exist in that case.

                    • Actually not, because the theorem describes a relationship between elements of a model, and you have to make the model first. Without humans nobody makes that model. There are no triangles without humans.

                      In other words, you are wandering into Platonist territory which is a good sign of being wrong.

Aristotle said triangularity in the abstract does not exist anywhere but the mind; it exists in objects as a similarity, but this similarity is mixed with matter, i.e. difference, and is not entirely abstract. So if you think triangles as such, and not merely triangular objects, exist somewhere outside minds or books or computers, you are wandering into Platonism.

            • I would say that a logical theorem “exists” as patterns in human brains, or other substrates that can encode logical reasoning. And saying that a theorem is “true” just means that it logically follows from the premises, where “logically follows” refers to certain cognitive operations that can be performed by human brains (with varying degrees of ability), or other cognitive processes.

              More generally, I think that a lot of philosophical confusion stems from using the same word (“true”) for theories about the physical world (which are prediction-generating algorithms), states of affairs, and logical deductions from formal proof systems.

              (I do slightly differ from Eric in that I’m willing to apply the word “truth” to statements about states of affairs that are not in principle observable by us. But his usage is consistent and reasonable — and indeed by assumption this makes no practical difference to us, except in weird cases, like if we posit that our actions might cause in principle unobservable consequences that we might care about morally.)

              • >(I do slightly differ from Eric in that I’m willing to apply the word “truth” to statements about states of affairs that are not in principle observable by us. But his usage is consistent and reasonable — and indeed by assumption this makes no practical difference to us, except in weird cases, like if we posit that our actions might cause in principle unobservable consequences that we might care about morally.)

                That’s an interesting edge case.

                I think I would analyze this as follows. Prediction generators about observables might in principle imply predictions about consequences not observable to us. We cannot in any strong sense call those latter predictions “true”, but we can assign them a contingent probability of being true if they were observable proportional to our level of confidence in the theory.

                It might then be that precautionary ethics requires us to act or not act on that judgment. If only for the reason that our belief that they will forever be unobservable could be in error.

          • Popping a few levels up for ease of reading…

            You’ve claimed the objects of geometry exist “Inside the productions of formal axiomatic systems. And nowhere else.” I don’t see how this settles the question at hand, which is whether immaterial entities exist. It only explains one abstract concept (Euclidean space) in terms of another (formal axiomatic systems.)

            Formal systems are not material objects, any more than the lines of geometry are. You can point to material representations of a formal system, say a written list of its axioms and rules of inference, but those are only representations; they are to the actual system what a drawing of a line on a chalkboard is to a line in a geometric proof. So you still face the question of how, or where, formal systems exist.

            “What is blocking you is a hidden premise in that statement, one you are unaware of having. You’ve confused truth in an axiomatic system with truth about observables.”

            It’s you who are confused, not I; for “truth in an axiomatic system” is indefinable within that system. That’s Tarski’s Theorem: propositions in a formal system can be true or false, but the system cannot express the proposition that one of its propositions is true. Hence, in a real sense, “truth in an axiomatic system” isn’t even a coherent concept.

For example: the natural numbers are the smallest possible model for Peano arithmetic – all other models include them as a subset. The Goedel sentence of Peano arithmetic is true for the natural numbers (because there is no finite proof of it), but false for larger models that contain numbers larger than ω. (Thus it’s undecidable by Peano arithmetic.) But no proposition whatever is true in Peano arithmetic, not even “2+2=4”. Propositions are true, not in a formal system, but in its models, abstract or concrete.

Now if you had said that I confuse truth about abstract entities with truth about material objects, I’d have protested, but only about a prejudicial choice of words. What you have done is confuse truth about abstract entities, a rational concept, with “truth in an axiomatic system”, which is meaningless.

            • >It’s you who are confused, not I; for “truth in an axiomatic system” is indefinable within that system,

              Whether you can reach a proof target from the axioms is the only definition of “truth within the axiomatic system” we need here. It doesn’t have the Tarski problem, and corresponds well to what people intuitively think of as mathematical truth.

              This is not merely a quibble. My definition of “existence of thing X” within a formal system only requires that it appear as a term in the production chain connecting its axioms to some theorem.

              Thus, a “triangle” appears as an element in many production chains in the Euclidean axiomatization of plane geometry. The number of such chains corresponds to our intuitive notion of the salience of a mathematical object.
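A toy illustration of “truth as reachability from the axioms”. The system below is Hofstadter’s small MIU string-rewriting system, standing in for Euclid only because it fits in a few lines; the breadth-first search returns the production chain that witnesses a string’s derivability.

```python
from collections import deque

# Hofstadter's MIU system: axiom "MI" plus four production rules.
# "Provable" = reachable from the axiom; the list of intermediate
# strings is the "production chain".
def successors(s):
    out = set()
    if s.endswith('I'):                      # rule 1: xI -> xIU
        out.add(s + 'U')
    if s.startswith('M'):                    # rule 2: Mx -> Mxx
        out.add('M' + s[1:] * 2)
    for i in range(len(s) - 2):              # rule 3: III -> U
        if s[i:i+3] == 'III':
            out.add(s[:i] + 'U' + s[i+3:])
    for i in range(len(s) - 1):              # rule 4: UU -> (deleted)
        if s[i:i+2] == 'UU':
            out.add(s[:i] + s[i+2:])
    return out

def derive(target, axiom='MI', max_len=12):
    # Breadth-first proof search; returns a production chain or None.
    frontier, seen = deque([(axiom, [axiom])]), {axiom}
    while frontier:
        s, chain = frontier.popleft()
        if s == target:
            return chain
        for t in successors(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                frontier.append((t, chain + [t]))
    return None

# MUIIU is a theorem of this system; its "existence" is witnessed by
# the chain of productions connecting it to the axiom.
chain = derive('MUIIU')
```

In this sense a string’s salience could indeed be measured by how many such chains it appears in, though nothing here settles whether that is truth or merely provability.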

              • “Whether you can reach a proof target from the axioms is the only definition of “truth within the axiomatic system” we need here.”

                No, sir. That is provability, and it’s not the same thing as truth. Neither natural intuition, nor rigorous mathematical theory, claims that provability and truth are synonymous. Unless you throw away Goedel, Tarski, Church and Turing, and reject the whole field of proof theory as blather, you don’t get to redefine truth about abstract entities as provability in a formal system.

                Fallacy of equivocation; ten yard penalty, second down.

                • >No, sir. That is provability, and it’s not the same thing as truth.

                  Formally, no. But remember the purpose here. You want to know where entities like “triangles” exist in formal systems. Where else can they exist except as recurring elements (sub-WFFs) in proof targets and their productions from the axioms?

                  • Entities like triangles don’t exist in formal systems. Formal systems are sets of propositions and relations of entailment between them. A term within a proposition refers to something, but is not the thing it refers to – “the map is not the territory”, the word “triangle” is not a triangle.

                    Which brings us back to the starting point. Euclidean geometry is a formal system (or can be made so.) Certain material objects serve as approximate models of that system; but the exact model of that system, Euclidean space, can’t exist as a material object. Nonetheless, because everyone who investigates it comes to the same conception of it, Euclidean space must be real by Peirce’s criterion.

      • @ESR If we equate information with the representation of information in a physical pattern, we lose the concept of meaning. OK, meaning is a bit of a murky topic on its own, but if we stick to predictability, then meaning is an important aspect of predicting the behavior of conscious, intentional, goal-seeking actors. Also for communication and trying to get each other to do things.

        Say you are trying to get your car fixed in Brazil, but you don’t speak Portuguese, so you try some Spanish words you hope have close parallels, and they are partially getting it but not really. So you ask them to lift the car, and then you point to it and say, see, that cosa behind the wheel is no bueno, OK? And then you see their faces light up as they get it.

        It seems two conscious communicators can encode the same concept in many ways, indeed infinitely many, since they can just as well invent a private language, a private terminology – siblings like to do that, right?

        Yet it is the same concept. That is, it can be used to predict or control behavior. Meaning is thus a tool for predicting or controlling behavior. Once you see them replacing the right part, or generally doing what you want, you know they got the idea correctly. Even if it meant you were talking to Italians in Latin, which I did once when I ran out of other languages to try. The encoding matters far less than the idea itself.

        The point here is that you may be falling into exactly the kind of essentialism you are working hard to avoid. From the predictionality angle, if two predictive concepts have widely different uses, then they are just two different concepts, even if you think “essentially” they are the same. The meaning of information and the representation of information have such widely different uses and properties that it is just not useful to reduce one to the other.

        See, I got this idea when I visited a printing press. Like any random reader, the only thing that interested me about books was their content, their meaning. And there I found people who can discuss a book at length – its typography, the paper quality and everything – without giving half a damn about what is actually written in it. They were into representation; I was into meaning.

        If you reduce one to the other, you end up saying a discussion about the content of a book and a discussion about its typography are the same discussion. When in reality, in most cases, people interested in one are not even interested in the other.

        This doesn’t mean information itself exists in a magic dimension apart from its representation. It means this reduction is lossy af, it loses meaning, behavior prediction, behavior control and all that jazz.

  22. For me, the most interesting part of this is the discussion about the EAA.

    Because my brain doesn’t seem to work like most of those around me, and I often wonder about this miscalibration.

    I have often explained to others that if you tell me something that completely comports with my worldview, I might forget it immediately after you tell me; might even forget that you told me. If you later told me that you had told me before, that wouldn’t be surprising, because I have enough self-awareness to know how that works.

    But if you tell me something that _doesn’t_ comport with my worldview, and it seems like it might be something important, I will challenge you on it, with the end goal of either falsifying it, or finding out that I need to update my mental model.

    A lot of people find this approach off-putting, because they are spouting shit all the time, even about important stuff, and can’t defend it, and don’t really want to defend it, yet get obstreperous when challenged.

    So, it must be that more “normal” humans are tuned to filter out this sort of bad data better; perhaps partly to support better interpersonal relationships. Or maybe they don’t sift through the day’s data dump until they’re dreaming. I think I need to do some immediate sifting; one consequence of my mild ADHD is that I might not remember later otherwise.

    Surprises are to be avoided; one way to avoid them is to allow for a lot of cognitive dissonance and, perhaps, what might be considered to be modal or situational reasoning (religion anyone?), and another way to avoid them is to regularly update your neural weights. Each approach has its advantages, but as someone who regularly updates his weights and who lives and works among others who don’t, I can make good money by seeing the obvious that others miss. Depending on the circumstances, I am either a genius or a complete raging nutter.

    I think one reason most people don’t regularly update their weights is that it is a difficult problem. If you make your model too large, it becomes useless. To do a good job on your model, you have to think hard about commonalities and differences and, yes, even probabilities, so that you can prune your model appropriately — “as simple as possible, but no simpler.” And this may require much more neural capacity (for technical fields) than was needed in the EAA.

    • You may be somewhat autistic. I am, too, but I have a suspicion that this thing actually just correlates with mild autism somehow and is not directly caused by it. At any rate, it is this: taking statements at their literal meaning as opposed to their social meaning. So we are prone to take a statement like “Manchester United is the best football team” as a claim at least theoretically provable by looking at their stats and rankings, while actually it is said with a social meaning.

      Social is an interesting word. What does “social” mean? I used to think it largely means being friendly with other people, sharing things, that sort of thing. Actually it doesn’t.

      Currently I think social means positioning yourself in a social landscape. This positioning can be horizontal: who are my tribe and who are its competitors, the ingroup and the outgroup. Vertical positioning is status: who are the cool rock stars, and who are those who hang their heads in shame. There is also the combination of the two: status inside a group, or, very commonly, the status of a group compared to other groups; this combo seems the most emotionally moving. For example, I was recently wondering whether words like outcast, pariah or marginalized suggest that there is some kind of mega-group most people are considered to be part of, so that those kicked out of it horizontally also necessarily suffer low social status vertically.

      Anyway, normies usually talk in the social meaning, where “MU is the best team” means “MU fans are my ingroup” and “I like the idea that MU and their fans have higher status than other teams and their fans, like, by winning a lot”.

      People who are slightly autistic simply don’t instinctively get this social meaning; they have to learn how these things work. They spend their teenage years being annoyed at how illogical everybody is, then figure this out.

      Most normies have built-in receptors for social positioning and somehow have no problem saying things that are false or unfalsifiable (not even wrong) in the literal meaning; they just mean them in the social meaning and don’t see what the problem is.

      Many autistic teenagers tend to crash into this at some point. Religion is an excellent way to crash into it: your grandma dies, and you are sad because you loved her, and people tell you that you will see her in heaven, and you get mad and demand to know how they know, and then they just blink. They did not literally mean it as a prediction. It is just the nice thing to say in these situations; beyond comforting, it reaffirms religious ingroup status and thus sends a doubly comforting “we are with you” message. The question of whether they actually believe it or not is entirely moot. Mostly they simply don’t spend enough time thinking to decide whether they are really atheist or theist. They are mostly just going through the motions. And then the autistic teen concludes they are stupid.

      They are actually not. It’s just that their intelligence is too pragmatic to be interested in truth. Going through the motions gives them a reliable church ingroup, friends and business contacts, and occasionally some consolation. These are not bad things to have. Often they are happier and have better mental health outcomes. They are fairly efficiently optimizing for useful things.

      They are, in a way, smarter than the autistic teen – I had to come to terms with that – but their smartness is different. Instead of the usual kind of smart, “this complicated conscious reasoning suggests statement X is true, predictive”, it is more like “this complicated subconscious processing suggests that saying statement X aloud will be good for me and for people I like socially; we can predict it will lead to good social outcomes for us, regardless of what inane absurd shit statement X is”.

      The autistic teen is better at conscious reasoning but lacks this subconscious social calculus machinery.

      I will be 40 this year and it took a while to figure this out. There is one thing I do not get. Why do humans need to say false or unfalsifiable things in order to position themselves socially?

      I mean, why is it that we cannot just say stuff like “I like you guys and hate those guys, and I think I am very cool, but Dave is even cooler; but Bret, fuck that guy” without any inane attempt to rationalize it? Why do we have to make up shit like “I love you guys because (complicated rationalization) but I hate those guys because (complicated rationalization); I think I am very cool because (complicated rationalization) but Dave is even cooler because (complicated rationalization); but Bret, fuck that guy because (complicated rationalization)”?

      If we didn’t need to make up false or unfalsifiable shit in order to rationalize our social positioning messages, it would be so much better. Why can’t we just be honest and simply express emotion without rationalization? Loving the ingroup and disliking the outgroup can just be pure emotion, without any rational reason; why are we not allowed to admit that openly? And why do we have to make up rational-sounding reasons for a person or group being high or low status – can’t we just say we want them so?

      Instead of “MU is the best team”, why can’t we just say “I really like MU and I really hope they get a lot of glory, it would feel good to me”? Why can’t we be this honest? Instead we say “MU is the best team”, then stumble upon an Arsenal fan and argue forever about stats and rankings to rationalize a statement that was purely emotional and should have needed no rationalization at all. So why is this honesty not allowed?

      And… this is the curse of intelligence. Stupid or poorly educated people can ALMOST say things like “I like you guys and dislike those guys, I find this cool and that not cool” and almost get away without a rationalization. But two intellectuals will argue forever about technical details, because one would gain status by proving the other wrong – even their tiny status gains need to be rationalized!

      • I don’t think I’m on the spectrum. I think my ADHD means that, when I’m focused, I need to get everything sorted out. If you engage me and make a statement, it is best if I cope with that statement now, while you have my attention.

        My major complaint isn’t about social things and magic faerie dust, or quibbles over minor technical details. It’s about plausibly true assertions that, if true, mean that a serious redirection of effort needs to take place; e.g. I am working on the wrong thing, I owe the IRS a gazillion more dollars, whatever.

  23. esr:

    From the point of view of people disputing ontological monism vs ontological dualism or manyism…

    Quibble: why “manyism” rather than “pluralism”?

  24. I’m a bit surprised that nobody’s mentioned Ayn Rand yet. Essentially Aristotelian, her position is that we live in an objective, factual universe whether we succeed in perceiving it accurately or not.

    The basis of her concept of morality is that it’s good, just, and proper to see and act on facts, and that it’s evil, stupid, and dangerous to ignore them, and in particular that to teach others to ignore reality is a vicious thing to do. (Hence her withering contempt for religion, marxism, and “mystics” in general.)

    • A question for Rand scholars – did Ayn Rand ever seriously consider the Thomist arguments for God’s existence? Those take Aristotelian metaphysics as an assumption, so anyone who tries to be an Aristotelian and an atheist has to deal with them.

      Also, did Rand ever express an opinion on Korzybski, or the converse? From what I know their systems look compatible, but that doesn’t mean they would think so.

    • The problem with Rand is not that she ignored basically every philosopher other than Aristotle; the problem is that she did not even care about that venerable Aristotelian tradition with Aquinas-level brains in it. I mean, even if I saw Aquinas as a fantasy writer, he would be about two orders of magnitude above all modern fantasy writers; this is a level of genius I did not even expect to exist before AI.

      At least Feser has a basic theory of why modern philosophy sucks: later Scholastics like Scotus and Ockham began misrepresenting Aristotelian-Thomistic ideas, and then moderns like Hume, building on that and on their own ignorance, refuted an entirely false version of them.

      But what is Rand’s excuse for ignoring all those centuries when everybody was an Aristotelian?

      And it was not just theology. For example there was serious work in economics.

  25. (pop, pop, pop)

    TheDividualist: “Actually not, because the theorem describes a relationship between elements of a model, and you have to make the model first. Without humans nobody makes that model. There are no triangles without humans.”

    That’s the disputed question – does Euclidean space (the model in which the Pythagorean theorem is true) exist independently of human reasoners?

    Once again I cite Peirce’s condition: any proposition that all reasoners who investigate must eventually agree to, is true, and the terms of that proposition refer to real things. Well, all reasoners who consider Euclidean geometry do eventually agree that it is true, which implies that it refers to something real – that is, independent of any single geometer. And if it’s independent of any one of them, it was independent of the first geometer, and will not depend on the last geometer, if there’ll be one – that is, it existed when there were no geometers at all, and will exist when there are no geometers. So it doesn’t depend on the set of geometers, either.

    Yet this reality to which Euclidean geometry refers cannot be material; everything in the observable universe differs from it in some way. So we are forced to admit the reality of an unobservable, immaterial entity that exists prior to and independently of all human minds. I don’t see any point in the reasoning that is open to challenge.

    • I think the basic problem is calling it an entity – or even a “real thing”. I don’t know if Peirce did that, but if so, that sounds like a mistake. This is not how it works, and it is actually fairly standard philosophy: there are analytic and synthetic statements. The Euclidean theorem is analytic; that is, the conclusions follow from the premises – indeed they are part of the premises as those are defined. So it does not even convey new information that could be true or false; rather, it explicates some aspects of the premises. The standard philosophers’ example is “all bachelors are unmarried”: this is not so much true as not even new information; it just explains what the term bachelor really means. With synthetic statements you get genuinely new information; with analytic statements you get information that was always there, we were just not conscious of it.

      So new information does not arise through proving the theorem; rather, the theorem is there the instant we define the premises, and the proof merely draws attention to this already existing but easily ignored information.

      So the proof is really just an explanation of a proper understanding of the premises. All a priori, analytic, mathematical statements work like that.

      That means the truth of the theorem is created the instant the premises, the rules of the game, are laid down. So they are not prior to minds. After all, minds have to be trained in a specific way even to understand the premises. That specific training lays down the rules of the game, and the theorem itself merely draws attention to an aspect of them.

      Granted, it is somewhat mysterious that humans can define such precise rules for the math game that people can eventually derive complex numbers and Mandelbrot sets from them – and these were already there from the very beginning, just not noticed. This skill is somewhat mysterious to have, and it is why mathematicians are constantly tempted by Platonism.

      • First, the analytic/synthetic distinction is not one between statements, but between the processes by which we arrive at them. “Analytic statements” are just those reached by deductive methods; “synthetic statements” are those reached by abductive methods (a Peirce coinage, that word.) In themselves statements can be analytic, synthetic, both or neither, depending on how people come to believe them.

        Second, while the Pythagorean theorem is a necessary deduction from the premises of Euclidean geometry, that doesn’t settle the question of its truth. It just pushes the matter back to those premises. Now, as I’ve already mentioned, there’s no such thing as truth within a formal system; rather, the theorems of a formal system become true (or false) when their basic terms are defined with respect to things outside that system.

        (Jumping to arithmetic again: the finite ordinals and the Church numerals are both models of Peano arithmetic. But a finite ordinal is not the same thing as a Church numeral – finite ordinals are sets, while Church numerals are algorithms. The theorems of Peano arithmetic are true in both, but those theorems don’t exhaust what can be truly said of either.)
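The parenthetical can be sketched in code (an illustrative approximation, since Python functions and frozensets only stand in for the genuine set-theoretic and lambda-calculus objects):

```python
# Two models of the same arithmetic sentences. Finite (von Neumann)
# ordinals are sets: 0 = {}, succ(n) = n ∪ {n}. Church numerals are
# algorithms: the numeral n is the operation "apply f n times".

def ordinal(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def succ(s):                          # ordinal successor: n ∪ {n}
    return s | frozenset([s])

def church(n):
    if n == 0:
        return lambda f: lambda x: x
    return lambda f: lambda x: f(church(n - 1)(f)(x))

def church_add(m, n):                 # m + n: apply f n times, then m times
    return lambda f: lambda x: m(f)(n(f)(x))

to_int = lambda c: c(lambda k: k + 1)(0)   # decode a Church numeral

# "2 + 2 = 4" is true in both models...
assert succ(succ(ordinal(2))) == ordinal(4)
assert to_int(church_add(church(2), church(2))) == 4
# ...yet the objects differ in kind: one is a set, the other a function.
assert isinstance(ordinal(2), frozenset) and callable(church(2))
```

The same theorems hold in both, but – as the paragraph says – the theorems don’t exhaust what is true of either: the ordinal 4 has four members, while the Church numeral 4 has no members at all.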

        It follows that your claim “the truth of the theorem is created instantly when the premises, the rules of the game are laid down” cannot be right. If the premises are true at all, they are true from eternity, long before anyone thinks of them, and the theorem’s truth follows from them just as eternally. If any premise is false, it always was false, and so was the theorem. If the premises have no meaning, neither does the theorem.

        Your claim to the contrary would imply that, though many Pythagorean triples were known to the Babylonians (who used them to create precise right angles for surveying) the general rule given in the Pythagorean theorem meant nothing whatsoever before the Greeks started thinking about geometry in the abstract, at which point it suddenly became meaningful and true. Which is absurd.
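As a concrete aside on those triples: Euclid’s classical parametrization generates every primitive Pythagorean triple, so the regularity the Babylonians exploited is easy to exhibit (a sketch using the standard formula; nothing here is specific to any one tablet).

```python
from math import gcd

# Euclid's parametrization: for coprime m > n > 0 of opposite parity,
#   a = m^2 - n^2,  b = 2mn,  c = m^2 + n^2
# yields every primitive Pythagorean triple exactly once.
def primitive_triples(limit):
    # All primitive triples with hypotenuse c <= limit.
    triples = []
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m*m - n*n, 2*m*n, m*m + n*n
                if c <= limit:
                    triples.append((min(a, b), max(a, b), c))
        m += 1
    return sorted(triples)

# Babylonian scribes tabulated such triples (e.g. Plimpton 322) long
# before any abstract statement of the general theorem.
for a, b, c in primitive_triples(30):
    assert a*a + b*b == c*c
```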

        • Most people know the game of chess from playing the board version with tangible game pieces, but high level masters can play the game in a purely mental context by “seeing” the game in their head and communicating moves verbally (blind chess). When this happens, the game is said to be conducted in abstraction space, and the underlying reality is reduced to the neurological processes taking place in each participant’s brain.

          In this latter example, chess is just a formal system with specific rules and relationships, and participants are confined within these constraints or they are not playing chess any longer. When played within these bounds, all actions and results are consistent. If some form of distortion occurs between a brain neurological process and game action in abstraction space, then true/false may become relevant.

          • Chess, however, is a matter of convention. If someone who knew nothing of the game were given a board and set of pieces, but wasn’t told the rules, it’s extremely unlikely that he’d come up with those rules on his own.

            By contrast, anyone can begin studying arithmetic with nothing more than a supply of pebbles, and geometry with a stick and a smooth patch of sand … and they’ll get the same answers that we do, even if they never meet a trained mathematician.

            Chess exists only in and through chessplayers; mathematics existed before people knew of it.

            • >mathematics existed before people knew of it.

              Unjustified reification. Regularities that human beings now model with entities in formal systems invented by human beings existed before humans. But mathematics is a thing humans do, and cannot have existed before humans (well, not unless other sophonts performed similar abstractions before us).

              • Would it make a difference if I said that the objects of mathematics (numbers, spaces, functions, etc.) existed before people started thinking about them? I’ve already called formal systems “maps” to an immaterial “territory”; and I’d have no problem conceding that the process of doing mathematics takes place within our minds. My concern is with that process’s final result.

                • >Would it make a difference if I said that the objects of mathematics (numbers, spaces, functions, etc.) existed before people started thinking about them?

                  No. You’re still trying to put the ontological cart before the confirmational horse. Peirce, and I, are trying to tell you that’s a mistake that will lead you to think and speak nonsense. It’s a very common mistake, but no less pernicious for being common.

                  You can say that observables which humans would have modeled using “numbers, spaces, functions, etc.” existed before humans. This pretty much has to be true unless we’re all Boltzmann brains or the universe was created three seconds ago with a fake history.

                  But that is a very different claim from asserting that the abstractions “numbers, spaces, functions, etc.” existed before humans. The only way I’d believe that is if, say, we contacted a civilization that predated human sapience and discovered that our mathematicians could understand mathematical logic written two million years ago.

                  Would all sophonts do mathematics in recognizably isomorphic ways? Yes, I think so; we’re all abstracting from the same universe, after all (see Peirce on the eventual necessity of agreement). But “mathematics” – the abstraction of that universe into zero-content formal systems – necessarily takes place inside minds.

                  • Given this point, I’m compelled to reexamine a theory I’ve had for a while.

                    Consider strings of words we use to communicate and share ideas, such as “fruit flies like a banana” or one of Shakespeare’s sonnets. I believe it’s fair to speak of the strings as sequences of words, or letters, or characters. For any fixed length and character set there are only finitely many strings. One can posit the set of these.

                    Did this set exist before people? I’ve long thought so. A sonnet is an element in it. Even if Shakespeare wrote it, no matter how moving or artistic we find it, it pre-existed the Bard. I claim that we can say he’s the author because it’s vanishingly unlikely that anyone brought that sequence to our attention, out of the space of the 10 septillion or however many possible, before he did – but it’s still possible.
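For what it’s worth, the size of that space is easy to estimate. A small sketch, where the alphabet size and sonnet length are round-number assumptions of mine, not measurements of Sonnet 116:

```python
import math

# Over an alphabet of k symbols there are k**n strings of length n:
# finite for any fixed n, unbounded as n grows.
def string_count(alphabet_size, length):
    return alphabet_size ** length

# Assumed figures: 26 letters plus space, and a ~600-character sonnet.
sonnet_space = string_count(27, 600)

# The count dwarfs "10 septillion" (10**25) by hundreds of orders of
# magnitude, which only strengthens the probabilistic authorship argument.
orders_of_magnitude = int(math.log10(sonnet_space))
```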

                    If Sonnet 116 were to appear by chance in a rockface on Phobos long before we visited, we might say it was meaningless before we arrived, and imply that we invented the mapping of its meaning to the words on the rock. Which to me would imply that the abstract ideas evoked by Sonnet 116 were of our creation. (Unless a pre-human species were found to have come up with them, etc.)

                    All this time, though, I’ve been speaking in terms of the set of all character sequences. Is there a similar set of all possible abstract concepts? A set of all permutations of chess-like games, for example? Or of all arrangements of propositions involving concepts such as functions, successorship, combination, composition, and so on?

                    I find I can’t prove that none exists. I admit that the only way I have to access this abstraction space is via a mind, which hints that I’m just coming around to Peirce from another direction. Numbers and functions exist, but so do therdiglobs and vriggoliths, and I’m only pondering the former because my ontology says they’re useful. (And it says they’re useful because I can approximate predictions about observables. Even the purest academic pursuit I can contemplate is itself something I necessarily observe with my mind’s eye.) So if that space exists, so what? What’s more useful: to imagine that our minds are exploring it, or that our minds are building it as they go?

                    Of course, this still permits Sonnet 116 on Phobos. It’s still unlikely. I can still claim Shakespeare was the author, from an argument from probability that makes me correspondingly unlikely to have been the first to author the word “therdiglob”. (I’m not, but to be fair, I knew that when I brought it up, so it’s not the best example.)

                    I’m not sure I have a point to make here; just documenting my trip.

                    • Frankly, if one of Shakespeare’s sonnets were found engraved on the surface of Phobos, I’d sooner believe a time traveler put it there than that a poem in English was implicit in the structure of the universe. Human languages do depend on human minds.

                • What does it mean for a mathematical object to “exist” apart from any physical encoding of that object?

                  • It’s the assertion (by a human) of the existence of a metaphysical realm within the reality-based Universe in which reside eternal abstractions.

    • This conundrum is sometimes stated as a classical query.

      “Was mathematics invented by mankind or discovered by mankind?”

  26. OT technical: Turn off css styles (which I do when the indentation gets 3-words-per-line ridiculous) and scroll down to the bottom of (every page I’ve checked, including main) and look between “Eric Conspiracy” and “Anti-Idiotarian Manifesto”. What I see with three completely different browsers and OSs (including Tails DVD boot, to attempt to eliminate any weird caching going on somewhere between me and A&D) probably isn’t something you want there.

    • I can see it in the source, yes. I agree, Eric; and I imagine you’ll want to know how it got there (I have no clue personally).

      • Eric has been notified, and we’re working on the issue.

        ATTENTION: Any senior-level WordPress experts out there want to lend us a hand? We’re especially looking for people who grok the internals enough to surgically remove malware without a complete “burn it to the ground”. Please contact me at:

        j d b (at) s y s t e m s a r t i s a n s (dot) c o m

        Thank you!

  27. Specifically, under the predictive processing model, the brain is a Peirce engine. “Mind” is what we observe as the epiphenomenon of that engine running – its operating noise, more or less.

    The Peirce I’m referring to is Charles Sanders Peirce. In his seminal 1878 paper On Making Our Ideas Clear he recast “truth” as predictive accuracy, asserting that our only (but sufficient warrant) for believing any theory is the extent to which it successfully anticipates future observations.

    How does this explain Austrian economics, then? :)

  28. (popping fresh)

    ESR: “Wouldn’t you describe whether one can reach a proof from axioms via valid steps as an observable? Sure, it’s theory-laden as hell, but so is every other percept.”

    No, I would not, for two reasons already mentioned. Propositions, and relations among them, are just as much abstract objects as numbers, spaces, sets and functions are; so the claim that a proposition has a proof doesn’t ground out to observations of material objects. Formal proofs aren’t just theory-laden, they’re nothing but theory. Calling a formal proof observable doesn’t make it one, any more than calling a tail a leg turns it into one.

    Secondly, you can’t collapse truth about abstract objects into provability. Quite apart from the issue of Tarski’s theorem, provability is at its base a relation among propositions, while truth is a relation between a theory and its model. A proposition doesn’t predict that it has a proof when it’s formally stated, because it isn’t, fundamentally, about whether it has a proof. (Other than Goedel sentences, of course.) Putting formal proofs in the same category as material evidence that, say, dogs are mammalian is sheer equivocation.

    Me: “Would it make a difference if I said that the objects of mathematics (numbers, spaces, functions, etc.) existed before people started thinking about them?”

    “No. You’re still trying to put the ontological cart before the confirmational horse.”

    Not at all. I take the existence of independent mathematical traditions, which are in agreement when they speak on the same topics, as confirmation that the things they speak of are real. It’s a perfect example of “eventual agreement” at work.

    We don’t, by the way, need to introduce hypothetical ancient aliens to recognize this. Consider that every human language ever recorded has names for the natural numbers … and that many languages were developed independently.

    “Would all sophonts do mathematics in recognizably isomorphic ways? Yes, I think so; we’re all abstracting from the same universe, after all (see Peirce on the eventual necessity of agreement). But “mathematics” – the abstraction of that universe into zero-content formal systems – necessarily takes place inside minds.”

    I take issue with the claim that mathematics, as a discipline, is no more than the abstraction of the material universe into contentless formal systems; and I believe nearly all working mathematicians would object as much as I do. And as far as I’m concerned, talking about the process of doing mathematics is merely an evasion; my interest is in the objects mathematicians spend their days thinking about.

    Just as an aside, though: formal proof is not by any means the sole method of mathematics for confirming propositions. It’s just the most certain method, the one that (when it exists) decides a question for good and all. Most mathematicians were convinced that Fermat’s Last Theorem is true long before Andrew Wiles found his proof; most mathematicians today are convinced that Riemann’s Hypothesis is true, although nobody has found a proof yet. Such beliefs were and are established inductively, by balancing the probabilities in a way similar to Bayesian inference.
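    The "balancing the probabilities" point can be made concrete with a toy calculation. This is my own illustration, not anything from the comment: the prior, the 0.9 figure, and the update function are all assumed for the sake of the sketch. Each verified special case of a conjecture nudges belief upward by Bayes' rule, even though no single case proves anything.

```python
# Hedged sketch (illustrative numbers, not from the discussion): how belief in
# a conjecture can strengthen as verified special cases accumulate, Bayes-style.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: return P(conjecture true | evidence observed)."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

belief = 0.5                      # agnostic prior about the conjecture
for _ in range(10):               # ten independently verified cases
    # A true conjecture always passes a test; a false one passes, say, 90%
    # of the time (counterexamples may be rare among small cases).
    belief = update(belief, 1.0, 0.9)

print(round(belief, 3))           # 0.741 -- confident, but not certain
```

    No finite run of confirmations drives the probability to 1, which is exactly why a deductive proof, when one exists, settles the matter in a way induction cannot.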

    Working mathematicians make conjectures by abduction, confirm them by induction from examples and consequences, and finally prove them by deductive arguments. They proceed, in fact, just as natural scientists do – the difference is only that they get to deduction much more often than natural scientists do. Mathematics has even had paradigm shifts, where whole branches of the field have been recast, as Einstein’s physics recast Newton’s.

    • > Calling a formal proof observable doesn’t make it one, any more than calling a tail a leg turns it into one.

      A proof is composed of a series of steps executed by rules. Whether the steps lead to success is an observable. There is no difficulty of principle here; you only think there is one because you’re fixated on something like timeless Platonic noumena existing and the proof being a mundane reflection of them. At the current state of our knowledge this is not a viable position.
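      The "steps executed by rules" claim can be illustrated mechanically. This is my own minimal sketch, not anything either commenter wrote: the tuple encoding, the single inference rule (modus ponens), and both function names are assumptions for the example. The point it shows is only that whether a step follows the rules is a finite, checkable fact.

```python
# Minimal sketch: a checker for one inference rule, modus ponens, over
# propositional formulas encoded as nested tuples, e.g. ("->", "P", "Q")
# for P -> Q. Encoding and rule choice are illustrative assumptions.

def is_valid_mp_step(premises, conclusion):
    """Return True iff `conclusion` follows from `premises` by modus ponens."""
    for p in premises:
        # Look for an implication whose antecedent is also among the premises.
        if isinstance(p, tuple) and len(p) == 3 and p[0] == "->":
            antecedent, consequent = p[1], p[2]
            if antecedent in premises and consequent == conclusion:
                return True
    return False

def check_proof(axioms, lines):
    """A 'proof' is a list of lines; each must be an axiom or follow from
    earlier lines by the rule. Checking this is a mechanical procedure."""
    derived = list(axioms)
    for line in lines:
        if line not in derived and not is_valid_mp_step(derived, line):
            return False          # step broke the rules: an observable failure
        derived.append(line)
    return True                   # every step followed the rules

print(check_proof([("->", "P", "Q"), "P"], ["Q"]))   # True
print(check_proof([("->", "P", "Q")], ["Q"]))        # False
```

      Whether a position like this counts as "observation" in the relevant sense is, of course, exactly what the two commenters go on to dispute.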

      >provability is at its base a relation among propositions, while truth is a relation between a theory and its model.

      If you keep pushing in that direction, you’ll get to Peirce’s position eventually. It’s how he did, and how I did.

      >I take issue with the claim that mathematics, as a discipline, is no more than the abstraction of the material universe into contentless formal systems; and I believe nearly all working mathematicians would object as much as I do.

      That’s kind of complicated. Remember I used to be a mathematician myself, so this is a place where I speak from some authority of experience.

      The truth is that most mathematicians feel like Platonists but think like Formalists. It is a psychological reality that we feel like we’re discovering rather than inventing, feel that there is some numinous, essential relationship between the marks on the paper and reality-whatever-that-is. The trouble is that we also know that the Formalist critique of essentialism is ironclad, unassailable – that argument has been over since 1934.

      Here’s an example of the kind of question that killed Platonism: if set theory is more than a zero-content system of marks on paper, then do “real” sets obey Zermelo-Fraenkel or von Neumann-Bernays rules? And if you think you know the answer, how do you confirm it?

      Most working mathematicians understand the force of this question. Thus, they would want to agree with you, but also know that your ontological position is not viable. There is nothing in the least heterodox or unusual about the Formalist account of mathematical truth you’ve been hearing from me, unless maybe it’s that I’m better at articulating it than most.

      Therefore they’d try to change the subject on you, ignore the problem, and leave it to specialists who want to do metamathematics (this was my intended path). This is not that unreasonable a response, because the problems associated with the collapse of Platonism only really manifest near transfinite sets. Ever since Robinson gave us the hyperreal formulation of nonstandard analysis they have posed no practical difficulties for the rest of mathematics.

      In fairness to you, there is a way this might change in the future. If the constructivists have their way, we might find a proof criterion that banishes various sorts of transfinite monstrosities and restores the idea of “natural” mathematics. The Intuitionists took a swing at this and failed, but it is in fact possible that we might find a way to choose a unique “best” mathematics that among other things banishes either ZF or VNB.

      If that happens, then some kind of neo-Platonic position would start to look viable again. This would come as a vast relief to most mathematicians.

      • On that last bit – I looked up constructivism just to make sure my memory was correct. Brouwer, the founder of intuitionism, firmly believed that mathematical objects are creations of the human mind, a radically anti-Platonic position if ever there was one. The chief goal of intuitionism, and of constructivism after it, was and is to reformulate logic and mathematics on that metaphysical basis, and to study provability without reference to truth.

        That isn’t a useless course of study – computer programming owes a great debt to intuitionist logic. But if the constructivists ever succeed in displacing axiomatic set theories as the preferred foundation of mathematics, that wouldn’t revive mathematical Platonism, but leave it stone-cold dead.

  29. >A proof is composed of a series of steps executed by rules. Whether the steps lead to success is an observable.

    True only in the sense that the “steps” are strings of symbols written on paper, ignoring what they mean to a sapient reader. Unless you are prepared to identify a sentence of English with the sounds an English speaker makes while pronouncing it, that’s not legitimate. (And if you are, I get to have fun with your theory of language …)

    >If you keep pushing in that direction, you’ll get to Peirce’s position eventually.

    I think I already am at Peirce’s position, thank you. You are at Korzybski’s position. Nominalism is the point where Korzybski differed from Peirce, and where I differ from you.

    >Here’s an example of the kind of question that killed Platonism: if set theory is more than a zero-content system of marks on paper, then do “real” sets obey Zermelo-Fraenkel or von Neumann-Bernays rules? And if you think you know the answer, how do you confirm it?

    Well, the first question is – is it necessary to choose between them? Since everything that bears the label “set” in Zermelo-Fraenkel matches with something that bears the same label in von Neumann-Bernays, and vice versa, and the systems don’t disagree about any of those things, they’re both equally good maps to the territory of sets.

    As I understand it, the great motivating force behind Zermelo-Fraenkel set theory was the desire to create an absolutely rigorous foundation for all of mathematics from the absolute minimum of basic concepts. I don’t think I’m wrong in saying that, in that respect, it failed. And IMO the reason it failed was that its creators made the same error you have, and equated doing mathematics with discovering formal proofs.

    So, I don’t see why the varieties of axiomatic set theory are an issue for mathematical Platonism. It’s not a necessary consequence of Platonism that every mathematical entity be represented in a totally rigorous formal system. Nor must we say that the finite ordinals of Cantor are the natural numbers to claim that the natural numbers exist as more than marks on paper. Abstract objects are allowed to be heterogeneous.
