If I had to name the single most frustrating aspect of modern, popular
debate, it would have to be the openly
adversarial nature on which it all appears to operate. Rather than
work together on building a better understanding of our world, we
instead seem to be more focused on scoring philosophical points against
the "opposition." Often times, this might even be by design, like in a
courtroom setting where both advocates must dogmatically argue on behalf
of their clients’ best interests. The rest of the time, however, it
seems to just happen spontaneously---as if no one really knows what the
rules are when engaging in this process. That's why so much
philosophical debate in this world feels a lot like trying to play a
game of chess against an
opponent who thinks the game is supposed to be checkers. Without some
formal agreement on what truth is and how to recognize it when
we see it,
there is simply no way to productively engage with opposing points of
view.
To
me, this presents a very strong incentive to sit down and
examine the question of "What is truth?" head-on. Fortunately, in the
context of philosophical and mathematical logic, there are actually very
well-established answers to this question. It's just that you wouldn't
really know it because no one ever seems capable of spelling them all
out within a
single, comprehensive reference. That’s why I feel personally motivated
to present my findings on this issue. It’s a perfect opportunity to
explicitly lay the foundations of basic epistemology for everyone to
see, such that we can finally begin to hold each other accountable to a
more rigorous set of philosophical rules.
To begin, it's important to understand that any time we talk about a thing like truth, we're not talking
about some intrinsic metaphysical quality of reality itself.
Technically, what we're really talking about is a property of
propositions. That is to say, propositions can either be true or they can be false, but there is no
such thing as raw "essence of truth" interwoven into the fabric of space and time. Speaking more formally, a truth value
is classically defined as a member of a
binary set that contains the elements "True" and "False" [1]. The
purpose of this set is to serve as a kind of marker for linguistic
propositions in order to help us measure their epistemic "correctness." What exactly that means is open to some interpretation, but we can give
it a rigorous definition through a mechanism known as a truth assignment. Speaking formally again, a truth assignment (also called a truth valuation, or an interpretation) is defined as a function that maps each simple linguistic proposition to a binary truth value.
If
that sounds a bit technical at first, then just think of it like this: Imagine me writing down a simple proposition on a post-it note and
placing it in front of you. In your left hand is a giant rubber stamp
that says "true" while in your right hand is a another giant rubber
stamp that says "false." Your job is to decide which label deserves to
be stamped on this note. So ask yourself, how do you go about doing
that? Do you just arbitrarily stamp things randomly? Or do you apply
some set of rules that give your labels a more significant meaning? Whatever answer you give to this question is effectively your truth
assignment function. It's an algorithm that takes simple linguistic
propositions as an input and then determines a binary truth value as the
output.
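To make that picture a little more concrete, here is a minimal sketch (my own illustration, not part of the formal definition) of a truth assignment as nothing more than a lookup from simple propositions to binary truth values; the particular propositions and values below are arbitrary choices:

```python
# A hypothetical truth assignment: a mapping from simple linguistic
# propositions to members of the binary set {True, False}.
truth_assignment = {
    "the Moon is round": True,
    "all dogs live on Earth": True,
    "all bachelors are bald": False,
}

def valuation(proposition: str) -> bool:
    """Return the truth value this particular assignment stamps on a proposition."""
    return truth_assignment[proposition]

print(valuation("the Moon is round"))       # True, under this assignment
print(valuation("all bachelors are bald"))  # False, under this assignment
```

Any rule you could plausibly follow with the rubber stamps amounts to some such function; the philosophical work lies entirely in deciding which function deserves to be used.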
Now let's take it one step further. Suppose
you've stamped a dozen or so of these post-it notes with truth values,
when suddenly you feel like connecting them together into more complex
arrangements. For example, maybe you think two true propositions
connected left to right should also be stamped with a value of "true."
Or maybe you think two false propositions connected top to bottom should
always be "false." Maybe you think propositions stamped with "true" on
the top should all be stamped with "false" on the bottom, and
vice-versa. These are all perfectly valid operations, and represent the
role of logical connectives contained within the scope of propositional logic. We like using logical connectives because they allow us to literally
"connect" propositions together, thereby creating more interesting propositional formulas.
Notice also how there's nothing physically forcing us to stick with only a binary set of truth values. For example, maybe you think truth would make more sense if we used a ternary set of values rather than a binary one. It's a perfectly valid conception that's even used in practice today by scientists, engineers, and mathematicians [2]. Some systems of logic even treat truth as a continuum of values rather than a discrete set [3], and again find use in modern scientific applications. There is no objectively right or wrong answer except for the collective say-so of human philosophers in our ultimate quest for a meaningful conception of truth.
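For instance, a three-valued system in the style of Kleene logic simply enlarges the stamp collection and restates the connective rules accordingly; the sketch below assumes the usual values true, false, and unknown, and is only one of many possible conventions:

```python
# A minimal three-valued ("strong Kleene") treatment of negation and conjunction.
U = "unknown"  # the third truth value alongside True and False

def kleene_not(a):
    return U if a is U else (not a)

def kleene_and(a, b):
    if a is False or b is False:   # a definite False dominates
        return False
    if a is U or b is U:           # otherwise any unknown keeps the result unknown
        return U
    return True

print(kleene_not(U))          # unknown
print(kleene_and(True, U))    # unknown
print(kleene_and(False, U))   # False
```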
This is all pretty standard material so far, and can readily be verified in most relevant textbooks on the subject [4,5]. However, something you generally won't find is an official stance on the precise nature of an ideal truth assignment. It's as if we're all experts at manipulating truth values once we have them, but no one knows how to go about assigning those actual truth values in the first place. That's a real shame, because this question represents the heart of what an idea like truth is supposed to philosophically encapsulate. At best, we only seem to have this vague notion that truth should, in some way or another, represent a kind of "correspondence" between the set of linguistic propositions and the factual state of affairs in objective reality. True propositions are those which effectively describe the real world as it really is, while false propositions do not.
This is a fairly common epistemic concept that philosophers like to call the correspondence theory of truth. And, at first glance, it does seem to be a pretty intuitive definition. Unfortunately, there's also a glaring hole that needs to be addressed. Namely, what exactly is this "correspondence" thing you speak of, and how do I recognize it when I see it? For example, consider a simple proposition like "the Moon is round." Is that true or false? According to correspondence theory, the best we can say is that if the Moon is round, then it is "true" that the Moon is round. Since that's obviously just a vapid tautology, correspondence theory of truth hasn't really told me anything about how to assign truth to propositions.
But let's take it even further. What if I stand outside one evening and simply look at the Moon directly with my own eyes? That way, if I see a generally roundish object, then I can legitimately say that the Moon is round, right?
Well, no.
For example, what if there were some kind of optical illusion brought on by the atmosphere that makes squarish things appear round? Or what if I'm just looking at a giant photograph of the Moon, or maybe some elaborate hologram? Maybe it's all just a hallucination brought on by drugs, or perhaps a really vivid, lucid dream. Maybe I'm being tricked by a magical demon, or maybe I'm really just a brain in a vat, plugged into some kind of matrix simulation. I simply do not know, and what's more, I can't know. No amount of reason or evidence can ever allow me to perfectly determine objective reality as it really is. Correspondence theory of truth is therefore useless because it offers no way to differentiate between all of these competing scenarios. So if we're ever going to make any progress in building a viable epistemology, then we need to operate under the basic constraints that nature has given us.
This is a fundamental philosophical concept known as the egocentric position, or equivalently, the problem of external world skepticism. All it says is that for whatever sensory perception you may be experiencing at any given moment, there are limitless ad hoc explanations for what might be causing it. Remember that I'm just a sentient agent trapped within my immediate mental awareness. It's not like I can just crawl out of that awareness and directly perceive reality as it really is. And even if I could, how exactly would I correspond linguistic propositions to those objective states? What are the rules I have to follow and how do I apply them? We simply cannot ignore the fundamental barriers that exist between reality, our perceptions of reality, and our linguistic frameworks for describing reality.
This is the part where many philosophers really begin to butt heads with each other, but there are at least a few general principles that most people do tend to agree on. For example, one theory of truth that has great utility is known as the principle of mental incorrigibility, or simply empiricism. All this says is that any honest statement of immediate sensory perception is automatically a true proposition. For example, consider a statement like "I feel a pain in my foot." Even if it turns out to be a complete illusion (like an amputee with a bad case of phantom limb syndrome), I still cannot deny the fact that I am definitely experiencing a distinct sensory perception that is different from many others. It therefore seems perfectly reasonable to just acknowledge our perceptual data for what it is, designate those experiences with linguistic markers, and then assign a basic truth value to such propositions accordingly.
Another popular method for assigning truth to propositions is the use of axiomatic formalism, or for the sake of this discussion, rationalism. Basically, all this system says is that certain "obvious" propositions, called axioms, deserve a specific truth value by fiat. For
example, take the reflexive law of equality: A = A. No one derived
this proposition from any prior logical framework, nor was it
empirically discovered hiding under some rock. It was just
asserted outright as "true" because mathematicians needed a concept of
equality from which to build a working system of algebra.
Once we finally settle on an agreeable set of axioms, it then becomes possible to generate new true propositions out of the old ones by exercising rules of inference. For example, one classic rule of inference is the transitive law of equality: if A=B and B=C, then A=C. Again, no one derived this rule from any deeper foundations, nor was it empirically discovered. It was just asserted outright as a thing we're allowed to do with the concept of numerical equality. Any new propositions generated in such a fashion are then called theorems, and they represent the core driver behind all propositional and mathematical logic.
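As a toy illustration of how a rule of inference mechanically churns out theorems (the axioms below are made up, and real proof systems use many more rules than transitivity alone), consider repeatedly applying the transitive law to a starting set of equalities until nothing new can be derived:

```python
# Repeatedly apply "if x = y and y = z, then x = z" to an axiom set of equalities.
axioms = {("A", "B"), ("B", "C"), ("C", "D")}  # each pair (x, y) asserts "x = y"

def close_under_transitivity(equalities):
    theorems = set(equalities)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(theorems):
            for (y2, z) in list(theorems):
                if y == y2 and (x, z) not in theorems:
                    theorems.add((x, z))   # from x = y and y = z, infer x = z
                    changed = True
    return theorems

print(close_under_transitivity(axioms) - axioms)  # the newly derived theorems
```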
This might feel like strangely circular reasoning at first glance, and in all fairness, it kind of is. However, contrary to popular misconceptions, axiomatic systems like math and logic make no effort to describe any objective sense of mind-independent reality. Rather, a far better way to think of such systems is as a kind of highly formalized language. Good axiomatic assertions are therefore not really circular so much as they are definitional. That's why all logical and mathematical theorems are said to be analytic in nature, because such truths are ultimately derived entirely from the raw meaning we impose on the terms themselves, and not from any direct connection they have to the external world. One could even argue that this makes analytic propositions a kind of formal extension of the incorrigible, since anyone is internally free to define their own personal vocabulary however they like.
But what about the so-called synthetic propositions that actually do attempt to describe objective reality---that is to say, the world "out there" beyond purely mental processes? For example, consider a proposition like "all bachelors are bald" or maybe "all dogs live on Earth." How do I assign truth to propositions in this category? Again, it's not like I can just pop open a can of reality and directly observe the facts of the matter beyond my senses. Nor can I logically derive their truth from any assigned meaning to the words themselves. So what do we do?
This is another point where things tend to get very confusing, simply because there are so many oddball truth assignments to choose from and no real official answers to turn to. For example, suppose we decide to assign truth to propositions that reinforce our sense of personal identity or social status. Let's call this egotistical validation. Granted, it might not be a very good system, but it's still a perfectly valid function that operates under well-defined rules. Maybe you've even encountered this system yourself, like in religious or political discussions where personal emotions tend to run very high.
Another interesting class of synthetic truth assignment is called Biblical inerrancy,
and simply says that no true proposition can ever contradict the
records contained within the Holy Bible. It's actually a fairly common
truth assignment, typically emerging from religious fundamentalist
organizations. Truth, in their view, is basically whatever the Bible
says. So while it is tempting to criticize the implicit goals contained
within such a definition, it is hard to ignore the clear, meaningful
distinction it represents.
But let's face facts. Those
truth assignments are obviously arbitrary and completely unsatisfying
because they make no effort to philosophically connect our beliefs with
any objective sense of mind-independent reality. Unless we can find a
way to overcome the egocentric position imposed on us by nature, then no
system of truth assignment will ever have any meaningful sense of
merit. That’s why so much of the philosophical debate in our world
appears to be so pointless. Most truth assignment functions utilized in
practice are either needlessly arbitrary, brazenly self-serving, or
deliberately obtuse.
To address this problem, I find
that it helps to step back and ask ourselves a fundamental question
about truth that surprisingly few philosophers ever seem to ask. Namely, why is it so god-damned important to believe in as many
"true" propositions as possible while simultaneously rejecting as many
of the "false?" What difference does it make at the end of the
day? For instance, consider a possible world where everything I believe
about the universe just so happens to be categorically false. However,
every single time I make a decision based off of those beliefs, the
consequences are maximally predictable and desirable for me anyway. Likewise, any time I commit a single "true" belief to action, the
outcome is never predictable or desirable for me at all. Now let's ask ourselves: given such a world, is it even meaningful to call any of my beliefs "false"? And if so, why would I ever want to believe anything that was true? I could spend my entire life being completely wrong about absolutely everything and actually be better off for it.
This simple thought experiment represents the core principle behind a system of truth assignment generally known as pragmatism. All this system has to say is that the
only meaningful reason why anyone would ever bother believing anything
at all is so that we can eventually use that information as a guide for
our actions. Decisions based on “true” beliefs will therefore
manifest themselves in the form of controlled, predictable experiences,
while decisions based on “false” beliefs will eventually fail in that
goal. Any beliefs that refuse to drive any actions whatsoever, even in
principle, are thus effectively reduced to useless rhetorical gibberish.
To illustrate how this system might work in practice, simply imagine yourself standing at a busy intersection when suddenly you decide that you'd like to walk across to the other side. Sure, you can axiomatically declare premises and logically deduce conclusions all you want, but sooner or later you're going to have to translate that information into a real, committed action. So while you may think you're being very clever with all your intellectual presumptions and sophisticated rhetoric, I have yet to encounter a single philosopher who could successfully argue with a speeding bus. Everyone, everywhere, is therefore universally bound to the same pragmatic process in our daily epistemology. We collect empirical data, we formulate it as a rationally descriptive model of objective reality, we exercise a decision accordingly, and then we empirically observe the outcome. If our understanding of traffic behavior is indeed "true," then we can expect to safely cross the street without incident. However, if our model contains flaws or inconsistencies, then it's only a matter of time before we eventually find ourselves getting plowed by oncoming traffic.
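If it helps, the whole loop can be caricatured in a few lines of code; everything below (the belief, the stand-in "world," the success count) is something I invented purely to show the shape of the process, not a claim about how traffic actually works:

```python
# A toy predict-act-observe loop: hold a belief, act on it, observe the
# consequence, and keep score of how well the belief predicted the outcome.
import random

belief = "the crossing is safe whenever the pedestrian light is green"

def predicted_safe(light_is_green: bool) -> bool:
    return light_is_green                     # what the belief predicts

def observed_safe(light_is_green: bool) -> bool:
    # Stand-in for the objective consequence we only learn about by acting.
    return light_is_green and random.random() > 0.01

successes, attempts = 0, 0
for _ in range(1000):
    light = random.choice([True, False])
    if predicted_safe(light):                 # decide to cross based on the belief
        attempts += 1
        if observed_safe(light):              # empirically observe the outcome
            successes += 1

print(f"belief '{belief}' held up in {successes} of {attempts} attempted crossings")
```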
This is what makes
pragmatism the only epistemology with any viable sense of “connection”
to the external world beyond our senses. Because even if my entire
reality is little more than a glorified matrix simulation or
demon-spawned hallucination, then even that reality is still objectively
real, and apparently operating in accordance with causally predictive
patterns. So if my actions have even the slightest influence on the outcome of future events, then I can use those outcomes to gain real information about the rules governing my reality. Beliefs drive actions, actions have consequences, and consequences are objective.
We can give this process a nice, technical-sounding name like pragmatic empirical
rationalism, but really, it's all just a glorified way of saying science.
Because really, that's all science fundamentally boils down to: a formalized system of gathering
empirical data, expressing it within a rational, predictive framework, and then
testing those predictions against quantifiable actions and consequences. We like basing our beliefs on scientific
methods because it ultimately allows us to make real decisions in the real
world with real, empirical consequences. Mental incorrigibility and axiomatic
formalism are not mere ends unto themselves, but essential tools for
the greater purpose of pragmatically navigating the world.
Notice
also how the
pragmatic framework implicitly captures many other familiar principles
of both science and scientific method. For example, consider the
principle of fallibilism, which simply states that no synthetic
propositional model can ever be assigned a value of "true" with
any kind of perfect, universal certainty. At best, we only know what to expect from such models if and
when we ever happen to find them. Consequently, all knowledge
claims about objective reality must always remain open to possible
revision when faced with any newer and better information. Likewise,
the principle of falsifiability states that we
can indeed be perfectly confident in assigning certain models a value
of false. That's because the very definition of a false propositional
model is one whose empirical predictions fail to come to pass (see the short sketch after this paragraph).
Likewise, we can even use pragmatism to justify the
principle of Occam's Razor (also known as the principle of parsimony): given two propositional models that
happen to make perfectly equivalent predictions, then the model
containing fewer assumptions is automatically preferable. After all, if
both models are empirically equivalent either way, then you might as
well just go with the one that takes less work to think about.
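To make the falsifiability half of that paragraph concrete, here is a minimal sketch in which a model is assigned "false" the moment one of its empirical predictions fails; the model and the observations are invented for illustration only:

```python
# Falsifiability as a procedure: compare a model's predictions against
# observed outcomes and mark the model false on the first failed prediction.
def model_prediction(situation: str) -> str:
    return "object falls"   # the toy model predicts every unsupported object falls

observations = [
    ("dropped rock", "object falls"),
    ("dropped book", "object falls"),
    ("dropped balloon", "object rises"),   # a hypothetical failed prediction
]

falsified = any(model_prediction(s) != outcome for s, outcome in observations)
print("model assigned false" if falsified else "model not yet falsified")
```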
But
hey,
maybe that's being too presumptuous. Maybe you think pragmatism is a
terrible
principle of truth assignment, and that we should all replace it with
some
"higher" form of understanding. But let's be clear about what
that entails.Without some ultimately pragmatic purpose by which to
measure our beliefs, then they are effectively disconnected from any
empirically predictive decision
we could ever hope to make. I could therefore openly concede every last proposition
you have to say about reality, and literally nothing in my life would ever have to change as a result. That's why no one cares how many angels can dance on the head of a pin. Any answer we give is necessarily going to be
trivial and vacuous. We do, however, care a great deal about what
medicines work best
for treating cancer and why. That's because any decisions we might hope
to make on the subject are necessarily dependent on the final answers
we give. So unless your truth assignment can somehow
facilitate my desire to solve actual problems and reliably predict the
outcomes of my actions, then by definition and
admission, it is irrelevant and worthless. The pragmatic scientific method is therefore the ultimate measure of all philosophical truth.
Notes/References:
[1] Usually denoted as {T, F}.
[2] See tri-state logic.
[3] See fuzzy logic.
[4] Hodel, R. E., "An Introduction to Mathematical Logic," Dover Books (2013).
[5] Priest, G., "An Introduction to Non-Classical Logic," 2nd ed., Cambridge University Press (2008).