Tuesday, January 23, 2024

Notes: Philosophical Foundations for a Christian Worldview: Part I: Introduction, Chapter 2: Argumentation and Logic


Purpose

Previously, we completed the notes for Part I: Introduction, Chapter 1: What Is Philosophy? Now, we move on to Chapter 2: Argumentation and Logic:


Content:

1 Introduction

2 Deductive Arguments
2.1 Logical Validity
2.1.1 Sentential Logic
2.1.1.1 Nine Rules of Logic
2.1.1.2 Some Equivalences
2.1.1.3 Conditional Proofs
2.1.1.4 Reductio Ad Absurdum
2.1.2 First-Order Predicate Logic
2.1.2.1 Universally Quantified Statements
2.1.2.2 Existentially Quantified Statements
2.1.3 Modal Logic
2.1.3.1 Possible Worlds Semantics
2.1.3.2 Common Modal Fallacies
2.1.4 Counterfactual Logic
2.1.4.1 Stalnaker-Lewis Semantics
2.1.4.2 Invalid Inferences in Counterfactual Logic
2.1.4.3 Nontrivially True Counterpossibles
2.1.5 Informal Fallacies
2.2 & 2.3 True Premises & Premises More Plausible Than Their Denials

3 Inductive Reasoning
3.1 Bayes' Theorem
3.2 Inference to the Best Explanation


1 Introduction

Philosophy is about thinking well, and to do so, we must be able to formulate and assess arguments. An argument in philosophy is a set of statements(premises) that lead to a conclusion.

Arguments can be either deductive or inductive. In deductive arguments, the truth of the premises guarantees the truth of the conclusion, while, in inductive arguments, the premises render the conclusion more probable than its competitors. The criteria for a good argument differ depending on whether it is deductive or inductive.

2 Deductive Arguments

A good deductive argument must be formally and informally valid, have true premises(be sound) and have premises more plausible than their contradictories/negations.

To be formally valid means that the conclusion follows from the premises according to the rules of logic(logic is the technical discipline in philosophy that studies reasoning). If the argument is not formally valid, then it is invalid and fails, regardless of whether the premises or conclusion are true.

To be informally valid means not to commit an informal fallacy. An informal fallacy renders an argument bad, and its conclusion does not follow, even if the argument is formally valid.

A sound argument has all true premises. For example, the argument:

P1 if it is 2AM, then I have to sleep

P2 it is 2AM

C1 therefore, I have to sleep

is formally and informally valid, but it is not 2AM(if it is for you, please go to sleep), so (P2) is false, and the argument fails, as it is unsound.

For a premise to be more plausible(even if by a small margin) than its contradictory/denial is good enough for the deductive argument to be good. 100 percent certainty is unneeded(and unreasonable).

If all these are fulfilled in a deductive argument, the argument is formally and informally valid, sound and good; we should believe its conclusion, and it serves as adequate proof.

2.1 Logical Validity

2.1.1 Sentential Logic

2.1.1.1 Nine Rules of Logic

Sentential/propositional logic is the most basic level of logic. There are nine rules of inference to learn and a few extra bits.

Rule #1: Modus Ponens

1. P→Q

2. P

3. Q

In symbolic logic, letters like "P" are used to stand for statements, like "I am full". Premise 1 is read "P implies Q", or "if P, then Q", where P and Q stand for any propositions/statements. The rule of Modus Ponens(translated from Latin, "mode that by affirming affirms") tells us that from the two premises above, P→Q and P, we can validly conclude Q. To put it into an actual argument with statements:

P1 if I am hungry, then I will go eat

P2 I am hungry

C1 therefore, I will go eat (MP, 1, 2)
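As an aside, the validity of an inference form like this can be checked mechanically with a truth table. Below is a small Python sketch(my own illustration, not from the book) that verifies Modus Ponens by checking every truth-value assignment:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "P implies Q" is false only when P is true and Q false.
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid iff no truth-value assignment makes
    # all the premises true and the conclusion false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus Ponens: from P→Q and P, infer Q.
modus_ponens = valid([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q)
print(modus_ponens)  # True
```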

Rule #2: Modus Tollens

1. P→Q

2. ¬Q

3. ¬P

In this case, all the symbols once again mean the same thing, but the symbol "¬" stands for "not", so "¬P" stands for "not P", which is the contradictory of P. The rule of Modus Tollens(translated from Latin, "mode that by denying denies") tells us that from P→Q and ¬Q, we can validly conclude ¬P. To put it into an actual argument:

P1 if Daniel packed up the house, then the house would be neat

P2 the house is not neat

C1 therefore, Daniel did not pack up the house (MT, 1, 2)

Modus Tollens involves negating a premise, so if a premise is already negated, for example, "¬P", then we would have a double negation, "¬¬P", which is logically equivalent to "P". So we can interchange "¬¬P" and "P"(or any letter) in our arguments and use it accordingly with Modus Tollens. For example:

1. ¬P→Q

2. ¬Q

3. ¬¬P (MT, 1, 2)

4. P (equivalent)

For both Modus Ponens and Modus Tollens, we must notice that in if-then statements, the antecedent "if" clause states a sufficient condition for the consequent "then" clause, and the consequent "then" clause states a necessary condition of the antecedent "if" clause. This means that, for P→Q, P is the sufficient condition for Q, and Q is the necessary condition of P. So if P is true, Q is necessarily true; but if Q is true, this tells us nothing about the truth of P, as Q is merely the necessary condition of P, not the sufficient condition of P. Therefore:

1. P→Q

2. Q

3. P (INVALID)

For example:

P1 if Daniel packed the house, then the house will be clean

P2 the house is clean

C1 therefore, Daniel packed the house (INVALID)

The conclusion that Daniel packed the house is invalid because it could have been Daniel's mother that packed the house. We simply cannot validly conclude anything from such information. This form of invalid reasoning is called "affirming the consequent".
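We can confirm the invalidity the same way: a truth-table search(again, my own Python illustration, not from the book) finds the assignment that makes both premises true and the conclusion false:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when the antecedent is true and consequent false.
    return (not p) or q

# Affirming the consequent: from P→Q and Q, infer P.
# Search for a counterexample: both premises true, conclusion false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and q and not p]
print(counterexamples)  # [(False, True)]: P false, Q true makes both premises true
```

Since a counterexample exists, the form is invalid.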

Rule #3: Hypothetical Syllogism

1. P→Q

2. Q→R

3. P→R

Hypothetical Syllogism states that if P implies Q, and Q implies R, then P implies R. We cannot conclude anything about the truth of the individual statements, as we do not have sufficient information, but we can know this much. For example:

P1 if Daniel cleaned the house, then the house is neat

P2 if the house is neat, then Daniel's mother will be happy

C1 if Daniel cleaned the house, then Daniel's mother will be happy (HS, 1, 2)

Rule #4: Conjunction

1. P

2. Q

3. P & Q

In this case, "&" is the symbol used for conjunction, and is read "and". Any sentence/statement can be conjoined with "&", even if-then statements. For example:

1. P→Q

2. R→S

3. (P→Q)&(R→S)

Note how parentheses, "()", were used to properly clarify the statements. "&" can represent any conjunctive, not just "and", but also "but", "while", "although", "whereas", etc.

Rule #5: Simplification

1. P&Q

2. P

OR

1. P&Q

2. Q

Used to clean up arguments and get specific statements from conjunctions.

Rule #6: Absorption

1. P→Q

2. P→(P&Q)

This rule shows the obvious, that P implies itself as well. It is used in specific cases.

Rule #7: Addition

1. P

2. P v Q

Here, a new symbol is added, "v". P v Q is read as "P or Q", and it represents what is called a disjunction. For a disjunction to be true, only one of its disjuncts must be true; so in P v Q, P could be false, but as long as Q is true, P v Q is true.

Rule #8: Disjunctive Syllogism

1. P v Q

2. ¬P

3. Q

OR

1. P v Q

2. ¬Q

3. P

If a disjunction is true, and one of the disjuncts is false, then the other must be true. However, you cannot conclude that one disjunct is false just because the other is true, because, in logical disjunctions, both disjuncts may be true, but both cannot be false. When it comes to disjunctions, take note of the semantics used when converting to symbolic logical form.

Rule #9: Constructive Dilemma

1. (P→Q)&(R→S)

2. P v R

3. Q v S

This tells us that if P→Q and R→S are both true, and P v R is true, then Q v S will be true.

2.1.1.2 Some Equivalences

P is equivalent to ¬¬P

P v P is equivalent to P

P→Q is equivalent to ¬P v Q

P→Q is equivalent to ¬Q→¬P

¬P&¬Q is equivalent to ¬(P v Q) -- by using algorithm below

¬P v ¬Q is equivalent to ¬(P&Q) -- by using algorithm below

To convert any disjunction or conjunction into the other, you can use the following algorithm:

Put "¬" in the front of each letter

Change the "&" or "v" into the other, depending on if the statement is a disjunction or conjunction

Bracket the entire statement and put "¬" in front of it

To convert P v Q into a conjunction, for example:

1. ¬P v ¬Q

2. ¬P&¬Q

3. ¬(¬P&¬Q)

In an argument, you can replace statements with their equivalents to forward the argument.
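The equivalences above can be verified exhaustively; this Python sketch(my own illustration, not from the book) checks the De Morgan-style equivalences over all truth-value assignments:

```python
from itertools import product

# Check the De Morgan-style equivalences over every truth-value assignment.
ok = True
for p, q in product([True, False], repeat=2):
    ok = ok and (((not p) and (not q)) == (not (p or q)))  # ¬P&¬Q ≡ ¬(P v Q)
    ok = ok and (((not p) or (not q)) == (not (p and q)))  # ¬P v ¬Q ≡ ¬(P&Q)
    ok = ok and ((p or q) == (not ((not p) and (not q))))  # P v Q ≡ ¬(¬P&¬Q)
print(ok)  # True
```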

2.1.1.3 Conditional Proofs

A powerful technique in logical argumentation is called "conditional proofs". We can use it when we want to argue that, given the truth of a premise, something will necessarily follow. You have to indent the premises that are conditional. For example:

1. P→Q

2. Q→R&S

3.     P (conditional premise)

4.     Q (MP, 1, 3)

5.     R&S (MP, 2, 4)

6.     R (simp., 5)

7. P→R (CP, 3-6)

The conditional premise is used to conclude at the end that, if the conditional premise is true, then that conclusion, (7), is also true.
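To see that the conditional proof's conclusion genuinely follows, we can also check it semantically; this Python sketch(my own illustration, not from the book) confirms that P→R holds in every assignment where the two premises hold:

```python
from itertools import product

def implies(p, q):
    # Material conditional.
    return (not p) or q

# Premises: P→Q and Q→(R&S); conclusion of the conditional proof: P→R.
ok = all(implies(p, r)
         for p, q, r, s in product([True, False], repeat=4)
         if implies(p, q) and implies(q, r and s))
print(ok)  # True
```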

2.1.1.4 Reductio Ad Absurdum

A reductio ad absurdum(translated from Latin, "reduction to absurdity") is used to show that, if a given premise were true, it would result in a contradiction/logical absurdity, and therefore cannot be true. For example:

P1 if I can think, then I exist

P2 I can think

P3     I do not exist (conditional premise)

P4     I cannot think (MT, 1, 3)

P5     I can think and cannot think (conj., 2, 4)

P6 therefore, if I do not exist, then I can think and cannot think (CP, 3-5)

P7 therefore, I do not not exist (RAA, 6)

P8 therefore, I exist (equiv., 7)

The above argument shows why I must exist, by reducing the contrary(that I do not exist) to absurdity. When using reductio ad absurdum, try to make your opponent/interlocutor lose as much as possible(intellectually) in terms of their argumentation. So, in this case, the person who denies that I exist would have to lose something or change their argument; they would likely argue against (1) or (2). However, this is merely an example, and reductio ad absurdum can be used in any scenario.

2.1.2 First-Order Predicate Logic

2.1.2.1 Universally Quantified Statements

Statements about all or none of a group are "universally quantified statements", since they cover all members of the group. Any universally quantified statement can be reduced to an if-then statement. For example, the universally quantified statement, "all books are material", is equivalent to the if-then statement, "if something is a book, then it is material".

When dealing with universally quantified statements in symbolic notation, we add the variable x(or any lower-case letter) to represent any individual thing. The antecedent and consequent clauses are symbolised as capital letters. For example, the statement, "all books are material", is an affirmative universally quantified statement, and in symbolic logic is:

(x)(Bx→Mx)

Which is read, "for any x, if x is a book, then x is material". All, every, each, any, etc., are terms used to universally quantify objects. Sometimes, such terms are not used; for example, "humans are bipedal" is a universally quantified statement. However, "people are sleeping" does not mean that if one is a person, they are sleeping. Therefore, it is important to be clear when speaking philosophically.

The statement, "every mind is immaterial", is a negative universally quantified statement(immaterial is equivalent to not material), and can be symbolically represented as:

(x)(Mx→¬Ix)

Here, "I" is used to represent material(so "¬Ix" reads "x is immaterial"), since "M" already stands for mind. To use these in an actual argument:

1. (x)(Bx→Mx)

2. Bp

3. Mp

This is read:

P1 for any x, if x is a book, then x is material

P2 Philosophical Foundations for a Christian Worldview is a book

C1 therefore, Philosophical Foundations for a Christian Worldview is material.

Hypothetical Syllogism can be used with multiple universally quantified statements, for example:

1. (x)(Bx→Mx)

2. (x)(Mx→Cx)

3. (x)(Bx→Cx) (HS, 1, 2)

Which is to say:

P1 for any x, if x is a book, then x is material

P2 for any x, if x is material, then x is created

P3 for any x, if x is a book, then x is created (HS, 1, 2)
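To illustrate, here is a toy Python model(my own illustration; the domain and property assignments are invented) in which the two universally quantified premises hold, and the conclusion of the Hypothetical Syllogism holds as well:

```python
# A small domain of individuals with properties, to illustrate the
# universally quantified Hypothetical Syllogism in a toy model.
# (The individuals and property assignments are invented for illustration.)
domain = ["pfcw", "ruler", "mug"]
book = {"pfcw"}                      # Bx: x is a book
material = {"pfcw", "ruler", "mug"}  # Mx: x is material
created = {"pfcw", "ruler", "mug"}   # Cx: x is created

all_books_material = all(x in material for x in domain if x in book)
all_material_created = all(x in created for x in domain if x in material)
all_books_created = all(x in created for x in domain if x in book)

# If the two premises hold in the model, the conclusion holds too.
print(all_books_material, all_material_created, all_books_created)  # True True True
```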

2.1.2.2 Existentially Quantified Statements

Statements only about some members of a group are "existentially quantified statements". These tell us that there exists at least one thing that has a property in question. For example, "some humans are tall" tells us that there is at least one human that is tall. To symbolise this affirmative existentially quantified statement, we say:

(∃x)(Hx&Tx)

Which is to say, "there is at least one x, such that x is both a human and is tall". Existentially quantified statements use "&" and not "→". An example of a negative existentially quantified statement is:

(∃x)(Hx&¬Tx)

Which is to say, "there is at least one such x, such that x is both a human and not tall".

Affirmative and negative existentially quantified statements are not contradictory, while affirmative and negative universally quantified statements are contradictory. Below is a diagram(Fig. 2.1) for the relationships between universally and existentially quantified statements:

Fig 2.1

An affirmative universally quantified statement(universal affirmative): "drinks are liquid" (x)(Dx→Lx) is equivalent to ¬(∃x)(Dx&¬Lx)

A negative universally quantified statement(universal negative): "all drinks are not edible" (x)(Dx→¬Ex) is equivalent to ¬(∃x)(Dx&Ex)

An affirmative existentially quantified statement(existential affirmative): "some humans are tall" (∃x)(Hx&Tx) is equivalent to ¬(x)(Hx→¬Tx)

A negative existentially quantified statement(existential negative): "some humans are not tall" (∃x)(Hx&¬Tx) is equivalent to ¬(x)(Hx→Tx)
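The four equivalences of Fig. 2.1 can be verified exhaustively over a small domain; this Python sketch(my own illustration, using D for "is a drink" and L for "is a liquid") checks every way of assigning the predicates to a two-element domain:

```python
from itertools import product

# Verify the four quantifier equivalences over every way of assigning the
# predicates D ("is a drink") and L ("is a liquid") to a two-element domain.
domain = [0, 1]
holds = True
for d0, d1, l0, l1 in product([True, False], repeat=4):
    D = {0: d0, 1: d1}
    L = {0: l0, 1: l1}
    # universal affirmative: (x)(Dx→Lx) ≡ ¬(∃x)(Dx&¬Lx)
    holds = holds and (all(not D[x] or L[x] for x in domain)
                       == (not any(D[x] and not L[x] for x in domain)))
    # universal negative: (x)(Dx→¬Lx) ≡ ¬(∃x)(Dx&Lx)
    holds = holds and (all(not D[x] or not L[x] for x in domain)
                       == (not any(D[x] and L[x] for x in domain)))
    # existential affirmative: (∃x)(Dx&Lx) ≡ ¬(x)(Dx→¬Lx)
    holds = holds and (any(D[x] and L[x] for x in domain)
                       == (not all(not D[x] or not L[x] for x in domain)))
    # existential negative: (∃x)(Dx&¬Lx) ≡ ¬(x)(Dx→Lx)
    holds = holds and (any(D[x] and not L[x] for x in domain)
                       == (not all(not D[x] or L[x] for x in domain)))
print(holds)  # True
```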

When symbolising arguments with both universal and existential quantifiers, instantiate the existential quantifiers first, regardless of the order of the premises, so that no peculiarities result. For example:

1. (x)(Dx→Lx)

2. (∃x)(Dx&Tx)

3. Dc&Tc (exist. inst., 2)

4. Dc→Lc (univ. inst., 1)

5. Dc (simp., 3)

6. Lc (MP, 4, 5)

7. Tc (simp., 3)

8. Lc&Tc (conj., 6, 7)

9. (∃x)(Lx&Tx) (exist. gen., 8)

That is to say:

P1 for any x, if x is a drink, then x is a liquid

P2 there is at least one x, such that x is both a drink and is tasty

P3 Coca Cola is both a drink and is tasty

P4 if Coca Cola is a drink, then Coca Cola is a liquid

P5 Coca cola is a drink (simp., 3)

C1 therefore, Coca Cola is a liquid (MP, 4, 5)

P6 Coca Cola is tasty (simp., 3)

P7 Coca Cola is both a liquid and is tasty (conj., 6, 7)

C2 therefore, there is at least one x(namely, Coca Cola), such that x is both a liquid and is tasty (exist. gen., 8)

2.1.3 Modal Logic

Modal logic deals with necessary and possible/contingent truth, the modes of truth. Some statements are true but could have been false(or vice versa), while other statements, such as "I both exist and don't exist", are necessarily false.

We will use the symbol "□" to stand for the mode of necessity and the symbol "◇" to stand for the mode of possibility:

"□P" is read as "necessarily, P", indicating that P is necessarily true. "□P" is equivalent to "¬◇¬P". "□P" implies "◇P", but excludes "◇¬P".

"□¬P" is read as "necessarily, not P", indicating that P is necessarily false. "□¬P" is equivalent to "¬◇P", read as "not possibly, P". "□¬P" implies "◇¬P", but excludes "◇P".

"◇P" is read as "possibly, P", indicating that P is possibly true(though it may in fact be false). "◇P" is equivalent to "¬□¬P".

"◇¬P" is read as "possibly, not P", indicating that P is possibly false(though it may in fact be true). "◇¬P" is equivalent to "¬□P". Below is a diagram(Fig. 2.2) showing the relationships between these modal statements:

Fig 2.2
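These dualities can be illustrated with a toy possible-worlds model in Python(my own illustration; the worlds and the valuation of P are invented):

```python
# A toy possible-worlds model: each world assigns P a truth value.
# Necessity = true in all worlds; possibility = true in at least one world.
def necessarily(valuation, worlds):
    return all(valuation[w] for w in worlds)

def possibly(valuation, worlds):
    return any(valuation[w] for w in worlds)

worlds = ["w1", "w2", "w3"]
p_at = {"w1": True, "w2": False, "w3": True}  # invented valuation of P
neg_p = {w: not v for w, v in p_at.items()}   # valuation of ¬P

# □P ≡ ¬◇¬P  and  ◇P ≡ ¬□¬P
assert necessarily(p_at, worlds) == (not possibly(neg_p, worlds))
assert possibly(p_at, worlds) == (not necessarily(neg_p, worlds))
print(necessarily(p_at, worlds), possibly(p_at, worlds))  # False True
```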

2.1.3.1 Possible Worlds Semantics

Possible worlds semantics is an interpretation of modal logic in terms of ways reality could have been. A possible world is a maximal description of how reality could be: a set of as many compossible(mutually non-contradicting) true propositions as possible, and it must be actualisable.

The term actualisable has no set definition; certain philosophers have defined it as "strictly logically possible". However, others have pointed out that there are statements that are strictly logically possible, yet cannot be actualised(E.g., "the prime minister is a prime number"). They have therefore defined actualisability as "broad logical possibility", though this term has also not been properly defined. Other philosophers have coined the term metaphysical possibility to decide whether or not a world is actualisable.

Possible worlds semantics is, therefore, merely used to better illustrate modal logic.

In possible worlds semantics, a necessary truth(E.g., □P) is true in all possible worlds. A possible truth(E.g., ◇P) is true in at least one possible world(if in some but not all, it is contingently true). A necessary falsehood(E.g., □¬P) is false in all possible worlds. A possible falsehood(E.g., ◇¬P) is false in at least one possible world(if in some but not all, it is contingently false).

In possible worlds semantics, it is important to be as clear as possible whether the necessity being described is necessity de dicto or necessity de re. Necessity de dicto is necessity ascribed to statements(Latin, "dicta") themselves(E.g., "necessarily, the laws of logic exist" is true in all possible worlds), while necessity de re is necessity ascribed to a thing(Latin, "res") possessing a certain property(E.g., "necessarily, I am a human being" is true in every possible world in which I exist, not in all possible worlds).

2.1.3.2 Common Modal Fallacies

Modus Ponens and others are valid inference forms to use in modal logic, but there are some common modal logical fallacies to look out for, such as:

1. □(P v ¬P)

2. □P v □¬P (INVALID)

Just because "necessarily, P or not P" is true, it does not follow that "necessarily P, or necessarily not P". This is a confusion of necessity in sensu composito(translated from Latin, "in a composite sense") and necessity in sensu diviso(translated from Latin, "in a divided sense"): just because the whole disjunction is necessary doesn't mean that the individual disjuncts are necessary. Another fallacy is:

1. □(P v Q)

2. ¬Q

3. □P (INVALID)

Just because "necessarily, P or Q" is true, and "not Q" happens to be true, it doesn't follow that "necessarily P", i.e., that P is true in all possible worlds. The correct inference is:

1. □(P v Q)

2. ¬Q

3. P

The above fallacy is also a confusion of necessity in sensu composito and necessity in sensu diviso. Another fallacious inference is:

1. □(P→Q)

2. P

3. □Q (INVALID)

This is a confusion of necessitas consequentiae(translated from Latin, "necessity of the consequence") and necessitas consequentis(translated from Latin, "necessity of the consequent"). From "if P, then Q" being true in all possible worlds, and P happening to be true, it only follows that Q is true, not that Q is true in all possible worlds. To conclude □Q is fallacious.
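The first fallacy above(inferring □P v □¬P from □(P v ¬P)) can be seen to fail in a toy two-world model in Python(my own illustration): the tautology P v ¬P is true in every world, yet P varies between worlds, so neither □P nor □¬P holds:

```python
# Two worlds where P varies: the tautology P v ¬P is necessary,
# but neither □P nor □¬P holds, so the inference is invalid.
worlds = {"w1": True, "w2": False}  # truth value of P in each world

nec_p_or_not_p = all(v or (not v) for v in worlds.values())  # □(P v ¬P)
nec_p = all(worlds.values())                                 # □P
nec_not_p = all(not v for v in worlds.values())              # □¬P

print(nec_p_or_not_p)      # True: the premise holds
print(nec_p or nec_not_p)  # False: the conclusion fails
```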

2.1.4 Counterfactual Logic

Counterfactuals are conditional statements in the subjunctive mood(counter to the actual facts). E.g., "if Oswald didn't shoot Kennedy, then somebody else did" is an indicative conditional, while "if Oswald hadn't shot Kennedy, then somebody else would have" is a counterfactual conditional. For the counterfactual conditional, the antecedent and consequent are contrary to the facts and are not certain to be true; if Oswald had not shot Kennedy, it is likely that Kennedy would simply have survived. There is also a type of counterfactual called the deliberative conditional, where the consequent would actually be true if the antecedent were true(E.g., "if I were to start exercising, then I would lose weight" is a deliberative conditional). "Counterfactuals" is used to describe all subjunctive conditionals.

There are "would" and "might" counterfactuals. For "would" counterfactuals, if the antecedent were true, then the consequent would be true. These are symbolised as "P□→Q" and are read as "if P were true, then Q would be true". For "might" counterfactuals, if the antecedent were true, then the consequent might be true, but not necessarily. These are symbolised as "P◇→Q" and are read "if P were true, then Q might be true". "Might" counterfactuals are not "could" counterfactuals: "could" counterfactuals denote mere logical possibility and are used for modal statements, while "might" counterfactuals denote an actual live possibility in that specific world, and are therefore more restrictive.

P□→Q is contradictory to P◇→¬Q.

P□→¬Q is contradictory to P◇→Q.

P□→Q implies P◇→Q.

P□→¬Q implies P◇→¬Q.

P□→Q and P□→¬Q cannot both be true, while P◇→Q and P◇→¬Q can both be true.

Below is a diagram(Fig. 2.3) that shows the relationship between different counterfactual conditionals:

Fig 2.3

2.1.4.1 Stalnaker-Lewis Semantics

The most commonly used semantics for counterfactual conditionals is the Stalnaker-Lewis Semantics, though all semantics have limitations. Below, there will be a diagram(Fig. 2.4), that I will use to illustrate how the Stalnaker-Lewis Semantics works:

Fig 2.4

The middle sphere is the actual world(our world), while the spheres going out are the possible worlds, and the closer a sphere is to the center, the more similar it is to the actual world.

If we consider the worlds in which the statement, P→Q, is true, and if this is true in all spheres, then the "would" counterfactual, P□→Q is true. If the statement is only true in some possible worlds and false in others, then the "might" counterfactual, P◇→Q is true.

There are many issues with this semantics, one of which is its inability to account for counterfactual conditionals with impossible antecedents(called counterpossibles). For example, "if 3 were an even number, then it would be divisible by 2". 3 is an even number in no possible world. However, the statement comes out trivially true on this semantics: since the antecedent is false in every sphere, there is no possible sphere/world in which the antecedent "3 is an even number" is true and the consequent "3 is divisible by 2" is false. That every counterpossible comes out trivially true in this way seems absurd, yet it is what Stalnaker-Lewis Semantics implies.

2.1.4.2 Invalid Inferences in Counterfactual Logic

Certain rules of inference cannot be applied in counterfactual logic. For example, while Modus Ponens and Modus Tollens are valid, Hypothetical Syllogism is not:

1. P□→Q

2. Q□→R

3. P□→R (INVALID)

This is because, with counterfactual conditionals, the truth values of the antecedents and consequents are uncertain, and we therefore cannot certainly conclude (3).

In counterfactual logic, the equivalence of P→Q and ¬Q→¬P does not hold: P□→Q is not equivalent to ¬Q□→¬P. There is another fallacy:

1. P□→Q

2. (P&R)□→Q (INVALID)

This form of inference is valid in propositional logic, but invalid here; it is called the fallacy of "strengthening the antecedent". This, too, is because of the uncertain truth values of the antecedents and consequents. However, there are a few valid inference forms in counterfactual logic, one of which is:

1. P□→Q

2. (P&Q)□→R

3. P□→R

Another valid counterfactual logic inference form is:

1. P□→Q

2. Q□→P

3. Q□→R

4. P□→R

The last valid inference form is:

1. P□→Q

2. □(Q→R)

3. P□→R

2.1.4.3 Nontrivially True Counterpossibles

Do note that philosophers who accept that nontrivially true counterpossibles exist reject this last inference form. This is because the inference form can be used to show that [□(P→Q)]→(P□→Q) by:

1. P□→P

2. [(P□→P)&□(P→Q)]→(P□→Q)

3.     □(P→Q) (conditional premise)

4.     (P□→P)&□(P→Q) (conj, 1, 3)

5.     P□→Q (MP, 2, 4)

6. □(P→Q)→(P□→Q) (CP, 3-5)

Nontrivially true counterpossibles are counterpossibles(E.g., "if I were a hamburger, then I would be edible") that aren't merely trivially true, but actually true, and some philosophers hold to this. For more information on nontrivially true counterpossibles: https://academic.oup.com/book/34971/chapter/298623461

If nontrivially true counterpossibles exist, then [□(P→Q)]→(P□→Q) would be false, because it wouldn't follow from "necessarily, if 3 is an even number, then it is divisible by 2" that "if 3 were an even number, then it would be divisible by 2" for any counterpossible(especially if some were trivially true and others nontrivially true), and the inference pattern used to attain it would be invalid.

[Note: I'm not too sure how this part works, if anybody could explain in the comments, that would be great. Then I'll add the explanation here]

The last inference pattern, however, still works with ordinary counterfactuals, regardless of whether or not one believes in nontrivially true counterpossibles. All three valid inference forms in counterfactual logic are useful replacements for Hypothetical Syllogism.

2.1.5 Informal Fallacies

A good deductive argument must not only be formally, but informally valid, that is, to make no informal fallacies, such as:

  1. Petitio Principii(circular reasoning/begging the question): in these cases, the conclusion is hidden in one of the premises, for example:

P1 either God exists, or The Legalistic Philosopher blog doesn't exist

P2 The Legalistic Philosopher blog does not not exist

P3 therefore, God exists (DS, 1, 2)

This is logically valid(Disjunctive Syllogism), but commits the informal fallacy of petitio principii: for (P1) to be affirmed, you must already believe that God exists(since a disjunction is true only if at least one of its disjuncts is true), and since The Legalistic Philosopher blog obviously exists, the only way to affirm (P1) is to already affirm that God exists. This is to subtly beg the question.

  2. Genetic Fallacy: the informal fallacy of arguing a belief to be false based on how it originated.
  3. Argument From Ignorance: the informal fallacy of arguing a claim to be false because there isn't sufficient evidence that it is true. A truth claim without evidence is less useful, but not necessarily false.
  4. Equivocation Fallacy: the fallacy of using the same word in different senses within the same argument, such as:

P1 Graham Oppy is a doctor

P2 Doctors treat patients

P3 therefore, Graham Oppy treats patients

This is equivocating "doctor", as in "someone who has received a medical degree", and "doctor", as in "someone who has received a doctor of philosophy". Thus, it is important to define terms, to reduce the possibility of equivocation.

  5. Amphiboly Fallacy: the fallacy of formulating premises in an ambiguous manner. For example, "if William Lane Craig writes a book, then necessarily the book will be popular" is ambiguous, as it could be symbolised as either □(P→Q) or P→(□Q). Thus, it is important to be as clear as possible when making an argument, as well as to identify the possible meanings of an ambiguous premise and decide which is the most plausible.
  6. Fallacy of Composition: the fallacy of inferring that a whole has a property just because all of its parts have that property. The inference may happen to hold in some cases(E.g., every part of a machine is blue, therefore the machine is blue), but it fails in others(E.g., every cell of an elephant is light, therefore the entire elephant is light).

For a longer list of informal fallacies, go to:

https://www.txst.edu/philosophy/resources/fallacy-definitions.html

https://www.logical-fallacy.com/articles/list-of-informal-fallacies/

2.2 & 2.3 True Premises & Premises More Plausible Than Their Denials

It goes without saying that for a deductive argument to be a good one, it must have factually accurate premises. However, the epistemic status of a premise(whether we know it to be true) doesn't affect its truth: a premise could be true even if we had no way of knowing it; the argument would then simply be useless to us.

An argument could be sound(formally valid and has true premises) and informally valid, but be a bad argument. For the argument to be good, it must have a certain epistemic status:

Certainty is an unrealistic and unattainable goal, and demanding it results in extreme skepticism.

Mere epistemic possibility is insufficient, as the premise could just as well be true or not, which would bring us back to square one.

We are looking for a comparative criterion instead: that the plausibility of the premise be higher than that of its denial(probability > 50%).

Do note, however, that the probability of a conclusion being true is not equal to the probability of the conjunction of its premises being true: the probability of the conjunction of the premises merely sets a lower limit, and the actual probability of the conclusion could be much higher.

The plausibility of premises is relative, and can be argued for, or one could use premises that are already widely-accepted as plausible.

If all the above criteria are met(formal and informal validity, true premises, premises more plausible than their denials), then the conclusion of the deductive argument should be accepted.

3 Inductive Reasoning

In deductive arguments, the conclusion follows necessarily from the premises; in inductive arguments, it does not. Whether an argument is in deductive or inductive form has nothing to do with whether its premises and conclusion are epistemically plausible, so an inductive argument with strong premises could be "stronger" than a deductive argument with weak premises. Premises in a deductive argument may themselves be established by inductive reasoning. Both forms of reasoning are equally valid.

An inductive argument is one where it is possible that all the premises be true and no invalid inferences be made, yet the conclusion still be false. A good inductive argument must have true premises that are more plausible than their denials and commit no informal fallacies. However, an inductive argument cannot be said to be formally valid, because the truth of its premises doesn't guarantee the conclusion; the evidence and inferences are said to underdetermine the conclusion(i.e., they render the conclusion more plausible, but do not guarantee it). Inductive reasoning is an "inference to the best conclusion based on the evidence", for example:

P1 Groups A, B & C were composed of similar persons suffering from the same disease.

P2 Group A was given a new drug, Group B was given a placebo and Group C was given nothing.

P3 The death rate in Group A was lower than 75% of the death rate in Groups B and C.

P4 Therefore, the new drug is effective in reducing the death rate from the disease.

The conclusion is likely, based on the evidence and the rules of induction, but not inevitably true: there could have been another variable, or Group A could have been lucky.

3.1 Bayes' Theorem

Inductive reasoning is used every day, yet its definition is controversial among philosophers. One way to understand induction is through the probability calculus.

Rules have been formed to accurately calculate the probability of particular statements, given the truth of other statements. These probabilities are called conditional probabilities, represented by Pr(A|B), read as "the probability of A given B", where A and B are specific statements. Probabilities range from 0 to 1, where 1 is certainty and 0 is impossibility. A probability of >0.5 indicates plausibility, while <0.5 indicates implausibility, and 0.5 indicates balance between possibilities.
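As a simple illustration of conditional probability(my own sketch; the counts are invented), Pr(A|B) can be computed from joint frequencies as Pr(A & B) / Pr(B):

```python
# Conditional probability from joint frequencies: Pr(A|B) = Pr(A & B) / Pr(B).
# Invented counts: 100 days, classified by whether it rained (A)
# and whether it was cloudy (B).
rain_and_cloudy = 20
cloudy = 40
total = 100

pr_b = cloudy / total                 # Pr(B)
pr_a_and_b = rain_and_cloudy / total  # Pr(A & B)
pr_a_given_b = pr_a_and_b / pr_b      # Pr(A|B)
print(pr_a_given_b)  # 0.5: rain is as plausible as not, given clouds
```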

Most cases of inductive reasoning are inferences from sample cases to generalisations(E.g., the probability of James getting diabetes given that he consumes x amount of sugar per day). Such cases are usually more scientific than philosophical, but philosophical hypotheses can be argued to be probable, given the evidence.

Philosophers can use Bayes' Theorem, which gives formulas for calculating the probability of a hypothesis (H), given the evidence (E), Pr(H|E). One form of Bayes' Theorem is seen below (Fig 3.1):

Fig 3.1: Pr(H|E) = [Pr(H) × Pr(E|H)] / ([Pr(H) × Pr(E|H)] + [Pr(¬H) × Pr(E|¬H)])

To compute Pr(H|E), exact values are normally plugged into the formula, but in philosophical discussions such precision is impossible and unneeded, so vague terms like "highly probable" (>>0.5), "highly improbable" (<<0.5), "approximately even" (~0.5), "probable" (>0.5) and "improbable" (<0.5) can be used instead, as they are still useful.

In the numerator (top portion), the intrinsic probability of H, Pr(H), is multiplied by the explanatory power of H, Pr(E|H) [how probable is E, given H? The higher this is, the better H explains E]. Note that the intrinsic probability of H is not H taken in isolation, but H taken with our background information, B, in isolation from E. The same goes for Pr(E|H): it too is taken with the background information, B, so the numerator is really Pr(H|B) × Pr(E|H&B). All probabilities in the equation are taken with the background information, B, in mind.

In the denominator (bottom portion), the numerator is repeated and added to the corresponding product for the denial of H: Pr(¬H), the intrinsic probability of the denial of H, multiplied by Pr(E|¬H), the explanatory power of the denial of H (again, both given B).

Note that the lower the intrinsic probability and explanatory power of the denial of H, given B, the higher the probability of H, given E. If Pr(¬H) × Pr(E|¬H) = 0, the denominator and numerator would be equal, so Pr(H|E) = 1 and H would be certain. So, when using Bayes' Theorem, we try to argue that the probabilities concerning ¬H are low and the probabilities concerning H are high.
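A minimal sketch of this form of Bayes' Theorem, with made-up illustrative probabilities (all implicitly conditioned on B):

```python
# Bayes' Theorem in the form Pr(H|E) = [Pr(H) * Pr(E|H)] /
#   ([Pr(H) * Pr(E|H)] + [Pr(not-H) * Pr(E|not-H)]).
# The numbers are hypothetical, chosen only to show the mechanics.
pr_h = 0.5              # Pr(H): intrinsic probability of the hypothesis
pr_e_given_h = 0.8      # Pr(E|H): explanatory power of H
pr_not_h = 1 - pr_h     # Pr(not-H): intrinsic probability of the denial of H
pr_e_given_not_h = 0.2  # Pr(E|not-H): explanatory power of the denial of H

numerator = pr_h * pr_e_given_h
denominator = numerator + pr_not_h * pr_e_given_not_h
pr_h_given_e = numerator / denominator  # ≈ 0.8, so H is probable given E
print(pr_h_given_e)
```

Setting pr_e_given_not_h closer to 0 pushes Pr(H|E) towards 1, which is exactly the behaviour described above: the worse ¬H fares, the better H fares.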

One issue with using this formulation of Bayes' Theorem is that ¬H covers an extreme variety of options, making it hard to calculate and consider all of them. For example, the denial of a hypothesis like theism includes not only atheism, but polytheism, pantheism, idealism, etc. This makes it hard to properly calculate Pr(H|E) using this formula.

Therefore, if you want merely to compare two competing hypotheses, you can instead use the odds form of Bayes' Theorem, which compares the intrinsic probabilities and explanatory powers of two hypotheses, H1 and H2. The formula can be seen below (Fig 3.2):

Fig 3.2: Pr(H1|E) / Pr(H2|E) = [Pr(H1) / Pr(H2)] × [Pr(E|H1) / Pr(E|H2)]

Using this formula is similar to using the first formulation of Bayes' Theorem: you try to argue that one hypothesis, H1, is more probable than the other, H2.
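A sketch of the odds form with hypothetical numbers (again, all implicitly conditioned on B). A ratio greater than 1 favours H1; less than 1 favours H2:

```python
# Odds form of Bayes' Theorem:
# Pr(H1|E) / Pr(H2|E) = [Pr(H1) / Pr(H2)] * [Pr(E|H1) / Pr(E|H2)].
# The probabilities below are made up for illustration.
pr_h1, pr_h2 = 0.3, 0.2      # intrinsic probabilities Pr(H1), Pr(H2)
pr_e_h1, pr_e_h2 = 0.9, 0.3  # explanatory powers Pr(E|H1), Pr(E|H2)

odds = (pr_h1 / pr_h2) * (pr_e_h1 / pr_e_h2)  # ≈ 4.5, so H1 is favoured
print(odds)
```

Notice that no probabilities for the catch-all ¬H are needed, which is precisely why this form sidesteps the problem of enumerating every rival to H.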

The main disadvantage of using Bayes' Theorem to understand inductive reasoning is that probabilities can be hard to scrutinise or even impossible to calculate.

3.2 Inference to the Best Explanation

This is a more useful method of inductive reasoning in philosophy. In an inference to the best explanation, we have a certain set of data and a number of possible hypotheses to explain that data, and we determine which hypothesis best explains it. There are six main criteria that most philosophers and scientists have agreed on (though there are many more):

Explanatory Scope: the best hypothesis will explain and account for the largest amount of data.

Explanatory Power: the best hypothesis will make the observable data the most epistemically possible/plausible.

Intrinsic Plausibility: the best hypothesis will be implied by the greatest number of accepted truths, and its negation will be implied by fewer accepted truths than the negations of rival hypotheses.

Least Ad Hoc: the best hypothesis will depend on the fewest new suppositions that are not already accepted.

Accordance With Accepted Beliefs: the best hypothesis will imply the fewest falsehoods, given already accepted truths.

Comparative Superiority: the best hypothesis will exceed its rivals in meeting criteria 1 to 5 in such a way that it is difficult for competing hypotheses to gain ground on it.

Using these criteria, one can argue for and decide which explanation/hypothesis is the best inference, given the evidence.
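A toy sketch of weighing rival hypotheses against the six criteria above. The hypothesis names and scores are entirely made up, and real inference to the best explanation is a matter of argued judgement, not arithmetic; the code only illustrates the comparative structure of the exercise:

```python
# Illustrative-only comparison of two hypothetical rival hypotheses,
# scored 1-5 against each of the six criteria listed above.
criteria = [
    "explanatory scope", "explanatory power", "intrinsic plausibility",
    "least ad hoc", "accordance with accepted beliefs", "comparative superiority",
]

scores = {
    "Hypothesis A": [4, 5, 3, 4, 4, 4],  # one score per criterion
    "Hypothesis B": [3, 3, 4, 2, 3, 2],
}

totals = {h: sum(s) for h, s in scores.items()}
best = max(totals, key=totals.get)  # the hypothesis with the highest total
print(totals, "->", best)
```

On these invented scores, Hypothesis A would be the best explanation; in practice, of course, each score would itself have to be argued for.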


Conclusion

The next post in the Philosophical Foundations for a Christian Worldview notes series will be on Part II: Epistemology, Chapter 3: Knowledge and Rationality.

Purchase Philosophical Foundations for a Christian Worldview: https://www.amazon.sg/Philosophical-Foundations-Christian-Worldview-Moreland/dp/0830851879
