Can inductive reasoning be valid?




















An inductive argument is inductively strong when it has the following feature: if all its premises were true, then it would be highly likely or probable that its conclusion would also be true. To determine whether an argument is strong or weak, consider an example. Premise 1: Most peacocks eat oatmeal for breakfast. Premise 2: This bird is a peacock. Conclusion: Therefore, probably this bird eats oatmeal for breakfast. This argument is inductively strong because, if all its premises were true, it would be highly likely that its conclusion would also be true.

To summarize, a strong inductive argument is one in which it is improbable for the conclusion to be false, given that the premises are true. A weak inductive argument is one in which the conclusion probably would not follow from the premises, even if they were true. Cogency is the attribute of an inductive argument that combines the truth of its premises with its logical strength. An inductive argument is cogent when it is inductively strong and all of its premises are actually true. For example: Premise 1: Europa, a moon of Jupiter, has an atmosphere containing oxygen.

Premise 2: Oxygen is required for life. Conclusion: Thus, there may be life on Europa. This argument is cogent because (1) it is inductively strong (if the premises were true, then the conclusion would probably be true) and (2) the premises actually are true.

On the other hand, the example above concerning peacocks, used to demonstrate inductive strength, is not cogent, because not all of its premises are true. In summary, a strong inductive argument is one in which it is improbable that the conclusion is false given that the premises are true. The important take-away from the information on the attributes of both deductive and inductive arguments is this: a good argument proves, or establishes, its conclusion, and it has two key features: it is valid or strong, and its premises are actually true.

Strong inductive arguments make their conclusions probable; weak inductive arguments do not. Consider two deductive arguments about Wen Ho Lee. Argument 1: All Internet hackers and spies for the Chinese government are Chinese. Wen Ho Lee is Chinese. Therefore, Wen Ho Lee is an Internet hacker and spy for the Chinese government. This argument is invalid: the first premise is not saying that all Chinese people are Internet hackers and spies. Argument 2: All Chinese people are Internet hackers and spies for the Chinese government. Wen Ho Lee is Chinese. Therefore, Wen Ho Lee is an Internet hacker and spy for the Chinese government. Is that first premise true? Probably not. So most likely this argument is not sound. But it is valid: if these premises were true, the conclusion would have to be true.

Take-away point: both arguments attempt to provide conclusive evidence for the conclusion. They attempt to deduce a conclusion from a general statement plus information about Wen Ho Lee. Neither uses any language indicating that its conclusion is only probably true; both imply a slam-dunk conclusion. The first one, though, fails in this attempt: it is invalid.

The second one partially accomplishes the goal of providing conclusive evidence for the conclusion: it is valid. But the premises would all have to be true for the conclusion to be conclusively established. Now consider an inductive example: After careful observation, we have not seen any hummingbirds all day in this forest. Therefore, probably there are no hummingbirds in this forest.

This argument is weak: it rests on only one day of observation by untrained observers, generalizing from a single day to a claim about the whole forest. Compare a stronger version: After careful observation by trained hummingbird specialists over many weeks, no hummingbirds or signs of hummingbird habitation were found in this forest.

Here is the formula. Finally, whenever both independence conditions are satisfied, we have the following relationship between the likelihood of the evidence stream and the likelihoods of individual experiments or observations. In scientific contexts the evidence can almost always be divided into parts that satisfy both clauses of the Independent Evidence Condition with respect to each alternative hypothesis. To see why, let us consider each independence condition more carefully.
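Writing the factorization out explicitly may help. As a sketch in one standard notation (where $h$ is a hypothesis, $b$ background knowledge, $c_k$ the condition of the $k$-th experiment or observation, and $e_k$ its outcome; this choice of notation is an assumption, not taken from the text above):

```latex
P[e^n \mid h \cdot b \cdot c^n] \;=\; \prod_{k=1}^{n} P[e_k \mid h \cdot b \cdot c_k]
```

That is, when both independence conditions hold, the likelihood of the whole evidence stream is simply the product of the likelihoods of its individual experiments or observations.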

To appreciate the significance of condition-independence, imagine what it would be like if it were violated: merely learning that further experiments had been set up, without learning their outcomes, could alter how likely a hypothesis makes the outcomes of earlier experiments. Condition-independence, when it holds, rules out such strange effects. Result-independence says that the description of previous test conditions together with their outcomes is irrelevant to the likelihoods of outcomes for additional experiments.

If this condition were widely violated, then in order to specify the most informed likelihoods for a given hypothesis one would need to include information about volumes of past observations and their outcomes. What a hypothesis says about future cases would depend on how past cases have gone. Such dependence had better not happen on a large scale. Otherwise, the hypothesis would be fairly useless, since its empirical import in each specific case would depend on taking into account volumes of past observational and experimental results.

However, even if such dependencies occur, provided they are not too pervasive, result-independence can be accommodated rather easily by packaging each collection of result-dependent data together, treating it like a single extended experiment or observation. Thus, by packaging result-dependent data together in this way, the result-independence condition is satisfied by those conjunctive statements that describe the separate, result-independent chunks.

The version of the Likelihood Ratio Convergence Theorem we will examine depends only on the Independent Evidence Conditions together with the axioms of probability theory. It draws on no other assumptions. Indeed, an even more general version of the theorem can be established, a version that draws on neither of the Independent Evidence Conditions.

However, the Independent Evidence Conditions will be satisfied in almost all scientific contexts, so little will be lost by assuming them. And the presentation will run more smoothly if we side-step the added complications needed to explain the more general result. From this point on, let us assume that the following versions of the Independent Evidence Conditions hold.

Assumption: Independent Evidence Assumptions. We now have all that is needed to begin to state the Likelihood Ratio Convergence Theorem, which comes in two parts. Both parts concern the likelihood that the evidence stream will produce outcomes that yield very small likelihood ratios against false competitors of a true hypothesis. Such outcomes are highly desirable. It will be convenient to define a term for this situation.

Definition: Full Outcome Compatibility. The first part of the Likelihood Ratio Convergence Theorem applies to that part of the total stream of evidence consisting of experiments and observations that fail to be fully outcome compatible for the pair of hypotheses; the second part applies to the remainder, on which the hypotheses are fully outcome compatible. It turns out that these two kinds of cases must be treated differently. This is due to the way in which the expected information content for empirically distinguishing between the two hypotheses will be measured for experiments and observations that are fully outcome compatible; this measure of information content blows up (becomes infinite) for experiments and observations that fail to be fully outcome compatible.

Thus, the following part of the convergence theorem applies to just that part of the total stream of evidence that consists of experiments and observations that fail to be fully outcome compatible for the pair of hypotheses involved. Here, then, is the first part of the convergence theorem. For proof see Proof of the Falsification Theorem. The Falsification Theorem is quite commonsensical.

First, notice that if there is a crucial experiment in the evidence stream, the theorem is completely obvious. The theorem is equally commonsensical for cases where no crucial experiment is available. To see what it says in such cases, consider an example. When this happens, the likelihood ratio becomes 0.

It is instructive to plug some specific values into the formula given by the Falsification Theorem, to see what the convergence rate might look like.
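As a rough illustration, suppose (as the usual statement of such a bound assumes) that each experiment has likelihood at least δ of producing a falsifying outcome, so that the chance of refutation within n independent experiments is at least 1 − (1 − δ)^n. The specific values plugged in below are illustrative only:

```python
def refutation_bound(delta, n):
    """Lower bound on the probability that at least one of n independent
    experiments, each with likelihood at least delta of producing a
    falsifying outcome, actually produces one."""
    return 1 - (1 - delta) ** n

# Even small per-experiment falsification likelihoods accumulate quickly.
for delta in (0.05, 0.10, 0.25):
    for n in (10, 50, 100):
        print(f"delta={delta:.2f}, n={n:3d}: bound >= {refutation_bound(delta, n):.4f}")
```

With δ = 0.10, a hundred experiments make refutation all but certain, which is one way of seeing why the Falsification Theorem is quite commonsensical.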

They tell us the likelihood of obtaining each specific outcome stream, including those that either refute the competitor or produce a very small likelihood ratio for it. In that case, convergence theorems become moot. The point of the Likelihood Ratio Convergence Theorem (both the Falsification Theorem and the part of the theorem still to come) is to assure us, in advance of considering any specific pair of hypotheses, that if the possible evidence streams that test the hypotheses have certain characteristics which reflect the empirical distinctness of the two hypotheses, then it is highly likely that one of the sequences of outcomes will occur that yields a very small likelihood ratio.

These theorems provide finite lower bounds on how quickly such convergence is likely to occur. Thus, they show that the CoA is satisfied in advance of our using the logic to test specific pairs of hypotheses against one another.

The Falsification Theorem applies whenever the evidence stream includes possible outcomes that may falsify the alternative hypothesis. But what about evidence streams on which the hypotheses are fully outcome compatible? Evidence streams of this kind contain no possibly falsifying outcomes. Hypotheses whose connection with the evidence is entirely statistical in nature will usually be fully outcome compatible on the entire evidence stream.

So, evidence streams of this kind are undoubtedly much more common in practice than those containing possibly falsifying outcomes. Furthermore, whenever an entire stream of evidence contains some mixture of experiments and observations on which the hypotheses are not fully outcome compatible along with others on which they are fully outcome compatible, we may treat the experiments and observations for which full outcome compatibility holds as a separate subsequence of the entire evidence stream, to see the likely impact of that part of the evidence in producing values for likelihood ratios.

To proceed, we need a measure of how well each possible outcome can empirically distinguish between a pair of hypotheses. The logarithm of the likelihood ratio provides such a measure. Definition: QI—the Quality of the Information. Thus, QI measures information on a logarithmic scale that is symmetric about the natural no-information midpoint, 0. Probability theorists measure the expected value of a quantity by first multiplying each of its possible values by its probability of occurring, and then summing these products.

Thus, the expected value of QI is given by the following formula. Whereas QI measures the ability of each particular outcome or sequence of outcomes to empirically distinguish hypotheses, EQI measures the tendency of experiments or observations to produce distinguishing outcomes. It can be shown that EQI tracks empirical distinctness in a very precise way. We return to this in a moment. We are now in a position to state the second part of the Likelihood Ratio Convergence Theorem.
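As an illustration, here is a minimal sketch of these two measures for a discrete experiment (the function names and the use of natural logarithms are assumptions for the sketch, not the entry's own notation):

```python
import math

def qi(p_h1, p_h2):
    """QI of one outcome: the log of its likelihood ratio for h1 over h2."""
    return math.log(p_h1 / p_h2)

def eqi(dist_h1, dist_h2):
    """Expected QI: each possible outcome's QI weighted by its probability
    under h1, then summed over all outcomes."""
    return sum(p1 * qi(p1, p2) for p1, p2 in zip(dist_h1, dist_h2))

# Identical likelihoods carry no distinguishing information (the midpoint, 0)...
print(eqi([0.5, 0.5], [0.5, 0.5]))
# ...while divergent likelihoods yield positive expected information.
print(eqi([0.9, 0.1], [0.5, 0.5]))
```

Computed this way, EQI is a Kullback–Leibler divergence, which is never negative; that mirrors the nonnegativity of EQI.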

For proof see the supplement Proof of the Probabilistic Refutation Theorem. This theorem provides sufficient conditions for the likely refutation of false alternatives via exceedingly small likelihood ratios.

The conditions under which this happens characterize the degree to which the hypotheses involved are empirically distinct from one another. It turns out that in almost every case (for almost any pair of hypotheses) the actual likelihood of obtaining such evidence, i.e., outcomes yielding very small likelihood ratios, is far greater than the lower bound the theorem supplies. This condition is only needed because our measure of evidential distinguishability, QI, blows up when the likelihood ratio for an outcome goes to zero. Furthermore, this condition is really no restriction at all on possible experiments or observations.

We merely failed to take this more strongly refuting possibility into account when computing our lower bound on the likelihood that refutation via likelihood ratios would occur.

The point of the two Convergence Theorems explored in this section is to assure us, in advance of the consideration of any specific pair of hypotheses, that if the possible evidence streams that test them have certain characteristics which reflect their evidential distinguishability, it is highly likely that outcomes yielding small likelihood ratios will result.

These theorems provide finite lower bounds on how quickly convergence is likely to occur. Thus, there is no need to wait through some infinitely long run for convergence to occur. Indeed, for any evidence sequence on which the probability distributions are at all well behaved, the actual likelihood of obtaining outcomes that yield small likelihood ratio values will inevitably be much higher than the lower bounds given by Theorems 1 and 2.

The true hypothesis speaks truthfully about what the evidence is likely to be, and its competitors lie. Even a sequence of observations with an extremely low average expected quality of information is very likely to do the job if that evidential sequence is long enough.
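A toy simulation illustrates the convergence. The coin-flip setup and the particular probabilities below are hypothetical, chosen only to make the effect visible:

```python
import math
import random

random.seed(0)

# Hypothetical setup: h_true says a coin lands heads with probability 0.7;
# the false competitor h_false says 0.5.
p_true, p_false = 0.7, 0.5

log_ratio = 0.0  # running log of P(evidence | h_false) / P(evidence | h_true)
for _ in range(1000):
    heads = random.random() < p_true  # the world behaves as h_true says
    if heads:
        log_ratio += math.log(p_false / p_true)
    else:
        log_ratio += math.log((1 - p_false) / (1 - p_true))

# The likelihood ratio against the false competitor collapses toward 0.
print(math.exp(log_ratio))
```

Each flip contributes only a small expected quality of information, yet after a long enough run the cumulative likelihood ratio for the false competitor is vanishingly small.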

Thus, the Criterion of Adequacy (CoA) is satisfied. Up to this point we have been supposing that likelihoods possess objective or agreed numerical values. Although this supposition is often satisfied in scientific contexts, there are important settings where it is unrealistic: where hypotheses only support vague likelihood values, and where there is enough ambiguity in what hypotheses say about evidential claims that the scientific community cannot agree on precise values for the likelihoods of evidential claims.

Recall why agreement, or near agreement, on precise values for likelihoods is so important to the scientific enterprise. To the extent that members of a scientific community disagree on the likelihoods, they disagree about the empirical content of their hypotheses, about what each hypothesis says about how the world is likely to be. This can lead to disagreement about which hypotheses are refuted or supported by a given body of evidence. Similarly, to the extent that the values of likelihoods are only vaguely implied by hypotheses as understood by an individual agent, that agent may be unable to determine which of several hypotheses is refuted or supported by a given body of evidence.

We have seen, however, that the individual values of likelihoods are not really crucial to the way evidence impacts hypotheses. Rather, as Equations 9–11 show, it is ratios of likelihoods that do the heavy lifting. Furthermore, although the rate at which the likelihood ratios increase or decrease on a stream of evidence may differ between two support functions, the impact of the cumulative evidence should ultimately affect their verdicts of refutation or support in much the same way.
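In odds form this is especially clear. A minimal sketch (the numbers are illustrative, not drawn from the entry's Equations 9–11):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Two agents with very different priors (10-to-1 for vs. 10-to-1 against a
# hypothesis) are both driven to near-zero odds by the same extreme
# cumulative likelihood ratio against it.
lr = 1e-6  # cumulative likelihood ratio against a false competitor
print(posterior_odds(10.0, lr))
print(posterior_odds(0.1, lr))
```

Only the shared ratio matters for the direction and scale of the update, which is why vagueness about individual likelihood values need not block agreement.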

When likelihoods are vague or diverse, we may take an approach similar to that we employed for vague and diverse prior plausibility assessments.

We may extend the vagueness sets for individual agents to include a collection of inductive support functions that cover the range of values for likelihood ratios of evidence claims as well as cover the ranges of comparative support strengths for hypotheses due to plausibility arguments within b, as represented by ratios of prior probabilities. Similarly, we may extend the diversity sets for communities of agents to include support functions that cover the ranges of likelihood ratio values that arise within the vagueness sets of members of the scientific community.

This broadening of vagueness and diversity sets to accommodate vague and diverse likelihood values makes no trouble for the convergence to truth results for hypotheses. For, provided that the Directional Agreement Condition is satisfied by all support functions in an extended vagueness or diversity set under consideration, the Likelihood Ratio Convergence Theorem applies to each individual support function in that set.

Different support functions in such a set might, in principle, disagree about whether a given bit of evidence favors one hypothesis over the other. That can happen because different support functions may represent the evidential import of hypotheses differently, by specifying different likelihood values for the very same evidence claims. However, when the Directional Agreement Condition holds for a given collection of support functions, this problem cannot arise. Thus, when the Directional Agreement Condition holds for all support functions in a vagueness or diversity set that is extended to include vague or diverse likelihoods, and provided that enough evidentially distinguishing experiments or observations can be performed, all support functions in the extended vagueness or diversity set will very probably come to agree that the likelihood ratios for empirically distinct false competitors of a true hypothesis are extremely small.

As that happens, the community comes to agree on the refutation of these competitors, and the true hypothesis rises to the top of the heap. What if the true hypothesis has evidentially equivalent rivals? Their posterior probabilities must rise along with it. In that case we are only assured that the posterior probability of the disjunction of the true hypothesis with its evidentially equivalent rivals will be driven to 1 as evidence lays low its evidentially distinct rivals.

The editors and author also thank Greg Stokley and Philippe van Basshuysen for carefully reading an earlier version of the entry and identifying a number of typographical errors. Inductive Arguments. Let us begin by considering some common kinds of examples of inductive arguments.

Consider the following example of a statistical syllogism. Premise 1: The frequency or proportion of members with attribute A among the members of S is r. Premise 2: Object c is a member of S. Conclusion: Object c has attribute A. Here the strength of the support the premises give the conclusion depends on the value of r.


Consider a classic deductive argument: All men are mortal. Harold is a man. Therefore, Harold is mortal. It is assumed that the premises, "All men are mortal" and "Harold is a man", are true. Therefore, the conclusion is logical and true. In deductive reasoning, if something is true of a class of things in general, it is also true for all members of that class.

According to California State University, deductive inference conclusions are certain provided the premises are true. It's possible to come to a logical conclusion even if the generalization is not true.

If the generalization is wrong, the conclusion may be logical, but it may also be untrue. For example, the argument "All bald men are grandfathers. Harold is bald. Therefore, Harold is a grandfather" is logically valid, but it is untrue because the original statement is false. Inductive reasoning is the opposite of deductive reasoning.

Inductive reasoning makes broad generalizations from specific observations.


