
Logic in Argumentative Writing

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.
This resource covers using logic within writing—logical vocabulary, logical fallacies, and other types of logos-based reasoning.
This handout is designed to help writers develop and use logical arguments in writing, both to analyze the arguments of others and to generate their own. However, it is important to remember that logic is only one aspect of a successful argument. Non-logical arguments, statements that cannot be logically proven or disproven, are also important in argumentative writing; appeals to emotion or values are examples. Illogical arguments, on the other hand, are false and must be avoided.
Logic is a formal system of analysis that helps writers invent, demonstrate, and prove arguments. It works by testing propositions against one another to determine their accuracy. People often think they are using logic when they avoid emotion or make arguments based on their common sense, such as "Everyone should look out for their own self-interests" or "People have the right to be free." However, unemotional or common sense statements are not always equivalent to logical statements. To be logical, a proposition must be tested within a logical sequence.
The most famous logical sequence, called the syllogism, was developed by the Greek philosopher Aristotle. His most famous syllogism is:
- Premise 1: All men are mortal.
- Premise 2: Socrates is a man.
- Conclusion: Therefore, Socrates is mortal.
In this sequence, premise 2 is tested against premise 1 to reach the logical conclusion. Within this system, if both premises are true, there is no logical conclusion other than that Socrates is mortal.
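The syllogism can be checked by brute force: enumerate every possible assignment of "mortal" and confirm that, in each assignment consistent with the premises, Socrates comes out mortal. A minimal sketch (the three-member domain and variable names are illustrative assumptions):

```python
from itertools import product

domain = ["Socrates", "Plato", "Zeus"]
men = {"Socrates", "Plato"}   # Premise 2: Socrates is a man

# Every possible "world": each individual is either mortal or not.
worlds = [dict(zip(domain, bits))
          for bits in product([True, False], repeat=len(domain))]

# Keep only the worlds satisfying Premise 1: all men are mortal.
consistent = [w for w in worlds if all(w[x] for x in men)]

# Conclusion: in every such world, Socrates is mortal.
print(all(w["Socrates"] for w in consistent))  # True
```

Note that nothing here decides whether Zeus is mortal; the premises leave that open, which is exactly why only the Socrates conclusion is forced.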
This guide provides some vocabulary and strategies for determining logical conclusions.

1 What is Logic?
Matthew Knachel
There’s an ancient view, still widely held, that what makes human beings special—what distinguishes us from the “beasts of the field”—is that we are rational. What does rationality consist in? That’s a vexed question, but one possible response goes roughly like this: we manifest our rationality by engaging in activities that involve reasoning—making claims and backing them up with reasons, acting in accord with reasons and beliefs, drawing inferences from available evidence, and so on.
This reasoning activity can be done well and it can be done badly; it can be done correctly or incorrectly. Logic is the discipline that aims to distinguish good reasoning from bad.
Good reasoning is not necessarily effective reasoning. In fact, as we shall see in a subsequent chapter on logical fallacies, bad reasoning is pervasive and often extremely effective—in the sense that people are often persuaded by it. In logic, the standard of goodness is not effectiveness in the sense of persuasiveness, but rather correctness according to logical rules.
For example, consider Hitler. He persuaded an entire nation to go along with a variety of proposals that were not only false but downright evil. You won’t be surprised to hear that if you examine it critically, his reasoning does not pass logical muster. Hitler’s arguments were effective, but not logically correct. Moreover, his persuasive techniques go beyond reasoning in the sense of backing up claims with reasons. Hitler relied on threats, emotional manipulation, unsupported assertions, etc. There are many rhetorical tricks one can use to persuade.
In logic, we study the rules and techniques that allow us to distinguish good, correct reasoning from bad, incorrect reasoning.
Since there are a variety of different types of reasoning and methods with which to evaluate each of these types, plus various diverging views on what constitutes correct reasoning, there are many approaches to the logical enterprise. We talk of logic, but also of logics. A logic is just a set of rules and techniques for distinguishing good reasoning from bad. A logic must formulate precise standards for evaluating reasoning and develop methods for applying those standards to particular instances.
Basic Notions
Reasoning involves claims or statements—making them and backing them up with reasons, drawing out their consequences. Propositions are the things we claim, state, assert.
Propositions are the kinds of things that can be true or false. They are expressed by declarative sentences. We use such sentences to make all sorts of assertions, from routine matters of fact (“the Earth revolves around the Sun”), to grand metaphysical theses (“reality is an unchanging, featureless, unified Absolute”), to claims about morality (“it is wrong to eat meat”).
It is important to distinguish sentences in the declarative mood, which express propositions, from sentences in other moods, which do not. Interrogative sentences, for example, ask questions (“Is it raining?”), and imperative sentences issue commands (“Don’t drink kerosene.”). It makes no sense to ask whether these kinds of sentences express truths or falsehoods, so they do not express propositions.
We also distinguish propositions from the sentences that express them, because a single proposition can be expressed by different sentences. “It’s raining” and “es regnet” both express the proposition that it’s raining; one sentence does it in English, the other in German. Also, “John loves Mary” and “Mary is loved by John” both express the same proposition.
The fundamental unit of reasoning is the argument. In logic, by “argument” we don’t mean a disagreement, a shouting match; rather, we define the term precisely:
Argument = a set of propositions, one of which, the conclusion, is (supposed to be) supported by the others, the premises.
If we’re reasoning by making claims and backing them up with reasons, then the claim that’s being backed up is the conclusion of an argument; the reasons given to support it are the argument’s premises. If we’re reasoning by drawing an inference from a set of statements, then the inference we draw is the conclusion of an argument, and the statements from which it’s drawn are the premises.
We include the parenthetical hedge—“supposed to be”—in the definition to make room for bad arguments. A bad argument, very roughly speaking, is one where the premises fail to support the conclusion; a good argument’s premises actually do support the conclusion.
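The definition above amounts to a simple data structure: a collection of premises plus a designated conclusion. A minimal sketch (the class and field names are illustrative assumptions, not standard notation):

```python
from dataclasses import dataclass

@dataclass
class Argument:
    premises: list[str]   # the propositions doing the supporting
    conclusion: str       # the proposition (supposedly) being supported

# Aristotle's syllogism, explicated as an Argument:
socrates = Argument(
    premises=["All men are mortal.", "Socrates is a man."],
    conclusion="Socrates is mortal.",
)
print(socrates.conclusion)  # Socrates is mortal.
```

Nothing in the structure itself guarantees the premises actually support the conclusion; as the text notes, the definition deliberately makes room for bad arguments too.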
Analysis of Arguments
The following passage expresses an argument:
So does this passage:
Again, the ultimate purpose of logic is to evaluate arguments—to distinguish the good from the bad. To do so requires distinctions, definitions, principles, and techniques that will be outlined in subsequent chapters. For now, we will focus on identifying and reconstructing arguments.
The first task is to explicate arguments—to state explicitly their premises and conclusions. A perspicuous way to do this is simply to list declarative sentences expressing the relevant propositions, with a line separating the premises from the conclusion, thus:
- McDonald’s pays their workers very low wages.
- The animals that provide McDonald’s meat are raised in deplorable conditions.
- McDonald’s food is very unhealthy.
- [latex]/ \therefore[/latex] You shouldn’t eat at McDonald’s. [1]
This is an explication of the first argumentative passage above. To identify the conclusion of an argument, it is helpful to ask oneself, “What is this person trying to convince me to believe by saying these things? What is the ultimate point of this passage?” The answer is pretty clear in this case. Another clue as to what’s going on in the passage is provided by the word “because” in the third sentence. Along with other words, like “since” and “for,” it indicates the presence of a premise. We can call such words premise markers. The symbol “/∴” can be read as shorthand for “therefore.” Along with expressions like “consequently,” “thus,” “it follows that” and “which implies that,” “therefore” is an indicator that the argument’s conclusion is about to follow. We call such locutions conclusion markers. Such a marker is not present in the first argument, but we do see one in the second, which may be explicated thus:
- The universe is vast and complex.
- The universe displays an astonishing degree of order.
- The planets orbit the sun according to regular laws.
- Animals’ minutest parts are arranged precisely to serve their purposes.
- Such order and complexity cannot arise at random.
- [latex]/ \therefore[/latex] The universe must be the product of a designer of enormous power and intellect: God.
Several points of comparison to our first explication are worthy of note here. First, as mentioned, we were alerted to the conclusion by the word “therefore.” Second, this passage required much more paraphrase than the first. The second sentence is interrogative, not declarative, and so it does not express a proposition. Since arguments are, by definition, collections of propositions, we must restrict ourselves to declarative sentences when explicating them. Since the answer to the second sentence’s rhetorical question is clearly “yes,” we paraphrase as shown. The third sentence expresses two propositions, so in our explication we separate them; each one is a premise.
So sometimes, when we explicate an argument, we have to take what’s present in the argumentative passage and change it slightly, so that all of the sentences we write down express the propositions present in the argument. This is paraphrasing. At other times, we have to do even more. For example, we may have to introduce propositions which are not explicitly mentioned within the argumentative passage, but are undoubtedly used within the argument’s reasoning.
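The premise and conclusion markers introduced earlier lend themselves to a crude mechanical check. A minimal sketch (the word lists and function name are illustrative assumptions; a word such as "for" is ambiguous and would need context that a word-level check cannot supply, so it is omitted):

```python
PREMISE_MARKERS = {"because", "since", "given"}
CONCLUSION_MARKERS = {"therefore", "thus", "consequently", "hence"}

def classify_markers(sentence):
    """Return any premise/conclusion markers appearing in a sentence."""
    words = {w.strip(".,;:!?").lower() for w in sentence.split()}
    return {
        "premise": sorted(words & PREMISE_MARKERS),
        "conclusion": sorted(words & CONCLUSION_MARKERS),
    }

print(classify_markers("You shouldn't eat there, because the food is unhealthy."))
# {'premise': ['because'], 'conclusion': []}
```

As the enthymeme discussion that follows makes clear, no such surface check can find premises that are never stated at all; that requires interpretation.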
There’s a Greek word for argumentative passages that leave certain propositions unstated: enthymemes. Here’s an example:
There’s an implicit premise lurking in the background here—something that hasn’t been said, but which needs to be true for the argument to go through. We need a claim that connects the premise to the conclusion—that bridges the gap between them. Something like this: An all-loving God would not allow innocent people to suffer. Or maybe: widespread suffering is incompatible with the idea of an all-loving deity. The premise points to suffering, while the conclusion is about God; these propositions connect those two claims. A complete explication of the argumentative passage would make a proposition like this explicit:
- Many innocent people all over the world are suffering.
- An all-loving God would not allow innocent people to suffer.
- [latex]/ \therefore[/latex] There cannot be an all-loving God.
This is the mark of the kinds of tacit premises we want to uncover: if they’re false, they undermine the argument. Often, premises like this are unstated for a reason: they’re controversial claims on their own, requiring evidence to support them; so the arguer leaves them out, preferring not to get bogged down. [2] When we draw them out, however, we can force a more robust dialectical exchange, focusing the argument on the heart of the matter. In this case, a discussion about the compatibility of God’s goodness and evil in the world would be in order. There’s a lot to be said on that topic. Philosophers and theologians have developed elaborate arguments over the centuries to defend the idea that God’s goodness and human suffering are in fact compatible. [3]
So far, our analysis of arguments has not been particularly deep. We have noted the importance of identifying the conclusion and clearly stating the premises, but we have not looked into the ways in which sets of premises can support their conclusions. We have merely noted that, collectively, premises provide support for conclusions. We have not looked at how they do so, what kinds of relationships they have with one another. This requires deeper analysis.
Often, different premises will support a conclusion—or another premise—individually, without help from any others. Consider this simple argument:
Propositions 1 and 2 support the conclusion, proposition 3—and they do so independently. Each gives us a reason for believing that the war was unjust, and each stands as a reason even if we were to suppose that the other were not true; this is the mark of independent premises.
It can be helpful, especially when arguments are more complex, to draw diagrams that depict the relationships among premises and conclusion. We could depict the argument above as follows:

In such a diagram, the circled numbers represent the propositions and the arrows represent the relationship of support from one proposition to another. Since propositions 1 and 2 each support 3 independently, they get their own arrows.
Other relationships among premises are possible. Sometimes, premises provide support for conclusions only indirectly, by giving us a reason to believe some other premise, which is intermediate between the two claims. Consider the following argument:
In this example, proposition 1 provides support for proposition 2 (the word “hence” is a clue), while proposition 2 directly supports the conclusion in 3. We would depict the relationships among these propositions thus:

Sometimes premises must work together to provide support for another claim, not because one of them provides reason for believing the other, but because neither provides the support needed on its own; we call such propositions joint premises. Consider the following:
In this argument, neither premise 1 nor premise 2 supports the conclusion on its own; rather, the second premise, as it were, provides a key that unlocks the conclusion from the conditional premise 1. We can indicate such interdependence diagrammatically with brackets, thus:

Diagramming arguments in this way can be helpful both in understanding how they work and in informing any attempt to engage with them critically. One can see clearly in the first argument that any considerations put forward contrary to one of the independent premises will not completely undermine support for the conclusion, as there is still another premise providing it with some degree of support. In the second argument, though, reasons telling against the second premise would cut off support for the conclusion at its root; and anything contrary to the first premise will leave the second in need of support. And in the third argument, considerations contrary to either of the joint premises will undermine support for the conclusion. Especially when arguments are more complex, such visual aids can help us recognize all of the inferences contained within the argument.
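The three diagram patterns just described can also be written down as simple mappings, where each premise (or group of joint premises) points to the claim it supports. A minimal sketch (the representation and names are illustrative assumptions, not standard notation):

```python
# Proposition numbers follow the three example arguments above.
independent = {1: [3], 2: [3]}   # premises 1 and 2 each support 3 on their own
chained     = {1: [2], 2: [3]}   # 1 supports 2, which in turn supports 3
joint       = {(1, 2): [3]}      # 1 and 2 support 3 only together

def supporters(diagram, claim):
    """Return the premises (or joint premise groups) supporting a claim."""
    return [src for src, targets in diagram.items() if claim in targets]

print(supporters(independent, 3))  # [1, 2]
print(supporters(joint, 3))        # [(1, 2)]
```

The representation makes the text's point about criticism concrete: knocking out one supporter of the conclusion in `independent` leaves another arrow standing, whereas the single grouped entry in `joint` falls as a unit.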
Perhaps it will be useful to conclude by considering a slightly more complex argument. Let’s consider the nature of numbers:
The conclusion of this argument is the last proposition, that numbers are abstract objects. Notice that the first premise gives us a choice between this claim and an alternative—that they are concrete. The second premise denies that alternative, and so premises 1 and 2 are working together to support the conclusion:

Now we need to make room in our diagram for propositions 3 and 4. They are there to give us reasons for believing that numbers are not concrete objects. First, by asserting that numbers aren’t located in space like concrete objects are, and second by asserting that numbers don’t interact with other objects, like concrete objects do. These are separate, independent reasons for believing they aren’t concrete, so we end up with this diagram:

Logic and Philosophy
At the heart of the logical enterprise is a philosophical question: What makes a good argument? That is, what is it for a set of claims to provide support for some other claim? Or maybe: When are we justified in drawing inferences? To answer these questions, logicians have developed a wide variety of logical systems, covering different types of arguments, and applying different principles and techniques. Many of the tools developed in logic can be applied beyond the confines of philosophy. The mathematician proving a theorem, the computer scientist programming a computer, the linguist modeling the structure of language—all these are using logical methods. Because logic has such wide application, and because of the formal/mathematical sophistication of many logical systems, it occupies a unique place in the philosophical curriculum. A class in logic is typically unlike other philosophy classes in that very little time is spent directly engaging with and attempting to answer the “big questions”; rather, one very quickly gets down to the business of learning logical formalisms. The questions logic is trying to answer are important philosophical questions, but the techniques developed to answer them are worthy of study on their own.
This does not mean, however, that we should think of logic and philosophy as merely tangentially related; on the contrary, they are deeply intertwined. For all the formal bells and whistles featured in the latest high-end logical system, at bottom it is part of an effort to answer the fundamental question of what follows from what. Moreover, logic is useful to the practicing philosopher in at least three other ways.
Philosophers attempt to answer deep, vexing questions—about the nature of reality, what constitutes a good life, how to create a just society, and so on. They give their answers to these questions, and they back those answers up with reasons. Then other philosophers consider their arguments and reply with elaborations and criticisms—arguments of their own. Philosophy is conducted and makes progress by way of exchanging arguments. Since they are the primary tool of their trade, philosophers better know a little something about what makes for good arguments! Logic, therefore, is essential to the practice of philosophy.
But logic is not merely a tool for evaluating philosophical arguments; it has altered the course of the ongoing philosophical conversation. As logicians developed formal systems to model the structure of an ever-wider range of discursive practices, philosophers have been able to apply their insights directly to traditional philosophical problems and recognize previously hidden avenues of inquiry. Since the turn of the 20th century especially, the proliferation of novel approaches in logic has sparked a revolution in the practice of philosophy. It is not too much of an exaggeration to say that much of the history of philosophy in the 20th century constituted an ongoing attempt to grapple with new developments in logic, and the philosophical focus on language that they seemed to demand. No philosophical topic—from metaphysics to ethics to epistemology and beyond—was untouched by this revolution.
Finally, logic itself is the source of fascinating philosophical questions. The basic question at its heart—what is it for a claim to follow from others?—ramifies out in myriad directions, providing fertile ground for philosophical speculation. There is logic, and then there is philosophy of logic. Logic is said to be “formal,” for example. What does that mean? It’s a surprisingly difficult question to answer. [5] Our simplest logical formulations of conditional sentences (those involving “if”) lead to apparent paradoxes. [6] How should those be resolved? Should our formalisms be altered to better capture the natural-language meanings of conditionals? What is the proper relationship between logical systems and natural languages, anyway?
Traditionally, most logicians have accepted that logic should be “bivalent”: every proposition is either true or false. But natural languages contain vague terms whose boundaries of applicability are not always clear. For example, “bald”: for certain subjects, we might be inclined to say that they’re well on their way to full-on baldness, but not quite there yet; on the other hand, we would be reluctant to say that they’re not-bald. There are in-between cases. For such cases, we might want to say, for example, that the proposition that Fredo is bald is neither true nor false. Some logicians have developed logics that are not bivalent, to deal with this sort of linguistic phenomenon. Some add a third truth-value: “neither” or “undetermined,” for instance. Others introduce infinite degrees of truth (this is called “fuzzy logic”). These logics deviate from traditional approaches. Are they therefore wrong in some sense? Or are they right, and the traditionalists wrong? Or are we even asking a sensible question when we ask whether a particular logical system is right or wrong? Can we be so-called logical “pluralists,” accepting a variety of incompatible logics, depending, for example, on whether they’re useful?
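The fuzzy-logic idea mentioned above can be sketched concretely: truth values become degrees in the interval [0, 1], and the standard (Zadeh-style) connectives use min, max, and complement. The function names and the 0.6 degree for "Fredo is bald" are illustrative assumptions:

```python
def f_and(a, b): return min(a, b)   # fuzzy conjunction
def f_or(a, b):  return max(a, b)   # fuzzy disjunction
def f_not(a):    return 1 - a       # fuzzy negation

fredo_bald = 0.6   # a borderline case: neither clearly true nor clearly false

# Classically, "bald and not bald" is False and "bald or not bald" is True.
# With degrees, both come out intermediate:
print(f_and(fredo_bald, f_not(fredo_bald)))  # 0.4
print(f_or(fredo_bald, f_not(fredo_bald)))   # 0.6
```

That the law of excluded middle no longer comes out fully true is precisely the kind of deviation from traditional bivalent logic that raises the "right or wrong logic?" questions the text goes on to pose.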
These sorts of questions are beyond the scope of this introductory text, of course. They’re included to give you a sense of just how far one can take the study of logic. The task for now, though, is to begin that study.
First, explicate the following arguments, paraphrasing as necessary and only including tacit premises when explicitly instructed to do so. Next, diagram the arguments.
- Numbers, if they exist at all, must be either concrete or abstract objects. Concrete objects–like planets and people–are able to interact with other things in cause-and-effect relations. Numbers lack this ability. Therefore, numbers are abstract objects. [ You will need to add an implicit intermediate premise here! ]
- Abolish the death penalty! Why? It is immoral. Numerous studies have shown that there is racial bias in its application. The rise of DNA testing has exonerated scores of inmates on death row; who knows how many innocent people have been killed in the past? The death penalty is also impractical. Revenge is counterproductive: “An eye for an eye leaves the whole world blind,” as Gandhi said. Moreover, the costs of litigating death penalty cases, with their endless appeals, are enormous.
- A just economic system would feature an equitable distribution of resources and an absence of exploitation. Capitalism is an unjust economic system. Under capitalism, the typical distribution of wealth is highly skewed in favor of the rich. And workers are exploited: despite their essential role in producing goods for the market, most of the profits from the sales of those goods go to the owners of firms, not their workers.
- The mind and the brain are not identical. How can things be identical if they have different properties? There is a property that the mind and brain do not share: the brain is divisible, but the mind is not. Like all material things, the brain can be divided into parts—different halves, regions, neurons, etc. But the mind is a unity. It is my thinking essence, in which I can discern no separate parts. [7]
- Every able-bodied adult ought to participate in the workforce. The more people working, the greater the nation’s wealth, which benefits everyone economically. In addition, there is no replacement for the dignity workers find on the job. The government should therefore issue tax credits to encourage people to enter the workforce. [ Include in your explication a tacit premise, not explicitly stated in the passage, but necessary to support the conclusion. ]
- The symbols preceding the conclusion, "[latex]/ \therefore[/latex]" represent the word "therefore."
- This is not always the reason. Some claims are left tacit simply because everybody accepts them and to state them explicitly would be a waste of time. If we argue, “Elephants are mammals, and so warm-blooded,” we omit the claim that all mammals are warm-blooded for this innocent reason.
- These arguments even have a special name: they’re called “theodicies.”
- An extremely compressed version of Plato’s objections to poetry in Book X of The Republic.
- John MacFarlane, in his widely read PhD dissertation, spends over 300 pages on that question. See: MacFarlane, J. 2000. “What Does It Mean to Say That Logic Is Formal?” University of Pittsburgh.
- For a concise explanation, see the Wikipedia entry on paradoxes of material implication.
- A simplified version of an argument from Rene Descartes.
The unambiguous meaning expressed by a declarative sentence.
Sentences which communicate that something is, or is not, the case. For example, “Bob won the 50m freestyle.” Declarative sentences can be contrasted with those that pose questions, called interrogative sentences, and those which deliver commands, known as imperative sentences. (Declarative sentences are also known as indicative sentences.)
Words that generally indicate what follows is a premise, e.g. “given that,” “as,” “since.”
Words that generally indicate that what follows is a conclusion, e.g. “therefore,” “thus,” “consequently.”
Arguments which leave certain premises unstated.
Premises which aim to provide sufficient support on their own for the truth of the conclusion.
Premises which attempt to directly support not the conclusion of an argument, but another premise.
Premises which only provide support for the truth of the conclusion when combined.
What is Logic? Copyright © 2020 by Matthew Knachel is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.
1.1 Introduction
Logic is one of the oldest intellectual disciplines in human history. It dates back to Aristotle. It has been studied through the centuries by people like Leibniz, Boole, Russell, Turing, and many others. And it is still a subject of active investigation today.
We use Logic in just about everything we do. We use the language of Logic to define concepts, to encode constraints, to express partial information. We use logical reasoning to derive conclusions from these bits of information. We use logical proofs to convince others of our conclusions.
And we are not alone! Logic is increasingly being used by computers - to prove mathematical theorems, to validate engineering designs, to diagnose failures, to encode and analyze laws and regulations and business rules.
Logic is also becoming more common at the interface between man and machine, in "logic-enabled" computer systems, where users can view and edit logical sentences. Think, for example, about email readers that allow users to write rules to manage incoming mail messages - deleting some, moving others to various mailboxes, and so forth based on properties of those messages. In the business world, eCommerce systems allow companies to encode price rules based on the product, the customer, the date, and so forth.
Moreover, Logic is sometimes used not just by users in communicating with computer systems but by software engineers in building those systems (using a programming methodology known as logic programming).
This chapter is an overview of Logic as presented in this book. We start with a discussion of possible worlds and illustrate the notion in an application area known as Sorority World. We then give an informal introduction to the key elements of Logic - logical sentences, logical entailment, and logical proofs. We then talk about the value of using a formal language for expressing logical information instead of natural language. Finally, we discuss the automation of logical reasoning and some of the computer applications that this makes possible.
1.2 Possible Worlds
Consider the interpersonal relations of a small sorority. There are just four members - Abby, Bess, Cody, and Dana. Some of the girls like each other, but some do not.
The following figure shows one set of possibilities. The checkmark in the first row here means that Abby likes Cody, while the absence of a checkmark means that Abby does not like the other girls (including herself). Bess likes Cody too. Cody likes everyone but herself. And Dana also likes the popular Cody.
Of course, this is not the only possible state of affairs. The figure below shows another possible world. In this world, every girl likes exactly two other girls, and every girl is liked by just two girls.
As it turns out, there are quite a few possibilities. Given four girls, there are sixteen possible instances of the likes relation - Abby likes Abby, Abby likes Bess, Abby likes Cody, Abby likes Dana, Bess likes Abby, and so forth. Each of these sixteen can be either true or false. There are 2^16 (65,536) possible combinations of these true-false possibilities, and so there are 2^16 possible worlds.
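The counting argument above can be carried out by machine: with four girls there are sixteen ordered pairs, hence 2^16 possible worlds. A minimal sketch:

```python
from itertools import product

girls = ["Abby", "Bess", "Cody", "Dana"]

# Each world fixes a truth value for every ordered pair likes(x, y),
# including pairs like likes(Abby, Abby).
pairs = [(x, y) for x in girls for y in girls]
worlds = list(product([True, False], repeat=len(pairs)))

print(len(pairs))   # 16
print(len(worlds))  # 65536
```

Enumerating all 65,536 worlds is still easy for a computer here, but as the later discussion of proofs notes, this kind of exhaustive case checking does not scale.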
Let's assume that we do not know the likes and dislikes of the girls ourselves but we have informants who are willing to tell us about them. Each informant knows a little about the likes and dislikes of the girls, but no one knows everything.
This is where Logic comes in. By writing logical sentences, each informant can express exactly what he or she knows - no more, no less. For our part, we can use the sentences we have been told to draw conclusions that are logically entailed by those sentences. And we can use logical proofs to explain our conclusions to others. Let's consider each of these elements in turn.
1.3 Logical Sentences
The following figure shows some logical sentences pertaining to our sorority world. The first sentence is straightforward; it tells us directly that Dana likes Cody. The second and third sentences tell us what is not true without saying what is true. The fourth sentence says that one condition holds or another but does not say which. The fifth sentence gives a general fact about the girls Abby likes. The sixth sentence expresses a general fact about Cody's likes. The last sentence says something about everyone.
Sentences like these constrain the possible ways the world could be. Each sentence divides the set of possible worlds into two subsets, those in which the sentence is true and those in which the sentence is false, as suggested by the following figure. Believing a sentence is tantamount to believing that the world is in the first set.
Given two sentences, we know the world must be in the intersection of the set of worlds in which the first sentence is true and the set of worlds in which the second sentence is true. Ideally, when we have enough sentences, we know exactly how things stand.
Effective communication requires a language that allows us to express what we know, no more and no less. If we know the state of the world, then we should write enough sentences to communicate this to others. If we do not know which of various ways the world could be, we need a language that allows us to express only what we know. The beauty of Logic is that it gives us a means to express incomplete information when that is all we have and to express complete information when full information is available.
1.4 Logical Entailment
Logical sentences can sometimes pinpoint a specific world from among many possible worlds. However, this is not always the case. Sometimes, a collection of sentences only partially constrains the world. For example, there are four different worlds that satisfy the sentences in the preceding section, viz. the ones shown below.
Even though a set of sentences does not determine a unique world, it is often the case that some sentences are true in every world that satisfies the given sentences. A sentence of this sort is said to be a logical conclusion from the given sentences. Said the other way around, a set of sentences logically entails a conclusion if and only if every world that satisfies the sentences also satisfies the conclusion.
What can we conclude from the bits of information in our sample logical sentences? Quite a bit, as it turns out. For example, it must be the case that Bess likes Cody. Also, Bess does not like Dana. There are also some general conclusions that must be true. For example, in this world with just four girls, we can conclude that everybody likes somebody. Also, everyone is liked by somebody.
One way to check whether a set of sentences logically entails a conclusion is to examine the set of all worlds in which the given sentences are true. For example, in our case, we notice that, in every world that satisfies our sentences, Bess likes Cody, so the statement that Bess likes Cody is a logical conclusion from our set of sentences. In every world that satisfies our sentences, Bess does not like Abby, so the statement that Bess likes Abby is not a logical conclusion from our set of sentences, and the statement that Bess does not like Abby is a logical conclusion.
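This world-checking method can be automated directly. The sketch below is a hypothetical Python rendering (not from the text): a world is a truth assignment to propositional variables, and a set of premises entails a conclusion exactly when the conclusion holds in every world that satisfies all the premises.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Premises entail the conclusion iff the conclusion is true in
    every truth assignment (world) that makes all premises true."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # found a counterexample world
    return True

# Premises: p implies q, and p.  Test two candidate conclusions.
premises = [lambda w: (not w["p"]) or w["q"], lambda w: w["p"]]
print(entails(premises, lambda w: w["q"], ["p", "q"]))      # True
print(entails(premises, lambda w: not w["q"], ["p", "q"]))  # False
```

The cost of this method is exactly the problem noted in the next section: the number of worlds grows exponentially with the number of variables.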
Note that, when there are multiple possible worlds, as in this case, there are sentences that we do not know to be true or false. For example, in the case above, there is a possible world in which Abby likes Bess and there is a possible world in which Abby does not like Bess. The upshot is that the statement Abby likes Bess is not a logical conclusion; and, at the same time, the statement that Abby does not like Bess is not a logical conclusion either. Obviously, one of these statements is true in the real world, but we do not know which is true purely on the basis of the information we are given.
1.5 Logical Proofs
Unfortunately, determining logical entailment by checking all possible worlds is impractical in general. There are usually many, many possible worlds. Moreover, as we shall see, in some cases the number of possible worlds is infinite . The upshot is that case checking is not always practical.
Luckily there is an alternative that can work even when case checking fails. The answer is logical deduction, i.e. the application of rules of inference to derive logical conclusions and thereby produce logical proofs, i.e. sequences of reasoning steps that lead from premises to conclusions.
For example, we can use this sort of reasoning to conclude that Block C is next to block D in our Blocks World example without enumerating possible worlds. The line of argument goes as shown below.
The concept of proof, in order to be meaningful, requires that we be able to recognize certain reasoning steps as immediately obvious. In other words, we need to be familiar with the reasoning "atoms" out of which complex proof "molecules" are built.
Formalizing this, we say that a conclusion is provable from a set of premises if and only if there is a finite sequence of sentences in which every element is either a premise or the result of applying a sound rule of inference to earlier members in the sequence.
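The definition of provability suggests a simple mechanical check. The sketch below is our own illustration (not the book's notation): sentences are strings or `("if", a, b)` tuples, and modus ponens stands in for the stock of sound rules of inference; the point is only the shape of the check, namely that every step be a premise or follow from earlier steps.

```python
def modus_ponens(x, y):
    # From ("if", a, b) and a, conclude b; otherwise nothing.
    if isinstance(x, tuple) and x[0] == "if" and x[1] == y:
        return x[2]
    return None

def is_proof(steps, premises):
    """A sequence is a proof iff every element is a premise or the
    result of applying a sound rule to earlier members."""
    earlier = []
    for s in steps:
        derivable = any(modus_ponens(a, b) == s for a in earlier for b in earlier)
        if s not in premises and not derivable:
            return False
        earlier.append(s)
    return True

premises = {("if", "p", "q"), "p"}
print(is_proof([("if", "p", "q"), "p", "q"], premises))  # True
print(is_proof([("if", "p", "q"), "q"], premises))       # False: q has no support
```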
As we shall see, for well-behaved logics, logical entailment and provability are identical - a set of premises logically entails a conclusion if and only if the conclusion is provable from the premises. This is a very big deal.
One of Aristotle's great contributions to philosophy was his recognition that what makes a step of a proof immediately obvious is its form rather than its content. It does not matter whether you are talking about people or buildings or numbers. What matters is the structure of the facts with which you are working. Such patterns are called rules of inference.
As an example, consider the reasoning step shown below. We know that all Accords are Hondas, and we know that all Hondas are Japanese cars. Consequently, we can conclude that all Accords are Japanese cars.
Now consider another example. We know that all borogoves are slithy toves, and we know that all slithy toves are mimsy. Consequently, we can conclude that all borogoves are mimsy. What's more, in order to reach this conclusion, we do not need to know anything about borogoves or slithy toves or what it means to be mimsy.
What is interesting about these examples is that they share the same reasoning structure, viz. the pattern shown below.
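One way to see that the two arguments share a form is to write the pattern itself as a rule that ignores content entirely. In this hypothetical Python sketch (our own encoding), a premise of the form "all x are y" is just a pair `(x, y)`, and the same rule handles Hondas and borogoves alike.

```python
def all_are(premise1, premise2):
    """Pattern: all x are y; all y are z  =>  all x are z.
    The rule inspects only structure, never the meaning of the terms."""
    x, y1 = premise1
    y2, z = premise2
    if y1 == y2:
        return (x, z)
    return None  # the premises do not chain

print(all_are(("Accords", "Hondas"), ("Hondas", "Japanese cars")))
print(all_are(("borogoves", "slithy toves"), ("slithy toves", "mimsy")))
```

The function returns `("Accords", "Japanese cars")` and `("borogoves", "mimsy")` by the very same computation, which is Aristotle's point: the step is justified by form, not content.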
The existence of such reasoning patterns is fundamental in Logic but raises important questions. Which patterns are correct? Are there many such patterns or just a few?
Let us consider the first of these questions. Obviously, there are patterns that are just plain wrong in the sense that they can lead to incorrect conclusions. Consider, as an example, the (faulty) reasoning pattern shown below.
Now let us take a look at an instance of this pattern. If we replace x by Toyotas and y by cars and z by made in America , we get the following line of argument, leading to a conclusion that happens to be correct.
On the other hand, if we replace x by Toyotas and y by cars and z by Porsches , we get a line of argument leading to a conclusion that is questionable.
What distinguishes a correct pattern from an incorrect one is that a correct pattern must always lead to correct conclusions, i.e. its conclusions must be correct so long as the premises on which they are based are correct. As we will see, this is the defining criterion for what we call deduction.
Now, it is noteworthy that there are patterns of reasoning that are sometimes useful but do not satisfy this strict criterion. There is inductive reasoning, abductive reasoning, reasoning by analogy, and so forth.
Induction is reasoning from the particular to the general. The example shown below illustrates this. If we see enough cases in which something is true and we never see a case in which it is false, we tend to conclude that it is always true.
Abduction is reasoning from effects to possible causes. Many things can cause an observed result. We often tend to infer a cause even when our enumeration of possible causes is incomplete.
Reasoning by analogy is reasoning in which we infer a conclusion based on similarity of two situations, as in the following example.
Of all types of reasoning, deduction is the only one that guarantees its conclusions in all cases: it produces only those conclusions that are logically entailed by one's premises.
1.6 Formalization
So far, we have illustrated everything with sentences in English. While natural language works well in many circumstances, it is not without its problems. Natural language sentences can be complex; they can be ambiguous; and failing to understand the meaning of a sentence can lead to errors in reasoning.
Even very simple sentences can be troublesome. Consider the two grammatically legal sentences "The cherry blossoms in the spring" and "The cherry blossoms in the spring sank". They are the same in all but the last word, but their structure is entirely different. In the first, the main verb is blossoms, while in the second blossoms is a noun and the main verb is sank.
As another example of grammatical complexity, consider the following excerpt taken from the University of Michigan lease agreement. The sentence in this case is sufficiently long and the grammatical structure sufficiently complex that people must often read it several times to understand precisely what it says.
The University may terminate this lease when the Lessee, having made application and executed this lease in advance of enrollment, is not eligible to enroll or fails to enroll in the University or leaves the University at any time prior to the expiration of this lease, or for violation of any provisions of this lease, or for violation of any University regulation relative to resident Halls, or for health reasons, by providing the student with written notice of this termination 30 days prior to the effective date of termination, unless life, limb, or property would be jeopardized, the Lessee engages in the sales of purchase of controlled substances in violation of federal, state or local law, or the Lessee is no longer enrolled as a student, or the Lessee engages in the use or possession of firearms, explosives, inflammable liquids, fireworks, or other dangerous weapons within the building, or turns in a false alarm, in which cases a maximum of 24 hours notice would be sufficient.
As an example of ambiguity, suppose I were to write the sentence There's a girl in the room with a telescope . See the following figure for two possible meanings of this sentence. Am I saying that there is a girl in a room containing a telescope? Or am I saying that there is a girl in the room and she is holding a telescope?
Such complexities and ambiguities can sometimes be humorous if they lead to interpretations the author did not intend. See the examples below for some infamous newspaper headlines with multiple interpretations. Using a formal language eliminates such unintentional ambiguities (and, for better or worse, avoids any unintentional humor as well).
As an illustration of errors that arise in reasoning with sentences in natural language, consider the following examples. In the first, we use the transitivity of the better relation to derive a conclusion about the relative quality of champagne and soda from the relative quality of champagne and beer and the relative quality of beer and soda. So far so good.
Now, consider what happens when we apply the same transitivity rule in the case illustrated below. The form of the argument is the same as before, but the conclusion is somewhat less believable. The problem in this case is that the use of nothing here is syntactically similar to the use of beer in the preceding example, but in English it means something entirely different.
Logic eliminates these difficulties through the use of a formal language for encoding information. Given the syntax and semantics of this formal language, we can give a precise definition for the notion of logical conclusion. Moreover, we can establish precise reasoning rules that produce all and only logical conclusions.
In this regard, there is a strong analogy between the methods of Formal Logic and those of high school algebra. To illustrate this analogy, consider the following algebra problem.
Xavier is three times as old as Yolanda. Xavier's age and Yolanda's age add up to twelve. How old are Xavier and Yolanda?
Typically, the first step in solving such a problem is to express the information in the form of equations. If we let x represent the age of Xavier and y represent the age of Yolanda, we can capture the essential information of the problem as shown below.
Using the methods of algebra, we can then manipulate these expressions to solve the problem. First we subtract the second equation from the first.
Next, we divide each side of the resulting equation by -4 to get a value for y . Then substituting back into one of the preceding equations, we get a value for x .
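The manipulation just described can be spelled out step by step. The sketch below is ours, not the book's: each equation a·x + b·y = c is encoded as a coefficient row, and the two moves are mirrored exactly, subtracting the equations to eliminate x, then dividing to solve for y and substituting back.

```python
# x = 3y  becomes  x - 3y = 0;  the second fact is  x + y = 12.
# Each equation a*x + b*y = c is stored as the row [a, b, c].
eq1 = [1, -3, 0]
eq2 = [1, 1, 12]

# Subtract the second equation from the first to eliminate x.
diff = [a - b for a, b in zip(eq1, eq2)]   # [0, -4, -12], i.e. -4y = -12

# Divide by the y coefficient to solve for y, then substitute back for x.
y = diff[2] / diff[1]                      # y = 3.0
x = eq2[2] - y                             # x = 12 - y = 9.0
print(x, y)                                # 9.0 3.0
```

So Xavier is nine and Yolanda is three, matching the informal derivation.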
Now, consider the following logic problem.
If Mary loves Pat, then Mary loves Quincy. If it is Monday and raining, then Mary loves Pat or Quincy. If it is Monday and raining, does Mary love Quincy?
As with the algebra problem, the first step is formalization. Let p represent the possibility that Mary loves Pat; let q represent the possibility that Mary loves Quincy; let m represent the possibility that it is Monday; and let r represent the possibility that it is raining.
With these abbreviations, we can represent the essential information of this problem with the following logical sentences. The first says that p implies q , i.e. if Mary loves Pat, then Mary loves Quincy. The second says that m and r implies p or q , i.e. if it is Monday and raining, then Mary loves Pat or Mary loves Quincy.
As with Algebra, Formal Logic defines certain operations that we can use to manipulate expressions. The operation shown below is a variant of what is called Propositional Resolution . The expressions above the line are the premises of the rule, and the expression below is the conclusion.
There are two elaborations of this operation. (1) If a proposition on the left hand side of one sentence is the same as a proposition on the right hand side of the other sentence, it is okay to drop the two symbols, with the proviso that only one such pair may be dropped. (2) If a constant is repeated on the same side of a single sentence, all but one of the occurrences can be deleted.
We can use this operation to solve the problem of Mary's love life. Looking at the two premises above, we notice that p occurs on the left-hand side of one sentence and the right-hand side of the other. Consequently, we can cancel the p and thereby derive the conclusion that, if it is Monday and raining, then Mary loves Quincy or Mary loves Quincy.
Dropping the repeated symbol on the right hand side, we arrive at the conclusion that, if it is Monday and raining, then Mary loves Quincy.
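This restricted form of Propositional Resolution is easy to mechanize. In the sketch below (our own representation, not the book's notation), an implication is a pair of sets (left-hand side, right-hand side); using sets makes the duplicate-dropping elaboration automatic.

```python
def resolve(clause1, clause2):
    """Cancel one proposition that appears on the right-hand side of
    the first clause and the left-hand side of the second; sets
    handle the deletion of repeated symbols for free."""
    lhs1, rhs1 = clause1
    lhs2, rhs2 = clause2
    for p in rhs1 & lhs2:
        return (lhs1 | (lhs2 - {p}), (rhs1 - {p}) | rhs2)
    return None  # no proposition to cancel

# m & r -> p | q   and   p -> q
premise_mr = (frozenset({"m", "r"}), frozenset({"p", "q"}))
premise_pq = (frozenset({"p"}), frozenset({"q"}))

lhs, rhs = resolve(premise_mr, premise_pq)  # cancel p
print(sorted(lhs), sorted(rhs))             # ['m', 'r'] ['q']
```

The result says exactly that if it is Monday and raining, then Mary loves Quincy.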
This example is interesting in that it showcases our formal language for encoding logical information. As with algebra, we use symbols to represent relevant aspects of the world in question, and we use operators to connect these symbols in order to express information about the things those symbols represent.
The example also introduces one of the most important operations in Formal Logic, viz. Resolution (in this case a restricted form of Resolution). Resolution has the property of being complete for an important class of logic problems, i.e. it is the only operation necessary to solve any problem in the class.
1.7 Automation
The existence of a formal language for representing information and the existence of a corresponding set of mechanical manipulation rules together have an important consequence, viz. the possibility of automated reasoning using digital computers.
The idea is simple. We use our formal representation to encode the premises of a problem as data structures in a computer, and we program the computer to apply our mechanical rules in a systematic way. The rules are applied until the desired conclusion is attained or until it is determined that the desired conclusion cannot be attained. (Unfortunately, in some cases, this determination cannot be made; and the procedure never halts. Nevertheless, as discussed in later chapters, the idea is basically sound.)
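The systematic application of rules can be sketched as a loop that keeps deriving new sentences until the desired conclusion appears or nothing new can be derived. The Python below is a minimal illustration under our own encoding, with modus ponens as the single rule; real automated reasoning systems are far more sophisticated, and, as the text notes, in richer logics such a loop need not terminate.

```python
def modus_ponens(x, y):
    # From ("if", a, b) and a, conclude b; otherwise nothing.
    if isinstance(x, tuple) and x[0] == "if" and x[1] == y:
        return x[2]
    return None

def forward_chain(premises, goal):
    """Apply the rule to all pairs of known sentences until the goal
    is derived or the set of derivable sentences stops growing."""
    known = set(premises)
    while goal not in known:
        new = {modus_ponens(a, b) for a in known for b in known} - {None} - known
        if not new:
            return False   # closure reached without deriving the goal
        known |= new
    return True

premises = {("if", "p", "q"), ("if", "q", "r"), "p"}
print(forward_chain(premises, "r"))   # True
print(forward_chain(premises, "s"))   # False
```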
Although the prospect of automated reasoning has achieved practical realization only in the last few decades, it is interesting to note that the concept itself is not new. In fact, the idea of building machines capable of logical reasoning has a long tradition.
One of the first individuals to give voice to this idea was Leibnitz. He conceived of "a universal algebra by which all knowledge, including moral and metaphysical truths, can some day be brought within a single deductive system". Having already perfected a mechanical calculator for arithmetic, he argued that, with this universal algebra, it would be possible to build a machine capable of rendering the consequences of such a system mechanically.
Boole gave substance to this dream in the 1800s with the invention of Boolean algebra and with the creation of a machine capable of computing accordingly.
The early twentieth century brought additional advances in Logic, notably the invention of the predicate calculus by Russell and Whitehead and the proof of the corresponding completeness and incompleteness theorems by Gödel in the 1930s.
The advent of the digital computer in the 1940s gave increased attention to the prospects for automated reasoning. Research in artificial intelligence led to the development of efficient algorithms for logical reasoning, highlighted by Robinson's invention of resolution theorem proving in the 1960s.
Today, the prospect of automated reasoning has moved from the realm of possibility to that of practicality, with the creation of logic technology in the form of automated reasoning systems, such as Vampire, Prover9, the Prolog Technology Theorem Prover, and others.
The emergence of this technology has led to the application of logic technology in a wide variety of areas. The following paragraphs outline some of these uses.
Mathematics. Automated reasoning programs can be used to check proofs and, in some cases, to produce proofs or portions of proofs.
Engineering. Engineers can use the language of Logic to write specifications for their products and to encode their designs. Automated reasoning tools can be used to simulate designs and in some cases validate that these designs meet their specification. Such tools can also be used to diagnose failures and to develop testing programs.
Database Systems. By conceptualizing database tables as sets of simple sentences, it is possible to use Logic in support of database systems. For example, the language of Logic can be used to define virtual views of data in terms of explicitly stored tables, and it can be used to encode constraints on databases. Automated reasoning techniques can be used to compute new tables, to detect problems, and to optimize queries.
Data Integration The language of Logic can be used to relate the vocabulary and structure of disparate data sources, and automated reasoning techniques can be used to integrate the data in these sources.
Law and Business. The language of Logic can be used to encode regulations and business rules, and automated reasoning techniques can be used to analyze such regulations for inconsistency and overlap.
1.8 Reading Guide
Although Logic is a single field of study, there is more than one logic in this field. In the three main units of this book, we look at three different types of logic, each more sophisticated than the one before.
Propositional Logic is the logic of propositions. Symbols in the language represent "conditions" in the world, and complex sentences in the language express interrelationships among these conditions. The primary operators are Boolean connectives, such as and , or , and not .
Relational Logic expands upon Propositional Logic by providing a means for explicitly talking about individual objects and their interrelationships (not just monolithic conditions). In order to do so, we expand our language to include object constants and relation constants, variables and quantifiers.
Functional Logic takes us one step further by providing a means for describing worlds with infinitely many objects. The resulting logic is much more powerful than Propositional Logic and Relational Logic. Unfortunately, as we shall see, some of the nice computational properties of the first two logics are lost as a result.
Despite their differences, there are many commonalities among these logics. In particular, in each case, there is a language with a formal syntax and a precise semantics; there is a notion of logical entailment; and there are legal rules for manipulating expressions in the language.
These similarities allow us to compare the logics and to gain an appreciation of the fundamental tradeoff between expressiveness and computational complexity. On the one hand, the introduction of additional linguistic complexity makes it possible to say things that cannot be said in more restricted languages. On the other hand, the introduction of additional linguistic flexibility has adverse effects on computability. As we proceed through the material, our attention will range from the completely computable case of Propositional Logic to a variant that is not at all computable.
One final comment. In the hopes of preventing difficulties, it is worth pointing out a potential source of confusion. This book exists in the meta world. It contains sentences about sentences; it contains proofs about proofs. In some places, we use similar mathematical symbology both for sentences in Logic and sentences about Logic. Wherever possible, we try to be clear about this distinction, but the potential for confusion remains. Unfortunately, this comes with the territory. We are using Logic to study Logic. It is our most powerful intellectual tool.
Logic is the study of information encoded in the form of logical sentences. Each logical sentence divides the set of all possible worlds into two subsets - the set of worlds in which the sentence is true and the set of worlds in which the sentence is false. A set of premises logically entails a conclusion if and only if the conclusion is true in every world in which all of the premises are true. Deduction is a form of symbolic reasoning that produces conclusions that are logically entailed by premises (distinguishing it from other forms of reasoning, such as induction, abduction, and analogical reasoning). A proof is a sequence of simple, more-or-less obvious deductive steps that justifies a conclusion that may not be immediately obvious from given premises. In Logic, we usually encode logical information as sentences in formal languages, and we use rules of inference appropriate to these languages. Such formal representations and methods are useful in our own reasoning. Moreover, they allow us to automate the process of deduction, though the computability of such implementations varies with the complexity of the sentences involved.
Exercise 1.1: Consider the state of the Sorority World depicted below.
For each of the following sentences, say whether or not it is true in this state of the world.
Exercise 1.2: Consider the state of the Sorority World depicted below.
Exercise 1.3: Consider the state of the Sorority World depicted below.
Exercise 1.4: Come up with a table of likes and dislikes for the Sorority World that makes all of the following sentences true. Note that there is more than one such table.
Exercise 1.5: Consider a set of Sorority World premises that are true in the four states of Sorority World shown in Section 1.4. For each of the following sentences, say whether or not it is logically entailed by these premises.
Exercise 1.6: Consider the sentences shown below.
Say whether each of the following sentences is logically entailed by these sentences.
Exercise 1.7: Say whether or not the following reasoning patterns are logically correct.

Logic and Ontology
A number of important philosophical problems are at the intersection of logic and ontology. Both logic and ontology are diverse fields within philosophy and, partly because of this, there is not one single philosophical problem about the relation between them. In this survey article we will first discuss what different philosophical projects are carried out under the headings of “logic” and “ontology” and then we will look at several areas where logic and ontology overlap.
- 1. Introduction
- 2.1 Different conceptions of logic
- 2.2 How the different conceptions of logic are related to each other
- 3.1 Different conceptions of ontology
- 3.2 How the different conceptions of ontology are related to each other
- 4.1 Formal languages and ontological commitment. (L1) meets (O1) and (O4)
- 4.2 Is logic neutral about what there is? (L2) meets (O2)
- 4.3 Formal ontology. (L1) meets (O2) and (O3)
- 4.4 Carnap’s rejection of ontology. (L1) meets (O4) and (the end of) (O2)
- 4.5 The fundamental language. (L1) meets (O4) and (the new beginning of) (O2)
- 4.6 The form of thought and the structure of reality. (L4) meets (O3)
- 5. Conclusion
- Other Internet Resources
- Related Entries
Both logic and ontology are important areas of philosophy covering large, diverse, and active research projects. These two areas overlap from time to time and problems or questions arise that concern both. This survey article is intended to discuss some of these areas of overlap. In particular, there is no single philosophical problem of the intersection of logic and ontology. This is partly so because the philosophical disciplines of logic and of ontology are themselves quite diverse and there is thus the possibility of many points of intersection. In the following we will first distinguish different philosophical projects that are covered under the terms ‘logic’ and ‘ontology’. We will then discuss a selection of problems that arise in the different areas of contact.
‘Logic’ and ‘ontology’ are big words in philosophy, and different philosophers have used them in different ways. Depending on what these philosophers mean by these words, and, of course, depending on the philosopher’s views, sometimes there are striking claims to be found in the philosophical literature about their relationship. But when Hegel, for example, uses ‘logic’, or better ‘Logik’, he means something quite different than what is meant by the word in much of the contemporary philosophical scene. We will not attempt to survey the history of the different conceptions of logic and of ontology, nor the history of the debate about their relationship. Instead this article will look at this issue fairly top down, with an emphasis on areas of overlap that are presently actively debated. For more historical information, see Kneale and Kneale 1985. Nonetheless, two historically important figures, namely Gottlob Frege and Immanuel Kant, will make repeat appearances below.
There are several quite different topics put under the heading of ‘logic’ in contemporary philosophy, and it is controversial how they relate to each other.
On the one hand, logic is the study of certain mathematical properties of artificial, formal languages. It is concerned with such languages as the first or second order predicate calculus, modal logics, the lambda calculus, categorial grammars, and so forth. The mathematical properties of these languages are studied in such subdisciplines of logic as proof theory or model theory. Much of the work done in this area these days is mathematically difficult, and it might not be immediately obvious why this is considered a part of philosophy. However, logic in this sense arose from within philosophy and the foundations of mathematics, and it is often seen as being of philosophical relevance, in particular in the philosophy of mathematics, and in its application to natural languages.
A second discipline, also called ‘logic’, deals with certain valid inferences and good reasoning connected to them. The idea here is that there are certain patterns of valid inferences which are both an object of study in their own right and connected to certain patterns of good reasoning. How this connection between inference and reasoning is to be understood more precisely and to what extent it obtains is controversial, and beyond the scope of this survey. However, see Christensen 2005 for more. In any case, logic does not capture good reasoning as a whole. That is the job of the theory of rationality. Rather, it deals with inferences whose validity can be traced back to the formal features of the representations that are involved in that inference, be they linguistic, mental, or other representations. Some patterns of inference can be seen as valid by merely looking at the form of the representations that are involved in this inference. Such a conception of logic thus distinguishes validity from formal validity. An inference is valid just in case the truth of the premises guarantees the truth of the conclusion, or alternatively if the premises are true then the conclusion has to be true as well, or again alternatively, if it can’t be that the premises are true but the conclusion is false.
Validity so understood is simply a modal notion, a notion about what has to be the case. Others might think of validity as involving a more fine grained hyperintensional notion, but in any case, validity so understood is not what logic is concerned with. Logic is concerned with formal validity , which can be understood as follows. In a system of representations, for example a language, it can be that some inferences are always valid as long as the representational or semantic features of certain parts of the representations are kept fixed, even if we abstract from or ignore the representational features of the other parts of the representations. So, for example, as long as we stick to English, and we keep the meanings of certain words like “some” and “all” fixed, certain patterns of inference, like some of Aristotle’s syllogisms, are valid no matter what the meaning of the other words in the syllogism. [ 1 ] To call an inference formally valid is to assume that certain words have their meaning fixed, that we are within a fixed set of representations, and that we can ignore the meaning of the other words. The words that are kept fixed are the logical vocabulary, or logical constants, the others are the non-logical vocabulary. And when an inference is formally valid then the conclusion logically follows from the premises. This could be generalized for representations that are not linguistic, like graphic representations, though it would require a bit more work to do so. Logic is the study of such inferences, and certain related concepts and topics, like formal invalidity, proof, consistency, and so on. The central notion of logic in this sense is the notion of logical consequence. How this notion should be understood more precisely is presently widely debated, and a survey of these debates can be found in the entry on logical consequence .
A third conception of logic takes logic to be the study of special truths, or facts: the logical truths, or facts. In this sense logic could be understood as a science that aims to describe certain truths or facts, just as other sciences aim to describe other truths. The logical truths could be understood as the most general truths, ones that are contained in any other body of truths that any other science aims to describe. In this sense logic is different from biology, since it is more general, but it is also similar to biology in that it is a science that aims to capture a certain body of truths. This way of looking at logic is often associated with Frege.
This conception of logic can, however, be closely associated with the one that takes logic to be fundamentally about certain kinds of inferences and about logical consequence. A logical truth, on such an understanding, is simply one that is expressed by a representation which logically follows from no assumptions, i.e. which logically follows from an empty set of premises. Alternatively, a logical truth is one whose truth is guaranteed as long as the meaning of the logical constants is fixed, no matter what the meanings of the other parts in a representation are.
And there are other notions of ‘logic’ as well. One of them is historically prominent, but not very widely represented in the contemporary debate. We will briefly discuss it here nonetheless. According to this conception of logic, it is the study of the most general features of thoughts or judgments, or the form of thoughts or judgments. Logic thus understood will for example be concerned with the occurrence of subject and predicate structure that many judgments exhibit, and with other such general features of judgments. It will mostly be concerned with thoughts, and not directly with linguistic representations, though, of course, a proponent of this conception can claim that there is a very close connection between them. To talk about the form of a judgment will involve a subtly different notion of ‘form’ than to talk about the form of a linguistic representation. The form of a linguistic representation, basically, was what was left once we abstract from or ignore the representational features of everything except what we keep fixed, the logical constants. The form of a thought, on the other hand, is often understood as what is left over once we abstract from its content, that is, what it is about. We will briefly pursue the question below how these notions of form are related to each other. This conception of logic is associated with Kant. Kant distinguished different notions of logic (for example transcendental logic, general logic, etc.), but we won’t be able to discuss these here. See the entry on Immanuel Kant for more.
One important philosophical aspect of logic, at least in the senses that deal with logical consequence and the forms of judgements, is its normativity. Logic seems to give us a guide to how we ought to reason, and how we ought to draw inferences from one representation to another. But it is not at all clear what guide it gives us, and how we should understand more precisely what norms logic puts on our reasoning. For example, logic does not put us under the norm “If you believe \(A\) and you believe if \(A\) then \(B\), then you ought to believe \(B\).” After all, it might be that I should not believe \(A\) and if \(A\) then \(B\) in the first place. So, in particular I shouldn’t believe \(B\). A reductio ad absurdum is a form of argument that illustrates this. If I believe \(A\) and if \(A\) then \(0=1\), then this should lead me to abandon my belief in \(A\), not lead to a belief that \(0=1\). The consequences of my beliefs can lead me to abandon them. Still, if I have some reasons for my beliefs then I have at least some prima facie, but not necessarily conclusive, reason to hold the consequences of those beliefs. Logic might thus tell us at least this much, though: whenever I have some reason to believe \(A\) and if \(A\) then \(B\), then I have a prima facie reason to believe \(B\). See Harman 1986 for the view that logic has no distinctive normative role, and Field 2009 for a nice critical discussion of Harman’s view and an argument why logic should be tied to norms of rationality. A survey of this and related issues can be found in the entry on the normativity of logic.
And, of course, logic does not tell us how we ought to reason or infer in all particular cases. Logic does not deal with the particular cases, but only with the most generally valid forms of reasoning or inference, ones that are valid no matter what one reasons about. In this sense logic is often seen to be topic neutral. It applies no matter what one is thinking or reasoning about. And this neutrality, or complete generality of logic, together with its normativity, is often put as “logic is about how we ought to think if we are to think at all” or “logic is the science of the laws that we ought to follow in our thinking no matter what we think about”. There are well known philosophical puzzles about normativity, and these apply to logic as well if it is normative. One is why it is that thinkers are under such norms. After all, why shouldn’t I think the way I prefer to think, without there being some norm that governs my thinking, whether I like it or not? Why is there an “ought” that comes with thinking as such, even if I don’t want to think that way? One idea to answer this is to employ the notion of a ‘constitutive aim of belief’, the idea that belief as such aims at something: the truth. If so, then maybe one could argue that by having beliefs I am under the norm that I ought to have true ones. And if one holds that one of the crucial features of logically valid inferences is that they preserve truth, then one could argue that the logical laws are norms that apply to those who have beliefs. See Velleman 2000 for more on the aim of belief. The normativity of logic will not be central for our discussion to follow, but the topic neutrality and generality will be.[2]
Overall, we can thus distinguish four notions of logic:

(L1) the study of artificial formal languages,
(L2) the study of formally valid inferences and logical consequence,
(L3) the study of logical truths, and
(L4) the study of the general features, or form, of judgments.
There is, of course, a question of how these different conceptions of logic relate to each other. The details of their relationship invite many hard questions, but we should briefly look at this nonetheless.
How (L1) and (L2) relate to each other is a subject of controversy. One straightforward, though controversial, view is the following. For any given system of representations, like sentences in a natural language, there is one and only one set of logical constants. Thus there will be one formal language that best models what logically valid inferences there are among these natural representations. This formal language will have a logical vocabulary that captures the inferential properties of the logical constants, and it will model all other relevant features of the natural system of representation with non-logical vocabulary. An especially important system of representations is our natural languages. Thus (L1) is the study of formal languages, of which one is distinguished, and this distinguished language nicely represents the fixed and non-fixed features of our natural languages, through its logical and non-logical vocabulary, at least assuming that our natural languages are similar to each other in this regard. And validity in that formal language, a technical notion defined in the appropriate way for that formal language, nicely models logical validity or logical consequence in our natural language system of representations. Or so this view of the relationship between (L1) and (L2) holds.
This view of the relationship between (L1) and (L2), however, assumes that there is one and only one set of logical constants for each system of representations. A contrary view holds that which expressions are treated as logical constants is a matter of choice, with different choices serving different purposes. If we fix, say, ‘believes’ and ‘knows’, then we can see that ‘\(x\) believes that \(p\)’ is implied by ‘\(x\) knows that \(p\)’ (given widely held views about knowledge and belief). This does not mean that ‘believes’ is a logical constant in an absolute sense. Given other interests, other expressions can be treated as logical. According to this conception, different formal languages will be useful in modeling the inferences that are formally valid given different sets of ‘logical constants’, that is, expressions whose meaning is kept fixed.
This debate thus concerns whether there is one and only one set of logical constants for a system of representations, and if so, which ones are the logical ones. We will not get into this debate here, but there is quite a large literature on what logical constants are, and how logic can be demarcated. For a general discussion and further references, see for example Engel 1991. Some of the classic papers in this debate include Hacking 1979, who defends a proof-theoretic way of distinguishing logical constants from other expressions. The leading idea here is that logical constants are those whose meaning can be given by proof-theoretic introduction and elimination rules. On the other hand, Mauthner 1946, van Benthem 1986, van Benthem 1989, and Tarski 1986 defend semantic ways to mark that difference. The leading idea here is that logical notions are ‘permutation invariant’. Since logic is supposed to be completely general and neutral with respect to what the representations are about, it should not matter to logic if we switch around the objects that these representations are about. So, logical notions are those that are invariant under permutations of the domain. Van Benthem 1989 gives a general formulation to this idea. See the entry on logical constants for more.
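The permutation-invariance idea can be illustrated computationally. The following minimal sketch (a small finite domain and hypothetical helper names, chosen purely for illustration) checks that the existential and universal quantifiers, viewed as functions from subsets of the domain to truth values, give the same verdict on a subset and on its image under any permutation of the domain, while a notion tied to a particular object does not:

```python
from itertools import combinations, permutations

domain = {1, 2, 3}

def subsets(s):
    """All subsets of s, as frozensets."""
    elems = sorted(s)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

def invariant(notion):
    """A notion (a function from subsets of the domain to bools) is
    permutation invariant iff permuting the domain never changes
    its verdict on any subset."""
    for perm in permutations(sorted(domain)):
        mapping = dict(zip(sorted(domain), perm))
        for s in subsets(domain):
            image = frozenset(mapping[x] for x in s)
            if notion(s) != notion(image):
                return False
    return True

exists = lambda s: len(s) > 0              # the existential quantifier
forall = lambda s: s == frozenset(domain)  # the universal quantifier
about_one = lambda s: 1 in s               # a notion about one particular object

print(invariant(exists), invariant(forall), invariant(about_one))
```

Since permutations preserve which subsets are empty and which exhaust the domain, both quantifiers come out invariant; the ‘is object 1 among them’ notion does not, which is the sense in which it is not topic neutral.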
The relationship between (L2) and (L3) was briefly addressed above. They seem to be closely related, because a logical truth can be understood as one that follows from an empty set of premises, and \(A\)'s being a logical consequence of \(B\) can be understood as its being a logical truth that if \(B\) then \(A\). There are some questions to be ironed out about how this is supposed to go more precisely. How should we understand cases of logical consequence from infinitely many premises? Are logical truths all finitely statable? But for our purposes we can say that the two are rather closely related.
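With \(\models\) for logical consequence, the two connections just mentioned can be stated compactly:

\[ A \text{ is a logical truth} \iff \varnothing \models A, \qquad B \models A \iff \varnothing \models B \rightarrow A. \]

The second equivalence is what reduces facts about consequence between statements to facts about logical truths of conditional form.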
The relationship between (L2) and (L4), on the other hand, raises some questions. For one, of course, there is an issue about what it means to say that judgments have a form, and whether they do in the relevant sense. But one way in which this question could be understood directly ties it to (L2). If thoughts, and thus judgments, are realized by minds having a certain relation to mental representations, and if these representations are themselves structured like a language, with a “syntax” and a “semantics” (properly understood), then the form of a judgment could be understood just like the form of a sentence. Such a view of thoughts is commonly called the Language of Thought hypothesis (see Fodor 1975). If it is correct, then the language of thought might contain logical and non-logical vocabulary. The form of a judgment could then be understood along the lines on which we understood the form of a linguistic representation when we talked about formally valid inferences. Thus the relationship between (L2) and (L4) is rather direct. On both conceptions of logic we deal with logical constants; the difference is that one deals with a system of mental representations, the other with a system of linguistic representations. Both, presumably, would deal with corresponding sets of logical constants. Even though mental and linguistic representations form different sets of representations, since they are closely connected with each other, for every logical constant in one of these sets there will be another one in the other set of the corresponding syntactic type and with the same content, or at least a corresponding inferential role.
But this conception of their relationship assumes that the “general features of judgments” or “forms of judgment” with which (L4) is concerned deal with something like the logical constants in the language of thought. Here the judgment as a mental act is assumed to operate on a mental representation that itself has syntactic structure. And the form of the judgment was understood as the form of the representation that represents the content of the judgment, whereby the form of the representation was understood along the lines of (L2), involving logical constants. But what if we can’t understand “form of judgment” or “form of thought” that way? One way this could fail is if the language of thought hypothesis itself fails, and mental states do not involve representations that have something like a syntactic form. The question then becomes, first, how we should understand ‘form of judgment’ more precisely, and, secondly, how logic, as the discipline concerned with forms of judgments in the sense of (L4), relates to (L2).
One way to answer the first question is to understand “form of judgment” as being concerned not with the representation that might be involved in a judgment, but rather with the content of the judgment, i.e., with what the judgment represents to be the case. Contents of judgments can be seen as propositions, and these can be understood as structured entities, for example Russellian propositions. Such propositions are ordered sets whose members are objects and properties. How such a conception of (L4) relates to (L2) will in part depend on how one thinks of the logical constants in Russellian propositions. If they are higher-order properties or functions that are members of these propositions alongside other objects and properties, then presumably the logical constants have content. But this seems to be in conflict with an understanding of (L4) as being concerned with the form that is left once we abstract from all content. It would seem that on such an understanding of (L4) one can’t closely associate ‘form of judgment’, understood as what’s left once we abstract from all content of the judgment, with logical constants if the latter have content.
Another way to understand “form” as being concerned with what the judgment is about, rather than the judgment itself, is to think of what it is about, the world, itself as having a form. In this sense we associate “form” neither with the representation that is involved in the judgment, nor with the proposition which is its content, but rather with the world that is judged about. On such a conception the world itself has a form or basic structure. (L4) would be concerned with this structure. How (L4) relates to (L2) is then a somewhat tricky question. One way, again, could be that the logical constants that (L2) is concerned with correspond to the structure of what a representation in which they occur is about, but don’t contribute to the content of that representation. This again seems incompatible with the logical constants themselves having content. So, whether one associates form of judgment with the ‘syntactic’ structure of a representation that is involved in the judgment, or with the content of that representation, or with the structure of what the representation is about, the relationship between (L4) and (L2) will in part depend on whether one thinks the logical constants themselves contribute to content. If they do, and if form is contrasted with content, then a close association seems impossible. If the logical constants don’t have content, then it might be possible.
Finally, the relationship between (L1) and (L4) comes down to the same as that between (L1) and (L2) if we understand ‘form of thought’ analogously to ‘form of representation’. If not, then it will again depend on how (L4) is understood more precisely.
Thus there are many ways in which (L1), (L2), (L3), and (L4) are connected, and many in which they are quite different.
3. Ontology
As a first approximation, ontology is the study of what there is. Some contest this formulation of what ontology is, so it’s only a first approximation. Many classical philosophical problems are problems in ontology: the question whether or not there is a god, or the problem of the existence of universals, etc. These are all problems in ontology in the sense that they deal with whether or not a certain thing, or more broadly entity, exists. But ontology is usually also taken to encompass problems about the most general features and relations of the entities that do exist. There are also a number of classic philosophical problems that are problems in ontology understood in this way. For example, the problem of how a universal relates to a particular that has it (assuming there are universals and particulars), or the problem of how an event like John eating a cookie relates to the particulars John and the cookie, and the relation of eating, assuming there are events, particulars and relations. These kinds of problems quickly turn into metaphysics more generally, which is the philosophical discipline that encompasses ontology as one of its parts. The borders here are a little fuzzy. But we have at least two parts to the overall philosophical project of ontology, on our preliminary understanding of it: first, to say what there is, what exists, what the stuff of reality is made out of; and secondly, to say what the most general features and relations of these things are.
This way of looking at ontology comes with two sets of problems, which lead to the philosophical discipline of ontology being more complex than just answering the above questions. The first set of problems is that it isn’t clear how to approach answering these questions. This leads to the debate about ontological commitment. The second set of problems is that it isn’t so clear what these questions really are. This leads to the philosophical debate about meta-ontology. Let’s look at them in turn.
One of the troubles with ontology is that it not only isn’t clear what there is, it also isn’t so clear how to settle questions about what there is, at least not for the kinds of things that have traditionally been of special interest to philosophers: numbers, properties, God, etc. Ontology is thus a philosophical discipline that encompasses, besides the study of what there is and of its general features, also the study of what is involved in settling questions about what there is in general, especially for the philosophically tricky cases. How we can find out what there is isn’t an easy question to answer. It might seem simple enough for regular objects that we can perceive with our eyes, like my house keys, but how should we decide it for such things as, say, numbers or properties? One first step to making progress on this question is to see if what we believe already rationally settles it. That is to say, given that we have certain beliefs, do these beliefs already bring with them a rational commitment to an answer to such questions as ‘Are there numbers?’ If our beliefs bring with them a rational commitment to an answer to an ontological question about the existence of certain entities, then we can say that we are committed to the existence of these entities. What precisely is required for such a commitment to occur is subject to debate, a debate we will look at momentarily. To find out what one is committed to by a particular set of beliefs, or by acceptance of a particular theory of the world, is part of the larger discipline of ontology.
Besides it not being so clear what it is to commit yourself to an answer to an ontological question, it also isn’t so clear what an ontological question really is, and thus what it is that ontology is supposed to accomplish. To figure this out is the task of meta-ontology, which strictly speaking is not part of ontology construed narrowly, but the study of what ontology is. However, like most philosophical disciplines, ontology more broadly construed contains its own meta-study, and thus meta-ontology is part of ontology, more broadly construed. Nonetheless it is helpful to separate it out as a special part of ontology. Many of the philosophically most fundamental questions about ontology really are meta-ontological questions. Meta-ontology was not too popular in the latter parts of the 20th century, partly because one meta-ontological view, the one often associated with Quine, had been widely accepted as the correct one, but this acceptance has been challenged in recent years in a variety of ways. One motivation for the study of meta-ontology is simply the question of what question ontology aims to answer. Take the case of numbers, for example. What is the question that we should aim to answer in ontology if we want to find out whether there are numbers, that is, whether reality contains numbers besides whatever else it is made up of? This way of putting it suggests an easy answer: ‘Are there numbers?’ But this question seems like an easy one to answer. An answer to it is implied, it seems, by trivial mathematics, say that the number 7 is less than the number 8. If so, then there is a number which is less than 8, namely 7, and thus there is at least one number. Can ontology be that easy? The study of meta-ontology will have to determine, among other things, whether ‘Are there numbers?’ really is the question that the discipline of ontology is supposed to answer, and more generally, what ontology is supposed to do. We will pursue these questions further below.
As we will see, several philosophers think that ontology is supposed to answer a different question than what there is, but they often disagree on what that question is.
The larger discipline of ontology can thus be seen as having four parts:

(O1) the study of ontological commitment, i.e., what we or others are committed to,
(O2) the study of what there is,
(O3) the study of the most general features of what there is, and of the relations among the things there are, and
(O4) the study of meta-ontology, i.e., of what task the discipline of ontology should aim to accomplish, how its questions are to be understood, and with what methodology they can be answered.
The relationship between these four seems rather straightforward. (O4) will have to say how the other three are supposed to be understood. In particular, it will have to tell us whether the question to be answered in (O2) indeed is the question of what there is, which was taken above to be only a first approximation of what ontology is supposed to do. Maybe it is supposed to answer the question of what is real instead, or of what is fundamental, or some other question. Whatever one says here will also affect how one should understand (O1). We will at first work with the most common way of understanding (O2) and (O1), and discuss alternatives in turn. If (O1) has the result that the beliefs we share commit us to a certain kind of entity, then this requires us either to accept an answer to a question about what there is in the sense of (O2) or to revise our beliefs. If we accept that there is such an entity in (O2), then this invites questions in (O3) about its nature and the general relations it has to other things we also accept. On the other hand, investigations in (O3) into the nature of entities that we are not committed to and that we have no reason to believe exist would seem like a rather speculative project, though, of course, it could still be fun and interesting.
4. Areas of overlap
The debates about logic and about ontology overlap at various places. Given the division of ontology into (O1)–(O4), and the division of logic into (L1)–(L4) we can consider several issues where logic, understood a certain way, overlaps with ontology, understood a certain way. In the following we will discuss some paradigmatic debates related to the relationship between logic and ontology, organized by areas of overlap.
Suppose we have a set of beliefs, and we wonder what the answer to the ontological question ‘Are there numbers?’ is, assuming (O4) tells us this is the ontological question about numbers. One strategy for seeing whether our beliefs already commit us to an answer to this question is as follows: first, write out all those beliefs in a public language, like English. This by itself might not seem to help much, since if it wasn’t clear what my beliefs commit me to, why would it help to look at what acceptance of what these sentences say commits me to? But now, secondly, write these sentences in what is often called ‘canonical notation’. Canonical notation can be understood as a formal or semi-formal language that brings out the true underlying structure, or ‘logical form’, of a natural language sentence. In particular, such a canonical notation will make explicit which quantifiers occur in these sentences, what their scope is, and the like. This is where formal languages come into the picture. After that, and thirdly, look at the variables that are bound by these quantifiers.[3] What values do they have to have in order for these sentences all to be true? If the answer is that the variables have to have numbers as their values, then you are committed to numbers. If not, then you aren’t committed to numbers. The latter doesn’t mean that there are no numbers, of course, just as your being committed to them doesn’t mean that there are numbers. But if your beliefs are all true, then there have to be numbers, if you are committed to numbers. Or so this strategy goes.
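For instance, the apparently trivial belief that there is a number between 6 and 8 might be rendered in canonical notation roughly as follows (with \(N\) a hypothetical predicate for ‘is a number’):

\[ \exists x\,(Nx \wedge 6 < x \wedge x < 8) \]

For this sentence to be true, the variable \(x\) bound by the quantifier must take a number as its value; so, on the strategy just outlined, accepting it commits one to numbers.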
All this might seem a lot of extra work for little. What do we really gain from these ‘canonical notations’ in determining ontological commitment? One attempt to answer this, which partly motivates the above way of doing things, is based on the following consideration: We might wonder why we should think that quantifiers are of great importance for making ontological commitments explicit. After all, if I accept the apparently trivial mathematical fact that there is a number between 6 and 8, does this already commit me to an answer to the ontological question whether there are numbers out there, as part of reality? The above strategy tries to make explicit that and why it in fact does commit me to such an answer. This is so since natural language quantifiers are fully captured by their formal analogues in canonical notation, and the latter make ontological commitments obvious because of their semantics. Such formal quantifiers are given what is called an ‘objectual semantics’. This is to say that a particular quantified statement ‘\(\exists x\,Fx\)’ is true just in case there is an object in the domain of quantification that, when assigned as the value of the variable ‘\(x\)’, satisfies the open formula ‘\(Fx\)’. This makes obvious that the truth of a quantified statement is ontologically relevant, and in fact ideally suited to make ontological commitment explicit, since we need entities to assign as the values of the variables. Thus (L1) is tied to (O1). The philosopher most closely associated with this way of determining ontological commitment, and with the meta-ontological view on which it is based, is Quine (in particular Quine 1948). See also van Inwagen 1998 for a presentation sympathetic to Quine.
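The objectual clause can be sketched as a toy model checker. Everything here (the domain, the extension of ‘\(F\)’) is an illustrative assumption, not part of any particular formal system:

```python
def objectual_exists(domain, satisfies):
    """‘∃x Fx’ is true iff some object in the domain, assigned as the
    value of ‘x’, satisfies the open formula ‘Fx’."""
    return any(satisfies(obj) for obj in domain)

# A toy model: a three-object domain, with ‘F’ true of the even objects.
domain = [1, 2, 3]
is_even = lambda x: x % 2 == 0

print(objectual_exists(domain, is_even))  # some object satisfies ‘Fx’
print(objectual_exists([], is_even))      # no objects, so no witness
```

Because the truth of the quantified statement turns on there being an object in the domain to serve as the value of the variable, the ontological relevance stressed in the Quinean picture is built directly into the clause.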
The above account of ontological commitment has been criticized from a variety of different angles. One criticism focuses on the semantics that is given for quantifiers in the formal language that is used as the canonical notation of the natural language representations of the contents of beliefs. The above, objectual semantics is not the only one that can be given to quantifiers. One widely discussed alternative is the so-called ‘substitutional semantics’. According to it we do not assign entities as values of variables. Rather, a particular quantified statement ‘\(\exists x\,Fx\)’ is true just in case there is a term in the language that when substituted for ‘\(x\)’ in ‘\(Fx\)’ has a true sentence as its result. Thus, ‘\(\exists x\,Fx\)’ is true just in case there is an instance ‘\(Ft\)’ which is true, for ‘\(t\)’ a term in the language in question, substituted for all (free) occurrences of ‘\(x\)’ in ‘\(Fx\)’. The substitutional semantics for the quantifiers has often been used to argue that there are ontologically innocent uses of quantifiers, and that what quantified statements we accept does not directly reveal ontological commitment. Gottlieb (1980) provides more details on substitutional quantification, and an attempt to use it in the philosophy of mathematics. Earlier work was done by Ruth Marcus, and is reprinted in Marcus 1993.
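By contrast, a substitutional clause quantifies over the terms of the language, not over objects. In this sketch (a hypothetical mini-language with three singular terms, stipulated truths chosen purely for illustration), ‘\(\exists x\,Fx\)’ comes out true without any object being assigned as the value of ‘\(x\)’:

```python
def substitutional_exists(terms, true_of_term):
    """‘∃x Fx’ is true iff some term t of the language is such that
    the substitution instance ‘Ft’ is a true sentence."""
    return any(true_of_term(t) for t in terms)

# A toy language with terms 'a', 'b', 'c'; the instance ‘Fb’ is
# stipulated to be a true sentence.
terms = ["a", "b", "c"]
true_instances = {"b"}

print(substitutional_exists(terms, lambda t: t in true_instances))
```

The contrast with the objectual clause is that truth here depends only on which substitution instances are true sentences, which is why substitutional quantification has been claimed to be ontologically innocent.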
Another objection to the above account of determining ontological commitment goes further and questions the use of a canonical notation, and of formal tools in general. It states that if the ontological question about numbers simply is the question ‘Are there numbers?’ then all that matters for ontological commitment is whether or not what we accept implies ‘There are numbers’. In particular, it is irrelevant what the semantics for quantifiers in a formal language is, in particular whether it is objectual or substitutional. What ontological commitment comes down to can be determined at the level of ordinary English. Formal tools are of no, or at best limited, importance. Ontological commitment can thus, according to this line of thought, be formulated simply as follows: you are committed to numbers if what you believe implies that there are numbers. Whatever the outcome of the debate between substitutional and objectual semantics, we do not need any formal tools to spell out the semantics of quantifiers. All that matters is that a certain quantified statement ‘There are \(F\)s’ is implied by what we believe for us to be committed to \(F\)s. What does not matter is whether the semantics of the quantifier in “There are \(F\)s” (assuming it contains a quantifier[4]) is objectual or substitutional.
However, even if one agrees that what matters for ontological commitment is whether or not what one believes implies that there are \(F\)s, for a certain kind of thing \(F\), there might still be room for formal tools. First of all, it isn’t clear what implies what. Whether or not a set of statements that express my beliefs implies that there are entities of a certain kind might not be obvious, and might even be controversial. Formal methods can be useful in determining such implications. On the other hand, even granting this usefulness, it is not clear which formal tools are the right ones for modeling a natural system of representations. It might seem that to determine which are the right formal tools we already need to know what the implicational relations are among the natural representations that we attempt to model, at least in basic cases. This could mean that formal tools are only of limited use for deciding controversial cases of implication.
But then, again, it has been argued that often it is not at all clear which statements really involve quantifiers at a more fundamental level of analysis, or logical form. Russell (1905) famously argued that “the King of France” is a quantified expression, even though it appears to be a referring expression on the face of it, a claim now accepted by many. And Davidson (1967) argued that ‘action sentences’ like “Fred buttered the toast” involve quantification over events in their logical form, though not on the surface, a claim that is more controversial. One might argue in light of these debates that which sentences involve quantification over what can’t be finally settled until we have a formal semantics of all of our natural language, and that this formal semantics will give us the ultimate answer to what we are quantifying over. But then again, how are we to tell that the formal semantics proposed is correct, if we don’t know the inferential relations in our own language?
One further use that formal tools could have besides all the above is to make ambiguities and different ‘readings’ explicit, and to model their respective inferential behavior. For example, formal tools are especially useful to make scope ambiguities explicit, since different scope readings of one and the same natural language sentence can be represented with different formal sentences which themselves have no scope ambiguities. This use of formal tools is not restricted to ontology, but applies to any debates where ambiguities can be a hindrance. It does help in ontology, though, if some of the relevant expressions in ontological debates, like the quantifiers themselves, exhibit such different readings. Then formal tools will be most useful to make this explicit. Whether or not quantifiers indeed do have different readings is a question that will not be solved with formal tools, but if they do then these tools will be most useful in specifying what these readings are. For a proposal of this latter kind, see Hofweber 2016. One consequence of this is a meta-ontology different from Quine’s, as we will discuss below.
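A standard illustration: the English sentence ‘Every student read some book’ has two readings that differ only in quantifier scope, and each can be represented by a formal sentence that is itself unambiguous (with \(S\), \(B\), and \(R\) as hypothetical predicates for being a student, being a book, and reading):

\[ \forall x\,(Sx \rightarrow \exists y\,(By \wedge Rxy)) \qquad \text{vs.} \qquad \exists y\,(By \wedge \forall x\,(Sx \rightarrow Rxy)) \]

On the first reading the books may vary from student to student; on the second, a single book was read by all the students.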
The discussion in this section has assumed that ontological commitment is connected to a conception of ontology that concerns what there is. But this is not universally accepted, particularly in recent debates. Maybe ontology does not concern what there is, but what is fundamental, in some sense of the word. If so, then issues tied to quantifiers are not of the most central importance when it comes to ontological commitment, although they would still play a role. The main question would then be connected to fundamentality. And here, too, formal languages might play a role in determining what one is committed to being fundamental. We will discuss the role of formal languages in conceptions of ontology as concerning the fundamental more closely below, in section 4.5.
Logically valid inferences are those that are guaranteed to be valid by their form. And above we spelled this out as follows: an inference is valid by its form if, once we fix the meaning of certain special expressions, the logical constants, the inference is guaranteed to be valid no matter what the meaning of the other expressions in the statements involved is, as long as the whole is meaningful. A logical truth can be understood as a statement whose truth is guaranteed as long as the meanings of the logical constants are fixed, no matter what the meaning of the other expressions is. Alternatively, a logical truth is one that is a logical consequence of no assumptions, i.e., of an empty set of premises.
Do logical truths entail the existence of any entities, or is their truth independent of what exists? There are some well known considerations that seem to support the view that logic should be neutral with respect to what there is. On the other hand, there are also some well known arguments to the contrary. In this section we will survey some of this debate.
If logical truths are ones whose truth is guaranteed as long as the meaning of the logical constants is kept fixed, then logical truths are good candidates for being analytic truths. Can analytic truths imply the existence of any entities? This is an old debate, often conducted in terms of “conceptual truths” instead of “analytic truths”. The most prominent debate of this kind is the debate about the ontological argument for the existence of God. Many philosophers have maintained that there can be no conceptual contradiction in denying the existence of particular entities, and thus there can be no proof of their existence from conceptual truths alone. In particular, an ontological argument for the existence of God is impossible. A famous discussion to this effect is Kant’s discussion of the ontological argument (Kant 1781/7, KrV A592/B620 ff). On the other hand, many other philosophers have maintained that such an ontological argument is possible, and they have made a variety of different proposals for how it can go. We will not discuss the ontological argument here; it is, however, discussed in detail in different formulations in the entry on ontological arguments in this encyclopedia.
Whatever one says about the possibility of proving the existence of an object purely with conceptual truths, many philosophers have maintained that at least logic has to be neutral about what there is. One of the reasons for this insistence is the idea that logic is topic neutral, or purely general. The logical truths are the ones that hold no matter what the representations are about, and thus they hold in any domain. In particular, they hold in an empty domain, one where there is nothing at all. And if that is true then logical truths can’t imply that anything exists. But that argument might be turned around by a believer in logical objects, objects whose existence is implied by logic alone. If it is granted that logical truths have to hold in any domain, then any domain has to contain the logical objects. Thus for a believer in logical objects there can be no empty domain.
There is a close relationship between this debate and a common criticism that standard formal logics (in the sense of (L1)) won’t be able to capture the logical truths (in the sense of (L3)). It is the debate about the status of the empty domain in the semantics of first and second order logical systems.
It is a logical truth in (standard) first order logic that something exists, i.e., ‘\(\exists x\,x=x\)’. Similarly, it is a logical truth in (standard versions of) second order logic that ‘\(\exists F\forall x\,(Fx \vee \neg Fx)\)’. These are existentially quantified statements. Thus, one might argue, logic is not neutral with respect to what there is: there are logical truths that state that something exists. However, it would be premature to conclude that logic is not neutral about what there is simply because there are logical truths in (standard) first or second order logic which are existential statements. If we look more closely at how it comes about that these existential statements are logical truths in these logical systems, we see that it is only so because, by definition, a model for (standard) first order logic has to have a non-empty domain. It is possible to allow for models with an empty domain as well (where nothing exists), but such models are excluded, again by definition, from the (standard) semantics of first order logic. Thus (standard) first order logic is sometimes called the logic of first order models with a non-empty domain. If we allow an empty domain as well, we will need different axioms or rules of inference to have a sound proof system, but this can be done. Thus even though there are formal logical systems, in the sense of (L1), in which there are logical truths that are existential statements, this does not answer the question whether or not there are logical truths, in the sense of (L3), that are existential statements. The question rather is which formal system, in the sense of (L1), best captures the logical truths, in the sense of (L3). So, even if we agree that a first order logical system is a good formal system to represent logical inferences, should we adopt the axioms and rules for models with or without an empty domain?
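The role of the non-empty domain can be illustrated with a small sketch (hypothetical code, not from the text): evaluating the sentence ‘\(\exists x\,x=x\)’ directly in a given domain shows that it holds in every non-empty domain but fails in the empty one, which standard first order semantics excludes by fiat.

```python
def exists_self_identical(domain):
    """Evaluate 'there is an x such that x = x', reading the
    existential quantifier as ranging over the domain's members."""
    return any(x == x for x in domain)

# True in any non-empty domain, however small:
print(exists_self_identical({0}))    # True
# False in the empty domain, where nothing exists:
print(exists_self_identical(set()))  # False
```

Since the empty case is ruled out by the definition of a model, the sentence counts as a logical truth of the standard system; a semantics that admits the empty domain would not count it as one.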
A related debate is the debate about free logic. Free logics are formal systems that drop the assumption made in standard first and higher order logic that every closed term denotes an object in the domain of the model. Free logic allows for terms that denote nothing, and in free logic certain rules about the inferential interaction between quantifiers and terms have to be modified. Whether free or un-free (standard) logic is the better formal model for natural language logical inference is a further question. For more discussion of logic with an empty domain see Quine 1954 and Williamson 1999. For a sound and complete proof system for logic with an empty domain, see Tennant 1990. For a survey article on free logic, see Lambert 2001.
How innocent logic is with respect to ontology is also at the heart of the debate about the status of second order logic as logic. Quine (1970) argued that second order logic was “set theory in sheep’s clothing”, and thus not properly logic at all. Quine was concerned with the question whether second order quantifiers should be understood as ranging over properties or over sets of individuals. The former were considered dubious in various ways; the latter turns second order logic into set theory. This approach to second order logic has been extensively criticized by various authors, most notably George Boolos, who in a series of papers, collected in part I of Boolos 1998, attempted to vindicate second order logic and to propose a plural interpretation, which is discussed in the article on plural quantification.
A particularly important and pressing case of the ontological implications of logic is that of logicist programs in the philosophy of mathematics, in particular Frege’s conception of logical objects and his philosophy of arithmetic. Frege and neo-Fregeans following him believe that arithmetic is logic (plus definitions) and that numbers are objects whose existence is implied by arithmetic. Thus in particular, logic implies the existence of certain objects, and numbers are among them. Frege’s position has been criticized as untenable on the grounds that logic has to be neutral about what there is: mathematics, or even a part thereof, can’t be both logic and about objects. The inconsistency of Frege’s original formulation of his position has sometimes been taken to show this, but since consistent formulations of Frege’s philosophy of arithmetic have surfaced, this last point is moot. Frege’s argument for numbers as objects and arithmetic as logic is probably the best known argument for logic implying the existence of entities. It has been very carefully investigated in recent years, but whether or not it succeeds is controversial. Followers of Frege defend it as the solution to major problems in the philosophy of mathematics; their critics find the argument flawed, or even just a cheap trick that is obviously going nowhere. We will not discuss the details here, but a detailed presentation of the argument can be found in the entry on Frege’s theorem and foundations of arithmetic as well as Rosen 1993, which gives a clear and readable presentation of the main argument of Wright (1983), which in turn is partially responsible for a revival of Fregean ideas along these lines. Frege’s own version is in his classic Grundlagen (1884). A discussion of recent attempts to revive Frege can be found in Hale and Wright 2001, Boolos 1998, and Fine 2002. A discussion of Frege’s and Kant’s conceptions of logic is in MacFarlane 2002, which also contains many historical references.
Formal ontologies are theories that attempt to give precise mathematical formulations of the properties and relations of certain entities. Such theories usually propose axioms about the entities in question, spelled out in some formal language based on some system of formal logic. Formal ontologies can be seen as coming in three kinds, depending on their philosophical ambition. Let’s call them representational, descriptive, and systematic. We will in this section briefly discuss what philosophers, and others, have hoped to do with such formal ontologies.
A formal ontology is a mathematical theory of certain entities, formulated in a formal, artificial language, which in turn is based on some logical system like first order logic, or some form of the lambda calculus, or the like. Such a formal ontology will specify axioms about what entities of this kind there are, what their relations to each other are, and so on. A formal ontology could also contain only axioms that state how the things the theory is about, whatever they may be, relate to each other, but no axioms that state that certain things exist. For example, a formal ontology of events won’t say which events there are. That is an empirical question. But it might say under what operations events are closed, and what structure the events there are exhibit. Similarly for formal ontologies of the part-whole relation, and others. See Simons 1987 for a well known book on various formal versions of mereology, the study of parts and wholes.
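As an illustration of axioms of this purely structural kind, consider the core axioms of ground mereology (of the sort surveyed in Simons 1987), which state that the parthood relation \(\leq\) is a partial order without asserting that any particular parts or wholes exist:

```latex
% Parthood \leq is a partial order:
\forall x\, (x \leq x)                                            % reflexivity
\forall x\,\forall y\, ((x \leq y \wedge y \leq x) \to x = y)     % antisymmetry
\forall x\,\forall y\,\forall z\, ((x \leq y \wedge y \leq z) \to x \leq z) % transitivity
```

These axioms constrain how parts relate to wholes in any domain whatsoever, including an empty one; stronger mereologies add existence-implying principles such as unrestricted composition.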
Formal ontologies can be useful in a variety of different ways. One contemporary use is as a framework to represent information in an especially useful way. Information represented in a particular formal ontology can be more easily accessible to automated information processing, and how best to do this is an active area of research in computer science. The use of the formal ontology here is representational. It is a framework to represent information, and as such it can be representationally successful whether or not the formal theory used in fact truly describes a domain of entities. So, a formal ontology of states of affairs, let’s say, can be most useful to represent information that might otherwise be represented in plain English, and this can be so whether or not there indeed are any states of affairs in the world. Such uses of formal ontologies are thus representational.
A different philosophical use of a formal ontology is one that aims to be descriptive. A descriptive formal ontology aims to correctly describe a certain domain of entities, say sets, or numbers, as opposed to all things there are. Take common conceptions of set theory as one example. Many people take set theory to aim at correctly describing a domain of entities, the pure sets. This is, of course, a controversial claim in the philosophy of set theory, but if it is correct then set theory could be seen as a descriptive formal ontology of pure sets. It would imply that among incompatible formal theories of sets only one could be correct. If set theory were merely representational then both of the incompatible theories could be equally useful as representational tools, though probably for different representational tasks.
Finally, formal ontologies have been proposed as systematic theories of what there is, with some restrictions. Such systematic theories hope to give one formal theory for all there is, or at least a good part of it. Hardly anyone would claim that there can be a simple formal theory that correctly states what concrete physical objects there are. There does not seem to be a simple principle that determines whether there are an even or odd number of mice at a particular time. But maybe this apparent randomness only holds for concrete physical objects. It might not hold for abstract objects, which according to many exist not contingently, but necessarily if at all. Maybe a systematic, simple formal theory is possible of all abstract objects. Such a systematic formal ontology will most commonly have one kind of entity as the primary subject of the theory, and a variety of different notions of reduction that specify how other (abstract) objects really are entities of this special kind. A simple view of this kind would be one according to which all abstract objects are sets, and numbers, properties, etc. are really special kinds of sets. However, more sophisticated versions of systematic formal ontologies have been developed. An ambitious systematic formal ontology can be found in Zalta 1983 and Zalta 1999 [2022] (see the Other Internet Resources).
Representational formal ontologies, somewhat paradoxically, are independent of any strictly ontological issues. Their success or failure is independent of what there is. Descriptive formal ontologies are just like representational ones, except with the ambition of describing a domain of entities. Systematic formal ontologies go further in not only describing one domain, but in relating all entities (of a certain kind) to each other, often with particular notions of reduction. These theories seem to be the most ambitious. Their motivation comes from an attempt to find a simple and systematic theory of all, say, abstract entities, and they can rely on the paradigm of aiming for simplicity in the physical sciences as a guide. They, just like descriptive theories, will have to have as their starting point a reasonable degree of certainty that we indeed are ontologically committed to the entities they aim to capture. Without that these enterprises seem to have little attraction. But even if the latter philosophical ambitions fail, a formal ontology can still be a most useful representational tool.
One interesting view about the relationship between formal languages, ontology, and meta-ontology is the one developed by Carnap in the first half of the 20th century, which is one of the starting points of the contemporary debate in ontology, leading to the well-known exchange between Carnap and Quine, to be discussed below. According to Carnap, one crucial project in philosophy is to develop frameworks that can be used by scientists to formulate theories of the world. Such frameworks are formal languages that have a clearly defined relationship to experience or empirical evidence as part of their semantics. For Carnap it was a matter of usefulness and practicality which one of these frameworks scientists will select to formulate their theories in, and there is no one correct framework that truly mirrors the world as it is in itself. The adoption of one framework rather than another is thus a practical question.
Carnap distinguished two kinds of questions that can be asked about what there is. One kind are the so-called ‘internal questions’, questions like ‘Are there infinitely many prime numbers?’ These questions make sense once a framework that contains talk about numbers has been adopted. Such questions vary in degree of difficulty. Some are very hard, like ‘Are there infinitely many twin prime numbers?’, some are of medium difficulty, like ‘Are there infinitely many prime numbers?’, some are easy, like ‘Are there prime numbers?’, and some are completely trivial, like ‘Are there numbers?’. Internal questions are thus questions that can be asked once a framework that allows talk about certain things has been adopted, and general internal questions, like ‘Are there numbers?’, are completely trivial, since once the framework of talk about numbers has been adopted, the question whether there are any is settled within that framework.
But since the internal general questions are completely trivial, they can’t be what philosophers and metaphysicians are after when they ask the ontological question ‘Are there numbers?’ The philosophers aim to ask a difficult and deep question, not a trivial one. What the philosophers aim to ask, according to Carnap, is not a question internal to the framework, but external to it. They aim to ask whether the framework correctly corresponds to reality, whether or not there really are numbers. However, the words used in the question ‘Are there numbers?’ only have meaning within the framework of talk about numbers, and thus if they are meaningful at all they form an internal question, with a trivial answer. The external questions that the metaphysician tries to ask are meaningless. Ontology, the philosophical discipline that tries to answer hard questions about what there really is, is based on a mistake: the questions it tries to answer are meaningless, and the enterprise should be abandoned. The words ‘Are there numbers?’ thus can be used in two ways: as an internal question, in which case the answer is trivially ‘yes’, but this has nothing to do with metaphysics or ontology, or as an external question, which is the one the philosophers are trying to ask, but which is meaningless. Philosophers should thus not be concerned with (O2), which is a discipline that tries to answer meaningless questions, but with (L1), which is a discipline that, in part, develops frameworks for science to use to formulate and answer real questions. Carnap’s ideas about ontology and meta-ontology are developed in a classic essay (Carnap 1956b). A nice summary of Carnap’s views can be found in his intellectual autobiography (Carnap 1963).
Carnap’s rejection of ontology, and metaphysics more generally, has been widely criticized from a number of different angles. One common criticism is that it relies on a too simplistic conception of natural language, one that ties it too closely to science or to evidence and verification. In particular, Carnap’s more general rejection of metaphysics used a verificationist conception of meaning, which is widely seen as too simplistic. Carnap’s rejection of ontology has been criticized most prominently by Quine, and the debate between Carnap and Quine on ontology is a classic in this field. Quine rejected Carnap’s conception that when scientists are faced with data that don’t fit their theory they have two choices: first, they could change the theory but stay in the same framework; second, they could move to a different framework and formulate a new theory within that framework. These two moves for Carnap are substantially different; Quine would want to see them as fundamentally similar. In particular, Quine rejects the idea that there could be truths which are the trivial internal statements, like “There are numbers”, whose truth is a given once the framework of numbers has been adopted. Such internal statements would be analytic truths, and Quine is well known for thinking that the distinction between analytic and synthetic truths is untenable. Thus Carnap’s distinction between internal and external questions is to be rejected alongside the distinction between analytic and synthetic truths. On the other hand, Quine and Carnap agree that ontology in the traditional philosophical sense is to be rejected. Traditionally ontology has often, but not always, been an armchair, a priori investigation into the fundamental building blocks of reality. As such it is completely separated from science.
Quine (1951) rejects this approach to ontology since he holds that there can’t be such an investigation into reality that is completely separate and prior to the rest of inquiry. See Yablo 1998 for more on the debate between Quine and Carnap, which contains many references to the relevant passages. The view on ontological commitment discussed in section 4.1., which is usually attributed to Quine, was developed as a reaction to Carnap’s position discussed in this section. Simply put, Quine’s view is that to see what we are committed to we have to see what our best overall theory of the world quantifies over. In particular, we look at our best overall scientific theory of the world, which contains physics and the rest.
Carnap’s arguments for the rejection of ontology are presently widely rejected. However, several philosophers have recently attempted to revive some parts or others of Carnap’s ideas. For example, Stephen Yablo has argued that an internal-external distinction could be understood along the lines of the fictional-literal distinction. And he argues (Yablo 1998) that since there is no fact about this distinction, ontology, in the sense of (O2), rests on a mistake and is to be rejected, just as Carnap held. On the other hand, Thomas Hofweber has argued that an internal-external distinction with many of the features that Carnap wanted can be defended on the basis of facts about natural language, but that such a distinction will not lead to a rejection of ontology, in the sense of (O2). See Hofweber 2016. Hilary Putnam (1987) has developed a view that revives some of the pragmatic aspects of Carnap’s position. See Sosa 1993 for a critical discussion of Putnam’s view, and Sosa 1999 for a related, positive proposal. Robert Kraut (2016) has defended an expressivist reading of the internal-external distinction, and with it some Carnapian consequences for ontology. And most of all, Eli Hirsch and Amie Thomasson have defended different versions of approaches to ontology that capture a good part of the spirit of Carnap’s view. See in particular Hirsch 2011 and Thomasson 2015. For various views about the effects of Carnap on the contemporary debate in ontology, see Blatti and Lapointe 2016.
Although ontology is often understood as the discipline that tries to find out what there is, or what exists, this is rejected by many in the contemporary debate. These philosophers think that the job of ontology is something different, and there is disagreement among them about what exactly it is. Among the proposed options are the projects of finding out what is real, or what is fundamental, or what the primary substances are, or what reality is like in itself, or something like this. Proponents of these approaches often find the questions about what there is too inconsequential and trivial to take them to be the questions for ontology. Whether there are numbers, say, is trivially answered in the affirmative, but whether numbers are real, or whether they are fundamental, or primary substances, etc., is the hard and ontological question. See Fine (2009) and Schaffer (2009) for two approaches along these lines. But such approaches have their own problems. For example, it is not clear whether the question whether numbers are real is any different from the question whether numbers exist. If one were to ask whether the Loch Ness monster is real, it would naturally be understood as just the same question as whether the Loch Ness monster exists. If it is supposed to be a different question, is this due to simple stipulation, or can we make the difference intelligible? Similarly, it is not clear whether the notion of what is fundamental can carry the intended metaphysical weight. After all, there is a perfectly clear sense in which prime numbers are more fundamental in arithmetic than even numbers, but this isn’t to assert the metaphysical priority of prime numbers over other numbers; it is simply to hold that they are mathematically special among the numbers. Thus to ask whether numbers are fundamental is not easily seen as a metaphysical alternative to the approach to ontology that asks whether numbers exist.
See Hofweber (2009; 2006, chapter 13) for a critical discussion of some approaches to ontology that rely on notions of reality or fundamentality. Whether such approaches to ontology are correct is a controversial topic in the debate about ontology which we will not focus on here. However, this approach gives rise to a special connection between logic and ontology which we will discuss in the following.
The relation between the different approaches to ontology mentioned just above is unclear. Is something that is part of reality as it is in itself something which is fundamental, or which is real in the relevant sense? Although it is unclear how these different approaches relate to each other, all of them potentially allow that our ordinary description of the world in terms of mid-size objects, mathematics, morality, and so on, is literally true, while at the same time these truths leave it open what the world, so to speak, deep down, really, and ultimately is like. To use one way of articulating this: even though there are tables, numbers, and values, reality in itself might contain none of them. Reality in itself might contain no objects at all, and nothing normative. Or it might. The ordinary description of the world, on this conception, leaves it largely open what reality in itself is like. To find that out is the job of metaphysics, in particular ontology. We might, given our cognitive setup, be forced to think of the world as one of objects, say. But that might merely reflect how reality is for us. How it is in itself is left open.
Whether the distinction between reality as it is for us and as it is in itself can be made sense of is an open question, in particular if it is not simply the distinction between reality as it appears to us, and as it really is. This distinction would not allow for the option that our ordinary description of reality is true, while the question how reality is in itself is left open by this. If our ordinary description were true then this would mean that how reality appears to us is how it in fact is. But if this distinction can be made sense of as intended then it gives rise to a problem about how to characterize reality as it is in itself, and this gives rise to a role for logic in the sense of (L1).
If we are forced to think of the world in terms of objects because of our cognitive makeup, then it would be no surprise that our natural language forces us to describe the world in terms of objects. And arguably some of the central features of natural language do exactly that: natural language represents information in terms of subject and predicate, where the subject paradigmatically picks out an object and the predicate paradigmatically attributes a property to it. If this is correct about natural language, then it seems that natural language is utterly unsuitable to describe reality as it is in itself if the latter does not contain any objects at all. But then, how are we to describe reality as it is in itself?
Some philosophers have proposed that natural language might be unsuitable for the purposes of ontology. It might be unsuitable since it carries with it too much baggage from our particular conceptual scheme. See Burgess 2005 for a discussion. Or it might be unsuitable since various expressions in it are not precise enough, too context sensitive, or in other ways not ideally suited for the philosophical project. These philosophers propose instead to find a new, better suited language. Such a language will likely be a major departure from natural language and instead be a formal, artificial language. This yet-to-be-found language is often called ‘ontologese’ (Dorr 2005, Sider 2009, Sider 2011), or ‘the fundamental language’. The task thus is to find the fundamental language, a language in the sense of (L1), to properly carry out ontology, in the new and revised sense of (O2): the project of finding out what reality fundamentally, or in itself, etc., is like. For a critical discussion of the proposal that we should be asking the questions of ontology in ontologese, see Thomasson 2015 (chapter 10).
But this idea of a connection between (L1) and (O2) is not unproblematic. First, there is a problem about making this approach to (O2) more precise. How to understand the notion of ‘reality in itself’ is not at all clear, as is well known. It can’t just mean: reality as it would be if we weren’t in it. On this understanding it would simply be the world as it is except with no humans in it, which would in many of its grander features be just as it in fact is. But then what does it mean? Similar, but different, worries apply to those who rely on notions like ‘fundamental’, ‘substance’, and the like. We won’t pursue this issue here, though. Second, there is a serious worry about how the formal language which is supposed to be the fundamental language is to be understood. In particular, is it supposed to be merely an auxiliary tool, or an essential one? This question is tied to the motivation for a formal fundamental language in the first place. If it is merely to overcome ambiguities, imperfections, and context sensitivities, then it will most likely be merely an auxiliary, but not essential, tool. After all, within natural language we have many means available to get rid of ambiguities, imperfections, and context sensitivities. Scope ambiguities can often quite easily be overcome with scope markers. For example, the ambiguity in ‘\(A\) and \(B\) or \(C\)’ can be resolved as ‘either \(A\) and \(B\) or \(C\)’ on the one hand, and ‘\(A\) and either \(B\) or \(C\)’ on the other. Other imprecisions can often, and maybe always, be overcome in some form or other. Formal languages are useful and often convenient for precisification, but they don’t seem to be essential for it.
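That the two readings of ‘\(A\) and \(B\) or \(C\)’ genuinely differ can be shown with a small sketch (illustrative code, not from the text); the readings come apart when, for instance, \(A\) is false and \(C\) is true:

```python
def reading_1(a, b, c):
    """'Either A and B, or C': the disjunction takes wide scope."""
    return (a and b) or c

def reading_2(a, b, c):
    """'A, and either B or C': the conjunction takes wide scope."""
    return a and (b or c)

# With A false, B false, C true the two readings disagree:
print(reading_1(False, False, True))  # True
print(reading_2(False, False, True))  # False
```

The scope markers ‘either … or’ thus do in English what parentheses do in the formal notation, which is the point of the passage: disambiguation is available inside natural language itself.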
On the other hand, the formal fundamental language might be taken to be essential for overcoming shortcomings or inherent features of our natural language such as the one alluded to above. If the subject-predicate structure of our natural languages brings with it an object-property way of representing the world, and if that way of representing the world is unsuitable for representing how reality is in itself, then a completely different language might be required, and not simply useful, to describe fundamental reality. Alternatively, if the formal language is needed to articulate real existence, as we might be tempted to put it, something which we can’t express in English or other natural languages, then, too, it would be essential for the project of ontology. But if the formal language is needed to do something that our natural language can’t do, then what do the sentences in the formal language mean? Since they do something our natural language can’t do, we won’t be able to translate their meanings into our natural language. If we could, then our natural language would be able to say what these sentences say, which by assumption it can’t do. But then what do sentences in the fundamental language mean? If we can’t say or think what these sentences say, what is the point of using them to try to describe reality as it is in itself? Can we even make sense of the project of finding out which sentences in such a language are correct? And why should we care, given that we can’t understand what these sentences mean?
A sample debate related to the issues discussed in this section is the debate about whether it might be that reality in itself does not contain any objects. See, for example, Hawthorne and Cortens 1995, Burgess 2005, and Turner 2011. Here the use of a variable and quantifier free language like predicate functor logic as the fundamental language is a recurring theme.
Formal languages might be called upon to help overcome an inherent flaw built into our natural languages, as discussed above, or they might be called upon to overcome a limitation of our natural languages. One such limitation might be an expressive one. For example, it is unclear whether our natural languages contain truly higher-order quantifiers, ones that interact with, say, predicates or sentences directly, as opposed to just terms. If our natural languages are limited in this way, then it might be tempting to carry out metaphysics and ontology in a higher-order formal language, which does not suffer such a limitation. And since such formal languages can be characterized precisely, that gives rise to the possibility of giving precise proofs of various statements of metaphysical significance. This project is pursued under the label higher-order metaphysics by several contemporary philosophers in some form or other; see, for example, Williamson 2003 and Dorr 2016. For criticism, see Hofweber 2022. This approach can be seen as generalizing formal ontology in the sense of section 4.3 to formal metaphysics, and is related to some of the approaches to formal ontology discussed above.
One way to understand logic is as the study of the most general forms of thought or judgment, what we called (L4). And one way to understand ontology is as the study of the most general features of what there is, our (O3). Now, there is a striking similarity between the most general forms of thought and the most general features of what there is. Take one example. Many thoughts have a subject of which they predicate something. What there is contains individuals that have properties. It seems that there is a kind of a correspondence between thought and reality: the form of the thought corresponds to the structure of a fact in the world. And similarly for other forms and structures. Does this matching between thought and the world ask for a substantial philosophical explanation? Is it a deep philosophical puzzle?
To take the simplest example, the form of our subject-predicate thoughts corresponds perfectly to the structure of object-property facts. If there is an explanation of this correspondence to be given it seems it could go in one of three ways: either the form of thought explains the structure of reality (a form of idealism), or the other way round (a form of realism), or maybe there is a common explanation of why there is a correspondence between them, for example on a form of theism where God guarantees a match.
At first it might seem clear that we should try to give an explanation of the second kind: the structure of the facts explains the forms of our thoughts that represent these facts. And an idea for such an explanation suggests itself. Our minds developed in a world full of objects having properties. If we had a separate, simple representation for each of these different facts, then this would be highly inefficient. After all, it is often the same object that has different properties and figures in different facts, and it is often the same property that is had by different objects. So, it makes sense to split up our representations of the objects and of the properties into different parts, and to put them back together in different combinations in the representation of a fact. And thus it makes sense that our minds developed to represent object-property facts with subject-predicate representations. Therefore we have a mind whose thoughts have a form which mirrors the structure of the facts that make up the world.
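The efficiency point can be made vivid with a toy sketch (hypothetical code, not from the text; the names are illustrative): facts are represented by recombining shared representations of objects and of properties, rather than by storing an unstructured representation per fact.

```python
# Toy model: a fact-representation combines one object-representation
# with one property-representation, both drawn from shared stocks.
objects = {"Socrates", "Plato"}
properties = {"mortal", "wise"}

def represent(obj, prop):
    """A subject-predicate representation of the fact that obj has prop."""
    assert obj in objects and prop in properties
    return (obj, prop)

facts = [represent("Socrates", "mortal"),
         represent("Plato", "mortal"),
         represent("Socrates", "wise")]

# Only 2 + 2 = 4 stored components, yet they can be recombined to
# represent up to len(objects) * len(properties) = 4 distinct facts.
print(len(facts))  # 3
```

The stored components grow with the number of objects plus properties, while the representable facts grow with their product, which is the kind of economy the evolutionary explanation appeals to.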
This kind of explanation is a nice try, and plausible, but it is rather speculative. Whether our minds really developed this way in light of those pressures is not a question easily answered from the armchair. Maybe the facts have a different structure, but our forms are close enough for practical purposes, i.e. for survival and flourishing. And maybe the correspondence does obtain, but not for this largely evolutionary reason; rather, for a different, more direct and more philosophical or metaphysical reason.
To explain the connection differently one could endorse the opposite order of explanatory priority, and argue that the form of thought explains the structure of the world. This would most likely lead to an idealist position of sorts. It would hold that the general features of our minds explain some of the most general features of reality. The most famous attempt to do something like this is Kant’s in the Critique of Pure Reason (Kant 1781/7). We won’t be able to discuss it here in any detail. This strategy for explaining the similarity faces the problem of explaining how there can be a world that exists independently of us, and will continue to exist after we have died, while nonetheless the structure of this world is explained by the forms of our thoughts. Maybe this route could only be taken if one denies that the world exists independently of us, or maybe the tension can be made to go away. In addition, one would have to say how the form of thought explains the structure of reality. For one attempt to do this, see Hofweber 2019; for another, see Gaskin 2020.
But maybe there is not much to explain here. Maybe reality does not have anything like a structure that mirrors the form of our thoughts, at least not when “structure” is understood a certain way. One might hold that the truth of the thought “John smokes” does not require a world split up into objects and properties; it only requires a smoking John. And all that is required for that is a world that contains John, but not also another thing, the property of smoking. A structural match would thus be less demanding, requiring only a match between objects and object-directed thought, but no further match. Such a view would be broadly nominalistic about properties, and it is rather controversial.
Another way in which there might be nothing to explain is connected to philosophical debates about truth. If a correspondence theory of truth is correct, and if thus for a sentence to be true it has to correspond to the world in a way that mirrors the structure and matches parts of the sentence properly with parts of the world, then the form of a true sentence would have to be mirrored in the world. But if, on the other extreme, a coherence theory of truth is correct then the truth of a sentence does not require a structural correspondence to the world, but merely a coherence with other sentences. For more on all aspects of truth see Künne 2003.
Whether or not there is a substantial metaphysical puzzle about the correspondence of the form of thoughts and the structure of reality will itself depend on certain controversial philosophical topics. And if there is a puzzle here, it might be a trivial one, or it might be quite deep. And as usual in these parts of philosophy, how substantial a question is is itself a hard question.
With the many conceptions of logic and the many different philosophical projects under the heading of ontology, there are many problems that lie at the intersection of these areas. We have touched on several above, but there are others as well. Although there is no single problem about the relationship between logic and ontology, there are many interesting connections between them, some closely tied to central philosophical questions. The references and links below are intended to provide more in-depth discussion of these topics.
- Barwise, J. and R. Cooper, 1981. ‘Generalized Quantifiers in Natural Language,’ in Linguistics and Philosophy , 4: 159–219.
- Blatti, S. and S. Lapointe (eds.), 2016, Ontology after Carnap , Oxford: Oxford University Press.
- Boolos, G., 1998. Logic, Logic, and Logic , Cambridge, MA: Harvard University Press.
- Burgess, J., 2005. ‘Being explained away,’ reprinted in his Mathematics, Models, and Modality , Cambridge: Cambridge University Press, 2009.
- Carnap, R., 1956a. Meaning and Necessity: a study in semantics and modal logic , Chicago: University of Chicago Press, 2nd edition.
- –––, 1956b. ‘Empiricism, semantics, and ontology,’ in Carnap 1956a, pp. 203–221.
- –––, 1963. ‘Intellectual autobiography,’ in Schilpp 1963, pp. 3–84.
- Christensen, D., 2005. Putting Logic in Its Place: Formal Constraints on Rational Belief , Oxford University Press.
- Davidson, D., 1967. ‘The Logical Form of Action Sentences,’ in Davidson 1980.
- –––, 1980. Essays on Actions and Events , Oxford: Oxford University Press.
- Dorr, C., 2005. ‘What We Disagree About When We Disagree About Ontology,’ in Fictionalist Approaches to Metaphysics , Mark Kalderon (ed.), Oxford: Oxford University Press, 234–286.
- –––, 2016. ‘To be F is to be G,’ Philosophical Perspectives , 30: 39–134.
- Engel, P., 1991. The Norm of Truth: an introduction to the philosophy of logic , Toronto: University of Toronto Press.
- Everett, A. and T. Hofweber (eds.), 2000. Empty Names, Fiction, and the Puzzles of Non-Existence , Stanford: CSLI Publications.
- Field, H., 2009. ‘What is the normative role of logic?,’ The Proceedings of the Aristotelian Society , LXXXIII: 251–268.
- Fine, K., 2002. The Limits of Abstraction , Oxford: Oxford University Press.
- –––, 2009. ‘The question of ontology,’ in Metametaphysics , D. Chalmers, D. Manley, and R. Wasserman (eds.), Oxford: Oxford University Press.
- Fodor, J., 1975. The Language of Thought , Cambridge, MA: Harvard University Press.
- Frege, G., 1884. Die Grundlagen der Arithmetik: eine logisch mathematische Untersuchung über den Begriff der Zahl , Breslau: W. Koebner; translated by J. L. Austin as The Foundations of Arithmetic: A Logico-Mathematical Enquiry into the Concept of Number , Oxford: Blackwell, second revised edition, 1974.
- Gaskin, R., 2020. Language and World: a defense of linguistic idealism , London: Routledge.
- Goble, L., 2001. Philosophical Logic , Oxford: Blackwell Publishers.
- Gottlieb, D., 1980. Ontological Economy: substitutional quantification and mathematics , Oxford: Oxford University Press.
- Haack, S., 1978. Philosophy of Logics , Cambridge: Cambridge University Press.
- Hacking, I., 1979. ‘What is Logic?,’ Journal of Philosophy , LXXVI (6): 285–319.
- Hale, B. and C. Wright, 2001. The Reason’s Proper Study , Oxford: Oxford University Press.
- Harman, G., 1986. Change in View , Cambridge, MA: MIT Press.
- Hawthorne, J. and A. Cortens, 1995. ‘Towards ontological nihilism,’ Philosophical Studies , 79 (2): 143–165.
- Hirsch, E., 2011. Quantifier Variance and Realism: essays in Metaontology , Oxford: Oxford University Press.
- Hofweber, T., 2009. ‘Ambitious, yet modest, metaphysics,’ in Metametaphysics , D. Chalmers, D. Manley, and R. Wasserman (eds.), Oxford: Oxford University Press.
- –––, 2016. Ontology and the Ambitions of Metaphysics , Oxford: Oxford University Press.
- –––, 2019. ‘Idealism and the Harmony of Thought and Reality,’ Mind , 128(511): 699–734.
- –––, 2022. ‘The Case Against Higher-Order Metaphysics,’ Metaphysics , 5(1): 29–50.
- Kant, I., 1781/7. Kritik der reinen Vernunft , various translations as Critique of Pure Reason .
- Kneale, W. and Kneale, M., 1985. The Development of Logic , Oxford: Oxford University Press.
- Künne, W., 2003. Conceptions of Truth , Oxford: Oxford University Press.
- Kraut, R., 2018. ‘Three Carnaps on Ontology’ in Blatti & Lapointe 2016, pp. 31–58.
- Lambert, K., 2001. ‘Free logics,’ in Goble 2001, pp. 258–279.
- MacFarlane, J., 2002. ‘Frege, Kant, and the Logic in Logicism,’ The Philosophical Review , 111: 25–65.
- Mautner, F.I., 1946. ‘An Extension of Klein’s Erlanger Program: Logic as Invariant Theory,’ American Journal of Mathematics , 68: 345–384.
- Marcus, R., 1993. Modalities , Oxford: Oxford University Press.
- Parsons, T., 1980. Nonexistent Objects , New Haven: Yale University Press.
- Putnam, H., 1987. The Many Faces of Realism , La Salle: Open Court.
- Quine, W.V., 1948. ‘On what there is,’ Review of Metaphysics , 2: 21–38; reprinted in Quine 1980.
- –––, 1951. ‘Two Dogmas of Empiricism,’ The Philosophical Review , 60: 20–43; reprinted in Quine 1980.
- –––, 1954. ‘Quantification and the Empty Domain,’ Journal of Symbolic Logic , 19: 177–179.
- –––, 1970. Philosophy of Logic , Cambridge, MA: Harvard University Press.
- –––, 1980. From a Logical Point of View , 2nd edition, Cambridge, MA: Harvard University Press.
- Read, S., 1995. Thinking about Logic , Oxford: Oxford University Press.
- Rosen, G., 1993. ‘The Refutation of Nominalism (?),’ Philosophical Topics , 21: 149–86.
- Russell, B., 1905. ‘On Denoting,’ Mind , 14: 479–493.
- Schaffer, J., 2009. ‘On what grounds what,’ in Metametaphysics , D. Chalmers, D. Manley, and R. Wasserman (eds.), Oxford: Oxford University Press.
- Schilpp, P.A., 1963. The Philosophy of Rudolf Carnap , La Salle: Open Court.
- Sider, T., 2009. ‘Ontological Realism,’ in Metametaphysics , D. Chalmers, D. Manley, and R. Wasserman (eds.), Oxford: Oxford University Press.
- –––, 2011. Writing the Book of the World , Oxford: Oxford University Press.
- Simons, P., 1987. Parts: A Study in Ontology , Oxford: Oxford University Press.
- Sosa, E., 1993. ‘Putnam’s pragmatic realism,’ Journal of Philosophy , 90: 605–26.
- –––, 1999. ‘Existential Relativity,’ Midwest Studies in Philosophy , 22: 132–143.
- Tarski, A., 1986. ‘What are Logical Notions?,’ History and Philosophy of Logic , 7: 143–154.
- Tennant, N., 1990. Natural Logic , 2nd edition, Edinburgh: Edinburgh University Press.
- Thomasson, A., 2016. Ontology made Easy , New York: Oxford University Press.
- Turner, J., 2011. ‘Ontological Nihilism,’ Oxford Studies in Metaphysics (Volume 6), K. Bennett and D. Zimmerman (eds.), Oxford: Oxford University Press, pp. 3–55.
- van Benthem, J., 1986. Essays in Logical Semantics , Dordrecht: D. Reidel.
- –––, 1989. ‘Logical Constants across Varying Types,’ Notre Dame Journal of Formal Logic , 30 (3): 315–342.
- van Inwagen, P., 1998. ‘Meta-ontology,’ Erkenntnis , 48: 233–250; reprinted in van Inwagen 2001.
- –––, 2001. Ontology, Identity and Modality , Cambridge: Cambridge University Press.
- Velleman, J. D., 2000. ‘On the Aim of Belief,’ Chapter 11 of The Possibility of Practical Reason , Oxford: Oxford University Press.
- Williamson, T., 1999. ‘A note on truth, satisfaction and the empty domain,’ Analysis , 59: 3–8.
- –––, 2003. ‘Everything,’ Philosophical Perspectives , 17(1): 415–465.
- Wright, C., 1983. Frege’s Conception of Numbers as Objects , Aberdeen: Aberdeen University Press.
- Yablo, S., 1998. ‘Does ontology rest on a mistake?,’ Proceedings of the Aristotelian Society , 72: 229–61.
- Zalta, E. N., 1983. Abstract Objects: an Introduction to Axiomatic Metaphysics , Dordrecht: D. Reidel.
- Buffalo Ontology Site .
- Empiricism, semantics and ontology . Online version of Carnap’s famous essay, formatted in HTML by Andrew Chrucky
- Rudolf Carnap , Internet Encyclopedia of Philosophy article on Carnap.
- Frege, Gottlob, Grundlagen der Arithmetik (in German) (PDF), the original of what is translated as the Foundations of Arithmetic .
- Zalta, E., 1999 [2022], Principia Logico-Metaphysica , manuscript of Edward N. Zalta’s systematic formal ontology.
Related entries: Carnap, Rudolf | Frege, Gottlob | Frege, Gottlob: theorem and foundations for arithmetic | logic: free | logical consequence | logical constants | object | ontological arguments | ontological commitment
Acknowledgments
Thanks to various anonymous referees for their helpful suggestions on earlier versions of this article. Thanks also to Jamin Asay, Rafael Laboissiere, Ricardo Pereira, Adam Golding, Gary Davis, programadoor, Barnaby Dromgool, Chris Meister, and especially James Cole for reporting several errors, typos, or omissions.
Copyright © 2023 by Thomas Hofweber <hofweber@unc.edu>
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054

- March 8, 2018
Human beings have been thinking logically (and sometimes illogically) since the earliest era of human existence. However, they have not always been aware of the general principles that distinguish logical from illogical forms of thought. Logic, as an academic subject, is the systematic study of those principles. The logician asks, Which rules should we follow if we want our reasoning to be the best possible?
The rules of logic are guides to correct reasoning just as the rules of arithmetic are guides to correctly adding, subtracting, multiplying, and dividing numbers, the principles of photography are guides to taking good photos, and so on. You can improve your reasoning by studying the principles of logic, just as you can improve your number-crunching abilities by studying the principles of mathematics. Because correct reasoning can be applied to any subject matter whatsoever, the number of potential applications of logical theory is practically unlimited.
The Greek philosopher Aristotle (384–322 BC) wrote the first book on the standards of correct reasoning and later wrote four additional treatises on the subject. Thus, in five highly original (and extremely complex) works, collectively known as the Organon (Greek for “tool,” as in “general tool of thought”), Aristotle launched the study of the principles of correct reasoning and earned the title historians have conferred on him: founder of logic. [i] The noted twentieth-century logician and philosopher Benson Mates writes:
[W]e can say flatly that the history of logic begins with the Greek philosopher Aristotle . . . Although it is almost a platitude among historians that great intellectual advances are never the work of only one person (in founding the science of geometry Euclid made use of the results of Eudoxus and others; in the case of mechanics Newton stood upon the shoulders of Descartes, Galileo, and Kepler; and so on), Aristotle, according to all available evidence, created the science of logic absolutely ex nihilo. [ii]
Logic was first taught as an academic subject in the schools of ancient Athens during the fourth century BC, making it one of the oldest of all academic subjects. For twenty-five hundred years, it has been considered a core academic requirement at institutions of higher learning around the world. Logic remains part of the core curriculum today because the principles of correct reasoning can help anyone reason more accurately about any subject whatsoever, making it an all-purpose “tool kit” for your mind.
Major Divisions of Logic
Formal logic studies the abstract patterns or forms of correct reasoning. Here the focus is on form rather than content, that is, on the logical structure of reasoning apart from what it is specifically about. Since ancient times, logicians have used special symbols and formulas, similar to those used in mathematics, to record the abstract logical forms they have discovered. This is why formal logic is sometimes also called “symbolic logic” or “mathematical logic.”
Informal logic studies the non-formal aspects of reasoning—qualities that cannot be accurately translated into abstract symbols. This is why informal logic for the most part dispenses with special symbols and formulas. In this division of logic, the focus is often on reasoning expressed in everyday language.
Logical theory begins with the notion of an argument , which is defined as one or more statements, called “premises,” offered as evidence, or reason to believe, that a further statement, called the “conclusion,” is true. In plain terms, an argument is reasoning offered in support of a conclusion. Arguments are part of everyday life. You present one every time you put your reasoning into words to share it with others. In the following example, the premises are marked P1 and P2, and the conclusion is labeled C.
- P1: All songwriters are poets.
- P2: Bob Dylan is a songwriter.
- C: Therefore, Bob Dylan is a poet.
The second building block of logical theory is the distinction, first noted by Aristotle, between deductive and inductive reasoning. A deductive argument aims to establish its conclusion with complete certainty, in such a way that if its premises are all true, then its conclusion must be true. Put another way, the underlying claim of a deductive argument is that it is not even possible for its premises all to be true and its conclusion false. For example:
- P1. Tiny Tim played the ukulele.
- P2. Anyone who plays the ukulele is a musician.
- C. Consequently, Tiny Tim was a musician.
Deductive arguments aim for certainty and nothing less. If a deductive argument succeeds in its aim, it is a valid deductive argument. If it does not, it is an invalid deductive argument. A deductive argument is said to be sound if it is (a) valid and (b) all of its premises are true. The following deductive argument is clearly valid although it is not sound.
- P1. All students are millionaires.
- P2. All millionaires drink vodka.
- C. Therefore, necessarily, all students drink vodka.
In contrast, the following argument is invalid (and hence also unsound).
- P1. Ann and Sue are cousins.
- P2. Sue and Rita are cousins.
- C. So, Ann and Rita must be cousins.
The following argument hits the target—it is both valid and sound.
- P1. All whales are mammals.
- P2. All mammals are warm-blooded.
- C. Ergo, all whales are warm-blooded.
Deductive logic is the study of the standards of correct deductive reasoning. Here is an example of a law of deductive logic. Let A, B, and C be variables ranging over terms that stand for categories—words such as cats, dogs, people, trucks, and so forth. Aristotle proved that the following form or pattern of reasoning, named Barbara by logicians in Europe during the Middle Ages, is a valid form, meaning that any argument—about any subject—that exactly follows this pattern is valid.
The Barbara Argument Form
- All B are C.
- All A are B.
- Therefore, necessarily, all A are C.
Let’s test Barbara. If we replace the variable A with sparrows , the variable B with birds , and substitute animals for the variable C, we get the following “substitution instance” of the corresponding form:
- P1. All birds are animals.
- P2. All sparrows are birds.
- C. Therefore, necessarily, all sparrows are animals.
This argument is clearly valid. Aristotle proved that any argument that exactly follows this form of reasoning is valid. For instance:
- P1. All mammals are animals.
- P2. All cats are mammals.
- C. Therefore, necessarily, all cats are animals.
To return to Barbara for a moment, notice that the form is not about any particular subject—it is an abstract pattern with no material content. Barbara is all form and no content. Aristotle discovered that an argument’s validity is always a function of its form rather than its content. You can learn a lot about reasoning by studying valid argument forms. Logicians have catalogued hundreds of them. The study of logical forms is valuable, for if your argument follows a valid form, then it is guaranteed to be valid and therefore your conclusion must be true if your premises are true. As you may have guessed, formal logic and deductive logic overlap in the study of valid patterns of reasoning, of which there are many.
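Validity as a function of form can be checked mechanically. The sketch below is an illustration (not part of the original text): it models categorical terms as sets, reads “All X are Y” as the subset relation, and searches every assignment of sets over a small universe for counterexamples. Barbara yields none (its validity follows from the transitivity of the subset relation), while the superficially similar “undistributed middle” form yields one immediately.

```python
from itertools import chain, combinations

def subsets(universe):
    """Every subset of `universe`, as frozensets."""
    items = list(universe)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def counterexamples(premises_hold_conclusion_fails, universe=(1, 2, 3)):
    """All (A, B, C) triples of sets where the premises hold but the conclusion fails."""
    sets = subsets(universe)
    return [(a, b, c) for a in sets for b in sets for c in sets
            if premises_hold_conclusion_fails(a, b, c)]

# Barbara: All B are C; All A are B; therefore all A are C.
barbara = lambda a, b, c: b <= c and a <= b and not (a <= c)

# Undistributed middle: All A are B; All C are B; therefore all A are C.
undistributed_middle = lambda a, b, c: a <= b and c <= b and not (a <= c)

print(len(counterexamples(barbara)))            # 0: no counterexample exists
print(len(counterexamples(undistributed_middle)) > 0)  # True: the form is invalid
```

Note that the search only certifies Barbara for this small universe; the general proof is the subset relation’s transitivity, exactly as the form suggests.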
An inductive argument, on the other hand, does not aim to show that its conclusion is certain. Rather, it aims to show that its conclusion is probably, though not definitely, true, so that if its premises are true, its conclusion is likely to be true. The following argument, for example, aims to establish its conclusion with a probability less than one:
- P1. Joe has eaten a Dick’s Deluxe burger for lunch every day for the past month.
- C. So, it is very probable that he will have a Dick’s Deluxe for lunch tomorrow.
If an inductive argument achieves its aim, it is a strong argument . An inductive argument that does not achieve its aim is a weak argument . An inductive argument is said to be cogent if it is (a) strong, and (b) all of its premises are true. The following inductive argument is strong although it is surely not cogent:
- P1. We interviewed one thousand people from all walks of life and every social group all over Seattle over a ten-week period, and 90 percent said they do not drink coffee.
- C. Therefore, probably about 90 percent of Seattleites do not drink coffee.
The following argument is clearly weak:
- P1. We interviewed one thousand people from all walks of life as they exited coffee shops in Seattle, and 98 percent said they drink coffee.
- C. Therefore, probably about 98 percent of Seattleites drink coffee.
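The difference between these two polls comes down to how the sample is drawn, and it can be illustrated with a small simulation (a sketch only; the 70 percent abstention rate and population size are assumptions invented for the example, not claims from the text). A sample drawn from the whole population tracks the true rate; a sample drawn only from coffee-shop customers cannot reveal it.

```python
import random

random.seed(0)
# Assume, purely for illustration, that 70% of a city of one million abstains.
population = ["drinks"] * 300_000 + ["abstains"] * 700_000

# Poll 1: sample drawn from the whole population tracks the true rate.
fair_sample = random.sample(population, 1000)
fair_rate = fair_sample.count("abstains") / 1000
print(round(fair_rate, 2))  # close to 0.70

# Poll 2: sampling only coffee-shop customers reaches only drinkers,
# so the result says nothing about the population as a whole.
coffee_shop_patrons = [p for p in population if p == "drinks"]
biased_sample = random.sample(coffee_shop_patrons, 1000)
biased_rate = biased_sample.count("drinks") / 1000
print(biased_rate)  # 1.0 no matter what the true population rate is
```

The simulation makes the point of the weak argument above concrete: its sample is drawn from a sub-population that guarantees the result in advance.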
The following argument is better—it is strong as well as cogent:
- P1. NASA announced that it found evidence of water on Mars.
- P2. NASA is a scientifically reliable agency.
- C. Therefore it is likely there is or was water on Mars.
Inductive logic is the study of the standards of good inductive reasoning. One inductive standard pertains to analogical arguments —arguments that take the following form:
- A and B have many features in common.
- A has attribute x and B is not known not to have attribute x .
- Therefore, B probably has attribute x as well.
For instance:
- P1. Monkey hearts are very similar to human hearts.
- P2. Drug X cures heart disease in monkeys.
- P3. Drug X is not known to not cure heart disease in humans.
- C. Therefore, drug X will probably cure heart disease in humans.
Analogical arguments can be evaluated rationally. Here are three principles commonly used to judge their strength:
- The more attributes A and B have in common, the stronger the argument, provided the common features are relevant to the conclusion.
- The more differences there are between A and B, the weaker the argument, provided the differences are relevant to the conclusion.
- The more specific or narrowly drawn the conclusion, the weaker the argument. The more general or widely drawn the conclusion, the stronger the argument.
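These three qualitative principles can be made concrete with a toy scoring function. This is purely illustrative: the principles in the text are qualitative, and the particular numeric weighting below is an invented assumption, not a standard measure. Higher scores stand for stronger analogical arguments.

```python
def analogy_strength(shared_relevant, differing_relevant, conclusion_specificity):
    """
    Toy score for an analogical argument (illustration only; the weighting
    is an assumption made up for this sketch).
      shared_relevant: count of relevant attributes A and B share
      differing_relevant: count of relevant attributes on which A and B differ
      conclusion_specificity: 0.0 (very general claim) .. 1.0 (very specific claim)
    """
    base = shared_relevant - differing_relevant          # principles 1 and 2
    return base * (1.0 - 0.5 * conclusion_specificity)  # principle 3

# More shared relevant features -> stronger argument (principle 1):
assert analogy_strength(8, 1, 0.5) > analogy_strength(3, 1, 0.5)
# More relevant differences -> weaker argument (principle 2):
assert analogy_strength(5, 4, 0.5) < analogy_strength(5, 1, 0.5)
# A more specific conclusion -> weaker argument (principle 3):
assert analogy_strength(5, 1, 0.9) < analogy_strength(5, 1, 0.1)
print("all ordering checks pass")
```

Whatever weighting one chooses, the orderings the three principles impose are what matter, and the assertions above check exactly those orderings.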
Informal and inductive logic overlap in the study of the many non-formal aspects of inductive reasoning, which include guides to help us improve our assessments of probability.
Information Spillover
The history of ideas is fascinating because often one idea leads to another which leads to a completely unexpected discovery. Economists call this “information spillover” because freely traded ideas tend to give birth to new ideas that give birth to still more ideas that spill from mind to mind as the process cascades into ever widening circles of knowledge and understanding. Aristotle discovered logical principles so exact they could be expressed in symbols like those used in mathematics. Because they could be expressed so precisely, he was able to develop a system of logic similar to geometry. Recall that geometry begins with statements, called “axioms,” asserted as self-evident. With the addition of precise definitions, the geometer uses precise reasoning to derive further statements, called “theorems.” Aristotle’s system began in a similar way, with precise definitions and exact formulas asserted as self-evident. With the base established, he derived a multitude of theorems that branched out in many directions. When he was finished, his system of logical principles was as exact, and proven, as any system of mathematics of the day.
Some observers thought the rules of his system were too mechanical and abstract to be of any practical use. They were mistaken. Aristotle’s system of logic was actually the first step on the path to the digital computer. The first person to design a computing machine was a logician who, after reflecting on the exact and mechanical nature of Aristotle’s system of logical principles, raised one of the most seminal questions ever: Is it possible to design a machine whose gears, by obeying the “laws” of Aristotle’s logic, compute for us the exact, logically correct answer every time?
The logician who first asked the question that connected logic and computing was Raymond Lull (1232–1315), a philosopher, Aristotelian logician, and Catholic priest. Lull has been called the “father of the computer” because he was the first to conceive and design a logical computing machine. Lull’s device consisted of rotating cogwheels inscribed with logical symbols from Aristotle’s system, aligned to move in accord with the rules of logic. In theory, the operator would enter the premises of an argument by setting the dials, and the machine’s gears would then accurately crank out the logically correct conclusion.
Lull’s design may have been primitive, but for the first time in history someone had the idea of a machine that takes inputs, processes them mechanically on the basis of exact rules of logic, and outputs a logically correct answer. We usually associate computing with mathematics, but the first design for a computer was based not on math but on logic—the logic of Aristotle.
Ideas have consequences, and sometimes ideas that seem impractical have consequences that are quite practical. Lull was the first in a long succession of logical tinkerers, each seeking to design a more powerful computing machine. You have a cell phone in your hand right now thanks to the efforts of these innovators, each trained in logical theory. In addition to Lull, the list includes computer pioneers Leonardo da Vinci (1452–1519), Wilhelm Schickard (1592–1635), William Oughtred (1574–1660), Blaise Pascal (1623–1662), Gottfried Leibniz (1646–1716), Charles Babbage (1791–1871), Vannevar Bush (1890–1974), Howard Aiken (1900–1973), and Alan Turing (1912–1954).
Thus, a continuous line of thought can be traced from Aristotle’s logical treatises to the amazing advances in logic and computing theory of the nineteenth and twentieth centuries which led to the completion of the world’s first digital computer (at Iowa State College in 1937) and from there to the much smaller yet more powerful devices of today. It is no coincidence that the circuits inside every digital computer are called “logic gates.” In the logic classroom, this is my answer to those who suppose that abstract logical theory has no practical applications.
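The link between logical laws and the “logic gates” inside a computer can be made concrete with a few lines of code (an illustration, not part of the original article): each gate is simply a Boolean function, and composing gates yields a half adder, the simplest circuit of machine arithmetic, which adds two one-bit numbers.

```python
# Logic gates as Boolean functions.
AND = lambda a, b: a and b
OR  = lambda a, b: a or b
NOT = lambda a: not a
# XOR built purely out of AND, OR, and NOT:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit inputs; return (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# Print the full truth table of one-bit addition.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(int(a), "+", int(b), "->", "carry:", int(c), "sum:", int(s))
```

Chaining half adders (with an extra OR for the carries) gives a full adder, and rows of full adders give the arithmetic units of real processors, which is why Lull’s old idea of gears obeying the laws of logic points straight at modern hardware.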
Computer science is only one spin-off of logical theory. The subject Aristotle founded remains as vital today as it was in ancient Athens. Aristotle probably had no idea how important his new subject would be—or how long the spillover and information overflow would continue.
What does all of this have to do with anything? In everyday life as well as in every academic subject, reason is our common currency. It follows that the ability to reason well is an essential life skill. But skills require knowledge as well as practice. Since logic is the study of the principles of correct reasoning, a familiarity with elementary logic and its applications can help anyone improve his or her life. Some people suppose logic is a useless subject; the truth may be the reverse—it may be the most useful subject of all.
[i] An editor applied the name Organon (“tool”) to Aristotle’s logical works after his death. The name reflects Aristotle’s claim that logic is an all-purpose tool of thought, a guide to the precise thinking needed to attain solidly proven truth on any subject.
[ii] Benson Mates, Elementary Logic , 2nd ed. (New York: Oxford University Press, 1972), 206. Ex nihilo is Latin for “out of nothing” and means “from scratch” in this context.
For a deeper look at the fundamentals of this subject, check out the free course “ Short Little Lessons in Logic ” published by Philosophy News. This course will teach you the fundamentals of logic in bite-sized lessons that you can learn at your own pace.
About the author
Paul Herrick received his Ph.D. in philosophy from the University of Washington. Since 1983 he has taught philosophy at Shoreline Community College, in Shoreline, Washington, near Seattle. He is the author of Reason and Worldview: An Introduction to Western Philosophy , Think with Socrates: An Introduction to Critical Thinking , The Many Worlds of Logic , and Introduction to Logic .
Evaluating a Text
Analyzing Arguments / Logical Fallacies

When you read a text whose purpose is to persuade or argue a point, you need to analyze that text to see whether the argument is logical. Logical arguments need to be reasonable; supported with appropriate, relevant evidence from valid sources; and based on acceptable assumptions. Knowing a bit about logical arguments will help you analyze a text intended to persuade, as well as write your own persuasive, logical arguments.
Logical Argument Basics
Main idea, content, warrant.
The claim is the author’s main argument—what the author wants you to do, think, or believe by the time you finish reading the text. The content is the evidence which provides the support and reasoning upon which the claim is built. The underlying assumption, the way the author uses the evidence to support the claim, often called the warrant. These three parts of a logical argument all need to be believable and coordinated for the argument to be valid. [1]
For example, the author’s main idea or claim may be this: Decreasing carbon dioxide emissions from car exhaust, manufacturing processes, fertilizers, and landfills, while slowing deforestation, may help slow the process of global warming. For this claim, the underlying assumption is that global warming is something that should be slowed. To support this claim and link the evidence with the claim, the author included the following types of content as evidence:
- Facts that show the linkage between increased carbon dioxide levels and warmer temperatures
- Statistics about temperature increases and their effects, and future projections based on current statistics
- Studies done showing that fuel emission laws enacted in a certain location cut down on carbon dioxide levels
- Citation of recognized experts in the field
- Testimony of those involved first-hand with the issue
In this example, all of the argument parts coordinate with one another. The evidence seems appropriate, and is especially strong if it comes from valid sources such as scientific studies published in peer-reviewed journals. The underlying assumption is supported by the evidence. As a reader analyzing the text, you could conclude that this is a logical argument.
On the other hand, the author’s argument may be this: Good nutrition should be taught in school rather than at home . For this claim, the underlying assumption might be that parents are not as good at teaching their children as trained teachers, or it might be that schools have more teaching resources than parents. To support this claim and link the evidence with the claim, the author included the following types of content as evidence:
- interviews with teachers
- interviews with school administrators
- statistics from studies done over time, showing that elementary school children who received lessons on good nutrition maintained good eating habits into adulthood more than those did not receive formal lessons
- personal interviews
In this example, it appears that the first warrant was in effect, based on the way the author linked claim and evidence. You might question the underlying assumption in the warrant, as many readers may not accept this belief. As an analytical reader whose purpose is to evaluate the text, you also might question the type of support. Teachers, school administrators, and people who were interviewed might be biased. Statistics on the effectiveness of teaching about nutrition in school do not track a comparative group of children who were taught at home, so the conclusions of the studies in this case might not fully relate to the argument. As a reader analyzing this text, you could conclude that the author’s argument is not logical.
As you analyze an argument, try to isolate, identify, and investigate these three aspects of argument—main idea, content, warrant—to evaluate the quality of the text.
Ethos, Pathos, Logos

Another complementary way to analyze an argument and evaluate a text is to investigate the three main types of appeals authors use to support their claim. These types of appeals are traditionally referred to by their Greek names: logos (appeal to logic), pathos (appeal to emotion), and ethos (appeal to authority).
Logical appeals may include facts, case studies, statistics, experiments, and expert testimony. Authoritative appeals may include citations of recognized experts and testimony of those involved first-hand in the issue. Emotional appeals may include personal anecdotes, stories, impact studies, and first-hand testimony. Many logical arguments rely on some combination of these three types of appeals. However, an argument may not be logical if a certain type of appeal does not coordinate with the claim, and/or if an author relies too heavily on emotional appeal, for example, to the exclusion of factual support.
The two videos below discuss how to apply these concepts to analyze an argument and thus evaluate a text.
Logical Fallacies
When you analyze a text’s arguments in order to evaluate the quality of that text, you also need to determine whether the content contains errors in logic. Errors in logic, called logical fallacies, weaken the argument and thus the validity of the text. When readers spot questionable reasoning or unfair attempts at audience manipulation, more than their evaluation of the author’s argument ( logos ) may be compromised. Their evaluation of the credibility of the speaker ( ethos ), and perhaps their ability to connect with that speaker on the level of shared values ( pathos ), also may be compromised.

Types & Examples of Logical Fallacies
Classifying fallacies as errors of ethos, logos, or pathos may help you both recognize and understand them.
- Fallacies of ethos relate to credibility. These fallacies may unfairly build up the credibility of the author (or his allies) or unfairly attack the credibility of the author’s opponent (or her allies).
- Fallacies of logos give an unfair advantage to the claims of the speaker or writer or an unfair disadvantage to his opponent’s claims.
- Fallacies of pathos rely excessively upon emotional appeals, attaching positive associations to the author’s argument and negative ones to his opponent’s position.
fallacies that misuse appeals to ethos
Ad hominem : attacking the person making an argument rather than the argument itself.
Example: “Of course that doctor advocates vaccination—he probably owns stock in a pharmaceutical company.”
False authority : relying on claims of expertise when the claimed expert (a) lacks adequate background/credentials in the relevant field, (b) departs in major ways from the consensus in the field, or (c) is biased, e.g., has a financial stake in the outcome.
Example: “Dr. X is an engineer, and he doesn’t believe in global warming.”
Guilt by association/Plain Folk : linking the person making an argument to an unpopular person or group, or linking the person making the argument to ordinary people.
Example: “My opponent is a card-carrying member of the ACLU.”
Example: “Who would you vote for—someone raised in a working-class neighborhood who has the support of Joe the Plumber or some elitist whose daddy sent him to a fancy school?”
Name-calling/Poisoning the well : labeling an opponent with words that have negative connotations in an effort to undermine the opponent’s credibility; undermining an opponent’s credibility before offering that person’s ideas.
Example: “These rabble-rousers are nothing but feminazis.”
Example: “The prosecution is going to bring up a series or so-called experts who are getting a lot of money to testify here today.”
fallacies that misuse appeals to logos
Hasty generalization: jumping to conclusions based upon an unrepresentative sample or insufficient evidence.
Example: “10 of the last 14 National Spelling Bee Champions have been Indian American. Indian Americans must all be great spellers!”
Begging the question: circular argument because the premise is the same as the claim that you are trying to prove.
Example: “This legislation is sinful because it is the wrong thing to do.”
False dilemma: misuse of the either/or argument; presenting only two options when other choices exist
Example: “Either we pass this ordinance or there will be rioting in the streets.”
Post hoc ergo propter hoc/Slippery Slope: Post hoc is a Latin phrase meaning “after this, therefore because of this”; assumes that a first event causes a second event without evidence to show that cause. Slippery slope asserts that one thing will inevitably lead to another without offering adequate support.
Example: “My child was diagnosed with autism after receiving vaccinations. That is proof that vaccines are to blame.”
Example: “We can’t legalize marijuana; if we do, then the next thing you know people will be strung out on heroin.”
Non-sequitur: Latin for “does not follow”; the conclusion is not valid because a premise is untrue (or missing) or because the relationship between premises does not support the deduction stated in the claim.
Example (untrue premise): “If she is a Radford student, she is a member of a sorority. She is a Radford student. Therefore she is a member of a sorority.”
Smoke screen : avoiding the real issue or a tough question by introducing an unrelated topic as a distraction; sometimes called a red herring .
Example: “My opponent says I am weak on crime, but I have been one of the most reliable participants in city council meetings.”
fallacies that misuse appeals to pathos
Appeal to fear, guilt, or pity: using scare tactics to exaggerating possible dangers, evoking an emotional reaction and disregarding the issue at hand.
Example: “Without this additional insurance, you could find yourself broke and homeless.”
Example: “I know I missed assignments, but if you fail me, I will lose my financial aid and have to drop out.”
Appeal to popularity (bandwagon): urging a reader to follow a course of action because “everyone does it.”
Example: “Nine out of ten shoppers have switched to Blindingly-Bright-Smile Toothpaste.”
Appeal to tradition: people have been done it a certain way for a long time; assumes that what has been customary in past is correct and proper.
Example: “We always organize our annual meetings in this way; therefore, we should stick with the same organization for the upcoming year.”
Emotionally Loaded Language: using slanted or biased language
Example: “Only someone out of touch with reality in the 21st century doesn’t do online banking.”
The number and array of logical fallacies can be daunting. The main thing to remember is to look at the way in which an author states and supports the argument in a text. If there are a number of errors in reasoning, the text itself may not be valid for your purposes.
Questions to Analyze the Logic of a Text’s Argument

- Is the claim believable?
- Is the underlying assumption (warrant) acceptable?
- Is the supporting evidence relevant, sufficient, and accurate?
- Has the author cited sources or in some way made it possible for the reader to access evidence used?
- Are there different opinions and perspectives included, especially when there are multiple opinions on an issue?
- Does the author avoid selective use of evidence or other types of manipulation of data?
- Does the offer evidence respectfully, using unbiased language?
- Is there an over-reliance on emotional appeals?
Based on your reading of “ Forget Shorter Showers ” by Derrick Jensen, answer the following questions intended to help you analyze the argument.
- Identify one logical fallacy in each of the first three paragraphs and in the next-to-last paragraph. You do not have to name the fallacies by their formal names; just identify the errors in reasoning in your own words.
Paragraph 1
emotionally charged language / appeal to fear – The phrase “would any sane person” misuses an appeal to pathos, since it over-relies on emotion and on creating fear in the reader.
ad hominem – The phrase “would any sane person” also misuses an appeal to ethos, since it attacks <em>people</em> who believe a certain way, rather than showing the logical error in the belief itself.
smoke screen – There’s a misuse of an appeal to logos in this paragraph, because bringing in the images of dumpster diving to stop Hitler, etc., could derail the reader from the point.
Paragraph 2
hasty generalization – The statistics provided at the end of the paragraph are not based on enough evidence; we’d need to know the source of the statistical information
Paragraph 3
hasty generalization – Again, the statistics provided are not based on enough evidence; we’d need to know the source of the statistical information.
post hoc and appeal to fear – The sentence that people are dying because water is being stolen is a misuse of an appeal to logos and pathos. It does not follow that all people are dying because water is being stolen.
Next-to-last paragraph
slippery slope – It does not follow that death is the end point of seeing the uselessness of simple living as a political act.
- Identify one overall logical fallacy in the whole argument of this text.
One main logical fallacy is false dilemma, a misuse of an appeal to logos. The author sets up an either/or situation: either we take assertive action to fight against the industrial economy’s drain of our resources, or we’re pretty much doomed to deal with ever-diminishing resources. There may be other options, some of which the author explains and rejects.
A question for you to consider: Even though the logic in this text is not totally sound, according to a careful analysis in terms of traditional logic, the author provides thought-provoking ideas. Do you think he could have achieved the same effect with stricter attention to logic?
- Analyzing Arguments/Logical Fallacies includes material adapted from English Composition 1; attribution below. Authored by : Susan Oaks. Project : Introduction to College Reading & Writing. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike
- pages on Introduction to Supporting Claims, Supporting Claims, Evaluating Appeals to Ethos, Logos, and Pathos. Provided by : Lumen Learning. Located at : https://courses.lumenlearning.com/wm-englishcomposition1/ . Project : English Composition 1. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike
- image of a human head drawn as a graphic, with a person's hand and finger pointing to a spot in the brain. Authored by : Gerd Altmann. Provided by : Pixabay. Located at : https://pixabay.com/illustrations/brain-turn-on-education-read-book-770044/ . License : CC0: No Rights Reserved
- video Analyzing the Argument: Premises and Conclusions (Part 1). Authored by : Marc Franco. Provided by : Snap Language. Located at : https://www.youtube.com/watch?v=jVf_iJpSIrM . License : Other . License Terms : YouTube video
- video Analyzing the Argument: The Evidence (Part 2). Authored by : Marc Franco. Provided by : Snap Language. Located at : https://www.youtube.com/watch?v=jVf_iJpSIrM . License : Other . License Terms : YouTube video
- image of finger pointing to the word Error. Authored by : Gerd Altmann. Provided by : Pixabay. Located at : https://pixabay.com/illustrations/error-www-internet-calculator-101409/ . License : CC0: No Rights Reserved

Privacy Policy
If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.
To log in and use all the features of Khan Academy, please enable JavaScript in your browser.
Course: LSAT > Unit 1
- Getting started with Logical Reasoning
Introduction to arguments
- Catalog of question types
- Types of conclusions
- Types of evidence
- Types of flaws
- Identify the conclusion | Quick guide
- Identify the conclusion | Learn more
- Identify the conclusion | Examples
- Identify an entailment | Quick guide
- Identify an entailment | Learn more
- Strongly supported inferences | Quick guide
- Strongly supported inferences | Learn more
- Disputes | Quick guide
- Disputes | Learn more
- Identify the technique | Quick guide
- Identify the technique | Learn more
- Identify the role | Quick guide
- Identify the role | learn more
- Identify the principle | Quick guide
- Identify the principle | Learn more
- Match structure | Quick guide
- Match structure | Learn more
- Match principles | Quick guide
- Match principles | Learn more
- Identify a flaw | Quick guide
- Identify a flaw | Learn more
- Match a flaw | Quick guide
- Match a flaw | Learn more
- Necessary assumptions | Quick guide
- Necessary assumptions | Learn more
- Sufficient assumptions | Quick guide
- Sufficient assumptions | Learn more
- Strengthen and weaken | Quick guide
- Strengthen and weaken | Learn more
- Helpful to know | Quick guide
- Helpful to know | learn more
- Explain or resolve | Quick guide
- Explain or resolve | Learn more
Logical Reasoning Arguments
What is an argument.
- A main conclusion: This statement is a claim that expresses what the arguer is trying to persuade us to accept, whether or not it actually is true.
- Evidence: Also known as premises or support, the arguer provides these statements in order to show us that the conclusion is true. Essentially, the evidence answers the question, “Why do you believe [the conclusion] to be true?” The simplest arguments on the LSAT have just one piece of evidence; more complex arguments will have several.
Conclusion + evidence
- Sarah will probably receive a job offer, because she has ten years of experience.
Conclusion + evidence + intermediate conclusion
- Sarah will probably receive a job offer, because she has ten years of experience. That means that she’ll soon pay me back for the money I lent her.
Conclusion + evidence + background information
- One of this neighborhood’s residents has been complaining about his sister Sarah having been unemployed for so long. She’s applying for programming jobs at many companies, but she only received her first interview invite last week. She’ll probably receive a job offer because she has ten years of experience. In a job market like the current one, anything over eight years of experience gives a candidate a great advantage.
How do we identify the main conclusion?
Signal words for conclusions.
- It follows that
- As a result
- Nevertheless
- Nonetheless
- The cat will run away if you open the door . That's because the cat doesn't like being inside.
- 90% of adults in the area returned a survey and indicated that they think crime is on the rise. We need to act quickly to combat this increase in crime .

How do we identify the relevant evidence?
- [Insert conclusion here]
- [Insert the “why” reasoning here].
- [Insert premises here]. Therefore ,
- [Insert conclusion here].
Signal words for evidence
- On the grounds that
- As shown by
Looking ahead
- Assumption (sufficient and necessary)
- Match the flaw
- Match the structure
- Identify the role
- Identify the technique
- Identify the conclusion
Related articles
Want to join the conversation.
- Upvote Button navigates to signup page
- Downvote Button navigates to signup page
- Flag Button navigates to signup page
76 Logic Essay Topic Ideas & Examples
🏆 best logic topic ideas & essay examples, 📌 simple & easy logic essay titles, 👍 good essay topics on logic, 💯 free logic essay topic generator.
- Informal Logic-Fallacies Definition Syntactic ambiguity is the second type of ambiguity and is normally identified by the presence of ambiguous grammar usage or the general structure of the statement. Hence, the ambiguity of this sentence is in the […]
- Logic in Islam and Number of Islamic Theologians Combination of the diverse philosophical ideologies resulted into Islamic logic, which has made marked contribution in the Islamic philosophy.”Historians of logic have long recognized that the medieval Muslim philosophers and philosophical theologians rendered variously as […]
- Analyzing the Logic of an Article: Cultural Authenticity and Recovery Maintenance in a Rural First Nation Community The key question of the article is how culture may bolster resilience in substance abuse recovery as well as what constitutes “cultural authenticity” for both indigenous and non-indigenous residents of a remote community.
- Understanding Economics: The Nature and Logic of Capitalism These profits are determined by the prices of the commodities and the cost of production that the producer incurred during the whole process of production and creation of goods and services[3].
- The Logic: Model and Evaluation At the initiation stage of the project, the targeted indicators and deliverables of the project are s sufficiently drawn by the project staff according to the basic needs assessments already conducted.
- Strategic planning and performance measurement : Logic Model Short-term outcomes are influenced by two major factors, which are awareness and knowledge base of the affected. Conversely, intermediate-term outcomes are identified after a certain program has changed the practices that are common to clients […]
- Programming Logic and Design – Program Change In the online processing method, processing of data takes place as it is input into the program, that is, unlike in batch processing it does not wait for the data to be organized into a […]
- Programming Logic – File Processing for Game Design In most of cases, the PLD used for a given prototyping, is the same PLD that will be put into use in the final invention of the end equipment, like games.
- The Logic of Using Quantitative Data As far as the types of quantitative data required to show the results of an intervention are concerned, it can be suggested that the information including the grades that the students receive for their performance, […]
- Yield Management and Service Dominant Logic The reduction in the price of the goods offered means that loyal customer are now able to enjoy the product during different seasons in a year.
- Work and Family: Institutional Logic The recognition of the practical and theoretical benefits of the institutional approach led to the creation of the notion of institutional logic, which comprises “the socially constructed, historical patterns of material practices, assumptions, values, beliefs, […]
- Radix Sort Algorithm, Its Logic and Applications The sorting process starts from the rightmost digit based on the key or the positions of the numbers being sorted. LSD radix sorts the integers from the least to the most significant digit.
- The Logic of Modern Physics The purpose of this paper is to reflect on the writings of these three scholars and generate three questions that can be discussed in class.
- Say “Stop” to Childhood Obesity: Logic Model The company is related to the priority population since it aims at reducing the rates of childhood obesity among Hispanic children.
- Logic and Philosophy Questions As a rule, a traditional logical inference has two basic elements, i.e, a premise and a conclusion. Therefore, A.
- Relational Logic in “I-It” and “I-You” Relations While considering the concept of “I-It”, specific attention should be paid to the perception of the self through It unless a person is not involved in relation with another thing or object.
- The Use of Logic in the Declaration of Independence: Following Jefferson’s Argument By emphasizing the notions of egalitarianism and the principles of natural law, Jefferson successfully appeals to logic and makes a convincing presentation of the crucial social and legal principles to his opposition.
- Women, Instagram and Calligraphy: Neoliberal Logic in Production of Aesthetic Objects Such a reality imposes the need for the research of a valuable topic that deals with the role of women in the creation of aesthetic content for online commerce on social media.
- History of Logic: Brief Review of Inferences or Judgments The history of logic relates to the progress of the science of valid inference. The logic of Aristotle was of importance during the period of the Renaissance too.
- Logic Dialectic and Rhetoric: Compare and Contrast In addition, the prominent thinker estimated rhetoric in the context of logic, because logic, as well as rhetoric and dialectic, point out the studying of persuasion methods.
- Importance to Reason and Logic Prior to evaluating the strengths and weaknesses of reason as a way of knowing, we should first discuss such concept as knowledge, because even now philosophers and scholars have not come to the agreement as […]
- Language and Logic: The Similarities and Differences A major function of language is that the symbols are subjective. There are various areas of study that will allow one to get the right interpretation of language and logic.
- Logic and Philosophy Relations Aristotle is reputed to be the first man to study the logic concept although there have been other numerous contributions to the concept over the years.
- NGO Logic Model: Review The successful implementation of the proposed project depends on the stakeholders’ ability to be involved and focus on the anticipated short-term and long-term goals.
- Komatsu Company’s Service-Dominant Logic In this case, the company continues to focus on providing unique solutions to customers, but it has more opportunities for development, innovation, and addressing customers’ needs with the help of a new model elements.
- Rene Descartes: Education and Rules of Logic I believe it is a considerable drawback of schooling, and it should be fixed in the near future, as young adults need to learn how to apply the knowledge they get.
- Postmodernism, or, the Cultural Logic of Late Capitalism I agree with the statement because people with different cultures have different ways of doing things and architecture is one of the crucial tools used to express the culture of the people.
- Dangers of Logic and Artificial Intelligence The following are the dangers of logic and artificial intelligence when applied in various areas. The last danger of logic and artificial intelligence relates to autonomous weapons.
- Feelings and Logic in the Literature Works In his short story, Poe covers the side of the senses and the rigor of the mind. Another metaphor is the combination of the heart and the clock that beat in the head of the […]
- Logic and Design: Flowcharts and Pseudocode The basic understanding of logic and design is that processes should be presented in a way that demonstrates certain algorithms, i.e.the description of a process should be precise and should contain detailed instructions on what […]
- Mathematical Platonism: Philosophy’s Loss of Logic In 1953, Gottlob Frege posted a strong argument that the language of mathematics tends to refer to and quantify the mathematical objects and the corresponding theories are true. Frege argues that mathematical language is quantifiable, […]
- Is Female Thinking and Logic Truly Different From the Male’s One It is necessary to analyze this question from a scientific point of view and to understand whether the thought processes of different genders are different.
- Aristotle’s View on the Concept of Logic Thus, it was shown that logic is not just a specific doctrine of specific things or terms, but the science of the laws of syllogisms, such as modus ponens or modus tollens, expressed in variables. […]
- Logic and Statistical Significance This week, we were asked to evaluate the footnote, which stated that due to the fact that the research was explanatory, the level of significance was relaxed to 0.1.
- Value Innovation: The strategic Logic of High Growth
- Virtual to Virtuous Money: A Virtue Ethics Perspective on Video Game Business Logic
- The Prevailing Logic Of Global Microbial Diversity
- The Nature Of Logic As It Relates To Critical Thinking
- Understanding the Source and Logic Behind Violent Conflicts
- The Ramist Logic of Edward Taylor’s Upon a Spider Catching a Fly
- The Pure Logic of Accounting: A Critique of the Fair Value Revolution
- The Relevance of Logic in Our Everyday Lives
- Use of Logic in Monty Python and the Holy Grail
- The Undercover Parent: Coben’s Spyware Logic
- Zen Action, Zen Person And Nagarjuna: The Logic Of Emptiness
- The Teachings of Christ: The Logic to Morality
- Use of Programmable Logic Control in Modern Vehicle
- The Strategic Logic of Suicide Terrorism
- Understanding Logic: Inductive or Deductive
- Value, Price and Exploitation: The Logic of the Transformation Problem
- The Threat From Logic And Compassion
- The Symbolism of the Costume of Anita in Dog Logic
- What Is The Fundamental Economic Logic Of Minoli s Turnaround
- What Love & Logic Means to Effective Parenting
- Verifying Logic Circuits by Benders Decomposition
- To What Extent Can Logic, Math or Music Be Classified as a Language?
- Value Co Creation And Service Dominant Logic
- Using A Logic Table For More Efficient Research
- Theory Ok Knowledge: Emotion’s Role in Logic and Reason
- The Moral Logic and Growth of Suicide Terrorism
- What Did Aristotle Contribute to the Discipline of Logic
- The Role Of Cognitive Development, Logic, And Emotionality
- Understanding the Logic of Learned Education
- What Logic Was Forwarded by Schwcitzguebel in Support of Tourism
- The Sanctions Debate and the Logic of Choice/Diplomacy
- The Theory of Fuzzy Logic and its Application to Real Estate Valuation
- The Notion of Hyperreality in Frederic Jameson’s Cultural Logic of Late Capitalism
- Use Of Logic To Seduce Women In John Donne’s ‘The Flea’ And Andrew Marvell’s ‘To His Coy Mistress’
- The World Religion Dataset, 1945–2010: Logic, Estimates, and Trends
- Understanding The Logic Between Material And Ideological
- Vocab: Logic and Sounds. Deductive Reasoning
- The Reason And Logic Behind The Law
- The Nature of Logic and Perception
- The Nature and Logic of Capitalism by Heilbroner
- The Political-Economic Logic of World Governance
- The New Growth Theory: Its Logic and Trade Policy Implications
- Chicago (A-D)
- Chicago (N-B)
IvyPanda. (2023, September 26). 76 Logic Essay Topic Ideas & Examples. https://ivypanda.com/essays/topic/logic-essay-topics/
"76 Logic Essay Topic Ideas & Examples." IvyPanda , 26 Sept. 2023, ivypanda.com/essays/topic/logic-essay-topics/.
IvyPanda . (2023) '76 Logic Essay Topic Ideas & Examples'. 26 September.
IvyPanda . 2023. "76 Logic Essay Topic Ideas & Examples." September 26, 2023. https://ivypanda.com/essays/topic/logic-essay-topics/.
1. IvyPanda . "76 Logic Essay Topic Ideas & Examples." September 26, 2023. https://ivypanda.com/essays/topic/logic-essay-topics/.
Bibliography
IvyPanda . "76 Logic Essay Topic Ideas & Examples." September 26, 2023. https://ivypanda.com/essays/topic/logic-essay-topics/.
- Consciousness Ideas
- Experiment Questions
- Scientist Paper Topics
- Mind Research Ideas
- Brain Titles
- Systems Thinking Essay Ideas
- LEGO Paper Topics
- Virtue Essay Ideas
- Urban Planning Research Ideas
- Virtual Reality Topics
- Theology Topics
- Video Game Topics
- Technology Essay Ideas
- Structuralism Essay Topics
- Philosophy of Education Paper Topics
Here is your short essay on logic
Logic can be defined as the systematic study of the methods and principles of correct reasoning or arguments. Logic teaches us the techniques and methods for testing the correctness of different kinds of reasoning. It helps us to detect errors in reasoning by examining and analysing the various common fallacies in reasoning.
Let us examine some of the proposed definitions of logic. Some logicians define logic as an art of reasoning.
According to this view since logic develops the skill or ability to reason correctly, it is an art. As an art, logic provides the methods and technique for testing the correctness or incorrectness of arguments. Music, dance, cooking are instances of art. They aim to develop our skills. In these disciplines practice makes a person more skillful.
A student of logic is required to work out the exercises as a part of his or her learning the subject. So logic is an art. Some define logic as both science and art of reasoning. A science is a systematic study of phenomena which are within the area of its investigation. It undertakes to formulate the laws or principle which holds well without exception.
ADVERTISEMENTS:
Logic is a science as it is a systematic study of the method and principles of correct reasoning. Logic also studies and clarifies the different types of fallacies which are committed in correct reasoning.
A distinction can be drawn between positive and normative sciences. A positive science describes how the facts in its area of investigation actually behave. It arrives at general laws by the methods of observation and experiment.
A normative science, on the other hand, investigates the norms as standard that should be applied. Logic is not a positive science, since it does not report how people actually reason or argue. Since it deals with the standards or principles of correct thinking, it is a normative science.
The use of the word ‘reasoning’ in the above definitions may be misleading. The term ‘reasoning’ may signify either a mental process or a mental product. In logic, we are concerned not with the actual process of reasoning but with arguments, which are its product. A thought, when expressed in language, becomes an argument.
Thus, the statement that logic is an art and at the same time science of reasoning gives important insights into the nature of logic but as a definition, it is not very accurate.
Some logicians claim that logic is the science of laws of thought. But such a view is not correct because all reasoning involves thinking but all thinking cannot be called reasoning. Logic deals with correct reasoning and not with all types of thinking. There are many mental processes such as remembering, imagining, day-dreaming etc. which can be instances of thinking without involving any reasoning. Psychology studies all these phenomena, but logic deals only with reasoning.
Further, logic does not discover any descriptive laws; it formulates the principles of correct reasoning. We can sum up by stating that logic helps one to improve the quality of reasoning. It provides techniques to strengthen and polish the skill of reasoning, and it aims at providing a solid foundation on which one can distinguish between correct and incorrect reasoning.
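The essay stresses that logic supplies techniques for testing the correctness of arguments. As a loose, illustrative sketch (not part of the original essay, and with hypothetical function names), a propositional argument can be tested mechanically by enumerating every truth assignment and checking whether the conclusion holds wherever all premises hold:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Brute-force truth-table test of a propositional argument.

    premises and conclusion are functions from a dict of
    variable -> bool to bool; the argument is valid if the
    conclusion is true under every assignment that makes
    all premises true.
    """
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found
    return True

# Modus ponens: from "p implies q" and "p", infer "q" -- valid.
print(is_valid(
    premises=[lambda e: (not e["p"]) or e["q"], lambda e: e["p"]],
    conclusion=lambda e: e["q"],
    variables=["p", "q"],
))  # True

# Affirming the consequent: from "p implies q" and "q", infer "p" -- a fallacy.
print(is_valid(
    premises=[lambda e: (not e["p"]) or e["q"], lambda e: e["q"]],
    conclusion=lambda e: e["p"],
    variables=["p", "q"],
))  # False
```

The second call shows how a common fallacy fails the test: the assignment p = False, q = True satisfies both premises while falsifying the conclusion.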
How to Structure an Essay | Tips & Templates
Published on September 18, 2020 by Jack Caulfield . Revised on July 23, 2023.
The basic structure of an essay always consists of an introduction , a body , and a conclusion . But for many students, the most difficult part of structuring an essay is deciding how to organize information within the body.
Table of contents
- The basics of essay structure
- Chronological structure
- Compare-and-contrast structure
- Problems-methods-solutions structure
- Signposting to clarify your structure
- Other interesting articles
- Frequently asked questions about essay structure
There are two main things to keep in mind when working on your essay structure: making sure to include the right information in each part, and deciding how you’ll organize the information within the body.
Parts of an essay
The three parts that make up all essays are the introduction, the body, and the conclusion.
Order of information
You’ll also have to consider how to present information within the body. There are a few general principles that can guide you here.
The first is that your argument should move from the simplest claim to the most complex . The body of a good argumentative essay often begins with simple and widely accepted claims, and then moves towards more complex and contentious ones.
For example, you might begin by describing a generally accepted philosophical concept, and then apply it to a new topic. The grounding in the general concept will allow the reader to understand your unique application of it.
The second principle is that background information should appear towards the beginning of your essay . General background is presented in the introduction. If you have additional background to present, this information will usually come at the start of the body.
The third principle is that everything in your essay should be relevant to the thesis . Ask yourself whether each piece of information advances your argument or provides necessary background. And make sure that the text clearly expresses each piece of information’s relevance.
The sections below present several organizational templates for essays: the chronological approach, the compare-and-contrast approach, and the problems-methods-solutions approach.
The chronological approach (sometimes called the cause-and-effect approach) is probably the simplest way to structure an essay. It just means discussing events in the order in which they occurred, discussing how they are related (i.e. the cause and effect involved) as you go.
A chronological approach can be useful when your essay is about a series of events. Don’t rule out other approaches, though—even when the chronological approach is the obvious one, you might be able to bring out more with a different structure.
Explore the tabs below to see a general template and a specific example outline from an essay on the invention of the printing press.
- Thesis statement
- Discussion of event/period
- Consequences
- Importance of topic
- Strong closing statement
- Claim that the printing press marks the end of the Middle Ages
- Background on the low levels of literacy before the printing press
- Thesis statement: The invention of the printing press increased circulation of information in Europe, paving the way for the Reformation
- High levels of illiteracy in medieval Europe
- Literacy and thus knowledge and education were mainly the domain of religious and political elites
- Consequence: this discouraged political and religious change
- Invention of the printing press in 1440 by Johannes Gutenberg
- Implications of the new technology for book production
- Consequence: Rapid spread of the technology and the printing of the Gutenberg Bible
- Trend for translating the Bible into vernacular languages during the years following the printing press’s invention
- Luther’s own translation of the Bible during the Reformation
- Consequence: The large-scale effects the Reformation would have on religion and politics
- Summarize the history described
- Stress the significance of the printing press to the events of this period
Essays with two or more main subjects are often structured around comparing and contrasting . For example, a literary analysis essay might compare two different texts, and an argumentative essay might compare the strengths of different arguments.
There are two main ways of structuring a compare-and-contrast essay: the alternating method, and the block method.
Alternating
In the alternating method, each paragraph compares your subjects in terms of a specific point of comparison. These points of comparison are therefore what defines each paragraph.
The tabs below show a general template for this structure, and a specific example for an essay comparing and contrasting distance learning with traditional classroom learning.
- Synthesis of arguments
- Topical relevance of distance learning in lockdown
- Increasing prevalence of distance learning over the last decade
- Thesis statement: While distance learning has certain advantages, it introduces multiple new accessibility issues that must be addressed for it to be as effective as classroom learning
- Classroom learning: Ease of identifying difficulties and privately discussing them
- Distance learning: Difficulty of noticing and unobtrusively helping
- Classroom learning: Difficulties accessing the classroom (disability, distance travelled from home)
- Distance learning: Difficulties with online work (lack of tech literacy, unreliable connection, distractions)
- Classroom learning: Tends to encourage personal engagement among students and with teacher, more relaxed social environment
- Distance learning: Greater ability to reach out to teacher privately
- Sum up, emphasize that distance learning introduces more difficulties than it solves
- Stress the importance of addressing issues with distance learning as it becomes increasingly common
- Distance learning may prove to be the future, but it still has a long way to go
In the block method, each subject is covered all in one go, potentially across multiple paragraphs. For example, you might write two paragraphs about your first subject and then two about your second subject, making comparisons back to the first.
The tabs again show a general template, followed by another essay on distance learning, this time with the body structured in blocks.
- Point 1 (compare)
- Point 2 (compare)
- Point 3 (compare)
- Point 4 (compare)
- Advantages: Flexibility, accessibility
- Disadvantages: Discomfort, challenges for those with poor internet or tech literacy
- Advantages: Potential for teacher to discuss issues with a student in a separate private call
- Disadvantages: Difficulty of identifying struggling students and aiding them unobtrusively, lack of personal interaction among students
- Advantages: More accessible to those with low tech literacy, equality of all sharing one learning environment
- Disadvantages: Students must live close enough to attend, commutes may vary, classrooms not always accessible for disabled students
- Advantages: Ease of picking up on signs a student is struggling, more personal interaction among students
- Disadvantages: May be harder for students to approach teacher privately in person to raise issues
An essay that concerns a specific problem (practical or theoretical) may be structured according to the problems-methods-solutions approach.
This is just what it sounds like: You define the problem, characterize a method or theory that may solve it, and finally analyze the problem, using this method or theory to arrive at a solution. If the problem is theoretical, the solution might be the analysis you present in the essay itself; otherwise, you might just present a proposed solution.
The tabs below show a template for this structure and an example outline for an essay about the problem of fake news.
- Introduce the problem
- Provide background
- Describe your approach to solving it
- Define the problem precisely
- Describe why it’s important
- Indicate previous approaches to the problem
- Present your new approach, and why it’s better
- Apply the new method or theory to the problem
- Indicate the solution you arrive at by doing so
- Assess (potential or actual) effectiveness of solution
- Describe the implications
- Problem: The growth of “fake news” online
- Prevalence of polarized/conspiracy-focused news sources online
- Thesis statement: Rather than attempting to stamp out online fake news through social media moderation, an effective approach to combating it must work with educational institutions to improve media literacy
- Definition: Deliberate disinformation designed to spread virally online
- Popularization of the term, growth of the phenomenon
- Previous approaches: Labeling and moderation on social media platforms
- Critique: This approach feeds conspiracies; the real solution is to improve media literacy so users can better identify fake news
- Greater emphasis should be placed on media literacy education in schools
- This allows people to assess news sources independently, rather than just being told which ones to trust
- This is a long-term solution but could be highly effective
- It would require significant organization and investment, but would equip people to judge news sources more effectively
- Rather than trying to contain the spread of fake news, we must teach the next generation not to fall for it
Signposting means guiding the reader through your essay with language that describes or hints at the structure of what follows. It can help you clarify your structure for yourself as well as helping your reader follow your ideas.
The essay overview
In longer essays whose body is split into multiple named sections, the introduction often ends with an overview of the rest of the essay. This gives a brief description of the main idea or argument of each section.
The overview allows the reader to immediately understand what will be covered in the essay and in what order. Though it describes what comes later in the text, it is generally written in the present tense.
Transitions
Transition words and phrases are used throughout all good essays to link together different ideas. They help guide the reader through your text, and an essay that uses them effectively will be much easier to follow.
Various different relationships can be expressed by transition words, as shown in this example.
Because Hitler failed to respond to the British ultimatum, France and the UK declared war on Germany. Although it was an outcome the Allies had hoped to avoid, they were prepared to back up their ultimatum in order to combat the existential threat posed by the Third Reich.
Transition sentences may be included to transition between different paragraphs or sections of an essay. A good transition sentence moves the reader on to the next topic while indicating how it relates to the previous one.
… Distance learning, then, seems to improve accessibility in some ways while representing a step backwards in others.
However , considering the issue of personal interaction among students presents a different picture.
The structure of an essay is divided into an introduction that presents your topic and thesis statement , a body containing your in-depth analysis and arguments, and a conclusion wrapping up your ideas.
The structure of the body is flexible, but you should always spend some time thinking about how you can organize your essay to best serve your ideas.
An essay isn’t just a loose collection of facts and ideas. Instead, it should be centered on an overarching argument (summarized in your thesis statement ) that every part of the essay relates to.
The way you structure your essay is crucial to presenting your argument coherently. A well-structured essay helps your reader follow the logic of your ideas and understand your overall point.
Comparisons in essays are generally structured in one of two ways:
- The alternating method, where you compare your subjects side by side according to one specific aspect at a time.
- The block method, where you cover each subject separately in its entirety.
It’s also possible to combine both methods, for example by writing a full paragraph on each of your topics and then a final paragraph contrasting the two according to a specific metric.
You should try to follow your outline as you write your essay . However, if your ideas change or it becomes clear that your structure could be better, it’s okay to depart from your essay outline . Just make sure you know why you’re doing so.
Caulfield, J. (2023, July 23). How to Structure an Essay | Tips & Templates. Scribbr. Retrieved November 15, 2023, from https://www.scribbr.com/academic-essay/essay-structure/
Computer Science > Computation and Language
Title: Language Models Can Be Logical Solvers
Abstract: Logical reasoning is a fundamental aspect of human intelligence and a key component of tasks like problem-solving and decision-making. Recent advancements have enabled Large Language Models (LLMs) to potentially exhibit reasoning capabilities, but complex logical reasoning remains a challenge. The state-of-the-art, solver-augmented language models use LLMs to parse natural language logical questions into symbolic representations first and then adopt external logical solvers to take in the symbolic representations and output the answers. Despite their impressive performance, any parsing error will inevitably cause the execution of the external logical solver to fail, leaving the logical question unanswered. In this paper, we introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers and bypasses parsing errors by learning strict adherence to solver syntax and grammar. LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers. Experimental results on two public deductive reasoning datasets demonstrate that LoGiPT outperforms state-of-the-art solver-augmented LMs and few-shot prompting methods on competitive LLMs like ChatGPT or GPT-4.
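The "deductive solvers" the abstract refers to can be pictured, very loosely, as engines that repeatedly fire symbolic rules until no new facts emerge. The sketch below is only an illustration of that idea (it is not LoGiPT or any solver named in the paper); it forward-chains over hypothetical Horn-style rules:

```python
def forward_chain(facts, rules):
    """Derive the closure of a fact set under Horn-style rules.

    Each rule is a pair (premises, conclusion): when every premise
    is already derived, the conclusion is added. Firing repeats
    until a full pass adds nothing new.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# The classic syllogism encoded as a single rule.
rules = [({"man(socrates)"}, "mortal(socrates)")]
closure = forward_chain({"man(socrates)"}, rules)
print(sorted(closure))
```

Running this derives mortal(socrates) from man(socrates), mirroring the kind of step-by-step reasoning trace that LoGiPT is trained to emit in natural language.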
Use Mastering Assistant in Logic Pro
With Mastering Assistant in Logic Pro for Mac and iPad, make your final mix sound great on any playback device.
Insert Mastering Assistant on the stereo output channel strip when you have finished your final mix down. After Mastering Assistant analyzes your audio and applies processing to your mix, you can adjust individual parameters like changing presets, adjusting EQ, and more. When you’re satisfied with the results, bounce your mix.
Insert Mastering Assistant
Insert Mastering Assistant on the stereo output channel of your project, after any other plug-ins you might already have inserted on the stereo output channel:
- On Mac, click Mastering, which is the last Audio Effect slot on the stereo output channel strip, or choose Mix > Mastering Assistant.
- On iPad, tap the Mixer button, tap the Setup button in the toolbar of the Mixer, then tap Mastering at the bottom of the Audio Effect slot of the stereo output channel strip.

Mastering Assistant immediately analyzes your project, including all software instruments and effects. Based on its analysis, Mastering Assistant might do any of the following to your mix:
- Apply corrective EQ
- Adjust the loudness
- Adjust the overall stereo spread
Play your project to listen to how Mastering Assistant affects your mix. To turn Mastering Assistant on and off, click or tap Bypass in the Mastering Assistant window (or toggle it on and off in the Audio Effect slot).
If you set locators, you can have Mastering Assistant analyze just that section of your project. For example, you might want Mastering Assistant to analyze just the loudest part of your project.
Adjust Mastering Assistant
You can adjust a variety of parameters in Mastering Assistant after it initially analyzes the audio in your project. You can click Reanalyze at any point after adjusting these parameters if you don’t like the changes you’ve made.
Choose a different Character preset
When you initially insert Mastering Assistant, it uses the default Character preset called “Transparent” to process your mix. Transparent is suitable for most genres of music, and is based on classic analog hardware mastering signal paths.
If you’re using Logic Pro on a Mac with Apple silicon or Logic Pro for iPad, you can choose different Character presets that have different sonic characteristics:
- Open the Mastering Assistant plug-in window.
- For a clean yet punchy sound, choose Clean, which is designed for EDM and acoustic music.
- For deep low end and refined high end, choose Valve, which is a good option for both acoustic music and hip-hop.
- For an aggressive sound with a subtle emphasis on mid-range frequencies, choose Punch, which is ideally suited for rock music.
Adjust the EQ
Mastering Assistant applies corrective EQ based on its analysis of your mix. You can adjust the overall level of EQ and make manual adjustments to the EQ curve:
- To adjust the overall level of EQ applied by Mastering Assistant, drag the Auto EQ slider up or down. The EQ curve in the window changes depending on how you set the Auto EQ slider.

The spectrum analyzer under the EQ curve shows the level of the frequencies across the entire frequency spectrum in real time as you play your track.
Adjust loudness and other dynamics
Mastering Assistant adjusts the loudness to optimize the perceived volume of your mix according to industry standards. You can also manually adjust the Loudness knob and other dynamics parameters:
- Turn the Loudness knob to increase or decrease loudness. When the knob is in the center position, the output of your mix registers at around –14 LUFS-I (Loudness Units, relative to Full Scale, Integrated), which is typically the target loudness for many streaming platforms. Increasing the loudness can reduce dynamics in your mix.

- To add crispness to your mix, try turning on Excite. Excite adds saturation to the upper-mid range frequencies of your mix.
The Dynamics section also has meters and other tools that can help optimize the loudness of your mix.
Adjust the stereo width of your mix
Mastering Assistant analyzes the stereo width of your mix and sets the Width knob accordingly. To manually increase or decrease the width of your mix, turn the Width knob. Turn the knob all the way to the left to create a mono mix that you can use to check mono compatibility.
You can use the Correlation meter to check the phase relationship of your mix. The meter should read above zero for good mono compatibility.
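For intuition about what the Correlation meter measures (an illustrative sketch, not Apple's implementation), the reading can be approximated as a normalized cross-correlation of the left and right channel buffers:

```python
import math

def stereo_correlation(left, right):
    """Approximate a correlation-meter reading for two channel buffers.

    Returns about +1.0 when the channels are identical (mono-safe),
    about -1.0 when one is a polarity-inverted copy of the other
    (heavy cancellation when summed to mono), and near 0 when the
    channels are unrelated.
    """
    energy = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    if energy == 0:
        return 0.0  # silence: nothing to correlate
    return sum(l * r for l, r in zip(left, right)) / energy

samples = [0.5, -0.2, 0.8, -0.1]
print(stereo_correlation(samples, samples))                # ~1.0, mono compatible
print(stereo_correlation(samples, [-s for s in samples]))  # ~-1.0, cancels in mono
```

A reading that hovers below zero warns that summing the channels to mono will cancel significant signal, which is why the article suggests keeping the meter above zero.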
Bounce your final mix
Before you bounce your mix, check this list so you get the best results:
- If you’ve made any adjustments to your mix after Mastering Assistant’s initial analysis, including altering the volume or EQ on individual tracks, click Reanalyze.
- Turn off Loudness Compensation.
ABAP RAP: Defaulting action parameters with complex business logic in a Fiori Elements application

- Default action parameter with a constant value.

```abap
@EndUserText.label: 'Abstract entity for Supplier'
@Metadata.allowExtensions: true
define root abstract entity zrk_a_supplier
{
  @UI.defaultValue: 'S000000003'
  ToBesupplier : zrk_sup_no;
}
```

- Default action parameter with complex business logic: This is the primary focus of this blog post. The true strength of ABAP RAP defaulting actions lies in their ability to handle complex business logic. Developers can embed intricate algorithms, conditions, and calculations within the defaulting actions, allowing default values to be determined dynamically. This ensures that the Fiori Elements application not only provides a streamlined user experience but also adheres to the unique requirements of the business. It typically involves a combination of conditional statements, calculations, and data queries, for example defaulting a sales price based on historical data, currency exchange rates, and current market trends.
Business example:
The business case is to convert a purchase requisition into a purchase contract. There could be different cases.
- Some of the fields in a PR are optional but mandatory to create a PC and hence to be prefilled. Ex. Validity dates
- Some of the fields can be modified before creating a PC. Ex. Description of a PC
- Some of the fields require a business logic to determine. Ex: Determination of source of supply if the supplier is not filled in PR
Implementation:
The action definition needs to be enriched with a function in the base behavior definition and the function needs to be consumed in the projection layer. Below are the detailed steps.
- Define the action in the base behavior with a function. Note: the function name must start with GetDefaultsFor followed by the action name.

```abap
action ( features : instance, precheck ) Convert_Into_PC
  parameter ZRK_A_ActionParam_PR_To_PC
  result [1] $self
{
  default function GetDefaultsForConvert_Into_PC;
}
```
- The quick fix assistant can be used to create the method definition and implementation.

```abap
METHODS convert_into_pc FOR MODIFY
  IMPORTING keys FOR ACTION _prhead~convert_into_pc
  RESULT result.

METHODS GetDefaultsForConvert_Into_PC FOR READ
  IMPORTING keys FOR FUNCTION _PRHead~GetDefaultsForConvert_Into_PC
  RESULT result.
```

- Enrich the method implementation with business logic in the function to default the values. Note: if it is a create or create by association action, then %cid needs to be used instead of %tky.

```abap
METHOD GetDefaultsForConvert_Into_PC.
  " Read the requisition header
  READ ENTITIES OF zrk_i_pur_req_h IN LOCAL MODE
    ENTITY _PRHead
    ALL FIELDS WITH CORRESPONDING #( keys )
    RESULT DATA(lt_pur_req).
  CHECK lt_pur_req IS NOT INITIAL.

  " Read the requisition item
  READ ENTITIES OF zrk_i_pur_req_h IN LOCAL MODE
    ENTITY _PRHead BY \_PRItem
    ALL FIELDS WITH CORRESPONDING #( keys )
    RESULT DATA(lt_pur_req_item).
  CHECK lt_pur_req_item IS NOT INITIAL.

  LOOP AT lt_pur_req ASSIGNING FIELD-SYMBOL(<fs_pur_req>).
    APPEND INITIAL LINE TO result ASSIGNING FIELD-SYMBOL(<fs_result>).
    " If it is a create operation, then %cid needs to be used instead of %tky
    <fs_result>-%tky = <fs_pur_req>-%tky.
    <fs_result>-%param-description = |Created from { <fs_pur_req>-ObjectId }|.
    <fs_result>-%param-buyer = COND #( WHEN <fs_pur_req>-Buyer IS NOT INITIAL
                                       THEN <fs_pur_req>-Buyer
                                       ELSE sy-uname ).
    " Default the company code from the user attributes in the Org structure
    <fs_result>-%param-Company_code = zrk_cl_mng_pur_con=>determine_company_code( ).
    " Calculate the validity dates
    <fs_result>-%param-valid_from = cl_abap_context_info=>get_system_date( ).
    <fs_result>-%param-valid_to   = cl_abap_context_info=>get_system_date( ) + 365.
    " Take the first supplier from the requisition item
    LOOP AT lt_pur_req_item ASSIGNING FIELD-SYMBOL(<fs_item>) WHERE Supplier IS NOT INITIAL.
      <fs_result>-%param-supplier = <fs_item>-Supplier.
      EXIT.
    ENDLOOP.
    " If the supplier is not assigned to any item in PR, then determine from source of supply
    IF <fs_result>-%param-supplier IS INITIAL.
      <fs_result>-%param-supplier = zrk_cl_mng_pur_con=>determine_supplier_material(
        iv_material = VALUE #( lt_pur_req_item[ 1 ]-PartNo ) ).
    ENDIF.
  ENDLOOP.
ENDMETHOD.
```
- Consume the function in the projection behavior for UI usage.

```abap
use function GetDefaultsForConvert_Into_PC;
```
- Proceed with the action implementation to be executed based on the inputs. For more details on the implementation approach, please refer to the earlier blog.

- [Optional] Notice that the annotation below is already mapped in the metadata, which developers were previously supposed to write manually in BAS/WebIDE.

```xml
<Annotations Target="SAP__self.Convert_Into_PC(SAP__self.PRHeadType)">
  <Annotation Term="SAP__core.OperationAvailable"
              Path="_it/__OperationControl/Convert_Into_PC"/>
  <Annotation Term="SAP__common.DefaultValuesFunction"
              String="com.sap.gateway.srvd.zrk_ui_pur_req.v0001.GetDefaultsForConvert_Into_PC"/>
</Annotations>
```
Additional notes:
- If there are multiple records selected in a list report page and different default values are determined, then the framework ignores the field and it will be blank.
- The function name must start with GetDefaultsFor and followed by action name.
- If it is a create or create by association action, then %cid needs to be used instead of %tky
- This feature is released as part of 2311 BTP release.
Conclusion:
ABAP RAP Defaulting action parameters with complex business logic empower Fiori Elements applications to be not just user-friendly but also intelligent and adaptable to the dynamic nature of business processes. The ability to customize defaulting actions according to specific business needs ensures a more seamless and efficient user experience.
References:
https://help.sap.com/docs/abap-cloud/abap-rap/operation-defaulting
https://community.sap.com/topics/abap/rap
For more similar content, Follow me on community or LinkedIn
7 Different Ways to Change the Audio Volume in Logic Pro
Wield varied methods to change the volume of your audio in Logic Pro.
While changing the volume of audio and software instruments in Logic Pro is a straightforward task, different methods are better suited to managing specific audio contexts. Some ways of altering the volume may even complicate your mixing process down the line, so make sure you're applying the right method to provide creative freedom rather than limitations.
1. Volume Faders
In Logic Pro, you can find horizontal volume faders in the track header of each track, and vertical faders in the left and right channel strips found in the inspector section ( I ) on the left.
The horizontal faders in the track header are only visible if you are sufficiently zoomed in (Cmd + Up/Down Arrow). However, it's a good idea to check your left inspector channel strip, which should show the selected track, to read the exact dB level.
The right inspector channel strip will often show the Stereo Output channel with its corresponding fader. It's best to leave this fader untouched; use it to monitor the overall dB level of your output and mix. For more mixing tips, look into some of the best practices to improve your production skills .
If you've set up a send that routes a track to a bus/aux track, click on the Sends in the left channel strip, and you can then adjust its volume in the right inspector channel strip.
These faders represent the go-to method to quickly and precisely alter the level of all your audio elements.
Another way you can do this in a streamlined manner is by pressing X to show the mixing window. This enables you to view all your channel strips, so you can change their volume while reading their exact dB levels.
Remember to monitor the levels of your tracks to avoid clipping. If you're new to Logic, look into the beginner's guide to Logic Pro .
2. Region Inspector
The Region Inspector can be used in Logic Pro to change the gain of audio regions. This makes it a great tool for adjusting the level of individual/selected audio regions rather than a universal volume change to a given audio track.
To do so, press the arrow next to Region in the inspector window, and double-click on the field next to Gain in the drop-down menu. Then, input your desired positive/negative dB value.
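As background for the Gain field (an aside, not from the article), a decibel value maps to a linear amplitude multiplier as 10^(dB/20), which is why +6 dB roughly doubles a region's amplitude and -6 dB roughly halves it:

```python
def db_to_amplitude(db):
    """Convert a gain change in decibels to a linear amplitude multiplier."""
    return 10 ** (db / 20)

print(db_to_amplitude(0))             # 1.0 (no change)
print(round(db_to_amplitude(6), 3))   # ~1.995 (roughly double)
print(round(db_to_amplitude(-6), 3))  # ~0.501 (roughly half)
```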
You can also press the More option in the Region Inspector to quickly fade audio regions or even reverse your audio in Logic Pro.
The level of MIDI regions cannot be altered via the Region Inspector in the same way. One workaround is to convert your MIDI regions into audio regions by bouncing them in place ( Ctrl + B ). Another is to open the Piano Roll Editor and change the velocity of your MIDI notes.
3. MIDI Velocity Values
Software instruments determine how loudly or softly to play a MIDI note from its velocity value, which often also shapes the note's articulation.
Double-click on a MIDI region in Logic Pro to open up the Piano Roll Editor ( P ). You can then select single/multiple MIDI notes and alter their velocity using the Velocity slider in the bottom-left.
One way you can add life to your MIDI instruments is to randomize the velocity of your MIDI notes within a specified range. Look into the best MIDI editing tools in Logic Pro for more ways to improve your MIDI regions.
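To make the randomization idea concrete, here's a small sketch in plain Python (not Logic's actual Scripter environment) that offsets each note's velocity by a random amount within a range, clamped to MIDI's valid 1-127 bounds:

```python
import random

def randomize_velocities(velocities, spread=10, seed=None):
    """Nudge each MIDI velocity by up to +/-spread, clamped to the 1-127 range."""
    rng = random.Random(seed)
    return [max(1, min(127, v + rng.randint(-spread, spread))) for v in velocities]

rigid_part = [100, 100, 100, 100]  # machine-like repetition
print(randomize_velocities(rigid_part, spread=8, seed=1))
```

Keeping the spread small preserves the part's dynamics while breaking up the robotic sameness.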
4. Track and Region Volume Automation
Track and region volume automation provides you with tools to dynamically change the volume of a given track or region over time. Learn how to use automation to make the most of this essential tool.
To get started, press A to enable automation mode, and make sure Volume is selected in the track header (next to the dB level). Track automation is enabled by default, which lets you automate changes across the entire track. Alternatively, press the blue Track button to switch it to Region ; this confines any automation changes you make to that region.
Once track automation exists, your volume fader locks on to the automation curve; any manual fader edit you make afterward is overridden on playback. Region automation can be beneficial because it restricts this lock-on effect to specific regions rather than the entire track.
One way to alter the volume of a track even after track automation is set is to hover over the dB value box in the track header until Trim appears. Then drag up or down to apply a uniform volume change across all your automated levels and tweaks.
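Conceptually, Trim applies a single uniform dB offset to every automation point, preserving the shape of the moves you drew. A minimal sketch of that idea (illustrative only; Logic handles this internally):

```python
def trim_automation(points, offset_db):
    """Shift every (time, dB) automation point by one uniform dB offset,
    leaving the relative shape of the curve untouched."""
    return [(time, level + offset_db) for time, level in points]

# A fade-in drawn as track automation: times in bars, levels in dB
curve = [(1, -12.0), (2, -6.0), (3, 0.0)]
print(trim_automation(curve, -3.0))  # [(1, -15.0), (2, -9.0), (3, -3.0)]
```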
5. Gain Plugins
An alternative to relying entirely on volume faders and volume automation is Logic's stock Gain plugin (or another gain tool). Just like a fader, the gain dial in such plugins lets you increase or reduce the dB level of a given track.
The reason for doing so is to sidestep the fader lock-in that occurs once you apply volume automation to a track.
Automate the gain of your Gain plugin instead of your track's volume. This lets you keep adjusting your volume faders freely throughout the mixing process.
6. In-Built Plugin Output Controls
Many plugins have an output slider that affects the level of the track they're inserted on. EQs and compressors are the plugins you'll generally want to use this feature on.
The reason is that, in most cases, you find a good level for your track(s) before the use of such plugins. In other words, EQs and compressors aren't used as a tool to get the volume level you're after; they refine and tighten up your sound. As a result, you want to get the same volume level on a track pre- and post-compression and equalization.
If your EQ cuts reduce the level of your track by 2 dB, push up the output/gain option by 2 dB. If your compressor applies 3 dB of gain reduction, push up the make-up gain/output control by 3 dB. For more information on these plugins, look into how to use EQs and how to use compression plugins.
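The arithmetic behind this is simple because dB level changes from plugins in series add together. A hedged sketch of the bookkeeping (general gain-staging math, not a Logic API):

```python
def net_gain_db(*stage_changes_db):
    """dB level changes from processing stages in series simply sum."""
    return sum(stage_changes_db)

def compensation_db(*stage_changes_db):
    """Output gain needed to make the whole chain level-neutral."""
    return -net_gain_db(*stage_changes_db)

# An EQ cutting 2 dB followed by a compressor with 3 dB of gain reduction:
print(net_gain_db(-2.0, -3.0))       # the chain is 5 dB quieter overall
print(compensation_db(-2.0, -3.0))   # +5 dB of make-up restores unity level
```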
7. Master Volume Slider
As with the Stereo Output fader, there often isn't much reason to raise the Master Volume slider (found in the top-right above the workspace area, or at the far right of the mixing window). Doing so can degrade the overall sonic quality of your work.
However, you may want to max out the Master Volume slider and Stereo Output if your laptop or device has a problem with its audio output and you happen to be without headphones or speakers. Conversely, you may need to drop the volume rapidly to avoid clipping and screeching feedback issues.
Just remember to reset ( Alt + click ) the sliders at the end, as such changes can ruin a mix and subsequent mastering attempts.
Master the Balance of Levels in Logic Pro
Changing the volume in Logic Pro is simple; mastering the different methods takes time. Use the volume faders in the track headers, inspector channel strips, and mixing window to monitor and make quick changes to the dB levels. Go for the Region Inspector to edit the level of individual audio regions, and the Piano Roll Editor to change the velocity of MIDI notes.
Use gain plugins to get around the limitations of track volume automation, and plugin output controls to balance levels pre- and post-effects. Add in the Master Volume slider for emergencies, and you have a volume tool for every situation in Logic Pro.

7 Different Ways to Change the Audio Volume in Logic Pro. Wield varied methods to change the volume of your audio in Logic Pro.