A datum on the reception of the 'Verifiability Criterion of Meaning'

One of my pet views about logical empiricism is that the verifiability criterion of meaning, for those who actually espoused some version of it (as opposed to attributing it to someone else), often does not mean exactly what the average professional philosopher in 2011 thinks it means.

I just stumbled across a new data point that suggests the reception of the verifiability criterion was more accurate than the straw-man version popular today. Here's Susan Stebbing, in 1933's "Logical Positivism and Analysis":
A proposition is understood only if it is verifiable; it is verifiable if, and only if, we know the conditions under which the proposition would be true, and the conditions under which it is false. (p. 13)
Just as Carnap says in "Overcoming Metaphysics through Logical Analysis of Language," the meaning of a sentence is given by its truth-conditions. (One cannot be a complete revolutionary about the verifiability principle: the texts rule that out. Discussions of observations appear in treatments of the verifiability principle--these 'truth-conditions' are often further articulated as something like 'sets of possible experiences,' where 'possible' is taken very broadly.)


A new kind of semantics for confusion

(There's a decent amount of set-up/review in this post, before I reach the main point -- the new idea comes in the 4th paragraph.) I took a grad seminar with Joe Camp on the topic of confusion 8 years ago, and have been thinking on-and-off about it ever since. Camp illustrates the phenomenon with the example of Fred, who buys an ant colony. At the time of purchase, Fred is told that there is one big ant in the colony, and a bunch of smaller ones. Unbeknownst to Fred, however, there are actually two big ants in his colony (we'll call them 'Ant A' and 'Ant B'). Fred says to himself "I'm going to call the big ant in my colony 'Charley'." Fred then goes on to say various sentences including the word 'Charley,' and to make various inferences involving such sentences.

The questions that most interest me about confusion concern truth and consequence: 1. What truth-value, if any, should we assign to such sentences? (Think about 'Charley is an ant,' 'Charley is not an ant,' or 'Charley=Charley'.) 2. How should we make sense of logical consequence in languages containing 'Charley' and similar words? (Think about: 'Everything is an ant; thus Charley is an ant,' 'Charley is a big ant; thus there is a big ant,' and 'Charley is an ant; thus Charley exists'.)

For people who don't want to say that every atomic sentence containing 'Charley' is false, or truth-valueless, the (apparently?) most common option is a supervaluational strategy: 'Charley is an ant' is true, because it is true on every disambiguation: 'Ant A is an ant' is true, and so is 'Ant B is an ant'. The same goes for 'Charley=Charley.' For logical consequence, there are two ways we could go, 'local' or 'global,' in the jargon current in recent work on the logic of vagueness. The 'global' option: if every premise is true on every complete disambiguation, then the conclusion is true on every complete disambiguation. The 'local' option: for every complete disambiguation, if all the premises are true on that disambiguation, then so is the conclusion. (If an argument is locally valid, then it's globally valid, but not conversely.)
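The supervaluational picture above is easy to prototype. Here is a minimal sketch in Python of Fred's situation: the two "complete disambiguations" send 'Charley' to Ant A or to Ant B, supertruth is truth on every disambiguation, and the local and global validity tests are implemented exactly as defined in the paragraph above. All the names (`DOMAIN`, `true_on`, the hard-coded sentence strings) are my own illustrative choices, not part of any published formalism.

```python
# Toy supervaluational semantics for Fred's confused name 'Charley'.
# A "complete disambiguation" maps 'Charley' to one candidate referent.

DOMAIN = {"AntA", "AntB", "SmallAnt1"}   # Fred's actual colony
ANTS = {"AntA", "AntB", "SmallAnt1"}     # extension of 'is an ant'
BIG = {"AntA", "AntB"}                   # extension of 'is a big ant'

DISAMBIGUATIONS = ["AntA", "AntB"]       # candidate referents for 'Charley'

def true_on(sentence, charley):
    """Evaluate a few hard-coded sentences, given a disambiguation of 'Charley'."""
    if sentence == "Charley is an ant":
        return charley in ANTS
    if sentence == "Charley is a big ant":
        return charley in BIG
    if sentence == "Charley = Charley":
        return charley == charley
    if sentence == "Everything is an ant":
        return all(x in ANTS for x in DOMAIN)
    if sentence == "There is a big ant":
        return any(x in BIG for x in DOMAIN)
    raise ValueError(sentence)

def supertrue(sentence):
    # True on every complete disambiguation.
    return all(true_on(sentence, d) for d in DISAMBIGUATIONS)

def locally_valid(premises, conclusion):
    # On every disambiguation: if all premises are true there, so is the conclusion.
    return all(
        true_on(conclusion, d) or not all(true_on(p, d) for p in premises)
        for d in DISAMBIGUATIONS
    )

def globally_valid(premises, conclusion):
    # If every premise is supertrue, the conclusion is supertrue.
    return supertrue(conclusion) or not all(supertrue(p) for p in premises)

print(supertrue("Charley is an ant"))    # true on both disambiguations
print(locally_valid(["Charley is a big ant"], "There is a big ant"))
print(globally_valid(["Everything is an ant"], "Charley is an ant"))
```

On this toy model 'Charley is an ant' and 'Charley=Charley' come out supertrue, matching the paragraph above; the local/global contrast only shows up in richer languages than this five-sentence fragment.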

So here's my new idea: what if, instead of using supervaluations (which were initially introduced in the 1960s to handle empty names), we used something like the other main contender for the model theory of empty names, usually called 'inner domain-outer domain' semantics? In this semantics, extra entities are added to the 'outer domain,' and these serve as the referents for empty names (such as 'Santa Claus'). But the quantifiers range only over the 'inner domain,' which contains all and only the objects that exist. So 'Santa exists' will be false.
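The inner/outer-domain setup can be sketched just as compactly. In this hypothetical model (the identifiers `INNER`, `OUTER`, `REFERENT` are my own), every name gets a referent from the outer domain, but the quantifiers and the existence predicate see only the inner domain, so 'Santa exists' comes out false:

```python
# Minimal inner domain / outer domain model for empty names.
# Quantifiers range over INNER only; OUTER supplies referents.

INNER = {"Obama", "AntA"}            # the things that exist
OUTER = INNER | {"Santa"}            # everything nameable

REFERENT = {"Santa Claus": "Santa", "Obama": "Obama"}

def exists(name):
    # 'N exists' is true iff the referent of N is in the inner domain.
    return REFERENT[name] in INNER

def something_satisfies(pred):
    # The existential quantifier ranges over the inner domain only.
    return any(pred(x) for x in INNER)

print(exists("Santa Claus"))                        # False
print(exists("Obama"))                              # True
print(something_satisfies(lambda x: x == "Santa"))  # False: not in inner domain
```

Note that 'Santa Claus' still has a referent, so atomic sentences about Santa can be evaluated rather than left truth-valueless; only quantified and existence claims treat him as nothing.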

The idea to try an inner domain-outer domain semantics for confused terms resulted from re-reading Krista Lawlor's "A Notional Worlds Approach to Confusion." She objects to the supervaluational approach, because "[i]n supervaluing, we give up on understanding the confused belief." Why?
Fred’s ontological commitments involve one big ant (‘Charley’), not two. Our assignment of truth and falsity to Fred’s beliefs rests on our ontology, not Fred’s. We evaluate Fred’s beliefs for how far they might lead us astray, by our lights. In a very clear sense we give up on understanding Fred, in favor of using him, we might say, as an instrument (and a not-too-well-calibrated one at that), for detecting the facts as we understand them. (p. 153)
I don't know yet whether I agree with this argument. But I do think it's a plausible argument, and thus it is worth trying to devise a type of formal semantics that respects the idea behind it. If there were one object in the outer domain serving as the referent of 'Charley,' perhaps we would not have 'given up on understanding' Fred's belief.

The obvious next question is: what are these individuals in the outer domain? Which one is the referent of 'Charley'? The short answer is 'I don't know,' but I think there have to be some constraints on this individual, related to Ant A's and Ant B's properties (this would be a difference from the old inner/outer domain semantics for empty names -- there, the inner-domain individuals do not themselves impose constraints on the outer-domain individuals). Ruth Garrett Millikan describes confused concepts as "amalgams" of distinct concepts; so could we somehow make the individual in the outer domain associated with the name 'Charley' an amalgam of Ant A and Ant B? But what would such an amalgamated individual be (or: how should we model such an amalgamated individual in this formal semantics)? First-thought candidates include the set {Ant A, Ant B} and the mereological fusion of Ant A and Ant B, but neither of those seems obviously right. Obviously, I'm just at the very beginning of thinking about this, and any thoughts would be very appreciated.
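Just to see what the set-candidate looks like on paper, here is one way to prototype the amalgam proposal: put a single new object in the outer domain for 'Charley,' built from Ant A and Ant B, and constrain its predicate-satisfaction by its components. The 'satisfies P iff both components do' rule below is only one candidate constraint, chosen purely for illustration; nothing above settles which constraint (if any) is right, and all identifiers here are my own.

```python
# Sketch: an 'amalgam' individual in the outer domain for 'Charley',
# modeled as the set {Ant A, Ant B}, with its predicate-satisfaction
# constrained by the inner-domain individuals it amalgamates.

INNER = {"AntA", "AntB", "SmallAnt1"}    # Fred's actual ants
CHARLEY = frozenset({"AntA", "AntB"})    # the amalgam (set-candidate)
OUTER = INNER | {CHARLEY}

EXTENSIONS = {
    "is_an_ant": {"AntA", "AntB", "SmallAnt1"},
    "is_big": {"AntA", "AntB"},
}

def satisfies(obj, predicate):
    if isinstance(obj, frozenset):
        # Candidate constraint: the amalgam satisfies P iff every component does.
        return all(part in EXTENSIONS[predicate] for part in obj)
    return obj in EXTENSIONS[predicate]

def exists(obj):
    # Quantifiers and existence claims see the inner domain only.
    return obj in INNER

print(satisfies(CHARLEY, "is_an_ant"))   # True: both components are ants
print(exists(CHARLEY))                   # False: the amalgam is outer-domain only
```

On this toy model 'Charley is an ant' is true in virtue of one referent, not by quantifying over disambiguations, which is at least in the spirit of Lawlor's complaint; whether the set is the right model of a Millikan-style amalgam is exactly the open question.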


Carnap's FBI files

Carnap's FBI file is available on the web:


Thanks to Chris Wüthrich for the tip.