Obscure and Confused Ideas: idiosyncratic perspectives on philosophy of science, its history, and related issues in logic
<p>
<b>One argument for free logic over classical logic</b> (2021-05-28)<br>
I have just started working on an 'opinionated introduction' to <a href="https://plato.stanford.edu/entries/logic-free/" target="_blank">free logic</a> for the Cambridge Elements series in Philosophy and Logic. In classical logic, for a character or string of characters to be a name, it must refer to exactly one individual. Free Logic relaxes this assumption. Names can be 'empty' in Free Logic.
<p>
I'm currently working on the section that motivates Free Logic. I wrote up what I think are the 'standard' reasons in favor of Free Logic, and then added the following one, which I'm not 100% sure about yet. I don't think I've seen it anywhere else before, but please correct me if someone else has made this argument already!
<p>
Suppose <i>s</i> is a string of characters with all the grammatical or syntactic markers of a name of an individual. We explicitly leave it open whether <i>s</i> refers to exactly one individual, i.e. we leave it open whether <i>s</i> is like 'Angela Merkel', or is instead like 'Zeus'. For the classical logician, 'Zeus' cannot be a name. So the classical logician holds that some strings of characters with the form <i>s</i>=<i>s</i> are not true (e.g., ‘Zeus = Zeus’), while other strings with the same form are true (e.g., ‘Angela Merkel = Angela Merkel’). However, this fact about classical logic conflicts with the <a href="https://plato.stanford.edu/entries/logical-truth/#For" target="_blank">widely-accepted principle</a> that a logical truth is true in virtue of its logical form. That is:
<blockquote>
(FORMAL) If a string of characters is a logical truth, then every string with the same grammatical or syntactic form is also true.
</blockquote>
Since ‘Angela Merkel = Angela Merkel’ has the same grammatical or syntactic form as ‘Zeus = Zeus’, and classical logic classifies the first but not the second as a logical truth, classical logic violates this widely-held principle (FORMAL). [EDIT/ UPDATE (June 1 2021): This argument might be improved by replacing every instance of 'logical truth' with 'theorem', and 'logical form' with 'syntactic form'; see the 7th comment in the comment thread to this post]
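<p>
To make the (FORMAL) violation vivid, here is a toy model of how classical logic and the free-logic options discussed in this post evaluate sentences of the form <i>s</i>=<i>s</i>. This is my own illustrative sketch, not any textbook's official semantics; the partial reference function and the function name are invented for the example:

```python
# Hypothetical toy model: 'Angela Merkel' refers, 'Zeus' does not.
reference = {"Angela Merkel": "angela_merkel"}  # partial reference function

def self_identity_true(term, logic):
    """Evaluate the sentence 'term = term' under a given treatment of empty names."""
    refers = term in reference
    if logic == "classical":
        return refers              # 'Zeus = Zeus' is not true: 'Zeus' cannot be a name
    if logic == "positive":
        return True                # every instance of s = s is a logical truth
    if logic == "negative":
        return refers              # atomic sentences with empty names are false
    if logic == "neutral":
        return True if refers else None  # truth-value gap when the name is empty

same_form = ["Angela Merkel", "Zeus"]
# Classical logic gives different verdicts to sentences of the same syntactic
# form, violating (FORMAL); positive free logic treats them uniformly.
print([self_identity_true(t, "classical") for t in same_form])  # [True, False]
print([self_identity_true(t, "positive") for t in same_form])   # [True, True]
```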
<p>
'Positive' free logicians can avoid this problem: they make every instance of <i>s</i>=<i>s</i>, including 'Zeus = Zeus', a logical truth (again, where <i>s</i> has all the grammatical features of a name). And 'negative' and 'neutral' free logics make no instances of <i>s</i>=<i>s</i> logical truths. (However, as Nolt points out in the Free Logic SEP entry, "[I]n negative or neutral free logic [it] is not the case [that] ... any substitution instance of a valid formula ... is itself a valid formula"; see the <a href="https://plato.stanford.edu/entries/logic-free/#probs" target="_blank">reference there</a> for explanation.)
<p>
<b>Marquis’s double standard</b> (2021-05-23)<br>
I teach bioethics most years. And like many people who teach bioethics, I teach <a href="https://www.jstor.org/stable/2026961">Don Marquis’s article</a> that argues that typical cases of voluntary abortion are very morally wrong. There are many objections that one can and should make to the claims in this article. This semester, another one came to me. I have not seen it before. This objection may very well already be out there; Google Scholar says Marquis’s article has been cited 618 times. I wrote it down, mostly in order to get it clear in my own head. If something like this argument is already out there somewhere among those 618 citing articles, please let me know.
<p>
Everyone agrees that in the large majority of cases, murder is seriously morally wrong. Marquis asks the question: Why? What makes murder seriously morally wrong?
Marquis’s answer:
<br>
<blockquote>
(FLO) If something has a <b>future like ours</b> (with many “experiences, activities, projects, and enjoyments” (189)), then it is prima facie seriously morally wrong to destroy that thing.
</blockquote>
Marquis combines (FLO) with the (dubious*) claim that a typical fetus has a future like ours to derive an anti-abortionist conclusion.
<p>
One initial critical reaction to (FLO): Are you saying it’s morally permissible to kill people who are near the end of their lives, since they have very little future left?
<p>
Marquis’s (dialectically correct and fair) reaction: No. If you read principle (FLO) carefully, you’ll see that an entity’s having a future like ours is SUFFICIENT to make it morally wrong to destroy that entity. It is NOT a necessary condition: Marquis is not claiming that if a being lacks a future like ours, then it is morally permissible to kill it. He points out that there can be other reasons, besides having a future like ours, why it is wrong to kill people who have very little future remaining.
<p>
My ultimate conclusion in this post is that Marquis does not allow one of his opponents the exactly parallel ‘correct and fair’ reaction, for their competing position. That is, Marquis’s defense relies on a double standard: he allows himself to have multiple (non-competing) explanations for why different killings are morally wrong, but he does not allow his opponents to have multiple non-competing explanations for why different killings are morally permissible.
<p>
Let's get started. One way to resist Marquis’s argument is to offer an alternative answer to the question ‘What makes murder wrong?’ One alternative answer Marquis considers is the ‘Desire Account’:
<br>
<blockquote>
(DESIRE) If a being has a desire to live, then it is prima facie seriously morally wrong to destroy it. </blockquote>
Clearly, if one accepts this as a correct explanation of what makes murder wrong, then the main motivation to accept (FLO) disappears, and with it a central motivation for accepting Marquis’s anti-abortion conclusion.
<p>
Marquis responds as follows. (DESIRE) does not generate a valid argument that abortion is typically morally permissible, even if the anti-abortionist grants that the fetus lacks a desire to live. To create a valid pro-choice argument from the premise that the fetus lacks a desire to live, we would need the converse direction of the conditional in (DESIRE), namely
<blockquote>
(CONVERSE DESIRE) If a being lacks the desire to live, then it is prima facie morally permissible to destroy it. </blockquote>
And (CONVERSE DESIRE) is incorrect. As Marquis points out, it is not morally permissible to kill a person who is currently asleep, or who is strongly suicidal, even though neither of those two types of people currently has a desire to live.
<p>
So much for set-up; now I can state my point. I happily grant that (CONVERSE DESIRE) is incorrect. But I deny that someone who accepts the Desire Account—even if they accept it in order to undercut Marquis’s argument—must accept (CONVERSE DESIRE).
<p>
Obviously, logically, one can accept (DESIRE) without accepting (CONVERSE DESIRE). So Marquis’s reply must be that a person who criticizes his argument by appealing to (DESIRE) must dialectically be committed to (CONVERSE DESIRE), if they actually hope to undercut his argument. Spelling this out: Marquis’s imagined critic thinks (DESIRE) better explains what makes murder wrong than (FLO); thus this critic accepts (DESIRE) in place of (FLO), thereby removing needed evidential (abductive, inference-to-the-best-explanation) support for one of the premises of Marquis’s argument. But Marquis thinks that, in order to undermine his anti-abortionist argument, his opponent needs (CONVERSE DESIRE). Why? (To be honest I’m not 100% sure, but:) The combination of (DESIRE) plus the claim that the fetus lacks a desire to live does not entail that it’s morally permissible to destroy a fetus. In order for ‘Fetuses lack desires’ to deliver a validly derived pro-choice conclusion, (CONVERSE DESIRE) is needed as a premise. Therefore, Marquis concludes, for proponents of the Desire Account to have an argument for their pro-choice position, they must accept (CONVERSE DESIRE).
<p>
But this is incorrect. Someone can believe that the Desire Account’s explanation of why killing adults is wrong is at least as good as Marquis’s (FLO), and use some other rationale to justify their pro-choice position that does not use (CONVERSE DESIRE) as a premise. (For example, following J. J. Thomson, they could claim that abortion is morally permissible because I have a prima facie right to control the degree to which other beings use my body.) Marquis’s criticism would only be legitimate if the only way a proponent of the Desire Account could argue for the permissibility of abortion were by appealing to whether or not beings had a desire to live. But that seems pretty clearly false, or at least not independently motivated.
<p>
Now, one might imagine Marquis responding to this by saying something like ‘My overall account is better, because (FLO), unlike (DESIRE), gives a unified treatment of abortion cases and murder,’ or ‘The Desire Account proponent you have described has to make an ad hoc maneuver, having one explanation for what makes killing adults wrong, and a totally different explanation for what makes destroying fetuses morally acceptable.’ And here (at last) we get to the ‘double standard’ I mentioned at the beginning. Marquis allows himself to have one explanation for what makes killing a typical child or middle-aged adult wrong, and a separate, distinct explanation for what makes killing a very elderly person wrong. If he didn’t allow himself two distinct explanations, then the first (prima facie unfair) criticism we saw of his view would work: his explanation of what makes murder wrong would have to be rejected, on the grounds that it does not entail that a nursing home massacre is a moral atrocity (of course, it doesn’t entail that it isn’t an atrocity, either).
<p>
And eliminating the double standard would seriously undermine Marquis’s position. For what happens if we apply a uniform standard to both the future-like-ours account and the desire account of what makes killing wrong? Then we must either (i) allow the desire account proponent a second, separate explanation for what makes killing a suicidal person wrong (so that the desire account is saved from Marquis’s criticism), or (ii) grant that Marquis’s future-like-ours account is undermined by the fact that it does not entail that killing very elderly people is extremely morally wrong.
<p>
- - - -<br>
* I think that if a pregnant person has decided to get an abortion, then that fetus no longer has a future (like ours). For a fetus to have a future at all, it needs the continued support of the pregnant person’s body. The existence of the fetus’s future depends on this physiological support continuing; remove that support (for whatever reason, e.g. the biological/physiological conditions that create a spontaneous abortion), and the fetus no longer has a future. The pregnant person’s decision to get an abortion is one way to end that physiological support. So while I grant that a fetus in the uterus of a pregnant person who plans to take the pregnancy to term will usually have a future like ours, a fetus in a pregnant person who plans to get an abortion does not have a future like ours, any more than a fetus that has some sort of 'purely biological' condition or genetic trait that would prevent it from coming to term and/or living beyond a few years. Now, one might argue that there is a difference between a genetic atypicality that prevents a fetus from coming to term (e.g. chromosomal aneuploidy), and a decision made by the pregnant person to terminate the pregnancy: the former is not under anyone's voluntary control, whereas the second one is under the pregnant person's control. I think this difference does not make a moral difference, at least in the present dialectical situation. Imagine I'm an employee at a small-to-medium sized company where the entire upper management is extremely anti-Republican. Further imagine that the upper management find out that I am a very active member of the Republican party. I think most people would agree with the claim that I don't have (much of) a future at that company: they'll fire me at their first opportunity, and certainly won't ever promote me. This indicates that (resolute) decisions can make it so that people don't have futures.
<p>
<b>Causal attribution & election results</b> (2020-11-08)<br>
Many people on my timeline are sharing this excellent interview with Rep. Alexandria Ocasio-Cortez:
<br>
<a href="https://www.nytimes.com/2020/11/07/us/politics/aoc-biden-progressives.html" target="_blank">https://www.nytimes.com/2020/11/07/us/politics/aoc-biden-progressives.html</a>
<p>
Here's a representative quote from her:
<blockquote>
If the party believes after 94 percent of Detroit went to Biden, after Black organizers just doubled and tripled turnout down in Georgia, after so many people organized Philadelphia, the signal from the Democratic Party is the John Kasichs won us this election? I mean, I can’t even describe how dangerous that is.
</blockquote>
<p>
After reading that, the philosopher part of my brain started wondering about how we should think through causal questions in the neighborhood. Did 94% of Detroit going to Biden win him the election, or did winning over the ‘John Kasichs’ (right-leaning centrists) of the swing-state electorate do it? (Let’s suppose, for the sake of argument, that there were a non-trivial number of John Kasichs in swing states who either voted for Biden or chose not to vote for Trump in the 2020 election; I am completely open to the latter claim being factually false.) The scenario I describe below is highly idealized, and may be massively disanalogous to what actually happened in the 2020 electorate, but the idealized scenario brings out something that MIGHT be going on in this causal debate about the 2020 results.
<p>
Suppose 9 people are voting on a proposal. Further imagine that the proposal passes, by a 5-4 vote. In a strict sense, all five of those ‘yea’ votes were necessary to bring about the effect of the proposal passing. (And we can even imagine that each of the 5 votes ‘yea’ for a different reason.) But we often speak of one (or more) of those 5 as THE cause, or at least the decisive cause, of the proposal passing. Relatedly, maybe one of those 5 voters is seen as especially responsible for the proposal’s passage (I recognize that responsibility and causation are not identical; but they are related). Often, this one is called the ‘swing vote.’
<p>
But all 5 of those votes were necessary to bring about the effect—so how can we pick out one as privileged over the other 4? If we hold all votes but one of the 5 'yea's constant, and 'wiggle'/intervene on that one 'yea', then the effect flips from passage to failure -- and that is equally true for all 5 of the 'yea' votes. Now one reasonable reaction to this is simply to reject the idea that one of the 5 yea-votes is in any way special or privileged. People may think one of them is special, but they are wrong. Although this is a reasonable response, I am curious whether there might be anything salvageable or reasonable in ever causally privileging one 'yea' over the other 'yea's.
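<p>
The point that every 'yea' is equally pivotal can be checked mechanically. Here is a minimal sketch of the 9-voter scenario (the voter names and the majority-rule function are invented for illustration):

```python
# Toy model of the 5-4 vote: holding the other 8 votes fixed, flipping ANY
# single 'yea' flips the outcome, so every 'yea' is equally necessary.
votes = {f"voter{i}": v for i, v in enumerate("yyyyynnnn")}  # 5 yea, 4 nay

def passes(vs):
    """Simple majority rule over a dict of 'y'/'n' votes."""
    return sum(v == "y" for v in vs.values()) > len(vs) / 2

# Intervene on each 'yea' one at a time, holding all other votes constant:
pivotal = [n for n, v in votes.items() if v == "y" and not passes({**votes, n: "n"})]
print(passes(votes), len(pivotal))  # True 5 -- the proposal passes, and all 5 yeas are pivotal
```

The interventionist 'wiggle' test, in other words, treats all 5 'yea's symmetrically; nothing in the counterfactual structure alone singles out the swing voter.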
<p>
I am very much <b>not</b> an expert on the <a href="https://philpapers.org/rec/LIVEPA">causal attribution literature</a>, so I strongly suspect that someone has already said this. I couldn't find anyone saying it after a little googling, but if any readers know of someone who has already published this point, please let me know in the comments. Anyway, here’s my (probably not-new) hypothesis.
<blockquote>Out of a set of partial causes, each of which was necessary to bring about an actual effect, we privilege the cause that fails to hold in the <b>closest possible world</b> to our own.</blockquote> This is why the swing voter is considered especially responsible for the proposal’s passage: out of all 5 ‘yea’ votes, the actual world would have to undergo the smallest change to flip a swing voter from yea to nay.
<p>
And this matches other causal attributions we make as well. We say that the match lit because it was struck, not because there is oxygen in the air, even though both those conditions are necessary for sustained burning to occur. On the hypothesis above, this is because the world in which I don’t strike the match is closer to our actual world than the world in which I am in a very low-oxygen environment.
<p>
So on this view, questions about whether Biden’s victory is caused by the John Kasichs of the electorate, or increasing turnout in Georgia, come down to the following question: Which is closer to the actual world, (a) the Kasichs of the electorate voting for Trump at roughly the same rate as in 2016, or (b) Black turnout in Georgia remaining at roughly 2016 levels? I genuinely have no idea.
<p>
As I think about it, the causal question seems actually not to matter for the political question of what the party should do, to win in the future – unless distance between possible worlds can be measured by money and other resources. The question, in terms of promoting future success, is not ‘What caused the Biden victory?’ (and then try to replicate that cause, next time around) but rather ‘What is the most cost-effective intervention to create more favorable vote margins?’. These are related, in that a possible world where I bought 1 more blueberry muffin than I actually did this morning is closer than the possible world where I bought 2 more blueberry muffins than I actually did. But it would be surprising if a cross-world metaphysical metric could be given by just tallying up dollars and cents. (Suppose in the actual world I bought 5 blueberry muffins today. Further suppose a muffin costs the same as an apple. Which is closer to the actual world: (i) the world in which I buy 1 apple in addition to the 5 muffins, or (ii) the world in which I buy 2 more blueberry muffins, in addition to the original 5?) That would make modality and causation very anthropocentric, it seems.
<p>
Another extremely important aspect of all this not addressed above is that there are also moral reasons to prefer one plan of action over another. When people’s ability to vote is being substantially suppressed or hindered, there is also a serious moral obligation to remove those obstacles, even if the dollar-per-vote-gained wouldn’t be as high as another TV ad targeting centrist voters who do not face significant obstacles to the ballot box. You don’t have to be an orthodox Rawlsian to think considerations of justice should outweigh considerations of efficiency, at least in most cases. This may be part of why Ocasio-Cortez says it would be so "dangerous" to focus future campaigns on flipping the John Kasichs of the electorate, instead of ensuring everyone is enfranchised in a substantive and meaningful way.
<p>
<b>One way to test scientific realism</b> (2020-08-04)<br>
One way of formulating Scientific Realism is as follows:<br>
<i>What our successful scientific theories say about <b>unobservable</b> entities and processes is approximately true.</i><br>
This is not the only way to formulate scientific realism, but it is one of the more common ones, and it does effectively separate realism from versions of anti-realism which hold that we are not justified in believing what our theories say about unobservables.
<br>
<br>
Obviously, this version of Scientific Realism cannot be directly tested using our current theories and current technology, since what is currently unobservable can't be observed now.
<p>
However, what is observable <b>shifts over time</b> (at least in one important sense of the word 'observable'). This can happen either because (1) we develop the ability to reach new regimes of old variables (e.g. scientists create technology to make materials colder or hotter than we previously could, or we can study bodies moving at higher and higher velocities), or because (2) scientists develop new instruments that enable new types of observation reports (e.g. telescopes, microscopes, fMRI machines, or mass spectrometers).
<p>
This suggests a way to test realism diachronically, using the historical record. First, find something that went from being unobservable to being observable. Then find theories that were (considered) genuinely successful at that earlier time, and see what claims they made about the previously-unobservable-but-now-observable world. Finally, check those claims against the now-observable reality.
<p>
Scientific Realism (at least the version stated above) predicts that the old claims about the previously-unobservable things will usually approximately match the new observations of those things. (I say 'usually' instead of 'always,' because sensible realists are fallibilists.)
<p>
I have not run this test myself. To do it in an intellectually responsible way, a large survey of past transitions from unobservable-to-observable would have to be collected, and steps would have to be taken to make that sample of transitions representative. However, at first glance, it looks like at least some cherry-picked famous examples don't bode well for the realist's prediction:
<ul>
<li>The telescope played a significant role in the scientific revolution</li>
<li>The vacuum pump played a significant role in the scientific revolution</li>
<li>The ability to cool things down further and further led to the discovery of superconductivity</li>
<li>The ability to study bodies at higher and higher speeds was crucial in the transition from classical mechanics to special relativity</li>
</ul>
<p>
There are historical examples that run in the realist's favor too; I think one good example is that (on the whole, i.e. usually) phylogenetic trees generated via molecular data matched previously existing phylogenetic trees fairly closely (i.e. the old trees were usually 'approximately true,' which is all the realist wants). This is why, as I said, we need a large survey to figure out which historical transitions reflect the overall, general pattern, and which cases are outliers.
<br>
{ADDED LATER (May 2022): Simon Allzen's <a href="http://philsci-archive.pitt.edu/20327/">"From Unobservable to Observable: Scientific Realism and the Discovery of Radium"</a> is another nice, detailed example that's intended as an example in the realist's favor. Here's a representative quotation: "an entity considered to be unobservable can be inferred at one stage in the process by virtue of its role as indispensable for predictive success [i.e. via IBE -- GF-A], only to change into an observable at a later stage, thus confirming the reliability of the inference. As a case study of the conceptual changes of entities I use the discovery of radium."}
<p>
Finally, in terms of already-existing arguments, this is not really very different from the Pessimistic Induction (if at all). I think of it as a specialized version of that argument, focusing on the realist's claim that the observable/ unobservable boundary does not mark an epistemically important distinction. For this reason, I think of the above as a diachronic version of Kitcher's "<a href="https://www.jstor.org/stable/2693674">Real Realism</a>" (which potentially comes to the opposite conclusion of Kitcher's view).
<p>
<b>The perfectionist's paradox</b> (2018-10-14)<br>
EDIT: ADDED Oct. 16 2018: As Karim Zahidi notes in the first comment below, I made an elementary logical error in thinking that (1) is evidence for (2). So I have crossed out the original mistake <del>like this</del> below. But I still think the argument after that step may work: so now the argument just starts from (2) as a supposedly plausible claim, instead of trying to justify (2) via (1).<br />
<br />
---------- <br />
<br />
This might sound initially like a too-clever undergrad ‘gotcha’ paradox. However, I think that at least for some folks who have perfectionist tendencies, the following is experienced as a genuine difficulty in their lives.<br />
<br />
The following strikes me as plausible:<br />
<del><blockquote><br />
(1) It’s OK to make some mistakes.<br />
</blockquote><br />
If (1) is true, then it seems the following should be true as well, since it’s just a restricted version of (1):</del><br />
<blockquote><br />
(2) It’s morally OK to make some moral mistakes.<br />
</blockquote><br />
But<br />
<blockquote><br />
(3) <i>Making a moral mistake</i> is doing something morally impermissible, <br />
</blockquote><br />
and<br />
<blockquote><br />
(4) if something is <i>morally OK</i>, then it is morally permissible. <br />
</blockquote><br />
And (2)-(4) logically entail<br />
<blockquote><br />
(C) It’s morally permissible to do some morally impermissible things.<br />
</blockquote><br />
And (C) looks like a contradiction. <br />
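<br />
For what it's worth, the entailment can be brute-force checked in a tiny propositional model. This is my own sketch, not anything from the deontic-logic literature: one act, three Boolean atoms, with 'morally impermissible' rendered as 'not morally permissible':

```python
from itertools import product

# Atoms for a single act: ok = 'morally OK', mistake = 'a moral mistake',
# perm = 'morally permissible'. Premises (2)-(4) from the post:
def premises(ok, mistake, perm):
    p2 = ok and mistake               # (2) some moral mistake is morally OK
    p3 = (not mistake) or (not perm)  # (3) a moral mistake is impermissible
    p4 = (not ok) or perm             # (4) morally OK implies permissible
    return p2 and p3 and p4

# No truth-value assignment satisfies all three premises at once: jointly they
# force perm and not-perm, which is exactly the contradiction (C).
print(any(premises(*vals) for vals in product([False, True], repeat=3)))  # False
```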
<br />
(For all I know this is already out there somewhere, but it was not on the interesting <a href="https://plato.stanford.edu/entries/logic-deontic/#4">list of paradoxes of deontic logic in the Stanford Encyclopedia</a>.)<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibweqqXz9kAt5JSbrx97v4-FcFWB1QHxkkusrBWHia3EuruKsQBg4vsCz-l3noxNyEhs2RB5_UqXJemcjjstDOZvUCc-BHMst9cUWXNXCqz76llb9VEZNoMZmfJwOVICaMTIGAgw/s1600/Pobodys+Nerfect.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibweqqXz9kAt5JSbrx97v4-FcFWB1QHxkkusrBWHia3EuruKsQBg4vsCz-l3noxNyEhs2RB5_UqXJemcjjstDOZvUCc-BHMst9cUWXNXCqz76llb9VEZNoMZmfJwOVICaMTIGAgw/s320/Pobodys+Nerfect.jpg" width="320" height="240" data-original-width="480" data-original-height="360" /></a></div>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-14117162.post-38166098331484072562018-07-17T12:02:00.002-07:002022-05-18T08:30:55.639-07:00'Extra-Weak' UnderdeterminationI’ll start briefly with a few reminders and fix a little terminology. I then introduce a new(?) sub-species of underdetermination argument, whose premises are logically weaker than existing underdetermination arguments, but still(?) deliver the anti-realist’s conclusion.<br />
<br />
Underdetermination arguments in philosophy of science aim to show that an epistemically rational person (= someone who weighs their available evidence correctly) should suspend belief in the (approximate) truth of current scientific theories, even if such theories make very accurate predictions.<br />
<br />
A scientific theory <i>T</i> is <i>strongly underdetermined</i> = <i>T</i> has a genuine competitor theory <i>T*</i>, and <i>T</i> and <i>T*</i> make <b>all</b> the same observable predictions. (So the two theories’ disagreement must concern unobservable stuff only.)<br />
<br />
A scientific theory <i>T</i> is <i>weakly underdetermined</i> = <i>T</i> has a genuine competitor theory <i>T*</i>, and all the observable data/ evidence gathered <b>thus far</b> is predicted equally well by both <i>T</i> and <i>T*</i>.(†) (So collecting new data could end a weak underdetermination situation.)<br />
<br />
Anti-realists then argue from the purported fact that (all/most) of our current scientific theories are underdetermined, to the conclusion that an epistemically rational person should suspend belief in (all/most) of our current scientific theories.<br />
<br />
Realists can reasonably respond by arguing that even weak underdetermination, in the above sense, is not common: even if one grants that there is an alternative theory <i>T*</i> that is consistent with the data gathered so far, that certainly does not entail that <i>T</i> and <i>T*</i> are perfectly <b>equally</b> supported by the available data. There is no reason to expect <i>T</i> and <i>T*</i> to be a perfect ‘tie’ for every other theoretical virtue besides consistency with the data. (Theoretical virtues here include e.g. simplicity, scope, relation to other theories, etc.) The evidential support for a hypothesis is not merely a matter of the consistency of that hypothesis with available data.<br />
<br />
At this point, anti-realists could dig in their heels and simply deny the immediately preceding sentence. (The other theoretical virtues are ‘merely pragmatic,’ i.e. not evidential.) But that generates a standoff/stalemate, and furthermore I find that response unsatisfying, since an anti-realist who really believes that should probably be a radical Cartesian skeptic (yet scientific anti-realism was supposed to be peculiar to science).<br />
<br />
So here’s another reply the anti-realist could make: grant the realist’s claims that even weak underdetermination is not all that common in the population of current scientific theories, and furthermore that typically our current theory <i>T</i> is in fact better supported than any of the competitors <i>T<sub>1</sub></i>, <i>T<sub>2</sub></i>, ... that are also consistent with the data collected so far. The anti-realist can grant these points, and still reach the standard underdetermination-argument conclusion that we should suspend belief in the truth of <i>T</i>, IF the sum of the credences one should assign to <i>T<sub>1</sub></i>, <i>T<sub>2</sub></i>, ... is at least 0.5. <br />
<br />
For example: suppose there are exactly 3 hypotheses consistent with all the data collected thus far, and further suppose<br />
Pr(<i>T</i>) = 0.4,<br />
Pr(<i>T<sub>1</sub></i>) = 0.35, and<br />
Pr(<i>T<sub>2</sub></i>) = 0.25. <br />
In this scenario, <i>T</i> is better supported by the evidence than <i>T<sub>1</sub></i> is or <i>T<sub>2</sub></i> is, so <i>T</i> is not weakly underdetermined. However, assuming that one should not believe <i>p</i> is true unless Pr(<i>p</i>)>0.5, one should still not believe <i>T</i> in the above example. <br />
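<br />
The arithmetic is trivial, but worth making explicit; here is a sketch using the credence numbers stipulated above (the dictionary keys and variable names are just labels for the example):

```python
# Credences from the example above: T beats each rival individually,
# yet the rivals jointly outweigh it.
credences = {"T": 0.40, "T1": 0.35, "T2": 0.25}

best_supported = max(credences, key=credences.get)
rival_sum = sum(p for h, p in credences.items() if h != "T")

print(best_supported)       # T   -- so T is not weakly underdetermined...
print(round(rival_sum, 2))  # 0.6 -- ...but it is extra-weakly underdetermined (0.6 >= 0.5)
```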
<br />
I call such a <i>T</i> <i>extra-weakly underdetermined</i>: the sum of rational degrees of belief one should have in <i>T</i>’s competitors is greater than or equal to the rational degree of belief one should have in <i>T</i>.<br />
<br />
We can think about this using the typical toy example used to introduce the idea of underdetermination in our classes, where we draw multiple curves through a finite set of data points:<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTkS7OvWrTqovnzWfKxKlFyHj6k8Y_Sfq11HT9W-l-R3FSMMSjHPFVfHiOLsoZ9mO-7llEKT7oLl3kX9sukV-yAX5NnGr9c1cGslk8BbdDsoCwKV-sUJGQgUobjGF7GZGK3VtYkw/s1600/Underdetermination+graph.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="356" data-original-width="715" height="199" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTkS7OvWrTqovnzWfKxKlFyHj6k8Y_Sfq11HT9W-l-R3FSMMSjHPFVfHiOLsoZ9mO-7llEKT7oLl3kX9sukV-yAX5NnGr9c1cGslk8BbdDsoCwKV-sUJGQgUobjGF7GZGK3VtYkw/s400/Underdetermination+graph.jpg" width="400" /></a></div>We can simultaneously maintain that the straight-line hypothesis (Theory A) is more probable than the others, but nonetheless deny that we should believe it, as long as the other hypotheses’ rational credence levels sum to 0.5 or higher. And there are of course infinitely many competitors to Theory A, so it is an infinite sum. The realist, in response to this argument, will thus have to say that that infinite sum will converge to less than 0.5.<br />
<br />
The above argument from extra-weak underdetermination is clearly related to the ‘catch-all hypothesis’ (in the terminology above, <i>~T</i>) point that has been discussed elsewhere in the literature on realism, especially in connection with Bayesian approaches (see <a href="https://academic.oup.com/bjps/article/60/2/253/1480342">here</a> and the references therein). But I think there is something novel about the extra-weak underdetermination argument: as we add new competitor theories to the pool (<i>T<sub>3</sub></i>, <i>T<sub>4</sub></i>… in the example above), the rational credence level we assign to each hypothesis will presumably go down. (I include ‘presumably,’ because it is certainly mathematically possible for the new hypothesis to only bring down the rational credence level for some but not all of the old hypotheses.) So the point here is not just that there is some catch-all hypothesis, which it is difficult(?) to assign a degree of rational belief to (that's the old news), but also that we increase the probability of something like the 'catch-all' hypothesis by adding new hypotheses to it. (I have to say 'something like' it, because <i>T<sub>1</sub></i>, <i>T<sub>2</sub></i>... are specific theories, not just the negation of <i>T</i>.)<br />
<br />
----<br />
<br />
(†): Note that, unlike some presentations of underdetermination, I do not require that <i>T</i> and <i>T*</i> both predict ALL the available data. I take "Every theory is born refuted" seriously. And I actually think this makes underdetermination scenarios more likely, since a competitor theory need not be perfect -- and the imperfections/ shortcomings of <i>T</i> could be different from those of <i>T*</i> (e.g. <i>T</i> might be more complicated, while <i>T*</i> has a narrower range of predictions). Hasok Chang's <i>Is Water H2O?</i> makes this point concretely and (to my mind) compellingly, about the case of Lavoisier's Oxygen-based chemistry vs. Priestley's Phlogiston-based chemistry.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-14117162.post-42590704841872521982018-03-13T07:25:00.000-07:002018-03-13T08:00:26.302-07:00cognitive impenetrability of some aesthetic perceptionFor me, one of the interesting experiences of getting older is seeing, from the internal first-person perspective, many of the generalizations one hears about 'getting older' come true in my own life. One of the most obvious/ salient ones for me is about musical tastes. I love a lot of hip hop from the early-to-mid 90's. (<a href="https://www.youtube.com/watch?v=PczlQsTohO4">This</a> is probably still my favorite hip hop album of all time.) I do also like some of the stuff that is coming out now, but on average, the beats in particular just sound bad to me. In particular, the typical snare sound -- I can't get over how terrible and thin it sounds.<br />
<br />
But on the other hand, I know full well that, as people get older, they start thinking 'Young people's music today is so much worse than when I was a kid!' And that I heavily discounted old people's views about new music when I was in school. <br />
<br />
Yet this makes absolutely no difference to my perceiving the typical trap snare sound today as really insubstantial and weak -- just ugly. The theoretical knowledge makes zero difference to my experience. <br />
<br />
This reminded me of Fodor's famous argument from the Müller-Lyer illusion* for the cognitive impenetrability of perception. No matter how many times I am told that the two horizontal lines are the same length, no matter how many times I lay a ruler next to each line in succession and measure them to be the same length, I still <b>perceive</b> one line as shorter than the other. My theoretical knowledge just can't affect my perception. In a bit of jargon, the illusion is <i>mandatory</i>.<br />
<br />
My experience of the typical hip hop snare sound today is similarly mandatory for me, despite the fact that I know (theoretically/ cognitively) that, as an old person, I should discount my aesthetic impressions of music coming out today. <br />
<br />
This seems like it could make trouble for a Fodorian who wants to use the mandatoriness of illusions as an argument that perception is unbiased/ theory-neutral -- in a conversation about the best hip hop albums of all time, my aesthetic data would be extremely biased towards stuff that came out between 1989 and 1995.<br />
<br />
-----<br />
*(Have you seen the dynamic Müller-Lyer illusions? Go <a href="https://www.giannisarcone.com/Muller_lyer_illusion.html">here</a> and scroll down to see a few variants.)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-14117162.post-28428873336680151432017-12-01T07:36:00.002-08:002017-12-01T20:59:30.699-08:00Morals and Mood (Situationism vs virtue ethics, once more)<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBR9SyHLC5SEd2nFo4MmWSdR4tOFn5bwPG-_Uj6eGnD-fyQH187J6nR__gjD4q8sU9gDC06gzpORbpdbZ4oeY484ix-cmCK9hwTgt0KhY9NaczaID4OcLxzynoQVAw4MBgRqO8-A/s1600/That+cat+is+in+such+a+bad+mood.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBR9SyHLC5SEd2nFo4MmWSdR4tOFn5bwPG-_Uj6eGnD-fyQH187J6nR__gjD4q8sU9gDC06gzpORbpdbZ4oeY484ix-cmCK9hwTgt0KhY9NaczaID4OcLxzynoQVAw4MBgRqO8-A/s320/That+cat+is+in+such+a+bad+mood.jpg" width="239" height="320" data-original-width="625" data-original-height="836" /></a><br />
Given how much has been written in the last couple of decades about the situationist challenge to virtue ethics, I'm guessing someone has probably already said (something like) this before. But I haven't seen it, and I'm teaching both the <i>Nicomachean Ethics</i> and the section in Appiah's <i>Experiments in Ethics</i> on situationism vs. virtue ethics now, so the material is bouncing around in my head.<br />
<br />
First, a little background. (If you want more detail, there are a couple nice summaries of the debate on the Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/experimental-moral/#ChaVir">here</a> (by Alfano) and <a href="https://plato.stanford.edu/entries/moral-psych-emp/#VirtEthiSkepAbouChar">here</a> (by Doris and Stich).) The basic idea behind the situationist challenge to virtue ethics is the following: there are no virtues (of the sort the virtue ethicist posits), because human behavior is extremely sensitive to apparently minor -- and definitely morally irrelevant -- changes in the environment. For example, studies show that someone is MUCH more likely to help a stranger with a small task (e.g. picking up dropped papers, or making change for a dollar) outside a nice-smelling bakery than a Staples office supply store, or after finding a dime in a pay phone's coin return, or in a relatively quiet place than a relatively loud one. The Situationist charges that if the personality trait of generosity really did exist, and was an important driver of people's behavior, then whether people perform generous actions or not would NOT depend on tiny, morally irrelevant factors like smelling cinnamon rolls, finding a dime, or loud noises. A virtue has to be stable and global; that behavior can change so much in response to apparently very minor environmental changes suggests that there is no such stable, global psychological thing contributing significantly to our behavior. That's the basic Situationist challenge.<br />
<br />
Defenders of Virtue Ethics have offered a number of responses to this situationist challenge (the SEP articles linked in the previous paragraph describe a few). Here is a response that I have not personally seen yet: a person's <b>mood</b> is a significant <i>mediator</i> between the minor situational differences and the tendency to help a stranger. When we describe the experimental results as a correlation between a tiny, apparently unimportant environmental change and a massive change in helping behavior, then the experimental results look very surprising -- perhaps even shocking. But it would be less surprising or shocking, if instead of thinking of what's going on in these experiments as "The likelihood of helping others is extremely sensitively attuned to apparently trivial aspects of our environment," we rather think of what's happening as "All these minor environmental changes have a fairly sizeable effect on our mood." For to say that someone in a particularly good mood is much more likely to help a stranger is MUCH less surprising than "We help people outside the donut shop, but not outside the Staples." In other words: if mood is a major mediator for helping behaviors, then we don't have to think of our behaviors as tossed about by morally irrelevant aspects of our environments. That said, we <i>would</i> have to think of our behavior as shaped heavily by our moods -- but I'm guessing most people would probably agree with that, even if they've never taken a philosophy or psychology class. <br />
<br />
Now, you might think this is simply a case of a distinction without a difference, or "Out of the frying pan, into the fire": swapping smells for moods makes no real difference of any importance to morality. I want to resist this, for two reasons; one more theoretical/ philosophical, and the other more practical.<br />
<br />
First, the theoretical reason why recognizing that mood is a mediator in these experiments matters: I don't think a virtue ethicist would have to be committed to the claim that mood does not have an effect on helping behaviors. Virtue ethicists can agree that generosity should not be sensitive to smells and dimes <i>per se</i>. However, the fact that someone who is in a better mood is (all else equal) more likely to help strangers than someone in a worse mood is probably not devastating evidence against the virtue ethicist's thesis that personality traits (like generosity) exist and play an important role in producing behavior.<br />
<br />
Second, more practically: one concern (e.g. mentioned by Hagop Sarkissian <a href="https://quod.lib.umich.edu/cgi/p/pod/dod-idx/minor-tweaks-major-payoffs-the-problems-and-promise.pdf?c=phimp;idno=3521354.0010.009;format=pdf">here</a>) about the Situationist experimental data in general is that oftentimes we are not consciously aware of the thing in our situation/environment that is driving the change in our behavior (I may not have noticed the nice smells, or the absence of loud noises). But mood is different: I have both better access to my mood, and a baseline/ immediate knowledge that my mood often affects my behavior. Whereas given the situationist's characterization of the data, I often don't know which variables in my environment are causing me to help the stranger or not. So if I am in a foul mood, and realize I am in a foul mood, I could potentially consciously 'correct' my automatic, 'system-1' level of willingness to help others.<br />
<br />
Of course, on this way of thinking about it, i.e. mood as mediating, I often won't know what is CAUSING my good mood. But that's OK, because I will still be able to <i>detect</i> my mood (usually -- of course, sometimes we are sad, or angry, or whatever, without really noticing it. But my point is just that we are better detectors of our current mood than we are of the various elements of our environment that could potentially be influencing our mood positively or negatively). <br />
<br />
So in short: I think the situationist's challenge to virtue ethics is blunted somewhat if we think of mood as a mediator between apparently trivial situational variables and helping behaviors. Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-14117162.post-67038451817124766232017-06-22T08:07:00.002-07:002017-06-22T08:25:45.844-07:00Tarski, Carnap, and semanticsTwo things:<br />
<br />
1. Synthese recently published Pierre Wagner's article <a href="https://link.springer.com/article/10.1007/s11229-015-0853-7">Carnapian and Tarskian Semantics</a>, which outlines some important differences between semantics as Tarski conceived it (at least in the 1930s-40s) and as Carnap conceived it. This is important for anyone who cares about the development of semantics in logic; I'd been hoping someone would write this paper, because (a) I thought it should be written, but (b) I didn't really want to do it myself. Wagner's piece is really valuable, in my opinion. And not merely for antiquarian reasons: many today have the feeling that semantics as we now practice it in logic (roughly: model theory) is the natural/ inevitable way to come at the subject. But how exactly to pursue semantics was actually very much up for debate and in flux for about 20 years after Tarski's 1933 "On the Concept of Truth in Formalized Languages." And the semantics in that monograph is NOT what you would find in a logic textbook today.<br />
<br />
2. I am currently putting the finishing touches on volume 7 of the Collected Works of Rudolf Carnap, which is composed of Carnap's three books on semantics (<i>Introduction to Semantics, Formalization of Logic,</i> and <i>Meaning and Necessity</i>). There is a remark in <i>Intro to Semantics</i> that is relevant to Wagner's topic, which Wagner cited (p.104), but I think might be worth trying to investigate in more detail. Carnap writes: <br />
<blockquote><br />
our [= Tarski's and my] conceptions of semantics seem to diverge at certain points. First ... I emphasize the distinction between semantics and syntax, i.e. between semantical systems as interpreted language systems and purely formal, uninterpreted calculi, while for Tarski there seems to be no sharp demarcation. (1942, pp. vi-vii) <br />
</blockquote><br />
I have two thoughts about this quotation: <br />
(i) Is Carnap right? Or did he misunderstand Tarski? (Carnap had had LOTS of private conversations with Tarski by this point, so the prior probability I assign to me understanding Tarski better than Carnap does is pretty low.) <br />
(ii) If Carnap is right about Tarski on this point, then (in my opinion) we today should give much more credit to Carnap for our current way of doing semantics in logic than most folks currently do. We often talk about 'Tarskian semantics' today as a short-hand label for what we are doing, but if there were 'no sharp demarcation' between model theory and proof theory (i.e. between semantics and syntax), then the discipline of logic would look very different today.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-14117162.post-77806079758927812572017-03-30T09:15:00.003-07:002017-03-30T09:31:00.062-07:00Against Selective Realism (given methodological naturalism)The most popular versions of realism in the scientific realism debates today are species of <i>selective realism</i>. A selective realist does not hold that mature, widely accepted scientific theories are, taken as wholes, approximately true---rather, she holds that (at least for some theories) only certain parts are approximately true, but other parts are not, and thus do not merit rational belief. The key question selective realists have grappled with over the last few decades is: which are the 'good' parts (the "working posits," in Kitcher's widely used terminology) and which are the 'bad' parts (the "idle wheels") of a theory? <br />
<br />
An argument against any sort of philosophical selective realism just occurred to me, and I wanted to try to spell it out here. Suppose (as the selective realist must) there is some scientific theory that scientists believe/ accept, and which according to the selective realist makes at least one claim (call it <i>p</i>) that is an idle wheel, and thus should not be rationally accepted. <br />
<br />
It seems to me that in such a situation, the selective realist has abandoned (Quinean) <i>methodological naturalism</i> in philosophy, which many philosophers---and many philosophers of science, in particular---take as a basic guideline for inquiry. <a href="https://plato.stanford.edu/entries/naturalism/#Sci">Methodological naturalism</a> (as I'm thinking of it here) is the view that philosophy does not have any special, supra-scientific evidential standards; the standards philosophers use to evaluate claims should not be any more stringent or rigorous than standards scientists themselves use. And in our imagined case, the scientists think there is sufficient evidence for <i>p</i>, whereas the selective realist does not. <br />
<br />
To spell out more fully the inconsistency of selective realism and methodological naturalism in philosophy, consider the following dilemma:<br />
<blockquote>By scientific standards, one either should or should not accept <i>p</i>.<br />
<br />
If, by scientific standards, one should <b>not</b> accept <i>p</i>, then presumably the scientific community <i>already</i> does not accept it (unless the community members have made a mistake, and are not living up to their own evidential standards). The community could have re-written the original theory to eliminate the idle wheel, or they could have explicitly flagged the supposed idle wheel as a false idealization, e.g. letting population size go to infinity. But however the community does it, selective realism would not recommend anything different from what the scientific community itself says, so selective realism becomes otiose ... i.e., an idle wheel. (Sorry, I couldn't help myself.)<br />
<br />
On the other hand, if, by scientific standards, one should accept <i>p</i>, then the selective realist can't be a methodological naturalist: the selective realist has to tell the scientific community that they are wrong to accept <i>p</i>.<br />
</blockquote>I can imagine at least one possible line of reply for the selective realist: embrace the parenthetical remark in the first horn of the dilemma above, namely, scientists are making a mistake <i>by their own lights</i> in believing <i>p</i>. Then the selective realist would need to show that there is a standard operative in the scientific community that the scientists who accept <i>p</i> don't realize should apply in the particular case of <i>p</i>. But that may prove difficult to show at this level of abstraction.Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-14117162.post-32848327426416091512017-02-11T06:57:00.002-08:002017-02-11T07:14:56.546-08:00Confirmation holism and justification of individual claimsHow do epistemological(/confirmation) holists think about the justification of the individual claims that compose the relevant ‘whole’?<br />
<br />
Epistemological holism or confirmation holism, I take it, holds that sentences cannot be justified or disconfirmed in isolation. In other words, we can only fundamentally justify or disconfirm sufficiently large conjunctions of individual claims. What counts as ‘sufficient’ depends on how big a chunk of theory you think is needed to be justifiable: some holists allow big sets of beliefs to be confirmed/justified even if they are proper subsets of your total belief-set. (I will use 'individual claim' to mean a claim that is 'too small' i.e. 'too isolated' to admit of confirmation, according to the holist.)<br />
<br />
I’m guessing confirmation holists also think that the individual claims that make up a justified whole are themselves justified. (If holists didn’t think that, then it seems like any individual claim a holist made would be unjustified, by the holist’s own lights, unless they uttered it in conjunction with a sufficiently large set of other utterances.) The individual claims are justified, presumably, by being part of a sufficiently large conjunction of claims that are <i>fundamentally/ basically</i> justified. Individual claims, if justified, can only be <i>derivatively</i> justified.<br />
<br />
Presumably, if one believes that ‘A<sub>1</sub> & A<sub>2</sub> & … & A<sub>n</sub>’ (call this sentence AA) is justified, then that person has (or: thinks they should have) a rational degree of belief in AA over 0.5. <br />
<br />
But now I have questions: <br />
<br />
(1) How does a holist get from the degree of belief she has in AA, to the degree of belief she has in a particular conjunct? There are many, many ways consistent with the probability calculus to assign probabilities to each of the A<sub><i>i</i></sub>’s to get any particular rational degree of belief (except 1). <br />
<br />
(2) We might try to solve that ‘underdetermination’ problem in (1) by specifying that every conjunct is assigned the same degree of belief. This seems <i>prima facie</i> odd to me, since presumably some conjuncts are more plausible than others, but I don’t see how the holist could justify having different levels of rational belief in each conjunct, since each conjunct gets its justification only through the whole. (Perhaps the partial holist can tell a story about claims participating in multiple sufficiently large conjunctions that are each justified?)<br />
<br />
(3) Various ways of intuitively assigning degrees of belief to the individual conjuncts seem to run into problems: <br />
<br />
<p style="margin-left: 40px">(i) The holist might say: if I have degree of belief <i>k</i> in AA, then I will have degree of belief <i>k</i> in each conjunct. Problem: that violates the axioms of the probability calculus (unless <i>k</i>=1).</p><br />
<p style="margin-left: 40px">(ii) Alternatively, if the holist wants to obey the axioms of the probability calculus, then the rational degree of belief she will need to have in each conjunct must be VERY high. For example, if the degree of belief in AA is over 0.5, and each conjunct is assigned the same value (per (2)), and there are 100 individual conjuncts, then one’s degree of belief in each conjunct must be over 0.993. And that seems really high to me.</p><br />
<p style="margin-left: 40px">(iii) One alternative to that would be to say that each conjunct of a large conjunction has to be over 0.5. But then you would have to say that the big 100-conjunct conjunction is justified when your rational degree of belief in it is anything above 7.9x10<sup>-31</sup>. And that doesn’t sound like a justified sentence.</p><br />
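The arithmetic behind (ii) and (iii) is quick to verify; here is a minimal check in Python (treating the 100 conjuncts as probabilistically independent, as the text's figures implicitly do):

```python
# (ii) If Pr(A1 & ... & A100) > 0.5 and every (independent) conjunct gets the
# same probability p, then p**100 > 0.5, i.e. p > 0.5**(1/100).
per_conjunct = 0.5 ** (1 / 100)
print(round(per_conjunct, 3))  # 0.993: each conjunct needs probability > 0.993

# (iii) Conversely, 100 independent conjuncts each barely over 0.5 can leave
# the whole conjunction with probability as low as 0.5**100.
conjunction_floor = 0.5 ** 100
print(conjunction_floor)  # about 7.9e-31, matching the figure in the text
```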
Two final remarks: First, it seems like someone must have thought of this before, at least in rough outline. But my 10 minutes of googling didn’t turn up anything. So if you have pointers to the literature, please send them along. Second, for what it's worth, this occurred to me while thinking about the preface paradox: if you think justification only fundamentally accrues to large conjunctions and not individual conjuncts, then it seems like you couldn’t run (something like) the preface paradox, since you couldn't have a high rational degree of belief in (an analogue of) the claim ‘At least one of the sentences in this book is wrong.’<br />
Unknownnoreply@blogger.com5tag:blogger.com,1999:blog-14117162.post-26002777917007166602016-09-08T09:24:00.003-07:002016-09-09T04:36:07.174-07:00Defining 'inductive argument' should not be this difficultCarnap, give me strength. I cannot define ‘inductive argument’ and ‘fallacious argument’ in a way that correctly captures the intuitive boundary between inductive and fallacious arguments.<br />
<br />
Like (almost) everyone else, I define ‘deductively correct,’ i.e. ‘valid,’ as follows: <br />
An argument A is deductively correct (valid) = <br />
If all of A’s premises are true, then A’s conclusion must be true = <br />
If all of A’s premises are true, then it is impossible for A’s conclusion to be untrue.<br />
<br />
Now, there are two (or maybe three) definitions of ‘inductive argument’ that follow this pattern of definition.<br />
<br />
<blockquote><i>(Definition 1: probable simpliciter)</i><br />
An argument B is inductively correct =<br />
If all of B’s premises are true, then B’s conclusion is probably true = <br />
If all of B’s premises are true, it is unlikely that B’s conclusion is untrue</blockquote><br />
<blockquote><i>(Definition 2: more probable)</i><br />
An argument C is inductively correct = <br />
If all of C’s premises are true, then C’s conclusion is more likely to be true = <br />
If all of C’s premises are true, the probability of the conclusion’s untruth decreases</blockquote><br />
In other words:<br />
Definition 1: Pr(Conclusion | Premises) > 0.5<br />
Definition 2: Pr(Conclusion | Premises) > Pr(Conclusion)<br />
<br />
(If you think >0.5 is too low, you can pick whatever higher cutoff you like. My current problem is different from picking where to set that threshold number.)<br />
<br />
Now I can state my problem: it looks like neither definition makes the correct classifications for some paradigm examples of fallacies.<br />
<br />
On Definition 1, any argument whose conclusion is highly probable regardless of (i.e., independently of) the truth or falsity of the premises will count as inductively correct. That is, any <b>non sequitur</b> whose conclusion is probably true will count as inductively correct. (This is the inductive analog of the fact that a logical truth is a deductive consequence of any set of premises. But it just feels much more wrong in the inductive case, for some reason; maybe just because I've been exposed to this claim about deductive inference for so long that it has lost its un-intuitiveness?)<br />
<br />
On Definition 2, <b>hasty generalization</b> (i.e. sample size too small) will count as inductively correct: suppose I talk to 3 likely US Presidential voters and all 3 say they are voting for Clinton. It is intuitively fallacious to conclude that Clinton will win the Presidency, but surely those 3 responses give some (small, tiny) boost to the hypothesis that she will win the Presidency.<br />
<br />
But non sequiturs and hasty generalizations are both paradigm examples of fallacies, so neither Definition 1 nor Definition 2 will work.<br />
<br />
I said above that there might be a third definition. This would simply be the conjunction of Definitions 1 and 2: If the premises are true, then the conclusion must be BOTH probable simpliciter (Def. 1) AND more probable (Def. 2). It seems like this would rule out both non sequiturs (because a non sequitur's premises do not increase the probability of its conclusion, violating Def. 2) and hasty generalizations (because a hasty generalization's conclusion needn't be probable simpliciter, violating Def. 1). <br />
<br />
Problem solved? I don’t think it is, because there could be a hasty generalization whose conclusion is probable even if the premises are all false. Given our current background information (as of Sept. 6 2016) about the US Presidential race, the above example probably fits this description: ‘Clinton will win’ is more likely to be true than not, and the sample of three voters would boost a rational agent’s confidence in that claim (by a minuscule amount). That said, I will grant that a reasonable person might think this example is NOT a fallacy, but rather just an inductively correct argument that is so weak it is ALMOST a fallacy.<br />
<br />
Before signing off, I will float a fourth candidate definition:<br />
<br />
Definition 4: Pr(Conclusion | Premises) >> Pr(Conclusion)<br />
put otherwise:<br />
Definition 4: Pr(Conclusion | Premises) > Pr(Conclusion)+<i>n</i>, for some non-tiny <i>n</i>>0.<br />
<br />
You could also conjoin this with Definition 1 if you wanted. This would take care of hasty generalizations. But does it create other problems? (You might object “That <i>n</i> will be arbitrary!” My initial reaction is that setting the line between inductive and fallacious at >0.5 [or wherever you chose to set it] is probably arbitrary in a similar way.)<br />
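All four candidate definitions are just inequalities on Pr(Conclusion | Premises) and Pr(Conclusion), so they can be compared mechanically. A sketch (the example probabilities and the n = 0.1 threshold are my own illustrative choices):

```python
# Each candidate definition is a test on pr_c_given_p = Pr(Conclusion | Premises)
# and pr_c = Pr(Conclusion). The 0.5 cutoff and n = 0.1 are illustrative choices.
def def1(pr_c_given_p, pr_c):          # probable simpliciter
    return pr_c_given_p > 0.5

def def2(pr_c_given_p, pr_c):          # more probable
    return pr_c_given_p > pr_c

def def3(pr_c_given_p, pr_c):          # conjunction of Definitions 1 and 2
    return def1(pr_c_given_p, pr_c) and def2(pr_c_given_p, pr_c)

def def4(pr_c_given_p, pr_c, n=0.1):   # non-trivially more probable
    return pr_c_given_p > pr_c + n

# A non sequitur with a probable conclusion: the premises are irrelevant,
# so Pr(Conclusion | Premises) = Pr(Conclusion) = 0.9.
print(def1(0.9, 0.9), def2(0.9, 0.9))        # True False

# A hasty generalization: three voters nudge Pr('Clinton wins') from 0.60
# up to (say) 0.601.
print(def2(0.601, 0.60), def4(0.601, 0.60))  # True False
```

Definition 1 misclassifies the non sequitur as inductively correct; Definition 2 misclassifies the hasty generalization; Definition 4's margin screens the latter out.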
Unknownnoreply@blogger.com7tag:blogger.com,1999:blog-14117162.post-73076600500960526382016-09-08T09:24:00.000-07:002016-09-09T04:35:51.570-07:00Defining 'inductive argument' should not be this difficultCarnap, give me strength. I cannot define ‘inductive argument’ and ‘fallacious argument’ in a way that correctly captures the intuitive boundary between inductive and fallacious arguments.<br />
<br />
Like (almost) everyone else, I define ‘deductively correct,' i.e. 'valid' as follows: <br />
An argument A is deductively correct (valid) = <br />
If all of A’s premises are true, then A’s conclusion must be true = <br />
If all of A’s premises are true, then it is impossible for A’s conclusion to be untrue.<br />
<br />
Now, there are two (or maybe three) definitions of ‘inductive argument’ that follow this pattern of definition.<br />
<br />
<blockquote><i>(Definition 1: probable simpliciter)</i><br />
An argument B is inductively correct =<br />
If all of B’s premises are true, then B’s conclusion is probably true = <br />
If all of B’s premises are true, it is unlikely that B’s conclusion is untrue</blockquote><br />
<blockquote><i>(Definition 2: more probable)</i><br />
An argument C is inductively correct = <br />
If all of C’s premises are true, then C’s conclusion is more likely to be true = <br />
If all of C’s premises are true, the probability of the conclusion’s untruth decreases</blockquote><br />
In other words:<br />
Definition 1: Pr(Conclusion | Premises) > 0.5<br />
Definition 2: Pr(Conclusion | Premises) > Pr(Conclusion)<br />
<br />
(If you think >0.5 is too low, you can pick whatever higher cutoff you like. My current problem is different from picking where to set that threshold number.)<br />
<br />
Now I can state my problem: it looks like neither definition will make the right classifications for some paradigm examples of fallacies.<br />
<br />
On Definition 1, any argument whose conclusion highly probable regardless/independent of the truth or falsity of the premises, will count as inductively correct. That is, any <b>non sequitur</b> whose conclusion is probably true will count as inductively correct. (This is the inductive analog of the fact that a logical truth is a deductive consequence of any set of premises. But it just feels much more wrong in the inductive case, for some reason; maybe just because I've been exposed to this claim about deductive inference for so long that it has lost its un-intuitiveness?)<br />
<br />
On Definition 2, <b>hasty generalization</b> (i.e. sample size too small) will count as inductively correct: suppose I talk to 3 likely US Presidential voters and all 3 say they are voting for Clinton. It is intuitively fallacious to conclude that Clinton will win the Presidency, but surely those 3 responses give some (small, tiny) boost to the hypothesis that she will win the Presidency.<br />
<br />
But non sequiturs and hasty generalizations are both paradigm examples of fallacies, so neither Definition 1 nor Definition 2 will work.<br />
<br />
I said above that there might be a third definition. This would simply be the conjunction of Definitions 1 and 2: If the premises are true, then the conclusion must be BOTH probable simpliciter (Def. 1) AND more probable (Def. 2). It seems like this would rule out both non sequiturs (because the truth of the premises increases the probability of the conclusion) and hasty generalizations (because the conclusion wouldn’t be probable simpliciter). <br />
<br />
Problem solved? I don’t think it is, because there could be a hasty generalization for an argument whose conclusion is probable even if the premises are all false. Given our current background information (as of Sept. 6 2016) about the US Presidential race, the above example probably fits this description: ‘Clinton will win’ is more likely to be true than not, and the sample of three voters would boost a rational agent’s confidence in that claim (by a miniscule amount). That said, I will grant that a reasonable person might think this example is NOT a fallacy, but rather just an inductively correct argument that is so weak it is ALMOST a fallacy.<br />
<br />
Before signing off, I will float a fourth candidate definition:<br />
<br />
Definition 4: Pr(Conclusion | Premises) >> Pr(Conclusion)<br />
put otherwise:<br />
Definition 4: Pr(Conclusion | Premises) > Pr(Conclusion)+n, for some non-tiny n>0.<br />
<br />
You could also conjoin this with Definition 1 if you wanted. This would take care of hasty generalizations. But does it create other problems? (You might object “That n will be arbitrary!” My initial reaction is that setting the line between inductive and fallacious at >0.5 [or wherever you choose to set it] is probably arbitrary in a similar way.)<br />
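To make the candidate definitions concrete, here is a toy sketch in Python. The probabilities and the thresholds (0.5 for Definition 1, n = 0.1 for Definition 4) are invented placeholders; the post's point is precisely that any such choice is somewhat arbitrary.

```python
# Toy classifiers for the candidate definitions of inductive correctness.
# p_c_given_p = Pr(Conclusion | Premises); p_c = Pr(Conclusion) simpliciter.

def def1(p_c_given_p, threshold=0.5):
    # Definition 1: the conclusion is probable simpliciter, given the premises.
    return p_c_given_p > threshold

def def2(p_c_given_p, p_c):
    # Definition 2: the premises raise the probability of the conclusion.
    return p_c_given_p > p_c

def def4(p_c_given_p, p_c, n=0.1):
    # Definition 4: the premises raise the probability by a non-tiny amount.
    return p_c_given_p > p_c + n

# The hasty generalization from the post: 'Clinton will win' is already
# probable on background information, and the three-voter sample boosts
# it only minutely (numbers invented for illustration).
p_prior, p_post = 0.6, 0.601
print(def1(p_post), def2(p_post, p_prior), def4(p_post, p_prior))
# Definitions 1 and 2 (and hence their conjunction) count the argument
# inductively correct; Definition 4 does not.
```

On these toy numbers, Definitions 1 and 2 both classify the three-voter argument as inductively correct, while Definition 4 rejects it, which is the intended behavior.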
Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-14117162.post-1558109228196207792016-03-12T20:28:00.002-08:002016-03-13T09:31:01.520-07:00the no-miracles argument may not commit the base-rate fallacyCertain philosophers argue that the No-Miracles Argument for realism (Colin Howson, Peter Lipton), the Pessimistic Induction against realism (Peter Lewis), or both arguments (P.D. Magnus and Craig Callender) commit the base-rate fallacy. I am not sure these objections are correct, and will try to articulate the reason for my doubt here.<br />
<br />
I need to give some set-up; many readers will be familiar with some or all of this. So you can skip the next few paragraphs if you already know about the base-rate objection to the No-Miracles Argument and the Pessimistic Induction.<br />
<br />
I suspect many readers are familiar with the base-rate fallacy; there are plenty of explanations of it around the internet. But just to have a concrete example, let’s consider a classic case of base-rate neglect. We are given information like the following, about a disease and a diagnostic test for this disease:<br />
<br />
(1) There is a disease D that, at any given time, 1 in every 1000 members of the population has: Pr(D)=.001.<br />
<br />
(2) If someone actually has disease D, then the test always comes back positive: Pr(+|D)=1.<br />
<br />
(3) But the test has a false positive rate of 5%. That is, if someone does NOT have D, there is a 5% chance the test still comes back positive: Pr(+|~D)=.05.<br />
<br />
Now suppose a patient tests positive. What is the probability that this patient actually has disease D?<br />
Someone commits the base-rate fallacy if they say the probability is fairly high, because they discount or ignore the information about the ‘base rate’ of the disease in the population. Only 1 in 1000 people have the disease. But for every 1000 people who don’t have it, 50 people will test positive. You have to use Bayes’ Theorem to get the exact probability that someone who tests positive has the disease; the probability turns out to be slightly under 2%. <br />
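To make the arithmetic explicit, here is the Bayes'-Theorem computation as a few lines of Python, using exactly the numbers in (1)–(3):

```python
# Posterior probability of disease given a positive test, via Bayes' Theorem.
p_d = 0.001               # (1) base rate: Pr(D)
p_pos_given_d = 1.0       # (2) sensitivity: Pr(+|D)
p_pos_given_not_d = 0.05  # (3) false-positive rate: Pr(+|~D)

# Total probability of testing positive (law of total probability).
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' Theorem: Pr(D|+) = Pr(+|D) * Pr(D) / Pr(+)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(round(p_d_given_pos, 4))  # 0.0196, i.e. slightly under 2%
```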
<br />
In the context of the No-Miracles and Pessimistic Induction arguments, the objection is that both arguments ignore a relevant base rate. For example, the No-Miracles argument says:<br />
<br />
(A) Pr (T is empirically successful | T is approximately true) = 1<br />
<br />
(B) Pr (T is empirically successful | ~ (T is approximately true)) << 1.<br />
<br />
Inequality (B) is supposed to capture the ‘no-miracles intuition’: the probability that a false theory would be empirically successful is so low that it would be a MIRACLE if that theory were empirically successful.<br />
<br />
Hopefully you can see that (A) corresponds to (2) in the original, medical base-rate fallacy example, and (B) corresponds to (3). Empirical success is analogous to a positive test for the truth of a theory, and the no-miracles intuition is that the false-positive rate is very low (so low that a false positive would be a miracle).<br />
<br />
The base-rate objection to the No-Miracles argument is just that the No-Miracles argument ignores the base rate of true theories in the population of theories. In other words, in the NMA, there is no analogue of (1) in the original example. Without that information, even a very low false-positive rate cannot license the conclusion that an arbitrary empirically successful theory is probably true. (And furthermore, that base rate is somewhere between extremely difficult and impossible to obtain: what exactly is the probability that an arbitrary theory in the space of all possible theories is approximately true?)<br />
<hr /><br />
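One way to see the force of the objection is to note that, holding the no-miracles intuition fixed, the posterior probability of approximate truth given empirical success is almost entirely hostage to the base rate. A sketch (the 5% false-positive rate is my arbitrary stand-in for 'miraculously low'):

```python
# Pr(approximately true | empirically successful) as a function of the
# unknown base rate of approximately true theories, treating empirical
# success as a positive 'test' for approximate truth, as in (A) and (B).
def posterior(base_rate, fp_rate=0.05, sensitivity=1.0):
    p_success = sensitivity * base_rate + fp_rate * (1 - base_rate)
    return sensitivity * base_rate / p_success

for base_rate in (0.5, 0.05, 0.001):
    print(base_rate, round(posterior(base_rate), 3))
# With the same tiny false-positive rate, the posterior ranges from about
# 0.95 (base rate 0.5) down to about 0.02 (base rate 0.001).
```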
<br />
OK, that concludes the set-up. Now I can state my concern: I am not sure the objectors’ demand for the base rate of approximately true theories in the space of all possible theories is legitimate. Why? Think about the original medical example again. There, we are simply GIVEN the base rate, namely (1). But how would one acquire that sort of information, if one did not already have it? Well, you would have to run tests on large numbers of people in the population at large, to determine whether or not they had disease D. These tests need not be fast-diagnosing blood or swab tests; they might involve looking for symptoms more ‘directly,’ but they will still be tests. And this test, which we are using to establish the base rate of D in the population, will still presumably have SOME false positives. (I’m guessing that most diagnostic tests are not perfect.) But if there are some false positives, and we don’t yet know the base rate of the disease in the population, then—if we follow the reasoning of the base-rate objectors to the NMA and the PI—any conclusion we draw about the proportion of the population that has the disease is fallacious, for we have neglected the base rate. But on that reasoning, we can never determine the base rate of a disease (unless we have an absolutely perfect diagnostic test), because of an infinite regress. <br />
<br />
In short: if the NMA commits the base-rate fallacy, then any attempt to discover a base rate (when detection tools have false positives) also commits the base-rate fallacy. But presumably, we do sometimes discover base rates (at least approximately) without committing the base-rate fallacy, so by modus tollens, the NMA does not commit the base-rate fallacy.<br />
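For what it's worth, statisticians do have a standard recipe for backing out a base rate from survey results gathered with an imperfect test (the Rogan–Gladen prevalence correction), which fits the suggestion above that base rates are discoverable without a perfect instrument. A sketch, with a made-up survey frequency chosen to match the medical example above:

```python
# Rogan-Gladen correction: if a fraction p_obs of a large random sample
# tests positive, the true prevalence pi solves
#   p_obs = sensitivity * pi + fp_rate * (1 - pi),
# so pi = (p_obs - fp_rate) / (sensitivity - fp_rate).
def estimated_prevalence(p_obs, sensitivity, fp_rate):
    return (p_obs - fp_rate) / (sensitivity - fp_rate)

# Hypothetical survey: 5.095% of a large sample tests positive, using the
# test from the example above (sensitivity 1.0, false-positive rate 0.05).
print(round(estimated_prevalence(0.05095, 1.0, 0.05), 6))  # 0.001
```

The correction relies on knowing the test's sensitivity and false-positive rate, so it does not make the blog's regress worry disappear entirely, but it does show that an imperfect test plus algebra can estimate a base rate without simply ignoring it.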
<br />
Here is another way to put the point: the NMA does not commit the base-rate fallacy, because it does not ignore AVAILABLE evidence about the base rate of true theories in the population of theories. In the medical example above, the base rate (1) is available information; ignoring or under-weighting it is what generates the fallacy. In the scientific realism case, however, the base rate is not available. If we did somehow have the base rate of approximately true theories in the population of all theories (the gods of science revealed it to us, say), then yes, it would be fallacious to ignore or discount that information when drawing conclusions about the approximate truth of a theory from its empirical success, i.e. the NMA would be committing the base-rate fallacy. But unfortunately the gods of science have not revealed that information to us. Not taking into account unavailable information is not a fallacy; in other words, the base-rate fallacy only occurs when one fails to take into account available information.<br />
<br />
I am not certain about the above. I definitely want to talk to some more statistically savvy people about this. Any thoughts?<br />
<br />
<br />
Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-14117162.post-16414819548406812892016-02-15T10:46:00.000-08:002016-02-15T10:46:04.408-08:00huhwait, what?<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgkU2MnSo6biMWwG2Bc19FljE0IIo6eJX434fzQp5c3RV2MGBnBhZFmzMWuYkHWyoXTZASfr_rU8iRo1jjneLkZHlAYZJ6Pjpc6Ql1__vAC7VPOlBUa5Adb52zU3IgFRJiQhBpkg/s1600/ACI+blog+index.tiff" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgkU2MnSo6biMWwG2Bc19FljE0IIo6eJX434fzQp5c3RV2MGBnBhZFmzMWuYkHWyoXTZASfr_rU8iRo1jjneLkZHlAYZJ6Pjpc6Ql1__vAC7VPOlBUa5Adb52zU3IgFRJiQhBpkg/s400/ACI+blog+index.tiff" /></a></div><br />
I'm not sure I can articulate why, but this makes me want to stop blogging...Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-14117162.post-22056923567870946342015-11-22T18:33:00.002-08:002015-11-22T18:35:43.774-08:00HOPOS 2016: Submit an abstractHOPOS 2016 Call for Submissions<br />
June 22-25, 2016, Minneapolis, Minnesota, USA<br />
<a href="http://hopos2016.umn.edu/">http://hopos2016.umn.edu/</a><br />
<br />
Keynote Speakers:<br />
<br />
Karine Chemla (REHSEIS, CNRS, and Université Paris Diderot)<br />
<br />
Thomas Uebel (University of Manchester)<br />
<br />
HOPOS: The International Society for the History of Philosophy of Science will hold its eleventh international congress in Minneapolis, on June 22-25, 2016. The Society hereby requests proposals for papers and for symposia to be presented at the meeting. HOPOS is devoted to promoting research on the history of the philosophy of science. We construe this subject broadly, to include topics in the history of related disciplines, including computing, in all historical periods, studied through diverse methodologies. In order to encourage scholarly exchange across the temporal reach of HOPOS, the program committee especially encourages submissions that take up philosophical themes that cross time periods. If you have inquiries about the conference or about the submission process, please write to Maarten van Dyck: maarten.vandyck [at] ugent.be.<br />
<br />
SUBMISSION DEADLINE: January 4, 2016<br />
<br />
To submit a proposal for a paper or symposium, please visit the conference website: <a href="http://hopos2016.umn.edu/call-submissions">http://hopos2016.umn.edu/call-submissions</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-14117162.post-5000359809244130512015-10-15T14:16:00.002-07:002015-10-15T14:23:36.113-07:00Descartes on Mathematical Truth and Mathematical ExistenceThis is not so much a post as a note to myself for something I would like to think about in the future.<br />
<br />
In the first Meditation, Descartes writes: <blockquote>"arithmetic, geometry, and other such disciplines, which treat of nothing but the simplest and most general things... are <b>indifferent as to whether these things do or do not in fact exist</b>, contain something certain and indubitable."</blockquote>I should look more into this apparent 'truth-independent-of-reference' position, that mathematical truth is independent of the existence of mathematical entities, especially as an alternative to the <a href="http://plato.stanford.edu/entries/mathphil-indis/">Quine-Putnam indispensability argument</a> for the reality of mathematical objects. <br />
<br />
Relevant secondary literature:<br />
- Gregory Brown (in "Vera Entia: The Nature of Mathematical Objects in Descartes" <i>Journal of the History of Philosophy</i>, 1980:23-37) contains a nice discussion of the kind of existence mathematical objects have for Descartes, esp. section III: <blockquote>"mathematical objects in particular, have a "being" that is independent of their actual existence in (physical) space or time, and that is characterized by what Descartes calls 'possible existence'"(p.36).</blockquote><br />
- Brown quotes Anthony Kenny ("The Cartesian Circle and Eternal Truths," <i>Journal of Philosophy</i>, 1970):<br />
<blockquote>"the objects of mathematics are not independent of physical substances; but they do not support the view that the objects of mathematics depend for their essences on physical existents... . Descartes held that a geometrical figure was a mode of physical or corporeal substance; it could not exist, unless there existed a physical substance for it to exist in. But whether it existed or not, it had a kind of being that was sufficient to distinguish it from nothing, and it had its eternal and immutable essence."</blockquote>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-14117162.post-22470572372274332912015-09-05T08:07:00.001-07:002015-09-05T08:08:51.326-07:00Ontological Commitment, "To be is to be the value of a bound variable," and Schematic Letters in QuineI am currently working on a paper on Quine's shifting ontological thoughts. Something occurred to me while reading some of his stuff from the late 1930s and 40s, which probably won't make it into the paper, but that I wanted to try to get clear for myself.<br />
<br />
Most readers of this blog have heard Quine's famous ontological dictum "To be is to be the value of a bound variable." This is a criterion of ontological commitment for a theory: what the theory says exists is whatever the values of its bound variables are. <br />
<br />
Quine includes 'bound', I take it, so that (what he calls) <i>schematic letters</i> do not have existential import. For example, in the expression <i>(x)(P(x) --> P(x))</i>, the <i>P</i> cannot be bound by a quantifier <i>(P)</i> without the language being committed to the existence of properties (or traits, or sets, or whatever you think predicate letters signify). The <i>P</i> is instead a 'dummy letter': the full expression <i>(x)(P(x) --> P(x))</i> is a schema, not a full sentence in first-order logic, but the schema allows us to say that any sentence that results from substituting an actual predicate for <i>P</i> is a theorem.<br />
<br />
Now I can get to what's bothering me. Consider a theory+language, such as primitive recursive arithmetic (PRA), that has (what normally would be called) variables, but does not have any explicitly written-down quantifiers. In such a language, when we see a sentence like <i>x+y = y+x</i>, we can say ‘If we were expressing this in first-order logic, we would understand a pair of universal quantifiers ‘(x)(y)’ out front to make this a sentence,’ but there are actually no quantifier-symbols as part of the language we are considering. So what I’m wondering is: if someone accepts Quine’s line of thought about the difference between (ontologically-committing) variables vs. (ontologically-innocent) schematic letters, then should [/can] that person also say that the x’s and y’s of PRA are schematic letters, not variables? And thus that PRA does not [/need not] commit its users to the existence of the natural numbers -- or to anything else for that matter?<br />
<br />
Here is a first potential problem for the Quinean. Let's call Language 1 (L1) the quantifier-free PRA described just above. And let L2 be the first-order logic translation of L1, i.e. L2 just puts the appropriate universal quantifiers in front of every sentence of L1 which contains variables. Now if to be is to be the value of a <b>bound</b> variable, L1 is not committed to numbers (or something number-like enough to satisfy the axioms of PRA), but L2 is. Yet L1 and L2 constitute a paradigm case of ‘merely notational variants’: the same theory, expressed using different notations. So L1 and L2 should either both be committed to the existence of numbers, or neither should.<br />
<br />
Now, I can imagine a dedicated Quinean at this point could grasp the second option: we can consistently take the view that L2 is somehow not 'really' ontologically committed to numbers, because we can translate L2 back into (bound-variable-free) L1 (by just erasing every universal quantifier in every L2 sentence). The general principle underlying this is something like: a theory is committed to X just in case X is a value of a bound variable in <i>every</i> adequate formalization of that theory. <br />
<br />
This position strikes me as unintuitive. But I think there is a further reason to reject it. For now consider language L3, which is just L2 + the standard definition (x) = ~(∃x)~. We will then clearly have some ontological commitments (albeit negative ones, i.e. commitments that such-and-such does NOT exist). So perhaps the Quinean will say that "To be is to be the value of a bound variable" is only a recipe for finding the <i>positive</i> ontological commitments of a theory. I'm not sure about that move; perhaps it can be made to work.<br />
<br />
So in sum, this makes me wonder whether Quine’s contrast between schematic letters on the one hand, vs. genuine variables on the other, may not be as sharp as he needs it to be. In other words, it is not clear to me that schematic letters can be made ontologically innocent in the way Quine wants them to be.Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-14117162.post-8830590660658842052015-08-11T04:39:00.001-07:002015-10-15T13:59:53.955-07:00A few thoughts on Moti Mizrahi's "The Pessimistic Induction: A Bad Argument Gone Too Far" This post is exactly what the title says. I found this paper especially thought-provoking, so I wanted to try writing down/ nailing down exactly what those provoked thoughts were.<br />
<br />
If you want to read the paper, the free, penultimate version is <a href="https://www.academia.edu/1239941/The_Pessimistic_Induction_A_Bad_Argument_Gone_Too_Far">here</a>, and the published, ridiculously expensive version is <a href="http://link.springer.com/article/10.1007%2Fs11229-012-0138-3">here</a> (<i>Synthese</i>, 2013: 3209-3226).<br />
<br />
Here's the bit from the paper that I want to focus on:<br />
<blockquote>The theories on Laudan's list were not randomly selected, but rather were cherry-picked in order to argue against a thesis of scientific realism. If this is correct, then the pessimistic inductive generalization is a weak inductive argument.<br />
<br />
To this pessimists might object that, if we simply do the required random sampling, then the pessimistic inductive generalization would be vindicated and shown to be a strong inductive generalization. So, to get a random sample of scientific theories (i.e., a sample where theories have an equal chance of being selected for the sample), I used the following methodology:<br />
<br />
- Using Oxford Reference Online, I searched for instances of the word 'theory' in the following titles: A Dictionary of Biology, A Dictionary of Chemistry, A Dictionary of Physics, and The Oxford Companion to the History of Modern Science.<br />
~ I limited myself to these reference sources to make the task more manageable.<br />
~ Since it is not clear how to individuate theories (e.g., is the Modern Evolutionary Synthesis a theory or is each of its theoretical claims, such as the claims about natural selection and genetic drift, a theory in its own right?), I limited myself to instances of the word 'theory.'<br />
<br />
- After collecting 124 instances of 'theory' and assigning a number to each instance, I used a random number generator to select 40 instances out of the 124.<br />
<br />
- I divided the sample of 40 theories into three categories: accepted theories (i.e., theories that are accepted by the scientific community), abandoned theories (i.e., theories that were abandoned by the scientific community), and debated theories (i.e., theories whose status as accepted or rejected is in question) (See Table 1).<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzbxwWfZ7hu0YlDD43R4T-kToluexVzqf0lwXKXxiQP6TplCwYnbkxVjBkC-nbDojCH_NUt22aW30ppdvZFqShAPU1ARTk1HSi7Kid_hRk8z7BIGj8lUSNyaI-Q1OeuxXZH2xUug/s1600/Mizrahi+Table+1.tiff" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzbxwWfZ7hu0YlDD43R4T-kToluexVzqf0lwXKXxiQP6TplCwYnbkxVjBkC-nbDojCH_NUt22aW30ppdvZFqShAPU1ARTk1HSi7Kid_hRk8z7BIGj8lUSNyaI-Q1OeuxXZH2xUug/s400/Mizrahi+Table+1.tiff" /></a></div><br />
...<br />
Based on this sample, pessimists could construct the following inductive generalization:<br />
<br />
15% of sampled theories are abandoned theories (i.e., considered false). Therefore, 15% of all theories are abandoned theories (i.e., considered false).<br />
<br />
Clearly, this inductive generalization hardly justifies the pessimistic claim that most successful theories are false. Even if we consider the debated theories as false, the percentages do not improve much in favor of pessimists:<br />
<br />
27% of sampled theories are abandoned theories (i.e., considered false). Therefore, 27% of all theories are abandoned theories (i.e., considered false).<br />
</blockquote><br />
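For concreteness, the selection step described in the quoted passage amounts to something like this (the seed is mine, purely to make the sketch reproducible; it is not from the paper):

```python
import random

# Mizrahi's procedure: number the 124 collected instances of 'theory',
# then draw 40 of them at random (each with an equal chance of selection).
random.seed(0)  # arbitrary fixed seed, so the sketch is reproducible
sample = random.sample(range(1, 125), 40)
print(len(sample), len(set(sample)))  # 40 distinct instances
```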
The first thing I wanted to say is that I really like Mizrahi's basic idea here. Philosophers (myself included) sometimes throw up our hands too soon and say that some question is intractable, so I really appreciate that Mizrahi did the work of collecting some data that could place constraints on answers to certain versions of the pessimistic induction.<br />
<br />
Here are four thoughts I had about the particulars of Mizrahi's method.<br />
<br />
1) 3 of the 4 textual sources are (apparently) supposed to be <b>present-day</b> reference works for contemporary science, and I assume discarded/ superseded theories are much less likely to appear in such a reference work than in <b>history</b> of science reference works (which the last of the 4 is -- so I am curious whether the percentages would change significantly if we just looked at that 4th one, and/or other works that purport to cover the history of science, up to the present day). <br />
<br />
2) Anti-realists have said this before, but I think it's relevant here too. The more recent a theory, the less likely it is that there is evidence against it: the theory was framed to capture the data available at the time, and so the more recent a theory is, the less time there has been to accumulate/discover anomalous data.<br />
<br />
3) The scientific realism debate is often/usually supposed to be restricted to ‘fundamental’ theories -- whatever those are. I don’t know how many of the theories in Tables 1 and 2 would qualify as fundamental. I have attached the table, so you can see for yourself; I'm pretty sure a good portion of them are fundamental, but I also think some of them are not. I don't know several of these theories (RRKM theory, anyone?), but again I wonder how that would affect the percentages.<br />
<br />
4) I don't have very strong leanings/ intuitions pro or contra scientific realism (I currently think of myself as an agnostic/ quietist, looking for slightly more well-posed questions in the neighborhood). But something that happens either 15% or 27% of the time does not feel like a miracle (as in 'No-Miracles Argument') to me. Of course, more moderate realists may well say that all they claim is that Pr(Theory is true | Theory is successful) > 0.5. But I have heard a few realistically-inclined people recently talk about 'the no-miracles intuition' or something similar -- but presumably a miracle does not need to be invoked if I predicted your dice roll would come up '4', and then you rolled a 4.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-14117162.post-68034407327329719202015-05-01T19:01:00.002-07:002015-05-01T19:01:22.359-07:00"outgroup homogeneity" and 'continental philosophy'One phenomenon that social psychologists have found pretty consistently is called 'outgroup homogeneity.' The idea, as I understand it, is that people judge outgroup members (i.e. people who are not in a group they identify with) as being more homogeneous in the stereotypical traits attributed to the outgroup than they judge ingroup members on those same traits.<br />
<br />
What gets lumped under the heading 'continental philosophy' today is a very diverse range of traditions and thinkers: phenomenology, structuralism, post-structuralism, deconstruction, existentialism, Nietzsche, Kierkegaard, and so on. Many of these are so different and even opposed to one another that it doesn't really make all that much sense to lump them together under one heading. 'Continental philosophy' is a phrase <i>analytic</i> philosophers devised (Glendinning 2006). So what I'm wondering now is whether the creation of that phrase/ category was facilitated by the outgroup homogeneity effect -- since without it, it would have been harder to amass together, under a single heading, all the disparate traditions.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-14117162.post-16809770017239328722015-04-07T12:51:00.000-07:002015-04-07T12:52:29.539-07:00Das beste Blog der WeltI'm sure most folks who check this blog already know about this, but just in case you missed it: André Carus has recently started writing some really interesting posts on his (aptly titled) <a href="http://awcarus.com/">Carnap Blog</a>. It is required reading for anyone interested in Carnapia.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUeLDrjwiJ31zQlGQv0XwXD8psVGNYYPzwlx8nNHXkY1VtcyLo6gHKWVq75YRj6BGtYOM18NHyVL0pML_P6gB_E1zgkzvyjBBiW4tIVkcqLPgCU-zFuLiugqk9GYbpza1x532uUw/s1600/Carnap.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUeLDrjwiJ31zQlGQv0XwXD8psVGNYYPzwlx8nNHXkY1VtcyLo6gHKWVq75YRj6BGtYOM18NHyVL0pML_P6gB_E1zgkzvyjBBiW4tIVkcqLPgCU-zFuLiugqk9GYbpza1x532uUw/s320/Carnap.jpg" /></a></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-14117162.post-65693911596448710802015-04-05T17:11:00.002-07:002015-04-05T17:53:53.269-07:00Thoughts from the Pacific APA meeting<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCR3FnT0PynLqb7ElORV7AKpdelAuOy-UlyJ6VauMO-3-732TJVCvfK21f0Iw3K1NXVNnP5bCfl-3NMCKq3w23uTV8ZKQFw-zI_xCvrjusBMLkmLNmo7FU3PrtmiMM1oNM_mKcZg/s1600/vancouver+hotel.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCR3FnT0PynLqb7ElORV7AKpdelAuOy-UlyJ6VauMO-3-732TJVCvfK21f0Iw3K1NXVNnP5bCfl-3NMCKq3w23uTV8ZKQFw-zI_xCvrjusBMLkmLNmo7FU3PrtmiMM1oNM_mKcZg/s400/vancouver+hotel.JPG" /></a></div><br />
I just got back from the Pacific APA meeting. There were a lot of highlights for me: the session on Eugenics and Philosophy was really excellent (I especially got a lot out of <a href="http://www.artsrn.ualberta.ca/raw/">Rob Wilson</a>'s opening remarks about his work on sterilized people in his province, as well as <a href="http://philosophy.utk.edu/staff/cureton.html">Adam Cureton</a>'s paper on disability and parenting); Nancy Cartwright's Dewey lecture was really interesting; and I was happy to see History of Analytic very well represented in several spots on the program. That included the author-meets-critics session on my Carnap, Tarski, Quine book -- I was very fortunate to have great commentators: Rick Creath, Gary Ebbs, and Greg Lavers. I'm very thankful to Richard Zach for organizing the session too, and to Sean Morris for stepping in to chair at the last minute. And the conversation with the audience was helpful to me as well. Happily, even if you weren't at the session, you'll still be able to see what they said: their insightful comments will eventually appear in a symposium in <i>Metascience</i>.<br />
<br />
One thing that I noticed was that there were not a lot of talks on philosophy of science proper. (Though happily there were some, e.g. an author-meets-critics on Jim Tabery's <a href="http://www.amazon.com/Beyond-Versus-Understand-Interaction-Philosophical/dp/0262027372/ref=asap_bc?ie=UTF8">Beyond Versus: The Struggle to Understand the Interaction of Nature and Nurture</a>.) Interestingly, there were a decent number of philosophers of science there, but often they were presenting something that was not philosophy of science (like me), or speaking on something philosophy-of-science adjacent (e.g. a philosopher of biology speaking on bioethics).<br />
<br />
I was wondering whether anyone had hypotheses about this -- one hypothesis is that because the PSA exists and is pretty big, the PSA 'cannibalizes' the presentations from the APA. Another tack would be that my perception of the percentage of the profession that identify as philosophers of science is not accurate, and the APA program accurately reflected the true percentage. But I am very curious to hear other explanations.<br />
<br />
(And the baked-goods highlight of the trip was the coffee bun at <a href="http://papparoti.ca/">Papparoti</a> -- it was the most interesting pastry I've had in a while.)Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-14117162.post-14479512236366830972014-10-21T07:33:00.001-07:002014-10-21T07:33:17.881-07:00Historiographical reflectionsI know this "scumbag analytic philosopher" meme is played out, but this one just came to me:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpBn7E_e8Czf9sfMUrn6mmT2Y0dfJxe93ZDmJPpxx4OhEzE4z8OFaq6QhIH3aFRRem3nVLOblP53KwJ_tA65B-aKfJpPombCRXQi9ev1nEuDeV3zdpomD-xxzU7v7yuG9eqMH7bw/s1600/internal+histories.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpBn7E_e8Czf9sfMUrn6mmT2Y0dfJxe93ZDmJPpxx4OhEzE4z8OFaq6QhIH3aFRRem3nVLOblP53KwJ_tA65B-aKfJpPombCRXQi9ev1nEuDeV3zdpomD-xxzU7v7yuG9eqMH7bw/s400/internal+histories.jpg" /></a></div><br />
Commence groaning...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-14117162.post-20913043943048792012014-10-04T08:12:00.002-07:002014-10-04T08:12:43.741-07:00(Why) must ethical theories be consistent with intuitions about possible cases?Since Brian <a href="http://brianweatherson.tumblr.com/post/98729042444/i-was-wrong">Weatherson recently classified this blog as 'active,'</a> I thought I should try to live up to that billing.<br />
<br />
A question came up in one of my classes yesterday. The student asked (in effect): Why wouldn't a moral theory that makes all the right 'predictions' about <b>actual</b> cases be good enough? Why demand that a moral theory must also be consistent with our intuitions about possible cases, even science-fiction-ish ones, as well?<br />
<br />
(The immediate context was a discussion of common objections to utilitarianism, specifically, slavery and the <a href="http://en.wikipedia.org/wiki/Utility_monster">utility monster</a>. The student said, sensibly I think, that the utilitarian could reply that all actual cases of slavery are bad on utilitarian grounds, and there are no utility monsters.)<br />
<br />
I know that some philosophers have argued that if a moral claim is true, then it is (metaphysically?) necessarily true: there is no possible world in which e.g. kicking kittens merely for fun is morally permissible. If you accept that all moral claims are like this, then I can see why you would demand our moral theories be consistent with our intuitions about all possible cases. But if one does not accept that all moral truths are metaphysically necessary, is there any other reason to demand the theory make the right prediction about merely possible cases? <br />
<br />
This question seems especially pressing to me, if we think one of the main uses of moral theories is as a guide to action, since we only ever act in the actual world. However, now that I say that explicitly, I realize that whenever we make a decision, the option(s) we decided against <b>are</b> merely possible situations. So maybe that could explain why an ethical theory needs to cover merely possible cases? (Though even there, it need not cover all metaphysically possible cases -- e.g. the utility monster worries never need to be part of my actual decision-making process.) Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-14117162.post-57458866535768012482014-09-15T13:24:00.002-07:002014-09-15T13:24:53.681-07:00Are video games counter-examples to Suits' definition of 'game'?Many readers are familiar with Bernard Suits' definition of 'game' in <i>The Grasshopper</i>. For those of you who aren't, Suits offers this definition of playing a game: "engaging in an activity directed towards bringing about a specific state of affairs, using only means permitted by rules, where the rules prohibit more efficient in favour of less efficient means, and where such rules are accepted just because they make possible such activity" (pp. 48-9).<br />
<br />
I'm wondering whether video games are counter-examples, because of the condition "the rules prohibit more efficient in favour of less efficient means." This makes sense for most games: in golf, it would be more efficient to just carry the ball by hand and put it into the hole; in poker, it would be more efficient to just reach across the table and take all your opponents' chips/cash. But what is the analogue of these 'more efficient' ways in a video game? <br />
<br />
One might point to <a href="http://en.wikipedia.org/wiki/Konami_Code">cheat codes</a>, but even if a cheat code does satisfy this condition of Suits' definition, we can at least imagine a video game that doesn't have cheat codes.Unknownnoreply@blogger.com24