Wednesday, March 16, 2016

ABM fundamentalism

image: Chernobyl control room

Quite a few recent posts have examined the power and flexibility of agent-based models (ABMs) as platforms for simulating a wide range of social phenomena. Joshua Epstein is one of the high-profile contributors to this field, and he is famous for making a particularly strong claim on behalf of ABM methods. He argues that “generative” explanations are the uniquely best form of social explanation. A generative explanation is one that demonstrates how an upper-level structure or causal power comes about as a consequence of the operations of the units that make it up. As an aphorism, here is Epstein's slogan: "If you didn't grow it, you didn't explain it."

Here is how he puts the point in a Brookings working paper, “Remarks on the foundations of agent-based generative social science” (link; also chapter 1 of Generative Social Science: Studies in Agent-Based Computational Modeling):

"To the generativist, explaining macroscopic social regularities, such as norms, spatial patterns, contagion dynamics, or institutions requires that one answer the following question:
"How could the autonomous local interactions of heterogeneous boundedly rational agents generate the given regularity?
"Accordingly, to explain macroscopic social patterns, we generate—or “grow”—them in agent models." (1)

And Epstein is quite explicit in saying that this formulation represents a necessary condition on all putative social explanations: "In summary, generative sufficiency is a necessary, but not sufficient condition for explanation." (5).
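To make the generativist idea concrete, here is a minimal sketch of the best-known generative model, Schelling's residential segregation model (my own toy implementation, not Epstein's code): agents with only a mild preference for similar neighbors "grow" a macro-pattern of pronounced segregation through purely local interactions.

```python
import random

def neighbors(grid, r, c, n):
    """Occupants of the eight cells around (r, c) on a wrap-around grid."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            occupant = grid[(r + dr) % n][(c + dc) % n]
            if occupant is not None:
                out.append(occupant)
    return out

def unhappy(grid, r, c, n, threshold):
    """An agent is unhappy if fewer than `threshold` of its neighbors match it."""
    nb = neighbors(grid, r, c, n)
    if not nb:
        return False
    like = sum(1 for x in nb if x == grid[r][c])
    return like / len(nb) < threshold

def step(grid, n, threshold, rng):
    """Move every unhappy agent to a random empty cell; return number moved."""
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
    movers = [(r, c) for r in range(n) for c in range(n)
              if grid[r][c] is not None and unhappy(grid, r, c, n, threshold)]
    moved = 0
    for (r, c) in movers:
        if not empties:
            break
        i = rng.randrange(len(empties))
        er, ec = empties[i]
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties[i] = (r, c)          # the vacated cell becomes empty
        moved += 1
    return moved

def mean_like_fraction(grid, n):
    """Average fraction of like-type neighbors -- a crude segregation index."""
    fracs = []
    for r in range(n):
        for c in range(n):
            if grid[r][c] is None:
                continue
            nb = neighbors(grid, r, c, n)
            if nb:
                fracs.append(sum(1 for x in nb if x == grid[r][c]) / len(nb))
    return sum(fracs) / len(fracs)

def run(n=20, threshold=0.4, density=0.9, steps=60, seed=1):
    """Grow the macro-pattern: returns the segregation index before and after."""
    rng = random.Random(seed)
    cells = [rng.choice("AB") if rng.random() < density else None
             for _ in range(n * n)]
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    before = mean_like_fraction(grid, n)
    for _ in range(steps):
        if step(grid, n, threshold, rng) == 0:
            break                    # equilibrium: everyone is content
    return before, mean_like_fraction(grid, n)
```

With the default parameters the like-neighbor fraction starts near 0.5 (a random mix) and climbs well above the 0.4 preference threshold, which is exactly Epstein's point: the macro regularity (segregation) is "grown" from local rules, and the regularity is stronger than any individual agent demands.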

There is an apparent logic to this view of explanation. However, several earlier posts cast doubt on the conclusion. First, we have seen that all ABMs necessarily make strong simplifying assumptions about the behavioral features of the actors, and they have a difficult time incorporating "structural" factors like organizations. We found that ABM simulations of ethnic and civil conflict (including Epstein's own model) are radically over-simplified representations of the field of civil conflict (link). So it is problematic to assume the general applicability and superiority of ABM approaches for all issues of social explanation.

Second, we have also emphasized the importance of distinguishing between "generativeness" and "reducibility" (link). The former is a claim about ontology -- the notion that the features of the lower level suffice to determine the features of the upper level through pathways we may not understand at all. The latter is a claim about inter-theoretic deductive relationships -- relationships between our formalized beliefs about the lower level and the feasibility of deriving the features of the upper level from these beliefs. But I argued in the earlier post that the fact that A is generated by B does not imply that A is reducible to B. 

So there seem to be two distinct ways in which J. Epstein is over-reaching here: first, he assumes that agent-based models can be sufficiently detailed to reproduce complex social phenomena like civil unrest; and second, he assumes without justification that only reductive explanations are scientifically acceptable.

Consider an example of an explanation of collective behavior that has explanatory weight, that is not generative, and that probably could not be fully reproduced as an ABM. A relevant example is Charles Perrow's analysis of technology failure as a consequence of organizational properties (Normal Accidents: Living with High-Risk Technologies). An earlier post considered these kinds of examples in more detail (link). Here is my summary of organizational approaches to the explanation of the incidence of accidents and system safety:
However, most safety experts agree that the social and organizational characteristics of the dangerous activity are the most common causes of bad safety performance. Poor supervision and inspection of maintenance operations leads to mechanical failures, potentially harming workers or the public. A workplace culture that discourages disclosure of unsafe conditions makes the likelihood of accidental harm much greater. A communications system that permits ambiguous or unclear messages to occur can lead to air crashes and wrong-site surgeries. (link)
I would say that this organizational approach is a legitimate schema for social explanation of an important effect (the occurrence of large technology failures). Further, it is not a generativist explanation; it does not begin from a simplified model of a particular kind of failure and demonstrate through iterated runs that failures occur X% of the time. Rather, it rests on a different kind of scientific reasoning: causal analysis grounded in careful examination and comparison of cases. Process tracing (starting with a failure and working backwards to find the key causal branches that led to it) and small-N comparison of cases allow the researcher to arrive at confident judgments about the causes of technology failure. And this kind of analysis can refute competing hypotheses: "operator error generally causes technology failure", "poor technology design generally causes technology failure", or even "technological over-confidence causes technology failure". All these hypotheses have defenders; so it is a substantive empirical claim that certain features of organizational deficiency (training, supervision, communications processes) are the most common causes of technological accidents.

Other examples from sociology could be provided as well: Michael Mann's explanation of the causes of European fascism (Fascists), George Steinmetz's explanation of variations in the characteristics of German colonial rule (The Devil's Handwriting: Precoloniality and the German Colonial State in Qingdao, Samoa, and Southwest Africa), or Kathleen Thelen's explanation of the persistence and change in training regimes in capitalist economies (How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan). Each is explanatory, each identifies causal factors that are genuinely explanatory of the phenomena in question, and none is generativist in Epstein's sense. These are examples drawn from historical sociology and institutional sociology; but examples from other parts of the disciplines of sociology are available as well.

I certainly believe that ABMs sometimes provide convincing and scientifically valuable explanations. The fundamentalism that I'm taking issue with here is the idea that all convincing and scientifically valuable social explanations must take this form -- a much stronger view and one that is not well supported by the practice of a range of social science research programs.

Or in other words, the over-reach of the ABM camp comes down to two claims: the exclusivity and the general adequacy of the simulation-based approach to explanation. ABM fundamentalists claim that only simulations from units to wholes will be satisfactory (exclusivity), and that for any problem an ABM simulation can be designed that is adequate to ground an explanation (general adequacy). Neither proposition can be embraced as a general or universal claim. Instead, we need to recognize the plurality of legitimate forms of causal reasoning in the social sciences, and we need to recognize, along with their strengths, some of the common weaknesses of the ABM approach for some kinds of problems.

Tuesday, March 15, 2016

What is anchor individualism?


Brian Epstein has attempted to shake up some of our fundamental assumptions about the social world in the past several years by challenging the idea of "ontological individualism" -- the idea that social things consist of facts about individuals in action, thought, and interaction, and nothing else. Here is how he puts the idea in "Ontological Individualism Reconsidered": "Ontological individualism is the thesis that facts about individuals exhaustively determine social facts” (link). He believes this ontological concept is false; he disputes the idea that the social world supervenes upon facts about individuals; and he argues that there are some social facts or circumstances that cannot be parsed in terms of facts about combinations of individuals. His arguments are pulled together in a very coherent way in The Ant Trap: Rebuilding the Foundations of the Social Sciences, but he has made the case in earlier articles as well (link).

Epstein's primary reason for doubting ontological individualism is a notion he shares with John Searle: that social action often involves a setting of law, convention, interpretation, presupposition, implicature, or rule that cannot be "reduced" to facts interior to the individuals involved in an activity. Searle's concept of a "status fact" is an example (link): the fact that John is an Imam is not a purely individual-level fact about John. Instead, it presupposes a structure of religious institutions, rules, procedures, and beliefs, in light of which John's history of interactions with other individuals and settings qualifies him as "Imam".

There is another kind of individualism that Epstein considers as a more adequate version -- what he refers to as "anchor individualism." The diagram below represents his graphical explanation of the relationship between anchor individualism and ontological individualism. What does he mean by this idea?


Here is one of his efforts to explain the point:
What I will call "anchor individualism" is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (101)
Frames, evidently, are institutional contexts, or contexts of meaning, in terms of which individual actions are situated. They constitute the difference between a bare set of behaviors and a full-blooded social action. Alfred lifts his right hand to his cap; this is a bodily motion. Alfred salutes his superior officer; this is an institutionally defined action that depends upon a frame of military authority and obligation, in the context of which the behavior constitutes a certain kind of social action. (This sounds rather similar, incidentally, to Ryle and Geertz on the "wink" and the distinction between thin and thick description; Geertz, "Thick Description" in The Interpretation of Cultures.) A frame principle is a stipulation of how an action, performance, or symbolic artifact is constituted, what makes it the socially meaningful thing that it is -- a hundred dollar bill, a first-degree murder, or an Orthodox rabbi. Plainly a frame principle looks a lot like a rule or a constitutive declaration: "any person who received the degree of Bachelor of Science in Accounting, completed 150 credit hours of study, and passed the CPA exam is counted as a 'certified public accountant'."
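The "counts as" structure of a frame principle can be mimicked as a predicate that stipulates a status rather than describing behavior. This is purely illustrative; the class and conditions below are my own stand-ins, not Epstein's formalism:

```python
from dataclasses import dataclass

@dataclass
class Person:
    degree: str
    credit_hours: int
    passed_cpa_exam: bool

def counts_as_cpa(p: Person) -> bool:
    """Frame principle: the conditions under which a person *counts as* a CPA.

    Note that the rule stipulates a status; it does not describe anyone's
    behavior. And crucially, merely writing the rule down does not make it
    socially binding -- that is what anchoring (legislation, collective
    acceptance) is supposed to supply.
    """
    return (p.degree == "BS Accounting"
            and p.credit_hours >= 150
            and p.passed_cpa_exam)
```

The predicate captures only the constitutive content of the frame principle; its social validity, on Epstein's account, lies entirely outside the rule itself.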

But a mere stipulation of status is not sufficient. If one person individually decides that a university president shall henceforth be understood to have the authority to perform marriage ceremonies, this private declaration does not change the status definition of "university president." Rather, the stipulation must itself have some sort of social validity. It must be "anchored". We can say specifically what would be required to anchor the status definition of university president considered here: it would require a valid act of legislation creating this power, and there would need to be widespread recognition of the political legitimacy and bindingness of the new legislation.

Epstein observes that Searle believes that anchoring of a frame principle always comes down to "collective acceptance" (103). But Epstein notes that other theorists have a broader conception of anchoring: attitudes, conforming behaviors, conventions, shared values about political legitimacy, acts of legislatures, and so on. What anchor individualism asserts is that each of these forms of anchoring can be related to the attitudes, beliefs, and performances of individuals and groups of individuals.

So on Epstein's view, there are two complementary versions of individualism. Ontological individualism is a thesis about what is required for grounding a social fact; it maintains that social facts are grounded in the behaviors and thoughts of individuals. But Epstein thinks there is still something else to represent in our picture of social ontology. We need to be able to specify what circumstances anchor the frame principles themselves -- that is, the circumstances that make an action or performance the kind of action that it is. To call a performance a "marriage" brings with it a long set of presuppositions about history, status, and validity. These presuppositions constitute a certain kind of frame principle. But we can then ask the question, what makes the frame principle binding in the circumstances? This is where anchoring comes in; anchoring is the set of facts that create or document the "bindingness" of the frame principles in question.

On my reading, what makes this view distinctive from traditional thinking about the relationship between individuals and social facts is the effort it represents to formalize the logical standing of circumstances that are intuitively crucial in social interactions: the significance, rule-abidingness, legitimacy, and conventionality of a given individual-level behavior. And these circumstances are necessarily distributed across a large group of people, involving the kinds of socially reflexive ideas that Searle thinks are constitutive of the social world: presuppositions, implicatures, rules, rituals, conventions, meanings, and practices. There is no private language, and there is no private practice. (There are things we do purely individually and privately; but these do not constitute "practices" in the socially meaningful sense.) So the kinds of things that an anchor analysis calls out are social things.

But it also seems fair to observe that the facts that anchor a practice, convention, or rule are indeed facts that depend upon states of mind and action of individual actors. So anchor individualism remains a coherent kind of individualism. These anchoring facts have microfoundations in the thoughts, behavior, habits, and practices of socially situated individuals.

Saturday, March 12, 2016

Wendt's strong claims about quantum consciousness


Alex Wendt takes a provocative step in Quantum Mind and Social Science: Unifying Physical and Social Ontology by proposing that quantum mechanics plays a role in all levels of the human and social world (as well as all life). And he doesn't mean in the trivial sense that all of nature is constituted by quantum-mechanical micro-realities (or unrealities). Instead, he means that we need to treat human beings and social structures as quantum-mechanical wave functions. He wants to see whether some of the peculiarities of social (and individual) phenomena might be explained on the hypothesis that mental phenomena are deeply and actively quantum phenomena. This is a very large pill to swallow, since much considered judgment across the sciences concurs that macroscopic things — billiard balls, viruses, neurons — are on a physical and temporal scale where quantum effects have undergone “decoherence” and behave as strictly classical entities.

Wendt’s work rests upon a small but serious body of scholarship in physics, the neurosciences, and philosophy on the topics of “quantum consciousness” and “quantum biology”. An earlier post described some tangible but non-controversial progress that has been made on the biology side, where physicists and chemists have explored a possible pathway accounting for birds’ ability to sense the earth’s magnetic field directly through a chemical process that depends upon entangled electrons.

Here I’d like to probe Alex’s argument a bit more deeply by taking an inventory of the strong claims that he considers in the book. (He doesn’t endorse all these claims, but regards them as potentially true and worth investigating.)
  1. Walking wave functions: "I argue that human beings and therefore social life exhibit quantum coherence – in effect, that we are walking wave functions. I intend the argument not as an analogy or metaphor, but as a realist claim about what people really are." (3) ... "My claim is that life is a macroscopic instantiation of quantum coherence." (137) ... "Quantum consciousness theory suggests that human beings are literally walking wave functions." (154)
  2. "The central claim of this book is that all intentional phenomena are quantum mechanical." (149) ... "The basic directive of a quantum social science, its positive heuristic if you will, is to re-think human behavior through the lens of quantum theory." (32)
  3. "I argued that a very different picture emerges if we imagine ourselves under a quantum constraint with a panpsychist ontology. Quantum Man is physical but not wholly material, conscious, in superposed rather than well-defined states, subject to and also a source of non-local causation, free, purposeful, and very much alive." (207)
  4. "Quantum consciousness theory builds on these intuitions by combining two propositions: (1) the physical claim of quantum brain theory that the brain is capable of sustaining coherent quantum states (Chapter 5), and (2) the metaphysical claim of panpsychism that consciousness inheres in the very structure of matter (Chapter 6)." (92)
  5. Quantum decision theory: "[There is] growing experimental evidence that long-standing anomalies of human behavior can be predicted by “quantum decision theory.”" (4)
  6. Panpsychism: "Quantum theory actually implies a panpsychist ontology: that consciousness goes “all the way down” to the sub-atomic level. Exploiting this possibility, quantum consciousness theorists have identified mechanisms in the brain that might allow this sub-atomic proto-consciousness to be amplified to the macroscopic level." (5)
  7. Consciousness: "The hard problem, in contrast, is explaining consciousness." (15) ... "As long as the brain is assumed to be a classical system, there is no reason to think even future neuroscience will give us “the slightest idea how anything material could be conscious.”" (17) ... "Hence the central question(s) of this book: (a) how might a quantum theoretic approach explain consciousness and by extension intentional phenomena, and thereby unify physical and social ontology, and (b) what are some implications of the result for contemporary debates in social theory?" (29)
  8. The quantum brain: "Quantum brain theory hypothesizes that the brain is able to sustain quantum coherence – a wave function – at the macro, whole-organism level." (30) ... "Quantum brain theory challenges this assumption by proposing that the mind is actually a quantum computer. Classical computers are based on binary digits or “bits” with well-defined values (0 or 1), which are transformed in serial operations by a program into an output. Quantum computers in contrast are based on “qubits” that can be in superpositions of 0 and 1 at the same time and also interact non-locally, enabling every qubit to be operated on simultaneously." (95)
  9. Weak and strong quantum minds: "In parsing quantum brain theory an initial distinction should be made between two different arguments that are often discussed under this heading. What might be called the “weak” argument hypothesizes that the firing of individual neurons is affected by quantum processes, but it does not posit quantum effects at the level of the whole brain." (97)
  10. Vitalism: "Principally, because my argument is vitalist, though the issue is complicated by the variety of forms vitalism has taken historically, some of which overlap with other doctrines." (144)
  11. Will and decision: "In Chapter 6, I equated this power with an aspect of wave function collapse, viewed as a process of temporal symmetry-breaking, in which advanced action moves through Will and retarded action through Experience." (174) ... "Will controls the direction of the body's movement over time by harnessing temporal non-locality, potentially over long “distances.” As advanced action, Will projects itself into what will become the future and creates a destiny state there that, through the enforcement of correlations with what will become the past, steers us purposefully toward that end." (182)
  12. Entangled people: "It is the burden of my argument to show that despite its strong intuitive appeal, the separability assumption does not hold in social life. The burden only extends so far, since I am not going to defend the opposite assumption, that human beings are completely inseparable. This is not true even at the sub-atomic level, where entangled particles retain some individuality. Rather, what characterizes people entangled in social structures is that they are not fully separable." (208-209)
  13. Quantum semantics: "This suggests that the “ground state” of a concept may be represented as a superposition of potential meanings, with each of the latter a distinct “vector” within its wave function." (216)
  14. Social structure: "If the physical basis of the mind and language is quantum mechanical, then, given this definition, that is true of social structures as well. Which is to say, what social structures actually are, physically, are superpositions of shared mental states – social wave functions." (258) ... "A quantum social ontology suggests – as structuration theorists and critical realists alike have long argued – that agents and social structures are “mutually constitutive.” I should emphasize that this does not mean “reciprocal causation” or “co-determination,” with which “mutual constitution” is often conflated in social theory. As quantum entanglement, the relationship of agents and social structures is not a process of causal interaction over time, but a non-local, synchronic state from which both are emergent." (260) ... "First, a social wave function constitutes a different probability distribution for agents’ actions than would exist in its absence. Being entangled in a social structure makes certain practices more likely than others, which I take to involve formal causation." (264-265)
  15. The state and other structures: "The answer is that the state is a kind of hologram. This hologram is different from those created artificially by scientists in the lab, and also from the holographic projection that I argued in Chapter 11 enables us to see ordinary material objects, since in these cases there is something there visible to the naked eye." (271) ... Collective consciousness: "A quantum interpretation of extended consciousness takes us part way toward collective consciousness, but only part, because even extended consciousness is still centered in individual brains and thus solipsistic. A plausible second step therefore would be to invoke the concept of ‘We-feeling,’ which seems to get at something like ‘collective consciousness,’ and is not only widely used by philosophers of collective intentionality, but has been studied empirically by social psychologists as well." (277)
In my view the key premise here is the quantum interpretation of the brain and consciousness that Alex advocates. He wants us to consider that the operations of the brain -- the input-output relations and the intervening mechanisms -- are not "classical" but rather quantum-mechanical. And this is a very, very strong claim. It is vastly stronger than the idea that neurons may be affected by quantum-level events (considered in an earlier post and subject to active research by people interested in how microtubules work within neurons). But Alex would not be satisfied with the idea that "neurons are quantum machines" (point 9 above); he wants to make the vastly stronger argument that "brains are quantum computers". And even stronger than that -- he wants to claim that the brain itself is a wave function, which implies that we cannot understand its working by understanding the workings of its (quantum) components. (I don't think that computer engineers who are designing real quantum computers believe that the device itself is a wave function; only that the components (qubits) behave according to quantum mathematics.) Here is his brain-holism:
Quantum brain theory hypothesizes that quantum processes at the elementary level are amplified and kept in superposition at the level of the organism, and then, through downward causation constrain what is going on deep within the brain. (95)
So the brain as a whole is in superposition, and only resolves with perception or will as a whole in an event of the collapse of its wave function. (He sometimes refers to "a decoherence-free sub-space of the brain within which quantum computational processes are performed" (95), which implies that the brain as a whole is perhaps a classical thing encompassing "quantum sub-regions".) But whether it is the whole brain (implied by "walking wave function") or a relatively voluminous sub-region, the conjurer's move occurs here: extending known though kinky properties of very special isolated systems of micro-entities (a handful of electrons, photons, or atoms) to a description of macro-sized entities maintaining those same kinky properties.

So the "brain as wave function" theory is very implausible given current knowledge. But if this view of the brain and thought cannot be made more credible than it currently is -- both empirically and theoretically -- then Wendt's whole system falls apart: entangled individuals involved in structures and meanings, life as a quantum-vital state, and panpsychism all have no inherent credibility by themselves.

There are many eye-widening claims here -- and yet Alex is clear enough and well-versed enough in relevant areas of research in neuroscience and philosophy of mind to give his case some credibility. He lays out his case with calm good humor and rational care. Alex relies heavily on the fact that there are difficult unresolved problems in the philosophy of mind and the philosophy of physics (the nature of consciousness, freedom of the will, the interpretation of the quantum wave function). This gives impetus to his call for a fresh way of approaching the whole field -- as suggested by historians of science like Kuhn and Lakatos. However, failing to reach an answer to the question, "How is freedom of the will possible?", does not warrant us to jump to highly questionable assumptions about neurophysiology.

But really -- in the end this just is not a plausible theory in my mind. I'm not ready to accept the ideas of quantum brains, quantum meanings, or quantum societies. The idea of entanglement has a specific meaning when it comes to electrons and photons; but metaphorical extension of the idea to pairs or groups of individuals seems like a stretch. I'm not persuaded that we are "walking wave functions" or that entanglement accounts for the workings of social institutions. The ideas of structures and meanings as entangled wave functions of individuals strike me as entirely speculative, depending on granting the possibility that the brain itself is a single extended wave function. And this is a lot to grant.

(Here is a brief description of the engineering goals of developing a quantum computer (link):
Quantum computing differs fundamentally from classical computing, in that it is based on the generation and processing of qubits. Unlike classical bits, which can have a state of either 1 or 0, qubits allow a superposition of the 1 and 0 states (both simultaneously). Strikingly, multiple qubits can be linked in so-called 'entangled' states, in which the manipulation of a single qubit changes the entire system, even if individual qubits are physically distant. This property is the basis for quantum information processing, with the goal of building superfast quantum computers and transferring information in a completely secure way.
See the referenced research article in Science for a current advance in optical quantum computing; link.)
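The bit/qubit contrast described in the passage above can be written out in a few lines of linear algebra. This is a toy state-vector sketch of the mathematics, not how a physical quantum computer is programmed:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a unit vector of complex
# amplitudes over the basis states |0> and |1>.
zero = np.array([1, 0], dtype=complex)                  # the state |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)     # equal superposition

# Born rule: measurement yields |0> or |1> with probability |amplitude|^2.
probs = np.abs(plus) ** 2                               # 50/50 for this state

# A gate (here the Hadamard) acts on the whole state vector at once --
# this is the sense in which a superposition is "operated on simultaneously".
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(H @ zero, plus)    # H puts |0> into equal superposition
assert np.allclose(H @ plus, zero)    # and maps that superposition back to |0>
```

The second assertion is the interference effect that classical probability mixtures cannot reproduce: the two amplitude paths to |1> cancel exactly.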

(The image above is from a research report from a team which has succeeded in creating entanglement of a record number of atoms -- 3,000. Compare that to the hundreds of billions of neurons in the brain, and once again the implausibility of the "walking wave function" idea becomes overwhelming. And note the extreme conditions of low temperature required to create this entangled group; the atoms were cooled to 10-millionths of a degree Kelvin, trapped between two mirrors, and subjected to exposure by a single photon (link). And yet presumably decoherence occurs if the temperature rises substantially.)

Here is an interesting lecture on quantum computing by Microsoft scientist Krysta Svore, presented at the Institute for Quantum Computing at the University of Waterloo.


Quantum biology?



I have discussed several times an emerging literature on "quantum consciousness", focusing on Alex Wendt's provocative book Quantum Mind and Social Science: Unifying Physical and Social Ontology. Is it possible in theory for cognitive processes, or neuroanatomical functioning, to be affected by events at the quantum level? Are there known quantum effects within biological systems? Here is one interesting case that is currently being explored by biologists: an explanation of the ability of birds to navigate by the earth's magnetic field in terms of the chemistry of entangled electrons.

Quantum entanglement is a relation between two or more micro-particles (photons, electrons, …) in which the quantum state of each particle cannot be specified independently of the states of the others; the particles are described by a single joint quantum state. When a measurement is performed on the first particle, quantum theory entails that the outcome of a corresponding measurement on the second particle is correlated with it, no matter how far apart the two particles are.
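What this definition amounts to operationally can be shown with a toy simulation of measurements on a maximally entangled pair (a Bell state): each particle's outcome taken alone looks like a fair coin flip, yet the two outcomes always agree.

```python
import numpy as np

rng = np.random.default_rng(42)

# Bell state (|00> + |11>)/sqrt(2), written over the joint basis
# |00>, |01>, |10>, |11> of the two particles. It cannot be factored
# into a state for particle 1 times a state for particle 2.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def measure_pairs(state, n, rng):
    """Sample n joint measurement outcomes; returns (bit1, bit2) pairs."""
    probs = np.abs(state) ** 2                  # Born rule on the joint state
    idx = rng.choice(len(state), size=n, p=probs)
    return [(i >> 1, i & 1) for i in idx]       # decode basis index into bits

pairs = measure_pairs(bell, 2000, rng)
# Each particle's bit is individually random, but the pair is perfectly
# correlated: only (0, 0) and (1, 1) ever occur.
```

The simulation of course says nothing about how the correlation is maintained at a distance; it only shows the statistical signature that the radical-pair literature below exploits chemically.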

It has been hypothesized that the ability of birds to navigate by reference to the earth’s magnetic field may be explained by quantum effects of electrons in molecules (cryptochromes) in the bird’s retina. Thorsten Ritz is a leader in this area of research. In "Magnetic Compass of Birds Is Based on a Molecule with Optimal Directional Sensitivity" he and his co-authors describe the hypothesis in these terms (link):
The radical-pair model (7,8) assumes that these properties of the avian magnetic compass—light-dependence and insensitivity to polarity—directly reflect characteristics of the primary processes of magnetoreception. It postulates a crucial role for specialized photopigments in the retina. A light-induced electron-transfer reaction creates a spin- correlated radical pair with singlet and triplet states. (3451)
Here is the chemistry from the same article (3452):

Markus Tiersch and Hans Briegel address these findings in "Decoherence in the chemical compass: the role of decoherence for avian magnetoreception". They describe the hypothetical mechanism of paired-electron chemistry as a mechanism in birds for detecting magnetic fields (link):
Certain birds, including the European robin, have the remarkable ability to orient themselves, during migration, with the help of the Earth's magnetic field [3-6]. Responsible for this 'magnetic sense' of the robin, according to one of the main hypotheses, seems to be a molecular process called the radical pair mechanism [7,8] (also, see [9,10] for reviews that include the historical development and the detailed facts leading to the hypothesis). It involves a photo-induced spatial separation of two electrons, whose spins interact with the Earth's magnetic field until they recombine and give rise to chemical products depending on their spin state upon recombination, and thereby to a different neural signal. The spin, as a genuine quantum mechanical degree of freedom, thereby controls in a non-trivial way a chemical reaction that gives rise to a macroscopic signal on the retina of the robin, which in turn influences the behaviour of the bird. When inspected from the viewpoint of decoherence, it is an intriguing interplay of the coherence (and entanglement) of the initial electron state and the environmentally induced decoherence in the radical pair mechanism that plays an essential role for the working of the magnetic compass. (4518)
So the hypothesis is that birds (and possibly other organisms) have evolved ways of exploiting "spin chemistry" to gain a signal from the presence of a magnetic field. What is spin chemistry? Here is a definition from the spin chemistry website (yes, spin chemistry has its own website!) (link):
Broadly defined, Spin Chemistry deals with the effects of electron and nuclear spins in particular, and magnetic interactions in general, on the rates and yields of chemical reactions. It is manifested as spin polarization in EPR and NMR spectra and the magnetic field dependence of chemical processes. Applications include studies of the mechanisms and kinetics of free radical and biradical reactions in solution, the energetics of photosynthetic electron transfer reactions, and various magnetokinetic effects, including possible biological effects of extremely low frequency and radiofrequency electromagnetic fields, the mechanisms by which animals can sense the Earth’s magnetic field for orientation and navigation, and the possibility of manipulating radical lifetimes so as to control the outcome of their reactions. (link)
Tiersch and Briegel work through the quantum-mechanical details of how this process might operate in molecules of the kind found in birds' retinas. Here is the conclusion they draw:
It seems that the radical pair mechanism provides an instructive example of how the behaviour of macroscopic entities, like the European robin, may indeed remain connected, in an intriguing way, to quantum processes on the molecular level. (4538)
This line of thought is still unconfirmed, as both Ritz and Tiersch and Briegel are careful to emphasize. If confirmed, it would provide an affirmative answer to the question posed above -- are there biological effects of quantum-mechanical events? But even if confirmed, it doesn't seem like an enormously surprising result. It traces out a chemical reaction that proceeds differently depending on whether entangled electrons in molecules stimulated by a photon have been influenced by a magnetic field; this gives the biological system a signal about the presence of a magnetic field that does in fact depend on the quantum states of a pair of electrons. Entanglement is now well confirmed, so this line of thought isn't particularly radical. It is far less weird than the idea that quantum particles are "conscious", or that consciousness extends all the way down to the quantum level (quantum interactive dualism, as Henry Stapp calls it; link). And it is nowhere nearly as perplexing as the claim that "making up one's mind" is the collapse of a quantum state represented in a part of the brain.
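The singlet-triplet interconversion at the heart of the radical-pair hypothesis can be made concrete with a toy quantum model. The sketch below is illustrative only -- it is not the cryptochrome model that Ritz or Tiersch and Briegel actually compute, and the coupling form and parameter values (omega, A, k) are assumptions chosen for clarity. Two electron spins start in a singlet state; one of them has a purely axial hyperfine coupling to a single nucleus; the singlet "yield" (the singlet probability weighted by an exponential recombination rate) then depends on the direction of the external magnetic field, which is exactly the property a compass needs.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1); Sy is avoided so H stays real symmetric
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# 8-dimensional space: electron 1 (x) electron 2 (x) one spin-1/2 nucleus
S1x, S1z = kron3(sx, I2, I2), kron3(sz, I2, I2)
S2x, S2z = kron3(I2, sx, I2), kron3(I2, sz, I2)
Iz = kron3(I2, I2, sz)

# Projector onto the electronic singlet state (identity on the nucleus)
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
Ps = np.kron(np.outer(singlet, singlet), I2)

def singlet_yield(theta, omega=1.0, A=10.0, k=1.0, tmax=20.0, nt=4000):
    """Recombination-weighted singlet probability for a field tilted by theta
    from the hyperfine (z) axis.  The hyperfine coupling is purely axial,
    which is what makes the outcome direction-dependent."""
    H = (omega * (np.cos(theta) * (S1z + S2z) + np.sin(theta) * (S1x + S2x))
         + A * S1z @ Iz)
    evals, V = np.linalg.eigh(H)          # H is real symmetric
    rho0 = Ps / 2.0                       # singlet electrons, unpolarized nucleus
    ts = np.linspace(0.0, tmax, nt)
    ps = np.empty(nt)
    for i, t in enumerate(ts):
        U = V @ np.diag(np.exp(-1j * evals * t)) @ V.T
        ps[i] = np.real(np.trace(Ps @ U @ rho0 @ U.conj().T))
    f = k * np.exp(-k * ts) * ps          # integrand of k * int P_S(t) e^{-kt} dt
    dt = ts[1] - ts[0]
    return dt * (np.sum(f) - 0.5 * (f[0] + f[-1]))   # trapezoid rule

y0, y90 = singlet_yield(0.0), singlet_yield(np.pi / 2)
print(f"singlet yield: {y0:.3f} (field along z), {y90:.3f} (field along x)")
```

With these parameters the yield at theta = 0 can be checked analytically: the singlet and T0 states form a degenerate two-level system with coupling A/4, so the yield is 1/2 + k^2 / (2(k^2 + (A/2)^2)). The yield at theta = 90 degrees comes out differently, illustrating the directional sensitivity that the hypothesis attributes to the bird's retinal chemistry.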

(Of interest on this set of topics is a recent collection, Quantum physics meets the philosophy of mind, edited by Antonella Corradini and Uwe Meixner. Here is a video in which Hans Briegel discusses research on modeling quantum effects on agents: https://phaidra.univie.ac.at/detail_object/o:300666.)

Thursday, March 10, 2016

Non-generative social facts


Is every social process generated by facts about individuals? For example, consider a television advertising campaign for a General Motors truck. This is a complicated sequence of events, actions, contracts, relationships, and interactions among organizational units as well as individuals. The campaign itself is constituted by the schedule of television spots on which the adverts are broadcast. The causes of this temporally extended production must be traced both to individual choices and to interactions among organizational units. So in this context, let's ask the question: Is this complex social production "generated" by a set of individual-level facts? Not exactly.

Behind the finished campaign lie the organization and entrepreneur who designed the campaign and the company that paid for it (social facts). Simultaneous with the campaign is the suite of facts about the media and the public in virtue of which it makes sense to purchase the spots and broadcast the adverts (also social facts). And subsequent to the campaign is its reception and its consequences (once again, social). This description involves social-level facts. These social facts are embodied through the actions and thoughts of individuals; but the causal action is at the level of the organization, not the individuals. It is hard to see the logic in saying that, given the antecedent state of the individuals in their situations ahead of time, the campaign was generated. Rather, the campaign was generated by the quasi-intentional activities of several interlocking and interacting organizations, as well as the known social properties of the public and the media.

How does this example fit into the scheme of generativeness and emergence? There is nothing mysterious about this scenario; each of the social units mentioned here has microfoundations at the level of the individuals whose actions and thoughts contribute to its operations. The public is constituted by the part of the population who view the media. Clearly the public's properties supervene on the attitudes and states of mind and action of the individuals. General Motors is a giant corporation, a business organization consisting of many semi-autonomous divisions, and located within a market and a regulatory environment. The marketing division is one of those semi-autonomous divisions. It is broadly commissioned to help position the company in the public awareness and to promote sales of the vehicles. The Marketing division may be regarded as an agent with a performance space both within and outside the corporation. If Marketing performs badly -- develops bad content, places ads in front of the wrong demographic, fails to produce the surge of new sales -- the manager is likely to lose his/her job. The sales department of the media organization is similar, with an imperative to sell advertising time slots.

It seems inapt to say that this scenario is generated by antecedent states of individuals. Rather, it is part of the play of the agents and institutions within this kind of social environment, with actors at various levels doing things and being influenced by the play of events and doings, both individual and collective. It is certainly not explanatorily interesting that GM, Marketing, WXYZ, WXYZ Sales, and the public are all composed of individual actors. Instead, we want to know what it is about the circumstances facing these various social actors (the corporation, its divisions, the PR firm, ...) in virtue of which they do the things they do: design the graphics, purchase specific packages of time, demand a given price for the time, and react with a new desire to buy a Chevy truck. In other words, we need a social, semiotic, economic, competitive, and organizational account of the activities that transpire here to bring about the advertising campaign.

The logic of this scenario seems quite different from that of Schelling's residential segregation model, where the patterns of segregation are in fact generated by the preferences and decision rules of the participants. In this case the outcome is fundamentally structured at the social level; the individuals merely play their roles within the corporation, the marketing department, the PR firm, etc. It is certainly hard to see how an agent-based model could reproduce the sequence of social activities identified here.
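By contrast with the advertising case, Schelling's checkerboard model really is generative, and a minimal version is easy to write down. The sketch below is a simplified variant (grid size, vacancy rate, tolerance threshold, and update order are illustrative assumptions, not Schelling's original settings): agents of two types relocate to a random vacant cell whenever fewer than half of their neighbors share their type, and measurable segregation emerges from those individual rules alone.

```python
import numpy as np

rng = np.random.default_rng(42)

def like_fraction(grid, r, c):
    """Fraction of this agent's occupied Moore neighbors that share its type."""
    n, same, occupied = grid.shape[0], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                v = grid[(r + dr) % n, (c + dc) % n]   # wrap-around edges
                if v:
                    occupied += 1
                    same += (v == grid[r, c])
    return same / occupied if occupied else 1.0

def mean_like(grid):
    """Average like-neighbor fraction over all occupied cells: a crude segregation index."""
    n = grid.shape[0]
    fs = [like_fraction(grid, r, c) for r in range(n) for c in range(n) if grid[r, c]]
    return sum(fs) / len(fs)

def run_schelling(n=20, vacancy=0.2, threshold=0.5, sweeps=30):
    # 0 = vacant, 1 and 2 = the two agent types, placed at random
    nv = int(n * n * vacancy)
    n1 = (n * n - nv) // 2
    n2 = n * n - nv - n1
    grid = rng.permutation([0] * nv + [1] * n1 + [2] * n2).reshape(n, n)
    before = mean_like(grid)
    for _ in range(sweeps):
        for r in range(n):
            for c in range(n):
                # a dissatisfied agent relocates to a random vacant cell
                if grid[r, c] and like_fraction(grid, r, c) < threshold:
                    vacants = np.argwhere(grid == 0)
                    vr, vc = vacants[rng.integers(len(vacants))]
                    grid[vr, vc], grid[r, c] = grid[r, c], 0
    return before, mean_like(grid)

before, after = run_schelling()
print(f"mean like-neighbor fraction: {before:.2f} -> {after:.2f}")
```

Running the model raises the mean like-neighbor fraction from roughly 0.5 (random mixing) to well above it, which is precisely the sense in which the macro-pattern is "grown" from micro-rules -- and precisely what seems unavailable in the advertising-campaign case.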

Further, it is evident that there is downward causation in this story. The whole point of the marketing campaign is to change the attitudes of the public through the instrumentality of the media. But likewise, the actors within the various organizations are affected by their roles. They act and choose differently because of their location and history in the corporation. At a higher level the structure of WXYZ as a corporation is affected by something higher -- the market relations in which it exists and the government regulatory environment which governs it.

This example makes it seem that there is some space between "A is generated by facts at level B" and "A has microfoundations in facts at level B".

So this complicated example of a fairly routine social process directs attention to the causal and intentional properties of the meso-level social structures rather than to the states of agency of the individuals who constitute those structures. And this in turn suggests that it is not the case that all social events are "generated" by the states of mind and action of the individuals who constitute them, even though each of the subordinate events in the sequence possesses microfoundations at the level of the individual actors.

Tuesday, March 8, 2016

Reduction and generativeness


Providing an ontology of complex entities seems to force us to refer to some notion of higher-level and lower-level things. Proteins consist of atoms; atoms consist of protons, electrons, and neutrons; and cells are agglomerations of many things, including proteins. This describes a relation of composition between a set of lower-level things and the higher-level thing. And this in turn seems to involve some kind of notion of "levels" of things in the world. Things at each level have relations and properties constituting the domain of facts at that level, and the properties of the higher-level thing are sometimes different from the properties of the lower-level things. (Not all the properties, of course -- proteins and atoms alike have mass and momentum.) But for the properties that differ, we have an important question to answer: what explains or determines the properties of the higher-level thing? Several positions have been considered:

  • Facts about things and properties of B are generated by facts of A
  • Facts about things and properties of B can be reduced to facts of A
  • Facts about things and properties of B supervene upon properties of A
I want to discuss these relations here, but it's worth recalling the other important relations across levels that are sometimes invoked.
  • Facts about things and properties of B are weakly emergent from properties of A
  • Facts about things and properties of B are strongly emergent from properties of A
  • Facts about things and properties of B are in part independent from the properties of A
  • Facts about things and properties of B causally influence the properties of A

So let's focus here on reduction and generation. These are sometimes thought to be equivalent notions; but they are not. Let's grant that the facts about B jointly serve to generate the facts about A. Then A supervenes upon B, by definition. Do these facts imply that A is reducible to B, or that facts of A can be or should be reduced to B? Emphatically not. Reducibility is a feature of the relationship between bodies of knowledge or theories -- our knowledge of A and our knowledge of B. To reduce A to B means deriving what we know about A from what we know about B. For example, the laws of planetary motion are derivable from the law of universal gravitation: by working through the mathematics of gravity it is possible to derive the orbits of the planets around the sun. So the laws of planetary motion are reducible to the law of universal gravitation.
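The planetary-motion example can be made explicit. For the simplified case of a circular orbit of radius $r$ about the sun's mass $M$, equating the gravitational force with the required centripetal force gives:

```latex
\frac{GMm}{r^{2}} = m\omega^{2} r
\quad\Rightarrow\quad
\omega^{2} = \frac{GM}{r^{3}}
\quad\Rightarrow\quad
T^{2} = \left(\frac{2\pi}{\omega}\right)^{2} = \frac{4\pi^{2}}{GM}\,r^{3}
```

which is Kepler's third law: the square of the period is proportional to the cube of the orbital radius. The empirical regularity follows mathematically from the deeper theory, which is what reduction in this sense requires.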

Generativity is not a feature of theories; it is an ontological feature of the world. Physicalism is such a conception: it maintains that facts about the physical body, including the nervous system, jointly generate all mental phenomena. Generativity involves the idea that, taking the full reality of the properties and powers of B, the properties of A result. The properties of the entities at level B suffice to generate all the properties of the entities at level A. But there is no assurance that our current knowledge about B permits a mathematical derivation of A. Further, there is no assurance that a "full and complete theory" of B would permit such a derivation -- because there is no assurance that such a theory exists at all. And then there is the issue of computability: it may be radically infeasible to perform the calculations necessary to derive A from B.

And so it is clear that reducibility does not follow from generativeness.

There is a second argument separating generativeness from reducibility as well. This is the fact that there are numerous scientific purposes for which reduction is unnecessary even if it were feasible. It might be possible to derive the motion of a cannonball from a calculation of the motions of its component molecules. But this would be silly; we have no scientific interest in or need for doing so.

So it is fully consistent for us to affirm generativeness while rejecting reductionism. And this position makes very good sense in the case of macro and micro social facts. We can take the view that all social entities are embodied in facts about various individuals, their social interactions, and their states of mind. This implies that social facts are generated by facts at the actor level, and that the facts of A supervene upon the facts of B. And yet we can also be emphatic in affirming that there is no need for, and no general possibility of, reduction from the one level to the other.

Or in other words, the generativeness of the situation is wholly uninformative about the feasibility or the value of reduction.

Sunday, March 6, 2016

Critical realism meets peasant studies


Critical realism is a philosophical theory of social ontology and social science knowledge. This philosophy has been expressed through the writings of systematic thinkers such as Roy Bhaskar, Margaret Archer, and other philosophers and sociologists over the past 40 years. Most of its leaders have emphasized the systematic nature of the theory of critical realism. It builds on a philosophical base developed by Roy Bhaskar through the application of the transcendental method of philosophy. The theory is now being recommended within sociology as a better way of thinking about sociological method and theory.

Critical realism has a number of very positive aspects for consideration by social scientists. It is inspired by a deep critique of the philosophy of science associated with logical positivism, it offers a clear defense of the idea that there is a social and natural reality which it is the task of scientific inquiry to learn about, and it gives valuable attention and priority to the challenge of discovering concrete causal mechanisms which lead to real outcomes in the natural and social world. There is, however, some tendency for this tradition to express itself in an inward-looking and even dogmatic fashion.

So how can the fields of sociological method and critical realism progress today? One thing is clear: the value and relevance of critical realism is not to provide a template for scientific research or the form that a good scientific research project should take. There are no such templates. Mechanical application of any philosophy, whether critical realism, positivism, or any other theory of science, is not a fruitful way of proceeding as a scientist. However, with this point understood, it is in fact valuable for sociologists and other social scientists to think reflectively and seriously about some of the assumptions about the social world and the nature of social explanation which are involved in critical realism. The advice to look for real and persistent structures and processes underlying observable phenomena, the idea that "generative causal mechanisms" are crucial to processes of change and stability, the ideas associated with morphogenesis, and the idea that causation is not simply a summary of constant conjunction -- these are valuable contributions to social science thinking.

This answers one half of the question raised here: sociological method can benefit from involvement in some open-minded debates inspired by the field of critical realism.

But what about the field of critical realism itself? How can this research community move forward? It would seem that the process involved in textual argumentation--"what would Roy say about this question or that question?"--is not a good way of making progress in critical realism or any other field of philosophy of science. More constructive would be for philosophers and social scientists within the field of critical realism to think open-mindedly about some of the shortcomings and blind spots of this field. And an open-minded consideration of some complementary or competing visions of the social world would strengthen the field as well -- the ideas of heterogeneity, plasticity, the social construction of the self, and assemblage, for example.

I think that one good way of posing this challenge to critical realism might be to undertake a careful, rigorous study of very strong examples of social research that involve good inquiry and good theoretical models. The field of critical realism has tended to be too self-contained, with the result that its debates are increasingly hermetically separated from actual research problems in the social sciences. Careful and non-dogmatic study of extended, clear examples of social inquiry would be very productive.

As a first step, it would be very stimulating to identify the empirical and explanatory work of a genuinely innovative social scientist like James Scott, and do a careful, reflective, and serious investigation of the definition of research problem, the research methods which were used, the central theoretical or explanatory ideas which were introduced, and the overall trajectory and development of this thinker's thought.

Scott's key ideas include moral economy, hidden transcripts, Zomia, weapons of the weak, seeing like a state, and the social reality of anarchism. And Scott attempts to explain social phenomena as diverse as peasant rebellion, resistance to agricultural modernization, the ways in which English novelists represent class conflict, the strategies of the state and its elusive opponents in southeast Asia, and many other topics of rural society. Many of Scott's narratives can be analyzed in terms of the discovery of novel social mechanisms, strategies of resistance and domination, and the local embodiment of large social forces like taxation and conscription. Scott's social worlds are populated by real social actors engaged in concrete social mechanisms and processes which can be known through research. Scott is a realist, but a realist in his own terms: he discovers real social relations, social mechanisms and processes, and modes of social change at the local level and the national level, and he puts substantial empirical detail on these things. His way of thinking about peasant society is relational -- he pays close attention to the relationships that exist within a village, across lines of property and kinship, in cooperation towards collective action. He gives a role to the important powers of the state, but always with an understanding that the power of the state must be conveyed through a set of capillaries of agents in positions extending down to the village level. And in fact, his treatments of anarchism and of seeing like a state sum up many of the mechanisms of control and supervision that traditional states have used to control rural populations. (Scott's work has been discussed frequently in earlier posts.)

In fact, I could imagine a series of carefully chosen case studies of innovative, insightful social researchers who have changed the terms of debate and understanding in a particular field. Other examples might include researchers such as Robert Putnam, Robert Axelrod, Charles Tilly, Michael Mann, Clifford Geertz, Albert Soboul, Simon Schama, Bin Wong, Robert Darnton, and Benedict Anderson.

Studies like these would have the potential to broaden significantly the terms of discussion and debate within the field of CR and would help it engage more deeply with social scientists in several disciplines. This kind of inquiry might help open up some of the blind spots as well. These kinds of discussions might give greater importance to processes leading to the social construction of the self, greater awareness of the heterogeneity of social processes, and a bit more openness to philosophical ideas outside the corpus. No philosophy can proceed solely on the basis of its own premises; interaction with the practices of innovative scientists can significantly broaden the approach in a positive way.