From cornflakes to communism

James Burke’s Knowledge Web is like six degrees of separation for the literati — okay, sometimes more like 10 degrees, and altogether fascinating:

  • Cornflakes, invented in 1894 by…
  • J. H. Kellogg, whose first job was as a typesetter for…
  • Mrs Ellen White, “who opened a water-cure establishment” that was inspired by…
  • Vincent Priessnitz, an advocate of “sleeping in wet sheets,” a prescription that was also administered by…
  • James Gully at a Priessnitz-like spa in western England, which was visited by…
  • Thomas Carlyle, whose portrait was painted by…
  • Whistler, the artist with an incredible Rolodex of artsy friends, including…
  • William Morris who was, among many other things, a socialist community organizer and whose meetings were attended by…
  • Eleanor Marx, daughter of…
  • Karl Marx, the intellectual forefather of…
  • Communism.

Another fun tracing of people, places, and ideas is Frederick the Great to the Bottle Cap.

A video overview of k-web.org is found here.

Below, video 1 of 7 from “Re-Connections” — “the 25th anniversary celebration of James Burke’s ground-breaking ‘Connections’ and ‘The Day The Universe Changed’ series.”

August 20, 2010 at 8:32 am

“Let’s abolish the PhD orals”

This is the takeaway message from Katherine L. Jako, who advocated ending the oral-examination portion of doctoral programs back in 1974. [Download PDF here.]

Studying for orals usually means absorbing scraps of knowledge merely for the sake of having them available, reviewing old notes of readings mercifully forgotten, clawing fearfully through references one really “should” look at—all of this in order to be ready to answer a question that might be asked. One of my professors used to refer to Whitehead’s notion of “inert ideas” as “sodden baggage”; it struck me as a beautiful description, and just the sort of thing one lugs dutifully to an examination and deposits on the way out.

Related: ProfHacker pens an open letter to new graduate students. There’s some sage advice in the comments section, as well.

August 20, 2010 at 12:29 am

On the limits of social science

Jim Manzi explores “What Social Science Does—and Doesn’t—Know”:

Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs. The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates. How do we know that our physical theories concerning the wing are true? In the end, not because of equations on blackboards or compelling speeches by famous physicists but because airplanes stay up. Social scientists may make claims as fascinating and counterintuitive as the proposition that a heavy piece of machinery can fly, but these claims are frequently untested by experiment, which means that debates like [those about the influence of the stimulus on the United States’ economy] will never be settled.

A friend of mine, Amanda, pointed me to a similar, if not “more scathing” criticism of educational and psychological intervention research. The opening salvo by Levin, O’Donnell, and Kratochwill (2003) provides a “sobering account of exactly how far the credibility of educational research is perceived to have advanced in two generations”:

The problems that are faced in experimental design in the social sciences are quite unlike those of the physical sciences. Problems of experimental design have had to be solved in the actual conduct of social-sciences research; now their solutions have to be formalized more efficiently and taught more efficiently. Looking through issues of Review of Educational Research, one is struck time and again by the complete failures of the authors to recognize the simplest points about scientific evidence in a statistical field. The fact that 85% of National Merit Scholars are first-born is quoted as if it means something, without figures for the over-all population proportion in small families and over-all population proportion that is first-born. One cannot apply anything one learns from descriptive research to the construction of theories or to the improvement of education without having some causal data with which to implement it (Scriven, 1960, p. 426).
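Scriven’s base-rate point is easy to make concrete: if most families are small, then most children are first-born regardless of merit, so “85% of Merit Scholars are first-born” tells us little on its own. A minimal sketch, with an entirely hypothetical family-size distribution:

```python
# Base-rate sketch: how many children are first-born by chance alone?
# The family-size distribution below is invented purely for illustration.
from fractions import Fraction

# Share of families with 1, 2, or 3 children (hypothetical numbers)
family_sizes = {1: Fraction(40, 100), 2: Fraction(35, 100), 3: Fraction(25, 100)}

# Expected number of children per family...
children_per_family = sum(n * share for n, share in family_sizes.items())
# ...versus exactly one first-born per family
firstborn_per_family = sum(share for share in family_sizes.values())

p_firstborn = firstborn_per_family / children_per_family
print(float(p_firstborn))  # ≈ 0.54: over half of all children are first-born by chance
```

Under these made-up numbers, a majority of the population is first-born, so 85% among Merit Scholars would need to be compared against that baseline before it “means something.”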

Education research does not provide critical, trustworthy, policy-relevant information about problems of compelling interest to the education public. A recent report of the U.S. Government Accounting Office (GAO, 1997) offers a damning indictment of evaluation research. The report notes that over a 30-year period the nation has invested over $31 billion in Head Start and has served 15 million children. However, the very limited research base available does not permit one to offer compelling evidence that Head Start makes a lasting difference or to discount the view that it has conclusively established its value. There simply are too few high-quality studies available to provide sound policy direction for a hugely important national program. The GAO found only 22 studies out of hundreds conducted that met its standards, noting that many of those rejected failed the basic methodological requirements of establishing compatible comparison groups. No study using a nationally representative sample was found to exist (Stroufe, 1997, p. 27).

These articles remind me of a recent profile in the New Yorker of M.I.T. development economist Esther Duflo. As the co-founder of a poverty “action lab,” Duflo and her colleagues, sometimes referred to as “the randomistas,” are making waves in the social sciences for “borrowing from medicine a very robust and simple tool: they subject social-policy ideas to randomized control trials, as one would test a drug.”
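The core logic of the randomistas’ approach fits in a few lines: randomize assignment, then compare group means. Everything below (the outcome, sample size, and effect) is invented for illustration and not drawn from any actual Duflo study:

```python
# Toy randomized controlled trial: difference-in-means estimate of a known effect.
import random
import statistics

random.seed(0)

# Hypothetical baseline outcome (e.g., an attendance rate) for 200 individuals
population = [random.gauss(0.70, 0.10) for _ in range(200)]

# Randomization: shuffle, then split into treatment and control arms
random.shuffle(population)
treated, control = population[:100], population[100:]

# Pretend the intervention raises the outcome by 5 points on average
treated = [x + 0.05 for x in treated]

# Randomization makes the arms comparable in expectation, so the simple
# difference in means estimates the treatment effect
effect = statistics.mean(treated) - statistics.mean(control)
print(round(effect, 3))  # close to the true effect of 0.05
```

The estimate won’t equal 0.05 exactly in any one trial; the point of randomization is that it is unbiased, with error shrinking as the sample grows.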

Related: In 1923, Fred Boucke concluded that “social science is a philosophy of values as much as an analysis of specific magnitudes.” [Source]

***

Full citation of Levin, O’Donnell, and Kratochwill:

Levin, J.R., O’Donnell, A. M., & Kratochwill, T. R. (2003). Educational/psychological intervention research. In I. B. Weiner (Series Ed.) & W. M. Reynolds & G. E. Miller. (Vol. Eds.), Handbook of psychology: Vol. 7. Educational psychology (pp. 557-581). Hoboken, NJ: Wiley.

Thanks, Amanda!


UPDATE: When I shared Manzi’s article on my Facebook wall, a stimulating conversation ensued. Here are some excerpts from the online discussion…

MJR: Linguistics spends a giant chunk of its time predicting human social behaviours, in ways that are so “nonobvious” that it’s a pain to explain them to people. I wonder if it’s simply that people don’t expect to understand the hard sciences, but when they don’t grasp the social sciences they take it to mean that the content isn’t there in the first place.

AL: One wonders what class of disciplines the author believes to be encompassed by the term “social science”. It appears that he finds it to be synonymous with “political science”, and only the applied branch thereof at that. If psychology/cognitive science, linguistics, human biology, and other related fields are part of the social sciences, then the premise that experimentation has not led to a large body of non-obvious principles that predict human behavior is simply false.

AS: To [AL], I think it depends on what you mean by “non-obvious.” One problem I noticed in social sciences, even clinical medical research, is that the researchers often devise poor research design methods. They’ve taken maybe two or three classes in statistics at best, and usually it’s only applied. Sometimes when constructing hypotheses, social science research tends to do the “looking under the lamplight on a dark street for one’s missing keys because it’s the only light we have.” Also, many studies done at universities use undergraduates as guinea pigs.

On the other hand, should we be looking for a large body of non-obvious principles at all when doing social science research? Can humans really be explained down to some natural principles or laws? Is that an appropriate metaphor? I mean, not even classical Newtonian laws of mechanics hold up at the quantum level as we know it.

Jason, there was a DeCal that dealt with Duflo’s various methods for approaching randomized testing of economic policies. The thing about those methods is that they acknowledge that their findings are limited and not overly generalizable. Ted Miguel is still at Berkeley, I think, and he did a large study with Kremer on improving school attendance rates in Kenya. But more than a case study, in a generalizable sense, they further delineated how positive externalities can work in such a setting.

MJR: I see a lot of econometrics papers that try desperately and transparently to shoehorn hardcore experimental procedure into their papers. The authors know it’s tacked on and generally not important to the analysis, but they do it anyway because that’s what’s hot right now, and so it has to be there. This is probably the kind of thing that [Manzi] is talking about.

AL: AS, If the goal is to predict human behavior (as mine is), then yes, we should most certainly be searching for the natural, causal principles that regulate the human mind. And this is no metaphor–like all organisms that were designed by natural selection, humans are constellations of behavioral regulation systems; although ascertaining the nature of all of the input-output mappings entailed by these mechanisms is a formidable task that will take many decades to complete, it is one that can in principle be accomplished. Although I agree that many researchers design poor studies–experimental and otherwise–many more are highly competent in this regard.

Oh, and Newtonian principles and quantum mechanics do not need to apply to one another in order to be fully compatible, since they occur at different levels of hierarchical organization–the probabilistic presence of quantum-level material provides macro-level matter with the structural density it needs to operate as if it endures through time and space.

AS: You put it very eloquently, AL, but can human behavior even be put into input-output mappings at the large, generalizable scale without resorting to what Manzi probably would refer to as obvious conclusions? (e.g., that people are motivated to avoid physical and psychic pain, or that all humans search for a place and some semblance of personal order in this “blooming, buzzing confusion.”) When studying decision-making, we learn in econ that psychology differs in the belief that humans are rather contextual creatures. Economics assumes revealed preferences and doesn’t question how they are formed. How do you map all of these possible contexts and permutations in a meaningful manner? I see social science as more akin to meteorology or weather forecasting than physics or chemistry. In principle, defining all these mappings could be done, but what is the likelihood or probability of doing so?

But assuming that humans are designed by natural selection (given that I lack the background or framework to actually dispute the nuances and ramifications of this intelligently…nevertheless a lot of questions are raised in my mind, too many for a facebook post haha): Roger Newton (also a mathematical physicist, not related to Sir Isaac) raises the interesting question of how one can explain the world as we experience it unambiguously in linguistic terms, as the social sciences really must do. Mathematics is really the unambiguous language, and we can only use its shadow to really apply to the actual world.

Regarding the other Newtonian thing: first of all, your reference to correspondence limits illustrates my point, if I understand you and remember my physics correctly (this is likely not the case too lol). You have to bring in probability to use the big principles to describe everyday life as we observe it. And in introducing probabilities, how can you then claim to have causal inference? So if one were to take your argument that causality in human minds can be found because we’re also products of immutable natural selection mechanisms…let’s pretend we do find various causalities for human behavior. But how do we know the things we can observe or measure don’t lose their power of causality at the underlying level? (Not sure if this is clear…) Plus, it’s really difficult to isolate single factors in humans. We’re both part of various systems and an entire system of our own, if you look at the body alone.

And I’m still not sure whether anyone has tested the competence of statistics in research. And again, doesn’t academic research largely use undergraduates as test subjects, which would be a huge problem… And then humans trying to describe humans creates problems of interpretation and focus, hence my streetlamp metaphor. One of my undergrad papers compared how two similar research-design studies from major economists found completely different results on people’s sensitivity to price in clean drinking-water tablets and microloans in Zambia and S. Africa, although they were testing the same thing. Turns out, the main difference was how they interpreted their statistical findings, kind of like how heads and tails are part of the same coin, just two different sides that never meet. Neither team was really incorrect in their interpretation either; it just depended more on their background…

JA: Somewhat related — an excerpt from a commentary by Tom Siegfried about the shortcomings of statistics:

“It’s science’s dirtiest secret: The ‘scientific method’ of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.”

Full article is here: http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are%2C_Its_Wrong

AL: A few points —

(1) Yes, I do believe that a large fraction of the mechanisms comprising the mind can be mapped–in principle and in practice. Just consider the visual system in humans (whose computational logic has been described in great detail), or the predictions we can make about the behavioral decisions of non-humans (with whom and when members of a species will mate, where members of a species will forage under different resource patch distributions, when they will undergo cue-triggered sex changes, under what conditions they will cooperate, etc, etc.). Why is there reason to suspect that we would be less successful in studying humans (other than the fact that we don’t have the luxury of keeping humans captive, performing invasive manipulations, cutting open their brains, etc.)? I feel like we must read different literatures if you believe that the human behavioral sciences are akin to meteorology. (For the record, the model of human nature employed by behavioral economists–which assumes rational pursuit of “self-interest” as an organizing principle, etc.–is almost certainly false in almost every way, which is why economists have always been baffled by humans’ decisions in econ games like the dictator game, prisoner’s dilemma, trust game, etc., etc.).

I agree with you that humans studying humans is a recipe for biased interpretations of observations (as is humans studying non-humans, which can lead to anthropomorphism). As such, well-formed theories that do not rely on intuition–and rather are derived from first principles–are essential in this regard. This is why I believe we should study humans with the same meta-theoretical tools as we use (successfully) to study non-humans: Evolutionary theory (which, hybridized with cognitive science = evolutionary psychology). But this is a can of worms….

(2) I agree that language has its limits in describing the computational operations performed by the nervous system, but carefully-constructed contingency statements (If… then…) actually map on rather perfectly to the logic of causality. I also agree that mathematical statements–which also systematize contingencies–may increase the precision of theoretical models. Thus, ultimately, we should be interested in forming computational theories of the psychological mechanisms comprising the mind that do in fact model the operations it’s likely to perform.

(3) All causality is probabilistic–there are distributions of causes and distributions of corresponding effects; this does not undermine the idea that effects have causes. I take this as an uncontroversial assumption we make as scientists (if it is false, we should all just go become stock brokers). In making causal inferences, the goal of the scientist is to decrease our error of prediction in these distributions of cause and effect. Will we ever predict all causes and all effects (in any domain)? Nope. But we can be less wrong than we were the day before, and more wrong than we will be tomorrow. This is true for the physicist as much as the human behavioral scientist–just ask a physicist!

(4) Statistical inference is indeed tricky, sometimes misleading, and often done incorrectly. However, the problems associated with statistical inference in science decrease as the emphasis on p < .05 decreases and the emphasis on effect size increases. Cohen, Rosenthal, and many others have written extensively about this:
http://www.psych.yorku.ca/sp/Amer%20Psychologist%201994%20Cohen%20Earth%20is%20Round.pdf

Others have been hard at work attempting to devise alternatives to traditional inferential criteria:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1473027/
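AL’s effect-size point is easy to demonstrate with a toy computation (the groups and numbers below are invented for illustration): with large samples, a test statistic can clear p < .05 even when the standardized effect is negligible.

```python
# Cohen's d (standardized mean difference) versus statistical significance.
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference between two samples, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

# Two hypothetical groups whose means differ by a sliver of their spread
group_a = [100.2 + (i % 10) for i in range(5000)]
group_b = [100.0 + (i % 10) for i in range(5000)]

d = cohens_d(group_a, group_b)
# Two-sample t statistic recovered from d: t = d * sqrt(na*nb / (na+nb))
t = d * math.sqrt(len(group_a) * len(group_b) / (len(group_a) + len(group_b)))

print(round(d, 2), round(t, 1))  # d ≈ 0.07 (negligible), t ≈ 3.5 (p < .001)
```

Here the difference is “highly significant” by the p-value criterion, yet d ≈ 0.07 is far below even Cohen’s conventional “small” threshold of 0.2, which is exactly why reporting effect sizes matters.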

JL: To AS, I like your take, and would expand on some of the things you’ve said. First, most research relies on creating models based on the observed behavior of a finite set (sample) of a larger “population” of available data. So research is always selective, based on what the individual doing the research is looking for (willing/ready to see/observe). That’s why all science only creates “theory.”

This is true even in Physics, which Manzi thinks is somehow less mutable than social science – a point I disagree with. Theories and even “laws” of Physics are subject to the conditions they were observed within. Newtonian physics does not apply in super-high-focus (speed/density) views.

Regarding human behavior, there are definitely solid rules that apply pretty consistently across the board. The problem, I think, is that they’ve been observed in so many different fields (psychology, sociology, evolutionary biology, physical education, education, behavioral economics, etc.) that very few people (if any) have created coherent statements of what those rules are or how to apply them.

The people who I think have done the best job of creating good, usable, and testable rules/theories about human behavior are marketers. Want to know how people “work”? Read marketing papers/books. Aside from that, we have a lot of “common sense” rules about how people work – “You attract more bees with honey than with vinegar.” “Keep your mouth shut and don’t rat on your friends” (ok, that’s from Goodfellas), etc.

Will those explanations (or any) ever be absolute on/off “answers?” No! That’s the nature of Existence and the perspectival nature of our participation in that Existence…especially the polar nature of language. Is it useful to look for “on/off” functions in nature and in human behavior? Yes, I think so, as long as you don’t lose sight of the fact that what you’re doing when you look for those things (and create them) is tool-making, and that you (or we) made the tool, and that the tool is not reality…just a way for us to grasp reality so that we can manipulate it.

All conversations of “cause” are post facto…let’s not forget that. So there is no clear “cause” of anything, just the selection of a series/sequence of events from what has been observed, and the attribution of “cause” to those events. I don’t mean to be vague or wishy-washy here. There are predictable sequences of events, and we can say that, if I drop a glass on concrete, the glass will (most likely) shatter. However, what is the “cause” of the glass shattering? My dropping it? It striking the concrete with a certain amount of force? The relative density of the glass versus the concrete? Gravity pulling the glass down at a certain velocity so that it strikes the concrete with enough force to shatter? Depending on my perspective, I’ll choose the explanation that suits me best.

August 3, 2010 at 12:47 am

Fairness, justice, and the essential nature of cheerleading

Varsity cheerleading is not a sport.  Or so says a federal judge in Connecticut, who today issued a 95-page opinion that “Quinnipiac University violated Title IX of the Education Amendments of 1972 by failing to provide equal opportunities for athletics participation to female students.” The Chronicle of Higher Education reports:

The ruling said that a varsity cheerleading team, which the university created this past year, may not be considered a varsity sport for purposes of complying with federal gender-equity law.

Members of the women’s volleyball team, along with their coach, had sued Quinnipiac last spring after the private university said it would cut the team—along with men’s golf and men’s outdoor track—to save money. District Judge [Stefan A.] Underhill later ordered the university to reinstate the volleyball team while the case was pending.

That a judge would deliberate about which activities are considered a sport reminds me of the Supreme Court case PGA Tour v. Martin. See the below TED talk by Harvard political philosopher Michael Sandel — around the 5:15 mark, he engages the audience in a provocative exercise about justice by tracing the logic of Supreme Court Justices who wrestled with the question of whether walking is an essential, or simply an incidental, feature of golf.

July 21, 2010 at 2:37 pm

Literary sport

Lapham’s Quarterly is among my favorite publications and the summer issue on Sports & Games deserves special mention. (I urge all those interested in either the history of games or ideas about human movement to invest $15 in this handsome magazine.) From Lewis Lapham’s introduction:

One need not be American to know that sport is play and play is freedom. It’s not a secret kept from children in Tahiti or Brazil. Dogs romp, whales leap, penguins dance. That play is older than the kingdoms of the Euphrates and the Nile is a truth told by the Dutch scholar, Johan Huizinga, in Homo Ludens, his study of history that discovers in the “primeval soil of play” the origin of “the great instinctive forces of civilized life,” of myth and ritual, law and order, poetry and science. “Play,” he said, “cannot be denied. You can deny, if you like, nearly all abstractions: justice, beauty, truth, goodness, mind, God. You can deny seriousness, but not play.” […]

The glory of [sports and games] isn’t the winning or losing, the bombastic Rooseveltian beating of the others; it is Einstein’s equation made flesh, the unity of energy and mass seen in a movement of light. Huizinga expresses something of the same thought. Play as the making of civilization, which becomes possible only when “an influx of mind breaks down the absolute determinism of the cosmos,” not serious and yet entirely serious, brimming with possibility and tending to become beautiful.

LQ’s literary treatment of sports makes for a nice segue to reference kottke’s post about the novelist Nic Brown, who challenged his friend and professional tennis player Tripp Phillips in a game to win a single point.  Writes Brown:

What I can’t do, no matter how hard I try, is win a single point. Not one. “You have no weapons,” he tells me two days later, over a lunch of cheap tacos and cheese dip. He reviews the match in this specific analytical way I’ve experienced with other professional athletes. To them, match review is engineering, not personal nicety. The performance is fact, not opinion. “No matter what,” he says, “I was going to have you off balance. And no matter what you did, I was going to be perfectly balanced. I knew where you were going to hit it before you hit it. It’s the difference between me and you. But if I played Roger Federer right now, he’d do the exact same thing to me.”

Kottke observes, “That bit reminds me of David Foster Wallace’s article on tennis pro Michael Joyce (Esquire, July ’96). Specifically, how much of a skill difference there was between Joyce (the 79th best player in the world), the players he competed against in qualifiers, and the then-#1 ranked Andre Agassi.”


July 21, 2010 at 1:55 pm

The problem with neuroplasticity

Dr. Vaughan Bell, a contributor to the stimulating Mind Hacks blog, explains, “As your brain is always changing, the term neuroplasticity is empty on its own”:

It’s currently popular to solemnly declare that a particular experience must be taken seriously because it ‘rewires the brain’ despite the fact that everything we experience ‘rewires the brain’.

It’s like a reporter from a crime scene saying there was ‘movement’ during the incident. We have learnt nothing we didn’t already know.

Neuroplasticity is common in popular culture at this point in time because mentioning the brain makes a claim about human nature seem more scientific, even if it is irrelevant (a tendency called ‘neuroessentialism’).

Clearly this is rubbish and every time you hear anyone, scientist or journalist, refer to neuroplasticity, ask yourself what specifically they are talking about. If they don’t specify or can’t tell you, they are blowing hot air. In fact, if we banned the word, we would be no worse off.

In his critical and necessary essay, Vaughan clearly explains the differences among a host of structural changes in the brain, including synaptogenesis, neuronal migration, and neurogenesis.

//

Below is an engrossing Discovery Channel production about the fascinating resilience and adaptability of the human brain.

June 10, 2010 at 2:07 pm

We are purpose maximizers

The talented scribes at Cognitive Media provide a whimsical and engaging illustration of Dan Pink‘s talk about “the surprising truth that motivates us.”

[A more traditional presentation by Dan Pink that focuses on the role of autonomy can be found here.]

Around 9:40 in the above video, Pink notes that the most creative, successful, innovative companies “are animated by a purpose motive.”

“Our goal,” said the founder of Skype, “is to be disruptive, but in the cause of making the world a better place.”

Steve Jobs once explained that his ambition was, “To put a ding in the universe.”

Pink’s explanation of “the purpose motive” reminds me of Simon Sinek’s TED talk on “How great leaders inspire action.”

People don’t buy what you do; they buy why you do it…

The goal is not just to sell to people who need what you have; the goal is to sell to people who believe what you believe. The goal is not just to hire people who need a job; it’s to hire people who believe what you believe. I always say that if you hire people just because they can do a job, they’ll work for your money, but if you hire people who believe what you believe, they’ll work for you with blood and sweat and tears…

The goal is to do business with people who believe what you believe.

The challenge for us, then, is “to write your own sentence”:

June 1, 2010 at 6:00 am



Jason R. Atwood

I'm an avid trail runner and doctoral student at U.C. Berkeley who studies motivation and the relationship between the mind and body. This blog is a forum to share research, news, and musings about these topics of interest. More


Play is the beginning of knowledge.
