SCIENCE TEACHERS MANCH PUNJAB -- ਸਾਇੰਸ ਟੀਚਰਜ ਮੰਚ ਪੰਜਾਬ
A forum for Punjab's science teachers to share suggestions for improving school education in Punjab and the difficulties they face
NEWS
Morning coffee can stop pain from long sitting hours at work
PTI | 2 hrs ago | Drinking a cup of coffee with breakfast can reduce pain triggered by spending hours at a computer at your workplace, a new study has claimed.

Space walk successful; Sunita Williams installs bolt, breaks record
PTI | 5 hrs ago | Sunita Williams and her Japanese counterpart Akihiko Hoshide have successfully restored power to the International Space Station on their second attempt.

Once-a-day tablets can cure diabetes
PTI | 11 hrs ago | A once-a-day drug that could revolutionize treatment for patients with Type 2 diabetes has been discovered by scientists.

Now, you can name an asteroid at a Nasa contest
PTI | 12 hrs ago | Students worldwide have a chance to name an asteroid that is the subject of an upcoming Nasa mission. The mission, scheduled to be launched in 2016, is called the Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx).

Feline danger: Parasite tied to cats a health risk for humans
ANI | 12 hrs ago | Pet cats pose a serious risk of illness and even death to humans, experts have revealed.

Why men won't say: Honey, this dress doesn't suit you
Agencies | 12 hrs ago | Nearly 50% of men are scared to tell a woman when she wears clothes that don't suit her, as they fear it would upset her, according to a new UK survey.

You can be both fat and fit: Study
Agencies | 12 hrs ago | Nearly half of overweight people are physically fit and healthy, and at no greater risk of heart disease or cancer than their slim peers, researchers claim.

Longer CPR boosts survival chance
Kounteya Sinha | 12 hrs ago | On average, a doctor spends 12 minutes conducting the life-saving chest compressions known as CPR to save a patient from cardiac arrest. But a Lancet study announced on Wednesday says that extending CPR to 30 minutes can actually save more patients.

Stressed out men behind miscarriages of wives: Study
Kounteya Sinha & Durgesh Nandan Jha | 16 hrs ago | Most men who reported consecutive miscarriages by their wives had fragmented or damaged DNA in their sperm, caused by infections or an unhealthy lifestyle.

Watching harrowing footage can lead to post-traumatic stress
PTI | 22 hrs ago | Repeated exposure to violent images could have a long-lasting impact on your mental health, including acute and post-traumatic stress, a new study has found.

Asthma inhalers stunt kids' growth
PTI | 5 Sep 2012, 06:58 IST

Housework cuts breast cancer risk by 13%: Study
PTI | 5 Sep 2012, 06:57 IST

Helmet gives fighter pilots 'X-ray' vision
PTI | 5 Sep 2012, 06:55 IST

Stem cell jab restores feeling in paralysed
PTI | 5 Sep 2012, 06:53 IST

17 bn km from sun, Voyager to exit solar system anytime
AP | 5 Sep 2012, 06:51 IST

Rats to solve the mystery of depression
PTI | 4 Sep 2012, 06:41 IST

F1 tech to repair damaged eardrums?
PTI | 4 Sep 2012, 06:38 IST

A Ninja-style plane that can fly faster than sound
PTI | 4 Sep 2012, 06:31 IST

Obesity affects kids' brain power, hits academics
PTI | 4 Sep 2012, 06:29 IST

Thursday, 6 September 2012
Rejected by Science !!
I just got a rejection for a manuscript sent to the journal Science. That's strike two for me with Science (I sent them my paper on agricultural terraces, back in the early 1990s; it ended up in JFA). I have something else up my sleeve for Science; maybe the third time will work. Because so many papers are submitted to Science, they have a bulk system for evaluating them. Each manuscript gets a quick once-over by one of a small number of editors, who say "no" to most papers. If they say "yes," then the paper gets sent out for peer review. This means that rejections come fast - it took them just a few days to reject my paper. I have to admire that efficiency.
This experience got me thinking about how the Science review process affects the kinds of archaeology papers published in the journal. If you pay attention to the journal, you will know that they tend to favor high-tech methods, archaeometry, fancy quantitative methods, and reports about "the earliest" this or that. While I can only recall one or two papers in Science that I thought were incompetent (a much better record than most archaeology journals, some of which are full of incompetent articles), their selection of archaeology papers is definitely biased in a certain direction. I think one way of expressing this might be that Science publishes archaeology articles that will appeal on methodological grounds to non-archaeological scientists. My guess is that papers that are more synthetic or less methods-heavy don't make it through the initial review (which is done by non-archaeological scientists).
This suggestion links up with the issue of what "science" means in archaeology. Not in some big ontological sense, but in practical terms. What kinds of archaeology can be called scientific, and what kinds of archaeology are recognized by other scientists (such as editors at the journal Science) as being scientific in nature? "Scientific method" in archaeology has two meanings. (1) On the one hand, science means research done following a scientific epistemology (empirically testable, logically coherent, done with a critical spirit, etc.), whether it employs high-tech methods or not. (2) Scientific methods in archaeology also means the use of non-archaeological scientific techniques: archaeometry and the like. Now ideally, these two meanings of "science" go together, but often they do not. Much research that is epistemologically scientific does not use jazzy methods (as in the paper that was recently rejected by Science). And scientific methods (sense #2) are often used in non-scientific research (sense #1).
What do I mean by that last observation? Consider two examples. First, there are post-processual archaeologists who explicitly reject a scientific epistemology for archaeology, yet they embrace archaeometric methods. This is science of definition #2, done in opposition to science of definition #1. Would this kind of research get by the editors of the journal Science? Good question. Second, there is research that would claim to follow a scientific epistemology, but is too sloppy to be considered good science. Many archaeometric sourcing studies fit here. The archaeologist picks a bunch of artifacts of type X and subjects them to technical provenience analyses. But if those artifacts were not selected with a rigorous sampling scheme, then this is simply not a good scientific research design. The results cannot be generalized beyond the sample that was analyzed (although archaeologists who are sloppy in picking their samples tend to also be sloppy in overgeneralizing their results). Now this kind of work can easily get past the editors and reviewers of journals, which always puzzles me and bugs the heck out of me. I have pissed off a number of authors and editors over the years with my complaints about the publication of such papers.
So, what's a scientific archaeologist (definition #1, whether or not using methods from definition #2) to do? I guess try another journal. For the sake of the discipline, one can only hope that these powerful editors at Science are not too often fooled by science #2 that does not conform to science #1.
Labels: Archaeology and the media, Journals, Peer review, Science
ARTICLES
I just found two Mesoamerican articles in the online-first section at the Proceedings of the National Academy of Sciences. It's great to see good archaeology receive high-profile coverage in places that are seen by a wide range of disciplines. These papers describe early steps toward larger research goals, although they tend to be phrased as if they were reaching those goals right now.
Chase, Arlen F., Diane Z. Chase, Christopher T. Fisher, Stephen J. Leisz, and John F. Weishampel
2012 Geospatial revolution and remote sensing LiDAR in Mesoamerican archaeology. Proceedings of the National Academy of Sciences (published online first).
Scarborough, Vernon L., Nicholas P. Dunning, Kenneth B. Tankersley, Christopher Carr, Eric Weaver, Liwy Grazioso, Brian Lane, John G. Jones, Palma Buttles, Fred Valdez, and David L. Lentz
2012 Water and sustainable land use at the ancient tropical city of Tikal, Guatemala. Proceedings of the National Academy of Sciences (published online first).
LIDAR!
The first paper is a brief description of recent LIDAR mapping at the Maya city of Caracol and the western Mexican city of Angamuco. Most readers will probably already have seen some LIDAR maps; if not, check out this article and some of the other publications. These maps are absolutely incredible. Arlen and Diane Chase (and their crew) spent decades instrument mapping at Caracol, and they still had only covered a portion of the site. Now with one application, the LIDAR map covers the entire (huge) urban area, with high resolution and great accuracy. Arlen first showed me the maps two years ago and I was blown away. It is hard to express just how much of a leap forward this is for archaeological mapping, particularly in highly vegetated areas like the Maya lowlands. The PNAS paper only has a couple of images (see above); see some of the other publications, cited in that paper, for more images.
The Angamuco map, done for the project directed by Chris Fisher, is also pretty amazing (above). It is one of a series of west Mexican urban settlements built on lava flows. Some French teams have been working on other sites of this type, which have the potential to greatly illuminate our understanding of urban form (since many house foundations and other features can be mapped).
So what are the larger research goals that can be addressed with these and other LIDAR-mapped sites? Leaving aside the obvious goals of providing more details about individual archaeological sites, I am excited about this work because of the potential to advance our understanding of urban morphology in ancient cities. It is going to take some time to reach this goal, since we presently lack the methods to translate good maps (whether made with LIDAR or with old non-electronic instrument mapping, or with a compass and tape) into rigorous results about city layout and planning. It is striking to see high-tech spatial methods (LIDAR, various prospecting methods like ground-penetrating radar, NASA satellite imagery) used to make visually arresting maps, which are then interpreted in a subjective and impressionistic manner.
Perhaps the situation is analogous to provenience studies of artifacts. For many years, even decades, we have had good data on the places of origin of lots of artifacts, but few models or concepts on how to translate those data into reliable economic inferences. I have complained about this for many years, in various review articles and such. Methods and data often far outrun our interpretive approaches. Now, finally, archaeologists are working out methods for reconstructing things like market systems from artifact sourcing studies (see, for example, Garraty and Stark, eds, 2010, Archaeological Approaches to Market Exchange in Ancient Societies, Univ Press of Colorado). So, I hope that archaeologists and others will make the kinds of advances in studying urban form that are needed to really take advantage of the great maps produced by LIDAR (and other methods).
MAYA RESERVOIRS AND SUSTAINABILITY
The second PNAS paper is a nice study of the construction and use of reservoirs at the Maya city of Tikal. Over many years, Vernon Scarborough has led an effort to show how the ancient Maya managed water resources, and the social context of water. His excellent study (The Flow of Power: Ancient Water Systems and Landscapes, 2003, SAR Press) helps put the Maya case into a broader comparative framework. The reservoirs at Tikal (see image above) have been known for a long time, but now Scarborough and his colleagues have learned how and when they were built and how they were used.
I am a bit skeptical about the sustainability argument of this paper. There isn't much of an explicit argument here. The implied argument seems to be that the schemes for water control and use identified by fieldwork were a form of sustainable land use, thereby permitting the city of Tikal to flourish for many centuries. Perhaps. That seems a reasonable notion, but how can it be confirmed or falsified? This is a causal argument (these practices caused -- or at least allowed and stimulated -- a long occupation). But to confirm this hypothesis, more cases and better interpretive models are needed. What would non-sustainable practices look like? Are there some cities that used major water control methods and lasted for many centuries, while other similar cities did not use the water technology and did not last as long?
It is difficult, perhaps impossible, to make a convincing causal argument from a single case. One minimally needs to consider the counterfactual case -- suppose that the rulers or builders of Tikal had NOT designed such clever water control features. What would have been the consequences? Perhaps an argument can be made that the city would not have lasted so long, or would not have grown so large, without these features. But even though this kind of explicit counterfactual argument can suggest a causal model, any real conclusions about ancient sustainable practices require a much larger sample of cases. I develop this argument in my 2010 paper in CAJ: my point is that archaeologists have data to address issues like urban sustainability, but we have yet to assemble rigorous samples and perform the necessary analyses to produce reliable results.
Research like that described in the paper by Scarborough et al in PNAS is important for understanding Tikal and for building knowledge about ancient systems of water control. It could also be important for generating findings about ancient sustainable practices, but to do this, it needs to be joined by many more studies to build a reliable base of information. This paper is an excellent step in that direction, but I think it is premature to make any claims about sustainability from single studies like this.
Publishing Archaeology Blog by Michael E. Smith is licensed under a Creative Commons License.
__________________
Publishing Archaeology
Saturday, September 1, 2012
Natural experiments in archaeology
I've been wondering recently why more archaeologists don't use the method and concept of "natural experiment." In one sense, natural experiments are not uncommon in archaeology; we sometimes call them "controlled comparisons," probably borrowing that term from cultural anthropology (Eggan 1954). But we rarely use the phrase "natural experiment," which has been gaining ground in the comparative branches of the social sciences, history, and ecology. This isn't just a terminological issue; natural experiments are all about how to determine causality. Most archaeologists, however, avoid discussing causality, and this may account for the rarity of the natural experiment concept in our field.

(Postmodern, postprocessualist, and other "post" archaeologists can probably stop reading here, unless you are looking for more fodder to critique simplistic scientistic Smith).
A natural experiment is "an observational study that nonetheless has the properties of an experimental research design" (Gerring 2007: 216). The recent collection edited by Jared Diamond and James Robinson (2010) presents a series of historical natural experiments, including one archaeological case study. In an insightful review of the book in the journal Science, James Mahoney notes:
"Historical analysts cannot, of course, test their ideas by running controlled experiments. They cannot randomly assign cases to treatment and control groups. But they can sometimes make a credible claim that the assignment of cases to different groups is 'as if' random. The label 'natural experiment' (or 'quasi-experiment') is often reserved for those studies in which this assumption seems especially plausible." (Science vol. 327, p. 1578)
It is probably no surprise that the archaeological example in this book is by Patrick Kirch (2010), who has been doing "natural experiments" or "controlled comparisons" for years. Island societies are productive candidates for natural experiments because of their boundedness and relative isolation. In this chapter he compares Hawaii, the Marquesas and Mangaia as contrasting environments in which initially similar ancestral Polynesian societies evolved in very different directions.
[Table: four types of research design in case study research, classified by the presence of spatial and temporal variation; reproduced from Gerring 2007: 153]
I won't go into details here, beyond recommending Diamond and Robinson and other works on natural experiments (Gerring 2007; Dunning 2008; Labzina 2011). But just to show how this line of methodological thinking lines up with archaeology, consider John Gerring's depiction of types of experimental design in case study research (recall that most research in archaeology follows the approach called case study research in other disciplines).
Quadrant #1 is the classic experimental design. Two populations, the treatment group and the control group (spatial variation), are observed through time, before and after the "treatment" or "perturbation" (the term used in Diamond and Robinson 2010). Quadrants 2 and 3, with only temporal or spatial variation, are relatively common in archaeology. The fourth quadrant is for studies where there is no spatial or temporal variation. This is perhaps the most common archaeological situation. You do a study, come up with some results, and then put forward an argument about what they mean: what were the dynamics, how and why did something happen, etc. But since you have made no formal comparison of either a before-and-after nature or a spatial nature, it is very difficult to demonstrate causality. The typical course of action is to make as strong an argument as one can. "That's my story and I'm stickin' to it!" But a methodologically superior approach would be to find a way to make an explicit temporal and/or spatial comparison.
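The logic of quadrant #1 can be made concrete with a difference-in-differences calculation, the standard way of combining before-and-after (temporal) and treatment-versus-control (spatial) variation into a single causal estimate. The sketch below uses made-up numbers standing in for something like a "prosperity index" for two groups of sites; the numbers and the scenario are hypothetical illustrations, not real archaeological data.

```python
# Quadrant #1 (the classic experimental design): two groups of cases observed
# before and after a perturbation. The difference-in-differences estimate
# subtracts the control group's change from the treated group's change,
# assuming both groups would otherwise have changed in parallel.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Estimated effect of the perturbation on the treated group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical prosperity index for two groups of sites, observed before and
# after a perturbation (say, the introduction of a new crop):
effect = diff_in_diff(treated_before=10.0, treated_after=18.0,
                      control_before=11.0, control_after=13.0)
print(effect)  # 6.0 -- the treated sites gained 8 points, the controls only 2
```

Quadrants #2 and #3 amount to using only one of the two differences (the temporal change alone, or the spatial gap alone), which is why they support weaker causal claims than the full design.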
When there just happens to be no good comparison one can make, then you need to use counterfactual logic to make a causal claim. There is a rather large literature on formal counterfactual causality in sociology and political science (e.g., Gerring 2005, Heckman 2005, Morgan & Winship 2007). In its simplest form, making a causal claim without an experiment or comparison requires one to consider the counterfactual situation of what would have happened had the hypothesized causal agent not been present, or acted differently. One then shows that such a situation does not match reality, which gives support to the causal hypothesis.
Here is an example. I think that the ability of people (whose houses I have excavated) to cultivate cotton in Aztec-period Morelos was the main source of their economic prosperity. That is, cotton cultivation was a major cause of their prosperity. I don't have a "before and after" comparison (first no cotton, then cotton cultivation), and there are not enough comparative cases of sites where we know about cotton cultivation (presence or absence) and prosperity in enough detail. (Once some current projects are complete, this situation will improve). So I explore the counterfactual situation: what would the local economy be like if they were not able to grow cotton? They would have had fewer resources to trade with other areas. Cotton textiles served as money, and thus cotton was far more valuable than other local resources such as maize, bark paper, or basalt. Also, it would have been far more difficult to come up with their taxes, assessed in cotton textiles. (And there would probably be far fewer spindle whorls in domestic middens).
Now I can come up with all sorts of plausible factors to support my causal claim about cotton and prosperity, but the argument will be much more effective once I can add some formal comparisons. For example, were people at Calixtlahuaca, where cotton was not cultivated, less prosperous? This will be a start, but more cases are needed to make a strong argument. Or perhaps I could show that prosperity declined after the Spanish conquest (it almost certainly did, but demonstrating that is quite difficult), when historical sources tell us that irrigated cotton fields were converted to sugar cane.
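A comparison like the Calixtlahuaca one is a spatial-variation-only design (quadrant #3 in Gerring's table), and at its simplest it is just a comparison of group means. The sketch below shows that bare-bones logic; the prosperity scores are hypothetical placeholders, not excavation results, and real scores would have to be built from something like domestic artifact assemblages.

```python
from statistics import mean

# Hypothetical prosperity scores for excavated houses at cotton-growing sites
# versus sites where cotton could not be cultivated (illustrative numbers only).
cotton_sites = [14.2, 16.8, 15.1, 17.0]
no_cotton_sites = [11.0, 12.4, 10.7, 12.9]

gap = mean(cotton_sites) - mean(no_cotton_sites)
print(f"mean prosperity gap: {gap:.2f}")

# With only spatial variation and no before/after observations, a positive gap
# is consistent with the cotton hypothesis but cannot rule out confounds
# (soils, trade routes, political status) -- hence the need for more cases.
```

This is why a single two-group comparison is only "a start": the design can suggest the causal claim but cannot, by itself, demonstrate it.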
In any case, these various comparative scenarios are natural experiments. We mostly do observational (case study) research in archaeology. To the extent that we can design and describe our research in quasi-experimental terms and use this approach to explore causality, our explanations will be more convincing, and scholars and others outside of archaeology will be more likely to view our discipline as an empirical scientific field with something to say about the world. Now, many archaeologists don't want our field to be scientific. They want to use fashionable high-level theory to interpret the past, without the constraints of scientific methods. That is fine for some purposes, but if we want anyone outside of the humanities to pay attention to us and find something of value in our research, then we need to do all we can to beef up our methods in a scientific direction. We need to pursue science #1 (epistemological science) and not just science #2 (jazzy technical methods). And natural experiments are one way to do this.
Diamond, Jared and James A. Robinson (editors)
2010 Natural Experiments of History. Harvard University Press, Cambridge.
Dunning, Thad
2008 Improving Causal Inference: Strengths and Limitations of Natural Experiments. Political Research Quarterly 61: 282-293.
Eggan, Fred
1954 Social Anthropology and the Method of Controlled Comparison. American Anthropologist 56:743-763.
Gerring, John
2005 Causation: A Unified Framework for the Social Sciences. Journal of Theoretical Politics 17: 163-198.
Gerring, John
2007 Case Study Research: Principles and Practices. Cambridge University Press, New York.
Heckman, James J.
2005 The Scientific Model of Causality. Sociological Methodology 35: 1-97.
Kirch, Patrick V.
2010 Controlled Comparison and Polynesian Cultural Evolution. In Natural Experiments of History, edited by Jared Diamond and James A. Robinson, pp. 15-52. Harvard University Press, Cambridge.
Labzina, Elena
2011 No Free Lunch: Costs and Benefits of Using the Concept of Natural Experiments in Political Science. M.A. thesis, Department of Political Sciences, Central European University.
Morgan, Stephen L. and Christopher Winship
2007 Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge University Press, New York.
Labels: Causality, Comparisons, Experiments, Science
Wednesday, August 29, 2012
20,001: A Bibliography Odyssey
My Endnote database just passed 20,000 references. I was getting psyched out about seeing the 20,000th reference, but then of course I went along merrily adding references and forgot to savor the moment. Now it has 20,001. I just got a new computer, with Windows 7 and Office 2010, and I upgraded other computers. So of course Endnote X3 isn't compatible with Word 2010 and I had to upgrade Endnote to version X6. Not too different.
20,001 references, it reminds me of the movie "20,001: A Bibliography Odyssey." The aliens plant an obelisk that teaches the ape-people how to construct databases for citations. They get all their information into the database, but then Hal locks them out and they lose everything. "Open the database door, Hal!" "I'm sorry, Dave. You won't be able to use your citations anymore." Now was this the original movie, or am I mixing it up with my bibliography nightmare?
Sunday, August 26, 2012
Some memorable reviews of articles
Most reviews of manuscripts for journals are rather pedestrian. This part is fine, that part needs work, the text on the map can't be read at that scale, cite so-and-so, cite me, etc. Sometimes the reviews are more memorable, in either a positive or a negative fashion. Here are my recollections of three reviews of manuscripts of mine by reviewers for journals. They follow a progression from amusing to annoying.
Brian Tomaszewski and I published a paper in the Journal of Historical Geography. This was an analysis of places in the Toluca Valley, based on the spatial depiction of the specific towns mentioned in individual native historical sources. We used those data to make some inferences about changing political dynamics. The journal sent the manuscript blind to reviewers--that is, the authors' names were omitted. One reviewer provided a helpful detailed review, but complained that the paper didn't cite Michael Smith's work sufficiently! Smith has worked in the Toluca Valley, noted the reviewer, and many of his papers are relevant to this manuscript. Then, the journal editor evidently didn't make the connection between the Smith mentioned in the review and the name of the second author, and asked us to cite this guy Smith. Now some authors cite themselves too much and others too little. I sometimes worry that I am closer to the former than the latter position, so I try not to go overboard. I later told the reviewer I was a co-author and we had a good laugh.
Tomaszewski, Brian M. and Michael E. Smith 2011 Politics, Territory, and Historical Change in Postclassic Matlatzinco (Toluca Valley, central Mexico). Journal of Historical Geography 37: 22-39.
All too often, reviewers complain that an author has not written the paper that the reviewer would like them to write. In this case, I had not written the BOOK the author had wanted. One reviewer of this paper gave it a critical review--twice. First for a journal that rejected the paper, and then for the journal that eventually published it. As I recall, the criticisms were very general, more like objections to my overall approach than specific problems with the manuscript. After extensive revision (thanks to a number of very helpful and detailed critiques), the paper was published. This reviewer later told me that it would take a book to produce the kind of work they wanted me to write, not an article!
Smith, Michael E.2010 Sprawl, Squatters, and Sustainable Cities: Can Archaeological Data Shed Light on Modern Urban Issues? Cambridge Archaeological Journal 20: 229-253.
Shortly after the publication of an edited volume, a colleague in Mexico asked me to submit a paper (in Spanish) to a local journal on the general subject of one of my chapters in that volume. This person suggested that a Spanish translation of the chapter would be sufficient, but I thought it would be more appropriate to write a new paper based on the original chapter but with some new material and perspectives. I wrote the paper and submitted it to the journal. One of the reviewers was harshly critical, complaining about lazy U.S. scholarship. How dare I merely translate a book chapter and try to pass it off to a Mexican journal as a separate paper! And what's worse, the manuscript does not even say that it is a translation of a book chapter. This kind of arrogant scholarship, an example of academic imperialism, should not be tolerated!
I was really angry and ready to withdraw the paper. Luckily, my colleague smoothed things over and the paper was published. Authors often complain that a reviewer didn't really read the paper carefully. In my case, it is clear that the reviewers did not compare the paper to the original book chapter.
Academic imperialism is something archaeologists working in foreign countries need to watch out for. Academic imperialism refers to foreign scholars sweeping in to a foreign country, doing their research, and leaving, without much interaction with their local colleagues and without publishing in the journals of the country and region. I work hard to avoid academic imperialist practices, which is one reason I was so angered by the clueless review.
1. Cite yourself.
Brian Tomaszewski and I published a paper in the Journal of Historical Geography. This was an analysis of places in the Toluca Valley, based on the spatial depiction of the specific towns mentioned in individual native historical sources. We used those data to make some inferences about changing political dynamics. The journal sent the manuscript blind to reviewers--that is, the authors' names were omitted. One reviewer provided a helpful, detailed review, but complained that the paper didn't cite Michael Smith's work sufficiently! Smith has worked in the Toluca Valley, noted the reviewer, and many of his papers are relevant to this manuscript. The journal editor evidently didn't make the connection between the Smith mentioned in the review and the name of the second author, and asked us to cite this guy Smith. Now, some authors cite themselves too much and others too little. I sometimes worry that I am closer to the former than the latter, so I try not to go overboard. I later told the reviewer I was a co-author and we had a good laugh.
Tomaszewski, Brian M. and Michael E. Smith 2011 Politics, Territory, and Historical Change in Postclassic Matlatzinco (Toluca Valley, central Mexico). Journal of Historical Geography 37: 22-39.
2. You should have written a book, not this paper.
All too often, reviewers complain that an author has not written the paper that the reviewer would like them to write. In this case, I had not written the BOOK the reviewer had wanted. One reviewer of this paper gave it a critical review--twice: first for a journal that rejected the paper, and then for the journal that eventually published it. As I recall, the criticisms were very general, more like objections to my overall approach than specific problems with the manuscript. After extensive revision (thanks to a number of very helpful and detailed critiques), the paper was published. This reviewer later told me that it would take a book, not an article, to produce the kind of work they wanted me to write!
Smith, Michael E. 2010 Sprawl, Squatters, and Sustainable Cities: Can Archaeological Data Shed Light on Modern Urban Issues? Cambridge Archaeological Journal 20: 229-253.
3. Yankee Imperialist Pig.
Shortly after the publication of an edited volume, a colleague in Mexico asked me to submit a paper (in Spanish) to a local journal on the general subject of one of my chapters in that volume. This person suggested that a Spanish translation of the chapter would be sufficient, but I thought it would be more appropriate to write a new paper based on the original chapter but with some new material and perspectives. I wrote the paper and submitted it to the journal. One of the reviewers was harshly critical, complaining about lazy U.S. scholarship. How dare I merely translate a book chapter and try to pass it off to a Mexican journal as a separate paper! And what's worse, the manuscript does not even say that it is a translation of a book chapter. This kind of arrogant scholarship, an example of academic imperialism, should not be tolerated!
I was really angry and ready to withdraw the paper. Luckily, my colleague smoothed things over and the paper was published. Authors often complain that a reviewer didn't really read the paper carefully. In my case, it is clear that the reviewer did not compare the paper to the original book chapter.
Academic imperialism is something archaeologists working in foreign countries need to watch out for. It refers to foreign scholars sweeping into a country, doing their research, and leaving, with little interaction with their local colleagues and without publishing in the journals of the country and region. I work hard to avoid academic imperialist practices, which is one reason I was so angered by the clueless review.
Labels: Academic imperialism, Journals, Reviews
Thursday, August 9, 2012
What is the significance of your research?
Are academic archaeologists obsessed with significance? Is this a good thing or a bad thing?
I am putting the final touches on a grant proposal being submitted jointly by archaeologists and some (non-anthropological) social scientists. At a meeting yesterday, the sociologist and the political scientist surprised me by questioning the "significance" section of the proposal. I review lots of proposals, student and senior, for NSF and Wenner-Gren, and there is almost always a section that describes the "significance" of the research. I always include such a section in my proposals. I insist that my students include a significance section. But these other scholars had rarely seen such a section in proposals in their disciplines. Why do we need this? they asked.
We had placed a "significance" section at the end of the proposal in which we stated the significance of the research for each of the four disciplines represented among the PIs, and then we outlined the importance of the project in more general intellectual terms. The latter was fine with everyone, but these non-archaeologists were puzzled about why we wanted to state the significance of the research for each discipline. We archaeologists (Barbara Stark and I) were dumbfounded. We always have a significance section!
The attitude of our non-archaeological colleagues seemed to be that the entire proposal made the case for the significance of the research, so why re-state this at the end? For archaeology, I think our obsession with significance may come from the detailed, painstaking, and local nature of the research process. Fieldwork is quite a picky affair. Many archaeologists do great fieldwork but have trouble putting it into a broader intellectual context. Yet that broader context is highly valued by the academic disciplines of anthropology and archaeology. NSF rarely wants to fund fieldwork that only illuminates a narrow local domain. NSF-Archaeology wants to support fieldwork that has wider scientific implications and relates to big issues. Yet we have to put lots of picky details into our proposals. Having a significance section makes us stand back and contextualize our research with respect to bigger issues.
Mainstream anthropology has similar emphases. The Wenner-Gren grant applications include a significance section with these instructions:
"Item 25. What contribution does your project make to anthropological theory and to the discipline? ... A successful application will emphasize the contribution its proposed research will make, not only to the specific area of research being addressed, but also to the broader field of anthropology."
While I support the emphasis on broader contributions, I question Wenner-Gren's emphasis on "anthropological theory." A lot of good research makes little contribution to "anthropological theory," yet has significance within the discipline. When I was on the W-G review committee, I interpreted this section more broadly than it was written. But the anthropological concern with significance seems parallel to that of archaeology: the research process is painstaking, picky, and local, and so scholars need to step back to put it all into perspective.
One of the fascinating aspects of working with a transdisciplinary research team is experiencing these contrasting elements of disciplinary culture. Archaeologists and anthropologists are obsessed with stating the significance of our research, but other social scientists are not. Or in another example, parts of the research design that seemed fine to me were viewed as too sloppy by the sociologist. The resulting act of tightening up our independent variables proved very instructive and helpful.
Perhaps archaeologists are obsessed with the significance thing, but it is a necessary and understandable obsession.
Wednesday, July 25, 2012
Archaeology in PNAS
Area of the Puchituk Terminus at Caracol
Chase, Arlen F., Diane Z. Chase, Christopher T. Fisher, Stephen J. Leisz, and John F. Weishampel
2012 Geospatial revolution and remote sensing LiDAR in Mesoamerican archaeology. Proceedings of the National Academy of Sciences (published online first).
Scarborough, Vernon L., Nicholas P. Dunning, Kenneth B. Tankersley, Christopher Carr, Eric Weaver, Liwy Grazioso, Brian Lane, John G. Jones, Palma Buttles, Fred Valdez, and David L. Lentz
2012 Water and sustainable land use at the ancient tropical city of Tikal, Guatemala. Proceedings of the National Academy of Sciences (published online first).
LIDAR!
The central portion of Angamuco
The Angamuco map, done for the project directed by Chris Fisher, is also pretty amazing (above). It is one of a series of west Mexican urban settlements built on lava flows. Some French teams have been working on other sites of this type, which have the potential to greatly illuminate our understanding of urban form (since many house foundations and other features can be mapped).
So what are the larger research goals that can be addressed with these and other LIDAR-mapped sites? Leaving aside the obvious goals of providing more details about individual archaeological sites, I am excited about this work because of the potential to advance our understanding of urban morphology in ancient cities. It is going to take some time to reach this goal, since we presently lack the methods to translate good maps (whether made with LIDAR or with old non-electronic instrument mapping, or with a compass and tape) into rigorous results about city layout and planning. It is striking to see high-tech spatial methods (LIDAR, various prospecting methods like ground-penetrating radar, NASA satellite imagery) used to make visually arresting maps, which are then interpreted in a subjective and impressionistic manner.
Perhaps the situation is analogous to provenience studies of artifacts. For many years, even decades, we have had good data on the places of origin of lots of artifacts, but few models or concepts on how to translate those data into reliable economic inferences. I have complained about this for many years, in various review articles and such. Methods and data often far outrun our interpretive approaches. Now, finally, archaeologists are working out methods for reconstructing things like market systems from artifact sourcing studies (see, for example, Garraty and Stark, eds., 2010, Archaeological Approaches to Market Exchange in Ancient Societies, University Press of Colorado). So, I hope that archaeologists and others will make the kinds of advances in studying urban form that are needed to really take advantage of the great maps produced by LIDAR (and other methods).
MAYA RESERVOIRS AND SUSTAINABILITY
Reservoirs in central Tikal
I am a bit skeptical about the sustainability argument of this paper. There isn't much of an explicit argument here. The implied argument seems to be that the schemes for water control and use identified by fieldwork were a form of sustainable land use, thereby permitting the city of Tikal to flourish for many centuries. Perhaps. That seems a reasonable notion, but how can it be confirmed or falsified? This is a causal argument (these practices caused -- or at least allowed and stimulated -- a long occupation). But to confirm this hypothesis, more cases and better interpretive models are needed. What would non-sustainable practices look like? Are there some cities that used major water control methods and lasted for many centuries, while other similar cities did not use the water technology and did not last as long?
It is difficult, perhaps impossible, to make a convincing causal argument from a single case. One minimally needs to consider the counterfactual case -- suppose that the rulers or builders of Tikal had NOT designed such clever water control features. What would have been the consequences? Perhaps an argument can be made that the city would not have lasted so long, or would not have grown so large, without these features. But even though this kind of explicit counterfactual argument can suggest a causal model, any real conclusions about ancient sustainable practices require a much larger sample of cases. I develop this argument in my 2010 paper in CAJ: my point is that archaeologists have data to address issues like urban sustainability, but we have yet to assemble rigorous samples and perform the necessary analyses to produce reliable results.
Research like that described in the paper by Scarborough et al. in PNAS is important for understanding Tikal and for building knowledge about ancient systems of water control. It could also be important for generating findings about ancient sustainable practices, but to do this, it needs to be joined by many more studies to build a reliable base of information. This paper is an excellent step in that direction, but I think it is premature to make any claims about sustainability from single studies like this.
Labels: Journals, Mapping, Sustainability
Friday, June 29, 2012
A good day in Mexico: Carbon, thin sections, an index, and chicharron
I received a number of good things by email this morning, including radiocarbon dates, ceramic petrography results, and a completed index for a book! When it rains it pours. Dealing with this stuff left little time for the sherd drawing I am supposed to be doing in the lab. This is the first time I have used a professional indexer for a book. Normally I index my own books, and I like the process. An index is an important tool, and constructing a good index is an intellectual exercise as well as an organizational task. Don't you hate it when a book has a lame, four-page index and you are trying to find some specific information? Don't you REALLY hate it when a book lacks an index entirely? But indexing takes time, and with three co-editors for this book we decided to hire a professional indexer.
This is a good book, buy it, you'll like it. You see, after going on and on in this blog about how most edited volumes in archaeology are worthless, I can't afford to edit a bad book. (Here is my original post on worthless edited volumes, and a later related post). So any volume I edit now must be good, almost by definition (please suspend your critical thinking skills here temporarily).
I can't say much about the petrographic results. This is our first batch of ceramic petrographic samples, submitted by my student Julie Novic and analyzed by Jenny Meanwell of MIT. Julie hasn't had time to see how they look. Do our macroscopic paste types match petrographic reality? What about our ceramic types? Do the petrographic data support our hypotheses on ceramic production, exchange, and consumption? Julie is working on neighborhoods and urban spatial organization at Calixtlahuaca, using our surface data (and she is my co-author in chapter 1 of the above book). We have another sampling scheme for petrography for the excavated ceramics, and we should get the results before too long. I'm glad customs or airport security didn't get weirded out by the saw blade I carried to Mexico in my luggage!
The most exciting news today was a new batch of radiocarbon results from the University of Arizona AMS lab. This is the first bunch from our second batch of dates. We are waiting for the entire suite before running them through OxCal, but we are also working with the uncalibrated dates, not to assign ages, but to estimate phase lengths. It turns out that my colleagues George Cowgill and Keith Kintigh wrote a handy-dandy program a while ago that uses Monte Carlo simulation to estimate likely phase lengths from a suite of radiocarbon dates. We ran our initial batch of 20 dates, and we will run the entire group when they are all done. The simulation results, in conjunction with the calibrated dates, will give us estimates for the calendar dates of our ceramic phases. I am in the process of bugging George and Keith to actually publish their nice study and their algorithm, which illustrates some features of radiocarbon results that seem counterintuitive to many archaeologists.
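The Cowgill/Kintigh program itself is unpublished, but the underlying logic -- that the observed spread of a suite of uncalibrated dates systematically misstates the true phase length, and that simulation can correct for this -- can be sketched roughly as follows. All function names and parameter values here are my own invention for illustration, not theirs:

```python
import random
import statistics

def simulated_span(phase_len, n_dates, sigma, trials=500):
    """Median observed span (max - min) of n simulated uncalibrated dates
    drawn uniformly from a phase of length phase_len (years), each
    measured with Gaussian error sigma."""
    spans = []
    for _ in range(trials):
        true_dates = [random.uniform(0, phase_len) for _ in range(n_dates)]
        measured = [t + random.gauss(0, sigma) for t in true_dates]
        spans.append(max(measured) - min(measured))
    return statistics.median(spans)

def estimate_phase_length(observed_span, n_dates, sigma):
    """Grid-search for the phase length whose typical simulated span best
    matches the span actually observed in the suite of dates."""
    candidates = range(10, 1001, 25)
    return min(candidates,
               key=lambda L: abs(simulated_span(L, n_dates, sigma)
                                 - observed_span))
```

One counterintuitive feature this kind of simulation exposes: with a short phase and sizable measurement errors, the span of the measured dates can exceed the true phase length, so reading phase duration straight off the spread of dates can mislead in either direction.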
I must admit to my occasional surprise when some archaeological analysis or another comes out with excellent results. With so many potential confounding factors, it sometimes seems amazing when we get solid, rigorous, and clean results about things that happened centuries or millennia ago. It is still too soon to get excited, but the dates look great. Angela and I did the ceramic seriation and defined three phases based solely on ceramic type similarities. Then we looked at the seriated deposits stratigraphically and they were almost always in the right order. So two independent types of evidence agree. Now we look at the radiocarbon ages, and lo and behold the ceramic phases plot out in nice chronological sequence, with only a very small amount of overlap (yes, I know, once we calibrate the dates it will be much messier with lots of overlap. I have the bad fortune of working in a period when the calibration curve goes back on itself and ALL relevant dates have multiple age ranges. This is where the Cowgill/Kintigh procedure will help).
So what could be better than all this stuff arriving first thing in the morning? Well, our lunch at the lab turned out to be fresh chicharron, avocadoes, and double cream cheese (queso de doble crema). It didn't look exactly like this photo from the internet, but we put the stuff into hot tortillas and it doesn't get much better than that! And I did get a few sherds drawn as well.
Labels: Archaeometry, Chicharron, Dating, Indexes
Wednesday, June 27, 2012
"Rigorous evaluation of human behavior"
I've been reading the June 15 issue of Science, and I am struck by the irony of two articles. The first is a news item titled "Social scientists hope for reprieve from the Senate." The U.S. House of Representatives recently voted to prohibit the NSF from funding political science research, and to reduce the scale of the American Community Survey (a census-based social survey). The bill was co-sponsored by an unenlightened congressman from my own (unenlightened) state of Arizona, Jeff Flake (jokes about "what's in a name" come to mind here). "Flake says political science isn't sufficiently rigorous to warrant federal support." Does Flake base his policies and views on rigorous research? I doubt it. Conservative politicians periodically go after the social sciences in Washington, and we should all hope that the current attack is as unsuccessful as previous ones have been. Is archaeology more or less rigorous than political science? If we use science definition 1, I think we are in bad shape. But we can always take refuge in science definition 2 ("see, we use complicated scientific technology!") to assert our rigor.
The second article in the June 15 Science was a short essay in a section called "Science for Sustainable Development." This essay, titled "Rigorous Evaluation of Human Behavior," is written by Esther Duflo, an economist at MIT. She makes the valid and important point that the role of science in promoting sustainable development and alleviating poverty should include social scientific studies of behavior. I wonder what Jeff Flake would think about this. When I saw the title I was encouraged, but then I got to the heart of Duflo's essay: the way to conduct "rigorous" studies of human behavior is to use randomized controlled trials. "This makes for good science: these experiments make it possible to test scientific hypotheses with a degree of rigor that was not available before."
In some fields of social science and public health, the randomized controlled trial (RCT) has become the supposed "gold standard" of research methods, proclaimed to be far superior to other approaches. Apart from the fact that we simply cannot do RCTs in archaeology (except perhaps in a few very limited situations that I can't think of offhand), I must admit that I am more supportive of the growing critique and contextualization of RCTs in social science. The RCT is a narrow approach that achieves internal rigor at the expense of external relevance and validity. Philosopher of science Nancy Cartwright puts it this way, using economics to illustrate the trade-off of internal rigor and external validity:
“Economists make a huge investment to achieve rigor inside their models, that is to achieve internal validity. But how do they decide what lessons to draw about target situations outside from conclusions rigorously derived inside the model? That is, how do they establish external validity? We find: thought, discussion, debate; relatively secure knowledge; past practice; good bets. But not rules, check lists, detailed practicable procedures; nothing with the rigor demanded inside the models.” (Cartwright 2007:18).
Or consider a recent paper by sociologist Robert J. Sampson (2010), who promotes the value of observational research in criminology and sociology. He deflates three myths of RCTs in criminology:
Myth 1: Randomization solves the causal inference problem.
Myth 2: Experiments are assumption (theory) free.
Myth 3: Experiments are more policy relevant than observational studies.
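Sampson's first myth is worth unpacking. Randomization really does remove selection bias -- that is the "rigor inside the model" Cartwright describes -- but that is not the same as solving the whole causal inference problem. A toy simulation (my own invented numbers, not from either paper) shows exactly what randomization buys:

```python
import random

def simulate(n=10000, seed=42):
    """Toy contrast between an observational study and an RCT. A hidden
    trait ('motivation') drives both self-selection into treatment and
    the outcome, biasing the naive observational comparison; random
    assignment breaks that link and recovers the built-in effect (+2.0)."""
    rng = random.Random(seed)
    true_effect = 2.0

    # Observational design: motivated people both seek out the
    # treatment and do better anyway.
    obs_treated, obs_control = [], []
    for _ in range(n):
        motivation = rng.gauss(0, 1)
        treated = motivation + rng.gauss(0, 1) > 0   # self-selection
        outcome = motivation + rng.gauss(0, 1) + (true_effect if treated else 0)
        (obs_treated if treated else obs_control).append(outcome)

    # Randomized design: a coin flip assigns treatment, so motivation
    # is balanced across the two groups in expectation.
    rct_treated, rct_control = [], []
    for _ in range(n):
        motivation = rng.gauss(0, 1)
        treated = rng.random() < 0.5
        outcome = motivation + rng.gauss(0, 1) + (true_effect if treated else 0)
        (rct_treated if treated else rct_control).append(outcome)

    mean = lambda xs: sum(xs) / len(xs)
    return (mean(obs_treated) - mean(obs_control),
            mean(rct_treated) - mean(rct_control))

obs_gap, rct_gap = simulate()
# obs_gap overstates the effect; rct_gap comes out close to 2.0
```

What the simulation cannot tell you is whether the effect generalizes beyond this one toy population -- which is Cartwright's external-validity point, and the sense in which Myth 1 is a myth.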
If you want more context on internal vs. external validity, or how various social science methods relate to an experimental ideal, see Gerring (2007). He is one of those political scientists who, according to Congressman Flake, must be non-rigorous. But the message here is analogous to my views on science types 1 and 2 in archaeology. Just as archaeologists can do scientifically rigorous and valid research without involving technological methods from the hard sciences, so too can other social scientists do scientifically rigorous and valid research without the aid of formal experiments (RCTs).
Cartwright, Nancy
2007 Are RCTs the Gold Standard? BioSocieties 2(1):11-20.
Gerring, John
2007 Case Study Research: Principles and Practices. Cambridge University Press, New York.
Sampson, Robert J.
2010 Gold Standard Myths: Observations on the Experimental Turn in Quantitative Criminology. Journal of Quantitative Criminology 26(4):489-500.
Postscript--No, I don't read or keep up with the Journal of Quantitative Criminology. Robert Sampson is one of my social science heroes--someone whose research I tremendously admire, and whose methods and approaches give me inspiration (John Gerring is another). I remembered reading a passage criticizing the RCT craze in Sampson's (outstanding) 2012 book, Great American City (which is where I got the Cartwright citation). But I am in Toluca, Mexico, right now without access to my books, so I searched for "randomized controlled trials" AND "Robert J. Sampson" on Google-Scholar, and came up with his 2010 paper. My name is Mike Smith and I am a Google-Scholar addict.
The second article in the June 15 Science was a short essay in a section called "Science for Sustainable Development." This essay, titled "Rigorous Evaluation of Human Behavior" is written byEsther Duflo, an economist at MIT. She makes the valid and important point that the role of science in promoting sustainable development and alleviating poverty should include social scientific studies of behavior. I wonder what Jeff Flake would think about this. When I saw the title I was encouraged, but then I got to the heart of Duflo's essay: the way to conduct "rigorous" studies of human behavior is to use randomized controlled trials. "This makes for good science: these experiments make it possible to test scientific hypotheses with a degree of rigor that was not available before."
In some fields of social science and public health, the randomized controlled trial (RCT) has become the supposed "gold standard" of research methods, proclaimed to be far superior to other approaches. Apart from the fact that we simply cannot do RCTs in archaeology (except perhaps in a few very limited situations that I can't think of offhand), I must admit that I am sympathetic to the growing critique and contextualization of RCTs in social science. The RCT is a narrow approach that achieves internal rigor at the expense of external relevance and validity. Philosopher of science Nancy Cartwright puts it this way, using economics to illustrate the trade-off between internal rigor and external validity:
“Economists make a huge investment to achieve rigor inside their models, that is to achieve internal validity. But how do they decide what lessons to draw about target situations outside from conclusions rigorously derived inside the model? That is, how do they establish external validity? We find: thought, discussion, debate; relatively secure knowledge; past practice; good bets. But not rules, check lists, detailed practicable procedures; nothing with the rigor demanded inside the models.” (Cartwright 2007:18).
Or consider a recent paper by sociologist Robert J. Sampson (2010), who promotes the value of observational research in criminology and sociology. He deflates three myths of RCTs in criminology:
Myth 1: Randomization solves the causal inference problem.
Myth 2: Experiments are assumption (theory) free.
Myth 3: Experiments are more policy relevant than observational studies.
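Myth 1 is easy to demonstrate with a toy simulation (my own sketch, not from Sampson's paper; the "motivation" confounder is invented for illustration): randomizing subjects into treatment and control balances a confounder only in expectation, so any single trial can still be imbalanced.

```python
import random

# Toy illustration of Myth 1 (my example, not Sampson's):
# randomization balances a confounder such as "motivation" only
# in expectation, not necessarily in any single finite sample.
random.seed(0)

def one_trial(n=100):
    """Randomly split n subjects in half; return the confounder imbalance."""
    motivation = [random.gauss(0, 1) for _ in range(n)]
    random.shuffle(motivation)
    treated, control = motivation[:n // 2], motivation[n // 2:]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

# A single randomized trial can show a real imbalance...
single = one_trial()

# ...while the average imbalance over many re-randomizations shrinks
# toward zero -- the only sense in which randomization "solves" confounding.
average = sum(one_trial() for _ in range(500)) / 500

print(f"single-trial imbalance: {single:+.3f}")
print(f"mean over 500 trials:   {average:+.3f}")
```

The point of the sketch is that causal inference from one experiment still rests on assumptions, which is exactly what Myths 1 and 2 deny.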
If you want more context on internal vs. external validity, or how various social science methods relate to an experimental ideal, see Gerring (2007). He is one of those political scientists who, according to Congressman Flake, must be non-rigorous. But the message here is analogous to my views on science types 1 and 2 in archaeology. Just as archaeologists can do scientifically rigorous and valid research without involving technological methods from the hard sciences, so too can other social scientists do scientifically rigorous and valid research without the aid of formal experiments (RCTs).
Cartwright, Nancy
2007 Are RCTs the Gold Standard? BioSocieties 2(1):11-20.
Gerring, John
2007 Case Study Research: Principles and Practices. Cambridge University Press, New York.
Sampson, Robert J.
2010 Gold Standard Myths: Observations on the Experimental Turn in Quantitative Criminology. Journal of Quantitative Criminology 26(4):489-500.
Postscript--No, I don't read or keep up with the Journal of Quantitative Criminology. Robert Sampson is one of my social science heroes--someone whose research I tremendously admire, and whose methods and approaches give me inspiration (John Gerring is another). I remembered reading a passage criticizing the RCT craze in Sampson's (outstanding) 2012 book, Great American City (which is where I got the Cartwright citation). But I am in Toluca, Mexico, right now without access to my books, so I searched for "randomized controlled trials" AND "Robert J. Sampson" on Google Scholar, and came up with his 2010 paper. My name is Mike Smith and I am a Google Scholar addict.
Labels: Experiments, Politics of science, Social science
Sunday, June 10, 2012
Science type 1 vs. Science type 2
In a previous post, Rejected by Science!, I identified two different concepts of science in archaeology. Archaeological science type 1 is the pursuit of knowledge in a way that conforms to a scientific epistemology. In the words of John Gerring:
“Inquiry of a scientific nature, I stipulate, aims to be cumulative, evidence-based (empirical), falsifiable, generalizing, nonsubjective, replicable, rigorous, skeptical, systematic, transparent, and grounded in rational argument. There are differences of opinion over whether, or to what extent, science lives up to these high ideals. Even so, these are the ideals to which natural and social scientists generally aspire, and they help to define the enterprise in a general way and to demarcate it from other realms.” (Gerring 2012:11).
Archaeological science type 2, on the other hand, is the use of non-archaeological scientific techniques by archaeologists, for whatever purpose. Ideally, science type 2 is done in pursuit of the goals of science type 1, but such is not always the case. In my previous post, I identified two situations in which archaeological science type 2 is done in ways that do not conform to type 1:
- Relativist, post-modern archaeologists who criticize a scientific epistemology for archaeology often use archaeometric methods (science type 2) in pursuit of goals that are not scientific.
- Methodologically sloppy archaeologists sometimes aim to use science type 2 methods to further science type 1 ends, but their sloppiness prevents progress.
There is a third condition, which I did not discuss, in which archaeological science type 2 can be done at odds with type 1 science:
- Non-archaeological scientific techniques are often used to make exaggerated, sensationalist claims that go beyond the "replicable, rigorous, skeptical" nature of scientific research.
To my mind, this episode illustrates the problems that can occur when the two types of archaeological science are in conflict with one another. But right now it is merely a controversy in the realm of press releases, blogs, and the internet. The rubber will hit the road when the research is submitted to a scholarly journal. At that point one can only hope, as I suggested in my earlier post, that the editors will not be fooled into thinking that archaeological science type 2 done in opposition to science type 1 is really a scientific endeavor epistemologically.
Gerring, John
2012 Social Science Methodology: A Unified Framework. 2nd ed. Cambridge University Press, New York.
By the way, that earlier post, "Rejected by Science!", is BY FAR the most popular post in the history of this blog, with perhaps more hits than all of the other posts combined. I am puzzled by this, not sure why it is so popular. I am not complaining, just curious. If you have any ideas, let me know.
Wednesday, May 30, 2012
More drive-by history: world history as television
I watched the second and final installment of Niall Ferguson's TV show, "Civilization," last night. (See my post on the first installment here). He covered three of his "killer applications" that explain why western civilization is so much better than the rest of the world.
1. Medicine. A main point in this section was that imperialism (French imperialism in west Africa) isn't so bad, because some medical advances were made by imperialist physicians in Senegal. At the end of the show, Ferguson decries the fact that "empire" has become a "dirty word." He is frustrated by the fact that people just ignore the great benefits of empires and imperialism. This is so blatant that I can't even think of a clever response!
2. Consumerism. Blue jeans caused the Prague spring in 1968, and they also caused the Soviet invasion of Czechoslovakia. Blue jeans later caused the fall of the Berlin wall. And the Chinese are the worst-dressed people in the world.
3. Work. There was actually a nice piece on Max Weber and his theory of how Protestantism furthers capitalism. But any insights were offset by explanations like this: Why is church attendance higher in the U.S. than in Europe? Because state monopolies are inefficient.
The field of "world history" has been gaining influence in the past decade. There are now textbooks, journals, and lots of publications. It is good to see historians expanding their horizons in two ways: (1) seeking connections and contacts among widely separated regions; and (2) making comparisons among regions (sometimes even including such bizarre places for historians as Pre-Columbian America). From the perspectives of anthropology and archaeology, these two approaches are pretty pedestrian, but when historians--who command large amounts of very important regional data--take them up, this is a very positive development for scientific scholarship on the past. But books and TV shows like Ferguson's Civilization set back progress in this area. The superficial nature of the ideas, the many very wrong-headed notions, and the glitzy production, all make world history look like prime-time television - in both style and substance.
I hope serious proponents of world history fight back.
Labels: Comparisons, World history
Tuesday, May 22, 2012
Niall Ferguson: Drive-By History
I watched the first installment of Niall Ferguson's TV series, "Civilization: The West and the Rest," tonight. Much of it was entertaining, and there were many insights. But overall I found it a superficial and simplistic triumphal history. How did the West come to dominate the East? Ferguson attributes the West's victory to six factors (he calls them "killer applications"; it's not clear why he uses a software metaphor): competition, science, property rights, medicine, the consumer society, and the work ethic. Well, this certainly is not a rigorous comparative study. Where did these factors come from? What theoretical model generated this set of six?
David Bromwich published an eloquent, understated, and highly critical review of Ferguson's book of the same name in the New York Review of Books (December 8, 2011). On the six factors, he states, "These make an absurd catalog. It is like saying that the ingredients of a statesman are an Oxford degree, principles, a beard, sociability, and ownership of a sports car."
I have to admit that my attention started to wane after an hour. I was getting tired of hearing how backward the Ottomans were compared with the Europeans. I picked up my phone and started playing a game, but then my ears pricked up when Ferguson started comparing Spanish and British colonization of the New World. He claims at the outset that "Britain won" this competition. Well, Spain got far richer off its colonies than Britain ever did, and Spain held onto its colonies longer than Britain. So just how did Britain win? What Ferguson meant was that the United States would later develop into a much better society than modern Latin America. I don't think he used the word "better," but that is the clear message of this segment.
Ferguson focused on an important comparison--that colonial development in North America involved many small property owners, whereas Latin America had far fewer, larger landowners. OK, that is certainly a major difference between the two areas. But how and why did these two different property systems get started? We are told the systems originated because the two sets of European colonists simply made different decisions in the two areas. The British decided to have a small property system, and the Spaniards decided to have big estates. What is completely lacking is the context of this distinction. Key factors that are ignored include demography (the very different size and density of native societies in the two areas; the numbers of natives who survived vs. the numbers of colonists), the indigenous political structure at the time of conquest (states and empires in Latin America vs. tribal societies in North America), and the nature of the resources in the two areas (mining vs. agriculture, and their labor and organizational requirements).
Thus Ferguson did identify a crucial distinction between two areas, but by completely ignoring the context, he fails to show how and why that distinction originated and developed. His explanation is superficial and misleading.
Ferguson is a respected historian with a number of solid empirical studies to his name. Why did he step down from scholarship to produce a popular book and TV show based on some rather silly ideas? Could publicity and royalties have anything to do with this? Is the image of a nineteenth century capitalist on the cover significant? People may gripe about Jared Diamond's works, but they are based on solid scholarship that relies on work by experts, interpreted in a new fashion. If the TV show is indicative of the book (and the reviews suggest that it is), then Ferguson is not anywhere near Diamond's level of scholarship.
This is drive-by history, a quick and superficial look at the issues. If you want to find out more about how and why China and Europe diverged, try reading Ian Morris's far superior book, Why the West Rules (for Now).
Labels: Comparisons, World history
Friday, May 18, 2012
Throwing away my old reprints
This summer I am moving my office and lab to another building. As I pack up, I am tossing lots of old paperwork and junk. So what do I do with the two file drawers full of my old reprints? Most of these papers are posted on my website, so I really don't need to keep reprints. Apart from one or two that are very nice aesthetically (e.g., my reprint from Hansen's city-states book has a beautiful color image of a painting of "good government" on the cover), I should just toss them all. But I am a pack-rat by nature; maybe I will want these someday (yeah, right). More importantly, these reprints are my career! This is what I have accomplished as a scholar. How can I just toss these things into the recycle bin?
I spent a few weeks going back and forth (Toss them all! Save them all!), and then my wife suggested I keep a few complete sets and toss the rest. Maybe our kids will want these someday. Maybe we'll need something to light fires with in post-carbon times.
One useful thing has come out of this. The student who is helping me organize things prior to moving (Theresa Araque) checked to see which reprints are not yet scanned and on my website. So I will now try to get more of my old articles scanned or downloaded and posted. I'm sure thousands of people are waiting with bated breath to read such gems as "The Aztec silent majority: William T. Sanders and the study of the Aztec peasantry" (edited book, 1996).
Tuesday, May 15, 2012
Social science quiz: sociology vs. political science
Today's quiz: which discipline--sociology or political science--has a better understanding of ancient states?
You would think political science would have better things to say about ancient state-level societies than sociology. After all, political science focuses on power, governments, and political phenomena. But no, sociology has a MUCH BETTER understanding of ancient states, except perhaps in the area of empires.
I have been reading up in these two disciplines, trying to link up their concepts with those used by archaeologists and anthropologists on ancient states and cities. That was not too hard for sociology. The field of historical sociology, starting with Max Weber, has a big literature on how ancient states work, from tax collection in the Roman Empire to state power in China and the Ottoman Empire. It was not too hard to relate that literature to our understanding of ancient states. There are problems, of course. The biggest one is that historical sociologists rarely look beyond the literate societies of the Mediterranean, except for China, and this gives them a very biased sample of societies. But works like these are very insightful, and very relevant to the concerns of archaeologists working on political and social issues in ancient states: Weber (1978), Eisenstadt (1963), Tilly (1992), or Kiser (1994).
I have used some good work in political science in my work on empires and imperialism (Doyle 1986; Gerring et al. 2011), and so I assumed that political science would have good things to say about other kinds of early states. But I was not finding much information. I originally thought Michael Mann (1986) was a political scientist, but, no, he is a sociologist. Then yesterday I found this statement in a paper in the Annual Review of Political Science: "Political scientists have, however, rarely ventured into world history before the eighteenth century" (Von der Muhll 2003:345). I guess that is true.
Then I found a paper in that series that contrasts modern states with premodern states (Spruyt 2002). Aha, maybe this is what I was looking for! Nope. That paper shows an embarrassing level of ignorance of "premodern states." The discussion is highly generalized ("all premodern states are like this..."), it cites only a few examples (such as the Merovingian king Clovis), and contains some whoppers: "Early states had only weakly defined market economies and property rights" (p.130). Well, some had NO markets at all, and some had pretty "strongly defined" market economies. "Taxation hardly existed" (p.130). Well, I wonder how those states supported themselves. I wrote a paper on Aztec taxation, and found they had a ridiculously complicated system of taxation. The same is probably true of other ancient states (this is an area in need of research). And what about the Roman Empire: no taxes? Think again. And, "early states only tangentially affected their societies" (p.131). OK, tell those Egyptian peasants lugging stones up the side of the pyramid that they are not affected by the government. Or what about some Roman merchants trying to get a contract to supply the military garrisons? Or Inca peasants forced to build roads and bridges for the king. Not affected by the government? I don't think so.
I don't think this level of scholarship would be acceptable in historical sociology (and certainly not in anthropology or archaeology), but perhaps political scientists can get away with it.
This has been a very disappointing search through the literature. I think political science offers many concepts and approaches that are very promising for understanding early states. Archaeologists have picked up on only a few of these, and we really should be exploring these and other topics in political science. They include the predatory theory of rule and the role of popular participation in governance (Levi 1988; see Blanton and Fargher 2008), collective action approaches to the commons (Ostrom 1990), research on empires (see above), and concepts of urban and regional governance (Sellers 2002). And, as I have posted about on several occasions, there is an excellent strand of methodological work by political scientists that is relevant to other disciplines, including archaeology (Gerring 2012; Mahoney et al. 2009).
Just don't look to the published literature in political science for useful analyses of ancient states.
References:
Blanton, Richard E. and Lane F. Fargher (2008) Collective Action in the Formation of Pre-Modern States. Springer, New York.
Doyle, Michael W. (1986) Empires. Cornell University Press, Ithaca.
Eisenstadt, S. N. (1963) The Political Systems of Empires. The Free Press, New York.
Gerring, John (2012) Social Science Methodology: A Unified Framework. 2nd ed. Cambridge University Press, New York.
Gerring, John, Daniel Ziblatt, Johan van Gorp and Julián Arévalo (2011) An Institutional Theory of Direct and Indirect Rule. World Politics 63(3):377-433.
Kiser, Edgar (1994) Markets and Hierarchies in Early Modern Tax Systems: A Principal-Agent Analysis. Politics and Society 22:284-315.
Levi, Margaret (1988) Of Rule and Revenue. University of California Press, Berkeley.
Mahoney, James, Erin Kimball and Kendra L. Koivu (2009) The Logic of Historical Explanation in the Social Sciences. Comparative Political Studies 42(1):114-146.
Ostrom, Elinor (1990) Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, New York.
Sellers, Jefferey M. (2002) Governing from Below: Urban Regions and the Global Economy. Cambridge University Press, New York.
Spruyt, Hendrik (2002) The Origins, Development, and Possible Decline of the Modern State. Annual Review of Political Science 5:127-149.
Tilly, Charles (1992) Coercion, Capital, and European States, AD 990-1990. Blackwell, Oxford.
Von der Muhll, George E. (2003) Ancient Empires, Modern States, and the Study of Government. Annual Review of Political Science 6:345-376.
Weber, Max (1978) Economy and Society: An Outline of Interpretive Sociology. 2 vols. University of California Press, Berkeley.
About This Blog
This blog contains information and opinions (mostly mine) on professional publishing issues in archaeology. I am especially concerned with quality control, Open Access, and communication with other disciplines.
My latest books are The Comparative Archaeology of Complex Societies(2011), and Aztec City-State Capitals (2008). Read a review of the 2008 book from Urban History.
Publishing Archaeology Blog by Michael E. Smith is licensed under a Creative Commons License.