<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>
<channel>
	<title>
	Comments on: Lott v. Levitt III	</title>
	<atom:link href="https://www.overlawyered.com/2006/04/lott-v-levitt-iii/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.overlawyered.com/2006/04/lott-v-levitt-iii/</link>
	<description>Chronicling the high cost of our legal system</description>
	<lastBuildDate>Fri, 21 Apr 2006 09:56:45 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>
		By: Bill Barth		</title>
		<link>https://www.overlawyered.com/2006/04/lott-v-levitt-iii/comment-page-1/#comment-2738</link>

		<dc:creator><![CDATA[Bill Barth]]></dc:creator>
		<pubDate>Fri, 21 Apr 2006 09:56:45 +0000</pubDate>
		<guid isPermaLink="false">http://overlawyered.com/wpblog/?p=3361#comment-2738</guid>

					<description><![CDATA[I&#039;m not an experimentalist. I do computational work (computational fluid dynamics). As such, I take &quot;replicate&quot; to mean that the author should provide enough data about the methods, algorithms, numerical tolerances, etc., and the problem setup (boundary and initial conditions, etc.) that another researcher could reproduce the author&#039;s work by doing a separate implementation of the aforementioned methods, algorithms, etc., and then running the same numerical experiments. I also expect the original author to provide some evidence (convergence analysis, comparison to physical experiment, or both) that the method described in a published article is likely to have been a good simulation of nature.

The engineering community has not yet embraced the idea that computer codes and input datasets must be placed on file with the publishing journal or a standardized third-party repository. I understand that this is de rigueur in the social sciences these days. I doubt that it will ever become necessary in the engineering community, because the claims that are made from the results of engineering numerical or physical experiments are not about society in general, and are therefore not nearly as controversial. When the results are controversial, more scrutiny is required (with assistance and openness from the original authors).

As to Pons and Fleischmann, it&#039;s my understanding that either they didn&#039;t provide sufficient information to make replication easy, or that in the face of conflicting data they failed to do a publicly scrutinized demonstration of their own original work. (Using a press release rather than a peer-reviewed journal article to announce their work didn&#039;t help, either.) Being able to repeat your own work is an absolute requirement for controversial conclusions in the physical sciences.

It seems to me that when it comes to the sort of statistics and statistical modeling that is so prevalent in the social sciences, it is incumbent upon researchers to make their exact computer codes, data, and assumptions available for their colleagues to verify that the original authors did not make programming or (data) coding errors that significantly change the results upon which the societal conclusions are drawn. This, in my mind, constitutes replication. Furthermore, other researchers should attempt a de novo examination of the problem in question with their own data, models, and computer codes to determine whether the work done is definitive, or is simply one way of looking at the data (and by looking I mean in the social scientific sense).

In the physical sciences it is often possible to simply look at some result and conclude that it must be in error, since it is completely outside the realm of ordinary scientific possibility. Whenever statistics and statistical reasoning are involved (in the physical and social sciences alike), you can throw your intuition out the window (see the early history of quantum mechanics). At that point, only careful re-examination of the reported data, repetition of the undertaken experiments, and completely new experiments may verify that a result is certain (or even believable).

Finally, the inanimate portions of nature don&#039;t lie (can&#039;t lie), so only error in the design of the apparatus, the conduct of the experiment, or the interpretation by the experimenter may lead to error in research into natural phenomena. Survey respondents lie all the time, and so do law enforcement departments that collect crime data (e.g., certain crimes are frequently reclassified to skew the statistics and make a department look good). This means that in the social sciences, even if a researcher&#039;s process, assumptions, and models are perfect, the result can be wrong. This is considerably less likely in the physical sciences.

OK, that&#039;s enough babble from me; I&#039;m going to stop now.
]]></description>
			<content:encoded><![CDATA[<p>I&#8217;m not an experimentalist. I do computational work (computational fluid dynamics). As such, I take &#8220;replicate&#8221; to mean that the author should provide enough data about the methods, algorithms, numerical tolerances, etc., and the problem setup (boundary and initial conditions, etc.) that another researcher could reproduce the author&#8217;s work by doing a separate implementation of the aforementioned methods, algorithms, etc., and then running the same numerical experiments. I also expect the original author to provide some evidence (convergence analysis, comparison to physical experiment, or both) that the method described in a published article is likely to have been a good simulation of nature.</p>
<p>The engineering community has not yet embraced the idea that computer codes and input datasets must be placed on file with the publishing journal or a standardized third-party repository. I understand that this is de rigueur in the social sciences these days. I doubt that it will ever become necessary in the engineering community, because the claims that are made from the results of engineering numerical or physical experiments are not about society in general, and are therefore not nearly as controversial. When the results are controversial, more scrutiny is required (with assistance and openness from the original authors).</p>
<p>As to Pons and Fleischmann, it&#8217;s my understanding that either they didn&#8217;t provide sufficient information to make replication easy, or that in the face of conflicting data they failed to do a publicly scrutinized demonstration of their own original work. (Using a press release rather than a peer-reviewed journal article to announce their work didn&#8217;t help, either.) Being able to repeat your own work is an absolute requirement for controversial conclusions in the physical sciences.</p>
<p>It seems to me that when it comes to the sort of statistics and statistical modeling that is so prevalent in the social sciences, it is incumbent upon researchers to make their exact computer codes, data, and assumptions available for their colleagues to verify that the original authors did not make programming or (data) coding errors that significantly change the results upon which the societal conclusions are drawn. This, in my mind, constitutes replication. Furthermore, other researchers should attempt a de novo examination of the problem in question with their own data, models, and computer codes to determine whether the work done is definitive, or is simply one way of looking at the data (and by looking I mean in the social scientific sense).</p>
<p>In the physical sciences it is often possible to simply look at some result and conclude that it must be in error, since it is completely outside the realm of ordinary scientific possibility. Whenever statistics and statistical reasoning are involved (in the physical and social sciences alike), you can throw your intuition out the window (see the early history of quantum mechanics). At that point, only careful re-examination of the reported data, repetition of the undertaken experiments, and completely new experiments may verify that a result is certain (or even believable).</p>
<p>Finally, the inanimate portions of nature don&#8217;t lie (can&#8217;t lie), so only error in the design of the apparatus, the conduct of the experiment, or the interpretation by the experimenter may lead to error in research into natural phenomena. Survey respondents lie all the time, and so do law enforcement departments that collect crime data (e.g., certain crimes are frequently reclassified to skew the statistics and make a department look good). This means that in the social sciences, even if a researcher&#8217;s process, assumptions, and models are perfect, the result can be wrong. This is considerably less likely in the physical sciences.</p>
<p>OK, that&#8217;s enough babble from me; I&#8217;m going to stop now.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: John Quiggin		</title>
		<link>https://www.overlawyered.com/2006/04/lott-v-levitt-iii/comment-page-1/#comment-2737</link>

		<dc:creator><![CDATA[John Quiggin]]></dc:creator>
		<pubDate>Thu, 20 Apr 2006 23:27:45 +0000</pubDate>
		<guid isPermaLink="false">http://overlawyered.com/wpblog/?p=3361#comment-2737</guid>

					<description><![CDATA[As you&#039;re a rocket scientist, Bill, how do you understand &quot;replicate&quot; in the physical sciences? Do you have to follow identical experimental protocols, the same sample size, the same test statistics, and so on? Or do you just do the same experiment with what is, in effect, a new data set? I suggest the latter.

I read lots of articles about people who had failed to replicate the &quot;cold fusion&quot; results, and in every case Pons and Fleischmann blamed deviations from the original protocols. But no one suggested that they could sue on this basis.
]]></description>
			<content:encoded><![CDATA[<p>As you&#8217;re a rocket scientist, Bill, how do you understand &#8220;replicate&#8221; in the physical sciences? Do you have to follow identical experimental protocols, the same sample size, the same test statistics, and so on? Or do you just do the same experiment with what is, in effect, a new data set? I suggest the latter.</p>
<p>I read lots of articles about people who had failed to replicate the &#8220;cold fusion&#8221; results, and in every case Pons and Fleischmann blamed deviations from the original protocols. But no one suggested that they could sue on this basis.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Bill Barth		</title>
		<link>https://www.overlawyered.com/2006/04/lott-v-levitt-iii/comment-page-1/#comment-2736</link>

		<dc:creator><![CDATA[Bill Barth]]></dc:creator>
		<pubDate>Thu, 20 Apr 2006 19:08:36 +0000</pubDate>
		<guid isPermaLink="false">http://overlawyered.com/wpblog/?p=3361#comment-2736</guid>

					<description><![CDATA[roy:

I&#039;m not a professional economist, but I am a rocket scientist (i.e., I have a PhD in Aerospace Engineering). I was pretty taken aback by the replication claim, as I read it in the more technical and narrow sense. Do I count as a layman? I certainly know little about economics and econometrics, but I do know a thing or two about publishing in academic journals (in engineering and physics).

It seems to me that many scientists and engineers may share my (and Lott&#039;s) reading. Might not other types of non-economists share it as well?
]]></description>
			<content:encoded><![CDATA[<p>roy:</p>
<p>I&#8217;m not a professional economist, but I am a rocket scientist (i.e., I have a PhD in Aerospace Engineering). I was pretty taken aback by the replication claim, as I read it in the more technical and narrow sense. Do I count as a layman? I certainly know little about economics and econometrics, but I do know a thing or two about publishing in academic journals (in engineering and physics).</p>
<p>It seems to me that many scientists and engineers may share my (and Lott&#8217;s) reading. Might not other types of non-economists share it as well?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: roy		</title>
		<link>https://www.overlawyered.com/2006/04/lott-v-levitt-iii/comment-page-1/#comment-2735</link>

		<dc:creator><![CDATA[roy]]></dc:creator>
		<pubDate>Thu, 20 Apr 2006 12:42:19 +0000</pubDate>
		<guid isPermaLink="false">http://overlawyered.com/wpblog/?p=3361#comment-2735</guid>

					<description><![CDATA[The precise definition of &quot;replicate&quot; in question is only understood as such by professional economists, yes? Lay readers -- who are responsible for &lt;i&gt;Freakonomics&lt;/i&gt;&#039;s astronomical sales -- understand, or perhaps misunderstand, &quot;replicate&quot; to mean &quot;reaching the same conclusion&quot;. So will damages awarded be based only upon how many professional economists read Levitt&#039;s claim, or how many total readers?

]]></description>
			<content:encoded><![CDATA[<p>The precise definition of &#8220;replicate&#8221; in question is only understood as such by professional economists, yes? Lay readers &#8212; who are responsible for <i>Freakonomics</i>&#8217;s astronomical sales &#8212; understand, or perhaps misunderstand, &#8220;replicate&#8221; to mean &#8220;reaching the same conclusion&#8221;. So will damages awarded be based only upon how many professional economists read Levitt&#8217;s claim, or how many total readers?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Overlawyered		</title>
		<link>https://www.overlawyered.com/2006/04/lott-v-levitt-iii/comment-page-1/#comment-2739</link>

		<dc:creator><![CDATA[Overlawyered]]></dc:creator>
		<pubDate>Thu, 20 Apr 2006 08:10:47 +0000</pubDate>
		<guid isPermaLink="false">http://overlawyered.com/wpblog/?p=3361#comment-2739</guid>

					<description><![CDATA[&lt;strong&gt;Lott v. Levitt IV&lt;/strong&gt;

David Glenn, in the Chronicle of Higher Education, has the definitive MSM reporting on the affair. (Permanent link here after Apr. 24.) He finds a mixture of scholars who agree and disagree with Lott on...
]]></description>
			<content:encoded><![CDATA[<p><strong>Lott v. Levitt IV</strong></p>
<p>David Glenn, in the Chronicle of Higher Education, has the definitive MSM reporting on the affair. (Permanent link here after Apr. 24.) He finds a mixture of scholars who agree and disagree with Lott on&#8230;</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
