Mary Tai herself also replied and explained the background to the paper a bit more:
> While a doctoral candidate working on my dissertation at Columbia University in 1981, I needed to calculate total area under a curve. During a session with my statistical advisor, and after examining several alternative methods, I worked out the model in front of him. The concept behind it is obviously common sense, and one does not have to consult the trapezoid rule to figure it out. [...]
> Why I call it Tai's model. I never thought of publishing the model as a great discovery or accomplishment; it was not published until 14 years later, in 1994. Because of its accuracy and easy application, many collegues at the Obesity Research Center of St Luke's-Roosvelt Hospital Center and Columbia University began using it and addressed it as "Tai's formula" to distinguish it from others. Later, because the investigators were unable to cite an unpublished work, I submitted it for publication at their request. Therefore, my name was rubber-stamped on the model before its publication. [...]
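For context, the "model" under discussion is just the composite trapezoidal rule: sum up the trapezoids formed by consecutive measurements. A minimal sketch in Python (the glucose readings and time points below are invented purely for illustration):

```python
def area_under_curve(times, values):
    """Trapezoidal estimate of the area under irregularly sampled data."""
    total = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]                      # interval width
        total += dt * (values[i] + values[i + 1]) / 2.0   # trapezoid area
    return total

# Hypothetical glucose readings (mmol/L) at irregular times (minutes):
t = [0, 30, 60, 120, 180]
g = [5.0, 8.0, 7.0, 6.0, 5.5]
print(area_under_curve(t, g))  # 1155.0 (mmol/L * min)
```

Note that nothing here requires the measurement intervals to be equal, which is the whole content of the "model."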
I was really on the author's side till I read these responses. Initially, in my head, it was just a case of misunderstanding or ignorance, but through her letters she has made clear that she thinks it is a significant contribution. This is borne out by her tone, which is palpably aggressive and contradictory at some points. She says she didn't publish this model for glory, but she also named it after herself. Also, no effort was made to connect it to already known elementary mathematics. And I don't understand why people would need a citation instead of just saying it's calculated using the trapezoidal rule. The whole point about model vs formula is IMO pathetic; models are not just a bunch of known formulas. They are supposed to give a new understanding of the system (also, who brings up Merriam-Webster in a medical journal? Even in letters it sounds incredibly condescending).
If they didn't know it, they couldn't have used it, and they wouldn't have needed a citation either.
The point is that the "because the investigators were unable to cite an unpublished work, I submitted it for publication at their request" justification doesn't make much sense. She could have just explained to them that no citation was needed for something so trivial.
> The point is that the "because the investigators were unable to cite an unpublished work, I submitted it for publication at their request" justification doesn't make much sense. She could have just explained to them that no citation was needed for something so trivial.
I can see how if you were unaware, and several colleagues in your field were unaware, it might be good to include some kind of external information about the thing. I can also see why you might not want to link to Wikipedia.
I do think writing a medical article about it for people to link to is silly. Surely if you wanted a citation, you could just cite a math textbook.
> She says she didn't publish this model for glory, but she also named it after herself.
Aren't many Nobel Prizes in Economics done like that? Take some well-known math/physics formula, rename variables to make them represent some economic quantities, name the resulting formula after oneself, publish, get the prize?
> rename variables to make them represent some economic quantities
You cannot trivialize this step. Most people's job in data analysis is basically this. Figuring out what quantity fits where in the formula _is_ the research.
edit: To make it clear, it would have been fine if her model explained something using the trapezoidal formula. But she seems to be claiming the formula itself, not a model.
Both the Black-Scholes formula and the Neutron Transport Equation are instances of the heat equation, which describes a wide variety of different phenomena.
> She says she didn't publish this model for glory, but she also named it after herself.
She didn't name it after herself, the users did:
> collegues ... began using it and addressed it as "Tai's formula"
As is typically the way with eponymous scientific works, people citing the work add the author's name e.g. Maxwell's equations or Higgs' boson. The author doesn't typically declare "I have discovered X, which I shall call ${self.name}"
I have published multiple photonics papers and contributed to many more, and nobody has ever needed to provide a citation for Maxwell's equations. I would consider it reference padding if you did. The point is that people could still have used the method without her publishing it. Also, am I supposed to take her word for it? At the very least she could have cited a paper that used it and gave her credit. This reads as if anyone using the trapezoidal method owes her credit, because nobody had a citation and so everyone was just using "Tai's model".
> people citing the work add the author's name e.g. Maxwell's equations or Higgs' boson.
I don't think this rule applies here, because the paper introducing "Tai's rule" calls it "Tai's rule" rather than others citing it. As if Higgs called it Higgs' boson in his paper.
There's a huge difference in significance though. Maxwell's equations have their distinctive name because he discovered a missing piece in a system (set) of equations that had been known before as separate phenomena and yet weren't used as a whole. The missing piece, which he discovered on paper by pure logical reasoning alone and which brought consistency to the set of equations, was later identified and confirmed in reality as electromagnetic fields.
Have you ever spent 10 minutes of your life with a doctor in an interpersonal setting? These people are probably the most intellectually insecure group in our society. We put them on a pedestal non-stop, the average grades in their classes are all A+, super-high wages and status, etc. From my point of view, this behaviour fits my expectation of them.
On a hunch I put the title of the paper into the Wikipedia search. Among other things, I got the "Scholarly peer review" article, clicked on it and checked where it was referenced.
I think it's important to note why the 75 citations aren't just from 75 dummies who also haven't heard of integration.
If you're writing a diabetes paper and want to use integration for your analysis (even if you are fully aware of its background), you'll be forced to justify your approach and defend its applicability to diabetes. Far easier to say 'as per Tai et al...' and take it as given.
You see this a lot- 'we used [x] because Big Name Prof did as well [citation]'
'as per Newton, Leibniz et al...' isn't good enough?
Seriously though: if it is indeed true that you need to justify it this way, that is insane: this is just basic calculus. Do you have to justify every single statistics calculation also? Do you have to say 'as per Whoever et al' every time you calculate the standard deviation of something? What referee would object to 'estimating the area using the trapezoidal rule gives us...'? (note that this paper was purely about the CALCULATION of the area, not justifying whether or not the area is a useful thing to know)
If one feels the need for a citation for something basic like this, a textbook works quite well. Pretty much any high school calculus textbook will have the necessary formulas.
> For instance there's a trend for medical students to believe less in evolution than other biology students
I don't find that particularly surprising, actually! I would assume that most subfields of biology rely more on a deep understanding of evolution than medicine does. Like, if you're a botanist or entomologist or whatever, you need to have a very deep understanding of speciation and evolutionary pressure and whatnot, more so than a doctor would (I would think, anyway).
Now, if medical students believed less in evolution than the general public, THAT would be shocking. But I doubt that is the case.
I seem to recall seeing a breakdown of belief in God by discipline, and it was the opposite of what I would have expected. Abstract mathematicians were the most likely to believe in some form of God, followed by physicists, with biologists being the least likely to believe. Considering how complex and unpredictable biological science is compared to something like particle physics, I was surprised by this.
I don't think it should be surprising if biologists are the least likely group to believe in God.
One common reason for believing in God is how ingeniously designed living things often are. The reason why this turns out not to be a very good reason is that evolution explains this better (dealing e.g. with cases where the design seems needlessly complex, malicious, etc.). Biologists are more familiar with evolution and spend more time thinking about it than other scientists.
Relatedly, evolution has frequently been the target of attack from religious people and groups. Biologists are likely to notice this more and be more bothered by it (imagine a world where there's a strong anti-technology lobby motivated by religion; engineers would likely be more likely to be opposed to religion in that case).
Pure mathematicians are used to thinking of intangible things like numbers, sets of sets of sets of sets, equivalence classes of functions from manifolds to abelian groups, etc., as real things even though they clearly don't "live" in the physical universe, and perhaps even as more perfect than the tangible inhabitants of the material world. I think this may make the idea of a sort of spiritual universe with inhabitants like a Most Perfect Being Imaginable particularly congenial to pure mathematicians.
(On the other hand, pure mathematicians ought to be less vulnerable to some varieties of philosophical flim-flam used by religious apologists. For instance, the so-called "ontological argument" is composed mostly of logical errors, and I would expect pure mathematicians to notice this more readily than anyone else. ... But: Kurt Goedel, who was one hell of a good logician, actually put some effort into coming up with a logically sound version of the ontological argument. Of course it's no good because it relies on axioms there's no reason to think apply to the actual world, but it does indicate that a sufficiently ingenious mathematician may react to a terrible argument not by dismissing it but by finding a way to make it less terrible.)
[In case anyone cares: I am a pure mathematician, or at least I was one before I moved into industry, where one gets paid more for doing easier work. I was a religious believer for many years but am not any more.]
In a similar vein, a lot of "ingenious biological designs" turn out to be amazingly hacky close up.
The eye, for example, has blood vessels blocking some of the photoreceptors. A moderately intelligent designer wouldn't run the power cables directly in front of the camera's lens!
Or...maybe we're not made in His Image: cephalopods don't have this problem, after all. Cthulhu ftaghn....
>One common reason for believing in God is how ingeniously designed living things often are.
Any decent genetics/medical class will thoroughly disabuse you of this notion. There's a clear lack of any intelligence in our design beyond satisfying evolutionary requirements.
Up to 9/10 first time mothers suffer a perineal laceration during childbirth. 65% require stitches.
Male/female ducks are in an evolutionary arms race for control of breeding. Males have extremely long corkscrew penises for rape, while the females have vaginas with fake paths designed to resist. Humans have the same conflict, but it plays out socially (e.g. faking signals of high value during courtship) or in war.
A number of religious people, both genders, have argued to me that we can't apply human concepts of rape to animals/god. Nuts.
Either God is a sick, twisted, sadistic fucker, or life wasn't intelligently designed. Pick one.
> Any decent genetics/medical class will thoroughly disabuse you of this notion.
That was my point: this is one reason why biologists are particularly unlikely to believe in God. (At least in places where the dominant form of belief-in-God gives prominence to the idea that God is responsible for the design of living things.)
I think the cause may be a lot more prosaic than all of that: namely, that doctors are more likely to be the children of ambitious immigrants and are therefore more likely to be religious.
Makes sense. Abstract math and physics are very... platonic. I can understand why someone who believes in a Divine Order would be attracted to abstract mathematics or theoretical physics.
It also makes sense that messy experimentalists would be less allured by the idea of an Abrahamic God in particular.
Medical Doctors are members of a rich and powerful guild. Admission to the guild requires getting into medical schools. That in turn requires entering at least university (really high school) knowing how the pre-med game is played. For this reason, membership in the profession is more hereditary than most and is especially hard to achieve without knowing a doctor through family friends who can talk you through the process.
Also, in the USA, churches were historically important social institutions.
In a country of powerful religious institutions, members of a powerful guild with significant barriers to entry and therefore strong hereditary lines are often religious. That's not really at all surprising. At least to me.
You'd be surprised how many people with supposedly elite educations don't believe in evolution.
For me the most shocking example (or maybe not that shocking) is how it's de rigueur in sociology to deny evolution. Yes, they will superficially claim they believe in it, but that belief collapses when placed under the slightest pressure. For example, if you ask them for examples of male/female differences that originate in evolutionary selection pressure, you'll discover there are none. Not even men being physically stronger than women has an evolutionary explanation, according to this field.
The rejection of evolution by sociologists is a long standing problem. There's an essay about the problem from 1990:
Per my own anecdata in the US, MDs are a pretty diverse group of people in regards to faith. The 'calling' of being a doctor/healer is strong among many faith based communities in the US. As such, their reasoning for being an MD is not based on science as much as it is on their faith and purpose. The point is to be a doctor/healer, how the knowledge to better do so is generated may be incidental.
Note: Medical researchers are a different bunch, per my own anecdata, I'm talking about practicing MDs specifically here.
There's also a lot of pressure for children of immigrants to become doctors and engineers. And because immigrants are more likely to be religious, so are their kids.
Osteopaths in the UK and most other countries are quacks. Osteopaths in the US are real physicians. This is not a trick of labeling: In the US osteopath training is a science-based, rigorous medical program identical to that of an MD. Yes DOs are also trained in the historical "chiropractic"-like stuff, but most of the ones I know deemphasize that in practice.
Why have it at all? It's magical thinking like religion that needs to be debunked and obliterated. Religion and quackery have no place in scientific medicine!
In practice, it serves the purpose of letting less academically accomplished students become physicians without resorting to Caribbean medical schools. This isn't a net negative to society because much of the overachievement in the pre-med set is overlearning the game, not genuine accomplishment.
Talking to people who have gone to osteopathic medical schools, it seems like the true believers there have enough clout that they're not transforming into normal medical schools any time soon. But OMM is such a small part of the curriculum, it's not a huge burden for the majority of students who think it's nonsense.
I don't know about you, but if I learned that my doctor, with whom I trust my health, spent 4-6 hours a week in a class where they learned to say "Circle Circle Dot Dot, now you got the cootie shot" like a kindergarten rhyme... I don't care if 95% of them know it doesn't work and is a waste of time. I'd have serious concerns about their entire profession that they were even allowed to waste their time on something so unscientific.
(Not to mention a deep fear that I might get one of the 5% that will try to re-sell me rhyme based medicine as proven science).
I think a lot of it is history: a lot of DO schools were founded back when it was more of a viable theory and med school wasn't much more scientific. Over a hundred years what the school teaches changes, but the name and degree don't.
Yes. Well, kind of. More accurate to say shortage of MDs in specific specialties. DOs are much more likely to go into family/general practice/pediatrics/etc.
I kind of expect that long-term lots of DOs will be replaced by NPs and PAs. I've been to medical appointments three times this year and only saw a Doctor once. Got excellent care each time.
"After graduating from medical school, DOs take a rigorous national licensure exam, which contains the same material as the exam to become an MD. Both kinds of doctor are licensed by state medical examination boards."
Or, better yet, collude with hospital systems and other care providers to side-step the AMA by giving NPs and PAs more autonomy to operate independently of an MD. You don't need four years of med school and a residency to do an annual health check-up for a pre-teen or put in a few stitches.
From the article: "this paper is perhaps an important example of the importance of interdisciplinary communication."
Indeed. Even within the medical field, huge amounts of progress are never materialized just because the guy who deals with the brain never talks to the guy who deals with the gut. They have another guy who deals with the nervous system who knows little about the brain OR the gut. And you get referrals to all of these, and they contradict each other and argue (by proxy of you) about the best way to treat the thing that none of them fully knows anything about.
I wonder how many of those are from papers discussing failure of peer review or interdisciplinary communication rather than the value of the formula itself.
There are a lot of them that say “using the trapezoidal rule” or something and cite the paper. I think that’s a silly case of unnecessary citations (having been a coauthor on a medical paper there’s a lot of pressure to ‘grab a citation that sort of works’).
But they almost always call it “the trapezoidal rule” - the original paper is quite famous specifically because of the embarrassment it brought to medical researchers, and the implication that medicine is full of doofuses who don’t know calculus is pure clickbait.
Credit to the author for (a) calling this out, whilst (b) maintaining decorum and assuming good faith.
> I don’t mean to pick on Dr. Tai, especially since I only have access to the paper’s abstract. In fact, it’s perhaps a credit that s/he rediscovered the rectangle/trapezoidal method. Further and more seriously, this paper is perhaps an important example of the importance of interdisciplinary communication.
I don’t think the author deserves much credit for “calling this out” - the paper was published in 1994 and has been a meme in math circles for about as long.
I appreciate the dig at fellow physicists for not recognizing group theory. As a physicist myself, I’m somewhat impressed by the power of the many mathematical abstractions which have not yet been utilized. Although I think this avenue of discovery (searching in mathematics textbooks for things to apply to physics) is now fairly well-mined.
Interesting. I've watched nuclear engineers struggle with higher-order programming concepts and biomedical informatists (MD + PhD CS) struggle with software engineering. And, damn, I saw some exceptionally-beautiful code from some security researcher grad students.. like 17 levels of nested blocks.
I guess not everyone can use Rust, Haskell, OCaml, or Idris. :sigh emoji:
I can’t speak for nuclear engineers and biomedical informatists, but physicists are usually pretty good at abstract programming concepts... But you have to overcome the characteristic laziness (doing only what’s necessary to solve the problem at hand).
Scientists make particularly bad software engineers. You'll get all kinds of problems, and nearly no good code, in any discipline you care to look at.
Take the same people and put them for a year in a more predictable environment, and they start to write much better code. (And yeah, this natural experiment happens all the time.)
Calculus is not mandatory for high school graduation in many places, but I would be shocked if it was not mandatory for university admission in any science-based discipline.
Many _or most_ universities will just tack it on as a remedial requirement. I guess highly competitive programs might just reject you, but I'd be surprised if many state schools did.
It's treated as a prerequisite by the time you are declaring a major. My state engineering school didn't require calc for freshman admission. The degrees all required the college level classes (or the AP equivalent).
Thanks but I know that. To answer my own question, seems the quoting of a sci-hub link would be a civil matter in the US at least. Moderators will decide for themselves.
Speaking only from personal experience, medical researchers in the US may not have any training in Calculus. I'd put it at ~30% that do not. It is only taught in High Schools if the child and their parents want them to take it and is not required for graduation (though this varies a lot from district to district). Depending on the major and the school, Calculus may also not be required for graduation from university. Depending on the grad-school, Calculus is also not required for a PhD.
Speaking personally, I know of entire departments where Calculus is not known among the majority of the faculty members who are publishing in medicine/biology, though this is a bit rare. However, higher order Calculus like Lagrangians, Diff-Eqs, and Matrix algebra is as rare as hen's teeth among bio/medical researchers.
Nope. At least not in an explicit way, because we saw those concepts in free fall problems and similar ones in physics as "instantaneous" quantities. Same with areas, but not that much.
My teacher tried to show us limits in one class, but she didn't have more time to do so.
At least here in Chile (South America), only the rich or those with a custom curriculum see those concepts in high school. Unless you go to paid pre-U schools or to bachillerato, calculus is only taught at university.
I see the same thing in Applied Machine Learning, where psychologist X writes paper on "My mathematical model for predicting Intimate Partner Violence scores." Where the deep learning model is a basic multilayered perceptron implemented with sklearn. Something any CS Freshman could cook up in 10 minutes.
> Further and more seriously, this paper is perhaps an important example of the importance of interdisciplinary communication.
I wouldn’t go that far- medical researchers all did high school math.
If they didn’t understand it, that’s not an interdisciplinary communication problem.
Although it’s true that if Dr. Tai had shown her paper to any physicist (or high school calculus student), with “Tai’s Model” as a label for integration, they would have laughed and pointed out the issue.
XKCD: Purity [1] is semi-relevant. I think many US students learn this kinda stuff in high school, so it's a little odd that the researchers asked for Tai to write something to cite. Neither of them recognized this?
The author of this blog post is wrong to say this is 'integration'[0], and is also later wrong to say it's the 'trapezoidal rule'[1].
It seems to me that it's diagnostically useful to have a standardised method for estimating the area under the curve, rather than everyone inventing their own method.
[0](since you don't know the whole function, you only know point estimates at particular moments in time)
[1] (trapezoidal rule is normally defined by having equal intervals - here, measurements are often taken at irregular intervals)
This comment is just wrong. It's still integration even if you only have enough data to approximate the integral. Also, Wikipedia's definition of the trapezoidal rule permits intervals with different sizes.[1]
I don't know why you'd defend a researcher who lacked basic math knowledge, and more damningly refused to acknowledge her mistake once it was brought to light.
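The unequal-interval point is easy to check numerically. A minimal pure-Python sketch (sample points chosen arbitrarily; the trapezoidal rule is exact for linear functions, which makes correctness easy to verify):

```python
def trapezoid(xs, ys):
    """Trapezoidal rule over possibly unequal intervals."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

# Irregular sample points on [0, 10] and a linear function y = 3x + 1.
xs = [0.0, 1.0, 2.5, 4.0, 7.0, 10.0]
ys = [3 * x + 1 for x in xs]

# Exact integral of 3x + 1 on [0, 10] is 1.5 * 10**2 + 10 = 160.
print(trapezoid(xs, ys))  # 160.0
```

Nothing in the formula assumes equal spacing; each trapezoid just uses its own interval width.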
In my view integration is by definition part of calculus, and requires the concept of the infinitesimal. (This is maybe semantic, but Wikipedia agrees with me in this case).
The paper shares one of the many goals of integration, which is to find the area under the curve, but you literally cannot use integration as the tool to do this here.
So, it's not integration.
A comment below calls this "numerical integration" - which I also find dubious. Numerical integration is still using calculus - you have to know the whole function - but without getting to a closed form answer.
Is this something that could be applied in general mathematics? If so, is it truly novel and worthy or a trivial derivation? I’m asking because I don’t know, not rhetorically.
I agree that someone shouldn’t be publicly made fun of in-general for sharing something novel and non-trivial that others didn’t already know, and if it highlights a problem in inadequate reviews, maybe it should’ve been presented to the journals that published it with that info.
This is kind of where the grandparent's thesis (that the paper is in fact novel and useful) falls down.
I'd be stunned if the concept of integrating an unknown function that needs to be guessed based on measurements taken at irregular points in time hasn't been studied rigorously from a generalized mathematical point of view. What such study would likely do (that a medical treatment would not) is discuss tradeoffs of different approximation methods, probably explore things like error bounds and behaviours with different unknown functions, and selection of the best integration method.
GP is right in that I'm sure it's useful to have a standardized method that allows for comparison between doctors and patients. But I think it's naive to assume that the findings in this paper are mathematically novel, and further, that mathematicians couldn't do a more rigorous job of deducing an accurate 'standard' way of measuring this.
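For illustration, here is the sort of comparison such a mathematical treatment would run: two simple quadrature rules on a function whose exact integral is known (the function and grid size below are chosen arbitrarily):

```python
import math

def left_rectangles(f, a, b, n):
    """Left-endpoint Riemann sum: O(h) error for smooth f."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

def trapezoids(f, a, b, n):
    """Composite trapezoidal rule: O(h^2) error for smooth f."""
    h = (b - a) / n
    return (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2) * h

# Integrate e^x over [0, 1]; the exact answer is e - 1.
exact = math.e - 1
rect_err = abs(left_rectangles(math.exp, 0, 1, 20) - exact)
trap_err = abs(trapezoids(math.exp, 0, 1, 20) - exact)
print(rect_err, trap_err)  # the trapezoid error is far smaller
```

A numerical-analysis text would go further and derive the error bounds that make this difference predictable rather than empirical.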
If you look closely enough, you see this kind of thing quite often: experts in one field "discovering" the basics of another, unrelated field, and then praising themselves for discovering it. No idea if people have looked into that, but I sometimes have the impression that our hyper-specialization isn't helping.
I see this a lot at the moment with supply chain management. The best example used to be masks, and now vaccinations. It is kind of funny to watch, in real time, people discovering, and trying to cope with, the time axis that comes with purchase orders, volumes and delivery dates. It is also quite saddening to watch. A medical researcher discovering maths I learned before entering university falls into the same category.
I don’t think there’s anything here for mathematicians. The only nontrivial fact in the paper is that the approximation works well in medical practice (which is not something that Leibniz or Newton would have been able to tell you since there could be complicated biological factors confounding it - what if the ‘real curve’ has a bunch of spikes that don’t show up in measurements?).
However, the original paper really is incomplete because of its seeming ignorance of the real “trapezoidal rule.” It really needs a discussion along the lines of “the trapezoid rule from ordinary calculus can be used by diabetes practitioners with surprising accuracy,” explaining that sources of error, discontinuity, etc., aren’t likely to affect the approximation.
But don’t worry about M.M. Tai’s feelings too much :) The paper is almost 30 years old and it’s world-famous for the silly error; Tai has already been thoroughly roasted.
https://kconrad.math.uconn.edu/math1132s20/handouts/taicomme...
They didn't know the trapezoidal rule.
[1] (PDF link: https://math.berkeley.edu/~ehallman/math1B/TaisMethod.pdf)
At best, a distinction without a practical difference.
> Three formulas have been developed by Alder (3), Vecchio et al. (4), and Wolever et al. (5) to calculate the total area under a curve
[3] Alder I: A New Look at Geometry. New York, The John Day Company, 1966.
This book is in turn available in archive.org:
https://archive.org/details/B-001-001-934/page/n269/mode/2up
So basically the author in question didn't even bother to read the relevant part of the book she referenced, and also misunderstood the explanation of the Area and Integral chapter as something that Irving Alder developed.
I didn't bother looking up the other two references to help preserve my sanity.
If you're writing a diabetes paper and want to use integration for your analysis (even if you are fully aware of its background), you'll be forced to justify your approach and defend its applicability to diabetes. Far easier to say 'as per Tai et al...' and take it as given.
You see this a lot: 'we used [x] because Big Name Prof did as well [citation]'
Seriously though: if it is indeed true that you need to justify it this way, that is insane: this is just basic calculus. Do you have to justify every single statistics calculation as well? Do you have to say 'as per Whoever et al' every time you calculate the standard deviation of something? What referee would object to 'estimating the area using the trapezoidal rule gives us...'? (note that this paper was purely about the CALCULATION of the area, not justifying whether or not the area is a useful thing to know)
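For what it's worth, the calculation at issue really is a few lines of code. A minimal sketch, with entirely hypothetical glucose readings:

```python
def trapezoid_auc(times, values):
    """Area under the curve by the trapezoidal rule.

    Each interval contributes width * average height,
    so uneven sample spacing is handled naturally.
    """
    return sum((t2 - t1) * (v1 + v2) / 2
               for t1, t2, v1, v2 in zip(times, times[1:], values, values[1:]))

# Hypothetical glucose curve: sample times in minutes, readings in mg/dL.
times = [0, 30, 60, 120, 180]
values = [90, 140, 160, 120, 95]
print(trapezoid_auc(times, values))  # 22800.0 (mg/dL)*min
```

That's the entirety of "Tai's model"; the standard-deviation analogy above seems apt.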
For instance there's a trend for medical students to believe less in evolution than other biology students
EDIT: reference to it in this story: https://www.theguardian.com/world/2006/feb/21/religion.highe...
I don't find that particularly surprising, actually! I would assume that most subfields of biology rely more on a deep understanding of evolution than medicine does. Like, if you're a botanist or entomologist or whatever, you need a very good understanding of speciation and evolutionary pressure and whatnot, more so than a doctor would (I would think, anyway).
Now, if medical students believed less in evolution than the general public, THAT would be shocking. But I doubt that is the case.
One common reason for believing in God is how ingeniously designed living things often are. The reason why this turns out not to be a very good reason is that evolution explains this better (dealing e.g. with cases where the design seems needlessly complex, malicious, etc.). Biologists are more familiar with evolution and spend more time thinking about it than other scientists.
Relatedly, evolution has frequently been the target of attack from religious people and groups. Biologists are likely to notice this more and be more bothered by it (imagine a world where there's a strong anti-technology lobby motivated by religion; engineers would likely be more likely to be opposed to religion in that case).
Pure mathematicians are used to thinking of intangible things like numbers, sets of sets of sets of sets, equivalence classes of functions from manifolds to abelian groups, etc., as real things even though they clearly don't "live" in the physical universe, and perhaps even as more perfect than the tangible inhabitants of the material world. I think this may make the idea of a sort of spiritual universe with inhabitants like a Most Perfect Being Imaginable particularly congenial to pure mathematicians.
(On the other hand, pure mathematicians ought to be less vulnerable to some varieties of philosophical flim-flam used by religious apologists. For instance, the so-called "ontological argument" is composed mostly of logical errors, and I would expect pure mathematicians to notice this more readily than anyone else. ... But: Kurt Goedel, who was one hell of a good logician, actually put some effort into coming up with a logically sound version of the ontological argument. Of course it's no good because it relies on axioms there's no reason to think apply to the actual world, but it does indicate that a sufficiently ingenious mathematician may react to a terrible argument not by dismissing it but by finding a way to make it less terrible.)
[In case anyone cares: I am a pure mathematician, or at least I was one before I moved into industry, where one gets paid more for doing easier work. I was a religious believer for many years but am not any more.]
The eye, for example, has blood vessels blocking some of the photoreceptors. A moderately intelligent designer wouldn't run the power cables directly in front of the camera's lens!
Or...maybe we're not made in His Image: cephalopods don't have this problem, after all. Cthulhu ftaghn....
Any decent genetics/medical class will thoroughly disabuse you of this notion. There's a clear lack of any intelligence in our design beyond satisfying evolutionary requirements.
Up to 9/10 first time mothers suffer a perineal laceration during childbirth. 65% require stitches.
Male/female ducks are in an evolutionary arms race for control of breeding. Males have extremely long corkscrew penises for rape, while the females have vaginas with fake paths designed to resist. Humans have the same conflict, but it plays out socially (e.g. faking signals of high value during courtship) or in war.
A number of religious people, both genders, have argued to me that we can't apply human concepts of rape to animals/god. Nuts.
Either God is a sick, twisted, sadistic fucker, or life wasn't intelligently designed. Pick one.
That was my point: this is one reason why biologists are particularly unlikely to believe in God. (At least in places where the dominant form of belief-in-God gives prominence to the idea that God is responsible for the design of living things.)
It also makes sense that messy experimentalists would be less allured by the idea of an especially Abrahamic God.
Also, in the USA, churches were historically important social institutions.
In a country of powerful religious institutions, members of a powerful guild with significant barriers to entry and therefore strong hereditary lines are often religious. That's not really at all surprising. At least to me.
For me the most shocking example (or maybe not that shocking) is how it's de rigueur in sociology to deny evolution. Yes, they will superficially claim they believe in it, but that belief collapses when placed under the slightest pressure. For example, if you ask them for examples of male/female differences that originate in evolutionary selection pressure, you'll discover there are none. Not even men being physically stronger than women has an evolutionary explanation, according to this field.
The rejection of evolution by sociologists is a long standing problem. There's an essay about the problem from 1990:
https://link.springer.com/article/10.1007%2FBF01112591
and in 2018 people were still writing about the very same thing:
https://www.frontiersin.org/articles/10.3389/fsoc.2018.00024...
The field systematically rejects Darwinism for ideological reasons.
Note: Medical researchers are a different bunch, per my own anecdata, I'm talking about practicing MDs specifically here.
Talking to people who have gone to osteopathic medical schools, it seems like the true believers there have enough clout that they're not transforming into normal medical schools any time soon. But OMM is such a small part of the curriculum, it's not a huge burden for the majority of students who think it's nonsense.
(Not to mention a deep fear that I might get one of the 5% who will try to re-sell me rhyme-based medicine as proven science.)
Now both MDs and DOs take a more modern approach.
> Osteopathic medical school curricula are virtually identical to those at schools granting the MD degree
https://en.wikipedia.org/wiki/Doctor_of_Osteopathic_Medicine...
Virtually identical isn't the same as identical. There is no need for DOs. Period.
I kind of expect that long-term lots of DOs will be replaced by NPs and PAs. I've been to medical appointments three times this year and only saw a Doctor once. Got excellent care each time.
"After graduating from medical school, DOs take a rigorous national licensure exam, which contains the same material as the exam to become an MD. Both kinds of doctor are licensed by state medical examination boards."
Indeed. Even within the medical field, huge amounts of progress are never materialized just because the guy who deals with the brain never talks to the guy who deals with the gut. They have another guy who deals with the nervous system who knows little about the brain OR the gut. And you get referrals to all of these, and they contradict each other and argue (by proxy of you) about the best way to treat the thing that none of them fully knows anything about.
source: https://scholar.google.com/scholar?hl=it&as_sdt=0%2C5&q=A+Ma...
It states: "Mean BG was determined using area-under-the-curve analysis (10)." where (10) is the paper we are discussing.
Hence in this paper they acknowledge the result...
Don't know about the other citers. It would be nice to know that too, but I can't imagine how to do it automatically.
But they almost always call it “the trapezoidal rule” - the original paper is quite famous specifically because of the embarrassment it brought to medical researchers, and the implication that medicine is full of doofuses who don’t know calculus is pure clickbait.
(Tried to do this once with a tongue-in-cheek comment, but the whole section was cut for space).
https://journals.plos.org/plosone/article?id=10.1371/journal...
For comment: https://eagereyes.org/series/peer-review/1-quilt-plots
First author is now an A/Prof at one of Australia's top Universities.
Medical researcher discovers integration, gets 75 citations - https://news.ycombinator.com/item?id=11818833 - June 2016 (3 comments)
Medical researcher discovers integration, gets 75 citations - https://news.ycombinator.com/item?id=1964613 - Dec 2010 (120 comments)
> I don’t mean to pick on Dr. Tai, especially since I only have access to the paper’s abstract. In fact, it’s perhaps a credit that s/he rediscovered the rectangle/trapezoidal method. Further and more seriously, this paper is perhaps an important example of the importance of interdisciplinary communication.
I guess not everyone can use Rust, Haskell, OCaml, or Idris. :sigh emoji:
Take the same people and put them for a year in a more predictable environment, and they start to write much better code. (And yeah, this natural experiment happens all the time.)
Edit: I'm surprised, but it's quite impressive to have discovered the same concept Newton did, all on your own!
did you guys have integrals in HS?
I had only limits, and derivatives for optimization via finding extrema.
EU, Poland.
Speaking personally, I know of entire departments where calculus is not known among the majority of the faculty members who are publishing in medicine/biology, though this is a bit rare. However, more advanced mathematics like Lagrangians, diff-eqs, and matrix algebra is as rare as hen's teeth among bio/medical researchers.
My teacher tried to show us limits in one class, but she didn't have more time to do so.
At least here in Chile (South America), only the rich or those with a custom curriculum see these concepts in HS. Unless you go to paid pre-U schools or to bachillerato, calculus is only taught at unis.
I mentioned 3 things :P
Although it’s true that if Dr. Tai had shown his paper to any physicist (or high school calculus student), with “Tai’s Model” as a label for integration, they would laugh and point out the issue.
I remember the application of this rule being taught in 12th or 13th grade.
https://xkcd.com/435/
That should be worth 50 citations.
It seems to me that it's diagnostically useful to have a standardised method for estimating the area under the curve, rather than everyone inventing their own method.
[0](since you don't know the whole function, you only know point estimates at particular moments in time)
[1] (trapezoidal rule is normally defined by having equal intervals - here, measurements are often taken at irregular intervals)
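On footnote [1]: the trapezoidal rule works interval-by-interval, so unequal spacing needs no special treatment, and for data that really is piecewise linear the rule is exact. A small sketch with made-up numbers, sampling the linear function f(t) = 2t + 3 at irregular times:

```python
# Hypothetical irregularly spaced samples of the linear function f(t) = 2t + 3.
ts = [0, 1, 4, 7, 10]
vs = [2 * t + 3 for t in ts]

# Per-interval trapezoid areas: width * average height.
auc = sum((t2 - t1) * (v1 + v2) / 2
          for t1, t2, v1, v2 in zip(ts, ts[1:], vs, vs[1:]))

# The exact integral of 2t + 3 from 0 to 10 is 10**2 + 3*10 = 130;
# the trapezoidal rule reproduces it despite the uneven spacing.
print(auc)  # 130.0
```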
I don't know why you'd defend a researcher who lacked basic math knowledge, and more damningly refused to acknowledge her mistake once it was brought to light.
[1] https://en.m.wikipedia.org/wiki/Trapezoidal_rule
In my view integration is by definition part of calculus, and requires the concept of the infinitesimal. (This is maybe semantic, but Wikipedia agrees with me in this case).
The paper shares one of the many goals of integration, which is to find the area under the curve, but you literally cannot use integration as the tool to do this here.
So, it's not integration.
A comment below calls this "numerical integration" - which I also find dubious. Numerical integration is still using calculus - you have to know the whole function - but without getting to a closed form answer.
I agree that someone shouldn't be publicly made fun of, in general, for sharing something novel and non-trivial that others didn't already know; and if it highlights a problem of inadequate reviews, maybe it should've been presented to the journals that published it along with that info.
I'd be stunned if the concept of integrating an unknown function that needs to be guessed based on measurements taken at irregular points in time hasn't been studied rigorously from a generalized mathematical point of view. What such study would likely do (that a medical treatment would not) is discuss tradeoffs of different approximation methods, probably explore things like error bounds and behaviours with different unknown functions, and selection of the best integration method.
GP is right in that I'm sure it's useful to have a standardized method that allows for comparison between doctors and patients. But I think it's naive to assume that the findings in this paper are mathematically novel, and further, that mathematicians couldn't do a more rigorous job of deducing an accurate 'standard' way of measuring this.
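The kind of tradeoff analysis described above is easy to demonstrate. A sketch (my own toy example, not from any of the papers discussed) comparing the composite trapezoidal rule against Simpson's rule on an integrand with a known answer:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    return h * (sum(f(x) for x in xs) - (f(a) + f(b)) / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Integral of exp(x) over [0, 1] is e - 1, so errors are computable exactly.
exact = math.e - 1
t_err = abs(trapezoid(math.exp, 0, 1, 4) - exact)
s_err = abs(simpson(math.exp, 0, 1, 4) - exact)
print(t_err, s_err)  # Simpson's error is orders of magnitude smaller here
```

This is exactly the sort of error-bound comparison (O(h^2) for trapezoid vs O(h^4) for Simpson on smooth integrands) that a mathematical treatment would make and the Tai paper does not.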
I see this a lot at the moment with supply chain management. The best example used to be masks, and now vaccinations. It is kind of funny to watch, in real time, people discovering and trying to cope with the time axis that comes with purchase orders, volumes, and delivery dates. It is also quite saddening to watch. A medical researcher discovering maths I learned before entering university falls into kind of the same category.
However, the original paper really is incomplete because of its seeming ignorance of the real “trapezoidal rule.” It really needs a discussion along the lines of “the trapezoid rule from ordinary calculus can be used by diabetes practitioners with surprising accuracy,” with an explanation of why sources of error, discontinuity, etc., aren’t likely to affect the approximation.
But don’t worry about M.M. Tai’s feelings too much :) The paper is almost 30 years old and it’s world-famous for the silly error; Tai has already been thoroughly roasted.