I remember the ‘aha’ moment I had in my first year of calculus, during a test no less: “Ohhhh, when something is growing in proportion to its current size, you set up the differential equation and get an e^x!” The example used was bunnies with unlimited food; then foxes were introduced.
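That "growing in proportion to its current size" observation is easy to see numerically: a minimal Euler-method sketch of y' = k·y (the starting value, rate, and step count below are made up for illustration) converges on the closed form y0·e^(kt).

```python
import math

def simulate_growth(y0, k, t, steps):
    """Euler-method simulation of y' = k*y: each small step grows y
    in proportion to its current size."""
    dt = t / steps
    y = y0
    for _ in range(steps):
        y += k * y * dt
    return y

# With enough steps the simulation approaches the closed form y0 * e^(k*t).
approx = simulate_growth(y0=100.0, k=0.5, t=4.0, steps=100_000)
exact = 100.0 * math.exp(0.5 * 4.0)
print(approx, exact)
```

With 100,000 steps the two values agree to several digits, which is the whole 'aha': proportional growth *is* the exponential.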
Was surprised to have that learning moment in the middle of the exam and not prior…
Oddly, it was my Calculus 1 final that made a lot of things click for me. It turned out the authors of the test included a professor who could explain calculus much better than my lecturer that semester. I remember feeling the most intense and lasting sense of revelation for several days after that test.
> Was surprised to have that learning moment in the middle of the exam and not prior…
I set my first exam for a university course I was teaching last year. I thought I needed to introduce some new ideas, so the students wouldn't be bored doing it.
From the evaluations, not all students agreed...
> hated when my instructors put "important" results that we have never seen before on an exam.
You would have hated my 1977 quantum mechanics final; not a single question that had been directly covered in the course. Really sorted out those who had been paying attention from those who thought that memorization was enough.
It doesn't seem like many universities do it like that anymore. MIT's old comp sci curriculum used to be great, but I've since seen them replace the Lisp-based fundamentals with Python. My guess is that Python is taught so that they can stuff the curriculum with buzzwords.
If I recall correctly, the stated reasoning is that most software construction today relies heavily on combining libraries, and the batteries-included nature of Python allows them to get to this point earlier in the semester. Also, using Python makes more of the material directly transferable to upper-level machine learning and big data subjects.
As far as buzzwords, I think the weight of "MIT" is much heavier than any buzzwords that could be attached. (Though, I'm biased.)
My intro physics prof did this. On the first exam the average was around 35%; I was in the top 3 at around 70%. He got into trouble because he had also said there would be no grading on a curve, and most of the class complained to his department head.
We had a physics professor who did the same, except he did grade on a curve. My 38% was a B-. I don’t think we had three people above 70%. #1 was an outlier and might have hit 70.
The guy I studied with sat behind me, and at one point one of us started stress-laughing. Then it was two of us in the middle of a lecture, laughing like our gun had just jammed while the horror-movie serial killer was almost within striking distance.
There were a lot of pissed off people in class for the next couple of weeks.
This is why I have nightmares about undergrad engineering. Professors and TAs lose perspective when they teach the same material repeatedly and think they need to make things interesting. No, your job is to communicate abstract ideas clearly, which is apparently an extremely rare skill.
> The particular topics he wanted me to cover were integrating log x, or ln x as he called it
What's wrong with calling it ln x? The way this is written in the article implies there's something weird about calling it that. The name 'log' can mean log2, log10, or the natural logarithm depending on the field.
Removing ambiguities from math notation should be considered a good thing.
The author expressed a worry about math education. Consider that clear, unambiguous notation would help.
Actually, another possible convention is "I don't care about the base", as in O(n log n) or in general in most of Asymptotic Analysis. This becomes fun when people start talking about O(2^(log n)) where the chosen base becomes relevant again :)
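A quick sketch of why the base becomes relevant again once the log sits inside an exponent: 2^(log2 n) is just n, while 2^(ln n) is n^(ln 2), a genuinely different growth rate (the value of n below is arbitrary).

```python
import math

n = 1_000_000.0

# With base-2 logs, 2^(log2 n) recovers n exactly: linear growth.
base2 = 2 ** math.log2(n)

# With natural logs, 2^(ln n) = n^(ln 2): noticeably sublinear.
basee = 2 ** math.log(n)

print(base2, basee, n ** math.log(2))
```

So O(2^(log n)) is O(n) with base-2 logs but roughly O(n^0.69) with natural logs: inside big-O the base of a bare log washes out into a constant factor, but in an exponent it changes the growth class.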
The group is a bigger deal than you might expect. There are an infinite number of true theorems, almost all of which are boring.
Mathematicians decide what is interesting, and that's not a matter of logic. A computer can bang out new theorems at light speed but nobody cares. Mathematics, like science and programming, is as much about humans as about the raw logic and data.
You're welcome to be a group of one and please only yourself. But then you wouldn't care if it were published, and it wouldn't be unless you showed it to someone and they took an interest.
For me it's the opposite - in secondary school education the math teachers made the distinction between "log" and "lon" (how they pronounced it) probably because that's what's written on our Casio calculators!
Whereas in uni log is generally assumed to be the natural log, or else it's specified, or else the base is unimportant (like in big O notation)
It made a difference long before Casio calculators, when tables were your main source of values (that would be this old fart's day). You could argue that as long as you stuck to the same table set, it doesn't really matter - but tell that to the decibels.
I think the reason for this is that derivation from 'first principles' isn't really done. You'll do it once or twice in the intro to derivatives and that's it. The other 40 hours you spend on derivatives won't even touch it.
The issue with being able to derive the differentiation formulas yourself is that it's not very useful in practice. You simply don't have time to redo those derivations during a test. It's like trying to apply grammar rules in a conversation - conversations happen at a pace where you cannot consciously apply the rules. You just have to know the patterns.
You learn things in school to pass a test. The usefulness of the vast majority of the knowledge students attain is purely to help them pass that test. Later in life you might wish you knew more about this or that, but that's not at all apparent to the student.
It's because most exams and curriculums in the UK are so strictly defined that all questions are almost guaranteed to follow one of a small set of structures.
And schools have figured out that rather than teaching the subject from first principles, it's easier to get students to high grades by teaching them each of the structures. E.g. "Whenever there is a question about differentiating x^7, just put 7x^6 as the answer." They then have the students try a few examples (x^3 becomes 3x^2, x^77 becomes 77x^76, etc.), and that's the way every science-y subject is taught.
I often think it leads to students who do well in exams, but can't solve many real world problems.
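For what it's worth, the memorized pattern and the first-principles definition it comes from can at least be checked against each other numerically; here's a minimal sketch (the helper function and the test point are mine, purely illustrative).

```python
def numeric_derivative(f, x, h=1e-6):
    """Symmetric difference quotient: the 'first principles' limit
    definition, evaluated at a small but finite h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The memorized pattern: d/dx x^7 = 7*x^6
x = 1.5
pattern = 7 * x ** 6
first_principles = numeric_derivative(lambda t: t ** 7, x)
print(pattern, first_principles)
```

The two numbers agree closely, which is exactly the point: the pattern works, but knowing *why* it works is what the first-principles route teaches.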
It could be solved by having a part of every exam paper be never-seen-before applied problems. For example, for differentiation, one might ask "A road's height in meters as a function of the horizontal distance along the road in kilometers is defined as sin(x)cos(x)tan(x). At what points are the steepest uphills? Would you describe the slope of the road as 'very hilly', and why?"
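A sketch of that hypothetical road problem (ignoring the metres-vs-kilometres scaling for simplicity, and noting that sin(x)cos(x)tan(x) simplifies to sin²(x) wherever tan is defined):

```python
import math

def height(x):
    # sin(x)*cos(x)*tan(x) simplifies to sin(x)**2 wherever tan is defined
    return math.sin(x) * math.cos(x) * math.tan(x)

def slope(x):
    # d/dx sin(x)**2 = 2*sin(x)*cos(x) = sin(2*x)
    return math.sin(2 * x)

# The slope is maximal where sin(2x) = 1, i.e. at x = pi/4 + k*pi
steepest = math.pi / 4
print(height(steepest), slope(steepest))
```

An applied question like this forces the student to simplify first and only then differentiate, which the "pattern per question type" approach never exercises.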
I think the problem is they want calculus in the curriculum and it is too late to be able to put it in context. There are some great uses for calculus that are accessible to many high school students. In particular, with physics you usually learn about capacitors and nuclear decay. Both of these cases are basically solving the differential equation y' = ky but:
- the physics course can’t depend on the concurrent maths course, because you are allowed to take physics without taking maths, so you just learn weird equations full of exponentials instead of the ODE
- I think the maths course doesn’t even teach differential equations. They are in FP1 (from a separate ‘further maths’ course) but definitely not in AS (penultimate year of school) maths. Possibly a few turn up in A2 (final year) but then they can’t have any good examples from physics because not everyone doing maths will be able to depend on knowledge about what a capacitor is or how nuclear decay works. But I guess population models might work.
- there can be some better stuff in the further maths course (e.g. I think they might even have the ‘exponentiate a matrix’ solution to systems of first order linear ODEs)
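For the nuclear-decay case, the ODE N' = -λN and its closed form are only a few lines; a minimal sketch with made-up numbers (a half-life of 10 time units):

```python
import math

# Radioactive decay N' = -lam * N has the solution N(t) = N0 * exp(-lam * t).
# Illustrative numbers only: lam is chosen so the half-life is 10 units.
half_life = 10.0
lam = math.log(2) / half_life

def n_remaining(n0, t):
    return n0 * math.exp(-lam * t)

print(n_remaining(1000.0, 10.0))  # one half-life: 500 remain
print(n_remaining(1000.0, 30.0))  # three half-lives: 125 remain
```

The half-life relation t½ = ln(2)/λ drops straight out of the solution, which is exactly the kind of connection the "weird equations full of exponentials" presentation hides.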
I recall in my highschool the math and physics (both were mandatory) teachers explicitly coordinated so that the derivatives and other relations were taught right before they got applied in physics. There are much simpler examples than capacitors and nuclear decay, you can explain all aspects of physics (starting with basic mechanics, position/speed/acceleration) simpler if you can rely on calculus.
I've seen pretty bright-seeming UK university applicants able to do whatever you ask them, but then completely shit the bed when you ask them to differentiate e^y with respect to y rather than e^x with respect to x.
I had a wonderful grade 12 calc teacher in high school who taught everything from first principles. I would leave his class feeling like I had gone to the gym for my brain. Despite his incredible teaching, I only pulled off a low-70s grade in the class.
So I retook the course the next year, taught by a new teacher fresh out of teachers college - theoretically with a specialty in math, since they were teaching an upper-level math course.
I don't think the new teacher even knew how to do derivatives from first principles. Just rote memorization of the different types of differentiation.
I got an A in that class the second time, having learned nothing.
This article reminded me of my maths journey. I was mechanically dutiful as a student, and would make lateral connections but had a lot of patchwork understanding. It wasn't until I understood the derivations in Real Analysis that things started to click.
One of the things I really like about the tau manifesto (the proposal to use tau == 2 * pi instead of pi in many situations) was their explanation of how tau made this equation all the more interesting (IMHO) by making it "almost like a tautology":
> In mathematics, the hyperoperation sequence is an infinite sequence of arithmetic operations (called hyperoperations in this context) that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3).
The reciprocal of e is about 37% and it pops up in a lot of places. Say, for example, you play a lottery 1000 times and there is a 1 in 1000 chance of winning each time you play. The chances you do not win even once is 37%, or 1/e.
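That claim is easy to check directly; a two-line sketch:

```python
import math

p_win = 1 / 1000
plays = 1000
p_never = (1 - p_win) ** plays  # probability of losing all 1000 plays
print(p_never, 1 / math.e)      # both are roughly 0.368
```

This is the limit (1 - 1/n)^n → 1/e as n grows; at n = 1000 the two already agree to about three decimal places.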
I really don't like this way of thinking about it.
e isn't important, the exponential function is. e shows up so often because we've chosen to write exp(x) as e^x. It's a result of a notational choice - the fact that exp(1) = 2.718.. and we call that e is pretty insignificant and boring.
> the fact that exp(1) = 2.718.. and we call that e is pretty insignificant and boring.
The constant itself is still pretty interesting. Using e as the base yields optimal information density, IIRC. Binary (base 2) is close to e, so its information density is not bad, but this also tells us that ternary (base 3) does even better on this metric, since it's closer to e.
There are lots of interesting properties like this that end up linked to e.
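The metric in question is sometimes called radix economy: asymptotically, the cost of representing a number in base b (digit count times symbols per digit) grows like b/ln(b), which is minimised at b = e. A minimal sketch:

```python
import math

def cost(b):
    """Asymptotic radix economy: representation width (digits) times
    symbol count per digit grows like b / ln(b), minimised at b = e."""
    return b / math.log(b)

print(cost(2), cost(3), cost(math.e), cost(10))
```

Base 3 does beat base 2 on this measure (roughly 2.73 vs 2.89), and both comfortably beat base 10, with the theoretical optimum sitting at e itself.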
Indeed. That a member of the real number line has this important relationship to the differential operator, the complex plane and number systems, and thus all of trig, calculus, and quantum mechanics is pretty impressive to put it lightly. (Trig through the many relationships of e^x with cosine and sine functions.)
The GP comment reads as a grab at elitism at best or flat-out anti-intellectual at worst. No need to bring that in here.
The point being made is that the _function_ is different than the _constant_ producing that function through exponentiation. I think that's kind of fair.
Take this headline: The function exp(x) = 1 + x + x^2/2 + x^3/6 + ... is the most beautiful function in mathematics. It is its own derivative, has "product linearity", i.e. exp(x+y) = exp(x) exp(y), and is related to trig functions through complex numbers.
The number e isn't doing the heavy lifting, it is the function. The number e comes from the function, not the other way around. Even the famous equation with pi and e is a consequence of the function. And the Taylor series is the easiest way to see the relationship with trig functions.
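Those properties can all be sketched from the series itself; here's a partial-sum helper (mine, with a generously chosen term count) showing that the series at 1 produces the number e, and that the "product linearity" holds:

```python
import math

def exp_series(x, terms=30):
    """Partial sum of exp(x) = sum over n of x^n / n!"""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # x^(n+1)/(n+1)! from x^n/n!
    return total

# The series evaluated at 1 produces the number e...
print(exp_series(1.0), math.e)

# ...and exp(x + y) = exp(x) * exp(y) holds.
print(exp_series(0.3) * exp_series(0.4), exp_series(0.7))
```

This is the "function first" framing in code: e never appears in the definition, it falls out as exp(1).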
To be fair, there might be a difference in disposition at play. Those who prefer a more causal or "active" feel to mathematics would prefer the function framing, while those who prefer a more platonic or "mystical" feel would prefer the constant framing.
Idk feels pretty arbitrary to say the Fourier expansion of a function matters more than any other expression of the function when the whole point of the Fourier transformation is precisely its ability to express any function in terms of an orthonormal set of functions.
I feel like you've missed the point of my comment. I said that the exponential is important and you've repeated that here so we don't disagree about that. My point is to distinguish between the exponential function in general and particular value of the exponential function when evaluated at 1.
Sincerely, you completely changed the way I look at e.
Interesting that ln(x) does follow such a notational style; to change the base one can either divide by ln(b) or use a separate notation (log vs ln). I had also put a lot of weight on e being transcendental, but it seems that as long as x is a nonzero algebraic number (any nonzero rational, in particular), e^x is also transcendental, so e itself isn't that special in this regard (if someone could confirm).
The more general formula (e^ix = cosx + i*sinx) looks better to me because it defines exponentiation of a complex number as a rotation around a unit circle. It has a nice proof, some cool visualisations and a lot of implications to a bunch of other things in mathematics - I can get behind calling that beautiful.
The special case of x=pi... it's like being excited that sin(pi)=0 or cos(pi)=-1. It doesn't really say anything meaningful or consequential, people like it only because of the symbols it includes. It feels kind of like a math meme that people like to repeat and I can't get behind it.
Maybe it's just not for me and I should just let other people like what they like.
> It doesn't really say anything meaningful or consequential
The impressive aspect of that version of the equation is simply the idea that you can obtain a plain old integer (-1) using nothing but simple arithmetic operations on two random-looking transcendentals.
The thing about that is that an imaginary power of a number isn't really simple - it's not like you can do repeated multiplication "i*pi times" and get a result. An imaginary power is defined as cos x + i sin x (or via the equivalent Taylor series). That's what I meant when I said that all people are impressed by is that cos(pi) = -1 and sin(pi) = 0.
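Python's cmath behaves consistently with that definition, so both the general formula and the x = π special case can be checked directly (the test angle is arbitrary):

```python
import cmath
import math

x = math.pi / 3  # arbitrary test angle
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(lhs, rhs)

# The x = pi special case lands on -1, up to floating-point error.
print(cmath.exp(1j * math.pi))
```

The second print shows a tiny imaginary residue on the order of 1e-16, a nice illustration that the "-1" is really sin(π) = 0 and cos(π) = -1 in disguise.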
I feel this way as well. In fact every time I have tried to remember the "most beautiful equation" I had to think of it in the context of the unit circle and work it out by assigning pi to x. Otherwise I don't get any wow out of it.
One interesting thing is that it means it's not impossible to think that π+e or πe or π^e or some other combination of the two could be a simpler number (right now most of these numbers have no known/proven properties - they could even be rational for all we know).
For everyone who, like me, likes to read only the headline and then proceed directly to the comments:
The title of the link currently is "Why E, the Transcendental Math Constant, Is Just the Best".
But the article is really about Euler's constant - the lowercase e - and not about any of the capital Es out there (like the E sometimes used in scientific notation, or the expected value in probability theory).
Euler was much more than smart. The man went home during the Black Plague and studied math so hard he went blind in one eye - presumably so his brain could use those neurons for math instead of sight. He was also discredited in his time and for centuries after for an intuitive understanding of calculus through infinitesimal and infinite numbers - which was only relatively recently put into rigor akin to epsilon-delta calculus. Also considered the last person to be able to know all of the known world of mathematics at his point in time.
I kind of wish we had a holiday of some kind to appreciate either Euler himself or even a month to discuss the historical contributions to knowledge by philosophers and scientists alike.
You should come to Stockholm, Sweden, during the week at the beginning of December when the Nobel prizes are awarded. While it is not an entire month of celebrations in the name of science, at least it is one full week: https://www.nobelprize.org/ceremonies/nobel-week-2021/
Yeah the title almost seemed like clickbait since "Euler's constant" usually means γ=0.5772... provoking a reaction of "where does that show up in optimization?". That the constant turned out to be e=2.718... which shows up all over the place was a big disappointment