About the significance of the Schmidt-Kalman filter (from a related abstract):
>In most target tracking formulations, the tracking sensor location is typically assumed perfectly known. Without accounting for navigation errors of the sensor platform, regular Kalman filters tend to be optimistic (i.e., the covariance matrix far below the actual mean squared errors) ... The Schmidt-Kalman filter (SKF) does not estimate the navigation errors explicitly but rather takes into account the navigation error covariance provided by an on-board navigation unit in the tracking filter formulation. By exploring the structural navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters.
Kalman filters are useful for much more than that :)
I'm barely acquainted with them, but this[1] seems to be a good introduction that doesn't shove math under the rug.
Schmidt's modification of the Kalman filter allowed for a practical implementation on the Apollo computer. The history and the motivation for the changes are excellently described in a survey paper co-authored by Schmidt himself in 1985[2]:
>Dr. Kalman's original formulation would have required an on-board crew to make a continuous sequence of optical measurements equally spaced in time throughout the lunar mission, an impractical scenario. Therefore, to implement our measurement and course-correction schedule, the original formulation had to be revised.
This paper, [2], offers amazing historical tidbits (especially relevant on HN): computational challenges, the project running 6 months behind schedule, why it was better than least-squares methods (computation time!), numerical stability, implementation details that allowed use of single-precision floats instead of double precision, a FORTRAN implementation vs. specialized hardware, etc.
I find it amazing that nearly 60 years later, we are still dealing with the same issues in machine learning and high-performance computing: from single vs. double precision and numerical stability down to, well, using Kalman filters - and yes, FORTRAN and specialized hardware. And projects running behind schedule.
(Last time I had to touch FORTRAN code was... a month ago. Also, guess what powers all your shiny NumPy/SciPy computations: a lot of FORTRAN.)
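For readers who just want the core idea before diving into [1]: a minimal one-dimensional Kalman filter fits in a dozen lines. This is an illustrative sketch with made-up noise parameters, not the Apollo formulation:

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Estimate a (roughly constant) scalar from noisy measurements.

    q: process noise variance, r: measurement noise variance,
    x0, p0: initial estimate and its variance (all values made up).
    """
    x, p = x0, p0
    for z in measurements:
        p = p + q                # predict: uncertainty grows over time
        k = p / (p + r)          # Kalman gain: how much to trust z
        x = x + k * (z - x)      # pull the estimate toward the measurement
        p = (1 - k) * p          # measurement reduces the uncertainty
    return x, p

# True value 5.0, corrupted by a deterministic stand-in for noise:
zs = [5.0 + 0.5 * (-1) ** i for i in range(200)]
x, p = kalman_1d(zs)
print(round(x, 1))  # estimate lands very close to 5.0
```

The same predict/update loop generalizes to vectors and matrices; the scalar version just makes the gain arithmetic visible.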
> I find it amazing that nearly 60 years later, we are still dealing with the same issues in machine learning and high-performance computing: from single vs. double precision and numerical stability down to, well, using Kalman filters - and yes, FORTRAN and specialized hardware. And projects running behind schedule.
On the other hand, I don't find this amazing at all. Numerics are fundamentally difficult, and the people who worked on them at the beginning were very smart. Projects have always been behind schedule. It doesn't surprise me at all that we've been stuck at local maxima on many of these things...
The amazing part to me was that people had reached those maxima so fast: by 1960, barely after computers had been invented, and with such limited resources.
Perhaps think about it this way: precisely because we had such limited resources at the time, smart people were highly motivated to find those (local?) maxima. These days in many contexts it is easier to say "eh, good enough" and move on to the next thing.
To add to this topic: the fast hardware adder, patented by IBM in 1957 and now present in some form in all fast CPUs, had already been invented by Charles Babbage between 1820 and 1830, before the middle of the 19th (!) century, as he designed his mechanical computing "difference" engine:
"The first idea was, naturally, to add each digit successively. This, however, would occupy much time if the numbers added together consisted of many places of figures.
The next step was to add all the digits of the two numbers each to each at the same instant, but reserving a certain mechanical memorandum, wherever a carriage became due. These carriages were then to be executed successively. "
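Babbage's "mechanical memorandum" for pending carries is essentially the generate/propagate idea behind the carry-lookahead adder. A toy software sketch (real hardware evaluates the carry equations in parallel logic; here they are simply unrolled in a loop):

```python
def carry_lookahead_add(a, b, width=8):
    """Add two width-bit numbers via generate/propagate signals."""
    g = a & b            # generate: bit i creates a carry on its own
    p = a ^ b            # propagate: bit i passes an incoming carry on
    c, carries = 0, 0    # c = carry into the current bit
    for i in range(width):
        carries |= c << i
        # carry out of bit i: generated here, or propagated through it
        c = (g >> i | (p >> i & c)) & 1
    # each sum bit is propagate XOR incoming carry, truncated to width
    return (p ^ carries) & ((1 << width) - 1)

print(carry_lookahead_add(77, 51))   # 128, same as (77 + 51) % 256
```

The point of the real circuit is that the carry terms can be computed from g and p in a fixed, shallow depth of logic instead of rippling bit by bit, which is exactly Babbage's "reserve the memorandum, then execute the carriages" trick.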
FORTRAN is immensely better than most of its critics are aware, and it got a lot of solutions right. Already since the mid-1950s it had the format specifications:
We should sit down and chat about FORTRAN II with its three-number if statements. I spent a lot of years in that particular village, and was sufficiently frustrated with its limitations that it drove me to a career in writing compilers.
In addition to the languages you note as stealing the format idea, you need to include Lisp.
> was sufficiently frustrated with its limitations that it drove me to a career in writing compilers.
I'm genuinely interested!
> FORTRAN II with its three-number if statements
The IBM FORTRAN for the 704 (from 1956) had them already, as in the link already posted. Yes, all those labels were definitely annoying; researchers were already working on ALGOL two years later. FORTRAN itself also improved over time, but for most of those decades the sources were indeed ugly compared to the printed ALGOL in the papers (the implementations were often something else) and all the languages it inspired. The problem with the numerical labels was recognized even in 1958. Sadly, the first standard that allowed not writing them was Fortran 8x (later known as Fortran 90):
"Since 1979, the Fortran standardization committee, X3J3, has been labouring over a draft for the next version of the standard. Its initial intention of publishing this draft in 1982 was hopelessly optimistic, and at best it may be ready this year" (1987)
It eventually became ISO in 1991.
At that time, on the microcomputers, Turbo Pascal was already 8 years old. It was the fastest to develop in and really convenient. But if you wanted really heavy numerics, you also had to buy additional hardware and... probably use FORTRAN.
So back in the heyday of Fortran II, it did work on smaller computers.
One key frustration I had with Fortran II was that you couldn't put expressions in the elements of a DO loop. Nobody could tell me why.
So I bought a book on compilers https://www.amazon.com/compiler-generator-William-M-McKeeman... and dove in. I ported the XPL compiler to run on our SIGMA 5 and actually used it in some production jobs. Then I got a job at Sycor on a team of 3 writing a compiler for a derivative of PL/M targeting the 8085. The compiler was written in Bliss 36. It was serious fun and had a major impact on how they developed their software.
I have long left Fortran behind. I have many other languages under my belt now, and no desire to revisit.
Are they just talking about the extended Kalman filter? If so, it is funny that there was such a big conceptual gap between the linear and linearized versions (i.e., a gap of a few years). Hindsight is a wonderful thing.
Nope; as far as I can tell, the Schmidt-Kalman filter is a way of reducing dimensionality in a KF. You split the states into those you are interested in and those you aren't. Sometimes the states you don't care about (e.g. sensor biases) are still important for estimating those you do care about (e.g. position). Adding a state to a typical KF for each of these dimensions is one solution, but it's expensive (covariance calculations scale quadratically with dimension). The Schmidt-Kalman filter is another solution.
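A scalar sketch of that idea, with made-up numbers (the real SKF is the block-matrix version of these updates). The state x is estimated; the sensor bias b is only "considered": its variance Pbb and cross-covariance Pxb enter the gain and the innovation covariance, but b itself is never corrected:

```python
def skf_update(x, Pxx, Pxb, Pbb, z, r):
    """One Schmidt-Kalman measurement update for z = x + b + noise."""
    y = z - x                       # innovation (bias mean assumed zero)
    s = Pxx + 2 * Pxb + Pbb + r     # innovation variance, bias included
    k = (Pxx + Pxb) / s             # gain for x; the gain for b is zero
    x = x + k * y
    Pxx = Pxx - k * (Pxx + Pxb)     # estimated-state covariance
    Pxb = Pxb - k * (Pxb + Pbb)     # cross-covariance is still tracked
    return x, Pxx, Pxb              # Pbb itself never changes

# Same measurement, with and without accounting for the bias variance:
x0, Pxx0, Pbb, r, z = 0.0, 10.0, 4.0, 1.0, 7.0
_, Pxx_skf, _ = skf_update(x0, Pxx0, 0.0, Pbb, z, r)
_, Pxx_kf, _ = skf_update(x0, Pxx0, 0.0, 0.0, z, r)  # naive: pretends Pbb = 0
print(Pxx_kf < Pxx_skf)  # True: the naive filter reports less uncertainty
```

This is exactly the point in the abstract up top: pretending Pbb = 0 makes the regular KF's reported covariance optimistic relative to the actual errors, while the SKF stays consistent without paying for extra estimated states.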
Kalman filters are used in many, many things today, including your smartphone's location and motion sensor processing. They are very important and have added a whole lot of economic value since being first used for Apollo.
These days we have yearly budget deficits of a trillion dollars. At some point, we'll be paying more in interest on the national debt than the Apollo program cost:
I imagine as a percentage the government spends a lot less on research. R&D pays future dividends. Most of what we waste our money on now has little future value.
How would you have invested the money differently? Double the budget of the US military for one year?
One thing that struck me when I visited CERN in Switzerland is how much of the money spent on fundamental research spilled over into the "real economy" through the byproducts of all the interesting technologies invented.
We waste more on Medicaid/Medicare fraud than NASA's budget. If you're worried about bad investments, I'd go after the money going to dirtbag criminals who literally steal resources from the sick and needy before I'd go after NASA, which has a long history of paying massive quality-of-life dividends from its relatively tiny budget.
Considering how many cost reductions it's achieved in multi-billion dollar industries, yeah, it was probably a good investment, without even getting into the Apollo program letting us do things we straight up couldn't do before.
Comments like these appear all the time when a country like India achieves something significant in space. People question: "why put money in space if millions are starving?"
Current efforts out of China and India (for example) aren't making any advances. They have the appearance of "me too" in the sense that they signal a certain level of technical achievement that puts them, at best, on par with other countries. I don't think it's unreasonable to question the priority of that compared with tackling other issues.
To go full devil's advocate, that's because they're not doing much that's new. The space race of decades past and everything surrounding it was pushing human frontiers in science, technology, and exploration. Lots of that technology found its way into everyday use.
As far as I can tell, the launch technology here (the GSLV) is bog standard, the mission is unmanned, and it's using a rover that's less sophisticated than Curiosity. The ground being covered here is well-trodden.
Lest anyone think I'm pooh-poohing the achievement here, I'm not. Human spaceflight is still in its infancy, and being the 5th country to make a soft landing on the moon is a big deal, and a practical demonstration of a lot of technical prowess.
The problem is... at the end of the day, it appears to be mostly a demonstration. Is mapping the surface of the moon and detailing its composition more or less important than lifting 90 million people out of poverty? I know what my opinion is here, but I think it's something that reasonable people can disagree on.
> Is mapping the surface of the moon and detailing its composition more or less important than lifting 90 million people out of poverty?
This question assumes a lot. It's not an either / or. In fact, they may be mutually beneficial, where each effort boosts the other.
The efforts for both endeavors are also not completely interchangeable. We can't productively redirect all resources going into the space program towards poverty reduction. Perhaps some of it may be redirected productively, but most of it cannot.
I understand you may already get this, but I think it is worth repeating.
The efforts are not, but the resources are. Space launches aren't cheap, after all. The entire Chandrayaan program cost something in the neighborhood of US$145 million. When you've got a poverty/homelessness problem of that magnitude, I think it is reasonable to ask how and why funds are being prioritized the way they are.
Part of the issue is that 'humanity' is divided by borders, and countries do not generally share information and capabilities freely. So the fact that the US has achieved something means very little to other countries beyond 'this is possible to do'. Doing it themselves may still be a significant milestone in their own industrial development.
I wish it were otherwise, but I have a horrible feeling that the only workable alternative is a single global government, which I'm not convinced is a better option in the long run...
An incredibly cold and privileged view. How is feeding/clothing a hundred million people any less valuable than what India's space programme is doing? You say that the "future of humanity" is what's being progressed - well for all you know the next Einstein/Feynman/Tesla could be in that group and his/her potential is going to be wasted.
Industrializing / colonizing space is the future of humanity. Another country investing in space gets humanity closer to that future.
Einsteins don't manifest in rural Indian villages the moment you feed and clothe the local populace. It takes a lot more than a full belly to contribute in a meaningful way, and India isn't getting there any time soon. Plus, it doesn't matter if you surface the next Einstein/Feynman/Tesla if there are no forward-thinking projects for them to work on.
>Another country investing in space gets humanity closer to that future.
Not unless they're doing something new, they don't. This is what I meant by "the ground here is well-trodden". Getting to the moon is within the reach of any nation with a modern space program, the US just hasn't done it for a long while since there hasn't been any pressing need (political or research) for them to do so.
[1] https://towardsdatascience.com/kalman-filter-an-algorithm-fo...
[2] https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/198600...
https://en.wikipedia.org/wiki/Carry-lookahead_adder
Apparently, he was proud of that invention, but almost nobody was able to appreciate it at the time.
From "Passages from the Life of a Philosopher" (1864) by Charles Babbage:
https://en.wikisource.org/wiki/Passages_from_the_Life_of_a_P...
A 3D model:
https://www.youtube.com/watch?v=B2EDE8Srdcw
by the author of:
https://en.wikipedia.org/wiki/The_Thrilling_Adventures_of_Lo...
The format specifications, from the IBM 704 FORTRAN manual:
https://www.fortran.com/FortranForTheIBM704.pdf
They are quite similar to those used later in C (since around 1970), in Python (since around 1990), and even in Java, since version 5.0 in 2004.
Moreover, the format specifications were a part of the language, not "just a string that was traditionally not checked" as in C.
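The lineage is easy to demonstrate: the fixed-width, fixed-precision descriptor survives almost unchanged from FORTRAN's F8.3 edit descriptor through C's %8.3f to Python's format specs:

```python
x = 3.14159
# FORTRAN: FORMAT (F8.3)  ->  C: printf("%8.3f", x)  ->  Python:
old_style = "%8.3f" % x   # C-style, inherited via the printf family
new_style = f"{x:8.3f}"   # format-spec mini-language, same idea
print(repr(old_style))    # '   3.142' -- width 8, 3 decimals
assert old_style == new_style
```

The difference is the one noted above: in FORTRAN the descriptor is part of the language and is checked, while C's version is an ordinary string interpreted at run time.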
The 1987 quote is from:
https://www.sciencedirect.com/science/article/pii/0010465587...
An example of such numerics add-on hardware:
http://www.classiccmp.org/transputer/documentation/microway/...
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/198600...
"Discovery of the Kalman Filter as a Practical Tool for Aerospace and Industry -- Leonard A. McGee and Stanley F. Schmidt"
https://www.nytimes.com/2018/09/25/business/economy/us-gover...
https://en.wikipedia.org/wiki/Apollo_program#Costs
The $168 billion “investment” in the Vietnam War (a trillion today) definitely could have been better spent.
https://www.thebalance.com/vietnam-war-facts-definition-cost...
If we’d put all that money into another moonshot, say the war on cancer instead, can you imagine the knowledge gained?
Arguably, India is teaching itself to fish, which is even better.