32 comments

  • SilasX 1864 days ago
    I would be a lot more forgiving of these screwups if Tesla didn't constantly swear up and down that they've solved self-driving cars.

    As far back as 2016 they were claiming they had full SDC capability above human driver safety [1], and their recent Model Y announcement suggests that the only thing holding it up is regulatory approval, and not failure to achieve the desired spec.

    >Model Y will have Full Self-Driving capability, enabling automatic driving on city streets and highways pending regulatory approval, as well as the ability to come find you anywhere in a parking lot.

    [1] https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

    [2] https://news.ycombinator.com/item?id=19397942 (linking HN because it's hard to find the text in the page with their UI)

    • slg 1864 days ago
      I don't want to dismiss your whole point, because it is certainly valid, but that isn't really the issue here. It is entirely possible for bugs like this to exist in the self driving tech and for Tesla to be correct in their claims that Autopilot is on average safer than a human driver.

      It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans. You and I certainly don't have enough data to say one way or another on that. I would bet even Tesla doesn't have enough data to say definitively. However writing this tech off as unsafe just because it makes what seems like an obvious mistake is a great way to slow progress which will result in more human deaths long term.

      • rayiner 1864 days ago
        I think you're missing the forest for the trees. We're not even at the point where we're comparing safety rates. Can a Model Y drive through DC? Can it ignore traffic signals in favor of following hand signals from a uniformed traffic officer, or follow the directions of a construction worker alternating traffic in turns through a road they've decided to reduce to one lane? Can it deal with an unannounced presidential motorcade rerouting traffic? Because that happens literally every day in DC. If Tesla can't handle that, it's inaccurate to say it has "full SDC," even before you get into an analysis of safety.
        • close04 1864 days ago
          It's because Tesla marketing material doesn't tell the whole truth, just the selected bit that sounds impressive. The conditions in which the Autopilot is safer than a human driver are so narrow and under so many restrictions that it's obvious it's less "self driving" and more "driver assist".

          Indeed, I also find it disingenuous to claim the Autopilot is safer than a human driver even though it actually can't function at all in 95% of situations on the road. I guess saying "possibly safer than a human driver at these 3 things and only under these very specific conditions" isn't so catchy. Definitely not thousands-of-dollars catchy.

          • thomasjudge 1864 days ago
            It's because -in general- marketing material doesn't tell the whole truth, just the selected bit that sounds impressive.
            • LoSboccacc 1864 days ago
              but still there's quite some difference in the fine print between a new MacBook being "two times faster than the previous model#" and a Tesla that "comes with full self driving hardware&"

              # only on specific workloads

              & don't ever get distracted because it can literally kill you

              • avs733 1864 days ago
                this is the point I see getting lost. This isn't someone overstating how good a pair of pants makes me look. It's more like selling a flame thrower as a weeding tool.

                I know that sometimes these lines are subjective, but acting like you can't tell okay and not okay apart because the line is gray in some cases just seems like bad faith.

                If you are okay with Tesla misleadingly selling a prototype feature in the name of disruption, own your argument. If this level of risk is just okay to you because of the potential long term upsides...I disagree but I respect your willingness to say it. Just don't make a different argument because you are afraid of the social consequences of your actual point.

                • mindslight 1864 days ago
                  Don't you mean selling a weeding tool as a "flame thrower" ?
                  • eropple 1864 days ago
                    What you describe would be a disappointment. What he describes would be life-threatening.
                    • xkcd-sucks 1863 days ago
                      Tesla did actually sell a propane torch that's a weeding tool (or asphalt surfacing tool, paint stripping tool etc) and they called it a flamethrower
                    • nitrogen 1863 days ago
                      Flamethrowers are regularly used as weeding tools though. The best way to kill weeds on the edge of a field or a ditch is to burn them.
                    • avs733 1863 days ago
                      Thank you for the assist.
            • simple_phrases 1864 days ago
              Which, unfortunately, doesn't excuse them when users take Tesla at their word that their Autopilot feature is fully self-driving and end up in an accident and/or dead.
              • Domenic_S 1864 days ago
                Is this hyperbole, or do you think Tesla actually said that?
            • darkpuma 1864 days ago
              Yeah, and the auto dealer industry is notoriously deceptive too. Tesla told us they'd be different. Well, they're not selling cars through dealers, that much is true, but it seems clear the deception is still in force.
              • close04 1863 days ago
                That's probably very true, but this isn't about the dealer as much as the manufacturer. When one says they have 4 crumple zones to absorb the energy of an impact and you later find out that the passengers were those crumple zones, someone goes to prison. When Tesla claims that their system is fully self driving at a level safer than a human driver, they're doing the same. Because they know very well that the system is just a glorified driver assist that can do any kind of driving only in the most narrow and perfect of conditions. Take it out of that narrow Autopilot "comfort zone" and you have plain old driver assist.
        • maxerickson 1864 days ago
          From what I understand, it can't even negotiate a traffic light.

          Like here's a recent article swooning over it coming out sometime soon:

          https://electrek.co/2018/12/09/tesla-autopilot-soon-traffic-...

          The article there also says that the more capable version will require new hardware, which isn't something they have been admitting very readily over the years.

        • mannykannot 1863 days ago
          Furthermore, the demarcation between what it can handle and what it cannot is neither simple nor clear. As this case shows, you can be bowling down the sort of highway it is supposedly suitable for, yet suddenly come across something it cannot deal with - and maybe something it did negotiate successfully in the past.

          As long as Tesla says that driver vigilance is required at all times, while simultaneously promoting it as if it were true automation, the risk to third parties, from Tesla autopilot users who don't understand what's going on here, is unacceptable.

        • slg 1864 days ago
          As far as I'm aware, Tesla has never claimed their cars will be able to handle all those situations. "Self driving car" is a vague term that has different definitions depending on who you ask. You are considering "self driving" as level 5 autonomy and Tesla probably considers it somewhere between level 3 and 4. I don't think it makes sense to get angry because their definition of the phrase is not the same as your definition of the phrase.
          • dragontamer 1864 days ago
            Tesla skirts morality.

            The feature is called 'Full self driving'. Their sales reps regularly told customers to take their hands off the steering wheel of their "Autopilot". Sure, the fine print says "always pay attention", but there's an entire marketing scheme going on here which is borderline dishonest.

            • darkpuma 1864 days ago
              > "Sure, the fine print says "always pay attention","

              In fact the not-so-fine print in the user manual says to keep your hands on the wheel, but the promotional material tells a different story, as does Musk when he takes his hands off the wheel on national television.

              • salawat 1863 days ago
                Honestly, I'm surprised a false advertising case hasn't been pursued if this is the case. Marketing practices this inaccurate are well outside the puffery defense.
            • Chris_Chambers 1864 days ago
              Borderline?
          • rayiner 1864 days ago
            That’s just called “driving” and if the car can’t handle those situations it’s not a “self driving car.” It’s just a driver assistance feature.
      • xkjkls 1864 days ago
        > It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans. You and I certainly don't have enough data to say one way or another on that. I would bet even Tesla doesn't have enough data to say definitively. However writing this tech off as unsafe just because it makes what seems like an obvious mistake is a great way to slow progress which will result in more human deaths long term.

        And where is the research to show that? To show that with any statistical significance, they would have to have millions and millions of miles to compare against human drivers. This is new technology and making false claims about its safety is dangerous.

        We as a society force drug companies to rigorously show that their drugs do what they say they do and all of their side effects are properly accounted for. This should be the case here too.

        • mannykannot 1863 days ago
          The last time anyone tried to make that case statistically, it turned out to be bogus [1].

          Bizarrely, this report was from the NHTSA, raising the sort of regulatory capture issues that have surfaced between Boeing and the FAA.

          Also, when such a study is performed, we must be careful that technologies are not conflated to claim more than can properly be claimed. The effectiveness of less powerful technologies such as lane-keeping assist and automatic emergency braking tells us nothing about the safety of full-authority automated driving.

          [1] https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...

      • fmpwizard 1864 days ago
        > see self driving cars run into solid and stationary objects, but human drivers do that all the time too.

        Human drivers who are distracted do that all the time. AP is supposed to avoid that; it is supposed to be alert all the time. But when it sees a stationary object, the result of their algo is, "it must be a sign we can somehow go through."

        We need to make it very clear that self driving cars are better than driving drunk, or when you haven't slept in 24+ hours, but if you are a driver who pays attention, don't use this tech.

        And it is not that I don't want the tech to take over the world. I wish I could just put my kids in a self driving car and have the car take them to the school that is 4 miles from home. But we are nowhere close to that, even with me in the driver's seat, if I only have seconds to take over before I end up in a ditch or worse.

        • nradov 1863 days ago
          There are currently no self driving cars which a drunk or exhausted person could operate with an acceptable level of safety.
        • cameldrv 1862 days ago
          It’s far from clear in the statistics that self driving cars today (without the use of a safety driver) are safer than driving drunk. IMO Tesla Autopilot fairly clearly is less safe than driving drunk if a human is not carefully monitoring.
      • skwb 1864 days ago
        No, it's still the issue. If there's a bug in the software of a programmable pacemaker, it doesn't matter whether the fundamental design is sound. For governmental regulators, it's the actual implementation and real world performance that matters.

        Sure, a bug in a routine being called is maybe easier to fix, but the real world performance is still the important metric to track.

        • slg 1864 days ago
          Ok, but a lot of people are arguing something along the lines of "there is a bug in the pacemaker, let's take out everyone's pacemakers".

          I am not saying we shouldn't be critical of Tesla or hold them accountable for their product. My point is simply that their product doesn't have to be infallible for it to still be an improvement over the current solution.

          • gamblor956 1864 days ago
            A pacemaker has objective improvements over the alternative, namely, death. So if a pacemaker has a bug, it's still overwhelmingly better than the baseline scenario of "no pacemakers."

            A self-driving Tesla is not objectively better than a regular, human-driven car. The jury's still out on whether any self-driving car is even as good as the average human driver, so if one of them has a bug that causes serious accidents in reproducible situations, that's not better than the baseline situation it's being compared to.

            • jimktrains2 1864 days ago
              > So if a pacemaker has a bug, it's still overwhelmingly better than the baseline scenario of "no pacemakers."

              Unless it goes off when it's not needed and kills you.

              • yellowapple 1864 days ago
                Right, but the probability of you dying because you do have a pacemaker is (in the situations for which a pacemaker is prescribed) far less than the probability of you dying because you don't have a pacemaker.

                The same cannot yet be said about "self-driving" cars.

                • jimktrains2 1864 days ago
                  > Right, but the probability of you dying because you do have a pacemaker is (in the situations for which a pacemaker is prescribed) far less than the probability of you dying because you don't have a pacemaker.

                  That assumes the pacemaker works more often than it doesn't. (Which is the case now.) It's an unstated assumption that doesn't always apply when generalizing your example.

                  • kilotaras 1863 days ago
                    No it doesn't assume that. We don't install pacemakers in everyone at birth.

                    Even if a pacemaker fails 90% of the time, a 90% chance of death is better than a 100% chance without it.

                    • jimktrains2 1863 days ago
                      You're assuming "fails" only accounts for false negatives. False positives are a thing. If there was a 90% chance of a pacemaker going off when it wasn't needed, they wouldn't be used, as it'd cause nearly as many deaths as it prevents, assuming no false negatives.
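
                      To make the point concrete, here's a toy expected-deaths calculation (every number below is hypothetical, chosen only to illustrate why false positives enter the comparison at all):

                        # Toy numbers only -- nothing here is real clinical data.
                        n_patients = 1000            # people fitted with the device
                        p_needs_firing = 0.5         # chance a given patient ever needs it to fire
                        p_false_negative = 0.0       # "assuming no false negatives", as above
                        p_false_positive = 0.9       # the 90% spurious-firing scenario
                        p_death_if_untreated = 1.0   # an untreated event assumed fatal
                        p_death_if_spurious = 0.9    # a spurious firing assumed usually fatal

                        deaths_prevented = n_patients * p_needs_firing * (1 - p_false_negative) * p_death_if_untreated
                        deaths_caused = n_patients * (1 - p_needs_firing) * p_false_positive * p_death_if_spurious

                        print(deaths_prevented, deaths_caused)  # 500.0 vs 405.0 -- "nearly as many as it prevents"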
          • mannykannot 1863 days ago
            Another reason pacemakers are not comparable is that pacemakers do not present anything like the potential threat to third parties that self-driving cars do. Two reasons we can say that with confidence are that, with pacemakers, the scenarios are simpler and we have good statistical data.
            • lightwin 1863 days ago
              >> Another reason pacemakers are not comparable is that pacemakers do not present anything like the potential threat to third parties that self-driving cars do.

              Unless the person wearing the pacemaker is driving a non-self-driving car.

              • mannykannot 1863 days ago
                As I wrote, specifically because I knew someone would make this reply, in this case we have adequate statistics to make a good estimate of the minuscule risk.
      • disiplus 1864 days ago
        I would not take those chances. If the self driving car is only slightly better than the average driver in fatal accidents, that's not good enough. Most accidents happen because of distractions (texting and so on), and if a self driving car is only slightly better than the average driver, I can do a lot better by just not using my smartphone while driving and not sitting behind the wheel tired.

        So it's not enough to be better than the average driver. It needs to be as good as a "good" driver or better before I would trust it.

        • gregknicholson 1863 days ago
          I wouldn't trust my life to proprietary software, because its makers' motives obviously don't align with my own. If my safety was really paramount, they'd open their software to scrutiny.

          It's unethical to leave life-and-death decisions to a black-box algorithm, when I know it was written for the primary purpose of gathering more money. Safety is a constraint, not their goal.

          • vntok 1863 days ago
            You already do that right now and basically have done it every day of your life.

            The power grid, railway, traffic lights, elevators, etc. all these systems are critical and closed source, and you don't see them killing people on a wide scale, or even on a small but regular basis.

            • gregknicholson 1863 days ago
              Elevators I'll grant you, but the power grid, railway and traffic lights are all controlled by governments, councils and NGO-type organisations — they all have reliability as their primary concern, not churning out units for profit.

              And it may be irrational, but wrongly-activating brakes feel like less of a risk than wrongly-activating steering or accelerators.

              And anyway, “lots of things are untrustworthy” is not a good argument for trusting something else.

            • badpun 1863 days ago
              Yep. Add the code which controls ABS and ESP in modern cars.
        • robbiep 1864 days ago
          You’re implying that agency is the element that is most desired. I.e., if a human crashes and kills themselves/others, it is ok, because someone is at fault.

          If the car is anywhere from slightly to quite a lot safer, but accidents that result in injuries/deaths occur, then it is not ok.

          Psychologically you may feel this to be ‘right’ but I would prefer a world with fewer injuries and deaths all round. And one day the courts will too.

          • disiplus 1864 days ago
            No, I'm just saying that average is not good enough.

            Imagine lung cancer, for example. If we take the average across smokers and non-smokers, the risk is 6%, but if I decide not to smoke it's 0.2%. So in this case I can do better at lowering my chances myself than by accepting some other solution that puts us all at 5%, which might seem ok but not to me.

        • slg 1864 days ago
          You are setting up a false dichotomy. You don't have to choose between Autopilot and paying attention. The video above is an obvious example. It didn't result in an accident because the driver was paying attention and intervened.

          Also you aren't considering that other drivers can be the cause of a potential accident. You can assume that the other drivers on the road are average drivers with all the distractions that come along with that. If you make the other drivers on average slightly safer, that improves your safety even if your behavior is completely unchanged.

          • rayiner 1864 days ago
            > You are setting up a false dichotomy. You don't have to choose between Autopilot and paying attention. The video above is an obvious example. It didn't result in an accident because the driver was paying attention and intervened.

            If you have to pay attention, then you might as well be driving.

            • TheSpiceIsLife 1864 days ago
              Not only that, this is a new type of driving where you have to actively pay attention and fight the vehicle.
            • automathematics 1864 days ago
              You definitely aren't doing your commute in a Tesla, then.
            • slg 1864 days ago
              This is just a silly argument to me. Would anyone suggest that cruise control is worthless because you still have to steer the car?
              • macintux 1864 days ago
                The more the car drives for you, the harder it is to pay attention and the slower the context switch when suddenly the situation demands it.

                This is basic human nature.

              • quickthrower2 1864 days ago
                Cruise control is useful to me because it means I won’t accidentally speed. But having to hold the wheel means I need to pay attention. If the car is driving itself human nature means you might not pay attention. Also if the car does something weird you have to decide in a split second if it is because of a hazard you haven’t seen but the car has, or because the software went wrong.
                • 0815test 1864 days ago
                  You do have to hold the wheel, precisely because you might find yourself having to assess what the car is doing "in a split second", and possibly issue corrective actions. You can't do that unless you are holding the wheel and are paying attention to what the car is doing! But all that means is that the car is not really driving itself; it is however dealing with the boring, predictable parts of the job and leaving the rest up to you - this makes it easier to be attentive, not harder. The tasks where it's hard to pay continued attention are ones where you have to do something that seems extremely predictable, but also has very rare events where it isn't. Computer assistance can actually help a lot with such cases.
                  • rayiner 1864 days ago
                    The boring predictable parts of driving (lane keeping and speed keeping) are already basically unconscious muscle memory. It’s watching others for surprises that takes the mental energy.
                    • techcode 1863 days ago
                      Exactly - though speed keeping is nice for long road-trips simply not to get foot cramps.

                      Anyway, I've yet to see a video of a Tesla doing emergency braking for a highway pileup where the braking doesn't come only a couple of seconds after a human driver could already see a row of red brake lights ahead.

                  • cgriswald 1864 days ago
                    That makes no sense. If I can’t pay attention to the boring predictable part of driving how am I going to pay attention when I’m not even the one driving?

                    As for the rare unpredictable stuff, I might not be able to predict, say, a ladder falling off a work truck, but I can recognize it could fall off, prepare for that type of event, probably even recognize it before the car does, and if I don’t die, I can learn from it.

                    I don’t really see how also having to worry about the car itself creating a rare event helps. Now I’ve got to spend additional time deciding whether to take over and if I do, the time I have to react is reduced.

              • dsfyu404ed 1864 days ago
                Well, if marketing says you get a CNC mill and what you really get is three power feeds, it's not worthless, but it's still not what you were promised.
          • disiplus 1864 days ago
            I was talking about self driving cars, not really Tesla Autopilot. But do you really think this (paying attention and correcting Autopilot's mistakes) would not be a bigger problem, knowing full well that the no. 1 cause of accidents is not paying attention?

            Yeah, other drivers are also part of this, but even then I would like something a lot better than average. Sure, there is a chance a distracted teenager is driving somewhere around you, but it's not better if there are now 10 slightly less distracted teenagers around you.

            Average is really not a great measure here if we are talking about self driving cars.

      • jrs95 1863 days ago
        It may be better than humans on average, but is it safer than a sober person who is driving responsibly in a similarly priced vehicle? Anecdotally, most accidents I've seen have been caused by at least one driver doing some stupid shit. Based on my own personal experiences, I'm just not comfortable surrendering control of my vehicle. Especially not when we could be comparing apples to oranges. Safety of a self driving system should be evaluated against currently available safety technologies like blind spot detection etc. If you're including older vehicles that don't have the best possible non-self driving technology in your data, you're making a comparison that is inherently biased in favor of self driving cars.

        All that being said, I do think self driving is what we will inevitably arrive at...we just need to have a higher degree of confidence in it before it's widely used in my opinion.

      • TheSpiceIsLife 1864 days ago
        Hang on.

        Unless something has changed, the Tesla self driving tech intentionally ignores stationary objects.

        And Tesla is claiming this is a feature, not a bug.

        Am I understanding this correctly?

        I don't know anyone who isn't suicidal who intentionally ignores stationary objects while driving.

        • dahfizz 1864 days ago
          Do you have anything to back this up? That seems like a pretty big claim to make unsubstantiated. If Autopilot literally ignored stationary objects, you would see them crash much more often than they do.
      • thisisit 1864 days ago
        > It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans

        I fail to understand this line of reasoning. Are you saying that because humans tend to run into solid objects, it is okay if self driving cars do it too, as long as they do it less often than humans?

      • m463 1864 days ago
        I think this graph:

        https://upload.wikimedia.org/wikipedia/commons/a/a5/Causes_o...

        puts a lot of arguments (and young people) to rest.

        • mprev 1863 days ago
          Not at all. It says that car accidents are a problem. It does not say that the answer is to pretend your driver assistance tech is actually self-driving tech.
          • m463 1863 days ago
            I was addressing the point in the parent comment:

            > The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans.

      • prestonh 1864 days ago
        Is it, though? If every car that drove that route was a Tesla on autopilot, you'd probably have more traffic fatalities than the national total, and that doesn't even factor in every other similar traffic barrier this can occur at.
      • Haga 1863 days ago
        NN tech debt.
      • eanzenberg 1864 days ago
        >>It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too.

        HAHAHA lol wow hahaha you're something. I hope you own a Tesla then.

      • ketzo 1864 days ago
        That’s one of those thorny AI-human problems, though — if I crash 1 in 10 drives, it’s my fault, and therefore much easier to rationalize to be, well, not my fault. If my Tesla crashes 1 in 100 drives, it’s a faulty machine and an obvious death trap.
        • gamblor956 1864 days ago
          If you crash 1 in 10 drives you have no business being on the road. Most people can drive for years without even having a single, minor accident.

          If Tesla crashes 1 in 100 drives, it's absolutely a death trap since that's still an obscenely bad accident rate.

        • slg 1864 days ago
          It is a classic trolley problem. Is it worth killing a group of innocent people if it will save a larger group of innocent people? I can certainly see why people might be pushing for caution against Tesla since we don't know the size of either of those groups.
          • tim333 1864 days ago
            It could theoretically be a trolley problem. In practice Tesla are probably not that safe if you compare them driving themselves on freeways to humans driving under the same conditions.
            • btilly 1864 days ago
              I was going to point you to a study showing that autosteer reduced accidents by 40%, but Google told me that this was recently shown to be wrong. In fact it increased accidents by 60%.

              See https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t... for details and verification.

              • slg 1864 days ago
                The paragraph directly after that 60% number:

                >So does that mean that Autosteer actually makes crashes 59 percent more likely? Probably not. Those 5,714 vehicles represent only a small portion of Tesla's fleet, and there's no way to know if they're representative. And that's the point: it's reckless to try to draw conclusions from such flawed data. NHTSA should have either asked Tesla for more data or left that calculation out of its report entirely.

                I will go back to my original statement at the start of this thread. No one in these comments has the data to say definitively whether Autopilot is safer than a human driver. I am skeptical that Tesla even has enough data for that. But I am also skeptical of people who take that unknown and the occasional anecdotal data point like the above video as proof that Autopilot is inherently less safe than humans.

                • gamblor956 1864 days ago
                  Tesla turned over the best data set to the NHTSA. They had more data on Autopilot usage and chose not to turn it over. It's thus fair to assume that the data they did not provide would not have benefited the company. So if even the Tesla-provided data set shows Tesla is worse than a normal driver, then it's logical to assume that a full data set would show the same--or worse.
    • Animats 1864 days ago
      > Model Y will have Full Self-Driving capability, enabling automatic driving on city streets and highways pending regulatory approval, as well as the ability to come find you anywhere in a parking lot.

      We've heard that before, for the Model X.

    • Scoundreller 1864 days ago
      > As far back as 2016 they were claiming they had full SDC capability above human driver safety

      One crash/incident doesn’t mean it’s less safe than a human driver on average, even if it’s something a human driver might have avoided.

      We should expect different failure modes from a machine, but adopt it anyway if it avoids enough human failure modes to make up for it.

      Source: my thermostat thought 33C was an acceptable temp once, but I still didn’t switch to full-manual HVAC control.

      • tptacek 1864 days ago
        Don't Teslas fare significantly worse than cars in similar class in terms of driver fatalities?
        • slg 1864 days ago
          Are you asking about Teslas or Teslas being driven by Autopilot? Autopilot so far has a fatality rate of roughly 0.25 per 100 million miles. Just for information's sake, the overall rate in the US in 2017 was 1.16 per 100 million miles. Although there are plenty of caveats that should prevent you from comparing those numbers directly. The national number is for all cars, all drivers, all conditions, etc., while those driving a Tesla on Autopilot are generally considered to be in safer cars, driving in safer conditions, and to be safer drivers than average.
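
          As a rough, purely arithmetic comparison of those two quoted figures (with all of the caveats above still applying, so this is not an apples-to-apples safety claim):

            # The two headline rates quoted above, per 100 million miles.
            autopilot_rate = 0.25        # fatality rate claimed for Autopilot miles
            us_overall_rate_2017 = 1.16  # overall US rate, all vehicles/drivers/conditions

            naive_ratio = us_overall_rate_2017 / autopilot_rate
            print(f"naive ratio: {naive_ratio:.1f}x")  # ~4.6x before controlling for road type,
                                                       # vehicle class, weather, driver demographics, etc.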
          • FireBeyond 1864 days ago
            Tesla drivers have the luxury of turning autopilot off in suboptimal conditions. Human drivers don't.

            Like you say, this number is really valuable only for Tesla marketing.

            • heavenlyblue 1864 days ago
              Tesla would have had 0 fatalities in total if the Autopilot disengaged just before hitting the barrier.
              • SilasX 1864 days ago
                For all driving protocols P, and accidents A, P would have zero fatalities if it switched to a better protocol just before encountering the situation that led to A.
                • heavenlyblue 1864 days ago
                  That's exactly what I am trying to say. Tesla implies their tech is statistically superior to human drivers, yet it can disengage at any time.
                  • SilasX 1862 days ago
                    Ah okay. Sorry, easy to misread people on this topic.
          • eanzenberg 1864 days ago
            That stat of 1.16 includes motor vehicles, motorcycles, pedestrians, buses, bicyclists and trucks. How is this a comparison to luxury electric cars again?
        • Scoundreller 1864 days ago
          Dunno, maybe. I do wonder which cars Tesla gets lumped in with.

          There can be a difference between a $75k ICE driver and a $75k electric driver.

          The point is, one stupid decision by a machine doesn’t prove much.

          • salawat 1863 days ago
            So... two planes falling out of the sky, killing nearly 350 people, shouldn't be taken as proving anything?

            Or how about 1 stupid decision by a radiotherapy machine? Ever heard of the Therac-25?

            In each case, it was just 1 stupid decision by a machine.

            The entire reason Engineering as a practice is a thing is because when you implement the capacity for a stupid decision into a system that is then mass produced, dire consequences can result.

            I look down on any thought process that doesn't discriminate between 0 and 1.

            If the system provably worked, that decision would not have happened (0). The stupid decision happened (1), however, which means it can happen again at a poorly understood confluence of circumstances.

            To err is human, and we forgive each other every day for it.

            To err as a machine is a condemnation to the refuse bucket, repair shop, or back to the drawing board.

            To err so egregiously as a machine to cause an operator and those around them to lose their life is willful and moral negligence on the part of the system's designer. Slack is cut when good faith is demonstrated, but liability is unambiguous. The hazard would not be there if you hadn't put it there.

            • Scoundreller 1863 days ago
              And people have died and been paralyzed as a direct result of the flu vaccine. Deaths and paralysis that would not have occurred had they not received that vaccine.

              Does that mean the flu vaccine should get dumped in the refuse bin?

              Similarly, some other aircraft flying today/tomorrow has automation with an unknown bug/issue that will cause loss of life. Should we disable everything except the 6-pack and stick and rudder?

              It would have saved the lives lost to automation, but we would have more aviation deaths overall.

              • salawat 1859 days ago
                Liability is still clear, and in the specific case of medical practice, a degree of "we can't foresee everything" is implicit in that our understanding of the governing principles of the human body is incomplete.

                Aviation does not have that excuse. The 737 MAX 8 system description is enumerated from the ground up. Seeing as so much effort went into ensuring recertification didn't need to be done, it makes the failure to properly handle the MCAS implementation all the more damning.

                This wasn't some subtle bug. This was an outright terrible design choice. Anyone with any experience composing complex systems out of smaller functional building blocks should have been able to look at the outputs, look at the inputs, and realize there was the potential for catastrophic malfunction.

                As I've said elsewhere, automation should make flying a plane easier when functional. When non-functional, however, the pilot should still be able to salvage the plane. That requires clear communication of what the automation does, and what its failure modes are.

              • perl4ever 1862 days ago
                Does that mean you believe the decision to ground 737 max 8s is incorrect?
        • lordalch 1864 days ago
          I don't have the fatality stats, but their safety ratings have been perfect.

          [0] https://www.cnbc.com/2018/09/20/tesla-model-3-earns-perfect-...

    • bitL 1864 days ago
      They solved it, they just need 100x faster GPUs in cars to be reliable outside datacenters ;-)
    • pbreit 1864 days ago
      The passage says “will” which implies the future.
  • tbabb 1864 days ago
    I have been yelling about this for a long time: Tesla is not going to be able to deliver full self-driving as promised. They don't have the hardware, for one (ranging is terrible; they need stereo cameras), and second, their software strategy seems to be a dead end.

    They need sensor fusion. The system needs to make maximum use of all the information available to it: Where is the road striping? Where are the other cars going? Where are the road signs and signals? (If there's one in your path, you certainly shouldn't drive into it!) Are there camera-visible obstructions? What were the interpretations and actions of previous Tesla trips along the same route?

    In these problem cases, all data except the left and right lane striping seems to be completely ignored. There was even more information at the fatal offramp location (cross-striping over the lane separation zone), which the vehicle drove straight over. The system is not making maximum use of the information available to it, in fact it is using hardly any of it at all, and fixating on what it thinks is a single most salient piece of data.

    Sensor fusion algorithms tend to behave the opposite way-- each additional piece of data informs the interpretation of all the other data. You can have very poor-quality data, but if it is even moderately over-constrained, your state estimate can be very good in spite of it. I think it would be completely reasonable to have a neural net in the loop of a sensor fusion algorithm, with fusion constraints informing the NN's interpretation, and the NN's estimates feeding back into the fusion algorithm as uncertain data.
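
    To make "each additional piece of data informs the interpretation" concrete, here is a minimal, purely illustrative inverse-variance fusion of several hypothetical lane-offset estimates. The source names and numbers are mine, not Tesla's, and this is a sketch of the general technique rather than anyone's actual pipeline:

      import numpy as np

      def fuse(estimates, variances):
          """Combine independent Gaussian estimates by inverse-variance weighting."""
          w = 1.0 / np.asarray(variances, dtype=float)
          fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
          return fused, 1.0 / np.sum(w)

      # Hypothetical estimates of lateral offset from lane center, in metres:
      sources = {
          "lane striping":        (0.10, 0.05),  # (estimate, variance)
          "lead-vehicle track":   (0.20, 0.20),
          "map / previous trips": (0.15, 0.10),
          "cross-striping cue":   (1.50, 0.50),  # disagrees: maybe this isn't a lane at all
      }

      est, var = fuse([v[0] for v in sources.values()],
                      [v[1] for v in sources.values()])
      print(f"fused offset: {est:.2f} m (variance {var:.3f})")
      # A large residual on any single cue is itself a signal to distrust the "follow the
      # striping" hypothesis, which is exactly what fixating on one input throws away.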

    IMO Tesla will do at least one of:

    * Very expensively retract their promise of full self driving for delivered vehicles

    * Completely overhaul/redesign their driving software and start again nearly from scratch

    * Get into a regulatory/legal tangle with NHTSA/the courts/DOJ over all the dead people their system is making.

    • eclipxe 1864 days ago
      They do have stereo cameras and sensor fusion, and they detect more than just the lines on the road. Here is what the camera sees: https://www.youtube.com/watch?v=rACZACXgreQ
      • tbabb 1864 days ago
        What in that video suggests sensor and/or stereo fusion to you?

        I notice that the temporal coherence is pretty bad -- pedestrians pop out of recognition when they go behind trees; lane/exit boundaries wiggle all over the place and occasionally frame-pop into different configurations. A Kalman filter, for example, is a state estimator which maintains temporal coherence, and makes heavy use of previous estimates/sensor inference when computing the most updated estimate. It doesn't look to me like that kind of strategy is being used to maintain the vehicle's world model. IMO a good estimator wouldn't treat "a pedestrian popping out of existence" as the most likely estimate for any circumstance, let alone one where they were clearly present in the previous 50 frames. I don't doubt they're using KF on the vehicle's inertial movement, but based on the failures and this video, it sure doesn't look like it's using a fusion technique for the world model.
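
        For what it's worth, here's a minimal sketch of the kind of estimator I mean: a 1-D constant-velocity Kalman filter that simply coasts on its prediction when detections drop out (pedestrian behind a tree) instead of deleting the track. Purely illustrative; it makes no claim about what Tesla's code actually does:

          import numpy as np

          dt = 0.1
          F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
          H = np.array([[1.0, 0.0]])             # we only measure position
          Q = np.diag([0.01, 0.1])               # process noise
          R = np.array([[0.5]])                  # measurement noise

          x = np.array([[0.0], [1.0]])           # start at 0 m, moving 1 m/s
          P = np.eye(2)

          measurements = [0.1, 0.2, 0.3, None, None, None, 0.75]  # None = occluded frames

          for z in measurements:
              x = F @ x                          # predict
              P = F @ P @ F.T + Q
              if z is not None:                  # update only when we actually see something
                  y = np.array([[z]]) - H @ x
                  S = H @ P @ H.T + R
                  K = P @ H.T @ np.linalg.inv(S)
                  x = x + K @ y
                  P = (np.eye(2) - K @ H) @ P
              print(f"estimated position: {x[0, 0]:.2f} m, uncertainty: {P[0, 0]:.2f}")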

        There are left and right-looking cameras, but the FOV overlap between them is not very substantial, and there can't be stereopsis where there is no overlap. Per the Tesla website, there are three forward-looking cameras, and they each have a different FOV. The parallax baseline between them is only a few centimeters, too, so the depth sensitivity isn't going to be spectacular. It's certainly possible that there could be some narrow-baseline stereo fusion, but it could only really happen inside the narrowest field of view, where the coverage overlaps with more than one camera. That's the circumstance where having a narrow baseline would hurt the most. Based on that it doesn't really seem like the system is well set-up for stereopsis; if it's there it seems like an afterthought.
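
        And a quick back-of-envelope on why a few-centimetre baseline hurts, using assumed (not Tesla-published) camera parameters: depth from disparity is Z = f*B/d, so a disparity error of delta_d turns into a depth error of roughly Z^2 * delta_d / (f * B):

          f_px = 1000.0      # assumed focal length in pixels
          baseline_m = 0.04  # assumed ~4 cm baseline between forward cameras
          delta_d_px = 0.5   # assumed half-pixel disparity matching error

          for depth_m in (10, 30, 60):
              depth_err = depth_m ** 2 * delta_d_px / (f_px * baseline_m)
              print(f"at {depth_m} m range: depth error ~ {depth_err:.1f} m")
          # ~1.3 m at 10 m, ~11 m at 30 m, ~45 m at 60 m -- not useful for highway ranging.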

        I could certainly be wrong, as I don't have access to the code. Are you going by some other secondary source/information?

        • davrosthedalek 1863 days ago
          To be fair, it could be that this is what the camera segmentation does before it is combined with other sensors, and before it is used to update the world model (which then has temporal information).
        • ben174 1864 days ago
          Certainly two cameras with different FOVs could be combined to give the same depth data that a stereo camera setup could give, right?
          • dmitrygr 1864 days ago
            Not if they are on the same axis; then anything along that axis cannot have its depth determined.
            • ec109685 1864 days ago
              It can through other signals. You can drive successfully with one eye.
              • darkpuma 1863 days ago
                I've got one eye at 20-20 vision, and the second legally blind without correction. My drivers license has a little note that it's not legal for me to drive without my glasses, which I never wear under any other circumstances.

                So it's not so clear cut as you make it out to be.

                (And you know what? Even if it were legal for me to drive without those glasses, I'd still drive with them. Because ranging is important!)

              • tbabb 1864 days ago
                That's not stereopsis. And it's terribly inaccurate.
      • gamblor956 1864 days ago
        Stereo cameras overlap. Tesla's cameras are intended for 360-degree coverage, not overlapping vision.
    • ec109685 1864 days ago
      I think you are confusing what the software is capable of today with hardware limitations in the sensors deployed in the cars.
    • Dumblydorr 1864 days ago
      Where is your evidence for these claims?
    • tim333 1864 days ago
      Musk is more optimistic than you - says late 2020 for sleep while the car drives https://www.youtube.com/watch?v=Y8dEYm8hzLo&t=10m25s

      Also quite interestingly he says there will be a big jump forward in quality when they switch to their own computing hardware (18m40 or so)

      • x38iq84n 1864 days ago
        > late 2020 for sleep while the car drives

        Musk is often overly optimistic and he keeps underestimating the problem at hand. I call BS on this; it won't be ready in 5 or even 10 years. And then there is still regulatory approval.

      • madeofpalk 1864 days ago
        > Musk is more optimistic than you

        Indeed.

        What Musk says and what Tesla delivers are two completely separate things.

        • goshx 1864 days ago
          If my anecdote is useful for anything, my car drives about 90% of my 25mi commute today on its own, including leaving one highway and going over a ramp to then merge into another highway. Needless to say that I’m extremely happy with what has been delivered so far.
          • tbabb 1864 days ago
            I think a careful interpretation shows how far off that is. A self-driving system ought not be considered reliable until it can drive O(100 million miles) without disconnecting once, in order to match human reliability (that's about the distance between fatal accidents currently). A guaranteed disconnect within 25 miles is many zeros of missing reliability.

            A Disney park engineer once relayed to me the philosophy for designing safe attractions in the parks: "If there's a one in a million chance of it happening, it'll happen multiple times per year," given attendance numbers which are in the millions.

            A self-driving car needs to handle ordinary commute circumstances with 100.0% reliability, and one-in-a-million circumstances (which statistically you will have never personally encountered) with reliability literally above 99%.
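
            In rough numbers, using the figures from this thread (the ~100-million-mile fatal-accident spacing and the "90% of a 25 mile commute" anecdote above, i.e. on the order of one disconnect per 25 miles):

              miles_between_human_fatal_accidents = 100e6  # order of magnitude cited above
              miles_between_disconnects = 25               # the commute anecdote above

              gap = miles_between_human_fatal_accidents / miles_between_disconnects
              print(f"reliability gap: about {gap:.0e}x")  # ~4e6 -- millions of times, i.e. "many zeros"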

            • ec109685 1864 days ago
              On the flip side, with a million cars on the road, that is a lot of edge cases they see before they remove the steering wheel of their cars.
            • tim333 1863 days ago
              I don't know many humans that drive 100 million miles without disconnecting. The disconnect thing is more analogous to pulling over to check the map for which 5000 miles might be ok.
              • darkpuma 1863 days ago
                If the computer needs to pull over and check itself into a motel and nap for a few hours, I'm not going to object. But right now the computer "takes a break" by simply disengaging in the middle of the road. Presumably when this happens, the computer isn't even confident in its ability to safely park the car on the shoulder, as a human driver would attempt in the event of an emergency (blown tire) or undrivable conditions (which for a human might be an intense blizzard or torrential downpour.)
            • goshx 1864 days ago
              Sure, and we are not there yet and nobody has claimed that we are.
              • eric-hu 1864 days ago
                This is the comment thread beginning with "Musk claims 2020 for sleep while the car drives". I think the point of this discussion is that Musk has been implying that.
                • goshx 1864 days ago
                  I’m pretty sure it is still 2019.
                  • tbabb 1863 days ago
                    And you're expecting reliability to improve by a factor of ten million before 2020?
                    • goshx 1863 days ago
                      I am expecting significant improvements as more and more vehicles start providing data for the entire system to learn and I don’t care at all if it takes longer than that. I don’t want the technology to fail just because Musk is overly optimistic.
          • eanzenberg 1864 days ago
            Hope your car doesn't drive you into a wall then.
            • goshx 1864 days ago
              I’m sure you care.
          • carlivar 1864 days ago
            And if your trip on autopilot is 99.9% reliable, how would you feel about that stat?

            Do you find yourself paying attention to the road context the entire time? I'm curious if your mental acuity has dropped over time as the car has driven you.

            • goshx 1864 days ago
              You start trusting it more as you use it and learn its flaws, so you can anticipate when it may do something stupid. I pay attention the entire time, but it is a much more relaxed experience. I’d still pay attention the entire time even if it were supposedly 100%.

              It is very sad that a lot of people in this forum are hoping this never works. It is one of the most exciting advancements in technology that can benefit us all, but people here seem to be more interested in seeing Musk and Tesla fail rather than hoping they achieve this and bring the whole industry forward one more time, affecting millions of lives.

              • mprev 1863 days ago
                That is not what is happening here. People are rightly skeptical of the claims. As Theranos showed, you can't "fake it till you make it" when people's lives are involved.
              • tbabb 1863 days ago
                > It is very sad that a lot of people in this forum are hoping this never works.

                Top level OP here. I am personally rooting for self-driving to succeed and catch on. I just don't think Tesla's current strategy is likely to work, and their cavalier, unjustified overconfidence is either going to sink them or kill people, or both, neither of which is good for the future of self-driving.

                • goshx 1863 days ago
                  We’ll see how this post will age.
      • FireBeyond 1864 days ago
        He's also said 2016. And 2018.
  • SamuelAdams 1864 days ago
    Additional context: on March 23, 2018, a Tesla Model X, while using autopilot, drove into a barrier and killed the driver. This was fixed in a software update, but the issue seems to have resurfaced.

    NTSB analysis of the March 23 2018 collision: https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

    Tesla's statement of the March 23 2018 collision: https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...

    • MagicPropmaker 1864 days ago
      Well, it wasn't really "fixed" -- the driver is still dead.
      • craftyguy 1864 days ago
        Sure, if you construe the meaning of 'fixed' to mean 'roll back all events and time', which I don't think anyone expects. They probably fixed the software bug.
        • falcolas 1864 days ago
          I think that it's important to note that "we'll fix bugs via live patches" is a deadly decision when it comes to self driving software.

          Just discussing the incident in terms of a "software bug" does a disservice to the severity of the issue.

          • gwbas1c 1864 days ago
            Remember, autopilot and autosteer are not self-driving. Tesla is very explicit that the driver must remain alert, and supervise, at all times.

            That being said, drowsy driving is a thing, and it's very easy to fall asleep behind the wheel. The car really needs a better strategy to handle this situation.

            • netsharc 1863 days ago
              Tesla's legal department is very explicit, the marketing department? Not so much...
          • craftyguy 1864 days ago
            No, it does not. While I agree that pushing patches to users with no testing beforehand is horrible, 1) I doubt Tesla does this, and 2) the person died before they fixed it, which GP seemed to be confused about.
  • steelframe 1864 days ago
    The day I took delivery of my 2016 Model X with AP 1.0, Tesla announced AP 2.0. A friend of mine immediately ordered a Model X with AP 2.0 and rubbed it in my face.

    For the entire next year, my AP 1.0 (which is non-Tesla technology -- Mobileye rocks) had no trouble doing adaptive cruise control and lane assist. Meanwhile his AP 2.0 would brake suddenly and swerve all over the place. It took a full year of OTA updates before his AP 2.0 was finally on-par with the functionality that I had the whole time. Of course, by then Tesla pulled a "we're sorry, but the princess is in another castle" and came out with AP 2.5.

    Now this kind of stuff doesn't matter to me. I got tired of that company's shit and have pulled out of the Teslasphere entirely. I'm now driving a non-Tesla EV, and I'll never look back. I'm also letting my government representatives know that they should support a common EV charging standard and keep Tesla so-called "self-driving" shit off public roads.

  • sschueller 1864 days ago
    Eventually we need NHTSA to certify updates before they are pushed out over the air, similar to what the FAA does.

    These OTA updates are not ok for large machinery and endanger not only the Tesla driver but everyone else on the road.

    • quasse 1864 days ago
      Yeah, it's a pretty crazy world where a game developer pushing an update to a game on Xbox live seems to have a harsher approval process to go through than an automaker pushing OTA updates that literally drive cars.
    • x38iq84n 1864 days ago
      Indeed. I have never understood the cheering for OTA updates of something that can literally kill you. Were they limited to infotainment and the useless easter eggs I would not care, but AP OTA updates are downright scary, especially when pushed by the vendor instead of being initiated by an informed user.
      • DarmokJalad1701 1864 days ago
        It is initiated by the user ... the update only happens after you say it's okay to do so.
        • javagram 1864 days ago
          According to the posters on Reddit, this update’s release notes were about “Dog Mode” and “Sentry Mode”.

          Nothing warning that behavior of autopilot might change.

      • 0815test 1864 days ago
        The AP can only kill you if you're distracted/asleep at the wheel and not paying attention. You're always in charge of driving the car appropriately, the "AP" can only provide tentative assistance of a very rough sort. It will try to guess at the control inputs you might want to provide (which does make things a bit more comfortable in the best case) but the actual control always rests with the human driver.
        • qball 1864 days ago
          > You're always in charge of driving the car appropriately, the "AP" can only provide tentative assistance of a very rough sort.

          It can "tentatively assist" you in killing yourself and potentially others, but we already know that from the disaster in Mountain View, where this exact same thing happened.

          The problem with behavior changes in autopilot is that the car needs to react the same way every time.

          If you're going around a corner on a road which you've driven on for years without issue, and the car all of a sudden does something unexpected, the panic and over-correction reaction that everyone instinctively has tends to cause more accidents than just holding your desired course does.

          It doesn't matter if it's a skid, or your Tesla trying to accelerate you into a wall at 70 miles an hour. If the car does something the driver is not expecting it to do for any reason (from fatally defective software to ice on the ground) the driver performs sub-optimally as a result.

          And since this is Tesla we're talking about (who bakes in features like your car not starting before you upgrade its software, which is another "feature" about these cars that's going to get someone killed sooner or later), I'm willing to bet that the car doesn't warn you that this might occur. It just works fine for 6 months, then an update gets pushed, the car tries to put you into a wall, and you cause an accident because you're trying to stop the car from killing you and didn't check your blind spot before that.

          That is a UI and UX failure of the highest magnitude, and it is completely unacceptable, no matter how well it otherwise tends to work.

        • lisper 1864 days ago
          > The AP can only kill you if you're distracted/asleep at the wheel and not paying attention.

          That's not true. There are many situations where the AP could make a sudden turn and kill you long before you had time to react even if you were paying attention. And in fact, the situation in the video seems to be not so far away from that.

          • 0815test 1864 days ago
            Make a sudden turn? It's just lane following, so I really doubt that. And if your hands are on the wheel (as they should be) you'll very quickly become aware that the car is not staying in the lane as it's supposed to, and be in the best position to deliver the right input. Even looking at this video, the car didn't seem to "turn" suddenly and unexpectedly; if anything, it failed to physically take a turn even though it should have. Which is quite bad, but it's not startling to the point where a driver would be in trouble.
            • lisper 1864 days ago
              > Make a sudden turn? It's just lane following

              You're begging the question [1]. The autopilot has full control authority over the steering wheel, so if it fails, nothing constrains it from making a sudden turn. If it is "just lane following" then it hasn't failed (yet).

              [1] https://en.wikipedia.org/wiki/Begging_the_question

              • Karunamon 1864 days ago
                Your hands. If you're just resting them on the wheel, that's not enough to satisfy the attention monitor; it actually has to measure some physical resistance. If you resist the automatic inputs too much, the AP cuts out.

                You can test this yourself in a tesla by engaging cruise control, then hitting a turn signal. This would normally initiate an automatic lane change - but keep your hands tightly on the wheel as if you wanted to stay in the lane you're in. The wheel will attempt to turn, fail as you're preventing it from turning - and the AP disables.

                • lisper 1864 days ago
                  > Your hands.

                  Sure, if you react fast enough.

                  Let's not lose the plot here. The original claim was:

                  "The AP can only kill you if you're distracted/asleep at the wheel and not paying attention."

                  (Emphasis added.)

                  And that's not true. It can kill you, quite simply, by producing the wrong control input in a situation where the available recovery time is less than your reaction time.

                  If you doubt this, then I challenge you to drive a car where the autopilot is under my control. (It will have to be remote control because no fucking way am I willing to be in the car with you when we do this experiment.)

                  • Karunamon 1863 days ago
                    What I'm trying to imply is that "not paying attention" is exactly what happened here.

                    The "attention checking" has a delay of a few seconds on it before it'll start warning you to grip the wheel. If your hands were on the wheel, and you were paying attention, there's no reaction time, since the erroneous control input would be overridden by your hands keeping you in your lane.

                    Put simply, if your grip on the wheel was loose enough to where the computer-generated move could physically move the wheel, you weren't in control of the vehicle and would be generating warnings.

                    • lisper 1862 days ago
                      No, it wouldn't. Having your hands on the wheel doesn't ensure that you're paying attention. And if you need to be paying attention 100% of the time anyway, what's the point of having an autopilot?
                      • Karunamon 1862 days ago
                        But having your hands off the wheel absolutely does mean you aren't paying attention. Which is why Tesla and most other self-driving systems I'm aware of check for it. It's a negative signal, not a positive one.

                        >And if you need to be paying attention 100% of the time anyway, what's the point of having an autopilot?

                        The same reason cruise control is a thing on every modern car. It cuts down on fatigue, which in turn should improve safety and comfort. You're still required to be in control of your speed, but the vehicle manages keeping you at the set speed.

                  • Domenic_S 1863 days ago
                    This seems like a meaningless argument - we use fly-by-wire systems all the time and your point is true for most of them. Should we be suspicious of electronic throttle because it could theoretically hit the gas at a crosswalk when you tried to stop?
            • MertsA 1864 days ago
              Here's autopilot swerving into oncoming traffic.

              https://www.youtube.com/watch?v=ZBaolsFyD9I

              Here's autopilot following a lane straight into a concrete barrier.

              https://www.youtube.com/watch?v=-2ml6sjk_8c

              You can't assume that autopilot won't screw up lane following and swerve into a large obstacle. In those situations it's not as simple as making sure the lane ahead of you is clear, you might only have a split second warning between Autopilot going into "casual murder mode" and a collision.

            • madmax96 1864 days ago
              Consider how widely contemporary machine learning models are affected by adversarial examples [1]. I don't know the specific approach used by Tesla or their release process, but I would not be surprised at all if their software has similar shortcomings.

              Granted, a ton of active research is going on trying to prevent these sorts of problems, but it definitely isn't a solved problem.

              Basically, ask yourself this: how robust is this software? How are you measuring that?

              [1] https://arxiv.org/pdf/1712.07107.pdf

        • craftyguy 1864 days ago
          It shouldn't be marketed as 'Autopilot' then. Less informed people will expect it to do more than it can. It's (IMHO, criminally) misleading to market something as being capable of doing something it cannot do.
        • saagarjha 1864 days ago
          The issue is that the way the feature is designed and marketed makes it very easy for the driver to be distracted and not paying attention.
    • alistairSH 1864 days ago
      OTA is fine. It's unregulated, unannounced OTA that's problematic.

      I'd love for my VW to get updates OTA without taking it to the dealer. But, I don't want to receive those updates without knowledge that the update has been tested sufficiently (and given I don't trust the vendor, I'd like NTSB or similar government body to do this on my behalf).

    • Mizza 1864 days ago
      Not eventually - now. This is putting everybody in danger, not just the owners of the cars.
    • dragonwriter 1864 days ago
      > Eventually we need the NTSB to certify updates before they are pushed out over the air. Similar to what the FAA does.

      You mean, similar to what the FAA might do if it didn't allow manufacturer self-certification instead, right?

      Also, wouldn't NHTSA, which does safety regulation and standards for autos, be the natural agency, rather than NTSB, which does accident investigation?

    • smileysteve 1864 days ago
      > Similar to what the FAA does.

      The timing of this is precarious (given the recent 737 Max allegations)

    • kitsunesoba 1864 days ago
      I agree, but is there a way that could happen without slowing the process to a crawl? Depending on what’s involved it could easily push the gap between updates from months to years.
      • CharlesColeman 1864 days ago
        > I agree, but is there a way that could happen without slowing the process to a crawl?

        I don't see why a more rigorous release process would slow progress down at all. All the iterations that lead to progress should be done on test vehicles, not customer vehicles.

        "Move fast and break things" is a development model that should only be applied to low-importance, low-risk systems. Most software development work occurs on such systems, and I think that narrows the perspective of the software development community as a whole.

      • toomuchtodo 1864 days ago
        This is a feature, not a bug, and should be expected in life critical systems. Would you want Boeing to push updates out as frequently as Tesla does with the same sparse release notes Tesla provides (“bug fixes”) when safety system functionality is modified?

        Disclaimer: I own a Model S

        • anth_anm 1864 days ago
          Imagine if Boeing got away with just pushing a software update to the 737 MAX8 and saying "it's fixed now".
          • CamperBob2 1864 days ago
            Well, it might have prevented the second crash if they had treated the matter with a bit more urgency. Depends on whether the two incidents really had a common cause, which is looking like the case.

            Of course it's also looking like they should have grounded the fleet after the first crash, given the history of the aircraft prior to its last flight.

        • CamperBob2 1864 days ago
          This argument would be a bit different if we were losing 30,000 people a year in airliner crashes, instead of approximately zero.
          • toomuchtodo 1864 days ago
            You can still save lives with automotive safety systems and not have a dumpster fire of an SDLC process. There is a spectrum between one firmware update a year and "f* it we'll do it live".
            • CamperBob2 1864 days ago
              I think a big step forward would be addressing something you mentioned earlier, with respect to documenting what changed in a given update. "Bug fixes" doesn't belong in a Spotify changelog, much less one for a product made by Boeing or Tesla.

              It almost seems like a right-to-repair issue, where manufacturers are going out of their way to avoid documenting how their products actually work for fear of losing control over them or of disclosing details that a competitor or patent holder might find useful.

              There definitely needs to be a strong regulatory response to that kind of behavior on the manufacturer's part when it comes to safety-critical updates, or even updates that might conceivably impact safety of life. Which, in the airplane business, is basically all of them.

              • toomuchtodo 1864 days ago
                You make excellent points, and I think the unfortunate answer is regulation will be required.
      • notfromhere 1864 days ago
        do you want faster updates or a machine that doesn't inadvertently kill you? because for life-critical systems you really can't do both
      • v_lisivka 1864 days ago
        Autopilot can automatically record all incidents and then upload them to Tesla, with obfuscation and with the user's permission, to improve the autopilot software. A tight feedback loop is much better for AI, IMHO.
    • Traster 1863 days ago
      Having the NTSB certify updates isn't going to increase safety. The accepted approach to safety as set out in ISO 26262 largely focuses on the processes via which the software and hardware are designed, created and modified. The reason you wouldn't get regressions if you were interested in FuSa is that you'd have a process in place to ensure that software can't be distributed without being tested, and a process to ensure that bugs are included in the test suite.

      It's quite clear in this case that Tesla doesn't have an organisational structure that fulfills the requirements for functional safety.

      • sangnoir 1863 days ago
        > Having the NTSB certify updates isn't going to increase safety.

        It will if they run it through regression tests that Tesla doesn't seem to have the discipline to run.

  • erobbins 1864 days ago
    I think Tesla will eventually have a catastrophic accident and be sued and/or criminally prosecuted into oblivion. I feel sorry for the people who are going to have to die for this to happen.

    This trope that humans are bad drivers is, in general, crap. Humans are very good drivers. The US has 7.3 deaths per billion km driven. This means if you drive 50km a day, every day, you are (essentially) guaranteed to die... after 7500 YEARS. You have less than a 1 in 10 million chance of dying on any given trip you take. That is NOT risky, and is NOT dangerous.
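
    As a quick sanity check, here is the arithmetic behind those figures (the ~13 km typical trip length in the last line is my own assumption):

      deaths_per_km = 7.3e-9                 # 7.3 deaths per billion km
      km_per_day = 50

      years_per_expected_death = 1 / (deaths_per_km * km_per_day * 365)
      print(round(years_per_expected_death))        # ~7500 years

      trip_km = 13                                  # assumed typical trip length
      print(round(1 / (deaths_per_km * trip_km)))   # ~1 in 10.5 million per trip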

    • derefr 1864 days ago
      > The US has 7.3 deaths per billion km driven.

      This is a meaningless measurement. Segment those km by where they occur. Most of them are either:

      1. "cruise-control compatible" kilometers—e.g. freeway straightaways—where you're surrounded by cars but they're all going the same speed in a straight line, and all you need to do to be safe is to go the same speed in a straight line as well.

      2. "closed-course" kilometers—e.g. most rural roads, and most suburban roads any time other than rush-hour—where the road may curve and have intersections and such, but at any given time of day, it's probable that there aren't any other cars (or even pedestrians) on the road for you to collide with, no matter how bad your driving is. (Think "roads you'd have a teenager practice driving on." These roads are good for practice because there are effectively no accidents to get into.)

      3. (a smaller segment, but still relevant because of the number of freight kilometers driven here:) "empty" kilometers—this has all the properties of segment 2, but also, the road is at grade, and there's nothing abutting the road (i.e. the road isn't a street), so even if you veer off the road, you're unlikely to hit anything. (Examples: the Nevada desert; Saskatchewan; most farmland.)

      People point out that the safety-per-km stats for airplanes are a nonsense measurement, because what little crashing that airplanes do tends to mostly occur during the first and last 50km of the planned flight-path—so short flights are just as dangerous as long flights.

      Well, the same goes for car accidents. Subtract all the "trivial" driving that humans and AIs can both do by just doing... nothing much at all, with no obstacles/hazards to evaluate, let alone react to. The kilometers that are left (freeway merging; city driving; suburban streets during rush-hour; parking in parking lots) are a lot more crash-y, and are the place where both human and AI competence is questionable.

    • kirse 1864 days ago
      I feel sorry for the people who are going to have to die for this to happen.

      I try to explain this to friends who are far too optimistic on self-crashing car technology. Self-driving cars (SDC) ultimately trade one class of problems that result in death (human-attention deficit) for another class of growing issues (sensor malfunctions/incapabilities, software defects).

      Ultimately SDC deaths end up as bugs/features on some random devteam's backlog, and I have no desire to have a JIRA ticket named in my honor.

      In my opinion, by the time all the money and effort is spent making SDC's capable of successfully driving from point-A to point-B in the near infinite possible conditions they could encounter, it would have been 50x cheaper to simply build a fully modernized high-speed rail network over existing highways and roads.

      • tzs 1864 days ago
        > Self-driving cars (SDC) ultimately trade one class of problems that result in death (human-attention deficit) for another class of growing issues (sensor malfunctions/incapabilities, software defects).

        One big difference between those classes is what you can do after an accident. With both, you can investigate why it happened and then make recommended changes to prevent it or reduce its chances in the future.

        But with driver attention problems, such as drunk driving or driving while texting, it is easy for those recommendations to be ignored. We've been telling people not to drive drunk, not to text while driving, and so on for probably a century in the case of drunk driving, and for as long as texting has existed for texting...and people still do them frequently.

        With a software defect, it is a lot easier to make sure that the fix actually gets deployed. Make it part of the annual registration renewal for cars that all safety updates have been applied to their self-driving systems.

      • 1123581321 1864 days ago
        You are saying that the cost to develop successful self-driving cars is 1-50 quadrillion dollars using an optimistic estimate of rail costs. That does not seem reasonable. Perhaps 200 billion across all companies have been invested in self-driving cars so far (I.e., Waymo is just a fraction of that.)
        • kirse 1864 days ago
          Agree, SWAG was heavily W. Updated. I still think we're 50 years off from near-flawless SDC's though, assuming current LOE.
      • mr_toad 1864 days ago
        > it would have been 50x cheaper to simply build a fully modernized high-speed rail network

        Where do you get that number from?

      • albntomat0 1864 days ago
        > Ultimately SDC deaths end up as bugs/features on some random devteam's backlog, and I have no desire to have a JIRA ticket named in my honor.

        Instead you can be the victim in a vehicular manslaughter case due to DUI/texting/etc.

        As stated elsewhere on this thread, self-driving needs to be measurably better than a human. My state displays how many people have been killed thus far this year on the automatic traffic signs (used for amber alert, traffic info, etc as well). A 20% reduction in the 3000+ folks killed in 2018 would mean a whole lot for those saved.

    • MSM 1864 days ago
      You're looking at it strictly from a safety perspective which is a small part of the whole picture. If we looked at transportation from a safety perspective throughout history, we'd still be walking everywhere. What's the risk of crashing and dying while walking? Zero; it's fully optimized for safety. We rode horses and now drive cars to save time. Driving to work is unquestionably a better experience than riding a horse into work every day, but it could be better- we could be sleeping on our way into the office, or starting our work day as we leave our driveway, or eating a good breakfast and finishing our makeup (while not endangering everyone else).

      Having commuting time available as what amounts to "free time" is an insane boost to daily life. A quick google says Americans spend 12.2 days per year in their cars. If 300 million people can save 12 days' time per year, you're freeing up ten million years of time every year.

      I'm not advocating for throwing everyone in self driving cars untested and who cares how many people die, but if, on the journey to saving many millennia of time every year, a person is killed, why should the company be sued into oblivion? If we're going to sue everyone into oblivion whenever anything doesn't go quite right, why would any company ever try to take on difficult problems?

      • perl4ever 1862 days ago
        "What's the risk of crashing and dying while walking? Zero; it's fully optimized for safety."

        Huh? Have you never seen or heard of anyone falling, hitting their head, and causing a concussion and/or death?

    • Joe-Z 1864 days ago
      Cars have gotten so safe though. There are all these measures put in place to ensure your survival _in case of a crash_. Wouldn't it be more relevant to calculate the chances of getting injured / ending up in a crash at all?

      I mean, just being caught up in a traffic accident with no bodily harm done to you can be a traumatic event.

      • erobbins 1864 days ago
        You're not wrong. There are plenty of accidents caused by inattention and poor driving that aren't fatal, but that do result in injuries... a quick search didn't give me much information but crashes with injuries appear to be about 100x more common than fatal crashes. This doesn't take into account how serious the injuries are, however.

        So if we assume that serious injuries are 10x more common than fatalities, that raises the odds to one incident per 750 years, or 1 incident per 75 years of any accident at all. You're still talking about things that happen to people once or twice in their entire lives. Of course there will be outliers (I've been in 5 accidents myself, only 1 with injuries) but the odds of being in a crash don't seem to be worth the risk of trying to automate mousetraps... we should concentrate on replacing car-based transportation instead of trying to use magical black boxes to make it theoretically safer.

        • FireBeyond 1864 days ago
          As an aside, as of 2019, you are more likely to die of an opiate overdose in the US than as a result of a car accident.
      • mdorazio 1864 days ago
        Yes, this is a very important distinction. Humans have gotten pretty good at not getting killed in car accidents, mostly due to increasing safety engineering and assistance features in cars. Humans are not very good at avoiding collisions entirely. In 2016 there were 6.3 million motor vehicle accidents reported to police. And NHTSA estimates about 10 million accidents go unreported each year (mostly minor fender benders and people damaging their own vehicles on fixed objects). To me, that's pretty clear evidence that humans are poor drivers.
        • gamblor956 1864 days ago
          Humans are not very good at avoiding collisions entirely.

          So far, neither are Teslas...And the point of this reddit thread is that any progress that Tesla does make on this front can be instantly reverted in a future update.

    • mandeepj 1864 days ago
      > This trope that humans are bad drivers

      Humans are bad in general. At least self-driving cars will not be texting, using their phone while driving, or getting involved in road rage. This list is really long.

      "Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled. More than half of all road traffic deaths occur among young adults ages 15-44"

      https://www.asirt.org/safe-travel/road-safety-facts/

    • deevolution 1864 days ago
      Certainly we can do better than this tho? Your bar for "very good driver" is pretty low. I'm sure everyone has at least one acquaintance they know who has died in a car crash. I can count 3 who have died from car crashes and 3 others who have been seriously injured. Humans aren't going to get any better at driving. Something has got to change.
    • ec109685 1864 days ago
      30k deaths a year in US, 50 years of driving. That is 1.5M deaths over your lifetime. Let’s say there are 690M drivers throughout your lifetime. You have a quarter of a percent chance of driving being your cause of death.

      Also, 2.45% of deaths worldwide are from road accidents: https://ourworldindata.org/grapher/share-of-deaths-by-cause-...

    • m463 1864 days ago
      yet the leading cause of death in youths is automobile accidents:

      https://upload.wikimedia.org/wikipedia/commons/a/a5/Causes_o...

      • perl4ever 1862 days ago
        Well, we don't need self driving cars to fix that, just a way to increase the fatalities from cancer, heart disease, and Alzheimers in the young.
    • matz1 1864 days ago
      I hope not. Yes, there will be death, nothing is perfect, but I will just consider it an acceptable loss.
  • paul7986 1864 days ago
    Scary... software developers at these robotic car companies and their mistakes/bugs aren't just going to bring down a business application (lose money) but kill their customers and innocent drivers.

    Progress to where it’s safer is going to be a killer and we the drivers on the road are unwilling guinea pigs to billionaires’ dreams/goals.

    • breakyerself 1864 days ago
      Willing guinea pig here. I'm going to do my best not to die, but I'm excited about this tech and willing to deal with the drawbacks of being an early adopter.
      • lambda_lover 1864 days ago
        It's great that you're willing to die for Tesla's profit, but you realize that autonomous vehicle-induced crashes affect everyone on the road? Even someone who doesn't consent to Tesla's TOS is still sharing the road with potentially dangerous software.

        Granted, individual drivers are awful enough that it probably doesn't make that huge a difference in danger. But would you still feel the same way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?

        • jstanley 1864 days ago
          I don't think he said he's willing to die. He said he's willing to test it, and presumably he'll continue concentrating and will manually take over if the car does something dangerous.
          • FireBeyond 1864 days ago
            He came fairly close, honestly. "I'm going to do my best not to die, but I'm excited about this tech" is recognizing, and accepting that death is a possibility as a result of Tesla's self driving process.
            • breakyerself 1864 days ago
              There's risk of dying every time I get behind the wheel of any car. There's no benefit to pretending real risk doesn't exist in any given scenario. I'm confident that I can be attentive and cautious enough using this tech to keep the risk similar to what it would be just driving normally.
          • atomicUpdate 1864 days ago
            That's assuming he's always able to intervene before it kills him. The argument is that he may not always be able to, or may not be able to prevent AP from behaving erratically enough to kill someone else (even while he saves his own life).
        • Dumblydorr 1864 days ago
          Everyone puts their lives in the hands of other machines daily, for instance brakes or automated elevators or medical devices; driving is an inherently dangerous activity. Computer or human won't change that. If we delay computers getting better than humans, we will just have status quo which is 30k or more road deaths yearly in the US.
          • txcwpalpha 1864 days ago
            Elevators and medical devices go through extensive testing and certification processes before they ever go near being put into service. And when they are updated, they again go through extensive testing and certification.

            Teslas, on the other hand, apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers, and introducing bugs like the OP that are liable to get someone killed.

            They are not the same, and comparing them only highlights the issues that Tesla has around their OTA update practice.

            • paul7986 1864 days ago
              Insane. Commits like this into production need to be government regulated and scrutinized.
            • DarmokJalad1701 1864 days ago
              > apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers

              OTA updates only happen after confirmed by the users. Where did you hear that it happens without user intervention?

              • javagram 1864 days ago
                According to the linked Reddit poster “ Tesla's only release notes for this release were DOG MODE and SENTRY MODE. They don't tell you there is a massive change to AP and to reset your expectations.“
              • FireBeyond 1864 days ago
                What part of this behavior do you think is covered by:

                * Improved DOG MODE

                * Improved SENTRY MODE

                Which were the release notes for the update?

              • Bluestrike2 1864 days ago
                You're right. The owner/driver has to approve the update, if I recall. But putting responsibility to review and understand release notes on the car's owner seems kind of absurd. And that's assuming that you have accurate and descriptive release notes, which was most certainly not the case for the described instance. In any case, clicking "update" is such a rote behavior for users on computers, phones, and now their Teslas, I'd argue that it's effectively no different than an update happening without notice or user intervention.

                There's only so much you can learn from even the best release notes, period. The ever-so-common "bug fixes," for example, is so broad that it effectively means nothing at all. At best, it's telling the end user "this little update just changes some stuff hidden under the hood. You won't notice anything, so don't give it any thought."

                • perl4ever 1862 days ago
                  Disclosure seems like a red herring, if there is no real choice other than to accept the update. If I get an update to my car that says "this may cause your car to explode at random times", and I don't want to scrap it, the only thing I can do is look around and see if other people are ignoring the warning, and then rationalize that it won't happen to me.

                  You can't ever look at consent outside of the context of the best available alternative to agreeing to something.

              • throwaway2048 1864 days ago
                But they agreed to the terms of service, what is everyone complaining about?
          • hnaccount141 1864 days ago
            On the other hand, if enough people die because a company rushed self driving to market before it's ready there's a very real chance of knee jerk regulation setting the technology back even further.
        • vonmoltke 1864 days ago
          > But would you still feel the way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?

          I don't have a dog in this fight, but appeals to emotion in order to drive irrational thinking do not make for constructive debate.

          • rchaud 1864 days ago
            > appeals to emotion in order to drive irrational thinking do not make for constructive debate.

            In a perfect world, sure. Real world, you will never have an inherently emotional situation (road safety) where the only voices heard are those of completely detached individuals.

            As humans, we have to figure out ways to connect with them, and empathize with what they're feeling. Simply dismissing their concerns as driven by emotion isn't a winning strategy.

            • vonmoltke 1864 days ago
              Understanding the emotional reactions of people to situations is important. "How would you feel if"-type statements do not do that. They do, and are often intended to, shut down conversation instead of foster it.
          • benj111 1864 days ago
            I disagree.

            The gp said they're a "Willing guinea pig".

            The parent pointed out it isn't just their lives on the line, but others, potentially including their family.

            That isn't irrational at all. Or an appeal to emotion.

    • trhway 1864 days ago
      > we the drivers on the road are unwilling guinea pigs to billionaires’ dreams/goals.

      we'll adapt, i.e. adjust our behavior to account for that new factor. Police, for example, have already learnt how to pretty safely stop a Tesla on autopilot with a driver sleeping behind the wheel and not reacting to any signals (because of being dead drunk, for example).

      • benj111 1864 days ago
        I like comments where you can't tell if the author is defending something, or absolutely condemning it.
    • jseliger 1864 days ago
      Right now 30,000-40,000 people die in car crashes annually: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in.... And those are just fatalities: hundreds of thousands more are injured. How do we deal with those presently?

      We just accept that amateurs should be hauling around at high speeds in several thousand pounds of missile.

      The most relevant question is whether Tesla AP is safer or less safe than typical amateur drivers per 1,000 vehicle miles driven. I don't know the answer to that question.

    • nathanaldensr 1864 days ago
      Unwilling? Don't buy a Tesla. Don't believe the hype. It's that simple. Granted, there is nothing stopping any ECU from killing you, but I'd trust a company like Honda way before Tesla.
      • maxxxxx 1864 days ago
        "Unwilling? Don't buy a Tesla."

        That doesn't protect me from being killed by a Tesla. I am pretty neutral on the topic but I am getting the feeling that they are in danger of pushing out half baked stuff like we tend to do in software. For most things like software this is OK but maybe not for things that are moving at high speeds.

        • smileysteve 1864 days ago
          Reminder that it has been and still is the norm for the last half century for 30k people to die in car accidents each year. Many more injured and disabled.

          Better yet, texting while driving increases the risk of an accident by 23x

      • adrianN 1864 days ago
        Tesla crashes can injure people who don't drive a Tesla themselves.
        • jstanley 1864 days ago
          So can Honda crashes.
          • yborg 1864 days ago
            Honda isn't experimenting on the public at large with unproven technology. One of their suppliers did, and ended up bankrupt as a result.
      • Tomte 1864 days ago
        I generally have zero interest in cars and don't follow the new models, but my impression from articles I've seen in the last years is that Volvo is actually a top contender for driver assistant systems (when you don't fool yourself into thinking you have an autopilot, but you really want sensible safety augmentation features).

        Is that impression accurate?

  • bsaul 1864 days ago
    I wonder how unit tests work with NNs (or if they're even a relevant concept at all).

    You could replay some test video frames and make sure the objects are correctly identified, but I suppose that's already what training is about...

    If an issue like that resurfaces, does it mean that the original frames leading to the 2018 accident aren't part of the training (or at least frames from someone driving in this kind of scenario)?

    • neilalexander 1864 days ago
      Yes, the problem is indeed that there's no real way to "look into" a neural network and understand how it has been trained. All you can do is observe that the given inputs generate the desired outputs.

      Even if there was training based on the 2018 frames, that doesn't mean that you have verifiably fixed the problem. It's difficult to train a neural network selectively – every time you "train" the network with additional data, you are increasing the chance that you are also teaching it something you didn't intend which then can have a side-effect in some seemingly unrelated scenario.

      You can see this in real life with image recognition networks. Teach them too much and they gradually become less effective at identifying anything.

    • bluGill 1864 days ago
      The frames from the 2018 accident should not be part of the training set. They should be part of the test set used to prove that whatever training they do works.
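
      A minimal sketch of what that could look like as a release-gating regression test (all module and function names here are hypothetical, not Tesla's actual tooling):

        import pytest

        # hypothetical helpers; a real perception stack would expose its own API
        from perception import load_model, detect_obstacles
        from incident_data import load_frames

        FRAMES = load_frames("2018-mountain-view")   # held out of training entirely
        MODEL = load_model("release-candidate")

        @pytest.mark.parametrize("frame", FRAMES)
        def test_barrier_detected_in_time(frame):
            detections = detect_obstacles(MODEL, frame.image)
            # The crash attenuator must be seen with enough range left to brake or steer.
            assert any(d.label == "barrier" and d.range_m >= frame.min_safe_range_m
                       for d in detections)
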
    • inertiatic 1864 days ago
      Well, testing in the strict sense would be measuring accuracy on data not in the training set, and even accuracy on the training set data (which isn't guaranteed to be 100% I believe)
    • skwb 1864 days ago
      You don't unit test a NN. You can unit test certain functions, but fundamentally this is an integration test.

      This is why serious scientific training is needed to understand these complex systems when health and safety are on the line.

      • leggomylibro 1864 days ago
        "Autonomous vehicle integration test track" could make for a great setting in a spy thriller. The villain could own the megacorporation which makes the cars, and the heroes could find evidence of their evil plot among the sprawling acres of labs and potemkin streets. But then, in the distance, the sound of revving engines...

        Seriously though, I wonder if that sort of physical test track will become popular. You would load your build onto an idle car, queue it up, and make sure that it didn't hit any of the silhouettes which spring up, unusual traffic and weather conditions, etc. They must already do that in some capacity, right?

        • saalweachter 1864 days ago
          > "Autonomous vehicle integration test track"

          Just say "the 101", it's shorter.

    • ummonk 1864 days ago
      Should be using integration tests, not unit tests. I.e. SITL simulations + having the car drive around test circuits / scenarios.
    • dv_dt 1864 days ago
      My first thought was, well someone is in store for a revamp of their regression test suite.
    • jononor 1864 days ago
      Unit tests are not sufficient for neural networks. Say the network takes 2D images of typical 224x224 pixels with 3 channels (RGB) of 8 bit values; this input space has 256^(224x224x3) = 3.5×10^362507 possibilities. Billions of years to test them all. This is before we consider stereo vision, 3D images and state over time. How to know which subset of these inputs is necessary to give reasonable coverage? Right now I don't think we have very good answers to these things. Of course one can always add a regression test (with some K mutations) when someone crashes. It is better than nothing, but hardly good assurance that something like this will never happen again.
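
      For concreteness, the count behind that number:

        from math import log10

        pixels = 224 * 224 * 3            # 150,528 eight-bit values per frame
        exponent = pixels * 8 * log10(2)  # log10(256 ** pixels)
        print(f"~10^{exponent:.1f} possible frames")   # ~10^362507.5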

      The entire area of safety and quality assurance for neural networks is still being actively researched, from multiple angles. For some examples of how chaotic neural networks can be, look up 'adversarial examples'.

  • x38iq84n 1864 days ago
    I have always wondered... Does the AP have some higher-level notion of object permanence, continuity (road behind a horizon or after a curve) and things like that? Does it track a pedestrian that is momentarily hidden behind an obstacle and will probably reemerge in a second or two on the other side? Does it expect that kids may run after that ball that just flew from behind a car? Does it continuously track and improve classification of all objects in the field of vision, with their trajectories and speeds if they are moving? Personally I don't think it does, otherwise it would not erratically slam into clearly visible and marked large objects in its way, or it would be aware of a truck moving in a perpendicular way and so on. I am of opinion that without such higher-level awareness it can never succeed, hope to learn about the state of the self-driving art.
    • automathematics 1864 days ago
      In these videos, "Autopilot" is mentioned as the culprit, which seems to be a subset of the features Tesla has.

      It seems to me there are 3 layers:

      - 0: Just Adaptive Cruise Control (uses radar to adjust speed up to a max). Human still steers.

      - 1: Adaptive Cruise Control (uses radar to adjust speed up to a max) + Autosteer (cameras watch the lane markers and follow them). This is referred to as "Autopilot"

      NOTE: This is where the accidents happen. The car isn't driving towards a barrier, it's following the lane markers and hits an error state. This is also NOT self driving.

      - 2: "Nav on Autopilot". This is an additional function you turn on where the car has more intelligent (use this word loosely) capabilities on highways. The car will still do everything on level 1 combined with lane changes (using cameras to detect objects and trajectories differentiating cars from trucks from pedestrians from bikes from motorcycles etc). It will still follow lane lines, but with a lot of additional information (is there an object? am I merging? am I exiting? etc)

      - 3: "Full Self Driving". This is an additional package that isn't available to the public. Internally I'm sure they're testing the functionality, but this uses all of the sensors and algorithms and likely neural networks to decide what to do. A cool point though is that all Teslas are likely running this code in "shadow mode", where data can be collected and assumptions can be tested without endangering any actual drivers (see here for some cool data on this: https://electrek.co/2019/03/05/tesla-autopilot-detects-stop-...). Essentially: "Hey, I think the car, if fully self-driving, SHOULD take action X" - then compare to what the driver actually does and log the data, over BILLIONS of miles.

      So when a Tesla hits the barrier or gets in an accident, we're actually running "#1" and people start freaking out. But when we get to the capabilities of #3 a lot of the "object permanence and continuity" stuff starts to come into play.

      Full Disclosure: I drive a Model 3 every day, on Autopilot 75% of the time. I'm only 75% convinced full self-driving is achievable under the current Tesla software suite, but I bought the package anyway.

    • dragontamer 1864 days ago
      I don't think neural networks are wired to "remember" things. In theory, they could be hooked up that way. But your typical convolutional neural network is looking at things frame-by-frame.

      In theory, ANNs could have an output layer that passes data from one frame to another frame to assist things. But there's no real programming to "hardcode" something like object permanence into an ANN. You pretty much throw a bunch of data into the system and hope for the best.

      • hemogloben 1864 days ago
        NNs are just the first step in the pipeline. Their outputs (detected objects, segmentation, etc) will be piped into other software that builds higher level models.

        Considering the path-planning requirements, I would be absolutely shocked if Autopilot wasn't building history models and estimated paths for objects around the vehicle (other cars etc).

        • JanSolo 1864 days ago
          Agreed; I imagine they use neural networks to detect and classify objects which are then saved into a scene-graph for use in pathing.

          I expect what happened was that they trained their NNs for improved detection in one area but unknowingly reduced it in another. Perhaps now it can detect tricycles 99% but road barriers went down to only 30%. Having worked with NNs it's very common to see gains in one domain which come at a cost of reduced performance in another.

          • stevenjohns 1864 days ago
            They must have known. I haven't worked with NNs for a few years, but I don't believe the methodology has changed to the point where you would stop testing over different sets of data.

            Barriers are a pretty big part of driving on roads and highways and the only reason it would have been unknowingly reduced would be if they just weren’t testing the NN against data with them.

      • robrenaud 1864 days ago
        There are architectures that use CNNs on image inputs, and LSTMs across frames of the video to keep memory.

        https://arxiv.org/pdf/1609.06377.pdf
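
        A toy version of that pattern, with made-up layer sizes, just to show where the recurrence carries state from frame to frame:

          import torch
          import torch.nn as nn

          class FrameSequenceModel(nn.Module):
              """A CNN encodes each frame; an LSTM carries state across frames."""
              def __init__(self, feat_dim=128, hidden_dim=64, num_outputs=2):
                  super().__init__()
                  self.cnn = nn.Sequential(
                      nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(32, feat_dim),
                  )
                  self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
                  self.head = nn.Linear(hidden_dim, num_outputs)

              def forward(self, frames):              # frames: (batch, time, 3, H, W)
                  b, t, c, h, w = frames.shape
                  feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
                  out, _ = self.lstm(feats)           # hidden state links the frames
                  return self.head(out[:, -1])        # prediction from the last step

          # e.g. FrameSequenceModel()(torch.randn(2, 8, 3, 224, 224)) has shape (2, 2)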

        • x38iq84n 1864 days ago
          Thanks (to all in this subthread). I've skimmed through this doc, I have watched https://vimeo.com/274274744 linked from original reddit discussion and I have lost all remaining faith in AP as it is today. This should be in a closed alpha version, not anywhere near paying customers and marketed as FSD ready/feature ready as it's none of that and won't be for many years. I expect there will be at least a generation of Tesla cars sold with FSD-readiness package that will never see FSD in their lifetime.
    • eclipxe 1864 days ago
  • mslev 1864 days ago
    Watching the video, '2019.5.15 - Try 2' is interesting. You can see the car moving normally, then it starts to follow the black crack in the road and moves to the right- at this moment, the white Nissan in front is blocking the white lines ahead where the lanes actually split.

    Does AP use other cars as reference points, or just the road? Ideally in this situation it would be both: "The line has disappeared, and there's a new one now, but that car went over it". Instead it seems to just be following whatever lines it can see. Does that make sense?

    Note- not at all defending the AP behavior here. Just thinking out loud.

    • treis 1864 days ago
      >Watching the video, '2019.5.15 - Try 2' is interesting. You can see the car moving normally, then it starts to follow the black crack in the road and moves to the right- at this moment, the white Nissan in front is blocking the white lines ahead where the lanes actually split.

      It seems like it's failing in different ways:

      Try 1- Toughest to tell, but it looks like it failed to recognize any lines. It kept going straight, which pointed it at the barrier. Hard to tell if the car would have recovered.

      Try 2- Looks like the car tried to go left into the closed lane. Seems like an error in detecting the barriers closing the road. I'd guess that it would have avoided the concrete barrier and driven down the closed lane

      Try 3 - This one looks like it picked the wrong lane marker to be the left side of the road. In that it thought the right lane marker of the closed lane was actually the left lane marker. This one probably ends up with a smashed car and dead driver.

  • alanh 1864 days ago
    My Model 3 suddenly changed lanes today for no discernible reason. I think it considered my lane to jump over into the next lane, for some reason. I should have hit 'Record' to save the footage from TeslaCam.

    (Notably, AutoPilot is not supposed to change lanes without explicit confirmation from the driver, which is clearly illustrated on the dashboard/panel.)

    It's a stretch of road on which I have previously used AutoPilot many times.

    • gpm 1864 days ago
      Recently I accidentally changed lanes in an intersection. The road had 3 lanes in each direction (+ separate streetcar tracks down the center), including the right lane which was required to turn. I was in the center lane. After the intersection the road was still a 3 lane road - which didn't quite register on my brain. I moved from the center lane to the right lane (figuring it exited so I was supposed to be in the now-right lane) when I should have remained in the center lane.

      While I am a novice driver, I've gone through that intersection before without blinking or doing the wrong thing. It's not a particularly complicated intersection.

      Anyways, point is, driving is surprisingly hard. I think counting anecdotes on the internet probably gives you a sample heavily biased against Tesla, because most people don't go post "so I did this stupid thing" but they do post "so my car did this stupid thing".

      • alanh 1864 days ago
        My car and I are stupid in different ways. It has perfect attention and will always brake in time to avoid a fender-bender in traffic. It’s also a lot worse than I am about reading and communicating intent with other drivers or simply recognizing the objects around us.
  • systemspeed 1864 days ago
    With demand for self-driving vehicles as high as it is, yet with the lag in advancement of computer vision, I think it's about time for a serious discussion about smart roads. I realize this isn't entirely relevant, but I can't be the only one thinking that we're missing the forest for the trees by trying to solve a transportation problem while simultaneously solving a vision problem.
    • progfix 1864 days ago
      Might as well put them on rails (Smart-Rails(TM)) and make a railway network for smart people. Transportation problems solved!
    • vkou 1864 days ago
      So, trains?
  • jaimex2 1864 days ago
    Warning: Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present. Never depend on Autosteer to determine an appropriate driving path. Always be prepared to take immediate action. Failure to follow these instructions could cause damage, serious injury or death.

    https://www.tesla.com/content/dam/tesla/Ownership/Own/Model%...

  • Tomte 1864 days ago
    The comments are scary. People defending Tesla's misleading advertisement re: full self-driving hardware etc.

    Because clearly "having full self-driving hardware" only means the hardware is there, not that the car can actually self-drive.

    I bet those people find themselves misunderstood in pretty much every discussion with people outside their nerdy circle.

  • mcguire 1864 days ago
    One might suspect, given the reintroduction of the bug, that Tesla doesn't understand their code.
  • syntaxing 1864 days ago
    Tesla should just partner with or acquire a LiDAR startup like Baraja [1] already... They can let the computer vision do all the magic they want. Just have one scanning LiDAR as a redundant system so that the car doesn't run into anything in front of it.

    [1] https://www.baraja.com/

    • ip26 1864 days ago
      I don't know why, but supposedly they've made "no LiDAR" their hill to die on.
      • syntaxing 1864 days ago
        Yeah, it's such a weird mentality. Their new patent on a "CNN" ASIC is pretty neat but still doesn't solve a lot of their problems. Are they banking on a magical depth CNN architecture to be released or something?!
      • edshiro 1864 days ago
        Costs. Installing LIDAR will significantly hike up the price of their vehicles.
  • gwbas1c 1864 days ago
    The video really doesn't convey what happened. It's not really clear when the driver took over.

    My Model 3 is very odd when going through forks like this. It just has trouble picking one side of the lane or the other. Yesterday I got a little scared when it swerved back and forth while trying to figure out how to take an exit.

  • sabareesh 1864 days ago
    Since the Model X incident I have made sure not to drive in the first lane. I always drive in the 2nd lane and it is much better.
    • bdcravens 1864 days ago
      For a $700 car I might consider crazy compromises, but $70k+? Absolutely not.
      • warp_factor 1864 days ago
        That's what I find so interesting with Tesla owners. They spend a fortune on a car, then they minimize every single issue they have with it. My explanation for this is that for a lot of owners, the car is a way to be part of a hyped group more than a utilitarian object (what a car should be).
        • mikestew 1864 days ago
          Happens with a lot of stuff, certain American-made motorcycles, for instance. You're in the club now, and the only way to stay in the club is to carry the manufacturer's water. In no case are we to admit that we spent a bunch of money on something that does a poor job of fulfilling its advertised purpose.
          • bdcravens 1864 days ago
            Or more relevant to the HN audience, Apple.
            • mikestew 1864 days ago
              What, Tesla wasn't relevant enough for you to not take an "obligatory" dig at Apple product owners?
              • bdcravens 1864 days ago
                Trust me, if I'm taking a dig, I'm pointing at myself; I have probably $7k of Cupertino products within arm's reach at the moment. But I have to admit there's times when the comparison is apt.
        • gojomo 1864 days ago
        • jaimex2 1864 days ago
          The cars are brilliant, pure and simple; minor defects get eclipsed.

          Nothing out there comes close to Autopilot. Its biggest problem is that it works 99% of the time and people trust it too much.

          It's also worth noting you are NOT meant to be using it in a work zone. A point a lot of comments seem to completely ignore.

          https://www.tesla.com/content/dam/tesla/Ownership/Own/Model%...

        • fh973 1864 days ago
          In context of Apple this was often referred to as the Stockholm Syndrome.

          https://en.m.wikipedia.org/wiki/Stockholm_syndrome

        • goshx 1864 days ago
          The point non owners miss is that early adopters are aware of what they are buying and what to expect, while the very vocal know-it-alls of the interwebs are judging the technology as if it was supposed to be perfect. I am glad Tesla’s future doesn’t depend on these people’s opinions.
        • serf 1864 days ago
          what's even more interesting is when a market of people choose to act out of the ordinary to cope with a product that has an issue, but in the situation where the product is self-learning to a degree.

          Tesla hits a wall, Folks drive abnormally due to qualms with incident, Tesla receives data and throws it in the ML mix, Car learns from people driving purposely strange.

          Of course, it doesn't work out like that; not enough people will do it to influence anything very much, but it's a funny thing to consider.

      • MockObject 1864 days ago
        Folks might be even more likely to compromise with a $70k car than a $700 one. The psychology of sunk costs!

        https://www.lesswrong.com/posts/tyMdPwd8x2RygcheE/sunk-cost-...

      • sabareesh 1864 days ago
        Autopilot is a beta feature that you are choosing to buy. And it costs 3k, and no one offers anything close to what Tesla has that you can buy now.
        • bdcravens 1863 days ago
          Nothing in their marketing suggests that it's a beta feature. (Just look at the Autopilot tab when configuring a new Tesla)
    • kurtisc 1863 days ago
      Where I live, that's not legal.
  • Scoundreller 1864 days ago
    That’s a very strange way to close a highway.

    Where are the blinkenlights? The words? The flashing arrows? The plastic jersey barriers?

    The buckets full of sand or water?

    The repainted lines directing you to the right?

    Can’t tell if this is temporary or long-term closure, but if it’s a multi month thing, I would expect more “Don’t you dare fork to the left” signalling.

    Not everyone is a local.

    • sbierwagen 1864 days ago
      The section of road shown in the video is the I-5 express lane where it rejoins the regular freeway in North Seattle: https://www.google.com/maps/@47.7021836,-122.3299637,3a,71.1... Note how it's configured for southbound traffic in the street view image. It switches direction twice a day.

      Though you can't see it very well in the video, there is a crash arresting net after all the signs. https://www.google.com/maps/@47.7034263,-122.3302029,3a,19.3... You can't just drive through a couple of boards and run into oncoming traffic.

      • Scoundreller 1864 days ago
        I followed back the first link for the opposite direction of travel and didn’t find any signage before the swing-gates. You kinda truck along at 45mph+, sticking to the left to continue and with a small amount of notice time, you need to fork to the right.

        I get that a local would know you are to always fork to the right, but someone unfamiliar or a machine could be confused by the initial half barriers, with only a 2 lane wide barrier after the demarcation point. Along with the oddly painted lanes.

        But as I said in another post, I guess DOT would have the data on incidents (and hopefully act on it if there’s enough evidence in favour of more safeguards).

        • justinv 1863 days ago
          There is signage - there are signs to tell you whether the Express Lanes are open or not that are level with the general signage for direction of travel (ie above the road)
    • blahyawnblah 1864 days ago
      It's a lane of traffic that can go either way depending on demand. Common in larger cities in California and Washington.
    • jakeogh 1864 days ago
      Why would any of that be relevant?
      • Scoundreller 1864 days ago
        Because the same things that confuse humans can confuse computers that are largely trained with human data.
        • dymk 1864 days ago
          Millions of drivers per year use roads like this in California and Washington. They’re not very confusing for a human driver who’s paying attention.
          • smileysteve 1864 days ago
            I'd like to see the data behind this statement. I highly doubt that the "roads like this" see zero accidents.

            Based off of guard rail end cap lawsuits alone, it seems that humans many times get confused even when driving straight on a highway; whether they are paying attention is another question, if it's relevant at all.

          • Scoundreller 1864 days ago
            I’m from Toronto, so I never underestimate drivers’ ability to do stupid like drive into streetcar-only tunnels past the end of the ashphalt, despite ample signage. 26 times.

            https://torontolife.com/city/queens-quay-streetcar-tunnel-dr...

            • BoorishBears 1864 days ago
              How many hundreds of thousands of cars for those 26 to make that mistake?

              Now imagine if every Toyota did this instead?

              FSD is being touted as a "force multiplier" for safe driving, but it could end up being a force multiplier for deadly mistakes

              • Scoundreller 1864 days ago
                Few care about the 26 that missed the signs.

                It’s the multi-hour shutdown of the city’s 3rd busiest transit route that connects to the continent’s 3rd busiest train station that impacts a lot of people.

        • sgt101 1864 days ago
          Upsetting as it is, it also turns out that different things also confuse the computers. Sometimes unexpectedly and suddenly.
      • muzika 1864 days ago
        Increasingly, cities will be expected to plan roads in a way that would be simpler for autonomous vehicles to navigate.
        • Scoundreller 1864 days ago
          And humans.

          If two lanes in the default path of highway driving (stay left to continue) are closed half the day, I’d expect some blinking sign overhead saying “two left lanes closed ahead, stay right”.

          I guess DOT would have the records of people making the mistake to know if there’s enough value from implementing another signal for what you’ll have to do up ahead.

          Dunno if Tesla reads signs or not (or uses scanned signs to navigate), but it seems like a useful thing for humans that could assist machines.

          But maybe I’m just spoiled by my jurisdiction’s electronic signs that say this kind of thing.

          • SketchySeaBeast 1864 days ago
            Wow. Where I'm currently at they'll put up "left lane closed signs" and you don't know if it's the left lane, the right lane, or neither, that's actually closed until you get there.
        • jakeogh 1864 days ago
          The SDC concept vastly underestimates the processing power of wetware. The tendencies to view humans as the bugs and to add rules to make reality more predictable are just excuses to avoid the real issue.
    • stagger87 1864 days ago
      I would think the several reflective construction signs with arrows would be enough. There is also an impact attenuator (buckets of water). The video is also cut too close to the turn to know whether there were more signs leading up to it.
      • Scoundreller 1864 days ago
        The only reflective gate that swings into the second lane from the left is after the barrier.

        I checked the street view linked above and followed it back: no signs.

  • newnewpdro 1864 days ago
    I find it absurd that the NHTSA doesn't treat Tesla the way it treated Toyota ~10 years ago over the sudden unintended acceleration controversy. [1]

    Tesla has demonstrably flawed and dangerous vehicles operating on our roads. These persistent autopilot bugs are killing people. If we treated the Tesla autopilot as a licensed driver, it'd have its license suspended. (Not that it could have passed a driving test in the first place)

    [1] https://en.wikipedia.org/wiki/2009%E2%80%932011_Toyota_vehic...

    • DarmokJalad1701 1864 days ago
      > These persistent autopilot bugs are killing people.

      Are there any numbers on this?

  • cmurf 1864 days ago
    So far, Tesla automation seems to handle edge cases. I have never rear-ended or side-swiped anyone; getting into such an accident, or avoiding one, is an edge case for me. Useful, but not really automation.

    For it to even have a chance during lane keeping of doing something only a sleeping or suicidal driver would do? It's utter bullshit. It's less safe than a human.

    Autonomous driving is overhyped vaporware. Autonomous airplanes would be easier than cars, and we're still nowhere near that.

  • ummonk 1864 days ago
    Maybe it's merely that I'm used to cones rather than barriers of that sort, but that stretch of road looks rather confusing to me as a human as well.
  • FreedomToCreate 1864 days ago
    Tesla can only really gather depth data from their radar, while the cameras feed a DNN that detects features; steering and speed adjustments come from fusing those two pieces of data together. An error in the radar, or the DNN detecting features incorrectly because of lighting, a road color change, or an object on the road the model was not trained on, can cause problems like this.
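
    Roughly the kind of fusion being described, as a minimal sketch (inverse-variance weighting; the names and numbers are illustrative, not Tesla's actual stack):

        # Hypothetical radar/vision depth fusion, illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Measurement:
            depth_m: float    # estimated distance to the object
            variance: float   # sensor noise (larger = trusted less)

        def fuse(radar: Measurement, vision: Measurement) -> Measurement:
            """Inverse-variance weighted average of two independent depth estimates."""
            w_r, w_v = 1.0 / radar.variance, 1.0 / vision.variance
            depth = (w_r * radar.depth_m + w_v * vision.depth_m) / (w_r + w_v)
            return Measurement(depth, 1.0 / (w_r + w_v))

        # e.g. radar says 40 m (trusted), vision says 55 m (noisy) -> fused ~41.5 m
        print(fuse(Measurement(40.0, 1.0), Measurement(55.0, 9.0)))

    If the radar tends to discount stationary returns and the vision net misreads the scene, neither input flags the obstacle and the fused estimate never will, which would be consistent with the failure mode described here.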
    • zmarty 1864 days ago
      "Tesla can only really gather depth data from there radars" - they can also do it visually through depth from motion
    • notfromhere 1864 days ago
      If the behavior came back after an update, wouldn't that be an issue of the model being retrained incorrectly?
  • sandos 1863 days ago
    What I don't understand is that the yellow sign is actually very visible compared to other things the neural nets can detect. Why hasn't it been done yet? Detecting something as visible as that sign should not be a huge problem.
  • dreamcompiler 1864 days ago
    I used to be a big supporter of Tesla, but I now have to apply Internet-of-Shit Rule #1 to them: No software updates ever without explicit owner approval.

    And "owner" is me, not Elon Musk.

  • jtaft 1864 days ago
    Can they simulate their sensor inputs reasonably well? Do they use driving simulators (video-game-like) to see what would happen in similar scenarios?

    Of course, real world testing would still be necessary.

    • jaimex2 1864 days ago
      When a customer disengages or corrects AP, that period's sensor data is recorded and uploaded to Tesla. The uploads are curated and added to the neural network's training data set.

      They basically have an infinite amount of real world data coming in.
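
      Conceptually it is a trigger-based collection loop, something like this rough sketch (all names hypothetical, not Tesla's actual pipeline):

          # Hypothetical disengagement-triggered data collection, illustrative only.
          from dataclasses import dataclass, field
          from typing import List

          @dataclass
          class Clip:
              frames: List[bytes]   # sensor snapshots around the event
              reason: str           # e.g. "driver_disengaged", "driver_corrected"

          @dataclass
          class FleetCollector:
              pending: List[Clip] = field(default_factory=list)

              def on_takeover(self, recent_frames: List[bytes], reason: str) -> None:
                  # Driver intervened: keep the surrounding sensor data for upload.
                  self.pending.append(Clip(recent_frames, reason))

              def curate(self) -> List[Clip]:
                  # Review step: only clips judged useful go into the training set.
                  return [c for c in self.pending if c.frames]

      The interesting work is in the curation step; most takeovers are routine, so the raw firehose is only as useful as the filtering.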

  • lgleason 1864 days ago
    Tongue in cheek......this is planned obsolescence. New model with upgraded features is announced so you start to brick the old ones by making them run into things. :) Of course there is that pesky issue of it potentially injuring/killing the driver and passengers....

    In all seriousness, these things are getting better, but Skynet is not here yet. Just because a company, governmental agency, etc. says something is safe, you still need to evaluate it yourself, and it's probably not a good idea to rely on these autopilots unless you want to win a Darwin award. I love the cars, but the hype around stuff like this is a bit annoying.

  • GoToRO 1864 days ago
    That's one way to keep drivers alert I guess... Add a little bit of randomness into your boring drive.
  • perfunctory 1864 days ago
    It troubles me that we use the term "software bug" to describe this.
  • 101001001001 1863 days ago
    I still want a model 3. I’ll just not use the autopilot.
  • throwaway789214 1864 days ago
    Crazy how people are blatantly ignoring that even on autopilot drivers need to be attentive and keep situational awareness at all times.

    I can't recall a single case in which Autopilot caused a situation that an attentive driver couldn't have recovered from. Yet the system gets blamed when people ignore this requirement and end up in an accident.

    Throw-away for obvious reasons...

    • rhino369 1864 days ago
      So what’s the point of it? Not steering, but needing to be able to steer immediately, seems much more mentally taxing than just steering.
      • throwaway789214 1864 days ago
        Do not use it if it is a net negative for you, but if you do use it, follow the instructions. You can't have your cake and eat it too.

        There are a lot of people at Tesla working full-time on Autopilot, and some drivers think they know better by disregarding their instructions on how to use it? Frankly, that's just crazy.

        • jessaustin 1864 days ago
          Perhaps those drivers don't have your inside view of Tesla R&D?
        • malms 1864 days ago
          You are a retard.

          Tesla calls the thing "Autopilot" and only says "keep hands on wheel" for purely legal reasons. As a consequence it is a more dangerous system overall, because full attention is always better than the "partial attention" state drivers are in with Autopilot. Then it blames people for having accidents...

          This is hypocrisy at its finest. Please open your eyes and see the bigger picture.

    • dragontamer 1864 days ago
      Other companies call their technologies by more precise names, like "Lane Keep Assist". Tesla is the only one calling it "Autopilot", so people have different expectations when they use it.

      It's Tesla's marketing that people are really complaining about: "Full Self-Driving", "Navigate on Autopilot", "Enhanced Summon".

      https://twitter.com/elonmusk/status/1067967799547449344

  • anth_anm 1864 days ago
    Said it before, saying it again: OTA updates to safety-critical systems are not a feature. They're a bug.