Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report
I think this is part of the reason I am wary of trying it (including some of the competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course-correct at the literal last second.
Interestingly, I think that similar types of arguments are made against "agentic coding"
If you don't pay constant attention, you will never notice when it slips in a bug or security issue
Sure, but you can do that in a diff after the event, rather than live.
Car crash deaths are better known than software bug caused deaths. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to an experimental tech.
Treat it like a driver assistance system. I treat FSD the same as I treat Adaptive Cruise Control and Lane Keep Assist in my CRV. I keep my hands on the steering wheel and follow along with the decision making.
Reminds me of a situation not long ago.
I’m in left lane on highway. Tesla ahead of me but quite a ways away.
I realize as I’m driving that the Tesla is moving quite slow for left-lane driving. And before you say it, yes, there are lots of people speeding in highway left lanes too.
So - I passed on the right rather than tailgate. Look over and see a guy leaning back in his seat. No hands on wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.
To your point about using FSD the way you do, that makes total sense to me. Which implies you would also cruise at the right speed depending on the lane you are in, unlike my example.
One of my major complaints about FSD is the 'speed profiles'. You used to be able to set a target speed directly. Now, you can only select a profile. You're either going the exact speed limit, 2-3mph over, or essentially 'with the flow of traffic' which can lead to speeding +15 over the limit.
Real question, then, from someone who only bothers driving when he must and even then in a 2016 model: Why do you use it? What beneficial purpose do you find it to serve?
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
Here's a simple example from last week. FSD was in control on my way to work, stopped at a red light early in the morning before the sun was up. The light turns green and FSD doesn't accelerate. I figured it was somehow confused, and I was starting to move toward hitting the accelerator myself, when a car came flying through the red light from the driver's side. I hadn't noticed this car, but FSD saw it and recognized it wasn't slowing down. I could see there were headlights, but it wasn't clear how fast it was going.
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
Huh.
I appreciate your thoughtful and detailed response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of amateur motor vehicle operators.
Which is just worse.
When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.
Making me have to try to guess what the car is going to do at any given time adds complexity to the process: am I changing lanes now? Oh, I guess I am, because the autonomy thinks we should, etc.
Sure, but the practical experience is that FSD is fairly predictable. It's just a matter of personal preference that comes from experience. I wouldn't impose a system like FSD on everybody.
SAE level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone between shouldn't be a valid feature.
I think you mean SAE Level 3. SAE Level 2 is “lane centering” and “adaptive cruise control” [1]. (Level 3 is “when the feature requests, you must drive.”)
A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.
I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that, AFAIK, you can't reasonably expect to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, it's not going to be able to handle everything.
I think their point was "it's not ready yet."
"Because the Origin does not have manual controls, the NHTSA must issue an exception to the Federal Motor Vehicle Safety Standards to permit operation on public roads"
Too bad that project failed.
Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.
Would it be a vote of no confidence in Full Self Flying?
No, it would be an acknowledgement of the lack of perfection in human systems so far.
I mean, they kinda are.
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
That presents an interesting failure mode challenge.
Well we don't have any self driving cars outside of San Francisco. Only cars with advanced driver assistance.
Quite a few more places have them now:
How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.
It's been well known for a while now, and it's not to avoid recording being active; it's to stop a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips? AP/FSD has no training on that, and the wheels could keep spinning at full speed while first responders try to secure the car.
AEB should still be working to pump the brakes AFAIK, but auto-steer and cruise control will be disabled, even if the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.
EDIT: IIRC the threshold for disengagement is 1s.
>> Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report
> It's been well known for a while now, and it's not to avoid recording being active; it's to stop a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips? AP/FSD has no training on that, and the wheels could keep spinning at full speed while first responders try to secure the car.
That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash, or 2) Tesla publicly claimed it wasn't responsible since the system turned itself off, then Tesla is behaving unethically and dishonestly.
To be fair, that report says
> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.
It's an insane reversal of roles. In a standard level 2 ADAS, the system detects a pending collision the driver has not responded to and pumps the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of pumping the brakes. It's pure insanity.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
IDK, this has the same unethical energy as police turning off body cameras.
In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.
> the "right thing to do" is NOT turn off the cameras just before a collision
Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)
This is a policy that Tesla put in place, period. Handing control to the driver suddenly, at a weird moment, can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it’s all too unexpected.
Yep, your comment reminds me of a time my mother was about to hit a bird in the road. However, she was too busy arguing with the passenger to notice, and her driving was starting to become erratic already. I decided not to tell her because I knew that the shock could cause her to do something more drastic, like crashing the car trying to avoid it.
I guess I'll step in for the counter.
How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
Maybe the car should not have this dangerous feature in the first place? Or maybe train drivers thoroughly and frequently, so that when this situation arises it becomes less dangerous.
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
The few Tesla post-mortems I’ve read early on stated that FSD turned off before impact and used this as a defence of their system. If they shared that this happened 1 second before impact (so far too late for a human to respond), I’d have sympathy. I have never read a Tesla statement that contained this information.
For normal incidents, 2 seconds is taken as a response time to be added for corrective action to take effect (avoidance, braking). I’d expand this for FSD because it implies a lower level of engagement, so you need more time to reengage with the car.
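To make the response-time budget concrete, here's a back-of-the-envelope sketch (my own illustration, not from the comment above) of how much road is covered before any corrective action takes effect:

```python
MPH_TO_MPS = 0.44704  # metres per second per mile-per-hour

def distance_during_response(speed_mph: float, response_s: float) -> float:
    """Metres travelled during the driver's response time,
    before avoidance or braking even begins."""
    return speed_mph * MPH_TO_MPS * response_s

# At 70 mph, a 1 s hand-off means ~31 m travelled blind;
# the standard 2 s budget means ~63 m.
print(round(distance_during_response(70, 1.0)))  # -> 31
print(round(distance_during_response(70, 2.0)))  # -> 63
```

Either way, a disengagement one second before impact hands the driver a problem that is already physically unrecoverable at highway speed.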
This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event rather than requiring self-driving be activated at the exact moment of impact.
Agreed. I also feel like there is a world of difference between the driver deliberately assuming control at the last second because they notice that an accident is about to happen, and the car itself yielding control unprompted because it thinks an accident is about to happen.
The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
> It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.
That's still not a good look.
And it does mean that FSD isn't to be as trusted as it is because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.
Disregarding the fact that NHTSA findings apparently contradict it (though that may just be a more recent change than the 2022 report), Tesla claims to use five seconds before a collision event as the threshold for their data reporting on their FSD marketing page:
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that should more than cover the common perception-response times of around ~1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver assistance systems return control to the driver and its impact on driver response times and their overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception–response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we’re now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus that they're now barreling down on.
For that matter, this approach is making the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver has slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive during normal braking.
0. https://www.tesla.com/fsd/safety#:~:text=within%20five%20seconds
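The attribution rule quoted above boils down to a simple check, which can be sketched like this (my own reading of the quoted policy, with hypothetical names):

```python
# Tesla's stated rule: a collision counts as "FSD engaged" if the system
# was active at any point within the five seconds before impact,
# regardless of who was in control at the exact moment of impact.
ATTRIBUTION_WINDOW_S = 5.0

def counts_as_fsd_collision(impact_t: float, fsd_active_intervals) -> bool:
    """fsd_active_intervals: list of (start, end) times FSD was engaged."""
    window_start = impact_t - ATTRIBUTION_WINDOW_S
    return any(start < impact_t and end > window_start
               for start, end in fsd_active_intervals)

# FSD aborted 1 s before an impact at t=100: still attributed to FSD.
print(counts_as_fsd_collision(100.0, [(0.0, 99.0)]))   # True
# FSD disengaged 10 s before impact: not attributed.
print(counts_as_fsd_collision(100.0, [(0.0, 90.0)]))   # False
```

Under this rule, the "aborted less than one second prior" cases would still be counted against FSD in the Vehicle Safety Report, which is exactly why the NHTSA findings seem to be in tension with the marketing page.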