My 2025 Mazda Miata has a CAN-connected Telematics Control Unit (TCU) that sends a bunch of data to Mazda on ignition off. Among this data are acceleration and velocity traces along with sampled GPS coordinates of where you've been. It is also used as a gateway for the Mazda app to start your car, query your vehicle's tire pressure, etc. It is claimed that you can opt out of this by calling Mazda and being persistent.
The CAN traffic is unencrypted. It was pretty easy to MITM this module with a cheap ARM Linux board and a CAN transceiver, which let me write a two-way filter that blocks the telemetry traffic, didn't raise any DTCs (that I observed), and can be turned on/off by the user. I preferred this approach to completely disconnecting the module (which is noticeable via errors at the diagnostic port) or trying to Faraday-cage or disable the antennae on the TCU so it can't send/receive remotely. I can also turn the module off, or completely remove it before I sell the car.
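Roughly, the filter amounts to a tiny bridge program sitting between the TCU and the rest of the bus. Here is a minimal sketch of the idea, assuming a board exposing two SocketCAN interfaces and the python-can library; the blocked arbitration IDs are placeholders, not the real Mazda ones:

```python
# Minimal two-way CAN filter bridging two SocketCAN interfaces
# (one wired toward the TCU, one toward the rest of the car).
# Requires python-can. BLOCKED_IDS are hypothetical placeholders.
import can

BLOCKED_IDS = {0x7E8, 0x7E9}   # hypothetical telemetry frame IDs
FILTERING_ENABLED = True       # could be toggled via a switch/GPIO

def bridge(src: can.Bus, dst: can.Bus) -> None:
    """Forward one pending frame from src to dst, dropping blocked IDs."""
    msg = src.recv(timeout=0.01)
    if msg is None:
        return
    if FILTERING_ENABLED and msg.arbitration_id in BLOCKED_IDS:
        return                 # silently drop the telemetry frame
    dst.send(msg)

def main() -> None:
    car_side = can.Bus(channel="can0", interface="socketcan")
    tcu_side = can.Bus(channel="can1", interface="socketcan")
    while True:
        bridge(car_side, tcu_side)  # car -> TCU: let diagnostics through
        bridge(tcu_side, car_side)  # TCU -> car: drop the filtered frames

if __name__ == "__main__":
    main()
```

(In practice you'd want a threaded reader per interface rather than this alternating poll, but the structure is the same.)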
I fear the next version of the Miata will move to encrypted CAN like most other cars have, and even with my expertise I won't be able to access the latest safety features of new cars without surrendering what little privacy I've been able to claw back.
I opted to try the "beg the manufacturer to turn off the panopticon" approach[1]. The first time I got 2 hours of elevator music before hanging up; the second time I went through 3 levels of customer support before they claimed it was done (3 days later). Might have to steal your approach to verify that, though...
Have you posted any writeups or other information about how you built this? I'm eyeing a Mazda as a next car (I've never owned a car newer than a 2014, and outside of that one, any newer than 2006, but family safety needs may lead to getting a newer car soon), and telemetry seems like one of the few downsides to an otherwise good carmaker. Would be very interested to learn more!
> (I've never owned a car newer than a 2014, and outside of that one, any newer than 2006, but family safety needs may lead to getting a newer car soon)
I don't know much about automotive safety, but has much actually changed since 2014 in terms of safety standards? I had thought that by the 2010s, basically everyone big had already figured out how to build a relatively safe car from a structural standpoint. Or are you only talking about electronic assistive features, like proximity sensors or lane assist?
> The CAN traffic is unencrypted. It was pretty easy to MITM this module with a cheap ARM Linux board
And you didn't poison their databases and statistics with fake data?? OMG, I'm thinking of buying one of these cars just for this opportunity! (No, I'm not.)
I suspect this data is made "anonymous" and sold to insurance companies and misc data brokers. If it's linked to my insurance company, I don't want to jack my rates. Further, I've thus far avoided a CFAA conviction and I'd like to keep it that way.
About as anonymous as the number of Miatas in your neighborhood that park in your driveway.
Then do the opposite: poison it with data that can improve your insurance rates.
They use the data mostly to charge you more; you can't really get the price all that lower.
I've had a clean driving record for 30 years and I'm still paying the junk rates most other people get
So, it's like credit scores, basically? Advertise a happy, meritocratic future for consumers, where the "better"/more responsible ones will reap massive rewards at the expense of the "worse" consumers, and then keep adjusting the brackets until the system is only used punitively - you don't really get anything from a high score nowadays, your only goal is clearing a certain low bar to avoid negative consequences.
Yeah exactly... it's stick or no stick; the carrot is the razor-thin margin, only used to keep you away from the competitors.
At this point car insurance has gotten so bad that it's becoming normal that you can save hundreds of dollars by switching providers every 6 months. These companies are probably making millions on people who are just too exhausted to switch constantly.
It would be an extremely totalitarian dynamic to be prosecuted under the CFAA for modifying a device you own, based on part of it having been (nonconsensually!) programmed by a third party to upload data to their own server. You own the device, so anything you do within that device is authorized. And the code that uploads the data is authorized to do so because it was put there by the same company that owns [controls] the servers themselves.
I do know that the CFAA essentially gets interpreted to mean whatever the corpos want it to mean - it's basically an anti-witch law - so it's best to steer clear. And this goes double with the current overtly pay-to-play regime. But just saying.
(Awesome description btw! I really wish I'd find a buying guide for many makes/models of cars detailing how well they can be unshackled from digital authoritarianism. A Miata is not the type of vehicle I am in the market for, which is unfortunate, for several reasons.)
If you can be prosecuted for guessing urls you can be prosecuted for sending garbage data in a way you know will be uploaded to a remote system.
The DoJ lost the case where they went after someone for guessing URLs.
link me
They lost it because they charged in the wrong jurisdiction.
Also, come on, you can't reasonably describe that case as being about "guessing urls". It's the associated chat logs that really make the case.
You think criminalizing guessing URLs is unreasonable.
What about guessing passwords? Should someone be prosecuted for just trying to bruteforce them until one works?
Guessing passwords is an attempt to access privileged information you have no right to access, and could not otherwise access without bypassing security measures.
Guessing a URL is an attempt to access (potentially) privileged information which was not secured or authenticated to begin with.
A password is a lock you have to break. An unlisted URL is a sticky note that says "private" on the front of a 40" screen. It's literally impossible for that information to stay private. Someone will see it eventually.
Guessing URLs is equivalent to ordering an item not on the menu in a restaurant. The request may or may not be granted.
This same logic is easily extended to SQL injection, or just about any other software vulnerability.
How do you propose the line should be drawn?
The question can be easily inverted for the other side: if any user accidentally damages a service's functionality in any way, can they always be criminally liable? Can this be used by companies with no security or thought put into them whatsoever, where they just sue anyone who sees their unsecured data? Where should the line be drawn?
To me, this is subjective, but the URL situation has a different feel than something like SQL injection. URLs are just references to certain resources - if it's left unsecured, the default assumption should be that any URL is public, can be seen by anyone, and can be manipulated in any way. The exception is websites that put keys and passwords into their URL parameters, but if we're talking solely about the address part, it seems "public" to me. On the other hand, something like wedging your way into an SQL database looks like an intrusion on something private, that wasn't meant to be seen. It's like picking up a $100 bill off the street vs. picking even the flimsiest, most symbolic of locks to get to a $100 bill you can see in a box.
>The question can be easily inverted for the other side: if any user accidentally damages a service's functionality in any way, can they always be criminally liable? Can this be used by companies with no security or thought put into them whatsoever, where they just sue anyone who sees their unsecured data? Where should the line be drawn?
I don't think the question can be inverted like that, not meaningfully anyway. The CFAA specifically requires one to act knowingly. Accidentally navigating to a page you're not supposed to access isn't criminal.
>To me, this is subjective, but the URL situation has a different feel than something like SQL injection.
I don't think the url below is necessarily that different.
> GET wordpress/wp-content/plugins/demo_vul/endpoint.php?user=-1+union+select+1,2,3,4,5,6,7,8,9,(SELECT+user_pass+FROM+wp_users+WHERE+ID=1)
> if it's left unsecured, the default assumption should be that any URL is public, can be seen by anyone, and can be manipulated in any way
It can be, but not lawfully so. It's not possible to accidentally commit a crime here; for example, in the IRC logs related to the AT&T case the "hackers" clearly understood that what they were doing wasn't something that AT&T would be happy with and that they would likely end up in court. They explicitly knew that what they were doing was exceeding authorized access.
> On the other hand, something like wedging your way into an SQL database looks like an intrusion on something private, that wasn't meant to be seen
I think you've reached the essence of it. Now, let's say you just accidentally find an open folder on a bank's website exposing deeply personal KYC information of their customers. Or even better, medical records in the case of a clinic.
Let's say those files are discoverable by guessing some URL in your browser, but not accessible to normal users just clicking around the website. If you start scraping the files, I think it's pretty obvious that you're intruding on something private that wasn't meant to be seen. Any reasonable person would realize that, right?
> GET wordpress/wp-content/plugins/demo_vul/endpoint.php?user=-1+union+select+1,2,3,4,5,6,7,8,9,(SELECT+user_pass+FROM+wp_users+WHERE+ID=1)
This is why I tried to make the clarification that I was referring to the address part of the URLs only, not the parametrized part. In my mind, something like /users?key=00726fca8123a710d78bb7781a11927e is quite different from /logins-and-passwords.txt. Although, parameters can also be baked into the URL body, so there's some vagueness to this.
> I think you've reached the essence of it. Now, let's say you just accidentally find an open folder on a bank's website exposing deeply personal KYC information of their customers. Or even better, medical records in the case of a clinic.
I guess if I try to distill my thoughts down, what I really mean is that there should be a minimum standard of care for private data. At some point, if being able to read restricted data is so frictionless, the fault should lie with the entity that has no regard for its information, rather than the person who found out about it. If a hospital leaves a box full of sensitive patient data in the director's office, and getting to it requires even a minimal amount of trespassing, the fault is on whoever did so. But if they leave that box tucked away in the corner of a parking lot, can you really fault some curious passer-by who looked around the corner, saw it, and picked it up? Of course, there's a lot of fuzziness between the two, but in my mind, stumbling into private data by finding an undocumented address doesn't clear the same bar as brute-forcing, or using a security vulnerability to gain access to something that's normally inaccessible.
>How do you propose the line should be drawn?
There is a line drawn for such things, a fuzzy line. See:
https://en.wikipedia.org/wiki/I_know_it_when_I_see_it
Same as that famous case, in which a Supreme Court justice was asked "what is and is not pornography" - of course he realized that if he defined "what is not", people would make all kinds of porn right on the boundary (see: Japanese pornography, where they do the filthiest imaginable things yet censor the sensitive bits, making it SFW in the eyes of their law). This judge avoided that.
Anyways, parallel to the fact that filthy pornography can be made a gorillion different ways, a "hack" may also be manifested a gorillion different ways. Itemizing such ways would be pointless. And in the same vein, strictly defining a black-and-white line of "this is legal, this is not" would let hackers freely exploit and cheese the legal aspect as hard as possible... businesses and data miners and all these people would also freely exploit it, at massive scale and with massive funding, since it would be officially legal. Thus it must be kept an ambiguous definition, as with pornography, as with many things.
Do you think the current line, where it's based on you "knowingly" exceeding your access or deliberately damaging the operation of a computer system, is excessively vague?
Cyber attacks are consensual; digital engineering is the only discipline where we have complete mastery of the medium. If you make a system (or authorize it), what someone does with it is your fault.
Probably somewhere short of incarcerating someone for what they typed in a browser's URL bar.
So if I deliberately exploit a bug on your website and download your customer database by typing things in my browser's URL bar, I should not be prosecuted?
No, and I would support a law explicitly making it illegal for prosecutors to prosecute you for this.
I'd be totally down for that, but I reckon it would be kind of shitty for the vast majority of the people who are not CTF enthusiasts.
Closer to trying the handle on random car doors.
It depends on stuff.
Sometimes a URL can have a password in it.
But when it's just a sequential-ish ID number, you have to accept that people will change the ID number. If you want security, do something else. No prosecuting.
How do I know which URLs of a website are legal to visit and which are illegal?
I can't say I've ever struggled to make this determination, but I don't make a habit of trying random ports, endpoints, car doors, or brute-force guessing URLs.
But it was very tempting when I saw that my national exam results were sent to us in an email as nationalexam.com/results/2024/my-roll-number. Why would I not try different values in the last part?
Try it once to see if it works, you'll probably be fine.
Find out that it works, and then proceed to look up various other people? Whether you're fine depends entirely on whether or not you genuinely believe that you're supposed to be accessing that stuff.
I think criminalising both is unreasonable; what you do with the URL you accessed or the password you guessed, however, is a different matter.
Passwords are different from URLs because URLs are basically public, whereas passwords aren't supposed to be. Furthermore, this is not 1995. Everyone who is in the industry providing IT services is supposed to know that basic security measures are necessary. The physical analogy would be, walking through an unlocked and unmarked door that faces the street in a busy city, versus picking a lock on that door and then walking through it.
> Everyone who is in the industry providing IT services is supposed to know that basic security measures are necessary.
And everyone who doesn't have wool for brains knows to not carry large rolls of cash around in a bad part of town, but we can still hold the mugger at fault.
Nevertheless, URLs are as public as door knobs. If someone is merely observing that a door is unlocked and they have not stolen anything, they have done nothing wrong. People being prosecuted over discovery and disclosure of horrible design flaws based on URLs should never be prosecuted. If they use the information to actually cause damage, we can be in agreement that they are responsible for the damage.
>People being prosecuted over discovery and disclosure of horrible design flaws based on URLs should never be prosecuted. If they use the information to actually cause damage, we can be in agreement that they are responsible for the damage.
That's literally the current state of things.
As a strictly logical assertion, I do not agree. Guessing URLs is crafting new types of interactions with a server. The built-in surveillance uploader is still only accessing the server in the way it has already been explicitly authorized. Trying to tie some nebulous TOS to a situation that the manufacturer has deliberately created reeks of the same type of website-TOS shenanigans courts have (actually!) struck down.
As a pragmatic matter, I do completely understand where you're coming from (my second paragraph). In a sense, if one can get to the point of being convicted they have been kind of fortunate - it means they didn't kill themselves under the crushing pressure of a team of federal persecutors whose day job is making your life miserable.
>(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;
If your goal is to deliberately "poison" their data as suggested before, it's kind of obvious that you are knowingly causing the transmission of information in an effort to intentionally cause damage to a protected computer without authorization to cause such damage.
>Trying to tie some nebulous TOS to a situation that the manufacturer has deliberately created reeks of the same type of website-TOS shenanigans courts have (actually!) struck down.
This has very little to do with the TOS though, unless the TOS specifically states that you are in fact allowed to deliberately damage their systems.
And no, causing damage to a computer does not refer to hackers turning computers into bombs. But rather specifically situations like this.
Any reasonable programmer (a peer) would say an unencrypted system that doesn't validate data is an unprotected system.
It's a legal term, has nothing to do with technical protections.
Practically any device connected to the internet is a "protected computer". The only case I can think of where the defendant prevailed on their argument that the computer in question was not a "protected computer" was US v Kane. In that case the court held that an offline Las Vegas video poker machine was not sufficiently connected to interstate commerce to qualify as a "protected computer".
A computer being supplied with false data which it then stores is not damaging the computer - hence there being a provision about fraud. But for this case it's not fraud either, as the person supplying the data is not obtaining anything of value from the false data.
>the term “damage” means any impairment to the integrity or availability of data, a program, a system, or information;
Deliberately inserting bad data to mess with their analytics does in fact fit that definition.
You are construing "integrity" to mean lining up with their overarching desires for the whole setup of interconnected systems regardless of who owns each one. By that measure, stopping the collection of data is impairing its availability on their system.
I would read that definition as applying only to their computer system - the one you aren't authorized to access. This means the integrity of data on their system has not been affected, even if the source of that data isn't what they'd hoped.
As I said, the law contemplates a different call out for fraud. This would not be needed if data integrity was meant to be construed the way you're claiming.
(For reference I do realize the law is quite unjust and I'll say we'd be better off if the entire law were straight up scrapped along with the DMCA anti-circumvention provisions)
Why do you think the CFAA is unjust?
What specific activities does it unjustly criminalize?
I had assumed you were coming from a similar position, and that your argument was more of a reductio ad absurdum.
But if you're not - the fact it's putting a chilling effect on this activity right here is a problem.
Another big problem is the complete inequity. It takes the digital equivalent of hopping over a fence and turns it into a serious federal felony with persecutors looking to make an example of the witch who can do scary things (from the perspective of suits).
Another glaring problem is that if the types of boundaries it creates are noble, then why does it leave individuals powerless to enforce such boundaries against corpos, being easily destroyed by clickwrap licenses? Any surveillance bugs/backdoors on a car I own are fundamentally unauthorized access, and yet I'm powerless to use this law to press the issue.
It might be interesting for an enterprising lawyer to try to flip this around. Suppose you send a letter to your car manufacturer saying that, as the owner of the car, you are prohibiting them from accessing the location of the car or performing unauthorized software updates and that any attempt to circumvent this will result in criminal prosecution for unauthorized access to your computer.
If you were to purposefully try to poison/damage their dataset and admitted as much, you probably wouldn't win without spending an unreasonable amount of money on lawyer fees. Without admitting anything and claiming ignorance, though, it would probably be pretty easy to get the case dismissed, provided you are able to spend at least some money on a lawyer.
Prosecuting someone for deliberately injecting garbage data into another person's system hardly seems totalitarian.
> You own the device, so anything you do within that device is authorized
You're very clearly describing a situation where at least some of the things you're doing aren't happening on your own device.
>I do know that the CFAA essentially gets interpreted to mean whatever the corpos want it to mean - it's basically an anti-witch law
FWIW this is simply not true. The essence of the CFAA is "do not deliberately do anything bad to computers that belong to other people".
The supreme court even recently tightened the definition of "unauthorized access" to ensure that you can't play silly games with terms of service and the CFAA. https://www.supremecourt.gov/opinions/20pdf/19-783_k53l.pdf
My device. I generate whatever the fuck data I want. If you log it, kiss my ass.
Sure, I have the same attitude when it comes to the government telling me that I'm not allowed to use drugs. Doesn't mean I'm in the clear from a legal point of view.
However, it's worth clarifying that the important detail isn't generating the data, but sending it. Particularly the clearly stated malicious intent of "poisoning" their data.
This seems like exactly what the lawmakers writing CFAA sought to criminalize, and is frankly much better justified than perhaps the bulk of things they tend to come up with.
>(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;
Doesn't seem exactly unfair to me, even if facing federal charges over silly vandalism is perhaps a bit much. Of course, you'd realistically be facing a fine.
Could you argue the computer was unprotected? No encryption is wild.
No, "protected computer" refers to computers protected by the CFAA.
>(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or
>(B) which is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States.
If you paid for a device, it doesn't mean there are no rules on how you can operate it. I'm sure there is an EULA you agreed to.
As an anecdote: while buying a new car I signed a statement that I'm not going to resell it to Russia.
And you think it is all fine and dandy?
No it does in fact seem totalitarian. I support repealing the CFAA.
I would absolutely love to hear the arguments behind this.
Oh man. Logging insane average speeds and ludicrous acceleration during rush hour. Deliciously tempting idea.
A data scientist will simply filter out impossible data when conducting an analysis
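The filter can be a one-liner, too. A sketch of the kind of sanity check they'd run, with made-up column names and thresholds:

```python
# Sketch of the trivial sanity filter an analyst would apply before
# using telematics data; column names and thresholds are made up.
import pandas as pd

def drop_impossible(df: pd.DataFrame) -> pd.DataFrame:
    plausible = (
        df["speed_kmh"].between(0, 300)     # no 500 km/h commutes
        & df["accel_ms2"].abs().lt(15)      # beyond ~1.5 g is suspect
    )
    return df[plausible]
```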
That’s why you make this as popular as possible
you give a lot of credit to an industry poisoned by the profit motive
Just make sure you are criticizing the industry on things that are real. Accurate data collection (but not necessarily publication to a broad audience) is something industry does. Decision makers want to understand reality; they don't necessarily want you to, though.
Draw the old twig and berries in GPS coordinates in hundreds of random cities, with velocity between points carefully kept to regular traffic speeds, every single day until they shut the modem off.
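For what it's worth, pacing the fake samples so the implied speed stays plausible is the easy part. A rough sketch, with all constants made up and a flat-earth approximation that ignores longitude shrinking with latitude:

```python
# Interpolate fake GPS samples along a path of waypoints so that the
# implied speed never exceeds ordinary traffic speed. Purely
# illustrative; small-area approximation (treats lat/lon as planar).
import math

EARTH_M_PER_DEG = 111_320      # rough meters per degree of latitude
MAX_SPEED_MS = 13.0            # ~47 km/h, unremarkable city traffic
SAMPLE_PERIOD_S = 1.0          # one fix per second

def pace_path(waypoints):
    """Yield (lat, lon) fixes traversing waypoints at <= MAX_SPEED_MS."""
    step_deg = MAX_SPEED_MS * SAMPLE_PERIOD_S / EARTH_M_PER_DEG
    for (lat1, lon1), (lat2, lon2) in zip(waypoints, waypoints[1:]):
        dist_deg = math.hypot(lat2 - lat1, lon2 - lon1)
        n = max(1, math.ceil(dist_deg / step_deg))
        for i in range(n):
            t = i / n
            yield (lat1 + t * (lat2 - lat1), lon1 + t * (lon2 - lon1))
    yield waypoints[-1]
```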
I see absolutely no reason not to completely unplug the cellular modem. The only thing that would stop me is an annoying error message or warning light in the gauge cluster. My car does not display any of these, but unplugging the modem results in losing the right speaker and microphone, unless a bypass harness is used.
The modem is usually in the sharkfin with the XM radio chipset and GPS. If you can unplug it at the sharkfin that's usually the best course of action. Some cars may bark at you, but mine just says it can't detect GPS if I attempt to use it (which I never use anyway).
Wouldn't it be better to connect resistive pigtails to the antenna connectors on the board? A little more work to get to, but less risk of damaging paint and weather seals, and it would do a better job preventing signal leakage. I'm no expert on such things, but will definitely be looking at something like that for the next car I buy.
For anyone else confused: DTCs are Diagnostic Trouble Codes (automotive context).
Can't you just turn off "Connected Services" in the menu?
I have been canceling that stupid warning message it presents when leaving it off, every day for several years now.
> I fear the next version of the Miata will move to encrypted CAN like most other cars have
As I understand it, they're required to do that now if they want to sell in the EU. They emphatically do not want anyone tinkering with their cars.
They don’t want people modifying ADAS systems mostly, and the main requirement is SecOC, which is cryptographic authentication but the message is still plaintext. Basically they don’t want third party modifications able to randomly send the “steer left” message to the steering rack, for example.
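For the curious, SecOC-style authentication roughly amounts to appending a truncated MAC computed over the payload plus a freshness value, so the data stays readable but unforgeable. A conceptual sketch using AES-CMAC; the counter width and 4-byte truncation here are illustrative, not the exact AUTOSAR profile:

```python
# Conceptual sketch of SecOC-style frame authentication: the payload
# stays plaintext, a truncated AES-CMAC over (payload || freshness
# counter) is appended. Field widths are illustrative, not AUTOSAR's.
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers.algorithms import AES

TAG_LEN = 4  # truncated MAC length in bytes (illustrative)

def protect(key: bytes, payload: bytes, freshness: int) -> bytes:
    c = CMAC(AES(key))
    c.update(payload + freshness.to_bytes(4, "big"))
    return payload + c.finalize()[:TAG_LEN]   # payload still readable

def verify(key: bytes, frame: bytes, freshness: int) -> bool:
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    c = CMAC(AES(key))
    c.update(payload + freshness.to_bytes(4, "big"))
    # real code would use a constant-time comparison here
    return c.finalize()[:TAG_LEN] == tag
```

A receiver without the key can read every field; it just can't inject a frame the steering rack will accept, which is exactly the "steer left" scenario they're guarding against.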
The ADAS systems mandated in Europe are insanely intrusive. I had a few rental cars in Europe this summer and wanted to send them off a cliff. (and I'm not an auto tech luddite, I've had modern cars in the US with autopilot type systems, lane keep, blind spot warning, rear traffic assist radar, forward collision warning, etc. IMO rear traffic assist/FCW/AEB tend to work really well, autopilot pretty well, and lane keep and blind spot silly gimmicks at best).
Bring on the full self-driving cars, or let me drive my own car. This human-in-the-loop middle state is maddening. Either we're supervising our "self-driving, but not really" cars, where the car does all of the work but we still have to be 100% aware and ready to take over the instant anything gets hard (which we know from studies is something humans are TERRIBLE at)... or we're actively _driving_ the car, but not really: the steering feel goes in and out as the car subtly corrects for you, so you can't trust your own human senses. Typically 40% brake pedal pressure gets you 40% brake pressure, unless you lift off the throttle and hop to the brakes quickly, in which case it decides that when you apply 40% pedal pressure you actually want 80% brake pressure. Again, you can't trust your human senses. The same input gets different outputs depending on the foggy decisions of some computer. Add to that the beeping and ping-ponging and flashing lights in the cluster.
It's like clippy all over again. They've decided that, if one warning is good and helpful, constant alerts are MORE good and MORE helpful. Not a thought has been given to alert fatigue or the consequences of this mixed human-in-the-loop mode.
So much this. We had a rental BYD in Greece this summer, and while it was actually a great car in general, the mandated “assistance” was awful.
It constantly got the speed limits wrong, constantly tried to tug me out of the correct lane, and was generally awful. It could be disabled but was re-enabled on each restart of the ignition because it’s mandated by EU regulation.
I appreciate a Greek island perimeter road may be a worst case scenario, but it did the same with roadworks on the freeway and many other situations.
Actively dangerous in my experience…
“Lane keep” yanks the wheel dangerously because it incorrectly detects the lane, or because you don’t indicate to pass a pothole on an empty road (indicating there would itself be confusing to other road users)
Forward collision warning has misfired on 2 occasions on me in the last 3 years
The main issue is that so many cars have broken “auto dipping” headlights which don’t dip, or matrix headlights which don’t pick out other cars.
This automation shit should stop, but it won’t.
Parking beepers are reasonable; they simply come on occasionally and don't actually interfere when they go wrong. The rest of it just makes things far worse at scale.
> Forward collision warning has misfired on 2 occasions on me in the last 3 years
My Lexus is afraid of a bush behind my garage in the alley. It's on a neighbors property and not really overgrown, but my car refuses to get within about 5 ft of it. Makes backing out a nightmare. I haven't figured out a way to disable it, and have considered just selling this 2025 NX.
> I haven't figured out a way to disable it, and have considered just selling this 2025 NX.
I found this for the TX, might work for the NX as well?
Try disabling Parking Support Brake under vehicle settings > drive assist.
Parking beepers -- the ones that do not go off immediately when you start a parked car.
Yes, and to do that, CAN must be encrypted. The idea isn't just to secure it from hackers. The idea is to secure it from owners.
> SecOC, which is cryptographic authentication but the message is still plaintext
Oh, OK, that's better. I can see what my car is doing, I just can't do anything about it.
I integrated SecOC on some ECUs at work. I hate myself for it. I frigging hate what they're doing with this. I think it's going to make cars less repairable, less modifiable. It's a horrible, horrible, stupid initiative in the name of "cybersecurity".
I understand notionally where they were going, but it all sort of went off the deep end somewhere along the line. A concern that someone buying some "mileage blocker" or whatever other shady device off of AliExpress might be vulnerable to the device steering their car into a wall is actually quite a valid one, but of course the solution is some overcomplicated AUTOSAR nightmare that doesn't solve for key provisioning in a way to make modules replaceable.
I have less trust in their good intentions. I think OEMs want to lock down their platforms in order to squeeze out extra revenue streams. And I tend to be quite charitable with my interpretations.
As an aside, I checked out your GitHub. Cool projects; the VAG flashing tool looks super useful, might actually give it a spin in side development projects.