I can't read the full article, but I would like to remind everyone that this is not the first time the EU has done something like this.
In 2006 the EU passed the Data Retention Directive: https://en.wikipedia.org/wiki/Data_Retention_Directive
>According to the Data Retention Directive, EU member states had to store information on all citizens' telecommunications data (phone and internet connections) for a minimum of six months and at most twenty-four months, to be delivered on demand to police authorities.
This was actually law for 8 years, until the Court of Justice of the EU found that it violated fundamental rights and declared it invalid.
It's still law in Denmark, despite having been ruled illegal at the EU level and in national courts as well.
It was most recently used to convict the murderer of Emilie Meng (https://en.wikipedia.org/wiki/Murder_of_Emilie_Meng). At the time, he had kidnapped a 13-year-old girl (IIRC), whom he had sexually assaulted for 24+ hours, and various dashcam recordings were used to piece together what had happened. He was also convicted of the attempted kidnapping of a 15-year-old girl from a school.
They found the 13-year-old in his home, so there's not much doubt about that, but the other two cases were partly proven with phone metadata logging, which showed he had been in the area at the time.
In light of that, it's hard to say it's 100% a "bad idea". It's a question of balance, I guess, and the mass surveillance proposed in ChatControl is way out of balance. Not only does it scan in the background, it also scans for things that are unknown to you, and it alerts authorities without alerting you. That's the perfect tool for fascist regimes to get rid of political dissidents.
It's always a tradeoff. Nothing is ever going to have zero benefit; the problem is that these laws use the marginal benefit as an excuse to institute something that actually has massive downsides.
> It's always a tradeoff. Nothing is ever going to have zero benefit
Realizing that is the first step toward having any kind of productive discourse, let alone a chance of influencing the outcome. It's also the step that most commenters in discussions on this and related topics here seem unable to take.
But then, the follow-up step is:
> the problem is that these laws use the marginal benefit as an excuse to institute something that actually has massive downsides
Are the benefits really marginal, though, and are the downsides that big? Or does it only seem that way from our armchairs, as we debate computer philosophy and look at the world as a diagram of interacting systems instead of, you know, the real world?
I'm not saying these particular regulatory ideas are good - I just have a problem with the assumption (not even implicit, it's often outright spelled out here) that it's some evil elite trying to strip us of our privacy and freedom, pushing the same laws over and over in the hope of catching our vigilant protectors off guard.
Truth is, there are plenty of people who push for these things because they actually do think of the children and honestly believe these are good trade-offs, and they may even be more right than we are. They definitely sit closer to the real world - real people, real problems, real policing - than we do. They may be fatally misguided, too, but we won't achieve anything unless we try to see their perspective and honestly address the issues they're concerned with.
The removal of outdated privacy offers great benefits. Why not invest in and develop technology to scan people's brains, and leverage the supreme protective advantages that its complete elimination offers the nation state?
Mandate that every resident of the EU wear a certified, union-approved, scanning head-band that monitors the resident's brain for violent, racist, subversive or even "offensive" thoughts using state-of-the-art AI and supporting algorithms.
Authorities are notified immediately and are granted auto-approved warrants. Judges get notifications on auto-sentences that include mandatory re-education to heal such delinquents. Obviously, the system will include the vital and necessary exemptions - politicians and friends of the party, top campaign-donors, favored minorities and cartels, etc.
Every Resident will be made Safe and Happy! This will lead to the establishment of a utopian state - the ultimate paradise on Earth! "Privacy", today, is a nasty detriment holding back the Progress of Civilization.
Excellent! People need to think more deeply about what various laws and technologies are leading us to.
Eh, I don't know, I feel like "is complete population surveillance a net good?" has been answered a million times, I'm not sure we need to go into it from first principles.
"Complete population surveillance" is an ill-defined category; depending on how you slice it, it's something very undesirable, or a status quo we've been living in for the past couple decades.
Complete population surveillance is the system we evolved to thrive in: there is no privacy in hunter-gatherer societies, and in medieval city societies the average person had no privacy. It was only nobles who had privacy, and they were generally up to no good.
I can't see anything in the intersection of "desirable", "status quo" and "complete surveillance", can you think of some examples?
Cellular telephony. Electronic banking.
Both have been "status quo" for decades, and subject pretty much everyone in the Western world to significant, continuous surveillance. We can discuss whether that's desirable, but it's been like this for a while, and people very much like the benefits both provide.
I'm fairly sure I definitely don't want to be surveilled by my phone, yep.
Too late.
Your phone (GSM, anyway) continuously reports the strength of every other cell tower it can "see" back to the tower it's connected to. The cell network, not the handset, decides which tower is the better one for your handset to hand over to, which is why this information is sent in the first place.
That information - the measured tower strengths, combined with knowledge of exactly where each tower is placed - can be used to triangulate your position down to a few meters in crowded areas with many cell towers. It's also how your phone establishes its position without GPS.
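To make the triangulation point concrete, here's a minimal sketch of the geometry, with made-up tower coordinates and distance estimates (the distances stand in for whatever the network derives from signal strength and timing, which is the hard part). It illustrates the principle, not any operator's actual implementation.

```python
# Minimal sketch: estimate a handset position from known tower positions and
# estimated handset-to-tower distances. All numbers here are invented.
import numpy as np

towers = np.array([[0.0, 0.0], [1200.0, 0.0], [600.0, 900.0]])  # tower x/y in meters
distances = np.array([650.0, 780.0, 520.0])                     # estimated distances in meters

def trilaterate(towers, distances, iterations=50):
    """Least-squares position estimate via Gauss-Newton refinement."""
    pos = towers.mean(axis=0)                     # start at the centroid of the towers
    for _ in range(iterations):
        diffs = pos - towers                      # vectors from each tower to the current guess
        ranges = np.linalg.norm(diffs, axis=1)    # predicted distances at the current guess
        residuals = ranges - distances            # mismatch against the measurements
        jacobian = diffs / ranges[:, None]        # derivative of each range w.r.t. position
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        pos = pos - step
    return pos

print(trilaterate(towers, distances))             # approximate handset position in meters
```

In practice the accuracy depends on tower density and on how well signal strength maps to distance, which is why it can be a few meters in a city and far coarser in the countryside.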
Besides that, you probably also have a handful or more apps that constantly track your location to within 100m.
That's not the point of this thread. The original point is whether there's any desirable mass surveillance. I think we've pretty much shown there isn't.
Yet it predates smartphones, and is a fundamental aspect of how cellular networks operate. Surveillance of course got more thorough, detailed and overarching over time, still largely for engineering reasons - the network needs to know precisely where each handset is to aim the radio beam at it.
I doubt much will change.
Your location is already known to your mobile operator, to your phone OS manufacturer, to various social media services, and more, including the government/law enforcement on request (maybe, or they have permanent access, who knows).
Any time you buy something with a debit/credit card, the details of that transaction - including where you bought it - are known to your bank, your card provider and the tax authorities, and, on request, to law enforcement.
Money that goes into your bank account is likewise known to your bank (obviously), to the tax authorities, and, on request, to law enforcement.
Your ISP knows who you talk to and can easily log metadata about which sites you visit, even if you use secure DNS. In most countries, authorities can request (metadata) logging from your ISP, and you'll never even notice.
During COVID, health authorities started analyzing sewage to estimate how much the virus had spread in various communities, and in some places they got down to street-level accuracy. Obviously that gets a lot more diffuse in Manhattan than in some rural town of 400 people, but you pretty much can't fart without someone knowing it.
We are already under constant surveillance, whether we like it or not. I don't mind as much as long as it's used retroactively, but the ChatControl proposal would be proactive instead. It would scan your texts and report if it found something "suspicious", with the caveat that you as a user don't know what counts as suspicious today (or tomorrow). The list isn't public, and you wouldn't get notified that someone had called an adult - not until someone comes knocking on your door.
Their plan is/was to use AI, and we all know that ChatGPT never gets confused about anything, so that sounds like a great and ultra-consistent plan. Most things require context. I might be angry because some kids gave me a hard time and write "fuck all children" to someone; the anger isn't evident in the message, only the literal words, which I agree might (deliberately) be interpreted as something else. That would then (probably) result in a notification for human review, a task that would fall to the operator of the service, so now Meta, Google or whoever has a legal justification for reading my messages looking for context - and I can't see any way that could go wrong. The other option was for law enforcement to read the messages, and while they're probably a bit more trustworthy in terms of privacy, I doubt we want to staff up our law enforcement offices by a factor of 10 to read people's messages.
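As a toy illustration of the context problem - not of whatever classifier the proposal actually envisions - here's a hypothetical keyword-based scanner (term list and logic entirely invented) that happily flags the angry-but-harmless message above while a deliberately rephrased message sails through:

```python
# Hypothetical, deliberately naive scanner to illustrate context-free flagging.
# The term list and the flagging logic are invented for this example only.
SUSPICIOUS_TERMS = {"children", "meet me alone", "don't tell your parents"}

def flag_for_review(message: str) -> bool:
    """Flag a message if it contains any 'suspicious' term, context be damned."""
    text = message.lower()
    return any(term in text for term in SUSPICIOUS_TERMS)

# An angry venting message gets flagged and queued for human review...
print(flag_for_review("fuck all children, they trampled my garden again"))  # True
# ...while a rephrased message containing none of the terms slips through.
print(flag_for_review("our secret, ok? see you at the usual spot"))         # False
```

The point isn't that the real system would be this crude, but that any scanner without context produces exactly this kind of false positive at population scale, and every one of them lands in front of a human reviewer.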
The list could also be updated behind your back, so for totalitarian wannabe regimes, it could be used to pinpoint exactly who is organizing all those darned protests.
I'm sorry but this whole thing stinks
- The data was collected in 2016, and was used in 2023 - a retention period of 7 years, way longer than the specified maximum of 2
- I'd argue that the basis of lawmaking is weighing the advantages versus the costs - supplying partial evidence in a case once a decade does not meet the requirements for introducing mass surveillance with infinite retention
The police work was sloppy. The facts, as they stand, are:
- The guy was on a list of 1400 or so suspects, and was convicted of abducting a 13yo in 2023 - a different crime. It bears mentioning that the town had a population of 5k and the municipality 63k; halving that just to count the men doesn't give you a short list
- A white car was seen at the scene of the 2016 crime, suspected to be a Hyundai i30, but with a degree of uncertainty; just to illustrate how uncertain they were, the article mentions the police confiscated a white van
- The guy owned and sold his 2016 Hyundai around the time
- Thanks to Big Brother dragnet perma-retention surveillance (mobile cell info), it was established that the guy was in the area (a train station!) of the 2016 crime at the time (which is a large window, considering the exact time of the first girl's disappearance is not well known)
From this it's unclear to me whether the same guy was the perp in the 2016 and 2023 cases. If he was, I'd argue the dragnet-collected evidence is only circumstantial. I feel it's possible the police wanted to pin the crime on him, as they were expected to catch the 2016 killer and he was obviously a pedo.
Even if he did it, I'd say the digital evidence was neither necessary nor that important in convicting him.
"- The data was collected in 2016, and was used in 2023 - a retention period of 7 years, way longer than the specified maximum of 2"
Normally, when there's an ongoing police investigation, the police can either request and retain a copy of the data, or request that the holder of the data retain it until "further notice". I'm assuming that's what was going on here.
"If he was, I'd argue the dragnet-collected evidence is only circumstantial."
The phone logging was not conclusive evidence; it was only used to establish that he had been in the area. They found various artifacts in his house, like a roll of duct tape with the dead girl's DNA on it - the same type of duct tape used to bind the girl - which he explained he had found while walking around the lake. They also found various other items with the girl's DNA on them.
They used the logging data to establish his whereabouts for the night in question and compared it to his statement of where he'd been. They also used various financial transactions, like buying a cup of coffee with his credit card at a gas station, etc.
In Denmark, DNA cannot be used as sole evidence, only as supporting evidence, and the same goes for phone logging. But combined - if your location data says you've been there and your DNA is found at the crime scene - even though neither may be enough on its own to get you convicted, each makes the other evidence much more believable.
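A back-of-the-envelope way to see why: treat each piece of evidence as a likelihood ratio and multiply them together. The numbers below are invented for illustration, not taken from the actual case:

```python
# Toy Bayesian illustration of why individually weak evidence reinforces.
# All numbers are invented; they are not from the Emilie Meng case.
def update(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * likelihood ratio (assuming independent evidence)."""
    return prior_odds * likelihood_ratio

prior_prob = 1 / 1400                 # e.g. one name on a 1400-person suspect list
odds = prior_prob / (1 - prior_prob)  # convert probability to odds

odds = update(odds, 50)               # location data places the suspect near the scene
odds = update(odds, 1000)             # DNA found on items linked to the victim

posterior = odds / (1 + odds)
print(f"posterior probability ~= {posterior:.3f}")  # roughly 0.97 with these made-up numbers
```

With made-up numbers like these, two individually inconclusive pieces of evidence push the odds from "one name among 1400" to near certainty, which is roughly the intuition behind "supporting evidence".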
Um, the data retention directive didn't apply to dashcams, only network metadata... and that Wikipedia article isn't curing my confusion.
Probably because dash cams have a questionable legal status in Denmark.
There's a law that prohibits all video monitoring of public spaces, and a register where you must register your video cameras if you're a business owner. Video surveillance in Denmark has a maximum legal retention of 28 days, unless there's an ongoing investigation.
Considering that dash cams mostly monitor "public spaces" and are moving around, their legality has been questioned multiple times. They are, however, becoming more and more common, so I'm guessing they will eventually be allowed with a relatively short retention, like 1-2 days - enough to get footage off them in case of a crash.
In 2006, when it was being discussed, FFII's analysis already concluded that it violated fundamental rights. It took 10 years to reach the CJEU.
And, as the cherry on the cake, countries like France decided to ignore the CJEU ruling by playing the "national security" joker card.
> can't read the full article
Reader mode seems to work.