I wonder how many people actually believe this is for the kids and not for population control.
The bad people can use encrypted services just like they use guns, even if they are illegal. But then there's a spike in arrests for posting on social media if people express opinions or content contrary to preferred narratives.
Fortunately, there are people exposing NGO money flows and whom they favor. Fortunately, the US keeps free speech sacred.
I'm immensely grateful to the founding fathers and their ability to come up with something so helpful so many years down the line.
Look, chat control is obviously a mental proposal. But the sitting president bans journalists who do not write what he says they have to. How is that keeping free speech sacred?
To me, free speech in the US is sacred because there is a set of rules that shields citizens from being imprisoned simply for expressing their thoughts—otherwise, a huge swath of Bluesky users would already be behind bars...
Can you speak freely in Russia, China, the UK, and beyond like you do in the US? How many people are incarcerated in each of those countries just for saying stuff vs. in the US?
As for the president, you can't stop him from pulling every lever to push his agenda, but at least the system allows for accountability.
In the meantime, you can tell as many people as possible what's going on, so that when voters vote again, they can be made aware of contrary information.
It's not a perfect system, but I think it's way better than exposing people to the risk of a crooked elite infiltrating the propaganda and censorship layer and making it impossible for contrary ideas to be shared.
> I wonder how many people actually believe this is for the kids and not for population control.
For most of my life, news orgs have been treating national IC/LEO as if they have a history of truth-telling. Whenever a press conference comes up, journalists/editors reliably forget that they've never been told a meaningful truth in one of these.
If the people whose job it is to highlight the lies of the powerful usually don't, what hope is there for the proletariat?
For months, Mullvad has been papering San Francisco with smart and cheeky ads, like "Mass surveillance is made by machine men with machine hearts".
I admire that they're saying this, and wish other VPN companies would do similar public relations to highlight the risks of ad targeting.
Tbh, I'm a customer. Before Mullvad, I used PIA.
https://mullvad.net/en/blog/advertising-that-targets-everyon...
I've used them for years. They're likely the most private VPN, but I still can't recommend them. Their IPs are constantly blocked, and with few servers, switching doesn't help (this was already an issue long before I was a customer). Plus, their macOS app has tons of issues.
> They're likely the most private VPN, but I still can't recommend them. Their IPs are constantly blocked
Every VPN provider's IPs are blocked now. IP data providers finally got serious about identifying them, 5 or 10 years back.
If you want to look for alternatives: https://kumu.io/embed/9ced55e897e74fd807be51990b26b415#vpn-c...
Huh, I've been pretty happy with them. I live in a country where IPs are often blocked entirely by US and European websites (presumably due to hacking issues and lack of government action). So my major use is being able to access websites.
It's true that servers are flagged, but every VPN has that issue. Usually switching to a new server resolves it, and I've noticed some servers aren't used much, are very fast, and aren't flagged by many websites.
What I like about Mullvad is not only the commitment to privacy but also the VPN speeds. I get 300-500MB/s pretty regularly. Some servers get congested during peak times, but by switching to another I'll usually find a fast one in a desirable country very quickly.
I don't think there is a way around the fact that governments will always want at least "lawful intercept" (with warrants) capabilities.
It's a noble fight, trying to get E2EE to be compatible with the law. But I think some perspective for privacy advocates is due. People don't want freedom and privacy at the cost of their own security. We shouldn't have to choose, but if nothing else, the government's single most important role is not safeguarding freedoms but ensuring the safety of its people.
No government, no matter how free or wealthy, can abdicate its role in securing its people. There must be a solution for fighting harmful (not necessarily illegal) content that can be incorporated into secure messaging solutions. I'm not arguing for backdoors in this post, but even things like Apple's CSAM-scanning approach are met with fierce resistance from the privacy advocate community.
The stance of "No, we can't have any solutions, leave E2EE alone" is not a practical one.
Speaking purely as a citizen, if you're telling me "you will lose civil liberties and democracy if you let governments reduce cp content," my response would be "what's the holdup?" Even if governments are just using that as an excuse. As someone slightly familiar with the topic, of course I wouldn't want to trade my liberties and freedoms, but is anyone working on a solution? Are there working groups? Why did Apple get so much resistance, but there are no open-source solutions?
There are solutions for anonymous payments using homomorphic encryption. Things like Zcash and Monero exist. But you're telling me privacy preserving solutions to combat illicit content are impossible? My problem is with the impossible part. Are there researchers working to make this happen using differential privacy or some other solution? How can I help? Let's talk about solutions.
If your position is that governments (who represent us, voters) should accept the status quo and just let their people suffer injustice, I don't think I can support that.
Mullvad is also in for a rude awakening. If criminals use Tor or VPNs, those will also face a ban. We need to give governments solutions that let them do what they claim they want to do (protect the public from victimization) while preserving privacy, to avoid a very real dystopia.
Freedoms and liberties must not come at the cost of injustice. And as I argued elsewhere on HN, ignoring ongoing injustice will in the end result in even fewer freedoms and liberties. If there were a pluralistic referendum in the EU over chat control, I would be surprised if the result weren't a law far worse than chat control.
EDIT: Here is one idea I had: sign images/video with hardware-secured chips (camera sensor or GPU?) that are traceable to the device. When images are further processed/edited, they will be subject to differential-privacy scanning. This can also combat deepfakes, if image authenticity can be proven by the device that took the image.
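For what it's worth, the signing idea in that EDIT can be sketched in a few lines. This is only a toy under loud assumptions: a real design would use an asymmetric key fused into the sensor's secure element, whereas here an HMAC over a made-up `DEVICE_SECRET` stands in for hardware attestation, and `DEVICE_ID` is a hypothetical name, not any real API.

```python
import hashlib
import hmac
import os

# Stand-ins for a per-device hardware key and identifier; in a real
# system these would live in a secure element, not in process memory.
DEVICE_SECRET = os.urandom(32)
DEVICE_ID = "device-1234"

def sign_capture(image_bytes: bytes) -> dict:
    """Produce a provenance record at capture time."""
    return {
        "device_id": DEVICE_ID,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        # HMAC tag binds the exact pixel bytes to this device's secret.
        "signature": hmac.new(DEVICE_SECRET, image_bytes, hashlib.sha256).hexdigest(),
    }

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check the bytes are unmodified since capture on that device."""
    expected = hmac.new(DEVICE_SECRET, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"...raw sensor bytes..."
record = sign_capture(photo)
```

An edited copy (`photo + b"crop"`) would no longer verify, which is the hook for the "further processing triggers scanning" step.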
> But you're telling me privacy preserving solutions to combat illicit content are impossible?
Yes. You cannot have a system that positively associates illicit content with an owner while preserving privacy.
Thanks for the reply, but you are exactly the audience my post is for. Because you say that, we will lose what few vestiges of privacy and freedoms we have left.
Apple tried and made good progress. They had bugs that could have been resolved, but your insistence that it couldn't be done caused too much of an uproar.
You can have a system that flags illicit content at some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when a human deems it actually illicit, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
These are just some of the things that are possible that I came up with in the last minute of typing this post. Better and more well thought out solutions can be developed if taken seriously and funded well.
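A toy sketch of that flag-then-review flow, under loud assumptions: all names are hypothetical, SHA-256 stands in for a perceptual hash (a PhotoDNA-style matcher, so near-duplicates also hit), and a real system would need far more care on every step.

```python
import hashlib
import uuid

# Stand-in for a published, auditable database of human-confirmed material.
KNOWN_HASHES = {hashlib.sha256(b"confirmed-illicit-sample").hexdigest()}

# Items reach reviewers with no device or owner identity attached.
review_queue = []

def scan_on_device(content: bytes) -> None:
    """On-device check: nothing leaves the device unless there is a hit,
    and a hit carries only a random token plus the matched hash."""
    h = hashlib.sha256(content).hexdigest()
    if h in KNOWN_HASHES:
        review_queue.append({"token": uuid.uuid4().hex, "hash": h})

def human_review(item: dict, confirmed: bool):
    """Only a human-confirmed hit is escalated (e.g. toward a warrant);
    false hits are discarded without ever identifying anyone."""
    return item["hash"] if confirmed else None

scan_on_device(b"family vacation photo")     # no match: nothing transmitted
scan_on_device(b"confirmed-illicit-sample")  # match: queued anonymously
```

The design choice being illustrated is ordering: identification happens only after, and conditional on, human confirmation.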
However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit everything the privacy community has been advocating. Even simple heuristics that don't use ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because, as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
> Because you say that, we will lose what few vestiges of privacy and freedoms we have left.
I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.
> You can have a system that flags illicit content at some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when a human deems it actually illicit, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
What about this is privacy preserving?
> However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit everything the privacy community has been advocating. Even simple heuristics that don't use ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because, as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
It's not "materially false." Bringing a human into the picture doesn't do anything to preserve privacy. If, as in your example, a parent's family photos with their children are flagged by the system, you have already violated the person's privacy without just cause, regardless of whether the people reviewing it can identify the person or not.
You cannot have a system that is scanning everyone's stuff indiscriminately and have it not be a violation of privacy. There is a reason why there is a process where law enforcement must get permission from the courts to search and/or surveil suspects - it is supposed to be a protection against abuse.
> I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.
You have an ideological approach instead of a practical one. It isn't governments that are demanding it; I am demanding it of our government, I and the majority. I don't want freedoms paid for by such intolerable and abhorrent levels of ongoing injustice. It isn't a false sense of security; for the victims it is very real. Most criminals are not sophisticated. Crime prevention is always about making crime difficult to commit, not waving a magic wand and making it go away. I'm not saying let's give up freedoms, but if your stance is that there is no other way, then freedoms will have to go away. My stance is that the technology is there; it's just slippery-slope thinking that's preventing it from getting implemented.
> What about this is privacy preserving?
Persons aren't identified before a human reviews and confirms that the material is illicit.
You have to identify yourself to the government to drive, and your car carries a license plate connected to you at all times. You have to ID yourself in most countries to get a mobile phone SIM card or open a bank account. Dragnet surveillance is what I agree is unacceptable except as a last resort; it isn't dragnet if algorithms flag content first, and it isn't privacy-invading if false hits are never associated with individuals.
> you have already violated the person's privacy without just cause, regardless of whether the people reviewing it can identify the person or not.
There is just cause: the material was flagged as illicit. In legal terms, it is called probable cause. If a cop hears what sounds like a gunshot in your home, he doesn't need a warrant; he can break in immediately and investigate because it counts as an exigent circumstance. The algorithms flagging content are the gunshots in this case. You could be naked in your house, and it would be a violation of privacy, but one acceptable by law. If you're saying that after review they should get a warrant from a judge, I'm all for it.
It is materially false, because the scanning can be done without sending a single byte off the device. The privacy intrusion happens not at the time of scanning, but at the time of verification. To continue my example: the cop could have heard you playing with firecrackers; you didn't do anything wrong, but your door is now broken, and you were probably naked too, which means your privacy was violated. Society already accepts this.
The false-positive rates for cops seeing/hearing things, and for eyewitness testimony, are very high, in case you're not aware. By comparison, Apple's CSAM scanner's was very low.
> There is a reason why there is a process where law enforcement must get permission from the courts to search and/or surveil suspects
As stated above, so long as the scanning happens strictly on-device, you're not being surveilled. When there is a hit, humans can review the probable cause, and a judge can issue a warrant for your arrest or a search warrant to access your device.
Another solution might be to scan only at transmission time, not at capture and storage (still not good enough, but this is the sort of conversation we need, not a plugging of ears).
Let's take a step back. Another solution might be to restrict all content publishing on the internet to people who positively identify themselves.
Except that it is not materially false. Only in a perfect society will your “system that flags illicit content” not become a system that flags whatever some authoritarian regime considers threatening, and subverting public logging/auditing is similarly trivial for a motivated authoritarian. All your hypothetical solutions rely on humans, who are notoriously susceptible to being influenced by either money or being beaten with pipes, and on corporations, who are notoriously susceptible to being influenced by things that influence their stock price.
Pleyel’s corollary to Murphy’s law is that all compromises to individuals’ rights made for the sake of security will eventually be used to further deprive them of those rights.
(I especially liked the line “You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.”)
This is already the case with other means of communication. The internet isn't that special. If you don't trust your government, do something else about it.
We rely on eyewitness testimony and human juries all the time. The Innocence Project has a long list of people who spent decades in prison because of this.
The solution to authoritarian regimes is to not have one, not tolerate cp on the internet.