It's terrifying that basically nothing has changed since the Snowden leaks. And most people simply don't care so governments can keep scooping up our data, sifting through it for whatever they may deem interesting.
The push for HTTPS everywhere came directly from the Snowden revelations, and that is considered a good thing.
Now people are focused on encrypting metadata, so things like DNSSEC took off.
There was a recent discussion about how state actors are using push notifications to spy on users. Maybe that is the next area of improvement.
> so things like DNSSEC took off.
DNSSEC doesn't encrypt anything - it's all plaintext on the wire. There are some DNS extensions that encrypt the query/response (DNS over HTTPS does this), but DNSSEC is not that.
DNSSEC is simply a way to verify that the response you get has not been meddled with in transit - it's the domain owner signing the DNS records so that you can verify that your DNS responses aren't being modified by a malicious entity (that may very well be your ISP).
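For a concrete picture, here's a minimal sketch (assuming the third-party dnspython package is installed): the query and the answer below still travel in plaintext, but setting the DNSSEC OK bit asks for the RRSIG signatures a validating resolver would check.

    import dns.message
    import dns.query
    import dns.rdatatype

    # Ask a public resolver for an A record with the DNSSEC OK (DO) bit set.
    query = dns.message.make_query("cloudflare.com", dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "1.1.1.1", timeout=5)

    for rrset in response.answer:
        # A signed zone returns the A rrset plus an RRSIG rrset over it;
        # an unsigned zone returns only the A rrset. Either way, all of
        # this is readable by anyone watching the wire.
        print(rrset)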
Yes, they're probably thinking of DoH, which is much, much more widely deployed than DNSSEC.
How are you calculating that?
The number of users of recursive resolvers that support DNSSEC vs. users of browsers that use DoH? The number of companies with infrastructure supporting DoH compared to the number with infrastructure supporting DNSSEC? Daily users?
The right figure of merit should be "lookups protected by DoH/DNSSEC" (stipulating that DoH and DNSSEC have different definitions of "protected" and just assuming arguendo they're the same). I don't think it'd even be close; I would assume DoH exceeds DNSSEC by several orders of magnitude.
Note that this isn't lookups that happen to run through a resolver with DNSSEC enabled; to count, you'd be talking about such a lookup to a zone that had DNSSEC signatures. You can see the advantage DoH has here, since it works with all zones.
That would be the volume of traffic being sent over DoH compared to the volume of traffic from every recursive and authoritative DNS server that supports DNSSEC.
It would be interesting to see statistics. I wouldn't assume anything in that race. Some TLDs which are signed have quite a lot of traffic going through them on any given day, and most resolvers connecting to those have DNSSEC enabled by default. There are published statistics for this, but I can't find anything similar from either Google or Cloudflare.
All traffic sent over DoH is protected. Most traffic --- the overwhelming majority of traffic --- sent through a DNSSEC-verifying resolver isn't signed by DNSSEC, because the overwhelming majority of zones --- and an even higher proportion of popular zones, by any reasonable metric of popularity you choose (I use the Moz 500) --- aren't signed.
However, so many sites are using CloudFlare and other DDoS prevention and CDN services. I'm sure the NSA has fiber taps (beam splitters) at the point where the data travels unencrypted on the internal datacenter network.
CloudFlare itself might not even be aware of the taps. Or maybe only a few select employees know about it.
I think the solution to these problems is to reduce dependence on the Internet. It's now possible to torrent an entire library's worth of books and have it all on your personal computer at home. 20TB HDDs are readily available, and constantly getting cheaper. Also check out https://reddit.com/r/DataHoarder. And we have local AI models, again these do not need the Internet to function.
> I think the solution to these problems is to reduce dependence on the Internet
Uh, I thought the concern is about communications (email, IM, etc), not about content consumption. Communications can't be replaced with some static archives.
I doubt any TLA cares if I read Python or Rust documentation, or if I watched Oppenheimer, or Barbie, or both. If they do - well, it's their loss, because such data is absolutely worthless at scale, as repeatedly demonstrated by the ad industry failing to extract any meaning from all the Big Data(tm) they hoard. And if they would somehow get interested in me personally - I don't think having an offline Wikipedia copy would help me much.
The solution is to encrypt and authenticate every single byte transferred, end-to-end, with strongest known algorithms. And, well, some legislative action too.
How Some Governments Eliminate HTTPS/TLS Encryption [1]
[dead]
https everywhere is literally throwing the baby with the bathwater. yeah we got a little better at hiding content, still leaking ton of metadata, and still vulnerable to all the root CAs in your browser... and lost cache and everything else that http had.
> https every[where] is literally[1] throwing [out] the baby with the bathwater[2].
1) That would be figuratively, not literally, as there's no literal baby in HTTPS-everywhere that I know of.
2) What is HTTPS-everywhere throwing out? Which part is the baby and which is the bathwater? I don't think this is the right expression to use here, not even figuratively.
> no literal baby in HTTPS-everywhere that I know of
Well not anymore. We threw it out.
literally
on 2: caches for one
>and still vulnerable to all the root CAs in your browser...
certificate transparency makes this very risky to pull off, making it all but useless unless you're trying to catch an international terrorist or something.
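To see what that transparency looks like in practice, a rough sketch (this queries the public crt.sh log search, a third-party service whose JSON output I'm assuming here): every certificate a CA logs for a domain shows up, so an unexpected issuance is hard to hide.

    import json
    import urllib.request

    domain = "example.com"
    url = f"https://crt.sh/?q={domain}&output=json"

    # List every logged certificate for this domain, including ones you never asked for.
    with urllib.request.urlopen(url, timeout=30) as resp:
        certs = json.loads(resp.read().decode())

    for cert in certs:
        print(cert["not_before"], cert["issuer_name"], cert["name_value"].replace("\n", " "))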
Comment was deleted :(
you forget systems have humans in them. most online banking scams hijack bank domains and use CAs for that country gov, which usually have keys leaked or sold on the right (wrong?) places. just look at india or brazil list of small govt CA revocations. those are usually CAs signed by the CAs in your browser.
so, yeah, a gov abusing this is very bad and visible. scammers profiting from the complexity and humans in the machine, is very common.
>most online banking scams hijack bank domains and use CAs for that country gov, which usually have keys leaked or sold on the right (wrong?) places. just look at india or brazil list of small govt CA revocations
Source? If true they're grounds for ejection from root certificate programs of various OS/browsers.
Kaspersky writes about them from time to time. Since it's not the CA key but some CA signed by those CAs, they just revoke that one and move on, and nobody cares. Last year (or the one before) they discussed this at length in the Mozilla chats before the meeting.
> and lost cache and everything else that http had.
A genuine loss, and also the ability to zip imagery.
Was it a loss? I don't think so. It was either ineffective, stale or a massive privacy issue. We're better off with local caches.
Comment was deleted :(
The zip images? It meant that less data was being sent. Same with the cache.
The worst of it was that internet providers wanted to tamper with data, and insert this or that advert into what they sent. The absence of that is a good thing.
Something changed: government agencies are now clear that they can carry on, build more of it, and get away with it. Even try and build more of it into law (see EU). It was an expensive test but successful.
Plenty has changed. In general the technology industry cares a lot more about security these days. Things have gotten better and many services became much more secure by default. WhatsApp is the most widely used messaging platform in the world and it has end-to-end encryption. It's not ideal but the fact is never before have so many people used something this secure. It's foiled my country's courts more than once.
What we need now is to get these governments to accept defeat and stop trying to undermine our security with constant legislative assaults. The fact they keep trying is evidence that it's working.
> And most people simply don't care
this is not true and insulting at the same time. Individual people are powerless against organized commercial activity, and more than one million people in the USA are on the payroll of uniformed services, so they cannot object.
in addition, the throw-away word "terrifying" is also useless and annoying.. really
Exactly this. Anything individuals can do will be undone by state actors and social media and other corporations who are pressured by state actors.
I disagree: "terrifying" accurately sums up the future we're hurtling straight towards. I worry about people who are not worried, personally.
yeah - I worry a lot reading in this University town.. but the single word "terrifying" is overused by legal types trying to get reliable drama IMHO
It's absolutely an insult and frankly disheartening. And in order to get a word in edgewise you would have to roll up an entire decade of work into a simple cliche using appropriately PC keywords, which is just as draining to contemplate as to do.
This is so hard to read: Everything has changed.
And not only that, but the posted article even goes into some of the high level changes.
But you are right in one aspect. People absolutely don't care to stay on top of this - case in point: your comment and the upvotes it has garnered.
Nothing has changed because we didn't get another leak ... It's likely much worse.
Wasn't expecting to find something amazing from this blog post, but this project looks amazing! It has a few big partners behind it, so I hope it does not become vaporware: https://onionshare.org/
Micah Lee also made Dangerzone [1], a tool to safely convert untrusted PDF files to safe-to-open PDF files, wrote the book Hacks, Leaks, and Revelations [2], made Tor-Browser-Launcher [3], and is a respected investigative journalist: https://micahflee.com/
I wish hosting a Tor website was as easy as Onionshare though. Start an app, point it at a dir with a rendered static site, and hand out the dot-onion to whomever you wish to show it.
I'm thinking: ephemeral websites like Opera Unite used to provide, to e.g. share a photo album for as long as you're online.
Edit: I should probably say I'm talking about the Android version. Which would be so convenient for an appliancy web server. The desktop version already does this.
In my experience hosting an onionsite is actually easier than hosting a regular website. Two or three lines in a .torrc file, then `tor -f .torrc & python -m http.server -d static` or whatever to start the webserver, and everything is handled for you (iirc doing `mkdir tor/hidden_service` might also be needed?). No need to port forward or fiddle with DNS...
Though hosting two at once is a bit of a pain ([0])..
[0] https://stackoverflow.com/questions/14321214/how-to-run-mult...
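If you'd rather script the "point it at a directory, get a .onion" workflow, here's a minimal sketch using the stem library and tor's control port (it assumes "ControlPort 9051" plus cookie or password auth in your torrc; the port number and the "static" directory are just placeholders):

    import functools
    import http.server

    from stem.control import Controller  # third-party: pip install stem

    PORT = 8000  # local port serving the static site

    # Serve ./static locally; the onion service will forward to it.
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory="static")
    server = http.server.ThreadingHTTPServer(("127.0.0.1", PORT), handler)

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        service = controller.create_ephemeral_hidden_service(
            {80: PORT}, await_publication=True  # map onion port 80 -> local PORT
        )
        print(f"serving at http://{service.service_id}.onion/")
        try:
            server.serve_forever()
        except KeyboardInterrupt:
            server.shutdown()

The onion service is ephemeral, so it disappears when the script exits - handy for the "share a photo album while you're online" use case mentioned above.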
I guess I should just use Termux instead of waiting for Onionshare's Android version to reach parity.
It seems that there are at most 10000 Tor nodes.
Are the identity of the node operators known? Do known trusted organizations and individuals own the majority of these nodes?
It's indeed expensive and risky for individuals to run Tor nodes.
See https://nusenu.github.io/OrNetStats/
It's up to individual relay operators whether or not to publish contact information. For an example, see one of my relays here: https://nusenu.github.io/OrNetStats/w/relay/375DCBB2DBD94E52...
(If you're not an exit relay, it's neither risky nor expensive.)
Try the I2P anonymous network where by default every user is also a router. You can chain the two together, set I2P to use an outproxy and set the Tor browser to connect through I2P.
I have no clue as to what security this provides nowadays. I predict the NSA or FBI have large-scale packet timing correlation in operation now. Or the random number generator has been compromised, or there is a series of bugs in the Tor or I2P implementations themselves. I also think there was a human compromise of the Tor Project itself 8-9 years ago as well; it has been significantly weakened.
Personally I tend to like one-way data broadcasting systems, that way the receivers cannot be traced if they are air-gapped, e.g. satellite or radio data broadcasting. However nobody operates any useful service nowadays. Twenty years ago you could receive the whole worldwide Usenet feed with a DVB-S PCI card, unencrypted in the clear from a service called Cidera[1]. It was just UDP multicast packets containing Usenet messages, split into fragments with sequence numbers, trivial to reverse engineer.
I had this service at home: we had dial-up, and with a 55cm satellite dish in my bedroom window I had a 45Mb/s data feed. I wrote my own software including a device driver for the DVB-S card running under FreeBSD (4.2-RELEASE I believe). Nearly 1000x faster than dial-up, which was mind-blowing back then.
Also archiving Nostr text messages is interesting, you can get the entire worldwide feed of messages from the relays, and then search for whatever topic of interest you want in the messages. Nobody can tell what you are searching for, unless your computer is compromised. I'm thinking about broadcasting these by satellite somehow, but satellite bandwidth is very expensive. It's just a matter of getting the funds for it.
Reception should be possible using a RTL-SDR or AD9364+USB3.0 microcontroller[2], and a satellite TV dish. The AD9364 chip can be had from AliExpress[3] for $6 now.
1. https://web.archive.org/web/20020806064624/http://www.cidera...
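For flavor, a toy sketch of that kind of one-way feed reassembly (the 8-byte header of message id / fragment index / fragment count is made up for illustration, not the actual Cidera format; the multicast group and port are placeholders):

    import socket
    import struct
    from collections import defaultdict

    GROUP, PORT = "239.1.2.3", 5000  # hypothetical multicast group/port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Pretend each datagram starts with (message_id, fragment_index, fragment_count).
    fragments = defaultdict(dict)
    while True:
        data, _addr = sock.recvfrom(65535)
        msg_id, idx, total = struct.unpack("!IHH", data[:8])
        fragments[msg_id][idx] = data[8:]
        if len(fragments[msg_id]) == total:  # all pieces seen, reassemble in order
            parts = fragments.pop(msg_id)
            article = b"".join(parts[i] for i in range(total))
            print(f"reassembled message {msg_id}: {len(article)} bytes")

Note the receive-only nature: nothing here ever transmits, which is the whole privacy appeal.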
I thought I was a veteran of the early internet/Usenet enough to know most of the ins and outs but this DVB-S Usenet is new to me.
That sounds like a pretty unique approach to Usenet content delivery. I wonder if anything else/similar has existed/exists (unlikely in Usenet and I can't think of something quite as useful).
The goal is to not have to trust the node operators, but to have as much diversity as possible. They can only deanonymize you if all three nodes in one cascade collude.
Running a non-exit node carries very little to no risk.
No, you only need to compromise the first node – named the entry guard for that reason [1] – and either the exit node or ideally the endpoint (hidden) service. Deanonymization is then possible by correlating the timing of traffic between those two points, as Tor wants to be low-latency, without randomly delaying traffic.
For this reason not all nodes may be guard nodes, as decided by the directory authorities, and guard nodes are maintained for a longer time by the client to reduce the chance that you pick a compromised guard node because you switch often. This is balanced against the risk that you are unlucky and pick a compromised guard at first (which you then maintain for a longer time).
The exit node is pretty much assumed to be compromised, as it's a role not available to many entities – it requires high bandwidth and much teeth-gritting – and the public internet is intercepted at large anyway.
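To make the correlation idea concrete, a toy sketch (not a real attack, just its shape): bin the packet timestamps observed at a compromised guard and at the exit/destination into fixed windows, then compare the traffic patterns. The capture data below is made up.

    from collections import Counter

    def bin_counts(timestamps, window=0.5, horizon=60.0):
        """Count packets per `window`-second bin over `horizon` seconds."""
        counts = Counter(int(t // window) for t in timestamps)
        return [counts.get(i, 0) for i in range(int(horizon / window))]

    def pearson(a, b):
        n = len(a)
        mean_a, mean_b = sum(a) / n, sum(b) / n
        cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
        std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
        std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
        return cov / (std_a * std_b) if std_a and std_b else 0.0

    # Hypothetical packet arrival times (seconds) captured at both vantage points.
    guard_times = [0.1, 0.2, 1.5, 1.6, 1.7, 5.0, 5.1, 12.3]
    exit_times = [0.3, 0.4, 1.7, 1.8, 1.9, 5.2, 5.3, 12.5]  # same flow, shifted by latency

    score = pearson(bin_counts(guard_times), bin_counts(exit_times))
    print(f"correlation: {score:.2f}")  # a high score suggests the same circuit

Because Tor doesn't add random delays, the two shapes line up almost exactly, which is why controlling both ends is enough.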
Given the bandwidth cost alone of some of the top nodes, it is not unreasonable to draw conclusions about who funds and operates them. With the additional incentive to control the network considered, the truth of the matter couldn't be more obvious.
I know that Chaos Computer Club used to run a bunch of them. And Noisebridge did for a while, but I think they stopped. A few universities, too.
I haven't been very active in the space for about a decade, I'd also love a more knowledgeable answer
The reason Tor exists is because of intelligence agencies - they literally created it[1]. It makes perfect sense. Agencies certainly want to be able to surveil the population at large but they want to be able to act anonymously themselves. I mean, any fixed entry-point an agency chooses to the Internet is going to get a lot of attention. Agencies could mix it up in various ways - change entry points, mix their traffic, etc ... and the final result looks a lot like Tor. And so yeah, I'm sure you have idealistic operators who take heat for running nodes but I'd expect the agencies put some resources into making sure one way or another that the exit nodes exist.
Tor experts: Why does the Tor daemon try to access some random domains, e.g., www.[randomstring].com, when it starts up. It sends SNI, i.e., a plaintext domain name, over the wire. What purpose does that serve other than to allow anyone sniffing the network to see it.
This has been brought up on HN before. In 2013 and 2014. Why is the daemon still doing this in 2023.
Those random domains belong to Tor relays. The list of relays is public anyway so you can just look at the IP address to see that it's a Tor connection. Obfuscating it would achieve nothing. If you want to hide that you are using Tor you should use a bridge. https://bridges.torproject.org/
All true, but the question I am asking is why SNI is needed. For example, if the domains are bogus and the certificates are not obtained from a CA that checks domain registrations. What is the purpose.
Are these bogus domain names unique to each Tor user. Why not use ECH.
IIRC it just should look like normal TLS traffic. They are unique to each of the already public Tor relays not the users. What benefit would ECH bring? You can still look at the IP and know it's Tor.
All the "let's hide that it's Tor" work is done with pluggable transports [0] used in combination with unlisted relays (bridges). This way there can be multiple completely different protocols like obfs4 (looks completely random), Snowflake (uses WebRTC), meek (uses domain fronting) and WebTunnel (WebSockets over https) without a need to update the Tor spec and all relays.
[0]: https://spec.torproject.org/pt-spec/architecture-overview.ht...
Still not getting an answer why SNI is needed. Will have to read the code. Usually SNI is needed to "choose the correct certificate" but that seems inapplicable here. The plaintext servernames observable on the wire appear to be bogus. Maybe certificates are generated on the fly. Who knows. Thought someone might have an answer.
According to the spec [0] it looks like SNI is not needed. It just specifies a TLS connection. Why does it send an SNI? I guess because basically everything does it so why should Tor not send it?
[0]: https://spec.torproject.org/tor-spec/negotiating-channels.ht...
"... why should Tor not send it?"
Why not send the string "Hi there! I'm using Tor." Why not. Tor is announcing itself with a unique TLS fingerprint anyway.
"Why not" was not the question I asked. IMO, it's a different question than asking "Why".
SNI has been used for censorship. Perhaps this is why the servername is apparently dynamic rather than static. I do not know. There could be many answers to "Why not". We might not know all of them. I certainly will never know all of them.
I am not arguing for or against sending SNI. I have had that debate too many times on HN.
TLS1.3 encrypts the handshake and ECH encrypts SNI. The folks who did that work on TLS thought it was worth doing. What was their reason. Was it "Why not encrypt the handshake and the servername."
Why encrypt DNS. Many people cite privacy as a reason. Yet HN commenters will routinely claim that they can determine (with no significant, additional effort) what sites someone is accessing because they can see the IP addresses on the wire. If so, then why try to make DNS private.
Why encrypt DNS and at the same time send plaintext domainnames on the wire via SNI. A significant portion of the web is still using TLS1.2 so the DNS names in the certificate are sent plaintext in the handshake as well. Why not just use unencrypted DNS. Everyone else is doing it.
No doubt Apple and others who have implemented "push notifications" thought "Why not" when they saw no need to be concerned about people sniffing the traffic.
As a user, I'm not a fan of all the noise on the wire from Apple and so-called "tech" companies. I'm generally not interested in the "features" and "conveniences" they are pushing. (No pun intended.) Plugging in a computer with an OS from one of these so-called "tech" companies usually results in it immediately trying to connect to remote server(s) without any input from the computer owner.
This is why I like NetBSD. Generally everything is off by default. It's up to me to decide what I want to automate.
Plain Tor says "Hi there! I'm connecting to a Tor node" because all relay IPs are public. You can't hide that fact with ECH or by not sending the SNI. Every second invested in that is just wasted.
And of course I never suggested anyone could. Despite that I provided links to a couple of past discussions that show people are well-aware using "plain Tor" is not something anyone can hide, you keep trying to reframe the question I asked into something else, a debate whether using Tor is detectable or not. Who cares. Tell us something we do not already know.
Why is Tor sending SNI. What is the purpose. It's a simple question. That is all I am asking.
For example, CDNs use SNI to host many HTTPS-enabled sites on a limited number of IP addresses. Why is Tor using SNI.
I've looked it up for you. "IIRC it just should look like normal TLS traffic." and "because basically everything does it" was correct.
"/* Browsers use the TLS hostname extension, so we should too. */"
Thanks. That's not much of a reason. These domain names are not fooling anyone. Just looking at them one can see they are faked up, not normal at all. Plus there is no corresponding DNS lookup. And not all traffic is browsers. Will have to edit this out and recompile tor daemon.
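For what it's worth, the names look roughly like what this toy sketch produces (an imitation only; the real helper is a small routine in Tor's C source, and the exact lengths and TLD here are guesses):

    import random
    import string

    def random_hostname(min_len=8, max_len=20, prefix="www.", suffix=".com"):
        # Random lowercase/digit label between the prefix and suffix.
        n = random.randint(min_len, max_len)
        label = "".join(random.choices(string.ascii_lowercase + string.digits, k=n))
        return prefix + label + suffix

    print(random_hostname())  # e.g. www.x3k9q2mfvp81.com -- never registered in DNS

Which is exactly why they look faked up: there's no DNS lookup, no registration, just a plausible-shaped string in the ClientHello.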
Make it a tun interface instead of a socks proxy.
LD_PRELOAD/proxychains for stuff isn't nice or leak proof.
They should also partner with VPN providers so VPN tunnels terminate in Tor and exit from it, and so Tor exits get hosted more and more by VPN providers (a bit harder to block/classify as a Tor exit).
Also, a new class of exits that support traffic only to specific subnets for sites that are "good" (low abuse potential, like the BBC, Wikipedia, news sites, archive sites); these exits could be run out of people's residential IPs and phones.
Is https://snowflake.torproject.org/ still good? I run the extension on everything. My country basically does not block anything.
Two questions on Tor.
1. Browsers have become complex, and the users’ machines could be conceivably compromised through zero days in the browser. How does the security of the Tor browser, Chrome, Firefox, Safari and Brave compare with one another (in terms of chances of zero days)?
2. Do people here use Tor for everyday use (accessing the clearnet, not onion links)?
> 2. Do people here use Tor for everyday use (accessing the clearnet, not onion links)?
I use it almost daily. It is a great tool for sysadmin work, and a must for a lot of websites with less-than-honorable tracking policy.
There's about a thousand other ways to do it, but every time I want to see if a site is accessible to the outside world-at-large, I fire up Tor browser.
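One of those other ways, if you'd rather script it - a minimal sketch assuming a local tor daemon on the default SOCKS port 9050 and requests installed with its [socks] extra:

    import requests  # plus: pip install requests[socks]

    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h = resolve DNS through Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    r = requests.get("https://example.com/", proxies=proxies, timeout=60)
    print(r.status_code)  # a response means the site answered a request from a Tor exit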
Comment was deleted :(
You can use Qubes Whonix to avoid relying on the browser security alone.
Tor Browser is essentially Firefox ESR and gets regular updates.
Yes.
Fuck all, is what.
People are far too happy to embrace surveillance and censorship in the name of "think of the cause du jour".
Even advocating free speech one has to worry about being painted as an alt-right 'freeze peach' kook.
Privacy and other rights online are soon going to go the way of their offline cousins -down the drain!
Is Tor still considered secure? If a single entity controls enough entry and exit nodes, I thought it was possible to identify users?
Examples:
[1]: https://www.ibtimes.co.uk/fbi-crack-tor-catch-1500-visitors-...
Tor is considered as good as anything.
After all, if you suspect the feds control a lot of tor nodes you probably also suspect they’ve infiltrated or outright own the major VPNs; that they’ve got special access to the major cloud providers, and that they’ve got backdoors in things like TPMs and remote management agents.
Of course Tor has its problems - exit nodes with trash IP reputations, unreliable hidden services, evil exit nodes and suchlike. So it’s certainly not perfect.
Team Cymru claim to be able to trace through VPNs using widely collected flow records from Internet core routers. ISPs sell these flow records to third parties.
So the whole fabric of the Internet itself is one giant spy machine, in effect. That sounds like it's straight out of dystopian fiction, but no, it's for real.
https://www.vice.com/en/article/jg84yy/data-brokers-netflow-...
I wonder if stuff like this can be thwarted by having a constant flow of encrypted junk traffic between two parties - only replacing the junk with real data (but not changing the volume) when they're actively communicating.
Obviously, this doesn't scale for something like social media. But metadata regarding one-on-one conversations using something like Signal could be effectively obscured.
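Roughly, the sending side would look something like this bare-bones sketch (encryption, real framing of data vs. padding, and the receiving side are all omitted; the peer address and frame parameters are placeholders):

    import os
    import queue
    import socket
    import time

    FRAME_SIZE = 512            # every frame is exactly this many bytes
    INTERVAL = 0.25             # one frame every quarter second, always
    PEER = ("127.0.0.1", 9999)  # placeholder peer address

    outbox = queue.Queue()      # the application drops real messages in here

    def sender():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            try:
                payload = outbox.get_nowait()[:FRAME_SIZE]
            except queue.Empty:
                payload = b""  # nothing to say: the whole frame is padding
            # Pad to the fixed size so every frame looks identical on the wire.
            frame = payload + os.urandom(FRAME_SIZE - len(payload))
            sock.sendto(frame, PEER)
            time.sleep(INTERVAL)

    if __name__ == "__main__":
        sender()  # runs forever, emitting constant-rate cover traffic

An observer with flow records sees the same volume whether or not anything is being said; the cost is paying that bandwidth all the time.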
The trouble is it's so difficult to know what your adversary is doing that it's better to switch to a different medium, i.e. reduce dependence on the Internet (as I detailed in another post).
The fundamental nature of the Internet itself permits this behavior to go unchecked, there is no way for a user to know what is happening behind the scenes with certainty. We send out our private information (search queries, etc.) into this giant black box we have no control over. That's the crux of it.
That was not the case with radio or satellite TV: determining what people were listening to or watching on a mass scale was nearly impossible due to the laws of physics, as the system was completely receive-only.
> Tor is considered as good as anything.
By who? It's an important question for someone taking this advice.
One drawback of Tor is that it attracts attention to you.
> also suspect they’ve infiltrated or outright own the major VPNs
ExpressVPN, PIA, Cyberghost, Zenmate, Intigo, and other major VPN providers are owned by Teddy Sagi of Kape Technologies, an ex-undercover commando in the Israeli military.
It is safe to assume Israel and the US are actively using them as honeypots, especially since they were mostly acquired - not built - by Kape.
> Is Tor still considered secure?
Define "secure." Secure from what attackers, in what threat model, with what resources devoted to the attack, etc.
There are several categories of attack against Tor, and several ways to mitigate them, depending on who you are and what you use Tor for.
For a typical end user of Tor, the main one to worry about is "browser beaconing" style attacks - where a compromised onion website causes the browser to beacon out, on the clearnet, with something that links the browser's request on the clearnet to the browser's activity on the onion network. If you just use a regular browser proxied to Tor, this is a rather high risk, as browsers leak all sorts of things (I believe WebRTC was a common way of doing it for a while). The solution here is Whonix - a multi-VM setup in which your workstation (with a stripped down browser) is only connected to a Torification VM that routes all inbound traffic over Tor. So, if the browser tries to beacon out, it doesn't matter. Pop open a command shell and use ping, it still goes out through Tor. Etc. I consider this a reasonable way to use Tor, and any lesser construct is probably a dumb idea unless you're using it for things like sysadmin where beaconing out doesn't matter. Of note, Qubes supports the Whonix configuration as a first party sort of setup, and can route all your traffic through Tor, should you care.
There's also the risk of traffic correlation for end users, but I don't have a sense for the scale of this risk - I wouldn't leave long running connections over Tor, but I don't know if it matters for "casual use."
If you're hosting hidden services, the guard nodes that know your identity are a risk, and given how many nodes seem to be run by three letter agencies, you'll want to deeply understand Tor and how to protect your services if you're going to host something - I believe you can limit guard nodes to those you trust (and run yourself, perhaps?), but that changes some of the risk equations in ways I don't fully understand how to reason about (not running hidden services that matter - my blog has an .onion address, but it's literally just the same content as the clearnet version).
And then, we get into the problem that "computers in general" could be argued very convincingly to be "not in the slightest bit secure against a high level adversary," which is another can of worms...
> Pop open a command shell and use ping, it still goes out through Tor.
Nitpick, but since there's no way to send ICMP traffic through a SOCKS proxy, using ping from whonix workstation is impossible (i.e. it won't work). But any other kind of beaconing (DNS leak, curl to a clearnet website) will be properly torified.
What are you going to do? Not use Tor? That's hardly better.
There are alternatives to Tor that have different anonymity and routing protocols.
For example, take a look at I2P, which has been around almost as long as Tor. It has a lot in common with Tor, but has some key differences that may be appealing to some people. I2P nodes are capable of implementing something like an exit node (often called an "outproxy"), but there's no distinction between peers in I2P that designates one as an exit node. The project is more oriented towards hidden services, implementing its internal network, than it is in anonymizing connections to the clearweb. I think it's great that Tor exists, but I wish more people would consider I2P or at least simultaneously hosting their hidden services on both Tor and I2P. And if you really don't like running a Java runtime, Purple I2P exists and is written in C++.
There are also other networks like GNUnet, which slightly predates Tor and is mostly file-sharing oriented, but with the goal of anonymity. It can do other things too, but from what I can tell, the project never gained much favor anywhere. Nevertheless, it still exists and is being worked on.
And I can't forget Freenet, or what's now referred to as "Hyphanet". I'll just call it Freenet for now because a lot of people still remember it. Freenet's focus is not only on anonymity but providing a distributed data store that is censorship resistant. This at least in part solves the issue of having to be online all the time in order to host a hidden service. It's been a long time since I've used Freenet, but supposedly the community is very good at discouraging crime and other unsavory elements. I haven't used the new iteration called Hyphanet.
All of these projects have significant differences from Tor, and some of these differences are seen by some as fixing significant flaws present in the Tor protocol that Tor can't reconcile. I2P's design of having no peer distinctions, in my opinion, is a vastly superior model for both security and plausible deniability. Its routing protocol also makes DDoS attacks a greater challenge. Having a primitive yet effective implementation of human-readable hostnames is also nice.
All of these projects are available for people to use today.
Tor does have two upsides. The first is that it has a larger community. The second is that it has the Tor Browser, which I2P does not have an equivalent to, although the Tor Browser can be adapted to use I2P.
Just because the FBI used shitty browser exploits that would have easily been thwarted by sandboxing the browser properly, doesn't mean that Tor is secure.
Running Tor nodes is pocket change for intelligence agencies, and a major legal risk for volunteers. It's virtually guaranteed that the US intelligence agencies own the majority of the network between them. If they were in an arms race with foreign intelligence agencies, the number of Tor nodes would be exploding.
It's just that the NSA won't lend its shiniest toys to the FBI just to bust some CP websites. The lives of children aren't worth the risk of exposing and losing a zero-day exploit.
>It's virtually guaranteed that the US intelligence agencies own the majority of the network between them.
I'm unconvinced by this argument. First of all, why US intelligence agencies and not let's say Czech intelligence agencies? Or, more likely, nobody owns the majority of the network and nobody wants an arms race?
Is it a major legal risk for volunteers? In most jurisdictions I've only heard of awkward conversations with police not familiar with what Tor is, but beyond that? Not much outside of jurisdictions that already aren't friendly to Tor.
The Tor Project is primarily funded by the US State Department and DARPA [1], so it is a forgivable error if someone mistook TOR for FBI surveillance software.
Tor seems pretty open about where that money is coming from and has DARPA and US government funds listed here with explanations: https://www.torproject.org/about/sponsors/
"U.S. Department of State Bureau of Democracy, Human Rights, and Labor The Bureau of Democracy, Human Rights and Labor leads the U.S. efforts to promote democracy, protect human rights and international religious freedom, and advance labor rights globally."
"DARPA's Resilient Anonymous Communication for Everyone (RACE) program researches technologies for a distributed messaging system that can: a) exist completely within a given network, b) provide confidentiality, integrity, and availability of messaging"
Both of those seem to align with Tor's goals of privacy and confidentiality.
Even if they aren't spying on people's TOR traffic, it's still useful for spies. The more people are using TOR, the easier it is for them to hide their own traffic alongside it.
How does that square with the GP's linked articles that allege the FBI "cracked" TOR on multiple occasions?
It strikes me as cognitive dissonance that someone would simultaneously distrust the USG for obvious reasons (TFA is about Snowden), yet also think they are a perfectly noble arbiter of "secure communications", and look the other way when copious evidence suggests that the USG has means to compromise said "secure communications".
It's a very roundabout way of arguing "if you have nothing to hide, you have nothing to fear". TOR is a fine product if you aren't doing anything that the USG would realistically prosecute you for.
The USG is not a monolith and doesn't operate like one. It is composed of many entirely different organizations doing different things with different goals.
If it were just one agency, sure, but we have evidence that State, DARPA, and FBI all have their hands in the TOR cookie jar. It would be naive to assume that CIA and NSA aren't there too.
Occam's Razor says that TOR is primarily a tool of the USG for enabling intelligence & influence operations, particularly those involving low-level assets without clearances (e.g. color revolutions), and the stuff about "consumer privacy" is an unreliable side effect.
Comment was deleted :(
It seems like there are multiple reactions one can have to Snowden type revelations:
1) "I have nothing to hide" (which isn't quite as bad as it sounds - "my security is being a nobody")
2) "The government shouldn't be spying on people"
3) "The government shouldn't be spying on people but of course they are and you should expect that. Your online privacy is essentially your jobs and you shouldn't use services and expect them to protect your privacy"
4) "Even if the state is always actually going to spy on people in various fashions, we still don't want to accept and normalize this. Forcing the state to 'parallel construction' is better than letting the just publicly exhibit all its surveillance since normalized surveillance has more of a chilling effect. Moreover, institutions have a limited ability to keep their own secrets so surveillance is itself going to periodically come into the open and when it does, attacking it is useful, again, even if we know it will always be an element of modern society".
5) And you can go on and on... Should we allow enough explicit machinery of surveillance to make the state not want to use don't-ask-don't-tell third parties (no but..) and so-forth.
> which isn't quite as bad as it sounds
It is as bad as it sounds: not only do the apologists of this rhetoric believe in its false security themselves, but they aggressively try to persuade everyone that if you don't want to be a nobody, or have anything you don't want to share with complete strangers, including the government, then you must be doing something illegal.
As an addendum to this: I think many of the J6 crowd thought the same thing when posts they made online were used as evidence against them. You may have nothing to hide, but like Snowden said: "Saying you have nothing to hide is like saying you don't care about freedom of speech because you have nothing to say"*
In both situations there is a loss of privacy and autonomy. At some point people just shut down, for the same reasons people behave differently when they know they are under surveillance versus when they think they're not.
*paraphrasing
It's not so much that digital security isn't important -- it's that if you are not a "nobody", and you have something to hide, digital security is further down your list of your worries.
I think too many people think about "digital security" in isolation, without any human context or considerations of threat profile. Digital security without holistic human security is simply a math exercise.
With point #1.....
I've often been concerned that focusing on the most aberrant and rule-breaking of people inevitably broadens the scope of behavior that gets targeted over time. When the most severe criminality is less prevalent and easier to detect and stop, would law enforcement then not use their remaining resources to go after pettier and pettier crimes? What makes you think your present behavior will always keep you "uninteresting"?
Comment was deleted :(
[dead]
Comment was deleted :(
[flagged]
Comment was deleted :(
[flagged]
Hi,
is it possible to mark the packets that leave your house and flow into the street, using your router or some small device in the box that connects the cables from outside going into your house?
And could these boxes on the street do a job like that?
I mean, even if I have a DIY network, hardware and software, at home, wouldn't it be futile because my encrypted packets, while not easily opened, could still be "marked" at any point outside my house?! With a prefix or in between the packets? That's what someone on the internet means when they say the more of us use Tor, the better protected the few are, and the fewer users surf via Tor, the less protected whistleblowers and journalists are.
Also: when our packets are encrypted, wouldn't it be possible for specific software, say, MS Windows, to pack huge amounts of very specific data into some of these packets, which would get encrypted in "re-engineerable" ways that are easier to break?
I have a surface-level understanding of the basics of these things. Forgive me for not reading up on that stuff, first.
Edit 2: could similar things be done via the CPU? I always wondered what the disadvantages of Apple's way of keeping stuff in RAM were. Can the CPU send or save data about what's in RAM in ways that can't be easily tracked via the OS?