> Since there is a limit to how much data a symmetric cryptographic key can protect
Is this regulatory-related or self-imposed?
Neither: it's because (overgeneralizing) there are bounds on how many nonces can be generated under a single key before the risk of a collision becomes significant, and in AEAD constructions a nonce collision is a devastating event.
But shouldn't that bound be astronomical?
With random nonces, GCM is capped at about 4 billion (2^32) messages per key, which is not astronomical at Facebook's scale.
Meta (we) process many billions of messages per second. Finding enough randomness is a thing at scale.
Per day, maybe, but not per second. That would require every human on Earth sending a message every 8 seconds.
Per second. Messages in the Kafka sense, not the WhatsApp sense.
I'm fairly out of my depth here, but the birthday paradox is a harsh mistress.
Especially when the pool of possibilities is equally massive
For a given nonce collision, how many blocks or messages can be decrypted?
For some modes, a single collision compromises every message under that key: with GCM, for example, nonce reuse leaks the XOR of the colliding plaintexts and also leaks the authentication subkey, which lets an attacker forge messages at will.
If you want to read more about cryptographic key wear-out, Soatok wrote an excellent explainer on this a few years ago: https://soatok.blog/2020/12/24/cryptographic-wear-out-for-sy...
Scribe[1], Scuba[2], Hive[3], and folly[4]. It's neat for sure, but speaking as an outsider without the resources to learn (much less create!) an entire universe of data infrastructure and SDKs, it feels like I'm in the corner eating dinner at the kids' table.
At any rate, I'm glad somebody out there is doing this stuff.
[1]: https://engineering.fb.com/2019/10/07/core-infra/scribe/
[2]: https://research.facebook.com/publications/scuba-diving-into...
[3]: https://research.facebook.com/publications/hive-a-warehousin...
[4]: https://github.com/facebook/folly
You can absolutely learn it. Elasticsearch, Logstash, Kibana, Grafana, Prometheus. Set up a local Docker Compose deployment of the full stack and pass some dummy events through it. Write an app to generate synthetic load. Play around with aggregations and visualizations. Tech blogs use a lot of buzzwords, but you just need to know the fundamental concepts.
What it's like to be a Facebook engineer in this context: It's your oncall week, and all of a sudden you get an urgent task that you must resolve in the next few days: Figure out how to replace a crypto algorithm in some component your code depends on or else we tell your manager that you're not playing ball.
How it really works: the security team files a task to replace a crypto algorithm in your team's code and gives you an SLA deadline of 6 months. Six months later, no one has touched it, so they escalate. Your manager says "we are short-staffed" and requests an extension. They give you another 3 months and write a detailed guide on which lines to change. Three months later, everyone they originally spoke to has left the team, and the new people don't have any context. They try to figure it out, but the security person has also left the team. No one remembers why the task was originally created. A year later, the service that was using the old algorithm is itself deprecated.
You forgot the multiple rounds of taskcreeper closing the task due to inactivity
In my experience with the cryptographic security space at Facebook, what happens is the security team reaches out to you and starts a conversation. You collaboratively establish the severity, the acceptable mitigation options, and an appropriate timeline. There might be a little bikeshedding over which option is most ideal (it's more fun for the crypto guys that way). They might volunteer to do the work. You review the diffs, they're good, it lands.
They're smart people and the team knows how to collaborate with random infra/product teams.
When I was there, the experience here would've been more like "you see some comms on Workplace saying this thing went live".
For a small number of people, it would look like "It's your oncall week, and you unexpectedly receive a large diff stack from a complete stranger". Most engineers at Meta won't touch anything related to C++ during their tenure there, and you can safely assume all this stuff would be abstracted away from them.
If you are on a C++ project, odds are that your experience here would be "I'm on-call and got pinged by a random person saying they were submitting a diff stack to implement this feature". Except if your project was sufficiently large, you'd probably have known about this ages ago so the diff wouldn't actually be unexpected, nor the submitter a complete stranger.
Sure, sometimes I got a bunch of automod diffs. Sometimes it was: fix this now. It's impossible to know everything that every infra team is cooking up.
> you'd probably have known about this ages ago
but nobody paid attention, and it falls on the on-call's plate
You’d be calling libraries that are vetted by the security team. Engineering at Meta is a lot like coloring with crayons. It’s very limiting but the infra does a lot for you.
This is true, but like, sometimes you're using them inappropriately. Or you've got a limited exception for performance reasons, and your replacement solution is, uh, unsound.
Do you ever eat the crayons?
Never worked at Facebook, but typically at large companies, things like this are sent with instructions on how to proceed, and most often it's just a matter of making sure your dependencies are updated so the next run of the automated build/deploy pipeline picks up the change.
Triage and redirect to an L7+ colleague.