Given the discipline surrounding most "air gapped" machines I've seen I always find this quote appropriate:
"At best, an air gap is a high-latency connection" -Ed Skoudis - DerbyCon 3.0
And I love that quote, because it suggests the existence of another network - very slow, but very high throughput - human to human, embedded device to embedded device. You could even go without centralized infrastructure there. Just the organisms formed by the devices (a line of home routers is a "street", a group of devices meeting every morning is a "bus", etc.), and the routing address is basically an "interaction" with data-organism map route-finding. No ISP involved anywhere. But your info still gets to the airport, hops on a cellphone and gets to its goal.
> And I love that quote, because it suggests the existence of another network - very slow, but very high throughput - human to human, embedded device to embedded device
This is the network that Operation Olympic Games used to get Stuxnet into the Natanz facility. Contractor laptops were a major part of that network.
This could be implemented via Delay-Tolerant Networking protocols.
It's just like how we once used UUCP and Fidonet for email / news / message boards to remote systems that only had intermittent dialup connectivity in the 1980s and 1990s. Pockets of local communities would pool together to share a single system that would make the long distance calls to another city to send and receive messages. That really helped when long distance cost $0.34/minute and could be shared by hundreds of end users.
Some organisations increase this latency by filling the USB ports with hot glue.
For the old SunRay thin clients one could disable the USB ports by policy (and enable for certain users, iirc). That was an important feature there, as one intended application was as public kiosk systems, e.g. in a library.
The same is possible in Windows 10 and 11, but the users will revolt if a sysadmin were to enforce it (the same users who insist on using Windows instead of a more secure system).
> For the old SunRay thin clients one could disable the USB ports ...
> The same is possible in Windows 10 and 11, but the users will revolt if a sysadmin were to enforce it (the same users who insist on using Windows instead of a more secure system).
Can I add a little more colour here (I have worked in and designed for very secure environments)? Users will revolt if removing the USB ports makes their life more difficult. This can work if there is an effective feedback loop that makes sure the users can still do their jobs efficiently in the absence of USB ports, and corrects for them when they can't. Users won't go around something unless it gets in their way!
Plenty of organisations enforce "no USB devices" on all their users. Not even super secure places, but just many regular admin-type office workers get their USB ports disabled in software.
Partly it's to prevent leaking of company secrets and unauthorized use of corporate devices for home use, partly because it makes the location of data harder to track, and partly because of the possibility of malware.
But you can almost always just reboot into safe mode to get around it.
Interesting. So no USB camera, headset, etc either?
> Interesting. So no USB camera, headset, etc either?
My workplace has a policy of no USB storage devices (though you can request an exception). By default, other USB devices work, and storage devices are mounted as read-only.
I don't think the goal is so much system security as preventing data breaches/data exfiltration.
I work in finance, and this sort of setup is pretty common. Yes, I have a USB headset and camera for calls. My USB keyboard and mouse work just fine. If I plug my phone in, best I can do is charge it (slowly), so I use a wall-plug charger instead.
I could easily bypass the policy since I have the permissions to do so, but I won't. Working in the trading/hedge fund space, it's not unheard of to see employees sued for stealing trade secrets (quant models, for example). One only needs to search "citadel sues former employees" for examples.
edit: former Citadel employee; have not worked there in over a decade.
Depends. USB devices have "classes" which define their functions. Or you can allow individual devices via "vendor:product" identifiers.
The controls can be very granular, if you decide to manage that.
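To give a flavour of how granular this can get, here is a minimal sketch using udev on Linux (my assumption of one possible mechanism, not anything from the article; class 08 is mass storage, and whether the "authorized" attribute can be set per interface depends on your kernel):

    # /etc/udev/rules.d/99-usb-lockdown.rules -- sketch, not a vetted policy
    # De-authorize any USB interface advertising class 08 (mass storage):
    ACTION=="add", SUBSYSTEM=="usb", ATTR{bInterfaceClass}=="08", ATTR{authorized}="0"
    # Re-allow one known device by vendor:product ID (IDs here are made up):
    ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="abcd", ATTR{idProduct}=="1234", ATTR{authorized}="1"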
The few occasions I worked in a bank, our client made it very clear that anyone inserting a USB drive anywhere would be walked to the front door by security within an hour.
Today the malware can be in a cable; it doesn't need to be a drive. Some of these cables also work as ordinary cables, so they are difficult to notice.
I used a Sun Ray thin client on an airgapped network in my first job, working for the government. They were perfect for this.
No persistent storage, so no concerns about easily recoverable classified data sitting on desks. You could disconnect from your session and pick it up again in the other office across town, or just leave your stuff running overnight.
99% of "disabled" USB ports aren't. Keyboards and mice still work, which means there remains a path to be exploited.
PS/2 superiority!
I'm pretty sure I wouldn't use a PS/2 keyboard or mouse that's been through the NSA fulfillment warehouse.
Even a malicious PS/2 keyboard could run any command it desires automatically.
I had a PS/2 keylogger disguised as an extension cable, controllable by specific keystroke and it would dump its records as typed text... Simple and efficient !
Or just passively key capture everything.
But it still cuts down on the attack surface, no? Most USB hacks come from ignorant employees plugging in compromised USB drives/devices, or am I missing something here? The hot glue is a significant reminder when you add "you can be fired for misusing company computers" to the company employee manual.
Yeah, I was going to point out that a software block is unlikely to help against BadUSB-style stuff that infects the USB firmware?
Depends. It won't help against exploitative firmware or shocker devices, but most USB exploits don't come with zero-day firmware exploits - they just require user interaction, which this policy will prevent.
Additionally, even when attacked with such extreme measures, most users won't try to plug in planted, potentially malicious USB devices if they don't expect them to work.
Actually, the attack of leaving a USB drive forgotten in the parking lot has proven time and time again to be extraordinarily effective.
In organizations where only HID USB devices are allowed, not mass storage? I'm not aware of any reported successes in that environment, although it's theoretically possible (heck, you could even have your evil HID-presenting SoC USB stick open a command prompt and type in the malware if it detects a long enough lapse in input without an obvious screen-lock command).
It is, but if your organization completely forbids any non-HID USB devices, users are less likely to try their found USB stick on a company PC, since they don't expect it to work anyway.
https://usbguard.github.io/ for Linux, amongst others. Mostly to be found in the context of 'anti-forensics' there.
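For a flavour of what a USBGuard policy looks like, a sketch (rule grammar as I understand it from the usbguard docs; devices matching no rule fall through to the implicit policy, typically "block"):

    # /etc/usbguard/rules.conf -- sketch
    # Allow devices that present only HID interfaces (class 03: keyboards, mice):
    allow with-interface equals { 03:*:* }
    # Explicitly reject anything carrying a mass-storage interface (class 08):
    reject with-interface all-of { 08:*:* }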
> the same users who insist on using Windows
People don't like Windows, let alone corporate deployments of Windows.
We had to use epoxy. They picked the hot glue out.
They aren’t that hard to desolder either if you have downtime and are tired of playing hearts.
But isn't part of security realizing that there is no 100% solution? It's all about probability. Air gapping cuts down on the number of interactions with the network at large: lots of packets that will never reach it, and it's easy to limit the number of ports available to interact with it. I worked at places with 25-year-old DOS running in a VM driving multi-million dollar machines, and they had never been infected with anything - probably because they were air gapped, and who can "touch" them is limited to trained personnel only.
> But isn’t part of security realizing that there is no 100% solution? ... Air gapping cuts down on the number of interactions with the network at large.
My point is that, practically speaking, most companies don't have the discipline to actually keep an air gap up, long-term. You inevitably need to get data in and out of the air-gapped systems.
The "air gapped" networks I've seen end up not actually being air gaps. Real air gaps are inconvenient, so eventually somebody installs a dual-homed host or plugs the entire segment into a "dedicated interface" on a firewall. Even without that, contractors plug-in random laptops and new machines, initially connected to the Internet to load drivers / software, get plugged-in to replace old machines. The "air gap" ends up being a ship of Theseus.
I had a customer who had DOS machines connected to old FANUC controllers. They loaded G-code off floppy diskettes. Eventually those broke and they started loading G-code over RS-232. The PCs didn't have Ethernet cards - their serial ports were connected to Lantronix device servers. It wasn't ever really an air gap. It was a series of different degrees of "connectivity" to the outside world.
An airgap is only as secure as the dirtiest USB-found-on-the-street that you plug into it.
Reminds me of the time I was looking after a SECURE system: One of the tasks was the daily update of the antivirus. So I would grab the blessed stick, insert it into the Internet-PC, and using FTP would download the latest antivirus update. Then I'd walk over to the SECURE system, insert the stick, and run the exe from the stick. There, system SECURED for today!
Norton, trust no other!
You forgot that you need to use read-only media, such as CD-ROM, to transfer data from the Internet-connected system to the air-gapped system, or destroy writeable media after use in an air-gapped system.
If the purpose of the attack is to bring something into the network, to e.g. destroy something (Stuxnet), or blink an LED that faces a window, then RO media will be pretty useless, and will probably cause a false sense of security.
Likely that is the point. The initial process with the stick is security theater, and adding the RO requirement is just more theater. Both comments are sarcastic, imo.
The read-only requirement makes sense because otherwise confidentiality of the airgapped system can be easily compromised (data extraction).
If one's role is to only update AV on the airgapped machine then their data transfer to the airgapped machine should be only going into one direction.
I think the joke requires knowledge that, if the exe is compromised, there's no way in hardware to enforce read-only mode on a USB stick, so it's probably done in software and is moot.
And also, if it's air gapped, why even have an antivirus? ... For airborne ones?
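(Re the "done in software" point: on Windows the usual software-side write protection is a registry policy like the sketch below. It asks the storage driver to mount USB mass storage read-only - and anything running with admin rights can flip it right back, which is rather the point of the joke.)

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies]
    "WriteProtect"=dword:00000001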
It's incredibly easy to enforce read only on a USB stick when you destroy it after bringing it into a classified environment. As for antivirus, aren't we talking _right now_ about bringing potentially infected drives into a network?
Still sounds like unnecessary risk when you can achieve it with a read-only CD drive.
The professionals who defined this update protocol have access to classified information, I'm sure, that allows them to assess risks us readers of public blog posts are not privy to! So we shouldn't judge on the morsels of public information what must have been an elaborate evaluation of best practices only accessible to the echelon of administrators in the government branch where I was doing my duty.
Seriously though, I learned a lot there. If I wanted friends to have access to such a system, this is the plausibly deniable access route I'd set up for them.
That sounds like an ideal attack vector! Norton and other AV have elevated privileges with an opaque data format ready to be exploited.
I believe that was exactly the other commenter's point.
The funniest part is that the update was an exe to be run from the USB stick. The one thing you should not ever do on any system.
Unfortunately I wasn't prepared to broach the subject in a way that didn't have me say "you'd be safer without the AV". So I got nowhere.
Oh even worse! Yeah, you likely wouldn't have made any headway.
I’m of the opinion that 3rd party security software is malware. If it isn’t today, a future acquisition or enshittification ensures that it will be.
While true, the future is the future, and not entirely relevant.
Or do you eschew using a fork, because in 12 weeks it will fall on the floor?
Certainly. The problem is the falls we don't see; the ones we can see can be handled.
This problem even happens with brand names, with hardware. You buy a fridge, and a decade later go to buy another. Meanwhile, megacorp has been bought by a conglomerate, and brand name is purposefully crap.
Imagine, if you will, a bed of gold embroidered and wrought with the most exquisite works. Above the bed, however, is a sharp sword suspended on a single hair of a horse's tail. Would you avoid relaxing on the bed because the sword may fall and kill you at some point in the future?
What’s wrong with the brand-name AV engines and security controls shipped with the OS? To me, it’s mostly just a lack of trust on the part of management.
Kaspersky is/was a brand-name AV. Look at what happened on their way out after the US ban...
Everyone should build their own security software?
All the major desktop OS have AV engines built by excellent teams. I do trust this more than McAfee or Norton. I also trust it not to take my machine down as much as CrowdStrike.
You trust native Windows security? I'm hoping it's not, but what if a hospital's decision looks like a choice between ransomware and a root-level system like CrowdStrike?
Have fun running your business with no third party software. You'll have to start by writing your own OS.
Speaking of which... it's remarkable that Microsoft Windows probably has code from 50,000 people in it. Yet there haven't been any (public) cases of people sneaking malicious code in. How come?
If Windows had malicious code in it, would we be able to tell the difference?
Sure, I’m sure somebody who is going to go through the effort of slipping malicious code into Windows would also make sure to do some QA on it. So it would be suspiciously unbuggy.
That makes complete sense if your threat model is preventing data from leaving a secure network, assuming the USB drive stayed in the secure network or was destroyed after entering it.
Why would you need A/V on an air-gapped system?
I didn't expand on that but actually that system was part of a global network; entirely separate from the Internet. There was MS Outlook installed on the terminal nodes. One can see how somebody could become nervous about not having AV on the nodes and come up with a "protection" scheme like the one I described.
Air-gapped doesn't mean no data transfer. If there is data transfer, then viruses could get on it which will use up system resources.
The weak point is the shared USB device that copies from one machine to another, which seems to defeat the whole purpose of being air-gapped. You could have printed-and-OCR'd data three decades ago, so the air-gapped machine never reads anything from outside at all; these days a video stream and AI could probably automate that?
The things are much simpler: two parts, one with a blinking LED, the other with a photosensor. This is called a "data diode" and there are a lot of them.
Here is a random vendor with nice pictures: https://owlcyberdefense.com/learn-about-data-diodes/
> But what if I need to send data two-ways? Some systems cannot operate one-way, so they require a two-way solution. For these use cases, Owl has a unique bidirectional data diode solution – ReCon – that operates on two parallel one-way paths. Get all the security advantages of data diodes with the flexibility of a two-way solution.
…but…what? Why are we doing the blinking-light song and dance at all then?
Let's hope the photosensor processing software on the receiving end doesn't have any bugs that could be exploited.
Depends on the direction.
If the data diode points outward, like a power plant exporting its status to the web, then the photosensor can be completely taken over. Sure, the web page might be completely bogus, but there will be no disruption in the power plant's system. The hardware design guarantees it. That is the strongest case for data diodes.
If the data diode points inward, like a power plant getting new data from the outside, then sure, the photosensor software is a concern, but since it's relatively simple, this would not be my biggest worry. I'd worry about the app that runs on the target PC and receives files; if the file is an archive, about un-archiver exploits; and finally about the files themselves. If there's a doc, are you sure it's not exploiting Word? If there's an update, are you sure it's not trojaned? Are you sure users are not clicking on the executable thinking it's a directory?
So one-way IrDA?
Yes, but the vendor also gives some reasonable transmission software that will be able to transmit common protocols (like OPC/DB updates and so on) multiplexed and abstracting away the confirmationless medium.
An optic fiber.
Optic gas.
It has to be. Otherwise it is not air-gapped but vacuum-gapped!
If you're already using a data transfer mechanism that the human can't verify every character going over the line, why use infrared? What does that give over a USB cable or, gasp, an internet connection?
The idea is in the name. It is a "data diode". It lets data through in one direction and the data can't go in the other. Verifiably because it doesn't have the hardware for data to go the other direction.
I don't think this property can be guaranteed for the alternatives you proposed.
But surely malware is just "data", no? Or am I missing something.
The idea is that the malware could have infiltrated the system (probably) but couldn't have exfiltrated data from it.
So a data diode wouldn't stop a "Stuxnet" scenario where the malware is trying to sabotage the air-gapped system. But it would prevent secret information being leaked out.
(Btw, I'm just explaining what a data diode is and what guarantees it provides. I don't actually think that it would be useful in practice, because it feels too cumbersome to use, and therefore the users/IT would poke holes into the security it would provide otherwise.)
interesting, thank you.
There is a cheap way to test via the open source data diode workshop: https://www.github.com/vrolijk/osdd
Love to read your findings!
Why light instead of electricity: tradition, and a bit of quality assurance. For RS-232, cutting one line was fine. But modern devices are complex: Ethernet transceivers support auto-MDIX and your RX line might become a TX one with the flip of a bit, or your GPIO becomes an input instead of an output. You can fix it with a buffer, but optocouplers are cheap and look nice in slides.
Why not USB or internet:
The transmitter is totally safe from a compromised receiver. If you insert a USB stick to upload a file, it could maliciously pretend to be a keyboard. If you connect to the Internet to upload a file, your network stack can be exploited (and if you have a firewall, then the firewall must be exploited first - not impossible). Only a data diode lets you push data to the unsecure zone and not worry about getting infected in the process.
If the receiver has to be secure, things are not as clear-cut, but there are still advantages from the great reduction in complexity. None of the existing protocols work, so vendors usually implement something minimally simple to allow file transfer and maybe mailbox-like messages. This system will always have some risks present - even if you securely sent a PDF to the air-gapped site, it might still exploit the PDF viewer. But at least the malware won't be able to report status to C&C and exfiltrate the data.
So with this data diode I can install an application to use the PC speaker as an output device, and then record the sound for exfil? Nice.
Exfil ideas are always interesting to think about! The PC speaker idea may work, assuming:
(1) protected computer has a built-in PC speaker (for example, the computer I am typing this message on does not)
(2) There is an insecure PC with sound card and a microphone (or at least headphones which can be used as microphone)
(3) Secure and insecure PCs are close to each other, as opposed to being in different rooms
(4) It's quiet enough, and no one will notice the sounds (because PC speakers are crappy and can't do infra/ultra sound)
Likelihood of this succeeding depends on a lot of factors, the biggest of them being "how good is the security team". Presumably if they are buying data diodes, they at least have some knowledge?
Other exfil ideas I've read about were to emit sounds using the HDD, emit sounds by changing fan speed, blink coded messages on lights ("sleep mode" or caps/num lock), show special patterns on monitors to transmit RF, add hidden dots to printed pages, or abuse wireless keyboards or mice... There are many ideas and most of them are pretty impractical outside of very limited circumstances.
I can definitely imagine use cases where a network is air gapped internally for security but bidirectional transfer still takes place. The point is that humans are supposed to be in control of exactly what is transferred, in both directions (not feasible with a network connection, to my knowledge).
Yes, humans are in control, but in the case of Windows the humans that control the default behavior of the system when a USB device is connected are not the ones that are using it. Frankly, I wonder why implement an air gap if Windows is being used. Even in the case of Linux, a hardened configuration should be used.
Windows can be hardened as much as Linux and has less attack surface for supply chain attacks. At least, the latter holds if you believe Microsoft as a company is overall more secure and less compromised than tens of thousands of open source contributors working from home.
In both Windows and Linux the number of people contributing code is roughly within an order of magnitude. The difference is that with Linux we understand that every commit must be verified. We do not know to what extent Windows upholds that same standard.
I'm not discussing the general situation as the topic is too vast, just this particular case of OS reaction when a device is connected.
Why the false dichotomy? One should use an OS dedicated to security, e.g., Qubes OS.
Qubes OS is so much better than a traditional OS, but the separation is (at least in theory, should be!) weaker than an air-gapped system, as there is still a connection through software (and hardware) components.
But the air-gapped system turned out to be hacked because of the way USB devices are handled by the OS, something that can be very finely controlled in Linux. As for Windows, I didn't do any research, but either (1) it is controlled by Microsoft and you can't turn this automation off, or (2) it can be turned off but the technicians hardening these systems didn't do their job correctly.
> but the separation is ... weaker than an air-gapped system
Not necessarily: https://www.qubes-os.org/faq/#how-does-qubes-os-compare-to-u...
> But the air-gapped system turned out to be hacked because of the way USB devices are handled by the OS, something that can be very finely controlled in Linux.
This is one the key features of Qubes: All USB devices are isolated with hardware virtualization into a dedicated VM. It would protect against the USB attack.
I created such a system (though to transfer Bitcoin Transactions/Signatures from an airgapped system). The problem is that if you have a lot of bi-directional traffic, you'd want to automate the process of scanning/storing the information. Suddenly, you just have a slow USB device.
What you want is to minimize your data to less than 1 KB so that it can be manually transmitted.
Wouldn't it be easier to just have every port blocked except for a very simple application which has no privileges and just writes ASCII to some file? Such an application would be very easy to audit
You then need to trust that the kernel doesn't have a bug in the network stack. That trust might be justified, but keep in mind that even OpenBSD suffered a remotely exploitable vulnerability in their ipv6 stack ...
Until someone finds a bug in the network stack
I think the general point stands though. While nothing is perfectly secure, having small and understandable components that are fully audited should allow a high level of safety
If a network stack on a modern computer is too dangerous, then use a modem (silly example: apt install minimodem) and an aux cable from the one computer's speaker to the other's mic jack, or a serial connection (not very familiar with those, can't say how complex the driver is there) or something similarly basic that you can audit a memory-safe implementation of
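To make the minimodem idea concrete - the --tx/--rx modes below are real minimodem usage, but treat the whole setup as a sketch rather than a vetted design:

    # sending box: encode stdin as 300-baud audio out the speaker/aux jack
    echo "log line for the gapped side" | minimodem --tx 300

    # receiving box: decode 300-baud audio from the mic/line-in
    minimodem --rx 300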
You advocate for a really simple application layer, while having that on top of all the other complex communication layers. Implementations have had multiple known vulnerabilities over the years. In case of a vulnerability, an attacker might be able to do much more damage with real-time access. Is it any safer than a USB stick?
On top of the complex communication layer we're trying to avoid? Umm, I'm not suggesting to run an aux cable or serial connection on top of a TCP stack, so I don't understand what you're saying
Edit: or do you mean the other way around, namely running a network stack on top of this (e.g.) serial connection? Also not what I meant but I wasn't explicit about that so this confusion would make sense. What I had in mind is doing whatever comms you want to do with the airgapped system, like logging/storing the diplomatic transmissions or whatever this system was for, via this super simple connection such that the airgapped system never has to do complex parsing or state machines as it would with something like USB or a standard kernel's network stack
That's not an air-gapped system but mediocre op-sec at best.
Did you do it before or after BBQr and QR started getting broadly used in air-gapped hardware wallets such as ColdCard Q or Foundation Passport?
Way before. Transactions in Bitcoin are small and simple (unless you have lots of inputs). You only need a QR code generator and a transaction builder.
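The outbound half really is that small. A minimal Python sketch, assuming the third-party qrcode package (pip install qrcode); the transaction hex is a placeholder, not a real transaction:

    import qrcode

    # On the air-gapped signer: render the signed transaction as a QR code so it
    # crosses the gap optically instead of over USB.
    signed_tx_hex = "0200000001deadbeef..."  # placeholder

    img = qrcode.make(signed_tx_hex)  # returns a PIL-compatible image
    img.save("signed_tx.png")         # show on screen; the online machine scans it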
Yup, that's what I was thinking. Combining PSBT and QR is a very intuitive workflow. All the pieces are there waiting to be put together. Makes it more novel and impressive you did it way before.
You're on the right track in the sense that a key characteristic for a successful air gap is diligent human review of all the information that flows in and out.
Surely some government has come up with physically-unidirectional data transmission mechanisms for getting data onto airgapped networks. There has to be something more sophisticated than single-use CD-ROMs, even if it's just a blinking LED on one end and a photosensor on the other end.
> There has to be something more sophisticated than single-use CD-ROMs
But why, when DVD-Rs handle most use cases at a cost of < $0.25 each, are reliable and ubiquitous, the hardware is likely already there (unless you are using Apple - caveat emptor), and they close the threat vector posed by read/write USB devices?
Sometimes the simplest solution is the best solution.
DVD-R is read/write unless you are very careful to have read-only hardware on the destination device.
Even if the destination device were to write something to said discs, the optical media are cheap enough that it makes sense to destroy them (or archive them in case they become useful for forensic purposes) rather than reusing them.
Plus, compared to a USB form factor, one imagines it’s harder to sneak in circuitry that could retransmit data by unexpected means.
Why "very" ?
Also, if you think that the seller is lying to you, can't the drive be opened up and inspected to check for that kind of capability ?
You can always shred the DVD afterwards I guess!
I would guess having a CD/DVD drive opens another attack surface. Similar to why people glue their USB ports closed.
Right — but the question isn’t CD/DVD versus nothing. It’s CD/DVD versus USB; and which has a smaller attack surface.
I’d argue that read-only CD/DVD has a smaller attack surface than USB, so of the two, it’s preferable. I’d further argue that a CD/DVD (ie, the actual object moved between systems) is easier to inspect than USB devices, to validate the behavior.
The CD/DVD discs used can also be retained and later audited to verify what was moved to and from the systems.
Data diodes are commonly used: https://csrc.nist.gov/glossary/term/data_diode
I don't know if people class something connected using a data diode as airgapped or not.
Regular two-way IR diodes and sensors were standard on 90's business laptops for ordinary RS-232 file transfer between machines wirelessly. Before wifi or even ethernet was everywhere, and before USB and Bluetooth came along. The first smartphones had it too so you could dial up the internet on the road in the years before phones had a browser and stuff like that.
Yes. …and, indeed, that used to be a vector for hacking “air gapped” systems back in the day
Airgapped from the Internet - yes.
I have heard (on HN) of... 100 MBit ethernet with the transmit wires cut. Probably in the context of in-flight infotainment: plane data to infotainment yes, infotainment anything to plane control anything no. If it's stupid but it works...
Gigabit fiber works fine; Amazon sells splitters. Split TX and loop it back to RX on the TX side. The RX side just works.
You need to use a file transfer tool intended for unidirectional transfer (e.g. multicast), otherwise you will see failures from lost packets.
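A toy sketch of the sending side in Python, using crude repetition instead of real forward error correction (the address and chunk format are made up for illustration):

    import hashlib
    import socket

    DEST = ("10.0.0.2", 5005)  # hypothetical receiver behind the diode
    REPEAT = 3                 # blind redundancy: no ACK can ever come back

    def send_file_oneway(path: str) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        with open(path, "rb") as f:
            data = f.read()
        chunks = [data[i:i + 1024] for i in range(0, len(data), 1024)]
        for _ in range(REPEAT):  # resend everything; receiver dedupes by sequence number
            for seq, chunk in enumerate(chunks):
                digest = hashlib.sha256(chunk).digest()[:8]  # lets the receiver drop corrupt chunks
                sock.sendto(seq.to_bytes(4, "big") + digest + chunk, DEST)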
If you don't require high speed just use RS232.
It was used to connect network monitors (packet capturing devices) to ensure that ARP or a bug or misconfiguration wouldn't reveal the existence of that device on the network.
Which ironically describes them evolving into software-driven gateways.
SD cards can also have a switch to make them read only.
They have a switch that requests the host doesn't try to write, but it's not read-only:
> It is the responsibility of the host to protect the card. The position [i.e., setting] of the write protect switch is unknown to the internal circuitry of the card
https://en.wikipedia.org/wiki/SD_card#Write-protect_notch
Ah, security via the honour system. Unbeatable.
It's a usability feature, not security. For camera/floppy-like usage, it's there to prevent accidental write/erase errors, which are quite common when managing a large stash of cards.
The CIA would never spy on me: they promised!
Good old UART without the RX connected on one side.
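And the software side stays pleasantly dumb; a sketch with pyserial (the device path is hypothetical):

    import serial  # pyserial

    # Transmit-only: with the RX line physically cut, nothing the receiving
    # side does can ever reach this process.
    port = serial.Serial("/dev/ttyUSB0", baudrate=9600)
    port.write(b"status: all pumps nominal\n")
    port.close()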
Alas, this can be reconfigured without anyone noticing, assuming compromise on both sides.
A diode / photosensor can't.
Yeah, they exist: data diodes or data guards. They operate at currently available line speeds and there are hundreds of thousands in operation. Data diodes are favored by OT companies; governments tend toward data guards, as they have more robust inspection.
Exactly. Air-gapped means no data going in and out of systems. USB sticks are acting as a cable.
> The weak-point is the shared USB device that copies from one machine to another which seems to defeat the whole purpose of being air-gapped...
Yup. I was going to post that TFA and the people at these embassies apparently have a very different definition from what most people consider an air-gapped system.
Pushing the nonsense a bit further, you could imagine they'd recreate Ethernet, but air-gapped, using some hardware allowing only one packet in at a time, but both ways:
"Look ma, at this point in time it's not talking to that other machine, so it's air-gapped. Now it got one packet, but it's only a packet in, so it's air-gapped! Now it's sending only a packet out, so it's air-gapped!".
Yeah. But no.
> TFA and the people at these embassies apparently have a very different definition of what people consider an air-gapped system.
And Wikipedia? Which says:
> To move data between the outside world and the air-gapped system, it is necessary to write data to a physical medium such as a thumbdrive, and physically move it between computers.
Source: https://en.m.wikipedia.org/wiki/Air_gap_(networking)#Use_in_...
How would you get data or even the OS itself to the machine under your definition?
Lol. Even if it's with the QR code, it will not be safe. If you can read a bit, you can read a file. Security is a moat, and the hacker is a catapult. For any sufficiently complex system, any metric of security will be incomplete, or it ignores that the system is Turing complete and uncomputable. Security is about intelligence in all layers of the stack, from the electron to the application and even the front door. A USB exploit attacks a driver or the OS. A QR code attacks the application. There are other ways to exploit besides breaking and entering. Sometimes it's about influence. In the age of AI, the entire internet and all knowledge could be shifted to reframe a single organization to make an exploit possible. Pandora's box is wide open. It's pouring out. Even a machine on the internet can be secure, but an air gap is only the transport layer. It's a false sense of security. You need to be worried about the full stack, because that's the only way to be safe: to never be safe - the eternal guard and gaze. The vigilance. Security in layers. Security in depth.
Arguably the QR-code-based approach would be much safer, as it would be much simpler to implement and audit.
Moving a USB key between two windows machines sounds as bad of an idea as it can get for airgapped data exchange.
Only if it's done right; when I first made this image, the scanners built into phones straight up executed the JavaScript: https://benwheatley.github.io/blog/2017/04/2017-04-10-11-15-...
Sure, but if you have resources to spend validating the security of a tool, would you rather validate the QR parser or the USB stack?
There are forensic USB write blockers which sit in front of the USB device and prevent host writeback to the device.
This is an old attack vector. No one is learning from history. The organizations being hit have poor cybersecurity.
https://en.wikipedia.org/wiki/2008_malware_infection_of_the_...
Super old… my first experience with a “virus” was an Amiga boot sector attack from 1986!
At the time, the Morris worm had inspired some folks to see if they could spread binaries by infecting every disk inserted. That's all it did... spread. I think the virus lived off an interrupt generated by disk insertions.
Fortunately it was harmless (except for a few extra crashes) and I had my original OS disks that could be booted from to clean up the disks.
Just in case anyone isn't aware of this history - the "Morris worm" being referred to here is named after Robert Morris who wrote it. He's also one of the co-founders of YC, which built HN.
Why would you go through all the hassle of setting up an air-gapped system, only to stop short of enforcing strict code signing for any executable delivered via USB?
Just the fact that one can insert a USB drive into the air-gapped system amazes me. I remember my days as a contractor at NATO and nothing could be plugged into those machines!
I guess the problem is that most air-gap guides and practices out there focus on sucking the "air" out of computers - internet, networking, Bluetooth, etc. - from the get-go ("remove the network card before starting!"). But even air-gapped systems need some sort of input/output, so a keyboard, mouse/trackpad, and displays will be connected to it - all pretty much vectors for an attack; base software will be installed (making supply-chain attacks possible); and USB drives and even local networking may be present.
As a general rule, I'd say anything that executes code in a processor can be breached to execute malicious code somehow. Signing executables helps, but it's just another hoop to jump through. In fact, I thought the threat in OP was about a USB firmware issue, but alas, it was just an executable disguised with a folder icon that some user probably clicked on.
To make things worse, critical-hardware vendors' (trains, power plants...) fondness for Windows is notorious. Just try to find *nix-compatible infrastructure hardware controllers at, say, a supplier like ABB, which (among many other things) makes hydroelectric power-plant turbines and controllers: https://library.abb.com/r?dkg=dkg_software - spoiler, everything is Windows-centric, and there are plenty of unsigned .EXEs for download on their website. This is true in many other critical industries. It's so common it's scary that these things could be compromised and the flood gates, literally, opened wide.
Air gaps are easily enforced and require absolutely zero technical knowledge.
You just need a PC and then have a CD delivered through a trusted source – embassies should already have a way of ensuring physical integrity of their mail.
The technical knowledge needed for code signing, especially now with trusted hardware modules, is orders of magnitude more complicated than that.
Not just knowledge: code signing is going to be a lot of whack-a-mole work dealing with every tool you use. I’d expect that to cost more than you expect and get political blowback from whoever needs tools which get broken.
Why worry about the blowback? That's the corpse talking. If I hear "this disrupts our workflow", I'm even more confident that I should rip off the band-aid.
Offices that don't follow security practices get uncovered because they never called for help; another chance for drifters on autopilot to walk away from the job because it just got too hectic; stop paying licenses for a bunch of tools you didn't realize you were paying for and don't need; find replacements for all the tools that are not actively maintained or don't have cooperative maintainers.
It's a healthy shake-up, and our society at large should be less scared of making decisions like these.
You’re assuming that everyone shares the IT security department’s priorities. If you tell someone senior that they can’t use a tool they need, you might learn that they have political clout as well – and the context here makes that especially plausible.
That's a good point!
> Why...
What is your priority?
(1) Ensuring Actual Security
(2) Following the Official Security Theater Script
In most government orgs, idealists who care about #1 don't last very long.
That sounds like a complete nightmare; so much code isn't signed that you're going to have an incredible number of false positives.
This really does not deserve a huge writeup.
Employees (unknowingly(?)) using infected USB drives caused security problems. Well imagine that.
As several others pointed out, the USB ports on the secure server should all be fully disabled.
In addition, I would suggest leaving one rewired, seemingly available USB port that will cause a giant alarm to blare if someone inserts anything into it.
Further, all information being fed into the secure machines should be based on simple text files with no binary components, to be read by a bastion host with a drive and driver that will only read those specific files it is able to parse successfully, and write them out to the destination target - which I would suggest be an optical WORM device that can then be used to feed the air-gapped system.
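Something like this parse-or-reject filter is what that bastion host could run; a sketch in Python, where "simple text" is my assumption of printable ASCII plus whitespace:

    import sys

    # Printable ASCII plus tab/newline/carriage return; everything else is rejected.
    ALLOWED = set(range(0x20, 0x7f)) | {0x09, 0x0a, 0x0d}

    def is_clean_text(path: str, max_bytes: int = 1 << 20) -> bool:
        with open(path, "rb") as f:
            data = f.read(max_bytes + 1)
        if len(data) > max_bytes:
            return False  # refuse oversized files outright
        return all(b in ALLOWED for b in data)

    if __name__ == "__main__":
        sys.exit(0 if is_clean_text(sys.argv[1]) else 1)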
> As was the case in the Kaspersky report, we can’t attribute GoldenJackal’s activities to any specific nation-state. There is, however, one clue that might point towards the origin of the attacks: in the GoldenHowl malware, the C&C protocol is referred to as transport_http, which is an expression typically used by Turla (see our ComRat v4 report) and MoustachedBouncer. This may indicate that the developers of GoldenHowl are Russian speakers.
This is quite a stretch. So we have nothing so far.
Any malware production outfits that aren't using adversarial stylometry in this market are leaving money on the table. Just plain bad business sense.
As soon as the article started describing malware being installed upon insertion of a USB thumb drive, I had to Ctrl-F for "Windows", and indeed, of course that's the OS these machines are running.
I'd be really curious to hear of stories like this where the attacked OS is something a little less predictable/common.
As a Linux user, I'll defend Microsoft here and say that I'd rather suspect it's a sign of Windows' prevalence than of Windows' (un)safety. Around the Snowden leaks I had a different opinion, but nowadays I feel like those calling the shots at Microsoft have realised that security is no longer an optional component or merely a marketing story.
> I feel like those calling the shots at Microsoft have realised that security is no longer an optional component or merely a marketing story
I dunno - if a company has for more than two decades (2002: https://www.cnet.com/tech/tech-industry/gates-security-is-to...) said that security is the top priority, and they keep reiterating that every now and then (2024: https://blogs.microsoft.com/blog/2024/05/03/prioritizing-sec...), yet they still don't actually seem to act like it, I'm pretty sure they still see it as an optional component/marketing story.
> I dunno, if [they've been saying it for 25 years], yet they still don't actually seem to act like it
That's what I'm saying though: from my point of view, they've started to act like it in the last ~20 years. If you've got evidence to the contrary, feel free to share it.
From my POV, they're about as perfect as the average other for-profit, which is not very security-in-depth at all, but it's not just a marketing sham anymore either, the way it used to be. From BitLocker to Defender to their security patching and presumably secure coding practices, it's not the same company it was when they launched XP. A lot of the market seems to have grown up and, at least among our customers, we're finding fewer trivial issues.
At any rate, this subthread started by saying this standard Windows setup shouldn't be used in the first place. I'm all for not using closed software, but then the question rather becomes: who do you think is deserving of your trust in this scenario?
Actually, if security is the goal, I'd expect them to use a security-oriented OS (e.g., Qubes OS).
Speaking of Snowden, and since we're at the State actor level, both Windows and Intel CPUs (and maybe also Ryzen CPUs) have to be assumed to be backdoored by the NSA.
Whether that is a threat worth dealing with for the concerned embassies is another question of course.
Unless I'm missing something, this doesn't rely on something really advanced and low-level like USB drive firmware, but a classic flaw that's existed in Windows for almost 30 years:
> It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
It's just another variant of the classic .jpg.exe scam. Stop hiding files and file extensions and this hole can be easily closed.
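For reference, Explorer's hiding behaviour lives in per-user registry values; a sketch that turns it off (HideFileExt=0 shows extensions, Hidden=1 shows hidden files; Explorer needs a restart to pick the change up):

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
    "HideFileExt"=dword:00000000
    "Hidden"=dword:00000001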
An air gapped system should not allow regular users to run random executables under any circumstances, much less directly off a USB drive. Windows probably is not suitable for such use.
Windows shouldn't have been used in a serious environment since Windows 95 and ubiquitous networking. NT just made it harder to attack the system, but not the user accounts.
I agree, if you change "windows" to "commercially-available OS". If it can run elf or exe files, you lose.
> Ctrl-F, Windows.
Ahem, "air-gapped".
Any decent Unix system has udev- or hotplug-based mechanisms to disable every USB device other than non-storage ones. Any decent secure system wouldn't allow the user to exec any software besides what's in their $PATH. Any decent system wouldn't allow the user to mount external storage at all, much less execute any software on it.
For air-gapped systems, NNCP under a secure Unix (OpenBSD with home mounted as noexec, sysctl security tweaks enforcing rules, and such) is a godsend.
Securelevel https://man.openbsd.org/securelevel.7
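The noexec part of that setup is a single fstab line; a sketch (the disklabel DUID is made up):

    # /etc/fstab sketch: users can store files under /home but not execute them
    52fdd1ce48744600.e /home ffs rw,nodev,nosuid,noexec 1 2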
Isn't this how Stuxnet got into the Iranian facilities?
I think for that one they bribed/coerced an insider to explicitly insert a bad USB stick into the system.
Am I the only one that finds it incredible that an air-gapped device has enabled USB ports? If you want to bring data to it, use a freaking CD/DVD-ROM. You may bring all sorts of crap in, but if, let's say, the air-gapped machine is reimaged from CD/DVD every day and nothing ever leaves it, who cares?
The keyboard and mouse were probably USB.
Aren't there locks to deal with that?
Because you could also say that anyone can add a USB drive by plugging it directly into the motherboard...
I don't think anyone in the EU govt sat down and thought about security beyond the air gapping.
We don't know if it's the EU; it's likely one member state.
I don't know anything about security, but why does an airgapped system even have a USB drive? Seems obvious to me that you want to disable all IO systems, not just internet? OK, sure people can still take photos of the screen or something, but that would require a willing collaborator.
It's pretty normal for airgapped systems to have USB drives, typically you're trying to keep data from getting out more than coming in. The problem here was that they were letting drives go from the classified side to the unclassified side.
You generally want to avoid getting malware into your network, but it is even more important to avoid allowing for exfiltration of data. So the "copy via USB-stick" serves a purpose and makes it MUCH harder to exfiltrate data.
Use a DVD-R with a read-only drive on the air-gapped machine. Much easier to audit than an evil USB.
I’m a bit disappointed the mechanism to exfiltrate data is based on sharing the USB between an internet-connected and air gapped devices. It would have been cool if it used some other side channel like acoustic signals.
I felt like the article spent way too many words to explain the idea of "the agency shared data across the air gap using USB drives, and a vulnerability was used to surreptitiously copy the malware onto the USB and then onto the target machine", and AFAICT none on explaining what that vulnerability is or why it exists (or existed). Then the rest is standard malware-reversing stuff that doesn't say anything interesting except to other malware reverse engineers. The inner workings of the tools aren't interesting from a security perspective; the compromise of the air gap is.
(As for acoustic etc. side-channel attacks: these would require a level of physical access at which point the air gap is moot. E.g. if you can get a physical listening device into the room to listen to fan noise etc. and deduce something about the computation currently being performed, and then eventually turn that into espionage... you could far more easily just directly use the listening device for espionage in the form of listening to the humans operating the computers.)
There was no novel vulnerability. The pwned machine just replaced a recently-accessed folder on the stick with an exe to trick the user into executing it on the target machine.
Yeah, it is very bloated. I am suspicious that the article was bloated by AI rather than a human, though. I wonder if they either made the first section as a summary or extended sections unnecessarily.
For example, early on it says: " collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems."
and later on: " they were used, among other things, to collect and process interesting information, to distribute files, configurations, and commands to other systems, and to exfiltrate files."
It also mentions several times that the attack on a South Asian country's embassy was the first time this software was seen.
Repeating info like this was kind of a sign of partially-applied AI edits with RAG a while ago; it might still be true today.
Yup, no respect for the people who published the article. It was one paragraph of content impossibly diluted. TL;DR: some idiots allowed USB sticks to be plugged into the supposedly air-gapped system. Hilarity ensued.
Such side channel attacks are academic. In fact, someone on HN pointed out there's a researcher who invents new ones by the dozen, and media run with it whenever he presents another one.
It's fun, and not hard to come by. Everything anything does - which includes everything an air-gapped computer does - constantly radiates information about its doing, at the speed of light (note: think causality, not light). We know the data is there, and inventing new and interesting ways to tap into that stream is a really nice hobby to have.
You probably mean Dr. Mordechai Guri - all his arXiv mentions (a lot!) are for unconventional tactics for compromising air-gapped systems.
I mean, someone who researches the security of air-gapped computers continually coming up with new ways to break them seems like the expected outcome. It's their job, after all.
I would start by asking what they need computers for.
You don't really need one to read text from a screen. Most of that would be old documents that for the most part should be public. What remains besides reading is most likely 95% stuff they shouldn't be doing.
The most secure part is the stuff we wish they were doing.
I’m having a real hard time understanding what this comment is saying. Are you asking what high side computers are used for besides reading classified information?
Maybe. I could also be asking why you would use a computer if all you want is to read documents.
If you have an operator send a telegram for you, that person is capable of doing a lot more with your text than you want. On the other end is another telegram operator, further increasing the risk. You might want to send a letter instead. It's slower but more secure.
If you want to read text from a monitor, a computer is super convenient, but like the operator it can do other things you don't want. You don't need a computer to put text on a screen. Alternatives might be slow and expensive, but in this case you don't have to send things to the other side of the world. That would be the thing you specifically don't want.
The foundations of computer science were once mostly academic.
One of my favorite hacks of yore: somehow some folks managed to compromise the iPod to the point that they could run some of their code, and make a beep.
They compressed the ROM and "beeped" it out, wrapping the iPod in an acoustic box, recording it, and then decoding the recording to recover the ROM.
I think it was more of a “click”
Back in the PS/2(?) days, I had a joke equalizer plugin for Winamp that used the 3 LEDs on your keyboard. Another output device!
Just wait till neuralink gets hacked and people themselves become the side channel.
This is the plot of most of Ghost in the Shell. That series looks more and more prescient as time goes on. Another big plot point is that most of the internet is just AIs talking to each other. 10 years ago that sounded ridiculous, now not so much.
"Ralfi was sitting at his usual table. Owing me a lot of money. I had hundreds of megabytes stashed in my head on an idiot savant basis, information I had no conscious access to. Ralfi had left it there. He hadn't, however, came back for it." -- Johnny Mnemonic, William Gibson, 1981
Also how super-sensitive data may be kept in physical books and papers, albeit in a form scannable by optic implants.
If you're a gamer, you should try Cyberpunk 2077 :D Currently playing it, at over 200 hours, and it really feels like a scarily accurate, techno-dystopian version of our world.
You don't need that; just make the air-gapped system give odd error messages that people will google across the gap.
It is my view that television and other propaganda can hack persons.
You might like the movie Videodrome[0].
https://en.wikipedia.org/wiki/Snow_Crash , or much of https://en.wikipedia.org/wiki/Philip_K._Dick , especially https://en.wikipedia.org/wiki/The_Man_in_the_High_Castle or https://en.wikipedia.org/wiki/The_Man_in_the_High_Castle_(TV...
Or https://en.wikipedia.org/wiki/The_Giver / https://en.wikipedia.org/wiki/The_Giver_(film)
Or https://en.wikipedia.org/wiki/The_Congress_(2013_film)
Or Nineteen Eighty-Four and so much more... (yawn)...
Or
I am not sure why you are being downvoted. Just like fridges, cars, ovens gained internet access, enhanced humans will be extremely likely to be, eventually -- and possibly with interesting consequences -- hacked.
You can already hack people by just telling them things. Many of them will do dumb shit if you just use the right words.
I like the analogy. Let's explore it a little.
<< You can already hack people by just telling them things.
True, but language fluctuates, the zeitgeist changes, and while the underlying techniques remain largely the same, what nation-state would not dream of being able to simply have people obey when it tells them to behave in a particular way? Yes, you can regiment people through propaganda, but what if you could do it more easily this way?
To offer a contributory not-really-metaphor for viewing things: After a "grey goo" apocalypse covers the world in ruthlessly replicating nanobots, eventually there arise massive swarms of trillions of allied units that in turn develop hivemind intelligences, which attempt to influence and "hack" one-another.
I am one of them, so are you, and I just made you think of something against--or at least without--your will.
> True, but language fluctuates, zeitgeist changes and while underlying techniques remain largely the same
This applies to software as well
> Yes, you can regimen people through propaganda, but what if it you could do it more easily this way?
Widespread use of BCIs would help with this for sure, but don’t be under the impression that individual and population level manipulation techniques haven’t progressed well past simple propaganda.
<< don’t be under the impression that individual and population level manipulation techniques haven’t progressed well past simple propaganda.
I absolutely buy it, based merely on the glimpses of documents from various whistleblowers over the years. At this point, I can only imagine how well-oiled a machine it must be.
Certainly people would like an API for others without needing to reverse engineer them. Agreed that there is a threshold of simplicity past which it becomes easier to organize than having to give speeches and run propaganda.
Isn't doomed. Over and above, one must employ language that will disorient in punctuated chronology.
Like the January 6 question, I’m assuming that anyone who had a neuralink would likely be ineligible for any sort of clearance to access information like this.
I am not as certain. Sure, Musk and his product are no longer 'cool' given his move to the US political right, but tech is tech. Some tried banning cell phones and whatnot, and the old guard there had to adjust their expectations.
In short, I am not sure you are right about it. If anything - and I personally see it as a worst-case scenario - use of that contraption will be effectively mandatory, the way having a cell phone is now (edit: if you work for any bigger corp and want to log in from home).
As far as I am aware, no electronic devices from outside, and no devices that transmit anything, are allowed in these high security areas. That’s inclusive of cell phones, for example.
That is: the point I am making is more nuanced than whether something is popular (like cell phones or other tech).
Oh, I am sure there are restrictions for the rank and file, but the higher-ups with such access can (and apparently do) get exceptions[1], and while this is one of the more visible examples, I sincerely doubt he is the only one.
[1] https://www.wired.com/2017/01/trump-android-phone-security-t...
That's not really true; in that context, security will largely be a solved problem.
Using chips with a secure architecture, safe languages and safe protocols is going to result in secure implants.
Not to say there might not be some new vulnerability, but I disagree with this idea people love to repeat that security is impossible.
What are you smoking, we hear about breaches of super important databases all the time and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence. Not to mention social media's entire business model is hacking people - keep them on your platform by any means necessary.
> What are you smoking?
Facts and reality, I guess?
> We hear about breaches of super-important databases all the time, and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence.
I'm not sure why you think this is counter to my point (perhaps we should wonder what you yourself are smoking?), which to reiterate was that:
1. Most current security issues are due to the various insecure foundations we build our technology on, and
2. By the time Neuralink type implants are common, that won't be the case anymore.
We have both cars and pacemakers that can kill people if you send the right wireless commands. Why would Neuralink be different?
I agree that we do have the technology to make it secure if we want to. We made flight software secure back in the '80s or so.
What we don't have, is the incentives. We've built everything on insecure foundations to get to the market cheaper and faster. These incentives don't change for Neuralink. In fact, they create kind of gold rush conditions that make things worse.
What could change things dramatically overnight would be the government stepping in and enforcing safety regulations, even at the cost of red tape and slow bureaucratic processes. And it's starting, slowly. But, e.g., the EU is promoting SBOMs, so their underlying mental model is still one where you tape random software together quickly.
> Why would Neuralink be different?
At some point in the future no one will be using x86 or any variation, and we will all be using a secure architecture. Same as with insecure languages, far enough in the future, every language in common use will be safe.
I believe by the time brain implants are common, we will be far enough in the future that we will be using secure foundations for those brain implants.
> What could change things dramatically overnight would be the government stepping in and enforcing safety regulations,
For a damn brain implant I don't see why they wouldn't.
I can tell you're high because of #2. The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and don't replace it with something equally bad or worse.
Oh, and Musk isn't allowed a Neuralink tripwire to blow up your brain via his invention because he saw pronouns listed somewhere and got triggered.
Nah, not high, just experienced.
> The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and don't replace it with something equally bad or worse.
Oh man, you've ingested that anti-capitalism koolaid like so many young college kids are so quick to do. It's always such a shame.
This isn't really anything to do with capitalism; it's a question of regulation, e.g. what the FDA does, and also a question of time, because when enough time passes, most computing will be secure by default, having gotten rid of the insecure foundations.
And more than that, it's an issue with democracy more than capitalism. Fix the way people vote if you want to fix the world, or prevent the types of people who want to believe the earth is flat from having a vote at all.
I'll believe this pipe dream when I see it, that's all I'm saying.
Security will never be a "largely solved problem", when there are humans involved (and probably even when humans are not involved).
There is no technical solution to people uploading high-res photos with location metadata to the social network du jour. Or the CEO who wants access to all his email on his shiny new gadget. Or the three-letter agency that thinks ubiquitous surveillance is a great way to do its job. Or the politician who can easily be convinced that backdoors usable only by "the good guys" exist. Or the team that does all its internal chat, including production secrets, in a 3rd-party chat app, only to get popped and have their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket-closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first to market.
None of the people involved in bringing "enhanced human" tech to market will be immune to any of those pressures. (I mean, FFS, in the short term we're really talking about a product that _Elon_ is applying his massive billionaire brain to, right? I wonder what the media-friendly equivalent of "Rapid Unscheduled Disassembly" is going to be when Neuralink starts blowing up people's brains?)
> Security will never be a "largely solved problem", when there are humans involved (and probably even when humans are not involved).
It absolutely will. I didn't say completely solved, I said largely solved.
> There is no technical solution to people uploading high-res photos with location metadata to the social network du jour.
Bad example honestly, since most social media sites strip out EXIF data by default these days. Not sure there are any that don't.
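The stripping itself is trivial, for what it's worth. A minimal sketch in Python of what such an upload pipeline might do, assuming the Pillow library (the function and file names are hypothetical, not any site's actual code):

    from PIL import Image

    def strip_exif(src_path: str, dst_path: str) -> None:
        img = Image.open(src_path)
        # EXIF blocks (including GPS coordinates) live in the file's
        # metadata, not in the pixels, so copying only the pixel data
        # into a fresh image drops them on re-encode.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

    strip_exif("upload.jpg", "upload_clean.jpg")  # hypothetical file names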
> Or the CEO who wants access to all his email on his shiny new gadget. Or the three-letter agency that thinks ubiquitous surveillance is a great way to do its job. Or the politician who can easily be convinced that backdoors usable only by "the good guys" exist. Or the team that does all its internal chat, including production secrets, in a 3rd-party chat app, only to get popped and have their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket-closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first to market.
Yes yes, humans can be selfish and take risks and be bribed and negligent and blah blah blah.
The context of the comment was Neuralink implants getting hacked the way an out-of-date smart TV might be. When it comes to the actual tech, security will be a solved problem, because most of the problems we see today are due to everything being built on top of insecure foundations on top of insecure foundations.
if Neuralink became pervasive like smartphones I'd join the Amish
> I am not sure why you are being downvoted.
Trigger-happy emotional non-intelligence.
the-computer-wears-sneakers-net
Wouldn't have happened had they used seL4.
I'd hope there's some EU investment in it now.
it seems PEBKAC all the way down
A good design would protect against PEBKAC somewhat.
I would bet that those air-gapped systems are running some version of MS Windows.
Let me guess. Someone installed a TCP-over-airgap utility.
> This may indicate that the developers of GoldenHowl are Russian speakers.
Journalists need to check their biases and ensure that everything they write is balanced. When mentioning that the developers might be Russian speakers, a good balancing sentence would point out the other countries where Russian is spoken. Just throwing in "Russian speaker" after explicitly stating they're not sure which nation-state did this is extremely unprofessional.
Sure, mention all the facts. Don't try to interpret them as "clues". If you have to, make sure you're not building a narrative without being absolutely sure.
It's not good journalism to go from `transport_http` to implying that this is an attack by the Russian Federation. How many readers will retain the fact that the author does NOT know which nation-state, if any, did this?
The most likely nation-state is the one with the most Russian speakers: Russia.
You don't arrest the most likely guy, you arrest the guy you have evidence on.
It also doesn't seem like a good assumption: https://sourcegraph.com/search?q=context:global+transport_ht...
I'm actually seeing some organizations deliberately forbidding air-gapped systems. The upsides no longer outweigh the downsides. While the speed at which attacks can be implemented is lower, they are more difficult to detect. An air-gapped system still needs to be updated and policed. So someone has to move data into it, for software updates at least. But the air-gap makes such systems very difficult to monitor remotely. Therefore, once an attack is ongoing it is harder to detect, mitigate and stop.
love it
tldr: The breach relied on careless human(s) carrying a USB key to and from the air-gapped systems. All the clever technology would have been for naught had the staff used robust physical security procedures.
What protocol would you have recommended?
Using any kind of storage media to transfer data to a Windows machine is by default a disaster waiting to happen.
Windows natively allows executables to embed icons (known as resources) that the file manager renders in place of a generic icon. This, combined with the default of hiding file extensions for known types (e.g. .exe), is a recipe for a user eventually executing the malware instead of opening the file or directory they wanted.
This malware exploits that very fact by naming itself after the most-recently modified directory on the drive and embedding an icon that ensures that the file manager will render it as a directory.
If you ensured by policy that file extensions were never hidden, that resources were not rendered (every exe got the default icon [1]), and that every user received regular training to properly distinguish files from each other (and files from directories), this risk could be somewhat managed. Good luck; I don't even know if you can disable resource rendering.
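The extension half of that is at least scriptable. A minimal sketch in Python of the per-user Explorer registry values involved, assuming a Windows host (a real deployment would push these via Group Policy, and Explorer has to restart before they take effect):

    import winreg

    ADVANCED = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ADVANCED, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 0 = never hide extensions for known file types
        winreg.SetValueEx(key, "HideFileExt", 0, winreg.REG_DWORD, 0)
        # 1 = also show "super hidden" protected OS files
        winreg.SetValueEx(key, "ShowSuperHidden", 0, winreg.REG_DWORD, 1)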
USB can be OK, but you need something like a staging machine to scan the files before entry, plus a write-block device on the USB drive. These are commonly used in forensics.
https://www.nist.gov/itl/ssd/software-quality-group/computer...
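As a minimal sketch of what the staging machine might do, in Python, with hypothetical paths (a real setup would run full AV engines rather than just a hash list, as discussed below):

    import hashlib
    from pathlib import Path

    MOUNT = Path("/mnt/usb")  # media mounted read-only behind a write blocker
    known_bad = set(Path("ioc_hashes.txt").read_text().split())  # hypothetical IOC list

    for f in MOUNT.rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            if digest in known_bad:
                print(f"BLOCKED: {f} matches known-bad hash {digest}")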
This also tends to be a supply-chain and insider threat.
Scan them with what exactly? If you're hinting at AVs — I honestly doubt they could be useful against novel state-sponsored malware.
It would have caught it once Kaspersky reported it, assuming they keep their AV definitions up to date.
A write blocker prevents transfer back to the USB drive, which is the exfiltration mechanism here.
How about Qubes OS with minimal templates? https://www.qubes-os.org/doc/templates/minimal/
See also: https://www.qubes-os.org/faq/#how-does-qubes-os-compare-to-u...
Based on other comments here, typically the USB key is destroyed after the data is copied into the network. No data is allowed to exit the air-gapped network.
Read-only media or destroying the media after use is a reasonable mechanism to protect against data exfiltration.
I'm not sure how you protect against infiltration though. A computer system that cannot get data in is pretty useless methinks.
I don't know, but maybe NOT USING systems with a ShowSuperHidden feature would help?
Such a thing just HAS TO BE a helper for creating malware; what else could it be for? Definitely for circumventing human users.
Good job, Microsoft! Autoexec.bat is proud of you! /s
I wouldn't use Windows at all. USB media? Authenticated and encrypted, with some system like NNCP and a little multiplatform Go-based GUI (or heck, Tcl/Tk) on top.
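Not NNCP itself, but a minimal Python sketch of the "authenticated and encrypted" part, using the cryptography package's Fernet (authenticated encryption, so a payload modified in transit fails loudly instead of being opened); the file names are hypothetical:

    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()  # shared out-of-band between both sides
    f = Fernet(key)

    # Sending side: encrypt-and-authenticate before the file touches USB.
    token = f.encrypt(open("report.pdf", "rb").read())
    open("/mnt/usb/report.bin", "wb").write(token)

    # Receiving side: decrypt() verifies the MAC and raises on tampering.
    try:
        data = f.decrypt(open("/mnt/usb/report.bin", "rb").read())
    except InvalidToken:
        raise SystemExit("payload was modified in transit")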
Not just that. You can blame USB, but the question is still how the malware got to run on the target system. Did the user double-click on the malware? Did it exploit Explorer while trying to preview a file? Did it modify the USB stick's firmware so that it sends commands that exploit the Windows USB storage driver? Something else?
So the interesting TLDR, to me, is this:
> [The malware on the infected computer] finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, [...]. We also believe that the component uses a folder icon, to entice the user to [click on] it when the USB drive is inserted in an air-gapped system
So the attack vector is "using a transfer medium where data can be replaced with code and the usual procedure [in this case: opening the usual folder] will cause the code to run"
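That exact trick is also mechanically detectable after the fact: look for a hidden directory sitting next to an .exe carrying its name. A minimal sketch in Python, assuming a Windows host and a hypothetical drive letter:

    import os
    import stat
    from pathlib import Path

    DRIVE = Path("E:/")  # hypothetical USB drive letter

    def is_hidden(p: Path) -> bool:
        # st_file_attributes is only available on Windows
        return bool(os.stat(p).st_file_attributes & stat.FILE_ATTRIBUTE_HIDDEN)

    for d in DRIVE.iterdir():
        if d.is_dir() and is_hidden(d):
            impostor = d.parent / (d.name + ".exe")
            if impostor.exists():
                print(f"suspicious pair: hidden dir {d} + executable {impostor}")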