Proper decoupling capacitor practices, and why you should leave 100nF behind
by zdw
The advice to use 1uF or 2.2uF for typical projects is good. It's common to see higher-value capacitors in modern reference designs because this is well known among people who actually engineer circuits instead of copying and pasting 20-year-old wisdom.
Don't go to a larger package to get more capacitance, though. Physically larger capacitors have worse high-frequency performance; the package itself is part of the limiting factor. Very high speed designs will prefer 0201 capacitors.
Also keep in mind the distribution of your decoupling capacitors. Putting a single 2.2uF on a board in place of multiple 100nF caps distributed around the PCB would be a mistake. The decoupling capacitor needs to be physically close to what you're trying to decouple.
For hobby projects and microcontrollers, most of this just doesn't matter: pick a capacitor and put it on the board. For real high-speed work you have to consider the layout. The number, size, and location of vias around the capacitor have a big impact. Loop area is also a big factor, so don't use narrow traces or locate capacitors far away.
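The package-size point can be sanity-checked with the usual series-LC model: a capacitor self-resonates at f = 1/(2π√(ESL·C)), and the ESL is set mostly by the package. A quick sketch in Python, with illustrative ESL figures (real values vary by vendor and mounting, so check the datasheet):

```python
import math

def srf_hz(c_farads, esl_henries):
    # Self-resonant frequency of a capacitor modeled as a series LC
    return 1.0 / (2.0 * math.pi * math.sqrt(esl_henries * c_farads))

# Rough, illustrative ESL per package; real parts differ
esl = {"0805": 1.0e-9, "0603": 0.8e-9, "0402": 0.5e-9, "0201": 0.3e-9}

for pkg, l in esl.items():
    print(f"100nF {pkg}: SRF ~ {srf_hz(100e-9, l) / 1e6:.0f} MHz")
```

Above its SRF the part looks inductive regardless of the marked value, which is why the package, not the capacitance, dominates at high frequency.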
On your note about capacitor sizes — at my first EE job, my boss taught me about capacitance-voltage derating[0] for ceramic capacitors and it was quite the revelation. There is a significant inverse relationship between the two, which no one tells you about in college!
I'm now very careful to pick ceramic capacitors with enough headroom on their rated voltage as you lose a lot if you're close to the rated value. This curve is dependent on the different ceramic types as well (C0G, X7R, etc). Cheaper ceramics have a steeper rolloff.
For personal projects I am very careful to pick higher quality ceramics (X7R if I can) and use caps rated to 2-3x my operating voltage. Likely overkill, but I'm not optimizing for cost at volume.
[0] https://resources.altium.com/p/voltage-derating-ceramic-capa...
I think 100nF is arguably still a safer option for people who are just going to copy the datasheet. As mentioned at the end of the post, lots of low-inductance capacitance can reduce the phase margin of your linear regulators, and you can very quickly end up with a hundred decoupling caps on a small board. This is of course a solvable problem, and sometimes it's not a problem at all, but with big, low-inductance power planes and caps littered across them at various distances from each other and the regulator, it's horrendously difficult to predict when it will start to be a problem.
The solution is to additionally stuff a bigass electrolytic capacitor on the rail, either tantalum or aluminum. The ESR of the electrolytic will damp out the naughty high-Q tendencies of the ceramics and everything will work out wonderfully in typical cases. (Of course there are pathological cases out there. There are always pathological cases. And if there ever stop being pathological cases, I'll be out of a job!)
At this point—thank you!—a Zachtronics game was born in my head. I’d like to play it!
(Maybe it’s a good secret level in that Zachtronics game about nondeterministic infinitesimals portrayed as getting things done in a corporate environment;… what was the name of that one again?)
Having more decoupling capacitance on the board will increase inrush current as well, as all the capacitors have to be charged up once power is connected. Using larger decoupling capacitors than necessary might mean you'll have to add measures to decrease inrush current where you'd otherwise not need to.
Soft-start is often good design anyway, though, and isn't hard to add these days.
How many circuits cannot afford 100us of softstart?
I guess to your point though: charging 100uF of capacitance (because of a lot of 10uF caps) to 3.3V over that 100us startup time still means 3.3A of average inrush. So you still can't go crazy spammy.
While 100x 100nF caps is only 10uF altogether.
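The arithmetic above is just Q = CV spread over the soft-start time; a quick check:

```python
def avg_inrush_amps(c_farads, v_volts, t_seconds):
    # Average current to charge C up to V in time t: I = Q/t = C*V/t
    return c_farads * v_volts / t_seconds

# 100uF total (ten 10uF caps, say) charged to 3.3V over a 100us soft-start
print(avg_inrush_amps(100e-6, 3.3, 100e-6))  # ~3.3 A

# versus 100x 100nF (10uF total) over the same soft-start
print(avg_inrush_amps(10e-6, 3.3, 100e-6))   # ~0.33 A
```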
Or just open the simulator/charts on the capacitor manufacturer website and look at which capacitor filters which frequencies at which temperatures?
Apparently most EEs don't do this... I've seen decoupling caps in designs that basically do nothing.
In my opinion, this is bad advice.
Peak currents are very much a thing for anything powered by battery or over USB. When you connect your USB gadget, you don't want it to exceed the USB spec and fry your USB port (or USB hub). That means there is a limit to how much total capacitance your USB gadget can have before the power surge of connecting its cable becomes an issue, so you usually need to use the smallest caps that can do the job. You typically need multiple decoupling caps all over the board, so using one that is too large very quickly adds up. That's how you end up using 100nF instead of 1uF.
Also, the article complains that 100nF caps have their filtering peak at the wrong frequency, but I'd argue that 20MHz to 40MHz is exactly in line with the rise and fall times of modern ICs, meaning that 100nF caps would even work better than 1uF for those ICs. As an example, look at the PCM5242 datasheet which suggests 100nF caps and their switching times are in the 16ns to 20ns range. Looking at the "impedance plot" in the original article, that means the red line is most suitable ... which is the 100nF 0402 capacitor.
The article would be correct if you are working on things using old (slow) ICs and a dedicated power supply. But if you're working on USB power with modern ICs, I believe going from 100nF to 1uF is a step in the wrong direction.
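For what it's worth, the rise-time argument above can be sanity-checked with the common bandwidth rule of thumb f_knee ≈ 0.35/t_rise (a first-order estimate, not gospel):

```python
def knee_freq_hz(t_rise_seconds):
    # Rule of thumb: most signal energy lies below ~0.35 / t_rise
    return 0.35 / t_rise_seconds

for tr in (16e-9, 20e-9):  # PCM5242-class switching times per its datasheet
    print(f"{tr * 1e9:.0f} ns edge -> ~{knee_freq_hz(tr) / 1e6:.0f} MHz")
```

That lands at roughly 17-22 MHz, right at the bottom of the 20-40MHz band where the 100nF part's impedance dips.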
I'm glad I'm not the only one who was weirded out by the glossing over of the "it's better in the 20-40MHz range, I guess?" punch line. That's kind of exactly the frequency band where you need the reinforcement.
The article does get it right that you need to reduce the parasitic elements to the chip, but you have to consider like, everything - not just the wires to the package, but also the lead frame, bond wires, the wiring in the chip. Usually the chip designers modeled that all out and they put decoupling capacitors on chip and did PCB model simulations that includes some specific assumption about the impedance curves of the off-chip caps, and they probably used 0.1uF in their simulations.
If anything, you need closely placed decaps to prop up higher frequencies, not lower frequencies. Remember if you have a SPI bus clocking at 25MHz, 25MHz is just the fundamental - you have the whole fourier series going up to 100's of MHz on the edge.
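To make the harmonic point concrete (a square-ish clock puts most of its energy in the odd harmonics, with the edge rate setting how far up they extend):

```python
clock = 25e6  # SPI fundamental from the example above
for n in (1, 3, 5, 7, 9):  # odd harmonics of a roughly square wave
    print(f"harmonic {n}: {n * clock / 1e6:.0f} MHz")
```

So even a modest 25MHz bus needs the PDN to look low-impedance well past 200MHz, which is where close placement and low loop inductance matter far more than the marked capacitance.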
The answer I had always seen looking at the chip models is that there is an off-chip capacitance value below which it does not make sense to use because the bond wires effectively isolate the chip above certain frequencies (i.e., while smaller value capacitors have higher SRF it doesn't matter because the chip can't "see" the capacitor due to the bond wires screening it out).
If you knew where that roll-off was, and you knew the curve of your capacitors + board parasitics, you'd place a cap as close as you can to the chip in the frequency band right below the roll off of the bond wires to prop up that zone. Then, you'd place larger caps farther away because, as the article notes, the inductance goes up but also you're just looking to prop up the higher impedance-at-lower-frequencies curve of the tiny cap that's close to the chip.
So a lot of it depends on the exact chip you're working with and how well designed it is. A classic chip design team would have an expert who did all the parasitic modeling of the package and board, and then they'd do a noise analysis on the chip and recommend a minimum on-chip decap so the board designers don't have to worry too much; they can get away with "almost anything" in the 0.1uF range.

Unfortunately chip design teams are getting leaner and leaner these days and I don't see the same level of care being put into chips. I think we more or less get away with it because there is so much margin in the chip timing; also, modern chips are "mostly" (>50%) fill cells -- e.g. decoupling capacitors -- placed right up against the logic gates, so you can get away with bloody murder on the package and power distribution networks. (Background: modern chips are wiring-limited, not transistor-limited, but for process stability reasons you still need to make transistors everywhere at a uniform density, so they instantiate dummy transistors wired as capacitors between power and ground.)
Where it really starts to matter is if you had e.g. a PLL and you're trying to reduce noise that the loop filter can't get rid of and in those cases often times you need much smaller capacitors because they have much better performance at higher frequencies. Yes, they suck at low frequencies - but your noise problem isn't in the 1MHz band anyways; the loop filter can track that out. It's going to be in the 100MHz+ range.
And as someone noted elsewhere, in-rush current is a real problem, and too much capacitance can cause a problem for regulator stability; especially the extremely high performance ceramics. And, if you're doing an extremely power efficient design you may need to consider factors like leakage and losses due to CV-energy cycling if you shut down significant portions of the design when not in use.
(edits for clarity)
Oh yeah, I completely forgot to mention proximity. 10x 100nF caps might be superior to 1x 1uF simply because the latter can only be close to 1 pin whereas the former can be close to 10 power supply pins.
Looking at the left diagram under the "Decoupling capacitor placement" headline here:
https://jmw.name/projects/exploring-pdns/
... it is very obvious that the cap being 1cm away will already cause much worse degradation than what going from 100nF to 1uF could ever improve.
Many modern chips have multiple power input pins. Using smaller caps close to all of them will do much better than fewer bigger better caps, but with more distance.
I like this article a lot, but it doesn't hammer home the fullest, easiest statement of this kind of lazy-best-effort decoupling: pick the smallest package you are willing to deal with, then buy the biggest value of capacitor you are willing to pay for in that package. Loop area really does rule all, and if you don't know that, you're going to have a hard time of it.
The article also doesn't do a particularly good job of making the argument against relying on the "notch" (seen here at 25–40MHz), which is that the notch moves. It moves around with just about any change in... anything... so you can either pay the heavy price to genuinely control it (it can be actually worthwhile to drop a notch on things in certain analog applications; think knocking out a DAC clock frequency in a reconstruction filter) or you can ignore that the notch exists. Usually that's the easier option!
>Oh yeah, I completely forgot to mention proximity. 10x 100nF caps might be superior to 1x 1uF simply because the latter can only be close to 1 pin whereas the former can be close to 10 power supply pins.
That's the point of the article, though. 1uF caps are now available in package sizes smaller than 100nF caps were when the rule of thumb originated. You can get a 16V-rated 1uF cap in 0201 nowadays, so proximity really isn't a problem.
I second the general rule of thumb: stacking decoupling capacitors is extremely rarely needed nowadays. Pick your size, put the largest capacitor you can get in that size (or, if you're paranoid and think the manufacturers might be pushing things, go one size smaller) as close to the chip as you can, and maybe assess if you need some bulk capacitance as well, but more likely you are liable to wind up with too much capacitance.
> assess if you need some bulk capacitance as well, but more likely you are liable to wind up with too much capacitance
Remember also that most bulk capacitor types bring in some ESR, and the associated damping can really help a PDN. If you're too lazy to simulate, at least leave a footprint for a tantalum or aluminum capacitor!
The article says:
"If you have a lot of devices powered off a single rail, placing lots of high-value decoupling capacitors will add up, so pay attention to inrush current. If you’re sticking 10uF decoupling caps on 20 devices then that’s 200uF. Maybe dial it back a smidge."
In the recommended application circuit for a PCM5242RHB DAC with a TPA6120A2 headphone amp, I already count 22 decoupling caps. Adding an STM32F446 CPU adds another 21 decoupling caps.
So if I were following the advice in the article, that would be 43uF of total capacitance just for a small CPU and headphone sound output. Above about 10uF at 5V, you're running into issues with the USB spec (when not using power delivery negotiation, which necessitates additional components and beefy MOSFETs).
So while the article mentions that too much capacitance can be an issue, following the advice in the article will pretty much make sure you run into exactly that issue.
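As a rough sketch of why that total matters: USB 2.0 caps a device's effective capacitance at plug-in at about 10uF before you need explicit inrush limiting, and the cap count comes from the datasheets cited above.

```python
usb_bulk_limit_f = 10e-6  # approximate USB 2.0 plug-in capacitance budget

n_decoupling_caps = 43    # 22 (PCM5242 + TPA6120A2) + 21 (STM32F446)
per_cap_f = 1e-6          # if every cap were bumped to 1uF per the article
total_f = n_decoupling_caps * per_cap_f

print(total_f / usb_bulk_limit_f)  # ~4.3x over the budget
```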
But such a board likely would have a buck converter between the USB 5V input and those components and their capacitors. Just use a buck with soft start and that can effectively hide the bulk capacitance from an inrush perspective, no? Then you only count inrush for what’s in front of the buck, which won’t be nothing but can be easily controlled and designed to meet the inrush spec.
Plenty of designs are going to not use a traditional buck converter and instead use something cheap and easy like the AMS1117-33 to just linear-regulate their way to "it works good enough" for everything under 1A.
> There are two cases where I would recommend caution:
> 1. If you have a lot of devices powered off a single rail, placing lots of high-value decoupling capacitors will add up, so pay attention to inrush current. If you’re sticking 10uF decoupling caps on 20 devices then that’s 200uF. Maybe dial it back a smidge.
The conclusion of the article specifically mentions inrush concerns as one of the reasons to use large values. For the notch itself, relying on the difference between 0.05 Ohm and 0.02 Ohm decoupling is gonna make for a bad time, especially given how much that notch will move across DC bias and temperature.
What I didn't like about the article is that he picks a couple of 1uF capacitors that he knows have good specs and then adds to the graph some random 0.1uF cap without any specs or part numbers. How do we know this is apples to apples? We don't. It's almost like he ran out of time to write the article when he got near the end. Too bad.
Interesting... My father, who was an electronics engineer, always put 100nF decoupling caps on his boards. And that was on stuff that wasn't powered by USB: industrial control boards, controlling electrovalves and high-voltage AC circuits (10-20kV at around 10kHz) driving ozone generators. And he never had issues with electrical noise on his boards.
For pluggable power there are some other considerations you shouldn't neglect, like accidentally building a decent-ish-Q series LC with the input capacitors: https://www.analog.com/media/en/technical-documentation/appl...
USB-C sorta solves this because it starts at 5 V and steps up after being plugged in, but then you have issues with arcing on unplugging (see: USB-C connector spec, one of the last appendices deals with this).
Critical information for those that aren't aware: MLCC capacitance decreases with applied DC voltage, like, a whole fucking lot. [1]
Those 10uF/100V/X7R/1210 capacitors you love for your space constrained designs might only be 1uF at 48V. And it gets worse when choosing smaller package sizes.
This caught me completely off-guard. I'd always thought an MLCC with a reasonable dielectric at a given capacitance would perform at least as well as an electrolytic or tantalum (minus the fire hazards).
[1] (PDF) https://www.digikey.com/Site/Global/Layouts/DownloadPdf.ashx...
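To put numbers on it, the shape of a DC-bias curve looks something like the table below. These fractions are made up for illustration; always read them off the manufacturer's chart or simulation tool for the actual part.

```python
# Hypothetical derating curve for a "10uF/100V/X7R/1210" style part
derating = {0: 1.00, 12: 0.75, 24: 0.45, 48: 0.12, 100: 0.04}

c_nominal_f = 10e-6
for v_bias, fraction in derating.items():
    print(f"{v_bias:>3} V bias: {c_nominal_f * fraction * 1e6:.1f} uF effective")
```

With a curve like this, the "10uF" part is holding roughly 1.2uF at 48V, which is the kind of surprise the parent is describing.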
Also, pin the selection waterfall from your capacitor manufacturer to the wall - or at least bookmark it in your browser. Here's one from Kemet: [1]
Screenshot: https://i.imgur.com/sMaXBpN.png
If you really need to be on the lowest column (highest capacitance) for a given voltage rating, you'll either pay for it in voltage derating, temperature performance, tolerance accuracy, package height, or just pay for it in literal cash.
You cannot go below the lowest column; no one has figured out how to build a 10uF/25V/X7R/0603 MLCC. That is just not a thing you can buy.
With a given dielectric, materials science only goes so far. You're leaving performance on the table if you select a given package size with less capacitance and a lower voltage rating than what's available (assuming decoupling, not analog stuff where you need exactly 438.6pF for a particular resonant frequency or something). Each package size has an essentially constant inductance, and usually capacitor height isn't that critical; you don't want to be oversquare, but they don't sell many of those. Each manufacturer publishes a waterfall diagram, but all manufacturers are working with the same physics.
Conversely, if you've selected an X7R dielectric and an 0603 package for a decoupling capacitor, there's no great reason to go with a 0.1uF value, or to restrict yourself to a 6.3V rating - e.g. [2]. They make a 0.47uF 25V capacitor that's otherwise identical! [3] And because designers are lazy and default to 100nF, the part with 1/5th the performance is literally 6% more expensive!
Note that for 0402 packages, the 100nF capacitor is typically the right part to select! You can't get a 120nF/X7R/0402 at any voltage rating above 6.3V, the 220nF and 470nF are exotic parts that sacrifice stability and accuracy for maximum capacitance in a volume, but a 100nF/16V/X7R/0402 is a pretty good default.
[1] https://content.kemet.com/datasheets/KEM_C1002_X7R_SMD.pdf
[2] https://www.digikey.com/en/products/detail/kemet/C0603C104K9...
[3] https://www.digikey.com/en/products/detail/kemet/C0603C474K4...
Notably as the capacitance goes up in a given package size this effect increases dramatically. Every good manufacturer provides a plot of this for each product.
That's because of the MLCCs you are considering; you can get ones with different voltage coefficients if you want, like ones that keep 95% at 100V, or even better. They cost more and use different materials.
Extreme parts cost a whole lot more
I was once caught by this issue too. The problem was that I didn’t know about the basic phenomenon (MLCC losing capacitance with voltage) so I didn’t know that I was expecting extreme parts.
They're generally bigger as well. That's more or less the tradeoff: the denser the energy storage, the worse all other attributes get.
The traditional fix back in the 1980s was to use Tantalum capacitors which had good high frequency response. Unfortunately they turned out to be dependent on oppression for their source material and also tend to short at the smallest possible disturbance.
The cheap option was an electrolytic cap and a 0.1uF disc ceramic. The self-inductance of a wound aluminum capacitor tended to be OK at the 120Hz hum frequency but horrible at the higher frequencies found in TTL and CMOS logic.
Those also turned out to self-destruct over time due to the extensive use of an incomplete, stolen trade-secret electrolyte formula in many low-cost capacitors (the "capacitor plague").
I'd still use both, personally.
Tantalum and electrolytic capacitors are primarily for bulk storage and lower frequencies. They don't substitute for small ceramic capacitors in this context.
Switching regulator frequencies have become much higher since the 80s. It's common to have switching regulators operating in the MHz range with small ceramic output capacitors and relatively low value inductors.
I think the setup in GP is something like 0.1uF tantalum + 47uF electrolytic. 0.1uF stops CPU crashing and electrolytic stops CPU passing out. Tantalums fail short and have since been replaced with ceramics that fail open, of course.
> I think the setup in GP is something like 0.1uF tantalum + 47uF electrolytic. 0.1uF stops CPU crashing and electrolytic stops CPU passing out.
You might find a configuration like that in some old retro computing gear, but a modern 1uF ceramic chip capacitor will outperform a 0.1uF tantalum significantly. You don't need to use an electrolytic for 47uF. You can get 47uF in one or two surface mount ceramic caps.
I was surprised to find that you can, indeed, get 47 uF 35 volt MLCC surface mount caps, but they'll set you back about 10x the cost of the same capacitance in a surface mount electrolytic. In some cases, it might be worth it.
Most of the repairs I do are to older test equipment and radios, from WWII to the end of the 20th century. Some of the new surface mount stuff is just too small for us to work with. 6TTSOP is a chip 2 by 1.25 mm, and it's really hard to tack leads onto it to patch it into an existing circuit. (We needed 20+ dB of gain at 2 GHz to replace an obsolete part; we ended up ordering a MAR-6+ instead, at least it's big enough to solder.)
True. A couple of big ceramic caps on the secondary side of the power circuit and the cheapest 0.1uF peppered around is going to make sense. Chip-mounter machines probably like it too.
I'd love to see some practical tests done. Like just layout a microcontroller PCB and use different decoupling caps and see how that affects things like max working SPI frequency or something.
I made a test PCB with capacitor footprints repeated at various intervals, with measurement ports for controlled experiments. You can really see the performance difference between two and four layer PCBs, for example: https://jmw.name/projects/exploring-pdns/
Nice one!
I don't suppose you ever did the measurement with big-V decoupling, and with and without the big electrolytic? That would have been really interesting.
Maybe this fits the bill? https://www.youtube.com/watch?v=ARwBwHZESOY
Like many engineering problems, filter design rarely has a universal solution. Many MLCCs exhibit resonant piezoelectric and/or electrostrictive effects, and thus offer marginal noise-floor performance in VHF/UHF LNAs, sometimes introducing more problems than they solve.
The main reason MLCC are so popular... is the price. =3
It's also probably dependent on who made them and the engineering of the materials they used. After all, even if every cap meets the spec, there are several paths to get to the spec.
This was really well written. As someone who isn’t an electrical engineer but dabbles in making PCBs, I learned a lot.
But most importantly, as the author concludes, I will probably keep lowering my mental overhead and put in 1uF or 2.2uF capacitors as my decoupling caps from now on.
What a wonderful article. I really appreciated the graphs of the frequency-dependent impedance to help illustrate what might've been just left at "V-shaped plot." I've always been in awe of folks who do analog design. I used to work doing driver development for some pretty sophisticated radios, and the analog side of dealing with multi-GHz signals was completely over my head. Software folks think in terms of "get some bits here, move some bits over there" and can easily forget about how complicated that is at the analog level.
Fewer, larger caps is probably correct for most hobby projects per this post, but many high-speed parts will recommend using tiny caps on the order of 1-10 nF underneath large BGAs plus some extra bulk capacitance (1-10 uF parts). It is really all about doing the math.
Since these ceramic decoupling capacitors can be made so small nowadays, and since very low-inductance capacitors are suited to handle today's high-frequency switching noise, maybe it is worth the cost for IC packages to incorporate a small (low-inductance) capacitor directly in the package, as that would be much closer to the silicon's power and ground than anything placed on a PCB. Then the PCB would only need the larger (>=1uF) capacitor.
Just after I typed the above, I see Intel has such a patent filed in 2001 (https://patents.google.com/patent/US20050156280A1/en). So maybe that patent has prevented other companies from adopting the practice. (Having decoupling capacitors already in the package would sure make hobbyist PCB design much easier, since it is so hard to deal with tiny SMD parts.)
Capacitors can be embedded in chip packages, but it adds cost, size and complexity. Bonding two chips inside of a package increases the overall size considerably. You would have trouble connecting from the main IC to pins on one side of the chip, crossing over the capacitor, for example.
Every production board will have capacitors in other places, so removing one of them at the expense of increasing chip size and PCB area isn't a good trade.
For high density designs with high budgets, you can embed capacitors and other passives directly into the circuit board in cutouts. You can also use special capacitance layer substrates to form a big distributed capacitor between two copper planes in the PCB.
There are a lot of options out there, it just doesn't make sense in most cases because putting a capacitor that costs less than $0.01 around the chip is trivial.
Most of the advanced chips (the 16 nm that Xilinx uses for UltraScale/+ and below) are flip-chip wafers and have an interposer which is basically a very dense PCB that helps fanning out the extremely dense and small pitch of flip-chip bumps. They will usually include extra low impedance ("landscape" orientation) capacitors on the substrate, which leads to much relaxed PCB decoupling requirements.
Having designed FPGA boards with both their 7th generation parts and their Zynq UltraScale parts, the internal capacitors are such a time and cost saver in terms of being able to fan out more signals without more PCB layers
I can also attest that even relatively "slow" chips like 14 nm FinFET MPUs from Renesas have decoupling caps on the substrate
Do you make SoMs for a reseller, by any chance?
I'm looking at the newest Versal chips from Xilinx/AMD for a new design, and buying a SoM and designing our own carrier board could fit the bill nicely. We're still very early in the design process; we need to get prices for the chips too, to see if it's an idea worth pursuing.
>“But Graham,” I hear you protest, “I see these practices recommended in vendor’s datasheets all the time! Surely they can’t be wrong? They’re professionals!”
>The 100nF value for decoupling became so entrenched because it works well enough* most of the time, so you don’t need to even think about it. By eliminating trivialities you can focus your brain-juices, spoons, or whatever else you want to call them on more challenging tasks.
So the argument in this article is that we should ignore this triviality and spend more time precisely tuning decoupling capacitor selection on top of designing the rest of the system?
Software people would call this premature optimisation, no?
Note that "in the 80s" in the article seems to refer to the late 80s - CMOS was kind of rare in the earlier 80s, when IIRC most things were TTL or NMOS.
What about for microcontrollers? I've seen some sequences of fairly specific capacitor values for 3.3V power in some microcontroller datasheets, and I always wondered how much that mattered.
Usually the value isn't a big deal for "power rails" which expect to have additional capacitors connected elsewhere. However, sometimes microcontrollers expose decoupling pins for use with internal voltage converters and regulators (for example the core voltage on older STM32 parts), and those require very specific values that correspond with the tuning of feedback networks and power converter design within the IC
It's hard to generalize. Common microcontroller clock speeds range from 1 to 500 MHz. An 8-bit microcontroller running at 1 MHz will often work fine without any decoupling at all. A 32-bit microcontroller might not even boot without a ceramic capacitor nearby.
The basic answer is basically what the original article says. If you don't want the mental burden of figuring it out, the spec gives you safe defaults that should work for almost all uses. But it's almost never the case that you need to do it that way.
I was always a fan of using 100uF and 100nF near a chip, and something like 1000uF for the entire board... but I don't usually do modern stuff, mostly retro boards, if I can still get parts.
Just use 3 terminal capacitors. Crazy low inductance, no rules of thumb needed.
https://www.kyocera-avx.com/docs/techinfo/CeramicCapacitors/... by Kyocera AVX
https://product.tdk.com/en/techlibrary/solutionguide/3tf03.h... by TDK
tl;dr: ESL is dominated by packaging inductance so the geometry of the capacitor matters a lot more than the value
3 terminal capacitors are the cheapest "low inductance" capacitor that have a meaningfully better ESL. If you need better than that you should use an EM field solver to properly understand PDN impedance.
Yes, but the regular 2 terminal kind round to free at JLC on my hobby projects!
Free > better; I've only used 3-terminal caps once, out of curiosity.
"currently under development"
Seems to be targeted at a different audience than the article.
Excellent article ... if you read it.
If you don't really read it, YMMV.
> Now, when I said that 100nF “works well enough” above, what I really mean is your circuit usually doesn’t completely break if you use a 100nF decoupling capacitor. But given that the cost to use a larger, better capacitor is effectively nil in most cases
This is simply not true.
That Samsung cap he quoted is about 1 or 2 cents in volume from Digikey or Mouser. That cheap Chinese quote means that they are probably substituting inferior parts.
By contrast, the 0.1uF (100nF) part in the same size is at least an order of magnitude cheaper. This matters a lot, as you can wind up with a lot of bypass caps on your board (a significant percentage of 100 isn't uncommon). In addition, you can get a 10V rating instead of 6V, which means you don't have to worry about USB transients destroying your cap, and you get much better bias derating.
However, this article has a kind of fundamental misunderstanding:
"Bypass" caps (mostly) aren't about charge storage.
The point of a "bypass" cap is to provide a return path for high frequency signals--the "bypass".
All electrical signals require a circuit--that's a full loop. That loop goes positive power supply->chip A power->chip A out->chip B in->chip B gnd->negative power supply.
In the case of slow signals, it is fine for that loop to be that big. The problem is that as the signal speed increases, the resistance/capacitance/inductance of that loop all the way back to the power supply gets bigger and bigger and starts slowing everything down.
You use your bypass capacitor so that the loop looks like chip A bypass (positive)->chip A power->chip A out->chip B in->chip B gnd->chip A bypass (negative). That loop is a LOT smaller than going the whole way back to the power supply. Which means that you want your capacitor to look a whole lot like a short circuit at the frequencies of interest, which 0.1uF(100nF) does.
In fact, given how much faster signals are nowadays, you probably want to use 10nF bypass caps, but that's an argument for another day.
As for RF bypassing, you almost always have to go to small value 0402 and 0201 in values like 100pF or lower, anyway. So, this discussion is mostly moot for RF.
Yes, if I have to use a ceramic 1uF capacitor for some other reason already, I won't sweat the idea of it serving as a bypass capacitor. But I'm certainly not going to upvalue all my nice, cheap 100nF bypasses.
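The "looks like a short at the frequencies of interest" criterion is easy to check with the standard series R-L-C capacitor model. A sketch, with assumed ESR/ESL values typical of a small MLCC (real parts vary):

```python
import math

def cap_impedance_ohms(c_f, f_hz, esr_ohms=0.01, esl_h=0.5e-9):
    # Magnitude of a real capacitor's impedance: series R-L-C model
    reactance = 2 * math.pi * f_hz * esl_h - 1.0 / (2 * math.pi * f_hz * c_f)
    return math.hypot(esr_ohms, reactance)

for c in (10e-9, 100e-9, 1e-6):
    row = ", ".join(
        f"{f / 1e6:>3.0f} MHz: {cap_impedance_ohms(c, f):6.3f} ohm"
        for f in (1e6, 30e6, 300e6)
    )
    print(f"{c * 1e9:>5.0f} nF -> {row}")
```

Below resonance the larger value wins on impedance, but by a few hundred MHz the package inductance dominates and all three values converge, which is why loop geometry matters more than the marked value up there.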
> That Samsung cap he quoted is about 1 or 2 cents in volume from Digikey or Mouser. That cheap Chinese quote means that they are probably substituting inferior parts.
No, they aren't. LCSC is a very reputable distributor that has lower margins by having much cheaper labor and having absolutely insane economies of scale.
Important subject, but impossible to read on a phone due to animated ads for every second paragraph
UblockOrigin, on a phone, blocking the javascript, resulted in reading the whole article with zero ads.
> Proper decoupling capacitor practices, and why you should leave 100nF behind
Interesting article with a lot of good points. However, without an EMC radiated emissions test, the rant is useless. Especially when it resorts to "monkeys".
I read this in electroboom's voice. Much more entertaining that way.