310 pages of text, 500 pages of C code in the appendix - this could use a supplemental GitHub page.
The source code is at https://github.com/Dwayne-Phillips/CIPS
You might find this interesting as well:
2000-2003, both are prehistoric. We have neural networks now to do things like upscaling and colorization.
I see it the same way I see 'Applied Cryptography'. It’s old C code, but it helps you understand how things work under the hood far better than a modern black box ever could. And in the end, you become better at cryptography than you would by only reading modern, abstracted code.
Yes, those methods are old, but they’re explainable and much easier to debug or improve compared to the black-box nature of neural networks. They’re still useful in many cases.
Only partially. The chapters on edge detection, for example, only have historic value at this point. A tiny NN can learn edges much better (which was the claim to fame of AlexNet, basically).
That absolutely depends on the application. "Classic" (i.e. non-NN) methods are still very strong in industrial machine vision applications, mostly due to their momentum, explainability / trust, and performance / cost. Why use an expensive NPU if you can do the same thing in 0.1 ms on an embedded ARM?
A NN that has been trained by someone else, on unknown data, with unknown objectives, and containing unknown defects and backdoors can compute something fast, but why should it be trusted to do my image processing? Even if the NN is built in-house, overcoming the trust issues, principled algorithms have general correctness proofs, while NNs have, at best, promising statistics on validation datasets.
I wonder if classical processing of real-time data as a pre-phase, before feeding it into a NN, could be beneficial?
Yes, it’s part of the process of data augmentation, which is commonly used to avoid classifying on irrelevant aspects of the image like overall brightness or relative orientation.
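As a concrete illustration of that kind of pre-step, here is a minimal sketch (names and the fixed target mean are my own choices, not from the book) of shifting a grayscale image to a fixed mean brightness, so a downstream network never sees overall-illumination differences:

```c
#include <stddef.h>

/* Illustrative pre-step: shift every pixel of a grayscale image so the
 * image's mean brightness equals target_mean, then clamp to [0, 255].
 * This removes overall-brightness variation before pixels reach a NN. */
void normalize_brightness(unsigned char *px, size_t n, int target_mean)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += px[i];
    int shift = target_mean - (int)(sum / (long)n);

    for (size_t i = 0; i < n; i++) {
        int v = px[i] + shift;
        px[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }
}
```

Classical geometric fixes (deskewing, re-orienting) would slot into the same pre-phase; augmentation proper instead generates extra perturbed training copies.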
"The chapters on edge detection, for example, only have historic value at this point"
Are there simpler, faster and better edge detection algorithms that are not using neural nets?
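The classical answer is convolution with small gradient kernels, e.g. the Sobel operator, which is the kind of thing the book's C code covers. A minimal sketch (image size, border handling, and the |Gx|+|Gy| magnitude approximation are my own simplifications for illustration):

```c
#include <stdlib.h>

#define W 5
#define H 5

/* Illustrative Sobel edge detector on a row-major grayscale image.
 * Border pixels are left untouched (assumed pre-zeroed by the caller).
 * Gradient magnitude is approximated as |Gx| + |Gy|, a common
 * embedded-friendly shortcut that avoids sqrt(). */
void sobel(unsigned char in[H][W], unsigned char out[H][W])
{
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            int sx = 0, sy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++) {
                    sx += gx[j + 1][i + 1] * in[y + j][x + i];
                    sy += gy[j + 1][i + 1] * in[y + j][x + i];
                }
            int mag = abs(sx) + abs(sy);
            out[y][x] = (unsigned char)(mag > 255 ? 255 : mag);
        }
    }
}
```

"Better" is where learned detectors win on natural images; for speed, determinism, and debuggability on an embedded target, a fixed kernel like this is hard to beat.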
Classical CV algorithms are always preferred over NNs in every safety critical application.
Except self-driving cars, and we all see how that's going.