In the case of MCPs, this post is indeed a quick primer. But from a coding standpoint, and despite the marketing that Agent/MCP development simplifies generative LLM workflows, it's a long coding mess, and it's hard to tell whether it's even worth it. It's still the ReAct paradigm at a low level, and if you couldn't find a case for tools before, nothing has changed other than the Agent/MCP hype making things more confusing and giving more ammunition to AI detractors.
Yes, I read this post and was actually emotionally affected by a post about coding. I was surprised how sad I felt. I’ve been around for a long time but this truly feels like the best era if you like gluing trash to other trash and shipping it.
If you need to define and write the functions to calculate interest… what exactly is the LLM bringing to the table here? I feel like I'm missing something.
The LLM is what decides which endpoint/tool to call (or none at all) in response to the user input.
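To make that concrete, here's a minimal sketch of that tool-selection loop, using the OpenAI chat-completions API purely as an example backend; the calculate_interest function and its schema are hypothetical stand-ins for whatever you expose:

    import json
    from openai import OpenAI

    client = OpenAI()

    def calculate_interest(principal: float, rate: float, years: int) -> float:
        # Deterministic business logic stays in ordinary code, not in the model.
        return principal * (1 + rate) ** years - principal

    # JSON-schema description of the tool so the model knows when/how to call it.
    tools = [{
        "type": "function",
        "function": {
            "name": "calculate_interest",
            "description": "Compute compound interest earned.",
            "parameters": {
                "type": "object",
                "properties": {
                    "principal": {"type": "number"},
                    "rate": {"type": "number"},
                    "years": {"type": "integer"},
                },
                "required": ["principal", "rate", "years"],
            },
        },
    }]

    messages = [{"role": "user",
                 "content": "How much interest on $1000 at 5% over 3 years?"}]
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools)
    msg = response.choices[0].message

    # The model either answers directly or requests a tool call; our code
    # runs the tool and feeds the result back for the final answer.
    if msg.tool_calls:
        call = msg.tool_calls[0]
        result = calculate_interest(**json.loads(call.function.arguments))
        messages.append(msg)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": str(result)})
        final = client.chat.completions.create(model="gpt-4o-mini",
                                               messages=messages)
        print(final.choices[0].message.content)
    else:
        print(msg.content)

The LLM's whole contribution is the routing decision and extracting the arguments from free-form text; the arithmetic itself is plain Python.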
The original 2022 ReAct paper is still the best explainer: https://arxiv.org/abs/2210.03629
I think it's the case that you don't need to, but you can if you find it necessary. Basically you're augmenting LLMs with "normal computer power", just like a human.
maybe some of these could fit? https://python.langchain.com/docs/tutorials/
You know it's going to be a great article when the design is from 1995
The units for the free memory are interestingly wrong: 'Executing shell command: free -m' reports "The total system memory is 64222 bytes, with used (available) 8912 bytes", but free -m gives its figures in mebibytes, not bytes.
which, given that there seems to be no way to specify any data structure or typing in this MCP interface, is hardly surprising!
This website design is blessed. A great return to the past
I'm just distracted by the "WALLA" in the penultimate paragraph.
It should be "Voilà", which is French for “there it is”.
Even the name takes us back in time - SPARC
MCP is great for when you’re integrating tools locally into IDEs and such. It’s a terrible standard for building more robust applications with multi-user support. Security and authentication are completely lacking.
99% of people wouldn’t be able to find the API keys you need to feed into most MCP servers.
While I’m a fan, we’re not using MCP for any production workloads for these very reasons.
Authentication, session management, etc, should be handled outside of the standard, and outside of the LLM flow entirely.
I recently mused on these here: https://github.com/sunpazed/agent-mcp/blob/master/mcp-what-i...
You are correct ... it is still early days IMHO ... will have to see how this evolves
What, according to you, are some alternatives that exist or are in development that fill these gaps?
Is anyone really still using langchain? Has it gotten better? Seemed like a token burning platform the last time I used it.
I recently finished a Langgraph class on Deeplearning.ai about a week after it came out. Even then, the provided notebook example didn't work and I needed to debug it to pass. I had great hopes for Langchain in 2024, but their product decisions toward LCEL and the complete lack of a discernible roadmap that doesn't constantly break things made me move away from them.
I had to look up LCEL: https://python.langchain.com/docs/concepts/lcel/
What the heck? I have no idea what problem this is solving while not also creating new problems.
The worst thing about LCEL is not that it's a different coding pattern. It also creates a major break within the documentation that can't be fixed, as you now have to factor into your search all documentation with and without LCEL.
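For anyone else who hadn't run into it, LCEL is the pipe-operator chaining syntax. A minimal sketch, assuming the langchain-core / langchain-openai packages and an OpenAI key, looks roughly like this:

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_openai import ChatOpenAI

    # Each component is a "Runnable"; the | operator composes them
    # into a chain exposing a common invoke/stream/batch interface.
    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    model = ChatOpenAI(model="gpt-4o-mini")
    chain = prompt | model | StrOutputParser()

    print(chain.invoke({"topic": "bears"}))

The pitch, as I understand it, is uniform streaming/batching/async across composed components; the cost, as noted above, is that pre-LCEL and post-LCEL docs and examples no longer look anything alike.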
It's still absolutely fucking terrible - the mongo of the LLM world.