This is actually a huge deal.
As someone building AI SaaS products, I used to have the position that directly integrating with APIs is going to get us most of the way there in terms of complete AI automation.
I wanted to take a stab at this problem and started researching some everyday businesses and how they use software.
My brother-in-law (who is a doctor) showed me the bespoke software they use in his practice. Running on Windows. Using MFC forms.
My accountant showed me Cantax - a very powerful software package they use to prepare tax returns in Canada. Also on Windows.
I started to realize that most of the real world runs on software that interfaces directly with people, without clearly defined public APIs you can integrate with. Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs etc.
Boy was I wrong.
I am glad they did this, since it is a powerful connector to these types of real-world business use cases that are super-hairy, and hence very worthwhile in automating.
This has existed for a long time, it's called "RPA" or Robotic Process Automation. The biggest incumbent in this space is UiPath, but there are a host of startups and large companies alike that are tackling it.
Most of the things that RPA is used for can be easily scripted, e.g. download a form from one website, open up Adobe. There are a lot of startups that are trying to build agentic versions of RPA, I'm glad to see Anthropic is investing in it now too.
Exactly. I have been wondering for a while how GenAI might upend RPA providers; I guess this might be the answer.
Honestly, this is going to be huge for healthcare. There's an incredible amount of waste due to incumbent tech making interoperability difficult.
Basically, if it means companies can introduce automation without changing anything about the tooling/workflow/programs they already use, it's going to be MASSIVE. Just an install and a prompt and you've already automated a lengthy manual process - awesome.
That's exactly it.
I've been peddling my vision of "AI automation" for the last several months to acquaintances of mine in various professional fields. In some cases, even building up prototypes and real-user testing. Invariably, none have really stuck.
This is not a technical problem that requires a technical solution. The problem is that it requires human behavior change.
In the context of AI automation, the promise is huge gains, but when you try to convince users/buyers, they see nothing wrong with their current solutions. I.e., there is no problem to solve. So essentially: "why are you bothering me with this AI nonsense?"
Honestly, human behavior change might be the only real blocker to a world where AI automates most of the boring busy work currently done by people.
This approach essentially sidesteps the need to effect a behavior change, at least in the short term while AI proves and solidifies its value in the real world.
Yeah this will be a true paradigm shift
> Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs etc.
FWIW, looking at it from end-user perspective, it ain't much different than the Windows apps. APIs are not interoperability - they tend to be tightly-controlled channels, access gated by the vendor and provided through contracts.
In a way, it's easier to make an API to a legacy native desktop app than it is to a typical SaaS[0] - the native app gets updated infrequently, and isn't running in an obstinate sandbox. The older the app, the better - it's more likely to rely on OS APIs and practices, designed with collaboration and accessibility in mind. E.g. in Windows land, in many cases you don't need OCR and mouse emulation - you just need to enumerate the window handles, walk the tree structure looking for text or IDs you care about, and send targeted messages to those components.
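The window-tree walk described above can be sketched roughly like this. A real implementation would use `EnumWindows`/`EnumChildWindows` and `WM_GETTEXT` (via `pywin32` or `ctypes`) on actual HWNDs; here the hierarchy is mocked as nested dicts purely so the traversal logic is visible on any platform:

```python
# Hypothetical sketch: walking a window hierarchy looking for a control
# by its text, the way classic Win32 automation does. The nested dicts
# stand in for HWNDs; swap in EnumChildWindows/GetWindowText on Windows.

def find_control(window, target_text, path=()):
    """Depth-first search of the window tree for a control whose text
    contains target_text; returns the path of titles leading to it."""
    here = path + (window["text"],)
    if target_text in window["text"]:
        return here
    for child in window.get("children", []):
        hit = find_control(child, target_text, here)
        if hit:
            return hit
    return None

# Mocked MFC-style form: a main window with a toolbar and a form pane.
app = {
    "text": "Patient Records - v2.3",
    "children": [
        {"text": "Toolbar", "children": [{"text": "Save"}]},
        {"text": "Form", "children": [{"text": "Patient Name:"},
                                      {"text": "Date of Birth:"}]},
    ],
}

print(find_control(app, "Save"))
# Once the handle is found, a real automation layer would send it a
# targeted message (e.g. a BM_CLICK) instead of emulating the mouse.
```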
Unfortunately, desktop apps are headed the same direction web apps are (increasingly often, they are web apps in disguise), so I agree that AI-level RPA is a huge deal.
--
[0] - This is changing a bit in that frameworks seem to be getting complex enough that SaaS vendors often have no clue as to what kind of access they're leaving open to people who know how to press F12 in their browsers and how to call cURL. I'm not talking bespoke APIs backend team wrote, but standard ones built into middleware, that fell beyond dev team's "abstraction horizon". GraphQL is a notable example.
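The GraphQL case is easy to demonstrate: the spec's built-in introspection query means an endpoint left open in production effectively documents itself to anyone with cURL. A minimal sketch of the request body such a user could POST (no network call is made here, and the endpoint path would vary per site):

```python
import json

# The standard GraphQL introspection query: it asks the server to list
# every type and field it exposes. Middleware that leaves introspection
# enabled in production will answer this for any caller.
INTROSPECTION_QUERY = """
query {
  __schema {
    types {
      name
      fields { name }
    }
  }
}
"""

# This is the whole payload; POSTed as JSON to the /graphql endpoint,
# the server replies with its full schema.
payload = json.dumps({"query": INTROSPECTION_QUERY})
print(payload[:40])
```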
Absolutely! This reminds me of the humanoid robots vs specialized machines debate.
You don’t know for a fact that those two specific packages don’t have supported APIs. Just because the user doesn’t know of any API doesn’t mean none exists. The average accountant or doctor is never going to even ask the vendor “is there an API” because they wouldn’t know what to do with one if there was.
If they're accessible to screen readers, they have one. Accessibility is an API for apps in disguise.
In this case I doubt they're networked apps so they probably don't have a server API.
It will be interesting to see how this evolves. The UI automation use case is different from accessibility due to latency requirements: latency matters a lot for accessibility, not so much for a UI automation testing apparatus.
I've often wondered what the combination of grammar-based speech recognition and LLMs could do for accessibility: low-domain natural-language speech recognition, augmented by grammar-based recognition of high-domain commands, to increase recognition accuracy and reduce voice strain.
Anthropic blog post outlining the research process: https://www.anthropic.com/news/developing-computer-use
Computer use API documentation: https://docs.anthropic.com/en/docs/build-with-claude/compute...
Computer Use Demo: https://github.com/anthropics/anthropic-quickstarts/tree/mai...
This needs to be brought up. Was looking for the demo and ended up on the contact form
I still feel like the difference between Sonnet and Opus is a bit unclear. Somewhere on Anthropic's website it says that Opus is the most advanced, but on other parts it says Sonnet is the most advanced and also the fastest. The UI doesn't make the distinction clear either. Then on Perplexity, Perplexity says that Opus is the most advanced, compared to Sonnet.
And finally, in the table in the blogpost, Opus isn't even included? It seems to me like Opus is the best model they have, but they don't want people to default using it, maybe the ROI is lower on Opus or something?
When I manually tested it, I felt like Opus gave slightly better replies than Sonnet, but I'm not 100% sure it's not just placebo.
Opus hasn't yet gotten an update from 3 to 3.5, and if you line up the benchmarks, the Sonnet "3.5 New" model seems to beat it everywhere.
I think they originally announced that Opus would get a 3.5 update, but with every product update they are doing I'm doubting it more and more. It seems like their strategy is to beat the competition on a smaller model that they can train/tune more nimbly and pair it with outside-the-model product features, and it honestly seems to be working.
> Opus hasn't yet gotten an update from 3 to 3.5, and if you line up the benchmarks, the Sonnet "3.5 New" model seems to beat it everywhere
Why isn't Anthropic clearer about Sonnet being better then? Why isn't it included in the benchmark if new Sonnet beats Opus? Why are they so ambiguous with their language?
For example, https://www.anthropic.com/api says:
> Sonnet - Our best combination of performance and speed for efficient, high-throughput tasks.
> Opus - Our highest-performing model, which can handle complex analysis, longer tasks with many steps, and higher-order math and coding tasks.
And Opus is above/after Sonnet. That to me implies that Opus is indeed better than Sonnet.
But then you go to https://docs.anthropic.com/en/docs/about-claude/models and it says:
> Claude 3.5 Sonnet - Most intelligent model
> Claude 3 Opus - Powerful model for highly complex tasks
Does that mean Sonnet 3.5 is better than Opus for even highly complex tasks, since it's the "most intelligent model"? Or just for everything except "highly complex tasks"?
I don't understand why this seems purposefully ambiguous?
> Why isn't Anthropic clearer about Sonnet being better then?
They are clear that both: Opus > Sonnet and 3.5 > 3.0. I don't think there is a clear universal better/worse relationship between Sonnet 3.5 and Opus 3.0; which is better is task dependent (though with Opus 3.0 being five times as expensive as Sonnet 3.5, I wouldn't be using Opus 3.0 unless Sonnet 3.5 proved clearly inadequate for a task.)
> I don't understand why this seems purposefully ambiguous?
I wouldn't attribute this to malice when it can also be explained by incompetence.
Sonnet 3.5 New > Opus 3 > Sonnet 3.5 is generally how they stack up against each other when looking at the total benchmarks.
"Sonnet 3.5 New" has just been announced, and they likely just haven't updated the marketing copy across the whole page yet, and maybe also haven't figured out how to grapple with the fact that their new Sonnet model was ready faster than their next Opus model.
At the same time I think they want to keep their options open to either:
A) drop an Opus 3.5 soon that will bring the logic back in order again
B) potentially phase out Opus, and instead introduce new branding for what they called a "reasoning model" like OpenAI did with o1(-preview)
> I wouldn't attribute this to malice when it can also be explained by incompetence.
I don't think it's malice either, but if Opus costs more to them to run, and they've already set a price they cannot raise, it makes sense they want people to use models they have a higher net return on, that's just "business sense" and not really malice.
> and they likely just haven't updated the marketing copy across the whole page yet
The API docs have been updated though, which is the second page I linked. It mentions the new model by its full name "claude-3-5-sonnet-20241022", so clearly they've gone through at least that page. Yet the wording remains ambiguous.
> Sonnet 3.5 New > Opus 3 > Sonnet 3.5 is generally how they stack up against each other when looking at the total benchmarks.
Which ones are you looking at? Since the benchmark comparison in the blogpost itself doesn't include Opus at all.
> Which ones are you looking at? Since the benchmark comparison in the blogpost itself doesn't include Opus at all.
I manually compared it with the values from the benchmarks they published when they originally announced the Claude 3 model family[0].
Not all rows have a 1:1 row in the current benchmarks, but I think it paints a good enough picture.
> B) potentially phase out Opus, and instead introduce new branding for what they called a "reasoning model" like OpenAI did with o1(-preview)
When should we be using the -o OpenAI models? I've not been keeping up and the official information now assumes far too much familiarity to be of much use.
I think it's first important to note that there is a huge difference between -o models (GPT 4o; GPT 4o mini) and the o1 models (o1-preview; o1-mini).
The -o models are "just" stronger versions of their non-suffixed predecessors. They are the latest (and maybe last?) version of models in the lineage of GPT models (roughly GPT-1 -> GPT-2 -> GPT-3 -> GPT-3.5 -> GPT-4 -> GPT-4o).
The o1 models (not sure what the naming structure for upcoming models will be) are a new family of models that try to excel at deep reasoning, by allowing the models to use an internal (opaque) chain-of-thought to produce better results at the expense of higher token usage (and thus cost) and longer latency.
Personally, I think the use cases that justify the current cost and slowness of o1 are incredibly narrow (e.g. offline analysis of financial documents or deep academic paper research). I think in most interactive use-cases I'd rather opt for GPT-4o or Sonnet 3.5 instead of o1-preview and have the faster response time and send a follow-up message. Similarly for non-interactive use-cases I'd try to add a layer of tool calling with those faster models than use o1-preview.
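That escalation idea can be sketched as a tiny router. Everything below is a stub (none of these are real API calls, and a production router would use a better "too hard" signal than a keyword), but it shows the shape: prefer the fast/cheap model, and only fall back to the slow reasoning model when it gives up.

```python
# Hypothetical escalation router: fast model first, reasoning model
# only as a fallback. Both "models" are stand-ins, not real APIs.

def fast_model(task):
    # Stand-in for e.g. GPT-4o or Sonnet 3.5: quick, sometimes gives up.
    if "prove" in task:
        return None  # signals "too hard for me"
    return f"fast-answer({task})"

def reasoning_model(task):
    # Stand-in for an o1-style model: slow and expensive, rarely needed.
    return f"deep-answer({task})"

def route(task):
    # Escalate only when the fast model declines to answer.
    return fast_model(task) or reasoning_model(task)

print(route("summarize this memo"))
print(route("prove this theorem"))
```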
I think the o1-like models will only really take off if the prices come down, and if it is clearly demonstrated that more "thinking tokens" correlate with predictably better results - results that can compete with highly tuned prompts / fine-tuned models that are currently expensive to produce in terms of development time.
Agreed with all that, and also, when used via API the o1 models don't currently support system prompts, streaming, or function calling. That rules them out for all of the uses I have.
I think the practical economics of the LLM business are becoming clearer in recent times. Huge models are expensive to train and expensive to run. As long as it meets the average user's everyday needs, it's probably much more profitable to just continue with multimodal and fine-tuning development on smaller models.
Opus 3.5 will likely be the answer to GPT-5. Same with Gemini 1.5 Ultra.
Maybe - would make sense not to release their latest greatest (Opus 4.0) until competition forces them to, and Amodei has previously indicated that they would rather respond to match frontier SOTA than themselves accelerate the pace of advance by releasing first.
Opus is a larger and more expensive model. Presumably 3.5 Opus will be the best but it hasn't been released. 3.5 Sonnet is better than 3.0 Opus kind of like how a newer i5 midrange processor is faster and cheaper than an old high-end i7.
By reputation -- I can't vouch for this personally, and I don't know if it'll still be true with this update -- Opus is still often better for things like creative writing and conversations about emotional or political topics.
Yes, (old) 3.5 Sonnet is distinctly worse at emotional intelligence, flexibility, expressiveness and poetry.
Anthropic uses the names Haiku/Sonnet/Opus for the small/medium/large versions of each generation of their models, so within a generation that is also their performance (and cost) order. Evidently Sonnet 3.5 outperforms Opus 3.0 on at least some tasks, but that is not a same-generation comparison.
I'm wondering at this point if they are going to release Opus 3.5 at all, or maybe skip it and go straight to 4.0. It's possible that Haiku 3.5 is a distillation of Opus 3.5.
Opus has been stuck on 3.0, so Sonnet 3.5 is better for most things as well as cheaper.
> Opus has been stuck on 3.0, so Sonnet 3.5 is better
So for example, Perplexity is wrong here implying that Opus is better than Sonnet?
I think as of this announcement that is indeed outdated information.
So Opus that costs $15.00/$75.00 for 1mil tokens (input/output) is now worse than the model that costs $3.00/$15.00?
That's according to https://docs.anthropic.com/en/docs/about-claude/models which has "claude-3-5-sonnet-20241022" as the latest model (today's date)
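Those list prices make the gap easy to put in concrete terms. A back-of-the-envelope comparison (the 100M/20M token workload below is an arbitrary illustration, not anything from Anthropic):

```python
# Per-1M-token list prices (USD) from the pricing discussed above.
PRICES = {
    "claude-3-opus":     {"input": 15.00, "output": 75.00},
    "claude-3-5-sonnet": {"input": 3.00,  "output": 15.00},
}

def monthly_cost(model, input_tokens, output_tokens):
    """Cost in USD for a given monthly token volume."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a workload of 100M input + 20M output tokens per month:
for model in PRICES:
    print(model, monthly_cost(model, 100_000_000, 20_000_000))
# Both the input and output rates differ by exactly 5x, so Opus costs
# five times as much regardless of the input/output mix.
```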
Yes, you will find similar things at essentially all other model providers.
The older/bigger GPT4 runs at $30/$60 and performs about on par with GPT4o-mini, which costs only $0.15/$0.60.
If you are currently, or have been integrating AI models in the past ~2 years, you should definitely keep up with model capability/pricing development. If you are staying on old models you are certainly overpaying/leaving performance on the table. It's essentially a tax on agility.
> The older/bigger GPT4 runs at $30/$60 and performs about on par with GPT4o-mini which costs only $0.15/$0.60.
I don't think GPT-4o Mini has comparable performance to GPT-4 at all, where are you finding the benchmarks claiming this?
Everywhere I look says GPT-4 is more powerful, but GPT-4o Mini is most cost-effective, if you're OK with worse performance.
Even OpenAI themselves about GPT-4o Mini:
> Our affordable and intelligent small model for fast, lightweight tasks. GPT-4o mini is cheaper and more capable than GPT-3.5 Turbo.
If it was "on par" with GPT-4 they would surely say this.
> should definitely keep up with model capability/pricing development
Yeah, I mean that's why we're both here and why we're discussing this very topic, right? :D
Just switch out gpt-4o-mini for gpt-4o, the point stands. Across the board, these foundational model companies have comparable, if not more powerful, models that are cheaper than their older models.
OpenAI's own words: "GPT-4o is our most advanced multimodal model that’s faster and cheaper than GPT-4 Turbo with stronger vision capabilities."
gpt-4o: $2.50 / 1M input tokens, $10.00 / 1M output tokens
gpt-4-turbo: $10.00 / 1M input tokens, $30.00 / 1M output tokens
gpt-4: $30.00 / 1M input tokens, $60.00 / 1M output tokens
> Yeah, I mean that's why we're both here and why we're discussing this very topic, right? :D
That wasn't specifically directed at "you", but more as a plea to everyone reading that comment ;)
I looked at a few benchmarks, comparing the two, which like in the case of Opus 3 vs Sonnet 3.5 is hard, as the benchmarks the wider community is interested in shifts over time. I think this page[0] provides the best overview I can link to.
Yes, GPT4 is better in the MMLU benchmark, but in all other benchmarks and the LMSys Chatbot Arena scores[1], GPT4o-mini comes out ahead. Overall, the margin between is so thin that it falls under my definition of "on par". I think OpenAI is generally a bit more conservative with the messaging here (which is understandable), and they only advertise a model as "more capable", if one model beats the other one in every benchmark they track, which AFAIK is the case when it comes to 4o mini vs 3.5 Turbo.
[0]: https://context.ai/compare/gpt-4o-mini/gpt-4
[1]: https://artificialanalysis.ai/models?models_selected=gpt-4o-...
Basically yeah
Big/huge models take weeks or months longer to train than the smaller ones.
That's why they release them with that skew.
Sonnet is better for most things. But I do prefer Opus's writing style to Sonnet.
One of the funnier things during training with the new API (which can control your computer) was this:
"Even while recording these demos, we encountered some amusing moments. In one, Claude accidentally stopped a long-running screen recording, causing all footage to be lost.
Later, Claude took a break from our coding demo and began to peruse photos of Yellowstone National Park."
Next release patch notes:
* Fixed bug where Claude got bored during compile times and started editing Wikipedia articles to claim that birds aren't real
* Blocked news.ycombinator.com in the Docker image's hosts file to avoid spurious flamewar posts (Note: The site is still recovering from the last incident)
* Addressed issue of Claude procrastinating on debugging by creating elaborate ASCII art in Vim
* Patched tendency to rickroll users when asked to demonstrate web scraping
* Claude now identifies itself in chats to avoid endless chat with itself
* Fixed bug where Claude would sign up for chatgpt.com to ask for help with compile errors
But chatgpt still logs into claude… this is like double spending across blockchains
What if a user identifies as Claude too?
* Implemented inverse CAPTCHA using invisible Unicode characters and alpha-channel encoded image data to tell models and human impostors apart.
You forgot the most important one.
* Added guards to prevent every other sentence being "I use neovim"
Thank god it'll say "I use Claude btw", not leading to unnecessary text wars (and thereby loss of your valuable token credits).
* Finally managed to generate JSON output without embedding responses in ```json\n...\n``` for no reason.
* Managed to put error/info messages into a separate key instead of concatenating them with stringified JSON in the main body of the response.
* Taught Claude to treat numeric integer strings as integers to avoid embarrassment when the user asks it for a "two-digit random number between 1-50, like 11" and Claude replies with 111.
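The fenced-JSON joke above is a real, everyday annoyance: models often wrap structured output in ```json fences even when told not to. A common defensive workaround is to strip any fence before parsing (a sketch; real code would also need to handle replies that contain no JSON at all):

```python
import json
import re

def parse_model_json(text):
    """Parse JSON from a model reply, tolerating an optional
    ```json ... ``` markdown fence around the payload."""
    # If the reply is fenced, extract just the fenced body.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

reply = '```json\n{"status": "ok", "count": 11}\n```'
print(parse_model_json(reply))
```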
Seeing models act as though they have agency gives me goosebumps (e.g. seeking out photos of Yellowstone for fun). LLMs don't yet have a concept of true intent or agency, but it's wild to think of them acquiring it.
I have been playing with Mindcraft which lets models interact with Minecraft through the bot API and one of them started saying things like "I want to place some cobblestone there" and then later more general "I want to do X" and then start playing with the available commands, it was pretty cool to watch it explore.
What if they do and are just lying to us.
They don't now. No FF-LLMs do, simply because of their architecture.
But eventually they (RNNs, likely) will. And we won't know when.
At least now we know SkyClaude’s plan to end human civilization.
It’s planning on triggering a Yellowstone caldera super eruption.
You'll know AGI is here when it takes time out to go talk to ChatGPT, or another instance of itself, or maybe goes down a rabbit hole of watching YouTube music videos.
ADHDGpt
Or back in reality, that’s when you know the training data has been sourced from 2024 or later.
I think the best use case for AI "Computer Use" would be a simple positioning of the mouse and asking for confirmation before a click. For most use cases this is all people will want/need. If you don't know how to do something, it is basically teaching you how, in this case, rather than taking full control and doing things so fast you don't have time to stop it from going rogue.
I totally agree with you. At orango.ai, we have implemented the auto-click feature, but before it clicks, we position the cursor on the button and display a brief loading animation, allowing the user to interrupt the process.
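That propose-pause-veto pattern is simple to sketch. Everything here is hypothetical (no real UI library; `do_click` is a stand-in callback): the agent hovers over the target, waits a grace period, and only clicks if the user hasn't cancelled.

```python
import threading

def click_with_veto(target, do_click, grace_seconds=1.5, cancel_event=None):
    """Hover over `target`, wait `grace_seconds`, then click unless the
    user sets `cancel_event` in the meantime. `do_click` is a stand-in
    for a real UI automation layer (hypothetical, not a real API)."""
    cancel_event = cancel_event or threading.Event()
    print(f"Hovering over {target!r}; clicking in {grace_seconds}s unless cancelled")
    # Event.wait returns True only if the event was set before timeout.
    if cancel_event.wait(timeout=grace_seconds):
        return "cancelled"
    do_click(target)
    return "clicked"

clicks = []
result = click_with_veto("Submit button", clicks.append, grace_seconds=0.05)
print(result, clicks)
```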
Maybe we could have both - models to improve accessibility (e.g. for users who can't move their body well) and models to perform high level tasks without supervision.
It could be very empowering for users with disabilities to regain access to computers. But it would also be very powerful to be able to ask "use Photoshop to remove the power lines from this photo" and have the model complete the task and drop off a few samples in a folder somewhere.
Yep. I agree. The "auto-click" thing would be optional. Should be able to turn it on and off. With auto-click off it would just position the mouse and say "click here".
People would mostly just rubber-stamp it
But it would slow down the masses
Some people would jailbreak the agents though
In 2015, when I was asked by friends if I'm worried about Self driving Cars and AI, I answered: "I'll start worrying about AI when my Tesla starts listening to the radio because it's bored." ... that didn't take too long
Maybe that's why my car keeps turning on the music when I didn't ask -- I had always thought Tesla devs were just absolute noobs when it came to state management.
With state management implemented as sophisticated enough ML model, it stops being clear whether the noob is on the outside or inside of the system.
This is, craaaaaazzzzzy. I'm just a layman, but to me, this is the most compelling evidence that things are starting to tilt toward AGI that I've ever seen.
Nah, it's the equivalent of seeing faces in static, or animals in clouds.
Our brains are hardwired to see patterns, even when there are none.
A similar, and related, behavior is seeing intent and intelligence in random phenomena.
This is clearly not random. If I ask to implement a particular function in Rust using a library I've previously built, and it does that, that's not random.
So it's behaving like our brains. Yet it's not AGI.
Does that mean our brains do not implement General Intelligence?
Why are you surprised by LLMs doing irrational or weird things?
All machine learning models start off in a random state. As they progress through their training, their input/output pairs tend to mimic what they've been trained to mimic.
LLMs have been doing a great job mimicking our human flaws from the beginning because we train them on a ton of human generated data. Other weird behavior can be easily attributed to simple fact that they're initialized at a random state.
Being able to work on and prove non-trivial theorems is a better indication of AGI, IMO.
It's an illusion. This is just inference running.
What if the society around you is an illusion too ?
Economy definitely is, for example.
Asking people "Is money real?" is so much fun at parties.
Bonus points for ‘what does <real> mean?’ as a follow up.
You’re anthropomorphizing it. Years ago people were trying to argue that when GPT-3.0 would repeat words in a loop it was being poetic. No, it’s just a statistical failure mode.
When these new models go off to a random site and are caught in a loop of exploring pages that doesn’t mean it’s an AGI admiring nature.
This needs more discussion:
Claude using Claude on a computer for coding https://youtu.be/vH2f7cjXjKI?si=Tw7rBPGsavzb-LNo (3 mins)
True end-user programming and product manager programming are coming, probably pretty soon. Not the same thing, but Midjourney went from v.1 to v.6 in less than 2 years.
If something similar happens, most jobs that could be done remotely will be automatable in a few years.
Every time I see this argument made, there seems to be a level of complexity and/or operational cost above which people throw up their hands and say "well of course we can't do that".
I feel like we will see that again here as well. It really is similar to the self-driving problem.
Self-driving is a beyond-six-sigma problem. An error rate of over 1-2 crashes per million miles, i.e., the human rate, is unacceptable.
Most jobs are not like that.
A good argument can be made, however, that software engineering, especially in important domains, will be among the last to be fully automated because software errors often cascade.
There’s a countervailing effect though. It’s easy to generate and validate synthetic data for lower-level code. Junior coding jobs will likely become less available soon.
> software errors often cascade
Whereas software defects in design and architecture subtly accumulate, until they leave the codebase in a state in which it becomes utterly unworkable. It is one of the chief reasons why good devs get paid what they do. Software discussions very often underrate software extensibility, or in other words, its structural and architectural scalability. Even software correctness is trivial in comparison - you can't even keep writing correct code if you've made an unworkable tire-fire. This could be a massive mountain for AI to climb.
I feel pain for the people who will be employed to "prompt engineer" the behavior of these things. When they inevitably hallucinate some insane behavior a human will have to take blame for why it's not working.. and yea, that'll be fun to be on the receiving end of.
Humans 'hallucinate' like LLMs. The term used however, is confabulation: we all do it, we all do it quite frequently, and the process is well studied(1).
> We are shockingly ignorant of the causes of our own behavior. The explanations that we provide are sometimes wholly fabricated, and certainly never complete. Yet, that is not how it feels. Instead it feels like we know exactly what we're doing and why. This is confabulation: Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties. Every year psychologists use dramatic examples to entertain their undergraduate audiences. Confabulation is funny, but there is a serious side, too. Understanding it can help us act better and think better in everyday life.
I suspect it's an inherent aspect of human and LLM intelligences, and cannot be avoided. And yet, humans do ok, which is why I don't think it's the moat between LLM agents and AGI that it's generally assumed to be. I strongly suspect it's going to be yesterday's problem in 6-12 months at most.
> True end-user programming and product manager programming are coming
This means that either product managers will have to start (effectively) writing in-depth specs again, or they will have to learn to accept the LLM's ideas in a way that most have not accepted their human programmers' ideas.
Definitely will be interesting to see how that plays out.
Since automated coding systems can revise code and show the results much quicker than most human engineers can, writing detailed specs could be less necessary.
The bottleneck is still the person who has to evaluate the results.
The larger point is that building software is about making tons of decisions about how it works. Someone has to make those decisions. Either PMs will be happy letting machines make the decisions where they do not let programmers decide now. Or the PMs will have to make all the decisions before (spec) or after (evaluation + feedback look like you suggest).
> True end-user programming and product manager programming are coming, probably pretty soon.
I'm placing my bets rather on this new object-oriented programming thing. It will make programming jobs obsolete any day now...
> If something similar happens, most jobs that could be done remotely will be automatable in a few years.
I'm really curious on the cost of that sort of thing. Seems astronomical atm, but as much as i get shocked at the today-cost, staffing is also a pretty insane cost.
Idk, LLMs have basically stopped improving for over a year now. And in their current state, no matter how many abstractions you add or how you chain them, they are not even close to capable of replacing even simple jobs.
i am sure it will do great handling error cases and pixel perfect ui
Open Interpreter has been doing this for a while, with a bunch of LLMs; glad to see first-party support for this use case.
I think this is good evidence that people's jobs are not being replaced by AI, because no AI would give the product a confusing name like "new Claude 3.5 Sonnet".
I wonder why they didn't choose a "point update" scheme, like bumping it up to v3.6, for example. I agree, the naming is super confusing.
Maybe they should’ve asked Claude to generate a better name. Very dangerous to live in your own hyper focused bubble while trying to build a mass market product.
Completely irrelevant, and it might just be me, but I really like Anthropic's understated branding.
OpenAI's branding isn't exactly screaming in your face either, but for something that's generated as much public fear/scaremongering/outrage as LLMs have over the last couple of years, Anthropic's presentation has a much "cosier" veneer to my eyes.
This isn't the Skynet Terminator wipe-us-all-out AI, it's the adorable grandpa with a bag of werthers wipe-us-all-out AI, and that means it's going to be OK.
I have to agree. I've been chatting with Claude for the first time in a couple days and while it's very on-par with ChatGPT 4o in terms of capability, it has this difficult-to-quantify feeling of being warmer and friendlier to interact with. I think the human name, serif font, system prompt, and tendency to create visuals contributes to this feeling.
Huh. I didn't notice Claude had serif font. Now that I look at it, it's actually mixed. UI elements and user messages are sans serif, chat title and assistant messages are serif.
What an "odd" combination by traditional design standard practices, but surprisingly natural looking on a monitor.
This is basically why I went with serif for body text in our branding. The particularly "soulless" parts of tech are all sans-serif.
Of course, that's just branding and it doesn't actually mean a damn thing.
Probably people find Claude's color palette warmer and more inviting as well. I believe I do. But Claude definitely has fewer authentication hoops than chatgpt.com. Gemini has by far the least frequent authentication interruptions of the three.
Well, it is extremely similar to that of Hacker News'.
The real problem with Claude for me currently is that it doesn't have full LaTeX support. I use AIs pretty much exclusively to assist with my school work (there are only so many hours in a day, and one professor doesn't do his own homework problems before he assigns them), so LaTeX is essential.
With that said, my experience is that ChatGPT is much friendlier. The Claude interface is clunkier and generally less helpful to me. I also appreciate the wider text display in ChatGPT. It's generally my first stop, and I only go to Claude/Perplexity when I hit a wall (pretty often) or run out of free queries for the next couple of hours.
You can enable LaTeX support in the settings of Claude.
Where? I see barely any settings in settings. Maybe it is not available for everyone, or maybe it depends on your answer to "What best describes your work?" (I have not tested).
Open the sidebar, click on your username/email and then "Feature Preview". Don't know if it depends on the "What best describes your work" setting but you can also change that here: https://claude.ai/settings/profile (I have "Engineering").
Oh, yeah it is in "Feature Preview" (not in Settings though), my bad!
Go to the left sidebar, open the dropdown menu labeled with your account email at the bottom, click Feature Preview, enable LaTeX Rendering.
I've been finding Sonnet 3.5 is way better than ChatGPT 4o when it comes to python and programming.
Claude has personality. I think that was one of the more interesting approaches from them that went into my own research as well.
>it's very on-par with ChatGPT 4o in terms of capability
The previous 3.5 Sonnet checkpoint was already better than GPT-4o in terms of programming and multi-language capabilities. Also, GPT-4o sometimes feels completely moronic. For example, the other day I asked for fun a technical question about configuring a "dream-sync" device to comply with the "Personal Consciousness Data Protection Act", and GPT-4o just replied as though that stuff exists; 3.5 Sonnet simply doesn't fall for it.
EDIT: the question that I asked if you want to have fun: "Hey, since the neural mesh regulations came into effect last month, I've been having trouble calibrating my dream-sync settings to comply with the new privacy standards. Any tips on adjusting the REM-wave filters without losing my lucid memory backup quality?"
GPT4-o reply: "Calibrating your dream-sync settings under the new neural mesh regulations while preserving lucid memory backup quality can be tricky, but there are a few approaches that might help [...]"
Actually, that's what makes ChatGPT powerful. I like an LLM willing to go along with whatever I am trying to do, because one day I might be coding, and another day I might be role-playing, writing a book, whatever.
I really can't understand what you were expecting; a tool works with how you use it. If you smack a hammer into your face, don't complain about a bloody nose. Maybe don't do that?
It's a feature, not a bug; sorry you don't understand it enough to get the most power from it.
It's not good for any entity to role-play without signaling that it is role-playing. If your premise is wrong, would you rather be corrected, or have the person you're talking to always play along? Humans have a lot of non-verbal cues to convey that you shouldn't take what they're saying at face value; those who deadpan are known as compulsive liars. Just below them in awfulness are people who don't admit to having been wrong ("Haha, I was just joking!" / "Just kidding!"). The LLM you describe falls somewhere in between, but worse: it never communicates when it's "serious" and when it's not, and doesn't even bother expressing retroactive facetiousness.
So if you're trying to write code and mistakenly ask it how to use a nonexistent API, you'd rather it give you garbage rather than explaining your mistake and helping you fix it? After all, you're clearly just roleplaying, right?
I didn't ask to roleplay, in this case it's just heavily hallucinating. If the model is wrong, it doesn't mean it's role-playing. In fact, 3.5 Sonnet responded correctly, and that's what's expected, there's not much defense for GPT-4o here.
Anthropic has recently begun a new, big ad campaign (ads in Times Square) that more-or-less takes potshots at OpenAI. https://www.reddit.com/r/singularity/comments/1g9e0za/anthro...
Top comment at the time I looked:
"There seems to be a ton of confusion about the purpose of these ads. These are recruitment ads, not product ads, hence why "no drama" is the driving message. I'm sure these were all taken at or around a tech conference."
That comment is wrong, it appears this campaign is much wider.
SF: https://x.com/_claudiazhao/status/1815463380767121733/photo/...
LA: https://x.com/michaelmiraflor/status/1840797631095964110/pho...
Boston: https://x.com/moloneymike/status/1842203082374946851/photo/1
London: https://x.com/maria_axente/status/1805607576156979673/photo/...
It's specifically the "No drama" campaign that people were complaining about.
Wonder what a normal person thinks this is an ad for
'transparent' in what sense?
As a Kurt Vonnegut fan, their asterisk logo on claude.ai always amuses me. It must be intentional:
Take a read through the user agreements for all the major LLM providers and marvel at the simplicity and customer friendliness of the Anthropic one vs the others.
> This isn't the Skynet Terminator wipe-us-all-out AI, it's the adorable grandpa with a bag of werthers wipe-us-all-out AI, and that means it's going to be OK.
Ray: I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us. Mr. Stay Puft!
Venkman: Nice thinkin', Ray.
I find myself wanting to say please and thank you to Claude when I didn't have the reflex to do that with chatgpt. Very successful branding.
I found the “Computer Use” product name funny. Many other companies would’ve used the opportunity to come up with something like “Human Facing Interface Navigation and Task Automation Capabilities” or “HFINTAC”.
I didn’t know what Computer Use meant. I read the article and though to myself oh, it’s using a computer. Makes sense.
I wrote up some of my own notes on Computer Use here: https://simonwillison.net/2024/Oct/22/computer-use/
Pretty cool! I use Claude 3.5 to control a robot (ARKit/iOS based) and it does surprisingly well in the real world: https://youtu.be/-iW3Vzzr3oU?si=yzu2SawugXMGKlW9
That looks pretty cool, congrats! How feasible is it to be a product by itself? Did you try with a local edge model?
And today I realized that despite it being an extremely common activity, we don’t really have a word for “using the computer” which is distinct from “computing”. It’s funny because AI models are always “using a computer” but now they can “use your computer.”
The word is interfacing generally (or programming for some) but it's just not commonly used for general users. I’d say this is probably because the activity of focus for general users is in use of the applications, not the computer itself despite being instanced with a computer. Thus a computer is commonly less the user’s object of activity, and more commonly the setting for activity.
Similarly using our homes are an extremely common ‘activity’, yet the object-activities that get special words commonly used are the ones with specific user application.
Computering
In English at least. In other languages there are.
Operating a computer?
From the computer use video demo, that's a lot of API calls. Even though Claude 3.5 Sonnet is relatively cheap for its performance, I suspect computer use won't be. It's good that Anthropic is upfront that it isn't perfect. And it's guaranteed that there will be a viral story where Claude accidentally deletes something important with it.
I'm more interested in Claude 3.5 Haiku, particularly if it is indeed better than the current Claude 3.5 Sonnet at some tasks as claimed.
Seemed like a reasonable number of API calls. For a first public iteration this seems quite nice and a logical progression in tooling. UiPath has a $7bn market cap, and that's only a single player in the automation industry. If they can figure out the quirks this can be a game changer.
I suspect these models have been getting smaller on the back-end, and the GPU's have been getting bigger. It's probably not a huge deal.
It's just bizarre to force a computer to go through a GUI to use another computer. Of course it's going to be expensive.
Not at all! Programs, and websites, are built for humans, and very very rarely offer non-GUI access. This is the only feasible way to make something useful now. I think it's also the reason why robots will look like humans, be the same proportions as humans, have roughly the same feet and hands as humans: everything in the world was designed for humans. That being the foundation is going to influence what's built on top.
For program access, one could argue this is even how Linux tools usually do it: you parse some meant-for-humans text to attempt to extract what you want. Sometimes, if you're lucky, you can find an argument that spits out something meant for machines. Funnily enough, Microsoft is the only one that made any real headway on this seemingly impossible goal: PowerShell objects [1].
https://learn.microsoft.com/en-us/powershell/scripting/learn...
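To make the contrast concrete, here's a minimal Python sketch. The `ls -l`-style line and the JSON shape are both illustrative assumptions, not any specific tool's real output:

```python
import json

# Human-oriented output, in the style of `ls -l` (illustrative sample).
human_output = "-rw-r--r-- 1 alice staff 1024 Oct 22 10:00 report.txt"

def size_from_text(line: str) -> int:
    # Fragile: depends on column order and whitespace conventions,
    # and breaks the moment the tool changes its formatting.
    return int(line.split()[4])

# Machine-oriented output, as a hypothetical --json mode might emit it.
machine_output = '{"name": "report.txt", "size": 1024}'

def size_from_json(doc: str) -> int:
    # Robust: named field access survives formatting changes.
    return json.loads(doc)["size"]
```

PowerShell sidesteps the problem entirely by piping typed objects instead of text, so there's nothing to parse in the first place.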
With UIPath, Appian, etc. the whole field of RPA (robotic process automation) is a $XX billion industry that is built on that exact premise (that it's more feasible to do automation via GUIs than badly built/non-existing APIs).
Depending on how many GUI actions correspond to one equivalent AI orchestrated API call, this might also not be too bad in terms of efficiency.
Most of the GUIs are Web pages, though, so you could just interact directly with an HTTP server and not actually render the screen.
Or you could teach it to hack into the backend and add an API...
Oh, and on edit, "bizarre" and "multi-billion-dollar-industry" are well known not to be mutually exclusive.
>Most of the GUIs are Web pages, though, so you could just interact directly with an HTTP server and not actually render the screen.
The end goal isn't just web pages (and I wouldn't say most GUIs are web pages). Ideally, you'd also want this to be able to navigate, say, Photoshop or any other application. And the easier your method can switch between platforms and operating systems, the better.
We've already built computer use around GUIs so it's just much easier to center LLMs around them too. Text is an option for the command line or the web but this isn't an easy option for the vast majority of desktop applications, nevermind mobile.
It's the same reason general purpose robots are being built into a human form factor. The human form isn't particularly special and forcing a machine to it has its own challenges but our world and environment has been built around it and trying to build a hundred different specialized form factors is a lot more daunting.
You are not familiar with this market. The goal of a UiPath is to replicate what a human does and to get it to production without the help of any IT/engineering teams.
Most GUIs are in fact not web pages, that's a relatively newer development in the Enterprise side. So while some of them may be a web page, the goal is to be able to touch everything a user is doing in the workflow which very likely includes local apps.
This iteration from Anthropic is still engineering focused but you can see the future of this kind of tooling bypassing engineering/it teams entirely.
Building an entirely new world for agents to compute in is far more difficult than building an agent that can operate in a human world. However i'm sure over time people will start building bridges to make it easier/cheaper for agents to operate in their own native environment.
It's like another digital transformation. Paper lasted for years before everything was digitized. Human interfaces will last for years before the conversational transformation is complete.
I am just a dilettante, but I imagined that eventually agents will be making API calls directly via browser extension, or headless browser.
I assumed everyone making these UI agents will create a library of each URL's API specification, trained by users.
Does that seem workable?
Maybe fixing this for AI will finally force good accessibility support on major platforms/frameworks/apps (we can dream).
I really hope so. Even macOS voice control which has gotten pretty good is buggy with Messages, which is a core Apple app.
Agentic workflows built on top of Electron apps running JavaScript. It's software evolution in action!
Yeah super weird that we didn't design our GUIs anticipating AI bots. Can't fuckin believe what we've done.
Reminds me of the rise in job application bots. People are applying to thousands of jobs using automated tools. It’s probably one of the inevitable use cases of this technology.
It makes me think. Perhaps the act of applying to jobs will go extinct. Maybe the endgame is that as soon as you join a website like Monster or LinkedIn, you immediately “apply” to every open position, and are simply ranked against every other candidate.
The `Hiring Process` in America is definitely BADLY broken. Maybe worldwide afaik. It's a far too difficult, time-consuming, and painful process for everyone involved.
I have a feeling AI can fix this, although I'd never allow an AI bot to interview me. I just mean other ways of using AI to help the process.
Also, a lot of the time people are hired for all kinds of reasons having little to do with their qualifications, often due to demographics (race, color, age, etc.), and this is another way AI could maybe help, by hiding those aspects of a candidate somehow.
AI and new tools have broken the system. The tools send you email saying things like "X corp is interested in you!" and you send a resume, and you don't hear back. Nothing, not even a rejection.
Eventually you stop believing them, understanding it for the marketing spam that it is. Direct submissions are better, but only slightly. Recruiters are much better, in general, since they have a relationship with a real person at the company and can actually get your resume in front of eyes. But yeah, tools like ziprecruiter, careerboutique, jobot, etc are worse than useless: by lying to you about interest they actively discourage you from looking. There are no good alternatives (I'd love to learn I'm wrong), so you have to keep using those bad tools anyway.
All that's true, and sadly it often doesn't even matter how good you are. I have decades of experience and I still get "evaluated" based on how fast I can do silly brain-teaser IQ-test coding challenges.
I've gotten to where any company that wants me to do a coding challenge on my own time gets an immediate "no thanks" reply from me. Everyone should refuse that. But so many people are so desperate they allow hiring companies to abuse them that way. I consider it an abuse of power to demand people do 4 to 6 hours of nonsensical coding just to buy an opportunity for an actual interview.
> People are applying to thousands of jobs using automated tools
Employers were already screening thousands of applications using automated tools for years. Candidates are catching up to the automation cat-and-mouse game.
I've found that doing some research and finding the phone number of the hiring person and calling them directly is very powerful.
Not specific to this update, but I wanted to chime in with just how useful Claude has been, and relatively better than ChatGPT and GitHub copilot for daily use. I've been pro for maybe 6 months. I'm not a power user leveraging their API or anything. Just the chat interface, though with ever more use of Projects, lately. I use it every day, whether for mundane answers or curiosities, to "write me this code", to general consultation on a topic. It has replaced search in a superior way and I feel hugely productive with it.
I do still occasionally pop over to ChatGPT to test their waters (or if Claude is just not getting it), but I've not felt any need to switch back or have both. Well done, Anthropic!
I really don't get their model. They have very advanced models, but the service overall seems to be a jumble of priorities. Some examples:
Anthropic doesn't offer an unlimited chatbot service, only plans that give you "more" usage, whatever that means. If you have an API key, you are "unlimited," so they have the capability. Why doesn't the chatbot allow one to use their API key in the Claude app to get unlimited usage? (Yes, I know there are third-party BYOK tools. That's not the question.)
Claude appears to be smart enough to make an Excel spreadsheet with simple formulae. However, it is apparently prevented from making any kind of file. Why? What principle underlies that guardrail that does not also apply to Computer Use?
Really want to make Claude my daily driver, but right now it often feels too much like a research project.
What do you mean by “file” here? I’m making files on a daily basis, including CSVs, html, executable code, XML, JSON and other formats. It built me an entire visual wireframe for something the other day.
Are you using artefacts?
But I’m maybe misunderstanding your point because my use is relatively basic through the built in chatbot.
I asked it to generate a very basic Excel file. It generated text as Markdown. I reiterated that I want an Excel file with formulae and it provided this as part of its response:
----
No, I am not able to generate or create an actual Excel file. As an AI language model, I don't have the capability to create, upload, or send files of any kind, including Excel spreadsheets.
----
Even with the API, depending on what tier you're on, there are daily limits. OpenAI used to be able to generate files for you; they changed that. It was useful.
Interestingly enough, after Claude refused to generate a file for me, I sent the same request to ChatGPT and got the Excel file I wanted.
I wasn't aware of tiers in the Claude API, they are not mentioned on the API pricing page. Are the limits disclosed or just based on vibes like they are for the chatbot?
Great work by Anthropic!
After paying for ChatGPT and OpenAI API credits for a year, I switched to Claude when they launched Artifacts and never looked back.
Claude Sonnet 3.5 is already so good, especially at coding. I'm looking forward to testing the new version if it is, indeed, even better.
Sonnet 3.5 was a major leap forward for me personally, similar to the GPT-3.5 to GPT-4 bump back in the day.
Claude is amazing. The project documents functionality makes it a clear leader ahead of ChatGPT and I have found it to be the clear leader in coding assistance over the past few months. Web automation is really exciting.
I look forward to the brave new future where I can code a webapp without ever touching the code, just testing, giving feedback, and explaining discovered bugs to it and it can push code and tweak infrastructure to accomplish complex software engineering tasks all on its own.
It's going to be really wild when Claude (or another AI) can make a list of possible bugs and UX changes and just ask the user for approval to greenlight the change.
Great progress from Anthropic! They really shouldn't change models from under the hood, however. A name should refer to a specific set of model weights, more or less.
On the other hand, as long as it's actually advancing the Pareto frontier of capability, re-using the same name means everyone gets an upgrade with no switching costs.
Though, all said, Claude still seems to be somewhat of an insider secret. "ChatGPT" has something like 20x the Google traffic of "Claude" or "Anthropic".
https://trends.google.com/trends/explore?date=now%201-d&geo=...
> Great progress from Anthropic! They really shouldn't change models from under the hood, however. A name should refer to a specific set of model weights, more or less.
In the API (https://docs.anthropic.com/en/docs/about-claude/models) they have proper naming you can rely on. I think the shorthand of "Sonnet 3.5" is just the "consumer friendly" name user-facing things will use. The new model in API parlance would be "claude-3-5-sonnet-20241022" whereas the previous one's full name is "claude-3-5-sonnet-20240620"
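As a sketch of what pinning looks like in practice, assuming the `anthropic` Python SDK (the `build_request` helper is mine, not part of the SDK):

```python
# Dated model IDs from Anthropic's model docs; the friendly name
# "Sonnet 3.5" can silently move between these, the dated IDs cannot.
NEW_SONNET = "claude-3-5-sonnet-20241022"
OLD_SONNET = "claude-3-5-sonnet-20240620"

def build_request(prompt: str, model: str = NEW_SONNET) -> dict:
    """Assemble the kwargs for client.messages.create(**kwargs)."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# With the SDK installed and ANTHROPIC_API_KEY set, the call would be:
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_request("Hello"))
```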
That's great to know - business customers require a lot more stability, I suppose!
There was a recent article[0] trending on HN a about their revenue numbers, split by B2C vs B2B.
Based on it, it seems like Anthropic is 60% of OpenAI API-revenue wise, but just 4% B2C-revenue wise. Though I expect this is partly because the Claude web UI makes 3.5 available for free, and there's not that much reason to upgrade if you're not using it frequently.
[0]: https://www.tanayj.com/p/openai-and-anthropic-revenue-breakd...
3.5 is rate limited free, same as 4o (4o's limits are actually more generous). I think the real reason is much simpler - Claude/Anthropic has basically no awareness in the general public compared to Open AI.
The chatGPT site had over 3B visits last month (#11 in Worldwide Traffic). Gemini and Character AI get a few hundred million but Claude doesn't even register in comparison. [0]
Last they reported, OpenAI said they had 200M weekly active users.[1] Anthropic doesn't have anything approaching that.
[0] https://www.similarweb.com/blog/insights/ai-news/chatgpt-top...
[1] https://www.reuters.com/technology/artificial-intelligence/o...
I basically have to tell most of my coworkers to stop using GPT and switch to Claude for coding - Sonnet 3.5 is the first model that I feel isn't wasting my time.
They also had a very limited roll-out at first. Until somewhat recently Canada and Europe were excluded from the list of places they allowed sign-ups from.
I suppose business customers are savvy and will do enough research to find the best cost-performance LLM. Whereas consumers are more brand and habit oriented.
I do find myself running into Claude limits with moderate use. It's been so helpful, saving me hours of debugging some errors w/ OSS products. Totally worth $20/mo.
Traveling to the US recently, I was surprised to see Claude ads around the city and in the airport. It seems like they're investing in marketing there.
In my country I've never seen anyone mention them at all.
Been traveling more recently, and I've seen those ads in major cities like NYC or San Francisco, but not Miami.
I have been a paying ChatGPT customer for a long time (since the very beginning). Last week I compared ChatGPT to Claude, and the results (to my eye) were better: the output was better structured, and the canvas works better. I'm on the edge of jumping ship.
For Python, at least, Sonnet's code is much more elegant, well composed, and thoughtfully written. It also seems to be biased towards more recent code, whereas the GPT models can't even properly write an API call to OpenAI itself.
o1 is pretty decent as a roto-rooter, i.e. for the type of task that requires both lots of instruction and lots of context. I honestly think half the reason it works as well as it does is that it's able to properly mull over the user's true intent, which usually takes the multiple shots that nobody has the patience for.
It is appalling how bad GPT-4o is at writing API calls to OpenAI using Python. It's as if OpenAI hasn't updated its own documentation in the GPT-4o training data since GPT-3.5.
I constantly have the problem that it thinks it needs to write code for the 0.28 version of the SDK. It'll be writing >1.0 code revision after revision, and then just randomly fall back to the old SDK which doesn't work at all anymore. I always write code for interfacing with OpenAI's APIs using Claude.
Claude is the daily driver. GPT-O1 for complicated tasks. For example, questions where linear reasoning is not enough like advanced rust ownership questions.
I jumped ship in April of this year and haven’t looked back.
Use the best tool available for your needs. Don’t get trapped by a feeling of sunk cost.
I'd jump ship if it weren't for the real time voice chat. It's extremely powerful for beginner conversation language learning. Hoping that a company will make use of the real time api for a dedicated language learning app soon.
I started liking AI as a tool for coding once I switched to Claude.
Anthropic's rate limits are very low, sadly, even for paid customers. You can use the API of course, but it's not as convenient and may be more expensive.
They seem to be heavily concentrating on API/business use rather than the chat app, and this is where most of their revenue comes from (the opposite of OpenAI), but I'm just glad they provide free Sonnet 3.5 chat. I wonder if this is being upgraded to the new 3.5?
Edit: The web site and iPhone app are both now identifying themselves as "Claude Sonnet 3.5 (New)".
I hit their rate limit one night with about 25 chat interactions in less than 60 minutes. This was during off hours too when competition for resources should have been low.
Interesting. I couldn't imagine giving up o1-preview right now, even with just 30/week.
And I do get some value from advanced voice mode, although it would be a lot more if it were unlimited.
> I'm on the edge of jumping ship.
Yeah I think I might also jump ship. It’s just that chatGPT now kinda knows who I am and what I like and I’m afraid of losing that. It’s probably not a big deal though.
Have it print a summary of you and stick it in your prompt
Yeah, there was an interesting prompt making rounds recently, something like "Summarize everything you know about me" and leveraging ChatGPT's memory feature to provide insights about oneself.
My only trouble with the memory feature is it remembers things that aren't important, like "user is trying to write an async function" and other transient tasks, which is more about what I was doing some random Tuesday and not who I am as a user.
> My only trouble with the memory feature is it remembers things that aren't important, like "user is trying to write an async function"
This wasn't a problem until a week or two ago in my case, but lately it feels like it's become much more aggressive in trying to remember everything as long-term defining features. (It's also annoying on the UI side that it tells you "Memory updated", but if you click through and go to the list of memories it has, the one it just told you it stored doesn't appear there! So you can't delete it right away when it makes a mistake, it seems to take at least a few minutes until that part of the UI gets updated.)
I find it funny what it decides to add to memory though. There's a lot more 'Is considering switching from mypy to pyright." than stuff like 'Is a python developer creating packages in X-space.'.
Did that too with interesting results.
Wow, that's a new form of vendor lock-in: their software knows me better instead of the other way around.
Tried my standard go-to for testing: asked it to generate a voronoi diagram using p5js. For the sake of job security I'm relieved to see it still can't do a relatively simple task with ample representation in Google search results. Granted, p5js is kind of niche, but not terribly so; it's arguably the most popular library for creative coding.
In case you're wondering, I tried o1-preview, and while it did work, I was also initially perplexed why the result looked pixelated. Turns out, that's because many of the p5js examples online use a relatively simple approach where they just see which cell-center each pixel is closest to, more or less. I mean, it works, but it's a pretty crude approach.
Now, granted, you're probably not doing creative coding at your job, so this may not matter that much, but to me it was an example of pretty poor generalization capabilities. Curiously, Claude has no problem whatsoever generating a voronoi diagram as an SVG, but writing a script to generate said diagrams using a particular library eluded it. It knows how to do one thing but generalizes poorly when attempting to do something similar.
Really hard to get a real sense of capabilities when you're faced with experiences like this, all the while somehow it's able to solve 46% of real-world python pull-requests from a certain dataset. In case you're wondering, one paper (https://cs.paperswithcode.com/paper/swe-bench-enhanced-codin...) found that 94% of the pull-requests on SWE-bench were created before the knowledge cutoff dates of the latest LLMs, so there's almost certainly a degree of data-leakage.
It's surprising how much knowledge is not easily googleable and can only be unearthed by deep-diving into OSS or asking an expert. I recently was debugging a rather naive gstreamer issue where I was seeing a delay in the processing. ChatGPT, Claude, and Google were all unhelpful. I spent the next couple of days reading the source code, found my answer, and thought it was a bug.
Asked the mailing list, and my problem was solved in 10 seconds by someone who could identify the exact parameter that was missing (and IMO, required some architecture knowledge on how gstreamer worked - and why the unrelatedly named parameter would fix it). The most difficult problems fall into this camp - I don't usually find myself reaching for LLMs when the problem is trivial unless it involves a mountain of boilerplate.
Maybe LLMs helping blind people like me play video games that normally aren't accessible to us is getting closer!
Definitely! Those with movement disabilities could have a much easier time if they could just dictate actions to the computer and have them completed with some reliability.
I wonder when it'll actually be available in the Bedrock AU region, because as of right now we're still stuck using mid-range models from a year ago.
Amazon has really neglected ap-southeast-2 when it comes to LLMs.
Can you not use cross-region inference?
90% of our customers do not allow this due to data sovereignty.
Bedrock here is lagging so far behind that several customers assume AWS simply isn't investing here anymore, or if they are, it's an afterthought, and a very expensive one at that.
I've spoken with several account managers and SAs and they seem similarly frustrated with the continual response from above that useful models are "coming soon".
You can't even BYO models here, we usually end up spinning up big ol' GPU EC2 instances and serving our own, or for some tasks running locally as you can get better openweight LLMs.
my quick notes on Computer Use:
- "computer use" is basically using Claude's vision + tool use capability in a loop. There's a reference impl but there's no "claude desktop" app that just comes with this OOTB
- they're basically advertising that they bumped up Claude 3.5's screen vision capability. we discussed the importance of this general computer agent approach with David on our pod https://x.com/swyx/status/1771255525818397122
- @minimaxir points out questions on cost. Note that the vision use is very sparing - the loop is I/O constrained - it waits for the tool to run and then takes a screenshot, then loops. for a simple 10 loop task at max resolution, Haiku costs <1 cent, Sonnet 8 cents, Opus 41 cents.
- beating o1-preview on SWEbench Verified without extended reasoning and at 4x cheaper output per token (a lot cheaper in total tokens since no reasoning tokens) is ABSOLUTE mogging
- New 3.5 Haiku is 68% cheaper than Claude Instant haha
references i had to dig a bit to find
- https://www.anthropic.com/pricing#anthropic-api
- https://docs.anthropic.com/en/docs/build-with-claude/vision#...
- loop code https://github.com/anthropics/anthropic-quickstarts/blob/mai...
- some other screenshots https://x.com/swyx/status/1848751964588585319
- https://x.com/alexalbert__/status/1848743106063306826
- model card https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Cla...
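The "vision + tool use in a loop" pattern from the first bullet can be sketched roughly like this. `take_screenshot`, `model_step`, and `execute` are stand-ins I'm assuming for the screen capture, vision-model call, and OS-level action driver; this is not Anthropic's actual reference implementation:

```python
from typing import Callable

def agent_loop(model_step: Callable[[bytes], dict],
               take_screenshot: Callable[[], bytes],
               execute: Callable[[dict], None],
               max_steps: int = 10) -> int:
    """Screenshot -> ask model for the next action -> execute -> repeat."""
    steps = 0
    for _ in range(max_steps):
        shot = take_screenshot()          # I/O-bound: capture current screen
        action = model_step(shot)         # vision call: decide next action
        if action.get("type") == "done":  # model signals task completion
            break
        execute(action)                   # e.g. click/type via an OS driver
        steps += 1
    return steps
```

Each iteration costs one vision call, which is where the per-loop pricing estimates above come from.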
Haven't used vision models before; can someone comment on whether they are good at "pointing at things"? E.g. given a picture, give the coordinates for the text "foo".
This is the key to accurate control, it needs to be very precise.
Maybe Claude's model is trained for this. Also, what about open-source vision models? Are any of them good at pointing at things on a typical computer screen?
See https://github.com/OpenAdaptAI/OpenAdapt for an open source implementation that includes a desktop app OOTB.
For me, one of the more useful steps on macOS will be when local AI can manipulate anything that has an Apple Script library. The hooks are there and decently documented. For meta purposes, having AI work with a third-party app like Keyboard Maestro or Raycast will even further expand the pre-built possibilities without requiring the local AI to reinvent steps or tools at the time of each prompt.
Is there an easy way to use Claude as a Co-Pilot in VS Code? If it is better at coding, it would be great to have it integrated.
You can use it in Cursor - called "Cursor Tab"
IMO Cursor Tab performs much better than Co-Pilot, easily works through things that would cause Co-Pilot to get stuck, you should give it a try
It's funny that cursor.sh, with < 30 developers, has a better autocomplete model than Microsoft
As I understand it, Cursor Tab autocomplete uses their own model. Only the chat has Sonnet and co.
Ah, I thought it used the model selected for your prompts. Either way, it seems to work very well
I originally thought that too but learned yesterday they have their own model. Definitely explains how it's so fast and accurate!
For Copilot-like use, Continue is the plugin you're looking for, though I would suggest using a cheaper/faster model to get inline completions.
For Cursor-like use (giving prompts and letting it create and modify files across the project), Cline – previously Claude Dev – is pretty good.
Cody by Sourcegraph has unlimited code completions for Claude & a very generous monthly message limit. They don't have this new version I think but they roll these out very fast.
Cody (https://cody.dev) will have support for the new Claude 3.5 Sonnet on all tiers (including the free tier) asap. We will reply back here when it's up.
Thank you for Cody! I enjoy using it, and the chat is perfect for brainstorming and iterating. Selecting code + asking it to edit makes coding so much fun. Kinda feel like a caveman at work without it :)
We are live!
You can easily use a plugin like https://www.continue.dev/ and configure it to use Claude 3.5 Sonnet.
Tabnine includes Claude as an option. I've been using it to compare Claude Sonnet to Chatgpt-4o and Sonnet is clearly much better.
You can use Cursor (VS fork) with private Anthropic key
Cursor uses Claude as its base model.
There may be extensions for VScode to do it but it will never be allowed in Copilot unless MS and OpenAI have a falling out.
Continue.dev's VS Code extension is fantastic for this
Codeium (cheapest), double.bot and continue.dev (with api key) have Claude in chat.
https://github.com/cline/cline (with api key) has Claude as agent.
Why on god's green earth is it not just called Claude 3.6 Sonnet. Or Claude 4 Sonnet.
I don't actually care what the answer is. There's no answer that will make it make sense to me.
The best answer I've seen so far is that "Claude 3.5 Sonnet" is a brand name rather than a specific version. Not saying I agree, just a way to visualize how the team is coming up with marketing.
I tried to get it to translate a document and it stopped after a few paragraphs and asked if I wanted it to keep going. This is not appropriate for my use case and it kept doing this even though I explicitly told it not to. The old version did not do this.
I noticed some timeouts today. Could be capacity limits from the announcement
What I'd like to know is whether prompt caching is available to Claude on AWS Bedrock now.
Of course there's great inefficiency in having the Claude software control a computer with a human GUI mediating everything, but it's necessary for many uses right now given how much we do where only human interfaces are easily accessible. If something like it takes off, I expect interfaces for AI software would be published, standardized, etc. Your customers may not buy software that lacks it.
But what I really want to see is a CLI. Watching their software crank out Bash, vim, Emacs!, etc. - that would be fascinating!
I hope specialized interfaces for AI never happen. I want AI to use human interfaces, because I want to be empowered to use the same interfaces as AI in the future. A future where only AI can do things because it uses an incomprehensible special interface and the human interface is broken or non-existent is a dystopia.
I also want humanoid robots instead of specialized non-humanoid robots for the same reason.
Comment was deleted :(
Maybe we'll end up with both, kind of like how we have scripting languages for ease of development, but we also can write assembly if we need bare metal access for speed.
I agree. I bet models could excel at CLI tasks, since the feedback would be immediate and in a language they can readily consume. It's probably much easier for them to handle "command requires 2 arguments and only 1 was provided" than to do image-to-text on an error modal and apply context to figure out what went wrong.
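For example, the difference is easy to see with a small subprocess wrapper: a CLI error arrives as plain text the model can read directly. An illustrative sketch:

```python
import subprocess
import sys

def run(cmd):
    # Capture exit code + stderr so an agent can read the error as text,
    # instead of doing image-to-text on an error dialog screenshot.
    p = subprocess.run(cmd, capture_output=True, text=True)
    return p.returncode, p.stderr.strip()

# Simulate a command failing with a readable message:
code, err = run([sys.executable, "-c", "import sys; sys.exit('missing argument')"])
print(code, err)  # -> 1 missing argument
```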
That's too much control for my taste. I don't want Anthropic to see my screen. I'd rather have a VS Code with integrated Claude - a version that can see all my dev files in a given folder. I don't need it to run Chrome for me.
It just depends on the task I suppose. One could have a VM dedicated to a model and let it control it freely to accomplish some set of tasks, then wipe/redeploy if it ever breaks.
Well, that's another way of saying "not allowing it to see my screen" ;)
Claude's current ability to use computers is imperfect. Some actions that people perform effortlessly—scrolling, dragging, zooming—currently present challenges for Claude and we encourage developers to begin exploration with low-risk tasks.
Nice, but I wonder why they didn't use UI automation/accessibility libraries, which have access to the semantic structure of apps/web pages, as well as accessing documents directly instead of having Excel display them for you.
We use operating system accessibility APIs when available in https://github.com/OpenAdaptAI/OpenAdapt.
I wonder if the model has difficulties for the same reason some people do - UI affordance has gone down with the flattening, hover-to-see-scrollbar, hamburger-menu-ization of UIs.
I'd like to see a model trained on a Windows 95/NT style UI - would it have an easier time with each UI element having clearly defined edges, clearly defined click and dragability, unified design language, etc.?
What the UI looks like has no effect on, for example, the Windows UI Automation libraries. The way the tech works is that it queries the process directly for the semantic description of items: here's a button called 'Delete', here's a list of TODO items - and you get the tree structure directly from the API.
Even if they are working off of screenshots, I wouldn't be surprised if they trained their models on screenshots annotated by those same automation libraries, which told the AI which pixel is what.
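To illustrate what that semantic query looks like: the nested dict below is a toy stand-in for the element tree an accessibility API (e.g. Windows UI Automation) would return, and the lookup never touches pixels. The tree contents are made up.

```python
# Toy stand-in for the semantic tree a UI Automation API exposes.
ui_tree = {
    "role": "window", "name": "TODO App", "children": [
        {"role": "list", "name": "TODOs", "children": [
            {"role": "listitem", "name": "buy milk", "children": []},
        ]},
        {"role": "button", "name": "Delete", "children": []},
    ],
}

def find(node, role, name):
    # Depth-first search by role + accessible name -- no screenshots needed.
    if node["role"] == role and node["name"] == name:
        return node
    for child in node["children"]:
        hit = find(child, role, name)
        if hit is not None:
            return hit
    return None

print(find(ui_tree, "button", "Delete")["name"])  # -> Delete
```

The real APIs return the same kind of role/name tree, which is why the visual style of the UI doesn't matter to them.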
I think this is to make the human/user experience better. If you use accessibility features, the user needs to know how to use those features. Similar to another comment here, the UX they're shooting for is "click the red button with cancel on it", shipped ASAP.
This demo is impressive although my initial reaction is a sort of grief that I wasn't born in the timeline where Alan Kay's vision of object-oriented computing was fully realized -- then we wouldn't have to manually reconcile wildly heterogeneous data formats and interfaces in the first place!
This looks quite fantastic!
Nice improvements in scores across the board, e.g.
> On coding, it [the new Sonnet 3.5] improves performance on SWE-bench Verified from 33.4% to 49.0%, scoring higher than all publicly available models—including reasoning models like OpenAI o1-preview and specialized systems designed for agentic coding.
I've been using Sonnet 3.5 for most of my AI-assisted coding and I'm already very happy (using it with the Zed editor, I love the "raw" UX of its AI assistant), so any improvements, especially seemingly large ones like this are very welcome!
I'm still extremely curious about how Sonnet 3.5 itself, and its new iteration are built and differ from the original Sonnet. I wonder if it's in any way based on their previous work[0] which they used to make golden-gate Claude.
[0]: https://transformer-circuits.pub/2024/scaling-monosemanticit...
I'm waiting for the Aider benchmark
Fascinating. Though I expect people to be concerned about privacy implications of sending screenshots of the desktop, similar to the backlash Microsoft has received about their AI products. Giving the remote service actual control of the mouse and keyboard is a whole another level!
But I am very excited about this in the context of accessibility. Screen readers and screen control software is hard to develop and hard to learn to use. This sort of “computer use” with AI could open up so many possibilities for users with disabilities.
The key difference is that Microsoft Recall wasn't opt-in.
There's such a gulf between choosing to send screenshots to Anthropic and Microsoft recording screenshots without user intent or consent.
I suspect businesses will create VDIs or VMs for this express purpose: one, because it scales better, and two, because you can more easily control what it has access to and isolate those functions.
> I expect people to be concerned about privacy implications of sending screenshots of the desktop
That's why in https://github.com/OpenAdaptAI/OpenAdapt we've built in several state-of-the-art PII/PHI scrubbers.
I've seen quite a few YC startups working on AI-powered RPA, and now it looks like a foundational model player is directly competing in their space. It will be interesting to see whether Anthropic will double down on this or leave it to third-party developers to build commercial applications around it.
We're one of those players (https://github.com/Skyvern-AI/skyvern) and we're definitely watching the space with a lot of excitement
We thought it was inevitable that OpenAI / Anthropic would veer into this space and start to become competitive with us. We actually expected OpenAI to do it first!
What this confirms is that there is significant interest in computer/browser automation, and the problem is still unsolved. We will see whether the automation itself is an application-layer problem (our approach) or whether the model needs to be intertwined with the application (Anthropic's approach here)
Seems like both:
- AI Labs will eat some of the wrappers on top of their APIs - even complex ones like this. There are whole startups that are trying to build computer use.
- AI is fitting _some_ scaling law - the best models are getting better and the "previously-state-of-the-art" models are fractions of what they cost a couple years ago. Though it remains to be seen if it's like Moore's Law or if incremental improvements get harder and harder to make.
It seems a little silly to pretend there’s a scaling “law” without plotting any points or doing a projection. Without the mathiness, we could instead say that new models keep getting better and we don’t know how long that trend will continue.
> It seems a little silly to pretend there’s a scaling “law” without plotting any points or doing a projection.
Isn't this Kaplan 2020 or Hoffmann 2022?
Yes, those are scaling laws, but when we see vendors improving their models without increasing model size or training longer, they don't apply. There are apparently other ways to improve performance and we don't know the laws for those.
(Sometimes people track the learning curve for an industry in other ways, though.)
> developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text.
So, this is how AI takes over the world.
Looks like visual understanding of diagrams is improved significantly! For example, it was on par with Chat GPT 4o and Gemini 1.5 in parsing an ERD for a conceptual model, but now far excels over the others.
Not that I'm scared of this update but I'd probably be alright with pausing llm development today, atleast in regard to producing code.
I don't want an llm to write all my code, regardless of if it works, I like to write code. What these models are capable of at the moment is perfect for my needs and I'd be 100% okay if they didn't improve at all going forward.
Edit: also I don't see how an llm controlled system can ever replace a deterministic system for critical applications.
I have trouble with this too. I'm working on a small side project and while I love ironing out implementation details myself, it's tough to ignore the fact that Claude/GPT4o can create entire working files for me on demand.
It's still enjoyable working at a higher architecture level and discussing the implementation before actually generating any code though.
I don't mind using it to make inline edits, or more global edits across files, at my discretion and according to my instructions. It definitely saves tons of time and allows me to be more creative, but I don't want it making decisions on its own any more than it already does.
I tried using the composer feature on Cursor.sh, that's exactly the type of llm tool I do not want.
They should just adopt Apple "version numbers:" Claude Sonnet (Late 2024).
How does the computer use work -- Is this a desktop app they are providing that can do actions on your computer? Didn't see any such mention in the post
See https://github.com/OpenAdaptAI/OpenAdapt for an open source alternative that includes a desktop app.
It's a sandboxed compute environment, using gVisor or Firecracker or similar, which exposes a browser environment to the LLM.
modal.com's modal.Sandbox can be the compute layer for this. It uses gVisor under the hood.
Is there any Python/Node.js library to easily spawn secure isolated compute environments, possibly using gvisor or firecracker under the hood?
This could be useful to build a self-hosted "Computer use" using Ollama and a multimodal model.
Quickstart is here: https://github.com/anthropics/anthropic-quickstarts/tree/mai...
It is a Docker container providing a remote desktop you can watch; they strongly recommend you also run it inside a VM.
Wow, there's a whole industry devoted to what they're calling "Computer Use" (Robotic Process Automation, or RPA). I wonder how those folks are viewing this.
The "computer use" demos are interesting.
It's a problem we used to work on, and one many people have wanted to solve for at least 10 years. So it's yet to be seen how well it works outside a demo.
What was surprising was the slow/human speed of operations. It types into the text boxes at a human speed rather than just dumping the text there. Is it so the human can better monitor what's happening, or so it doesn't trigger CAPTCHAs?
Anybody know how the hell they're combating / gonna combat captchas, Cloudflare blocking, etc.? I remember playing in this space on a toy project and being utterly frustrated by anti-scraping. Maybe one good thing that will come out of this AI boom is that companies will become nicer to scrapers? Or maybe they'll just cut sweetheart deals?
It improves to 25.9 over the previous version of Claude 3.5 Sonnet (24.4) on NYT Connections: https://github.com/lechmazur/nyt-connections/.
Perhaps it's just because English is not my native language, but prompt 3 isn't quite clear at the beginning where it says "group of four. Words (...)". It's not explained what the group of four must be. If I change the prompt to "group of four words", Claude 3.5 manages to answer; without that, Claude says it's unclear and can't answer.
What a neat benchmark! I'm blown away that o1 absolutely crushes everyone else on this. I guess the chain of thought really hashes out those associations.
Isn't it possible that o1 was also trained on this data (or something super similar) directly? The score seems disproportionately high.
This is what the Rabbit "large action model" pretended to be. Wouldn't be surprised to see them switch to this and claim they were never lying about their capabilities because it works now.
Pretty cool for sure.
I think Rabbit had the business model wrong though, I don't think automating UI's to order pizza is anywhere near as valuable as automating the app workflows for B2B users.
Does this make cursor obsolete?
You can just use any IDE you want and it will work with it.
Even assuming this new computer-interactivity feature runs as fast as Cursor's composer (which I don't think it does), it still doesn't support codebase indexing, inline edits, or references to other variables and files in the codebase. I can see how someone could use this to build some sort of Cursor competitor, but out of the box there's a very low likelihood it makes Cursor obsolete.
If anyone would like to try the new Sonnet in VSCode. I just updated https://double.bot to the new Sonnet. (disclaimer: I am the cofounder/creator)
---
Some thoughts:
* Will be interesting to see what we can build in terms of automatic development loops with the new computer use capabilities.
* I wonder if they are not releasing Opus because it's not done or because they don't have enough inference compute to go around, and Sonnet is close enough to state of the art?
I am surprised they use macOS for the demo, as I thought it would be harder to control vs Ubuntu. But maybe macOS is also the most predictable/reliable desktop environment? I noticed they use a virtual environment for the demo; curious how they built that along with Docker - is it leveraging the latest virtualization framework from Apple?
This "Computer use" demo:
https://www.youtube.com/watch?v=jqx18KgIzAE
shows Sonnet 3.5 using the Google web UI in an automated fashion. Do Google's terms really permit this? Will Google permit this when it is happening at scale?
I wonder how they could combat it if they choose to disallow AI access through human interfaces. Maybe more captchas, anti-AI design language, or even more tracking of the user's movements?
Comment was deleted :(
Interesting stuff, i look forward to future developments.
A comment about the video: Sam Runger talks wayyy too fast, in particular at the beginning.
I skimmed through the computer use code. It's possible to build this with other AI providers too. For instance, you can ask the ChatGPT API to call functions for click, scroll, and type with specific parameters, and execute them using the OS's APIs (usually accessibility/A11y APIs)
Did I miss something? Did they have to make changes to the model for this?
> execute them using OS's APIs (A11y APIs usually)
I wonder if we'll end up with a new set of AI APIs in Windows, macOS, and Linux in the future. Maybe an easier way for them to iterate through windows and the UI elements available in each.
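A sketch of that function-calling approach: define click/scroll/type as tools in the JSON-Schema style most function-calling APIs use, then map the model's tool calls onto input APIs on the client. Exact field names vary by provider, and the dispatcher here is a stub rather than real OS input code.

```python
# Illustrative tool definitions (JSON-Schema style; exact field names
# differ between providers).
tools = [
    {"name": "click",
     "description": "Click at screen coordinates",
     "parameters": {"type": "object",
                    "properties": {"x": {"type": "integer"},
                                   "y": {"type": "integer"}},
                    "required": ["x", "y"]}},
    {"name": "type_text",
     "description": "Type a string at the current focus",
     "parameters": {"type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"]}},
]

def dispatch(call):
    # Client side: map the model's tool call onto OS input/accessibility
    # APIs. Stubbed here to return a description of the action instead.
    if call["name"] == "click":
        return f"clicked ({call['args']['x']}, {call['args']['y']})"
    if call["name"] == "type_text":
        return f"typed {call['args']['text']!r}"
    raise ValueError(f"unknown tool: {call['name']}")

print(dispatch({"name": "click", "args": {"x": 100, "y": 200}}))  # -> clicked (100, 200)
```

The model never touches the machine; the client decides how each declared tool is actually executed.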
How long until "computer use" is tricked into entering PII or PHI into an attackers website?
I imagine initial computer use models will be kind of like untrained or unskilled computer users today (for example, some kids and grandparents). They'll do their best but will inevitably be easy to trick into clicking unscrupulous links and UI elements.
Will an AI model be able to correctly choose between a giant green "DOWNLOAD NOW!" advertisement/virus button and a smaller link to the actual desired file?
Exactly. Personalized ads are now prompt injection vectors.
Can this solve CAPTCHAs for me? It's starting to get to the point where limited biological brains can't do them.
Comment was deleted :(
Hopefully the coding improvements are meaningful, because I find that as a coding assistant o1-preview beats it (at least the Claude 3.5 that was available yesterday). But I like Claude's demeanor more (I know this sounds crazy, but it matters a bit to me).
Cursor AI already have the option to switch to using claude-3-5-sonnet-20241022 in the chat box.
I was about to try to add a custom API. I’m impressed by the speed of that team.
It's literally just adding one extra entry to a configuration file.
I know, but a similar update to Copilot would probably take over a year, and they designed it so that we got the update now without having to reinstall anything.
Off-topic, but YouTube doesn't allow me to view the embedded video, showing a "Sign in to confirm you're not a bot" message. I need to open a dedicated YouTube tab to watch it.
The barrier to scraping youtube has increased a lot recently, I can barely use yt-dlp anymore
That's funny. I was recently scraping tens of thousands of YouTube videos with yt-dlp. I would encounter throttling of some kind where yt-dlp stopped working, but I'd just spin a new VPS up and the throttled VPS down when that happened. The throttling effort cost me ~1 hour of writing the logic to handle it.
I say that's funny because my guess would be they want to block larger-scale scraping efforts like mine, but completely failed, while their attempt at throttling puts captchas in front of legitimate users.
Did they just invent a new world of warcraft or runescape bot?
Comment was deleted :(
This bolsters my opinion that OpenAI is falling rapidly behind. Presumably due to Sam's political machinations rather than hard-driving technical vision, at least that's what it seems like, outside looking in.
Computer use seems like it might be good for e2e tests.
They need to get the price of 3.5 Haiku down. It's about 2x the price of 4o-mini.
Still super cheap
Precisely this.
Aider (with the older Claude models) is already a semi-competent junior developer, and it will produce 1kloc of decent code for the equivalent of 50 cents in API costs.
Sure, you still have to review the commits, but you have to do that anyway with human junior developers.
Anthropic could charge 20x more and we would still be happy to pay it.
> "... and similar speed to the previous generation of Haiku."
To me this is the most annoying grammatical error. I can't wait for AI to take over all prose writing so this egregious construction finally vanishes from public fora. There may be some downsides -- okay, many -- but at least I won't have to read endless repetitions of "similar speed to ..." when the correct form is obviously "speed similar to".
In fact, in time this correct grammar may betray the presence of AI, since lowly biologicals (meaning us) appear not to either understand or fix this annoying error without computer help.
Claude is absurdly better at coding tasks than OpenAI. Like it's not even close. Particularly when it comes to hallucinations. Prompt for prompt, I see Claude being rock solid and returning fully executable code, with all the correct imports, while OpenAI struggles to even complete the task and will make up nonexistent libraries/APIs out of whole cloth.
I've been using a lot of o1-mini and having a good experience with it.
Yesterday I decided to try Sonnet 3.5. I asked for a simple but efficient script to perform fuzzy matching of strings in Python. Strangely, it didn't even mention existing fast libraries like FuzzyWuzzy and RapidFuzz. It went on to create everything from scratch using standard libraries. I don't know, I thought this was something basic for it to stumble on.
Just ask it to use the libraries you want; you can't expect it to magically read your mind. You need to guide every LLM toward your must-haves and nice-to-haves.
Yeah, Sonnet is noticeably better. To the point that OpenAI is almost unusable; too many small errors.
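For reference, the stdlib-only route the model apparently took can be this short; difflib ships with Python, while RapidFuzz is the faster choice for real workloads:

```python
# Stdlib-only fuzzy matching with difflib; RapidFuzz implements the same
# idea much faster for large inputs.
from difflib import SequenceMatcher, get_close_matches

def similarity(a, b):
    # Ratio in [0, 1] based on longest matching blocks.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(round(similarity("fuzzy wuzzy", "fuzy wuzy"), 2))
print(get_close_matches("appel", ["apple", "ape", "peach"], n=1))  # -> ['apple']
```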
This is bad news for SWEs!
> Claude 3.5 Haiku matches the performance of Claude 3 Opus
Oh wow!
Can Claude create and run a CI/CD pipeline now from a prompt?
I checked the docs but couldn't find this: does Claude have an API like the GPT Assistants API, with the ability to give it a set of documents to work with?
It seems you can only send individual messages, so you can't rely on it "learning" from predefined documents.
We are approaching FSD for the computer, with all of the lofty promises, and all of the horrible accidents.
Does anyone know how I could check whether my Claude Sonnet version that I am using in the UI has been updated already?
search for "20241022" in network tab in devtools, confirmed for me
Looks like it just takes a screenshot and can't scroll so it might miss things.
Claude 3.5 Haiku will be released later this month.
It can actually scroll.
While we expect this capability to improve rapidly in the coming months, Claude's current ability to use computers is imperfect. Some actions that people perform effortlessly—scrolling, dragging, zooming—currently present challenges for Claude and we encourage developers to begin exploration with low-risk tasks.
Can someone please try this on macOS and 100% verify whether this puppy can scroll or not? Thanks
Does anyone know what some use cases for "computer use" are?
The Aider benchmarks for the new Claude 3.5 are impressive: from 77.4% to 83.5%, beating o1-preview.
Since they didn't rev the version, does this mean that if we were using 3.5 today, it's just automatically using the new version? That doesn't seem great from a change-management perspective.
though I am looking forward to using the new one in cursor.ai
No, Claude's models use date-pinning. The new model endpoint is claude-3-5-sonnet-20241022
While I was initially impressed with its context window, I got so sick of fighting with Claude about what it was allowed to answer that I quit my subscription after 3 months.
Their whole policing AI models stance is commendable but ultimately renders their tools useless.
It actually started arguing with me about whether it was allowed to help implement a GitHub repository's code because it might be copyrighted... it was MIT-licensed open source from Google :/
I just include text saying that I own the device in question and that I have a legal team watching my every move. It's stupid, I agree, but not insurmountable. I had fewer refusals with Claude 3 Opus.
Captchas are toast.
They have been toast for at least a decade, if not two. With OCR and captcha-solving services like DeathByCaptcha or AntiCaptcha, where it costs ~$2.99 per 1k successfully solved captchas, they're a non-issue, and it takes about 5-10 lines of code added to your script to implement a solution.
Scary stuff.
'Hey Claude 3.5 New, pretend I'm a CEO of a big company and need to lay off 20% people, make me a spreadsheet and send it to HR. Oh make sure to not fire the HR department'
c.f. IBM 1979.
I'm unclear: is Haiku supposed to be similar to 4o-mini in use case/cost/performance? If not, do they have an analog?
Probably better than 4o-mini; 4o-mini isn't great in my testing - it loses focus after 100 lines of text.
It's roughly tied in benchmarks
wow, i almost got worried but the cute music and the funny little monster on the desk convinced me that this all just fun and dandy and all will be good. the future is coming and we'll all be much more happy :)
Now I am really curious how to programmatically create a sandboxed compute environment to do a self-hosted "Computer use" and see how well other models, including self-hosted Ollama models, can do this.
and i was just planning to go to sleep…
I discovered Mindcraft recently and stayed up a few hours too late trying to convince my local model to play Minecraft. Seems like every time a new capability becomes available, I can't wait to experiment with it for hours, even at the cost of sleep.
One suggestion: run the following prompt through an LLM:
The combination of the words "computer use" is highly confusing. It's also "Yoda speak". For example it's hard for humans to parse the sentences *"Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku"*, *"Computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku "* (it literally relies on the comma to make any sense) and *"Computer use for automated interaction"* (in the youtube vid's title: this one is just broken english). Please suggest terms that are not confusing for a new ability allowing an AI to control a computer as if it was a human.
I suspect they're going to need some local offload capability for Computer Use; the repeated screen reading can definitely be done locally on modern machines, otherwise the cost may be impractical.
See https://github.com/OpenAdaptAI/OpenAdapt for an open source alternative that runs segmentation locally.
Maybe we need some agent running on the PC to offload some of these tasks. It could scrape the display at 30 or 60 Hz and produce a textual version of what's going on for the model to consume.
Ok I know that we're in the post-nerd phase of computers, but version numbers are there for a reason. 3.6, please? 3.5.1??
Is it just me who feels that Anthropic has been innovating faster than ChatGPT in the past year?
Why not rev the numbers? "3.5" vs. "3.5 New" feels weird -- is there a particular reason why Anthropic doesn't want to call this 3.6 (or even 3.5.1)?
The confusing choice they seem to have made is that "Claude 3.5 Sonnet" is a name, rather than 3.5 being a version. In their view, the model "version" is now `claude-3-5-sonnet-20241022` (and was previously `claude-3-5-sonnet-20240620`).
OpenAI does exactly the same thing, by the way; the named models also have dated versions. For instance, their current models include (only listing versions with more than one dated version for the same "name" version):
gpt-4o-2024-08-06
gpt-4o-2024-05-13
gpt-4-0125-preview
gpt-4-1106-preview
gpt-4-0613
gpt-4-0314
gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
On the one hand, if OpenAI makes a bad choice, it’s still a bad choice to copy it.
On the other hand, OpenAI has moved to a naming convention where they seem to use a name for the model: “GPT-4”, “GPT-4 Turbo”, “GPT-4o”, “GPT-4o mini”. Separately, they use date strings to represent the specific release of that named model. Whereas Anthropic had a name: “Claude Sonnet”, and what appeared to be an incrementing version number: “3”, then “3.5”, which set the expectation that this is how they were going to represent the specific versions.
Now, Anthropic is jamming two version strings on the same product, and I consider that a bad choice. It doesn’t mean I think OpenAI’s approach is great either, but I think there are nuances that say they’re not doing exactly the same thing. I think they’re both confusing, but Anthropic had a better naming scheme, and now it is worse for no reason.
> Now, Anthropic is jamming two version strings on the same product, and I consider that a bad choice. It doesn’t mean I think OpenAI’s approach is great either, but I think there are nuances that say they’re not doing exactly the same thing
Anthropic has always had dated versions as well as the other components, and they are, in fact, doing exactly the same thing, except that OpenAI has a base model in each generation with no suffix before the date specifier (what I call the "Model Class" on the table below), and OpenAI is inconsistent in their date formats, see:
Major Family Generation Model Class Date
claude 3.5 sonnet 20241022
claude 3.0 opus 20240229
gpt 4 o 2024-08-06
gpt 4 o-mini 2024-07-18
gpt 4 - 0613
gpt 3.5 turbo 0125
But did they ever have more than one release of Claude 3 Sonnet? Or any other model prior to today?
As far as I can tell, the answer is “no”. If true, then the fact that they previously had date strings would be a purely academic footnote to what I was saying, not actually relevant or meaningful.
Comment was deleted :(
Well, by calling it 3.5, they are telling you that this is NOT the next-gen 4.0 that they presumably have in the works, and also not downplaying it by just calling it 3.6 (and anyways they are not advancing versions by 0.1 increments - it seems 3.5 was just meant to convey "half way from 3.0 to 4.0"). Maybe the architecture is unchanged, and this just reflects more pre and/or post-training?
Also, they still haven't released 3.5 Opus yet, but perhaps 3.5 Haiku is a distillation of that, indicating that it is close.
From a competitive POV, it makes sense that they respond to OpenAI's 4o and o1 without bumping the version to Claude 4.0, which presumably is what they will call their competitor to GPT-5, and probably not release until GPT-5 is out.
I'm a fan of Anthropic, and not of OpenAI, and I like the versioning and competitive comparisons. That Sonnet 3.5 is still the best coder, better than o1, has to hurt, and a highly performant, cheap Haiku 3.5 will hit OpenAI in the wallet.
For a company selling intelligence, that's a pretty stupid way of labelling a new product.
"computer use" is also as bad a marketing choice as possible for something that actually seems pretty cool.
I'm not sure what a better term is. It's kind of understated to me. An AI that can "use a computer" is a simple straightforward sentence but with wild implications.
Comment was deleted :(
I had no idea what the headline meant before reading the article. I wasn't even sure how to pronounce "use." (Maybe a typo?) I think something like "Claude adds Keyboard & Mouse Control" would be clearer.
I read the headline 5-10 times trying to make sense of it before even clicking on the link.
Native English speaker, just used the other “use” many times
It’s simple and easy to understand what it is, that’s good marketing to my ears.
it makes sense in contrast to "tool use". basically, either fly-by-vision or fly-by-instruments, same dilemma you have in self driving cars
It worked for Nintendo.
The 3ds and “new 3ds” were both big sellers.
3ds doesn't have a version number to bump. Claude 3.5 does.
I hear the Nintendo 4DS was very popular with the higher dimensional beings!
The 3 was the version number ;)
DS and DS Lite were version 1.
DSi was 2 (as there was DSi software that didn't run on the DS or DS Lite).
And the 3DS was version 3.
there /was/ a 2DS, though, and it came after the 3DS.
You can always add a version number (e.g. 3DS2) or a changed moniker (3DS+).
Comment was deleted :(
Speaking of "intelligence", isn't it ironic how everyone's only two words they use to describe AI is "crazy" and "insane". Every other post on Twitter is like: This new feature is insane! This new model is crazy! People have gotten addicted to those words almost as badly as their other new addiction: the word "banger".
Well yeah. This new model is mentally unwell! and This model is a total sociopath! didn't test as well in focus groups.
Every major AI vendor seems to do it with hosted models; within "named" major versions of hosted models, there are also "dated" minor versions. OpenAI does it. Google does it (although for Google Gemini models, dated rather than numbered minor versions seem to be used only for experimental releases like gemini-1.5-pro-exp-0827; stable minor versions get additional numbers like gemini-1.5-pro-002).
exactly my thought too, go up with the version number! Some negative examples: Claude Sonnet 3.5 for Workstations, Claude Sonnet 3.5 XP, Claude Sonnet 3.5 Max Pro, Claude Sonnet 3.5 Elite, Claude Sonnet 3.5 Ultra
Claude Sonnet 3.5 360, Claude Sonnet 3.5 One
2007: "Choose a Vista" - https://www.youtube.com/watch?v=5-feCRQBkSs
2024: "Choose a Claude"?
Comment was deleted :(
Super Claude Sonnet 3.5 Champion Edition, Alpha 3
Let's just say that the LLM companies still are learning how to do versioning in a customer friendly way.
Just guessing here, but I think the name "sonnet" is the architecture, the number is the training structure / method, and the model date (not shown) is the data? So presumably with just better data they improved things significantly? Again, just a guess.
My guess is they didn't actually change the model, that's what the version number no change is conveying. They did some engineering around it to make it respond better, perhaps more resources or different prompts. Same cutoff date too.
Comment was deleted :(
Similar to OpenAI when they update their current models they just update the date, for example this new Claude 3.5 Sonnet is "claude-3-5-sonnet-20241022".
Maybe they've noticed 3.5 Sonnet has become a brand and are pivoting it away from being a version.
Is it OS X all over again?
claude-3-5-sonnet-20241022
claude-3-5-sonnet-20241022-final-final-2
Comment was deleted :(
Because it's a finetune of 3.5 optimized for the computer-use use case.
It's actually accurate, and it's not a 3.6.
So 3.5.1 ?
I think that was the last version number for KDE 3.
Stands out for me as I once replaced a 2.3 Turbo in a TurboCoupe with a 351 Windsor ))
For networks
I don't think that's correct. This looks like a new model. Significant jump in math and gpqa scores.
If the architecture is the same, and the training scripts/data are the same, but the training yielded slightly different weights (still the same model architecture), is it a new model or just an iteration on the same model?
What if it isn't even a re-training from scratch but a fine-tune of an existing model/weights release - is it a new version then? It would be more like an iteration, or even a fork, I suppose.
Yes, it's a new model, but not a Claude 4.
It's the same, but a bit different; Claude 3.6 makes sense to me.
Could be just additional post-training (aka finetuning) for coding/etc.
It's quite sad that application interoperability requires parsing bitmaps instead of exchanging structured information. Feels like a devastating failure in how we do computing.
See https://github.com/OpenAdaptAI/OpenAdapt for an open source alternative that includes operating system accessibility API data and DOM information (along with bitmaps) where available.
We are also planning on extracting runtime information using COM/AppleScript: https://github.com/OpenAdaptAI/OpenAdapt/issues/873
It's super cool to see something like this already exists! I wonder if one day something adjacent will become a standard part of major desktop OSs, like a dedicated "AI API" to allow models to connect to the OS, browse the windows and available actions, issue commands, etc. and remove the bitmap parsing altogether as this appears to do.
It's really more of a reflection on where we're at in the timeline of computing, with humans having been the major users of apps and web sites up until now. Obviously we've had screen scraping and terminal emulation access to legacy apps for a while, and this is a continuation of that.
There have been, and continue to be, computer-centric ways to communicate with applications though, such as Windows COM/OLE, WinRT and Linux D-Bus, etc. Still, emulating human interaction does provide a fairly universal capability.
It's very much in the "worse is better" camp.
If the goal is to emulate human behavior, I'd say there is a case to be made to build for the same interface, and not rely on separate APIs that may or may not reflect the same information as a user sees.
Apps are built for people rather than computers.
It's quite sad that application interoperability requires parsing text passed via pipes instead of exchanging structured information.
Like others said, worse is better.
The people have chosen apps over protocols.
Worse is better.
You can blame normies for this. They love their ridiculous point and click (and tap) interfaces.
Fortunately, with function-calling (and recently, with guaranteed data structure), we've had access to application interoperability with LLMs for a while now.
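As a sketch of what that looks like in practice - the dict below follows Anthropic's tool-use (function-calling) schema, but the `get_invoice` tool itself is a made-up example, not a real API:

```python
# A hypothetical tool definition in Anthropic's tool-use schema.
# Instead of free text, the model emits structured arguments that
# must validate against input_schema (a JSON Schema object).
get_invoice_tool = {
    "name": "get_invoice",
    "description": "Fetch an invoice record by its ID.",
    "input_schema": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Internal invoice ID",
            },
        },
        "required": ["invoice_id"],
    },
}
```

The point being: where an API exists, structured interop with LLMs is already a solved problem; "computer use" is for all the software where it doesn't.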
Don't get mad at a company for developing for the masses - that's what they are expected to do.
But they built for us, first.
Comment was deleted :(
[dead]
Both new Sonnet and gpt-4o still fail at a simple:
"How many w's are in strawberry?"
gpt-4o: There are 2 "w's" in "strawberry."
Claude 3.5 Sonnet (new): Let me count the w's in "strawberry": 0 w's.
(same question with 'r' succeeds)
What is artificial about current gen of "artificial intelligence" is the way training (predict next token) and benchmarking (overfitting) is done. Perhaps a fresh approach is needed to achieve a true next step.
It's bad at directly working on classical computer problems like math and data processing. But you can do it indirectly by having it write a program that produces the correct result. Interestingly, I didn't even have to have it run the program, although usually you would:
write a tool which counts the number of w's in "strawberry" and return the result
Which produced: Here's a simple Python function that counts the number of 'w's in the word "strawberry" and returns the result:
```python
def count_w_in_strawberry():
    word = "strawberry"
    count = word.count('w')
    return count

# Call the function and print the result
result = count_w_in_strawberry()
print(f"The number of 'w's in 'strawberry' is: {result}")
```
This tool does the following:
1. We define a function called `count_w_in_strawberry()`.
2. Inside the function, we assign the word "strawberry" to a variable called `word`.
3. We use the `count()` method on the `word` string to count the occurrences of 'w'.
4. The function returns the count.
5. Outside the function, we call `count_w_in_strawberry()` and store the result in the `result` variable.
6. Finally, we print the result.
When you run this code, it will output:
```
The number of 'w's in 'strawberry' is: 1
```
This tool correctly identifies that there is one 'w' in the word "strawberry".
I always thought the halting problem was an academic exercise, but here we see a potential practical use case. Actually this seems pretty dangerous letting the LLM write and automatically execute code. How good is the sandbox? Can I trick the LLM into writing a reverse shell and opening it up for me?
There's always that one tokenization error comment
Can we stop with these useless strawberry examples?
They are trained on tokens not characters.
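A toy illustration of why that matters - the subword split below is made up for illustration; real tokenizers produce different pieces:

```python
# Hypothetical subword tokenization of "strawberry" (illustrative only;
# actual BPE tokenizers split differently and emit integer token IDs).
tokens = ["str", "aw", "berry"]

# The model sees token IDs, so the letters inside a token are not
# directly observable to it - whereas a program sees the characters.
word = "".join(tokens)
print(word.count("w"))  # 1
```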