Ask HN: Claude 3.5 Sonnet vs. o1 vs. <other> for coding. Let's talk!

5 points by notnotrishi 14 hours ago

Both o1 (mini/preview) and Claude 3.5 Sonnet seem to be popular among devs, but opinions are divided and all over the place. In my experience, both have their strengths and weaknesses, and I find myself switching between them.

If you’ve used either — or ideally both — I'd love to hear your insights. I feel answers to the following questions will provide some useful context when you respond:

- What are the strengths & weaknesses of each from your experience?

- Any tips/tricks or prompting techniques you use to get the most from these models?

- How do you typically use them? (via native apps like ChatGPT and Claude, or via Cursor, GitHub Copilot, etc.?)

- What programming language(s) do you primarily use them with?

Hopefully this thread provides a useful summary and some additional tips for readers.

(I’ll start with mine in the comments)

dauertewigkeit 10 hours ago

I prefer o1. I mostly use it as a knowledge system. I don't really care for the automatic code generation nonsense — unless I'm really tired and the task is very simple, in which case I might decide to write a paragraph of text instead of 30 lines of Python. My experience is that when ChatGPT fails, Claude fails too. On some advanced coding tasks, I find ChatGPT's depth of reasoning to be better.

notnotrishi 14 hours ago

My notes:

- Sonnet 3.5 seems good with code generation and o1-preview seems good with debugging

- Sonnet 3.5 struggles with long contexts, whereas o1-preview seems good at identifying interdependencies between files in a code repo when answering complex questions

- Breaking the problem into small steps seems to yield better results with Sonnet

- I’m primarily using them in Cursor/GH Copilot, with Python

  • jaggs 12 hours ago

    I concur. Sonnet is great at starting projects, but eventually gets 'bogged down' and starts losing the plot. o1 is then useful for sorting out the issues and painfully pulling things back on track.