Areena_28 1 days ago [-]
I know with Claude, hitting 5 messages every 5 hours mid-task is a real workflow problem.
Many times I'd hit Claude limits mid-task and switch to ChatGPT, thinking the no-limit thing would make up for it. But it's really annoying.
I ended up just being more deliberate about how I use Claude: longer, more complete prompts instead of back and forth, which naturally stretches the messages further. Not a perfect fix, but it changed the experience enough to stick with it.
2020science 17 hours ago [-]
My experience is that it all comes down to personal fit and feel. I switched from ChatGPT to Claude several months ago and much prefer it - although I do get frustrated at glitches and hitting limits. But I'm a writer and academic, and the LLM fits my purpose better. For what I do, ChatGPT does not feel great to use.
kasey_junk 2 days ago [-]
I think it really depends on how fully formed your AI workflows are. I have a very opinionated set of skills and agent files and a harness for running prompts against both for code production.
I do head-to-head comparisons with this setup pretty regularly, and what I've found is there is not much difference in outcomes between the two frontier labs at equivalent model settings. It's hard to get statistically significant results on my budget and eval ability, but my anecdotal feeling is that there is as much difference in outcomes in-group as out-of-group.
Given that setup, I use Codex much more than Claude because it's more reliable.
But I believe it’s easier to go from nothing to decent with Claude.
For other stuff I use Claude.
yo103jg 21 hours ago [-]
My current split: Claude for code,
Gemini for harder reasoning,
ChatGPT for more structured output.
ChatGPT is still useful, but mostly for tasks where formatting, organization, and response shape matter. If I'm judging mostly on raw capability, I'd probably rank Gemini above it.
01jonny01 2 days ago [-]
Claude is good for producing one-shot polished apps, but you will quickly burn through your allowance.
ChatGPT needs more prompting to get what you want, but it's nearly impossible to reach your limit.
skyberrys 18 hours ago [-]
I have the same experience with Claude, but it's been rare for me to hit limits. I tend to ask it for one or two improvements to the app, and then I have the time burden of using the app to make sure it's still what I intended. I have the $200 per year plan, which I think is the same as the $20 per month plan, just paid in bulk for a discount.
Honestly, I am not very satisfied with ChatGPT. It might be good for some things but lacks in others. I am not saying Claude is much better, but if I chose to pay for one of them, it would be Claude.
pcael 2 days ago [-]
Have you tried the Claude console client?
whatarethembits 2 days ago [-]
Do you mean Claude Code? If so, that's what I use(d) primarily for development, and Claude Desktop for general chats. My issue with Opus was that every time I started a new task in Plan mode, it'd use 50k - 100k tokens, and that'd be about 20% of the session limit. A bit of back and forth and it's done for most of the work day. Just not feasible at all. The tasks I wanted it to perform were fairly small and contained: "Look at these three files @@@ and add xxx to @file. DON'T read any other files. If you need more context, ask me." That worked sometimes but not always, and it still burned a lot of tokens.
pcael 2 days ago [-]
Yes, I meant the Claude Code client.
Indeed, Opus is a token eater; I usually use Sonnet because of that.
khaledh 2 days ago [-]
I use both at the same time:
- Claude Opus for general discussion, design, reviews, etc.
- Codex GPT-5.4 High for task breakdown and implementation.
I often feed their responses to each other (manual copy/paste) to validate/improve the design and/or implementation. The outcome has been better than using either one alone.
This workflow keeps Claude's usage in check (it doesn't eat as many tokens) and leverages Codex's generous usage limits. Although sometimes I run into Codex's weekly limit and need to purchase additional credits: 1000 credits for $40, which lasts another 4-5 days (this usually overlaps with my weekly refresh, so not all the credits are used up).