Show HN: Ctx – a /resume that works across Claude Code and Codex

ctx is a local SQLite-backed skill for Claude Code and Codex that stores context as a persistent workstream that can be continued across agent sessions. Each workstream can contain multiple sessions, notes, decisions, todos, and resume packs. It essentially functions as a /resume that can work across coding agents.

Here is a video of how it works: https://www.loom.com/share/5e558204885e4264a34d2cf6bd488117

I initially built ctx because I wanted to start a workstream in Claude and continue it from Codex. Since then, I've added a few quality-of-life improvements, including the ability to search across previous workstreams, manually delete parts of the context, and branch off existing workstreams. I've started using ctx instead of the native '/resume' in Claude/Codex because I often have a lot of sessions going at once, and with the lists those apps currently give you, it's not always obvious which one is the right one to pick back up. ctx gives me a much clearer way to organize and return to the sessions that actually matter.

It's simple to install: after you clone the repo, one line (./setup.sh) adds the skill to both Claude Code and Codex. After that, you can use ctx directly in your agent as a skill, with '/ctx [command]' in Claude and 'ctx [command]' in Codex.

A few things it does:

- Resume an existing workstream from either tool

- Pull existing context into a new workstream

- Keep stable transcript binding, so once a workstream is linked to a Claude or Codex conversation, it keeps following that exact session instead of drifting to whichever transcript file is newest

- Search for relevant workstreams

- Branch from existing context to explore different tasks in parallel

It’s intentionally local-first: SQLite, no API keys, and no hosted backend. I built it mainly for myself, but thought it would be cool to share with the HN community.
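To make the "workstreams containing sessions, notes, decisions, and todos" shape concrete, here is a minimal sketch of what a local SQLite store like this could look like. The table and column names are my own guesses for illustration, not ctx's actual schema:

```python
import sqlite3

# Hypothetical schema sketch; names are illustrative, not ctx's real tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE workstreams (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    parent_id INTEGER REFERENCES workstreams(id)  -- set when branching
);
CREATE TABLE sessions (
    id INTEGER PRIMARY KEY,
    workstream_id INTEGER NOT NULL REFERENCES workstreams(id),
    agent TEXT NOT NULL,           -- 'claude' or 'codex'
    transcript_path TEXT NOT NULL  -- stable binding to one raw session log
);
CREATE TABLE notes (
    id INTEGER PRIMARY KEY,
    workstream_id INTEGER NOT NULL REFERENCES workstreams(id),
    kind TEXT NOT NULL,            -- 'note' | 'decision' | 'todo'
    body TEXT NOT NULL
);
""")

conn.execute("INSERT INTO workstreams (name) VALUES ('auth-refactor')")
conn.execute(
    "INSERT INTO sessions (workstream_id, agent, transcript_path) "
    "VALUES (1, 'claude', '/tmp/session.jsonl')"
)
rows = conn.execute(
    "SELECT w.name, s.agent FROM workstreams w "
    "JOIN sessions s ON s.workstream_id = w.id"
).fetchall()
print(rows)  # [('auth-refactor', 'claude')]
```

Pinning each session row to one transcript path is what would give the "stable transcript binding" behavior described above: resuming looks up the recorded path instead of scanning for the newest log file.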

github.com | 72 points | by dchu17 | 3 days ago | 30 comments

realdimas 2 days ago

Claude Code used to have a warning that toggling thinking within a conversation would decrease performance:

> Changing thinking mode mid-conversation will increase latency and may reduce quality. For best results, set this at the start of a session.

Neither OpenAI nor Anthropic exposes raw thinking tokens anymore.

Claude Code redacts thinking by default (you can opt in to get Haiku-produced summaries at best), and OpenAI returns encrypted reasoning items.

Either way, first-party CLIs hold opaque thinking blobs that can't be manipulated or ported between providers without dropping them. So cross-agent resume carries an inherent performance penalty: you keep the (visible) transcript but lose the reasoning.

LeoPanthera 2 days ago

I don't think I've ever /resumed a Claude Code session even once. What do people use that for? The way I use it is to make a change, maybe document the change, and then I'm done. New session.

  • meowface 2 days ago

    I have like 15 concurrent sessions I leave up for weeks, 50% Codex 50% Claude Code, even though I know they work better with fresh context. Then again I also always have at least 200 browser tabs up. I probably just have a mental illness.

    • theowaway213456 a day ago

      lol after reading your first sentence I literally thought to myself "this sounds like the type of person who never closes their browser tabs"

  • shmoogy a day ago

    Most of the time it's when I want to go back and have a skill made for future reuse. But with remote control I've had some sessions open for remote diagnostics, and it just works better than starting from scratch - even having it distill lessons learned into memories and update Claude.md.

    I know it's wasteful but often I've got a surplus of tokens and not enough of my time - so it's a trade off I've been fine with.

  • daemonologist 2 days ago

    I'd use it if I hit the 5 hour quota mid-change and then came back later in the day in a new terminal (depending on the input/output ratio of my now un-cached context, of course).

  • dgunay 2 days ago

    I spin up a lot of agents and don't always get back to them same day, so it helps a lot if my laptop restarts to install updates automatically.

giancarlostoro 2 days ago

Tooling like this is why I really want to build my own harness to replace Claude Code. I've been building a few different custom tools that would be nice as part of one single harness, so I don't have to tweak configurations across all my different environments, projects, and even OSes. It gets tiresome, and Claude even has separate "memories" on different devices, making the experience even more inconsistent.

  • StanAngeloff 2 days ago

    I've actually had the same itch and decided to give it a go ... So far I'm one year into the project, have learned a ton, and highly recommend it to anyone who'll listen: try writing your own harness. It can be fun, it can be intoxicating, it can also be boring and mundane. But you'll learn so much along the way, even if you thought you were already well versed.

  • nextaccountic 2 days ago

    The problem with this is that you won't get to enjoy the heavy subsidies of Claude subscriptions

    But yeah, after the price hikes, it's inevitable that people will run open source harnesses

ghm2180 2 days ago

Interesting. What kind of context usage does it have when switching between the two providers? Like is it smart about using the # tokens when you go from claude -> codex or vice versa for a conversation?

How does ctx "normalize" things across providers in the context window ( e.g. tool/mcp calls, sub-agent results)?

buremba 2 days ago

Since prompt caching won't work across different models, how is this approach better than dropping a PR for the other harnesses to review?

  • dchu17 2 days ago

    Sorry, I may be misunderstanding the question.

    The way this works is that it stores workstreams and session state in a local SQLite DB, and links each ctx session to the exact local Claude Code and/or Codex raw session log it came from (also stored locally).

    What do you mean by prompt caching?

    • Wowfunhappy 2 days ago

      Prompt caching is done on the provider side. If you send two requests to a provider in short succession and the beginning of your second request is the same as your first (for example, because your second request is the continuation of an ongoing chat), the repeated tokens are much less expensive the second time.

      Obviously, your tool does not provide this. But I think GP is undervaluing the UX advantages of having your conversation history.
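The prompt-caching economics behind this comment can be sketched with some toy arithmetic. The base rate and the cached-read discount below are assumptions for illustration, not any provider's actual pricing:

```python
# Toy cost model: cached prefix tokens are billed at a fraction of the base
# input rate. The 10% cached rate and $3/M base rate are assumptions.
def input_cost(total_tokens, cached_tokens, base_per_tok=3e-6, cached_rate=0.10):
    uncached = total_tokens - cached_tokens
    return uncached * base_per_tok + cached_tokens * base_per_tok * cached_rate

# Resuming a 100k-token conversation on the same provider: prefix is cached.
same_provider = input_cost(100_000, cached_tokens=100_000)

# Porting it to a different provider: no cache hit, everything reprocessed.
cross_provider = input_cost(100_000, cached_tokens=0)

print(round(cross_provider / same_provider, 1))
```

Under these assumed rates the cross-provider resume costs 10x the cached one, which is the tradeoff buremba and ycombinatornews are pointing at.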

      • buremba 2 days ago

        Yes, that's it. I actually just ask Codex/Claude Code to look up the session ID when I want to resume sessions cross-harness; it's just JSONL files locally, so it can access the full conversation history when needed.
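Since these session logs are plain JSONL, pulling the conversation back out is a few lines of code. This is a minimal sketch; the flat {"role": ..., "content": ...} record shape is an assumption, and real Claude Code / Codex logs nest their fields differently:

```python
import json
import pathlib
import tempfile

# Sketch: extract user/assistant turns from a JSONL session log.
# Assumes each line is {"role": ..., "content": ...}, which is a
# simplification of the real log formats.
def read_turns(path):
    turns = []
    for line in pathlib.Path(path).read_text().splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("role") in ("user", "assistant"):
            turns.append((rec["role"], rec["content"]))
    return turns

# Demo against a throwaway log file
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"role": "user", "content": "fix the login bug"}\n')
    f.write('{"role": "assistant", "content": "done, see auth.py"}\n')
    log_path = f.name

print(read_turns(log_path))
```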

  • ycombinatornews 2 days ago

    Great callout about the prompt caching; this switch is going to burn subscription limits on Claude real fast.

    Unless the goal is to move from one provider to another and preserve all context 1:1. And I can’t seem to find a decent reason why you would want everything and not the TLDR + resulting work.

t0mas88 2 days ago

Have you considered making it possible to share a stream/context? As an export/import function.

  • rkuska 2 days ago

    I wrote a tool for myself to copy (and archive) the claude/codex conversations github.com/rkuska/carn

  • dchu17 2 days ago

    That's interesting, I hadn't considered it at this point, but it sounds potentially useful.

phoenixranger 2 days ago

really interesting idea! will check it out. and thanks for making it local-first!

ramon156 a day ago

Can we also get a /last? 9/10 times I want to resume my last session. I know it's only one extra tap, but still.