This looks awesome.
What I would love to see, either as something leveraging this or built into this: when you prompt Stagehand to extract data from a page, it also returns the XPath selectors you'd use to re-scrape the page without having to use an LLM for that second scraping.
So basically, you scrape never-before-seen pages with the non-deterministic LLM tool, and then when you need to re-scrape a page later, to update content for example, you use the cheaper old-school scraping method.
Not sure how brittle this would be, both in going from the LLM version to the XPath version reliably, and in how to fall back to the LLM version if your XPath script fails, but conceptually, being able to scrape with the smart tools while building up a library of dumb scraping scripts over time would be killer.
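Roughly this shape, as a sketch; `extractWithSelectors` is a made-up helper standing in for "LLM extraction that also reports its XPaths", not part of Stagehand's actual API, and the fallback logic is just illustrative:

```ts
import { type Page } from "playwright";

type FieldMap = Record<string, string>; // field name -> XPath saved from a previous LLM run

// Hypothetical helper: LLM extraction that also reports the XPath it used per field.
declare function extractWithSelectors(
  page: Page,
  instruction: string
): Promise<{ data: Record<string, string>; selectors: FieldMap }>;

async function scrape(page: Page, instruction: string, cached?: FieldMap) {
  if (cached) {
    // Cheap path: replay the saved XPaths, no LLM involved.
    const data: Record<string, string | null> = {};
    for (const [field, xpath] of Object.entries(cached)) {
      const el = page.locator(`xpath=${xpath}`).first();
      data[field] = (await el.count()) ? await el.textContent() : null;
    }
    // If every field resolved, the "dumb" script still works; otherwise fall through to the LLM.
    if (Object.values(data).every((v) => v !== null)) return { data, selectors: cached };
  }
  // Expensive path: LLM extraction that also returns the selectors to cache for next time.
  return extractWithSelectors(page, instruction);
}
```

The fallback check is naive (any missing field re-triggers the LLM), but that's kind of the point: the dumb script handles the steady state and the LLM handles drift.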
I've been on a similar thread with my own crawler project -- conceptually at least, since I'm intentionally building as much of it by hand as possible... Anyway, after a lot of browser automation, I've realized that it's more flexible and easier to maintain to just use a DOM polyfill server-side and have the client fetch raw HTML responses wherever possible. (And, in conversations about similar LLM-focused tools, I've realized that if you generate parsing functions you can reuse, you don't necessarily need an LLM to process your results.)
I'm still trying to figure out the boundaries of where and how I want to scale that out into other stuff -- things like when to use `page` methods directly, vs passing a function into `page.evaluate`, vs other alternatives like a browser extension or a CLI tool. And I still need to work around smaller issues with the polyfill and its spec coverage (which leaves me using things like `getAttribute` more than I otherwise would). But in the meantime it has simplified a lot of ancillary issues, like handling failures on my existing workflows and scaling out to new targets, while I work on other bot detection issues.
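For concreteness, the raw-HTML-plus-polyfill path looks roughly like this; linkedom is just one possible polyfill, and the URL and selectors are invented:

```ts
import { parseHTML } from "linkedom";

// Reusable parsing function: once it exists (hand-written or generated),
// re-scraping this kind of page needs neither an LLM nor a real browser.
async function scrapeListings(url: string) {
  const res = await fetch(url); // plain HTTP, raw HTML response
  const { document } = parseHTML(await res.text()); // server-side DOM polyfill

  return [...document.querySelectorAll("article.listing")].map((el) => ({
    title: el.querySelector("h2")?.textContent?.trim() ?? null,
    // getAttribute instead of the .href property, to sidestep polyfill spec gaps
    link: el.querySelector("a")?.getAttribute("href") ?? null,
  }));
}
```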
Yeah, I think someone opened a similar issue on GitHub: https://github.com/browserbase/stagehand/issues/389
Repeatability of `extract()` is definitely super interesting and something we're looking into.
Cache the response for a given query + page-hash pair, maybe? Then the LLM is only consulted when the page content hash changes, and the previous answer is reused otherwise.
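Something like this, as a minimal in-memory sketch (`callLLM` is a placeholder for whatever extraction call you'd actually make):

```ts
import { createHash } from "node:crypto";

const cache = new Map<string, unknown>();

// Placeholder for the real LLM-backed extraction call.
declare function callLLM(instruction: string, html: string): Promise<unknown>;

async function extractCached(instruction: string, html: string) {
  // Key on the (query, page content) pair, so changing either busts the cache.
  const key = createHash("sha256").update(instruction).update("\0").update(html).digest("hex");
  if (cache.has(key)) return cache.get(key); // page unchanged: reuse the previous answer
  const result = await callLLM(instruction, html); // page or query changed: consult the LLM
  cache.set(key, result);
  return result;
}
```

One caveat: hashing the raw HTML means any trivial change (timestamps, CSRF tokens, ad markup) invalidates the cache, so you'd probably want to hash a normalized version of the content instead.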
Agree. The worst part of integration tests is how brittle they often are. I don't want to introduce yet another thing that could produce false test failures.
But of course, the way it works now could also help reduce that brittleness. An XPath or selector quickly breaks when the design changes or things are moved around; this approach might survive those changes.
So tradeoffs, I guess.
there’s also llm-scraper: https://github.com/mishushakov/llm-scraper
disclaimer: i am the author