The proposed approach has several drawbacks:
* It's not reliable; the project's own README mentions false positives.
* It adds a source of confusion: an AI agent tells the user that the CLI tool said X, but running the same command manually gives a different result.
* The user can't manually access the functionality even if they want to.
* The online tutorials the LLM was trained on don't match the results the LLM gets when it runs the tool.

Much better to just have an explicit option that enables the new behaviors and teach the AI to use it where appropriate (see the sketch below).
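For illustration, here's a minimal sketch of what that explicit opt-in could look like. The tool name, flag, and environment variable are all hypothetical, not anything from the project in question:

```python
import argparse
import os

def main() -> None:
    parser = argparse.ArgumentParser(prog="mytool")
    # Hypothetical flag; an env-var fallback lets an agent opt in once
    # per session instead of editing every invocation. A user can pass
    # the same flag by hand and get the exact same behavior.
    parser.add_argument(
        "--agent-mode",
        action="store_true",
        default=os.environ.get("MYTOOL_AGENT_MODE") == "1",
        help="emit agent-oriented output (or set MYTOOL_AGENT_MODE=1)",
    )
    args = parser.parse_args()

    if args.agent_mode:
        print("structured, agent-friendly output")
    else:
        print("human-friendly output")

if __name__ == "__main__":
    main()
```

Because the switch is explicit, the output is deterministic for a given command line: no heuristic detection, no false positives, and anyone can reproduce exactly what the agent saw.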