What to do about passwords and LLMs? #2432
-
Thank you for sharing this question, and sorry it took a bit to answer! From what I know off-hand, passwords are usually saved in your device's keychain rather than stored by goose itself, and goose asks you for permission to access it. Let me tag @blackgirlbytes @angiejones @DOsinga @michaelneale to chime in c:
-
"Is there a mechanism to do things like, say, navigate to a certain URL and ensure that URL isn't sent back to the LLM off device?" - do you mean by hand or via a tool call result? (as if the latter, then LLM has already seen it at that point)> Yeah this is an area of concern - at some point it depends how much you trust any back end, in this can an LLM - do they have zero data retention/encryption at rest etc etc... Or, there are some tasks where you want to route things to local (machine/network) models only? |
-
"With Goose having access to your codebase, and even though there is a concept of Goose ignore files for things like env variables and sensitive keys" yeah does make me think - in CI tools like actions, Jenkins and such, they can often know about secrets, track and redact and mask them. I wonder if it would be possible if you knew (client side), what secrets you had specifically, and then filtering what goes to the LLM replaces any incidents of that with "****" or some label indicating it is a masked secret, and then any tool calls or things that reference it are addressed client side. That would be a neat feature if it helped (masking is probably simple, won't be fool proof, but in theory if you had a list of masked things, then at the lowest level in provider perhaps base.rs code, it could mask those on the way out, and replace on the way in with the actual values?) not fool proof, but a familiar pattern, something like that @StephenPAdams ? |
-
One of the major concerns companies have with these AI tools is proprietary or sensitive information being used to train models. We've seen leaky scenarios where autocompleted API keys turned out to belong to other companies, and while it is certainly best practice not to commit sensitive information into repositories, this highlights a problem that every company is going to have to deal with.
Goose has access to your codebase, and even though there is a concept of Goose ignore files for things like env variables and sensitive keys, what about circumstances where you want Goose to help automate QA testing, or create QA test scripts, where the automation has to enter a username and password into fields on screen via things like Selenium+MCP?
Is there a mechanism to do things like, say, navigate to a certain URL and ensure that URL isn't sent off device back to the LLM? And could Goose prompt you for your username and password, keep them only locally, and pass them to the browser's username/password fields without ever sending that information to the LLM?
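In the spirit of the masking suggestion in the earlier reply, here is a rough sketch of how the QA scenario could look client side, assuming a placeholder convention like {{QA_USERNAME}} (purely illustrative, not an existing goose feature): the model only ever sees placeholder tokens, and a local tool executor resolves them, from a prompt or the OS keychain, right before driving the browser.

```rust
use std::collections::HashMap;

/// Hypothetical local credential store for the Selenium/MCP QA scenario;
/// illustrative only, not an existing goose feature. The LLM only ever sees
/// placeholder tokens; the real values live on this machine (prompted for,
/// or pulled from the OS keychain) and are substituted just before the
/// browser call, so they are never part of what goes back to the model.
struct LocalCredentialStore {
    /// placeholder -> real value, kept on device
    values: HashMap<String, String>,
}

impl LocalCredentialStore {
    fn resolve(&self, arg: &str) -> String {
        self.values
            .iter()
            .fold(arg.to_string(), |acc, (placeholder, value)| {
                acc.replace(placeholder, value)
            })
    }
}

/// Stand-in for whatever the browser automation tool actually exposes.
fn fill_field(selector: &str, value: &str) {
    println!("filling {selector} locally (value not logged)");
    let _ = value; // the real value is only ever used here, on this machine
}

fn main() {
    let store = LocalCredentialStore {
        values: HashMap::from([
            ("{{QA_USERNAME}}".to_string(), "qa-user@example.com".to_string()),
            ("{{QA_PASSWORD}}".to_string(), "local-only-password".to_string()),
        ]),
    };

    // The tool call as proposed by the LLM contains placeholders only.
    let llm_args = [("#username", "{{QA_USERNAME}}"), ("#password", "{{QA_PASSWORD}}")];

    for (selector, placeholder) in llm_args {
        fill_field(selector, &store.resolve(placeholder));
    }

    // Anything reported back to the model keeps the placeholders.
    let result_for_llm = "Filled #username and #password using {{QA_USERNAME}} / {{QA_PASSWORD}}";
    println!("{result_for_llm}");
}
```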