jakub_g 8 hours ago [-]
My favorites are the systems where you can only issue one token, so you can't do a zero-downtime rotation by creating a new one, making it active in your system, and only then removing the old one.
In some cases this makes rotation a big event to be avoided because costs are higher than gains.
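The overlap pattern the parent describes (n=2 active tokens) can be sketched in a few lines; the `TokenStore` here is a hypothetical stand-in for whatever service issues the tokens, not any real API:

```python
import secrets

class TokenStore:
    """Hypothetical service that allows two active tokens at once (n=2)."""
    def __init__(self):
        self.active = {secrets.token_urlsafe(32)}

    def authenticate(self, token):
        return token in self.active

    def issue(self):
        token = secrets.token_urlsafe(32)
        self.active.add(token)
        return token

    def revoke(self, token):
        self.active.discard(token)

def rotate(store, old_token):
    # 1. Create the new token while the old one still works.
    new_token = store.issue()
    # 2. Deploy new_token to consumers here (config push, secret manager, ...).
    assert store.authenticate(new_token) and store.authenticate(old_token)
    # 3. Only after consumers have switched over, revoke the old token.
    return new_token
```

With n=1 the `issue` step would have to revoke first, so every consumer breaks until the new token is deployed; that window is the downtime being complained about.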
acdha 4 hours ago [-]
I am still surprised that Keycloak makes this so hard. They finally added support for n=2 but it’s still walled off behind a “this is experimental, use at your own risk” warning, and it’s something that literally every OIDC client needs to do if you have any kind of compliance requirements.
nightpool 16 hours ago [-]
Okay but now how do you recommend I hook up my Sentry instance to create tickets in Jira, now that Jira has deprecated long-lived keys and I have to refresh my token every 6 weeks or whatever. It needs long-lived access. Whether that comes in the form of an OAuth refresh token or a key is not particularly interesting or important, IMO.
420official 7 hours ago [-]
OIDC with JWTs doesn't need any long-lived tokens. For example, I can safely grant GitLab the ability to push a container to ECR using just a short-lived token that GitLab itself issues. So the answer might be to ask your Sentry/Jira support rep to fast-track supporting OIDC JWTs.
You do what you can. Eliminating long-lived keys isn't always possible; you set up rotation instead.
nightpool 8 hours ago [-]
I disagree, I think increasing manual toil (having to log into Sentry every 6 months to put in a new Jira token) increases fatigue substantially for, in this case, next-to-no security benefit (Sentry never actually has any less access to Jira than it does in the long-lived token case, and any attacker who happens to compromise them is going to be gone well before six months is up anyway).
Instead, the right approach in this case is to worry less about the length of the token and more about making sure the token is properly scoped. If Sentry is only used for creating issues, then it should have write-only access, maybe with optional limited access to the tickets it creates to fetch status updates. That would make it significantly less valuable to attackers, without increasing manual toil at all, but I don't know any SaaS provider (except fly, of course) that supports such fine-grained tokens as this. Moving from a 10 year token to a 6 month token doesn't really move the needle for most services.
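The scoping argument above reduces to a plain allow-list check; the scope names below are hypothetical, not any real Sentry or Jira API:

```python
# Hypothetical scope model: the token carries an explicit allow-list of
# actions, so a leaked token can create issues but cannot read the backlog.
SENTRY_TOKEN_SCOPES = frozenset({"issues:create", "issues:read-own-status"})

def authorize(token_scopes, action):
    """Fail closed: anything not explicitly granted is denied."""
    return action in token_scopes

assert authorize(SENTRY_TOKEN_SCOPES, "issues:create")
assert not authorize(SENTRY_TOKEN_SCOPES, "issues:read-all")
```

The point is that the allow-list, not the expiry date, bounds what an attacker gets from a leak.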
akerl_ 8 hours ago [-]
This sounds more like a reason to automate token management than an argument for long lived tokens.
ferngodfather 8 hours ago [-]
But then you just move the security issue elsewhere with more to secure. Now we have to think about securing the automation system, too.
This is the same argument I routinely have with client id/secret and username/password for SMTP. We're not really solving any major problem here, we're just pretending it's more secure because we're calling it a secret instead of a password.
orf 2 hours ago [-]
It’s like 12 lines of terraform to fully automate this, inside your existing IaC infrastructure. It’s not complex.
Flimm 6 hours ago [-]
Secrets tend to be randomly-generated tokens, chosen by the server, whereas passwords tend to be chosen by humans, easier to guess, and reused across different services and vendors.
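The entropy gap is easy to put numbers on; a rough sketch, where the password model is an optimistic upper bound (real human-chosen passwords are far worse than uniform):

```python
import math
import secrets

# A server-generated secret: 32 random bytes, 256 bits of entropy.
token = secrets.token_urlsafe(32)
token_bits = 32 * 8

# A human-chosen password: say 10 characters from [a-z0-9], chosen uniformly.
# This is already generous; dictionary-based passwords have far less entropy.
password_bits = math.log2(36 ** 10)   # ~51.7 bits
```

Roughly 200 bits of difference, before even counting reuse across services.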
collabs 4 hours ago [-]
How does this apply to ssh public keys?
akerl_ 2 hours ago [-]
> Long-lived production SSH keys may be copied around, hardcoded into configuration files, and potentially forgotten about until there is an incident. If you replace long-lived SSH keys with a pattern like EC2 instance connect, SSH keys become temporary credentials that require a recent authentication and authorization check.
datadrivenangel 10 hours ago [-]
Does having to refresh the key every 6 weeks instead of every year or whatever actually make a meaningful difference security-wise?
plorkyeran 9 hours ago [-]
At the minimum you’ll remember how to do it if you have to do it every six weeks.
1024kb 7 hours ago [-]
If the key becomes compromised, rotating the key sooner means you potentially limit the damage from unauthorised access.
tptacek 10 hours ago [-]
Yes? That's a huge difference.
nitwit005 15 hours ago [-]
> If you assume that someone is constantly trying to guess a key or password, the likelihood that they guess correctly grows over time.
If they can brute force the password or key, the rotation will, at best, force them to do it multiple times. You'll see more improvement from just adding another couple of characters to the length.
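Rough numbers behind that point, under an assumed online guessing rate (10,000 guesses/second is already generous for an attacker hitting a network service):

```python
# Rotation vs. a longer key, against sustained online guessing.
alphabet = 62                    # [A-Za-z0-9]
rate = 10_000                    # assumed guesses per second
six_weeks = 6 * 7 * 24 * 3600    # seconds in one rotation period

guesses_per_period = rate * six_weeks   # attacker progress before a rotation resets it
keyspace_20 = alphabet ** 20            # 20-char key
keyspace_22 = alphabet ** 22            # 22-char key

# Adding two characters multiplies the search space by 62**2 == 3844;
# rotation only forces the attacker to restart an already-hopeless search.
ratio = keyspace_22 // keyspace_20
```

Either way the success probability per period is negligible, which is the parent's point: against brute force specifically, length beats rotation.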
cassianoleal 15 hours ago [-]
Fair enough, but that doesn't protect you in case of a leak. If you're going to solve for the leak anyway, is it worth it to solve for brute force in isolation? You can always add another couple of characters. At which point do you stop?
peterldowns 15 hours ago [-]
Agreed! Been working on infra for an early-stage company recently and it's been awesome using OIDC and IRSA (or WIF if you're on google) for as many things as possible. Basically, there are no permanent keys for anything.
Slightly annoying to have to wrap some CLIs in scripts that generate the short-lived token, but it feels really magical to have services securely calling each other without any explicit keys or passwords to store in our vault.
Lots of cool benefits --- for instance, we ran the compromised Trivy GitHub Action a few weeks ago, but our GitHub Actions had 0 keys for it to leak! Also really great that I don't have to worry about rotating shared credentials on short notice if an engineer on my team decides to leave the company.
wewtyflakes 9 hours ago [-]
Even still, I prefer the simplicity of API keys. The mental overhead is low, and the effort of explaining the concept to customers is zero. Rotating keys is not great, but the tedium of it is preferred over the labyrinthine shenanigans of setting up whatever the hot new security style of the week is.
jp0001 44 minutes ago [-]
I'm still having problems trusting my compiler.
nazcan 9 hours ago [-]
I find it interesting how this all comes down to what you trust. Like... why not <1-minute keys? Or 1-request keys?
MartinodF 8 hours ago [-]
We do a lot of request signing (think AWS sigv4a) which practically speaking amounts to 1-request.
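A greatly simplified sketch of the idea (nothing like the real SigV4 canonicalization rules): each request carries a fresh HMAC over its own contents plus a timestamp, so there is no reusable bearer token to steal, and a captured signature is useless after a short skew window.

```python
import hashlib
import hmac
import time

def sign(secret, method, path, body, timestamp):
    """Sign one request: the signature binds method, path, body hash, and time."""
    msg = f"{method}\n{path}\n{hashlib.sha256(body).hexdigest()}\n{timestamp}"
    return hmac.new(secret, msg.encode(), hashlib.sha256).hexdigest()

def verify(secret, method, path, body, timestamp, signature, max_skew=300):
    if abs(time.time() - timestamp) > max_skew:   # stale request: reject
        return False
    expected = sign(secret, method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)   # constant-time compare
```

Tampering with the body, the path, or the timestamp invalidates the signature, which is what makes this effectively a one-request credential.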
gleenn 15 hours ago [-]
After the Vercel hosting compromise and having to rotate a ton of keys recently, we are definitely implementing automated rotation of short lived keys. That was super painful.
XCSme 6 hours ago [-]
But how do you do that without also having a long-lived key or access token to those services?
noAnswer 3 hours ago [-]
The long-lived credentials live inside a stripped-down machine. Cron/lego/Ansible handles the renewal. The machines on the edge can't renew their keys themselves.
XCSme 3 hours ago [-]
Oh, this makes sense, so instead of "the app is rotating its keys" is more like "the keys in our app are being rotated by an external service".
bzmrgonz 15 hours ago [-]
What about dynamic credentials? Why can't we deploy an HSM (hardware security module)? They are so much more affordable now. We then deploy FIDO2 keys, keep our long-lived keys in there, and have the HSM serve as a dynamic credentials server.
collabs 11 hours ago [-]
Something I don't understand is the absolute phobia of service accounts. There are things that need to happen regardless of who is doing it. Emails need to get sent every day with reports, for example.
Forcing these workflows into the nonsense security theater of "we can't have service accounts" is stupid and unproductive. So every time we fire or lay off the person whose name is on the automation, we need to rotate the keys? What is the benefit here?
If you are screaming "managed identity" here, I have a bridge to sell you because clearly even Microsoft has not been able to figure out or implement managed identities for internal workloads... Well not as of 2022, at least.
theamk 10 hours ago [-]
Service accounts are great! I just wish that instead of a password which gets shared around via 1Password, there were a clear permission list ("this is a service account; 'real' users X, Y, Z can log in as it").
Seems like it's just Microsoft that cannot figure it out. AWS has had roles forever, fully supported from the web console or CLI. But when I request an Azure service account, I am handed a username and password.
anon7000 11 hours ago [-]
Totally, but my service accounts own the api keys. But keys are still annoying to rotate. You know what’s not annoying to rotate? Short-lived tokens with very limited scope that get assigned more on demand
cyberax 15 hours ago [-]
On the contrary. We want long-lived keys. As long as they are not symmetric!
My private SSH key is rooted in hardware and can't even be extracted. This is awesome, I don't have to worry about it getting compromised.
The same should apply to all other keys, including the dreaded "bearer tokens".
Dragging-Syrup 13 hours ago [-]
I’m sorry to be pedantic, that’s not exactly true. I agree in the sense that extracting hw based keys is next to impossible, but if your machine is compromised, there isn’t much stopping malware from using your hw based key (assuming 1. Left plugged in, 2. Unlocked with either ssh-agent or gpg-agent, and 3. You don’t have touch to auth turned on). Reduced risk? Absolutely. No risk? Absolutely not.
traceroute66 4 hours ago [-]
> there isn’t much stopping malware from using your hw based key
Except the three pretty major things that do stop malware that you mentioned ;)
Perhaps especially "3. You don’t have touch to auth turned on".
bloppe 13 hours ago [-]
Never apologize for pedantry here
cyberax 13 hours ago [-]
Sure. They can use my key while my machine is compromised, but even then I won't _need_ to rotate it after the compromise is cleared.
It still would be a good idea just to make sure that it's easier to analyze logs, but it's not strictly needed.
hsbauauvhabzb 13 hours ago [-]
And if you want to be even more pedantic, shell access with a touch based key just means the attacker has to wait for you to auth, which makes touch based systems largely a waste of effort on the defenders part.
traceroute66 3 hours ago [-]
> shell access with a touch based key just means the attacker has to wait for you to auth
And if you want to be EVEN more pedantic, on most touch-based keys, you have to touch within 10–15 seconds otherwise it times out.
So it is not a waste of effort at all. First the need to touch at all eliminates a large chunk of attacks. Second the need to touch within 10–15 seconds eliminates a whole bunch more.
There would have to be some heavy-duty alignment of ducks going on to get past a touch requirement.
Even more if the target has touch AND PIN enabled.
entrope 3 hours ago [-]
The touch based key I use only responds once per touch. If someone compromises the machine it's plugged into, the action I expected to complete won't complete. This means the compromise becomes immediately visible.
kkl 13 hours ago [-]
Part of the threat model for an Engineering team is that people come and go. They move teams which have different levels of access. They leave the organization, in most cases, on good terms. I want to set up infrastructure where I don't need to remember that your SSH pubkey is baked into production configuration after you leave the company.
There are several options for setting up per-connection keys that are dispensed to users through the company SSO. That setup means you don't need to maintain separate infrastructure for (de-)provisioning SSH keys.
cyberax 11 hours ago [-]
This is completely solved by SSH certificates. You still have the same private key in the hardware, but instead of using the public key directly, you issue temporary (~1 hour) SSH key certificates. I even automated it using an SSH proxy.
The target machines then just need to put the CA cert in the authorized_keys files.
lelanthran 6 hours ago [-]
> The target machines then just need to put the CA cert in the authorized_keys files.
The word "just" is doing a lot of work there. You update authorized_keys every hour for your entire fleet?
winstonwinston 4 hours ago [-]
No, the ssh CA model works like this: servers trust one CA, and the CA signs user keys.
No more distributing individual public keys to every machine.
It is the user's machine that needs a new certificate signed by the CA once the short-lived one expires.
lelanthran 2 hours ago [-]
Understood. Not a bad idea.
pyvpx 6 hours ago [-]
Sounds like a job for dnssec and sshfp records
Ahh, now you have three problems…hrm
pfg_ 16 hours ago [-]
The fixed position background made it look like I had dust on my phone screen
serious_angel 16 hours ago [-]
It didn't for me, and I got the starry space feel, but I noticed the repeating patterns.
Perhaps some movement is needed? I do recall some relatively similar cases saved, if interested:
1. Moving forward in space (JavaScript/JS): https://codepen.io/the_artwork/pen/zYEdxyo
2. Rotating in space (JS): https://codepen.io/the_artwork/pen/NWMRYJP
3. Rotating in space (CSS+JS): https://codepen.io/the_artwork/pen/PoeNyyy
sandeepkd 14 hours ago [-]
I think any take on key lifetime is premature without taking into consideration:
1. How the key is used
2. What's the threat vector
3. Cost of key rotation
4. Cost of key verification
At the end of the day it's a trade-off; the business use case, your expertise, and the risk have to be evaluated together.
themafia 10 hours ago [-]
Where possible I prefer to implement signed policy objects. Then I can constrain access based on source IP and other request parameters. You can also easily add an expiration date if you feel a particular application requires it, but some simple constraints may be useful enough on their own that you can skip expiry in the majority of server-to-server applications.
This not only provides security but provides some resistance to bugs in your code which either call services incorrectly or call the incorrect methods of a service. I've avoided accidental data deletions and other painful events because my policy document did not allow this action. It turns these bugs into failures at the security perimeter.
I've used this concept in a few user applications as well. Typically those documents will always have expiration dates and I'll provide a "license" API which allows a single authenticated client request to retrieve an appropriate policy document. This is particularly nice when you want to implement a service across different providers or want to avoid downstream authentication overhead.
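A minimal sketch of such a signed policy object, with hypothetical field names and an HMAC standing in for whatever signature scheme production would actually use:

```python
import hashlib
import hmac
import ipaddress
import json
import time

SIGNING_KEY = b"server-side-signing-key"   # hypothetical; never leaves the issuer

def issue_policy(actions, cidr, ttl):
    """Mint a signed policy: allowed actions, source-network constraint, expiry."""
    policy = {"actions": sorted(actions), "cidr": cidr, "exp": int(time.time()) + ttl}
    blob = json.dumps(policy, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return blob, sig

def authorize(blob, sig, action, source_ip):
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False                                   # tampered policy
    policy = json.loads(blob)
    if time.time() > policy["exp"]:
        return False                                   # expired
    if ipaddress.ip_address(source_ip) not in ipaddress.ip_network(policy["cidr"]):
        return False                                   # wrong source network
    return action in policy["actions"]                 # fail closed on unknown actions
```

The last line is the bug-containment property described above: a buggy caller that tries "delete" with a read/write policy is stopped at the security perimeter rather than at the data.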
dnnddidiej 16 hours ago [-]
You don't usually want keys at all, at least not in the sense of "copy this key from system A and paste it into some other place, system B" (usually CI). You want some continual method of authentication and authorization.
serious_angel 16 hours ago [-]
Some magnificent systems have APP_KEY/APP_SECRET that is also used for cookie and database encryption. A frequent rotation of this is... inadequate... in systems with high traffic, to say the least, and hence I am sorry, but I do not believe it's the "usual" desire. As always, it depends on the context and transaction scope.