90-day access reviews are already stale by the time most teams export them. And if your evidence still lives in screenshots, CSVs, and side notes, you don't have an access review process. You have an audit reconstruction project.
Most teams think the hard part of access reviews is getting reviewers to respond. I don't think that's the real problem. The real problem is that the review, the revocation, and the evidence all live in different places. So every cycle becomes a weird scavenger hunt across Jira, Slack, your identity provider, and a spreadsheet someone swore they updated last week.
Key Takeaways:
- Access reviews break when review decisions, revocations, and audit evidence live in separate systems
- If your team is still exporting data into spreadsheets each quarter, you're adding delay and audit risk by design
- A usable review process needs clear app ownership, in-scope apps, and enforced revocation paths before the campaign starts
- Time-bound access reduces what needs to be reviewed later because standing privilege never piles up
- Usage context matters: if reviewers can't see last login and group details, most reviews turn into rubber stamps
- Jira-native governance works better because the work, approvals, and evidence stay on the same record
Why most access reviews fail before the review even starts
Access reviews fail before launch because the underlying system is already fragmented. By the time a reviewer sees a list of users, the request history is in one place, approvals in another, and proof of removal somewhere else entirely. That's why reviews feel slow even when the reviewer clicks fast.

The spreadsheet is the first red flag
A spreadsheet-based review process is a sign that the system of record isn't really a system. It's a patch. If your team exports app access into CSVs, assigns tabs to managers, then asks IT to clean up revocations later, you've split one control into three separate workflows.

Picture a security manager during quarter-end week. Monday morning, they export users from the identity provider. Tuesday, they email app owners. Wednesday, half the owners reply in Slack, two reply in email, one updates the spreadsheet, and one says "remove everyone inactive" without naming who. Friday, IT is left translating all that into group changes manually. I've seen versions of this movie a lot. It never ends cleanly.
If you want a quick diagnostic, check these three things before your next review cycle:
- Can a reviewer see the user's last login without opening another tool?
- Can a revoke decision trigger the actual removal path automatically?
- Can an auditor trace the decision and the action from one record?
If the answer is no to even one of those, your access review is going to drift. If the answer is no to two, you're not reviewing access. You're documenting intentions.
A lot of teams stick with spreadsheets because they feel flexible. Fair point. Spreadsheets are easy to start with. They're also the fastest way to create silent failure, because flexibility is just another word for "no enforced path" once you pass a few dozen apps and a few hundred users.
If you want to see what a more controlled model looks like inside Jira, learn more about Multiplier.
Review latency is usually an ownership problem
Most people blame reviewers for slow campaigns. Sometimes that's fair. More often, the delay starts earlier. Nobody has clear ownership for the app, the reviewer doesn't trust the data they're looking at, or the request lands without enough context to make a decision in one pass.

A simple threshold helps here. If more than 15% of your in-scope apps don't have a named individual owner before launch, don't start the campaign. Fix ownership first. Otherwise, the review window becomes a routing exercise, not a control.
One of the more overlooked issues is that teams confuse "someone can approve access" with "someone can review access." Those are not always the same person. The manager who approved a request six months ago may have no idea whether the entitlement still makes sense now. That's why reviews stall. The wrong person has the ball.
We've seen this in high-growth environments a lot. Headcount jumps, apps pile up, departments change, and suddenly nobody knows who really owns what. A mid-market SaaS team can go from 50 sanctioned apps to 150 faster than they think. Once that happens, app ownership debt shows up everywhere. Access reviews just expose it.
Manual revocation quietly kills the control
The review itself isn't the control. The control is decision plus enforcement plus proof. If reviewers click "revoke" and someone still has to remember to remove a group later, you've got a gap. Maybe a small gap. Still a gap.
This matters more than teams admit. If your post-review revocation queue takes more than 48 hours to clear, switch your process. Don't accept human follow-up as normal. Reviews with manual cleanup create a false sense of safety because the dashboard says "completed" while access is still live.
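That 48-hour bar is easy to check mechanically: compute the age of every revoke decision that still lacks a confirmed removal, and flag anything past the cutoff. A minimal sketch of the idea, assuming you can export open revocation items with their decision timestamps (the field names here are illustrative, not any specific tool's API):

```python
from datetime import datetime, timedelta, timezone

def stale_revocations(queue, now=None, max_age_hours=48):
    """Return items whose revoke decision is older than max_age_hours
    but whose access removal has not yet been confirmed."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    return [item for item in queue
            if item["removed_at"] is None and item["decided_at"] < cutoff]

now = datetime(2024, 4, 5, 12, 0, tzinfo=timezone.utc)
queue = [
    {"user": "a.chen", "decided_at": now - timedelta(hours=72), "removed_at": None},
    {"user": "b.ortiz", "decided_at": now - timedelta(hours=12), "removed_at": None},
]
print([i["user"] for i in stale_revocations(queue, now=now)])  # ['a.chen']
```

If this list is ever non-empty two days after a campaign closes, the dashboard's "completed" status is lying to you.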
A fintech team dealing with privileged access learned this the hard way. They had long-lived elevated access hanging around after the work was done. Once they shifted to time-limited access and automatic expiry, they cut privileged access by 85% and automatically revoked more than 1,300 requests after approved windows ended. That's a surprising connection for some teams: the cleanest access review is often the access you never have to review later because it already expired on schedule.
That leads to the bigger question. If the old process is built on exports, manual follow-up, and fuzzy ownership, what should replace it?
The real fix is putting governance where the work already happens
Access reviews work better when they live inside the same operating system as requests, approvals, and tickets. That's the reframe. The issue isn't that reviewers are lazy or auditors are demanding too much. The issue is the ITSM and IGA split.
When governance lives outside Jira, every step creates another handoff. The request starts in one place, the approval happens in another, the provisioning happens through the identity provider, and evidence gets rebuilt later. Put all of that on one record, and the review process gets simpler because context doesn't leak out of the system.
Jira is already the workflow system your team trusts
Most IT and workplace teams already run service delivery in Jira Service Management. That's where intake, SLAs, approvals, escalations, and ticket history live. So when access governance sits off to the side in a separate portal, you're forcing people to leave the tool they already trust just to complete a control that should have been part of the workflow from day one.
I think this is where a lot of identity programs go wrong. They optimize for the architecture diagram, not the operating reality. On the diagram, a separate governance suite looks clean. In practice, it means another portal, another queue, another source of evidence, and another place where work gets ignored.
The logic for keeping governance separate is understandable. Specialized tools promise deeper policy control. That's real. I get why teams buy them. But if your access review still ends with someone copying results back into Jira, attaching screenshots, and chasing revocations manually, the extra depth isn't helping the actual workflow.
A good rule of thumb: if reviewers need more than one login and more than one browser tab to finish a review with confidence, the process is too fragmented for scale.
Audit readiness should be a byproduct, not a project
Audits should be ready by design in Jira, not rebuilt in spreadsheets. That's the contrarian take here, and I think it's the right one. Evidence is strongest when it's generated as part of the normal flow of work, not assembled weeks later by someone trying to remember what happened.
There's a practical test you can use. Pull any revoked access item from the last cycle and ask:
- Who approved or reviewed it?
- When was the decision made?
- When was access actually removed?
- Where is the proof?
If your team can't answer those four questions in under three minutes from one system, your evidence model is too brittle. That's not just an audit pain. It's an operating pain. Because the same mess that slows auditors also slows IT when someone asks, "Why does this person still have access?"
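One way to make the four-question test concrete is to insist that a single record carries all four answers. A sketch of what that record shape might look like; the field names are hypothetical, not any product's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RevocationEvidence:
    reviewer: str               # who approved or reviewed it
    decided_at: str             # when the decision was made (ISO 8601)
    removed_at: Optional[str]   # when access was actually removed
    proof_ref: Optional[str]    # where the proof lives (e.g. a ticket key)

    def audit_ready(self) -> bool:
        """True only if all four audit questions are answerable here."""
        return all([self.reviewer, self.decided_at,
                    self.removed_at, self.proof_ref])

item = RevocationEvidence("j.park", "2024-04-01T10:02:00Z",
                          "2024-04-01T10:03:12Z", "ITSM-4211")
print(item.audit_ready())  # True
```

If any field has to be filled in later by a human digging through Slack, the evidence model is the reconstruction project this article opened with.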
This is why usage context matters so much. A reviewer looking at names alone won't make strong decisions. Give them group memberships, department, title, and last login, and the review gets sharper fast. One field won't fix everything. But the difference between "I think they still need it" and "they haven't logged in for 94 days" is huge.
Least privilege gets easier when expiry is the default
Quarterly reviews are trying to clean up what daily operations allowed to pile up. That works, sort of. But it gets expensive. The better model is to shrink the pile in the first place.
If access is privileged, temporary, or tied to incident work, make it time-bound. If the request doesn't need permanent standing access, don't grant permanent standing access. This sounds obvious. It rarely gets enforced.
The threshold I'd use is simple. For admin-level or production access, if the work can reasonably be completed within 24 hours, default to time-bound access. Don't make permanent access the starting point. Make someone justify why it shouldn't expire. That's how least privilege moves from policy language into operations.
A lot of teams worry that stricter expiry creates friction. Sometimes it does. That's a valid concern. Nobody wants engineers blocked during an incident. But the tradeoff is worth it when the system supports fast approvals and clean extensions. Manual standing access feels easier in the moment. It creates far more cleanup later.
So if the fix is Jira-native governance with expiry by default, what does a good operating model actually look like day to day?
What a usable access review system looks like in practice
A usable system gives reviewers the right context, routes decisions to the right person, and enforces the action without another queue. That's the bar. Not perfect policy language. Not a giant implementation. Just a process that survives contact with real teams.
Start by reducing review scope before the campaign launches
The best access review starts weeks before the campaign. You narrow scope, clean ownership, and decide which apps are worth reviewing with real rigor. Not every app needs the same treatment.
Here's a practical split:
- Review privileged, finance, customer data, admin, and production-related apps every quarter
- Review lower-risk business apps every 6 months
- Exclude low-impact apps only if you can document why they're low impact
- Fix unnamed ownership before launch
- Remove inactive or duplicate entitlements before reviewers ever see them
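The split above is simple enough to encode as a cadence rule keyed on risk tier, with unowned apps blocking the campaign instead of entering it. A sketch, assuming you tag each app with a tier and an owner before launch (tier names and fields are illustrative):

```python
# Review cadence in months, keyed by risk tier.
CADENCE = {"privileged": 3, "finance": 3, "customer_data": 3,
           "admin": 3, "production": 3, "business": 6}

def review_plan(apps):
    """Split apps into a review schedule and a launch-blocker list.
    Apps without a named owner block the campaign instead of entering it."""
    scheduled, blocked = [], []
    for app in apps:
        if not app.get("owner"):
            blocked.append(app["name"])
        elif app["tier"] in CADENCE:
            scheduled.append((app["name"], CADENCE[app["tier"]]))
        # Tiers outside CADENCE are excluded; document why elsewhere.
    return scheduled, blocked

apps = [
    {"name": "prod-db", "tier": "production", "owner": "dba-team-lead"},
    {"name": "wiki", "tier": "business", "owner": "it-ops"},
    {"name": "crm", "tier": "customer_data", "owner": None},
]
print(review_plan(apps))
# ([('prod-db', 3), ('wiki', 6)], ['crm'])
```

The point of the blocker list is the 15% ownership rule from earlier: if it's long, you fix ownership first, not the campaign.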
If you skip that cleanup, reviewers get flooded with noise. And once a reviewer sees 200 low-signal lines in a list, quality drops hard. Rubber-stamping isn't usually laziness. It's often a volume problem disguised as a behavior problem.
Honestly, this surprised us more than anything else across a lot of teams. The issue wasn't reviewer resistance. It was bad scoping. People were being asked to review too much, with too little context, too late in the quarter.

Give reviewers enough context to say revoke with confidence
Reviewers need context that changes the decision. Name and app aren't enough. Last login, job title, department, group membership, and prior recommendation are usually enough to separate obvious keeps from obvious revokes.
A good diagnostic question for your process is: would a manager revoke access for someone they vaguely recognize but haven't worked with recently? Usually not. Add usage data, and that changes. A person who hasn't logged in for 90 days is a different conversation.
That's why a lot of modern review programs use inactivity as a signal. Not as the only signal. Just a strong one. If you have accurate login telemetry, then a simple rule works well: when a user hasn't logged in for 60 to 90 days on a paid app, push that item into a likely revoke bucket unless the reviewer has a clear reason to keep it. That one rule alone cuts down a lot of passive license waste and stale access.
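That inactivity rule is trivial to operationalize once last-login data is reliable. A sketch, assuming you can pull last-login dates per user per app from your identity provider (the threshold and field names are illustrative):

```python
from datetime import date

def bucket_entries(entries, today, inactive_days=60):
    """Partition review items into likely-revoke vs needs-review
    based on days since last login."""
    likely_revoke, needs_review = [], []
    for e in entries:
        idle = (today - e["last_login"]).days
        (likely_revoke if idle >= inactive_days else needs_review).append(
            (e["user"], idle))
    return likely_revoke, needs_review

today = date(2024, 4, 5)
entries = [
    {"user": "a.chen", "last_login": date(2024, 1, 2)},    # long idle
    {"user": "b.ortiz", "last_login": date(2024, 3, 30)},  # active recently
]
print(bucket_entries(entries, today))
# ([('a.chen', 94)], [('b.ortiz', 6)])
```

Reviewers still decide; the bucket just pre-sorts the obvious revokes so the 200-line list stops being 200 equal decisions.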
And there is a connection people miss here. Access governance and SaaS spend are often the same cleanup motion seen from two different teams. Security sees stale access. Finance sees idle licenses. Same mess.
A fast-growing team in AI handled thousands of access requests in a year with a four-person IT ops team, and 75% of those requests were fully automated. That's not just an efficiency story. It's a signal that when intake, approval, and provisioning are standardized, review data gets cleaner too. Fewer exceptions. Fewer weird one-offs. Better controls downstream.
If you want to dig into what that Jira-native model looks like in more detail, see how Multiplier works.
Enforce the decision path, not just the review UI
A lot of teams put serious effort into the review interface and almost none into the enforcement path. That's backwards. A good review system should make "revoke" mean something operationally, not symbolically.
The easiest test is this: when a reviewer clicks revoke, what happens next? If the answer is "an IT admin handles it later," the process is underpowered. If the answer is "the system removes the mapped access path and writes the change back to the ticket," you're in much better shape.
Use this rule:
- If access is group-based in the identity provider, revocation should be automatic
- If access is manual or outside your SSO path, flag it as requiring a separate operational task
- If more than 20% of your in-scope apps still rely on manual revoke steps, don't call the program mature yet
That last number matters. Below 20%, manual exceptions are manageable. Above that, your review program will spend most of its energy coordinating cleanup rather than reducing risk.
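The 20% threshold can be checked mechanically if each in-scope app is tagged with its revocation path. A sketch (the `group_based` flag is an assumption about how you'd record whether access maps to an identity provider group):

```python
def revoke_maturity(apps, manual_cap=0.20):
    """Classify each app's revoke path and check the manual-exception ratio."""
    manual = [a["name"] for a in apps if not a["group_based"]]
    ratio = len(manual) / len(apps)
    return {
        "manual_apps": manual,          # need a separate operational task
        "manual_ratio": round(ratio, 2),
        "mature": ratio <= manual_cap,  # below the cap, exceptions are manageable
    }

apps = [
    {"name": "prod-db", "group_based": True},
    {"name": "billing", "group_based": True},
    {"name": "legacy-ftp", "group_based": False},
    {"name": "crm", "group_based": True},
]
print(revoke_maturity(apps))
# {'manual_apps': ['legacy-ftp'], 'manual_ratio': 0.25, 'mature': False}
```

Running this once per quarter also gives you a trend line: the manual ratio should fall over time, not just stay under the cap.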
Keep the system honest with versioning and reviewer flows
Access review programs decay quietly. New apps get added without owners. Roles change. A once-clean approval path gets bypassed for speed. That's normal. Which is why you need maintenance rules, not just launch energy.
Three checks keep the system honest:
- Review app ownership monthly for high-risk apps
- Revisit role-to-group mappings every quarter
- Audit exception paths, especially manual provisioning and manual revocation items
And if you want one maturity marker, use this: can you explain why a given role has a given entitlement set without opening five tools or asking three people? If not, your governance model is drifting.
Not everyone agrees with tightening these checks early. Some teams prefer to wait until after audit findings pile up. I think that's backwards. Once drift gets normalized, fixing it takes way longer because you're cleaning data and behavior at the same time.
How Multiplier makes Jira-native reviews and evidence real
Multiplier brings access governance into Jira Service Management so the request, approval, review, revocation, and evidence stay tied to the same record. That matters because the big failure point in most access review programs isn't intent. It's fragmentation. Multiplier closes that gap by keeping the operational work and the audit trail in one place.
Reviews happen in Jira with the context reviewers actually need
Multiplier's Access Reviews replace spreadsheet-driven certification cycles with Jira-native campaigns. Admins create campaigns, choose in-scope applications, assign reviewers, and launch from inside the same environment where the service workflow already lives. Reviewers land in a JSM Help Center dashboard and can see user attributes, groups, job titles, departments, last login, and revoke recommendations.
That changes the quality of the review immediately. You're not asking a manager to interpret a raw export with no context. You're putting the decision in front of them with enough signal to act. And when a reviewer marks revoke, Multiplier can remove the user from the relevant identity provider groups, create the Jira record of that change, and keep campaign progress updated. That's a much tighter loop than "please update the spreadsheet and someone from IT will handle it later."
For teams dealing with evidence headaches, Multiplier also supports CSV exports for auditors and can push evidence to Vanta. That's important because the audit artifact comes from the workflow itself, not from a separate reconstruction process weeks later.
Least privilege gets enforced through time-bound access and automated provisioning
Multiplier also makes least privilege practical, not just aspirational. With Time-Based Access, requesters can choose a duration like 1, 6, or 24 hours during submission. Once approved, access is provisioned through mapped identity provider groups, and on expiry, the group membership is removed and recorded in Jira. That means temporary access doesn't depend on someone remembering cleanup.
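Mechanically, expiry like this amounts to a sweep over grants with deadlines: anything past its window loses the mapped group, and the removal is written back as a record. A minimal sketch of that pattern, not Multiplier's actual implementation (function and field names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def expire_grants(grants, now, remove_group, record):
    """Revoke any time-bound grant whose approved window has passed."""
    for g in grants:
        if g["active"] and now >= g["expires_at"]:
            remove_group(g["user"], g["group"])   # IdP group removal
            record(g["user"], g["group"], now)    # audit-trail write-back
            g["active"] = False
    return grants

now = datetime(2024, 4, 5, 12, 0, tzinfo=timezone.utc)
grants = [
    {"user": "a.chen", "group": "prod-admin", "active": True,
     "expires_at": now - timedelta(hours=1)},   # window ended
    {"user": "b.ortiz", "group": "prod-admin", "active": True,
     "expires_at": now + timedelta(hours=5)},   # still inside window
]
log = []
expire_grants(grants, now,
              remove_group=lambda u, g: None,   # stand-in for the IdP call
              record=lambda u, g, t: log.append((u, g)))
print(log)  # [('a.chen', 'prod-admin')]
```

The key property is that removal and recording happen in the same step, so the audit trail can never lag the actual state of access.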

Pair that with Automated Provisioning via Identity Provider Groups and Approval Workflows, and the process gets much cleaner. Requests can route to the right approver in Jira or Slack, move to the approved status, and trigger the identity provider action that adds or removes the mapped group. Multiplier provisions through the identity provider group path, not directly inside individual SaaS apps, which is exactly why the record stays authoritative and auditable.
The Slack App helps on the speed side too. Users can request access in chat, approvers can approve in Slack, and the Jira issue remains the system of record underneath it all. So you get the fast operational feel teams want without breaking the evidence chain.
If your current process still depends on exports, screenshots, and cleanup queues, get started with Multiplier.
Why clean access reviews start with cleaner operations
Access reviews get easier when they stop being a quarterly cleanup ritual for a broken operating model. Put governance inside Jira. Use time-bound access by default for elevated roles. Make revocation enforceable. Give reviewers usage context. Then the review becomes what it should have been all along: a control, not a rescue mission.
Most teams don't need more policy. They need fewer handoffs. And they need audit evidence that writes itself while the work happens. That's the shift. Once you make it, least privilege stops feeling theoretical and starts feeling operational.
Frequently Asked Questions
How do I automate access requests in Multiplier?
To automate access requests using Multiplier, start by setting up the Application Catalog in your Jira Service Management (JSM) portal. This allows employees to browse approved applications and select roles easily. Once they submit a request, Multiplier automatically creates a Jira ticket and routes it to the appropriate approver. Ensure that your identity provider is connected to Multiplier for seamless provisioning. This setup streamlines the entire process and minimizes manual intervention, making it efficient for both employees and IT teams.
What if I need to review access for multiple applications?
If you need to review access for multiple applications, you can create an Access Review campaign in Multiplier. Start by selecting the in-scope applications that are marked as 'Approved' in your catalog. Assign reviewers for each app and launch the campaign directly from JSM. Reviewers will receive a dashboard showing user attributes and last login dates, allowing them to make informed decisions quickly. This unified approach helps avoid fragmented reviews and ensures that all evidence is linked to the same record.
Can I enforce time-based access with Multiplier?
Yes, you can enforce time-based access in Multiplier. When users submit access requests, they can choose a duration for their access (e.g., 1, 6, or 24 hours). Once the request is approved, Multiplier provisions the access and automatically sets a timer to revoke it when the time expires. This approach helps maintain least privilege by ensuring that elevated access is only granted when necessary and reduces the risk of lingering permissions after tasks are completed.
When should I conduct access reviews?
You should conduct access reviews regularly, ideally every quarter for high-risk applications like finance or admin tools. For lower-risk business apps, consider reviewing them every six months. Before launching a review campaign, ensure that ownership is clear for all in-scope apps and that inactive or duplicate entitlements are removed. This proactive approach helps keep your access control tight and minimizes the workload during review cycles.
Why does my access review process feel slow?
Your access review process may feel slow due to fragmented systems where approvals, requests, and evidence are scattered across different tools. To improve this, use Multiplier to integrate access governance directly into Jira. This way, all steps—from request to approval to revocation—are tied to the same Jira record, reducing delays and confusion. Ensure that reviewers have access to relevant user context, like last login dates, to make quicker decisions.