On March 18, 2026, between 10:06 and 13:47 UTC, customers experienced delays in the execution of Automation rules triggered by Jira events such as Work Item creation, Work Item updates, and comments. Automation rules using other trigger types, including scheduled triggers, manual triggers, and incoming webhooks, continued to operate normally.
The incident was caused by an internal configuration change that inadvertently disabled the event delivery pathway used to notify the automation platform of changes in Jira. It was identified through customer support tickets, confirmed through our monitoring, and engineering teams were engaged to resolve it. Once the root cause was found, the configuration was corrected and normal automation processing resumed.
Following restoration, the delayed events began flowing to the automation platform for processing, and this backlog took approximately 14 hours to fully clear. During this recovery window, some automation rules ran on Work Items whose data had since changed due to user actions, customer mitigation steps, or other causes. Because rule execution normally follows the triggering event closely, many customer rules assume they act on the Work Item immediately. The delay allowed other changes, such as updates or customer actions taken to mitigate the incident's impact, to occur first, leading to unintended results when the rule eventually executed.
During the impact window, Jira Cloud customers were unable to rely on timely execution of event-triggered automation rules. Rules that depended on Jira Work Item events, including Work Item created, Work Item updated, comment added, sprint changes, and version changes, ran with significant delays. This affected automated workflows responsible for Work Item routing, notifications, field updates, and other rule-driven actions for the duration of the incident. Automation rules that used scheduled, manual, or incoming webhook triggers were unaffected.
Following mitigation, a recovery period of approximately 14 hours was required to process the backlog of delayed events. During this window, processing delays peaked at approximately 12 hours from event occurrence to rule completion. In some cases, rules executed against Work Item data several hours after the triggering event, which caused problems because those rules were built on the expectation that little time would pass between trigger and execution. This left some Work Items in an unintended state, especially where customers had already intervened manually to mitigate the impact.
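To illustrate why delayed execution can leave a Work Item in an unintended state, here is a minimal, purely hypothetical sketch. The rule, field names, and classes are invented for illustration and are not a real customer configuration or Atlassian code. The rule acts on the Work Item's current state rather than its state at trigger time, so a large gap between trigger and execution can undo changes users made in the meantime:

```java
import java.time.Duration;
import java.time.Instant;

public class DelayedRuleExample {

    // Illustrative data types; real Jira Automation rules are configured, not coded.
    record WorkItem(String key, String assignee, String status) { }
    record TriggerEvent(String workItemKey, Instant occurredAt) { }

    /** A rule written on the assumption that it runs moments after the trigger. */
    static WorkItem autoAssignOnCreate(WorkItem current, TriggerEvent event) {
        Duration delay = Duration.between(event.occurredAt(), Instant.now());
        // In normal operation 'delay' is seconds, so overwriting the assignee is
        // harmless. With a 12-hour delay, a user may already have triaged the item,
        // and the rule silently undoes their work.
        System.out.println("Trigger-to-execution delay: " + delay);
        return new WorkItem(current.key(), "triage-queue", current.status());
    }

    public static void main(String[] args) {
        // The creation event fired 12 hours ago, but delivery to the rule was delayed.
        TriggerEvent created =
                new TriggerEvent("PROJ-123", Instant.now().minus(Duration.ofHours(12)));
        // In the meantime, a user manually triaged the Work Item.
        WorkItem manuallyTriaged = new WorkItem("PROJ-123", "alice", "In Progress");

        WorkItem afterRule = autoAssignOnCreate(manuallyTriaged, created);
        System.out.println(afterRule); // assignee reverts to "triage-queue": an unintended state
    }
}
```

In normal operation the trigger-to-execution gap is small and the overwrite is harmless; during the recovery window the same rule logic, applied hours later, reverted work that users had already completed.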
The incident was caused by a configuration change to an internal feature flag used to control event delivery to the automation platform.
A code change had been prepared to remove a feature flag from the event delivery system, but it had not yet been deployed to production. When the flag was subsequently retired through our feature flag management system, the retirement process relied on usage telemetry that incorrectly indicated the flag was no longer active. This created a blind spot: the flag appeared unused when it was in fact still being actively evaluated in production.
When the flag was retired, the event delivery system interpreted its absence as an instruction to stop delivering Jira events to the automation platform, causing all event-triggered automation rules to stop firing.
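As a simplified, hypothetical sketch of this failure mode (the class, flag key, and client interface below are invented for illustration and do not reflect Atlassian's actual implementation), a flag check that is still live in production can be silently switched off when the flag is retired, because the flag client falls back to a default value for a flag it no longer knows about:

```java
public class EventDeliveryService {

    /** Minimal stand-in for a feature flag client; names are illustrative only. */
    interface FeatureFlagClient {
        // Returns the flag's value, or defaultValue if the flag has been retired.
        boolean isEnabled(String flagKey, boolean defaultValue);
    }

    interface EventPublisher { void publish(JiraEvent event); }
    record JiraEvent(String workItemKey, String type) { }

    private final FeatureFlagClient flags;
    private final EventPublisher publisher;

    EventDeliveryService(FeatureFlagClient flags, EventPublisher publisher) {
        this.flags = flags;
        this.publisher = publisher;
    }

    void onJiraEvent(JiraEvent event) {
        // The cleanup that removes this check was written but not yet deployed, so
        // this guard was still evaluated in production. Once the flag was retired,
        // isEnabled(...) returned the default (false) and events stopped flowing
        // to the automation platform.
        if (flags.isEnabled("deliver-events-to-automation", false)) {
            publisher.publish(event);
        }
    }
}
```

The blind spot arises because the deployed code and the flag management system disagreed about whether the flag was still in use, and the retirement telemetry did not surface that disagreement.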
We understand that outages impact your productivity. In addition to our existing testing and preventative processes, Atlassian is prioritising the following actions to help reduce the likelihood and impact of similar incidents in the future:
Strengthen feature flag lifecycle safety controls
Improve event delivery monitoring and alerting
Improve our ability to clear delayed events faster, and provide controls that allow customers to choose alternative workflows for events based on how long they have been delayed.
We recognise the importance of Jira Automation to our customers' workflows and are committed to ongoing improvements to the reliability and resilience of our platform. We sincerely apologise for the disruption this incident caused, and we will continue to invest in measures that support a stable and dependable service.
Thanks,
Atlassian Customer Support