// series Attack Walkthroughs · Part 01

Noisy sign-in triage: the small joins that matter


Sign-in alerts are one of the loudest signal sources in a multi-tenant SOC. Every day brings a wave of "Atypical travel", "Unfamiliar sign-in properties", and "Sign-in from anonymous IP" alerts. Most are benign. A few matter. The problem is sorting them without tuning so aggressively that you blind yourself.

The naive approach is a simple IP or country filter: exclude known VPN egress IPs, exclude the headquarters country, close the ticket. It works until it doesn’t. The filter catches the obvious cases, but it quietly drops alerts where a user signed in from a known IP range using a credential that was compromised a week earlier. The IP matched the exclusion list, so the alert never surfaced.
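
Concretely, the naive version looks something like this. It's a sketch for contrast only: the egress list and the country code are placeholders, not recommendations.

let known_egress = dynamic(["198.51.100.10", "203.0.113.25"]);  // placeholder VPN egress IPs
SigninLogs
| where TimeGenerated > ago(1d)
| where IPAddress !in (known_egress)                  // everything excluded here vanishes silently
| where Location != "GB"                              // exclude HQ country (placeholder)
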

A better approach adds identity context to the sign-in event. Here’s the pattern.

The core join

let lookback = 3d;
let change_window = 1d;                               // tune to your tenant baseline
let suspicious_signins =
    SigninLogs
    | where TimeGenerated > ago(lookback)
    | where ResultType == "0"                         // successful auth only
    | where UserPrincipalName endswith "@contoso.com"
    | where IPAddress startswith "10."                // internal range — tune as needed
    | project
        TimeGenerated,
        UPN = UserPrincipalName,
        IP  = IPAddress,
        AppDisplayName,
        ConditionalAccessStatus,
        RiskLevelDuringSignIn;

let recent_pw_changes =
    AuditLogs
    | where TimeGenerated > ago(lookback)
    | where OperationName in ("Change user password", "Reset user password")
    | where Result == "success"
    | extend UPN = tostring(TargetResources[0].userPrincipalName)
    | project ChangeTime = TimeGenerated, UPN;

suspicious_signins
| join kind=leftouter (
    recent_pw_changes
) on UPN
| where isnotnull(ChangeTime)                         // ChangeTime is a datetime: isnotnull, not isnotempty
    and TimeGenerated between (ChangeTime .. ChangeTime + change_window)
| project
    TimeGenerated,
    UPN,
    IP,
    AppDisplayName,
    RiskLevelDuringSignIn,                            // keep for downstream triage
    ChangeTime,
    HoursAfterReset = datetime_diff('hour', TimeGenerated, ChangeTime)
| sort by TimeGenerated desc

What this does: it takes successful sign-ins from contoso.com accounts, then joins against password change/reset events from AuditLogs. The leftouter join followed by the ChangeTime filter behaves like an inner join here, but keeps the unmatched rows one filter away if you want to inspect them. The between filter keeps only sign-ins that happened within change_window after a credential change. A sign-in inside that window isn't automatically malicious, but it deserves a second look, especially when combined with RiskLevelDuringSignIn != "none".

Why the time bound matters

The temptation is to do a simple join kind=inner with no time constraint. That produces false positives in the other direction: accounts that changed passwords months ago will match sign-ins today, generating noise that trains analysts to ignore the result set.

The window is a judgment call. A legitimate user who resets their own password typically signs in within hours. An attacker who force-reset a credential via an admin account may wait longer, but a tight window narrows the signal considerably. Pick a value that reflects how your tenant actually behaves — and revisit it when the baseline shifts.
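
One way to ground that judgment call is to measure the gap empirically before fixing change_window. Reusing the two tables defined above, a one-off distribution query (a sketch for tuning, not part of the detection itself):

// How long after a reset do users actually sign back in? Run once per tenant.
suspicious_signins
| join kind=inner recent_pw_changes on UPN
| where TimeGenerated > ChangeTime
| extend GapHours = datetime_diff('hour', TimeGenerated, ChangeTime)
| summarize percentiles(GapHours, 50, 90, 99)

If the 90th percentile lands at a few hours, a 1d window is generous; if it sprawls, the tenant's baseline is telling you something.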

Adding device context

For environments that ingest Microsoft Defender for Identity data (the source of the IdentityLogonEvents table), extend the query with a second join:

| join kind=leftouter (
    IdentityLogonEvents
    | where TimeGenerated > ago(lookback)
    | where AccountUpn endswith "@contoso.com"
    | project LogonTime = TimeGenerated, AccountUpn, DeviceName
) on $left.UPN == $right.AccountUpn
| where isempty(DeviceName)   // flag sign-ins with no matching device logon

The absence of a correlated device logon during a sign-in from an internal IP is a weak signal on its own. Combined with a recent credential change and elevated sign-in risk, it tightens the case.
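
One option is to replace the hard isempty(DeviceName) filter with a simple additive score, so no single weak signal decides the outcome alone. A sketch with illustrative weights, assuming RiskLevelDuringSignIn survives the earlier project step:

// Swap the isempty(DeviceName) filter for a score. Weights are placeholders to tune.
| extend Score =
    iif(RiskLevelDuringSignIn != "none", 2, 0)
  + iif(isempty(DeviceName), 1, 0)
  + iif(HoursAfterReset <= 4, 1, 0)                   // very fast re-auth after a reset
| where Score >= 2
| sort by Score desc, TimeGenerated desc

The threshold controls queue size directly, which makes it easier to tune than stacking binary filters.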

The tradeoff to be honest about

This pattern generates work. You're trading raw alert volume for fewer, higher-confidence items that take more time to investigate. In a high-volume MSSP context, that's only worth it if the downstream triage is structured: a runbook, a consistent set of follow-up questions, clear close criteria.

Without that structure, the join produces a queue of “interesting” events that get stale before anyone works them. The KQL is the easy part. Getting the workflow right is the actual job.

Start with a single tenant’s data, tune the windows, then roll it out. The first few weeks will surface cases you’d have missed with a flat filter. That’s the point.