First Time Python Accessed Sensitive Credential Files
Detects the first time a Python process accesses sensitive credential files on a given host. This behavior may indicate post-exploitation credential theft via a malicious Python script, compromised dependency, or malicious model file deserialization. Legitimate Python processes do not typically access credential files such as SSH keys, AWS credentials, browser cookies, Kerberos tickets, or keychain databases, so a first occurrence is a strong indicator of compromise.
Rule type: new_terms
Rule indices:
- logs-endpoint.events.file-*
Rule severity: medium
Risk score: 47
Runs every:
Searches indices from: now-9m
Maximum alerts per execution: 100
References:
- https://blog.trailofbits.com/2024/06/11/exploiting-ml-models-with-pickle-file-attacks-part-1/
- https://github.com/trailofbits/fickling
Tags:
- Domain: Endpoint
- OS: macOS
- Use Case: Threat Detection
- Tactic: Credential Access
- Data Source: Elastic Defend
- Resources: Investigation Guide
- Domain: LLM
Version: 1
Rule authors:
- Elastic
Rule license: Elastic License v2
Attackers who achieve Python code execution — whether through malicious scripts, compromised dependencies, or model file deserialization (e.g., pickle/PyTorch __reduce__) — often target sensitive credential files such as SSH keys, cloud provider credentials, browser session cookies, and macOS keychain data. Since legitimate Python processes do not typically access these files, a first occurrence from a Python process is highly suspicious.
This rule leverages the Elastic Defend sensitive file open event, which is only collected for known sensitive file paths, combined with the New Terms rule type to alert on the first time a specific credential file is accessed by Python on a given host within a 7-day window.
Investigation steps:
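The New Terms evaluation described above can be modeled as a per-host lookup keyed on the accessed file path. This is a simplified Python sketch of that first-occurrence logic, not the actual rule engine; the 7-day window comes from the rule description, while the day-based bookkeeping is an illustrative assumption:

```python
from collections import defaultdict

WINDOW_DAYS = 7

# host -> {credential file path: day it was last seen}
seen = defaultdict(dict)

def is_new_term(host, file_path, day):
    """Alert when (host, file_path) has not been seen within the past 7 days."""
    last = seen[host].get(file_path)
    seen[host][file_path] = day
    return last is None or day - last > WINDOW_DAYS

# First access alerts; a repeat inside the window does not.
a1 = is_new_term("mac-01", "/Users/alice/.ssh/id_rsa", day=0)  # True
a2 = is_new_term("mac-01", "/Users/alice/.ssh/id_rsa", day=3)  # False
```

Because the state is keyed per host and per file path, the same Python binary reading a second credential file, or the same file read on a second host, still produces a first-occurrence alert.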
- Examine the Python process command line and arguments to identify the script or command that triggered the file access.
- Determine if the Python process was loading a model file (look for torch.load or pickle.load), running a standalone script, or executing via a compromised dependency.
- Review the specific credential file that was accessed and assess the potential impact (SSH keys enable lateral movement, AWS credentials enable cloud access, browser cookies enable session hijacking).
- Check for outbound network connections from the same process tree that may indicate credential exfiltration.
- Investigate the origin of any recently downloaded scripts, packages, or model files on the host.
- Look for file creation events in /tmp/ or other staging directories that may contain copies of the stolen credentials.
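The staging-directory check can be scripted during triage. The helper below is an illustrative sketch, not part of the rule: the /tmp default and one-hour lookback are assumptions to adjust to the alert's timeframe.

```python
import os
import time

def recent_files(directory="/tmp", max_age_s=3600):
    """List regular files in `directory` modified within the last `max_age_s` seconds."""
    cutoff = time.time() - max_age_s
    hits = []
    try:
        entries = os.scandir(directory)
    except FileNotFoundError:
        # Staging directory does not exist on this host.
        return hits
    with entries:
        for entry in entries:
            # Skip symlinks and subdirectories; only flag freshly written regular files.
            if entry.is_file(follow_symlinks=False) and entry.stat(follow_symlinks=False).st_mtime >= cutoff:
                hits.append(entry.path)
    return hits
```

Any hit that matches the size or name of a known credential file (id_rsa, credentials, Cookies.binarycookies) warrants immediate comparison against the originals.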
Possible false positives:
- Python-based secret management tools (e.g., aws-cli, gcloud) legitimately access credential files. Consider excluding known trusted executables by process path.
- SSH automation scripts using paramiko or fabric may read SSH keys. Evaluate whether the access pattern matches known automation workflows.
- Security scanning tools running Python may enumerate credential files as part of their assessment.
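Known-good automation can be tuned out by extending the rule query with executable exclusions. The paths below are hypothetical placeholders for illustration, not vetted recommendations:

```
event.category:file and host.os.type:macos and event.action:open and
process.name:python* and
not process.executable:("/opt/automation/venv/bin/python3" or "/usr/local/ops/bin/python3")
```

Exclude by full executable path rather than process name, since an attacker can trivially name a malicious interpreter to match a trusted process name.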
Response and remediation:
- Immediately rotate any credentials that were potentially accessed (SSH keys, AWS access keys, cloud tokens).
- Quarantine the Python process and investigate the source script, package, or model file that triggered the access.
- If a malicious file is confirmed, identify all hosts where it may have been distributed.
- Review outbound network connections from the host around the time of the credential access to check for exfiltration.
- Consider implementing weights_only=True enforcement for PyTorch model loading across the environment.
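The deserialization risk behind that recommendation can be demonstrated with the standard pickle module. This benign sketch substitutes str.upper for the os.system call an attacker would use; the mechanism is identical:

```python
import pickle

class Payload:
    """A pickle that executes an attacker-chosen callable when loaded."""
    def __reduce__(self):
        # A real attack would return (os.system, ("curl ... | sh",)) or similar;
        # here a harmless builtin shows that unpickling invokes arbitrary callables.
        return (str.upper, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # str.upper(...) runs during the load itself
```

This is why loading an untrusted model file is equivalent to running untrusted code: torch.load with weights_only=True mitigates it by restricting unpickling to an allowlisted set of tensor-related types.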
Rule query:
event.category:file and host.os.type:macos and event.action:open and
process.name:python*
Framework: MITRE ATT&CK
Tactic:
- Name: Credential Access
- Id: TA0006
- Reference URL: https://attack.mitre.org/tactics/TA0006/
Technique:
- Name: Credentials from Password Stores
- Id: T1555
- Reference URL: https://attack.mitre.org/techniques/T1555/
Sub Technique:
- Name: Keychain
- Id: T1555.001
- Reference URL: https://attack.mitre.org/techniques/T1555/001/