Free Booklet: Monitoring And Auditing In AWS

Monitoring and Auditing in AWS is a practical guide designed to help developers, architects, and security professionals build robust visibility into their cloud environments. As organizations increasingly rely on AWS to power critical workloads, the ability to track activity, detect anomalies, and respond to potential threats becomes essential. This booklet explores the core services, patterns, and best practices that enable effective monitoring and auditing across AWS, providing a clear path to strengthening security posture, ensuring compliance, and maintaining operational resilience in modern cloud-native systems.

Eyes Everywhere: Secure Logging and Alerting for Modern Systems – Part III


Logging and alerting become truly powerful only when they are embedded into a well-designed security architecture. Individual applications emitting logs or isolated detection rules provide only partial visibility. Modern organizations operate distributed systems composed of microservices, cloud infrastructure, container platforms, and external APIs. In such environments, security visibility requires a unified architecture capable of collecting, analyzing, and responding to telemetry across the entire system.


Designing Systems That Detect and Respond to Threats

A secure observability architecture combines several layers. Applications generate structured logs. Infrastructure produces telemetry about hosts, containers, and network activity. These signals are collected and aggregated through centralized pipelines, analyzed by security analytics platforms, and ultimately transformed into alerts and automated responses.

Application Layer Logging

Application logs represent the most detailed source of telemetry because they capture business logic events. These events include authentication attempts, financial transactions, configuration changes, and access to sensitive resources.

Applications should emit structured logs that can be easily parsed by downstream analytics systems. A simple implementation using Python might produce JSON-formatted events.

import json
import logging
from datetime import datetime

logger = logging.getLogger("app")

def log_user_action(user_id, action, resource):
    event = {
        "timestamp": datetime.utcnow().isoformat(),
        "event_type": "user_action",
        "user_id": user_id,
        "action": action,
        "resource": resource,
        "service": "account-service"
    }

    logger.info(json.dumps(event))

Such structured events allow centralized systems to search and correlate activity across multiple services. A user accessing a resource through an API gateway, for example, may generate logs in multiple backend services. A consistent schema allows analysts to reconstruct the entire request path.
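As a sketch of this correlation step, the following Python function groups structured events by a shared request identifier and orders each group chronologically to reconstruct the request path. The `request_id` field is an assumed schema key; real systems often use a trace or correlation ID instead.

```python
from collections import defaultdict

def correlate_by_request(events):
    # Group structured log events from multiple services by their shared request ID
    paths = defaultdict(list)
    for event in events:
        paths[event["request_id"]].append(event)
    # ISO 8601 UTC timestamps sort correctly as plain strings
    for trace in paths.values():
        trace.sort(key=lambda e: e["timestamp"])
    return paths

events = [
    {"request_id": "r1", "service": "account-service", "timestamp": "2026-06-01T10:00:01Z"},
    {"request_id": "r1", "service": "gateway", "timestamp": "2026-06-01T10:00:00Z"},
    {"request_id": "r2", "service": "gateway", "timestamp": "2026-06-01T10:00:02Z"},
]
paths = correlate_by_request(events)
```

With a consistent schema, each entry in `paths` is one request's journey through the system, ordered in time.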

Application logs also provide the richest context for security detection because they capture events at the business logic level rather than merely technical operations.

Infrastructure Telemetry

While application logs capture internal logic, infrastructure telemetry reveals the behavior of the underlying environment. Systems must monitor operating systems, network activity, and runtime platforms to detect suspicious behavior that occurs outside the application layer.

Operating systems produce authentication logs, process execution records, and network activity events. On a Linux system, failed SSH login attempts may appear in the authentication log.

Failed password for invalid user root from 203.0.113.11 port 42122 ssh2

Infrastructure monitoring agents collect these events and forward them to centralized logging systems.

A simple agent configuration might stream logs to a central collector.

filebeat.inputs:
  - type: log
    paths:
      - /var/log/auth.log

output.elasticsearch:
  hosts: ["https://log-server.example.com:9200"]

Network telemetry is also crucial. Firewall and network gateway logs reveal scanning attempts, suspicious connections, and unusual traffic flows.

An example network connection event might appear as structured telemetry.

{
  "event_type": "network_connection",
  "source_ip": "198.51.100.44",
  "destination_port": 22,
  "protocol": "TCP",
  "timestamp": "2026-06-01T13:42:18Z"
}

Combining application and infrastructure telemetry provides comprehensive security visibility.

Aggregation Layer

Large systems generate logs from thousands of sources. Without aggregation, these logs remain scattered across machines and services.

The aggregation layer collects logs from applications, infrastructure components, and network devices. Log collectors such as Fluent Bit, Logstash, or Vector ingest telemetry and forward it to centralized platforms.

A typical log collection configuration might look like the following Fluent Bit pipeline.

fluent-bit \
  -i tail \
  -p path=/var/log/app.log \
  -o http \
  -p host=logging.example.com \
  -p port=8080

Collectors normalize log formats and enrich events with metadata such as host identifiers, container IDs, or geographic location. This enrichment improves searchability and correlation across systems.
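A minimal enrichment step can be sketched in Python: the collector copies each event and attaches metadata before forwarding it. The extra fields shown (container ID, region) are illustrative; real collectors derive them from the runtime environment.

```python
import socket

def enrich_event(event, extra=None):
    # Return a copy so the original event is left untouched
    enriched = dict(event)
    enriched["host"] = socket.gethostname()
    enriched.update(extra or {})
    return enriched

event = {"event_type": "user_action", "user_id": "u1"}
enriched = enrich_event(event, {"container_id": "abc123", "region": "eu-west-1"})
```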

Aggregation pipelines also buffer logs during network disruptions to ensure no data is lost.
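A minimal buffering sketch might hold events in a bounded in-memory queue and retry delivery when the endpoint recovers. The class name and the `ConnectionError`-raising `send` callable are illustrative assumptions, not a real collector API.

```python
from collections import deque

class BufferedForwarder:
    # Keep events in a bounded queue and flush them when the endpoint is reachable
    def __init__(self, send, max_buffer=10_000):
        self.send = send                        # callable that raises on network failure
        self.buffer = deque(maxlen=max_buffer)  # oldest events are dropped if full

    def emit(self, event):
        self.buffer.append(event)
        self.flush()

    def flush(self):
        while self.buffer:
            event = self.buffer[0]
            try:
                self.send(event)
            except ConnectionError:
                return  # endpoint down: keep buffered events for the next attempt
            self.buffer.popleft()
```

Production collectors typically add disk-backed spooling on top of this pattern so buffered events survive a process restart.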

Analysis Layer

Once collected, logs must be analyzed to detect suspicious behavior. This is the role of the analysis layer.

Security analytics platforms ingest aggregated telemetry and apply detection rules, statistical models, and anomaly detection algorithms.

A SIEM platform might evaluate login activity using a query such as the following.

SELECT username, COUNT(*) AS failures
FROM logs
WHERE event_type = 'authentication_failure'
AND timestamp > NOW() - INTERVAL '10 minutes'
GROUP BY username
HAVING COUNT(*) > 20

If this query produces results, the platform triggers an alert.

More advanced analytics engines also perform behavioral analysis. These systems learn typical usage patterns and detect deviations.

An anomaly detection algorithm implemented in Python might evaluate whether a user’s activity deviates significantly from historical behavior.

def detect_anomaly(current_activity, baseline):
    threshold = baseline * 5
    if current_activity > threshold:
        return True
    return False

These analytic processes convert massive volumes of telemetry into actionable intelligence.

Response Layer

Detection alone is insufficient. Systems must also respond to threats.

The response layer integrates alerting systems with operational tools such as incident management platforms, messaging systems, and automated security controls.

When an alert is triggered, the system may send notifications to security teams.

def send_alert(message):
    alert = {
        "alert_type": "security_event",
        "message": message,
        "timestamp": datetime.utcnow().isoformat()
    }
    notify_security_team(alert)

Automated responses may also be triggered for critical threats. For example, a brute-force attack might result in immediate blocking of the source IP address.

def block_ip(ip_address):
    firewall.block(ip_address)

These automated defenses reduce response time and limit the impact of attacks.

Securing the Logging Pipeline

Protecting Log Transport

Log data often travels across networks before reaching centralized systems. If this communication is not secured, attackers could intercept or manipulate logs.

Secure logging pipelines encrypt log transport using TLS.

A log collector configuration may enforce encrypted transport.

output:
  elasticsearch:
    hosts: ["https://logs.example.com:9200"]
    ssl.certificate_authorities: ["/etc/certs/ca.pem"]

TLS ensures confidentiality and prevents unauthorized interception of telemetry.

Mutual authentication can also verify that only trusted systems send logs to the platform.
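For example, a Filebeat-style configuration might present a client certificate alongside the trusted CA, so the platform accepts logs only from collectors holding a valid key. The `ssl.certificate` and `ssl.key` options shown follow Filebeat's naming; other collectors expose equivalent settings under different keys, so treat this as a sketch.

```yaml
output.elasticsearch:
  hosts: ["https://logs.example.com:9200"]
  ssl.certificate_authorities: ["/etc/certs/ca.pem"]
  # Client certificate and private key presented during the TLS handshake
  ssl.certificate: "/etc/certs/collector.pem"
  ssl.key: "/etc/certs/collector-key.pem"
```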

Preventing Log Tampering

Attackers frequently attempt to erase or modify logs to hide evidence of their activity.

To protect against this, logging systems implement append-only storage and integrity verification.

One approach uses cryptographic hashing to chain log entries together.

import hashlib

def generate_log_hash(entry, previous_hash):
    combined = entry + previous_hash
    return hashlib.sha256(combined.encode()).hexdigest()

Each log entry includes the hash of the previous entry. If an attacker modifies an entry, the hash chain becomes invalid, revealing the tampering.
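A companion routine can validate the chain during an investigation by recomputing every hash. The sketch below builds on the hashing function above; the `verify_chain` helper and the seed value for the first entry are illustrative conventions rather than a standard API.

```python
import hashlib

def generate_log_hash(entry, previous_hash):
    combined = entry + previous_hash
    return hashlib.sha256(combined.encode()).hexdigest()

def verify_chain(entries, stored_hashes, seed="0"):
    # Recompute the chain; return the index of the first tampered entry, or None
    previous = seed
    for i, (entry, stored) in enumerate(zip(entries, stored_hashes)):
        expected = generate_log_hash(entry, previous)
        if expected != stored:
            return i
        previous = stored
    return None
```

If an attacker edits any entry, every recomputed hash from that point on diverges from the stored chain, and the function pinpoints where the tampering began.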

Write-once storage systems further protect logs by preventing modification after ingestion.

Isolation of Logging Infrastructure

Security telemetry must be isolated from application environments. If attackers gain access to the same infrastructure that stores logs, they may attempt to manipulate or delete evidence.

Organizations often deploy logging infrastructure in dedicated environments accessible only to security teams.

A simplified architecture may route logs from production systems to a separate security network.

app_server -> log_collector -> security_logging_cluster

Strict access control policies ensure that application administrators cannot modify stored logs.

Isolation ensures the integrity of forensic data during incident investigations.

Cloud-Native Logging and Alerting

Observability in Microservices Architectures

Microservices architectures introduce new challenges for observability. A single user request may traverse dozens of services before completing.

Distributed tracing provides visibility into these interactions by assigning correlation identifiers to requests.

A service might attach a request ID to every log entry.

const requestId = generateRequestId();

logger.info({
  request_id: requestId,
  event: "api_request_received"
});

Downstream services propagate this identifier so analysts can trace the entire execution path.
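Propagation itself can be sketched in Python: a service reuses the caller's identifier when one arrives and copies it onto every outgoing call. The `X-Request-ID` header name is an assumed convention; standardized tracing uses headers such as W3C `traceparent`.

```python
import uuid

HEADER = "X-Request-ID"  # assumed header name; tracing standards define their own

def incoming_request_id(headers):
    # Reuse the caller's ID when present, otherwise start a new trace
    return headers.get(HEADER) or str(uuid.uuid4())

def outgoing_headers(request_id, headers=None):
    # Copy the correlation ID onto every downstream call
    headers = dict(headers or {})
    headers[HEADER] = request_id
    return headers
```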

Kubernetes and Container Logs

Container orchestration platforms generate extensive telemetry about container lifecycles, pod scheduling, and cluster activity.

Kubernetes exposes logs through its API.

kubectl logs deployment/payment-service

Security-relevant cluster events may include pod creation, container crashes, and unexpected resource modifications.

A cluster event might look like the following.

{
  "event_type": "pod_created",
  "namespace": "production",
  "pod_name": "api-server-6f45",
  "timestamp": "2026-06-01T14:22:33Z"
}

Monitoring these events allows organizations to detect suspicious deployments or unauthorized configuration changes.

Serverless Logging Challenges

Serverless environments introduce additional complexity because execution environments are ephemeral. Functions may run for only a few milliseconds before terminating.

As a result, logs must be exported immediately to centralized systems.

A serverless function might log activity using a cloud-native logging service.

import json
import logging
from datetime import datetime

logging.getLogger().setLevel(logging.INFO)

def handler(event, context):
    # AWS Lambda's context object exposes a request ID rather than a timestamp
    logging.info(json.dumps({
        "event_type": "function_execution",
        "function": "payment_handler",
        "request_id": context.aws_request_id,
        "timestamp": datetime.utcnow().isoformat()
    }))

Centralized logging ensures that transient environments do not lose telemetry.

Automation and AI in Security Monitoring

Automated Detection Pipelines

Security monitoring increasingly relies on automated detection pipelines capable of processing large volumes of telemetry in real time.

Machine learning models can identify patterns that traditional rule-based systems might miss.

A simple anomaly detection model might analyze login frequency.

from sklearn.ensemble import IsolationForest

model = IsolationForest()
model.fit(training_data)
prediction = model.predict(new_login_data)
if prediction[0] == -1:  # predict returns an array; -1 marks an outlier
    trigger_alert("Login anomaly detected")

Such models can detect subtle deviations in user behavior.

AI-Assisted Threat Analysis

Artificial intelligence systems can assist analysts by prioritizing alerts and correlating events across large datasets.

An AI system might analyze multiple signals simultaneously.

if unusual_login and new_device and large_data_access:
    alert("Possible account compromise")

By correlating signals across logs, AI systems can identify complex attack patterns.

Automated Response

Automation can also mitigate attacks automatically.

For example, if a system detects repeated authentication failures from a specific IP address, it may block that address.

if failed_attempts > 50:
    firewall.block(source_ip)

If suspicious activity occurs on a user account, automated controls may disable the account temporarily.

def disable_account(user_id):
    account_service.disable(user_id)

Automation reduces the time between detection and response, limiting the damage attackers can cause.

Logging and Alerting Best Practices Checklist

Principles for Secure Observability

Effective observability systems focus on collecting meaningful telemetry, protecting the integrity of logs, and designing alerts that enable rapid response.

Applications should log security-relevant events using structured formats. Infrastructure telemetry must complement application logs to provide full visibility into system behavior. Detection rules must evolve continuously as new threats emerge.

Security monitoring is not a static system but an evolving process.

Continuous Improvement

Attackers constantly change tactics, techniques, and procedures. Security monitoring systems must therefore evolve continuously.

Detection rules should be refined based on real-world incidents and emerging threat intelligence.

Security teams often review historical incidents and update detection logic accordingly.

def update_detection_rules(new_patterns):
    rules_engine.add(new_patterns)

This iterative improvement ensures that monitoring systems remain effective against modern threats.

Observability as a Security Strategy

Modern software systems produce enormous volumes of telemetry. Yet data alone does not provide protection. Security emerges only when organizations design observability systems that transform telemetry into actionable intelligence.

Secure logging provides the foundation. Alerting transforms logs into early warning signals. Automated responses limit the impact of attacks.

When implemented correctly, logging and alerting systems become the digital equivalent of surveillance infrastructure, continuously monitoring the environment for suspicious activity.

In an era where attackers move rapidly and stealthily, secure observability gives defenders the visibility required to detect threats early, investigate incidents effectively, and protect critical systems before damage occurs.

Eyes Everywhere: Secure Logging and Alerting for Modern Systems – Part II


Logging is the foundation of security visibility, but logs alone do not defend systems. A modern production environment may generate millions or even billions of log entries per day. Hidden within this massive stream of telemetry are the signals that reveal active attacks, compromised accounts, and data exfiltration attempts.

Without intelligent processing, these signals remain buried inside an ocean of noise.

Alerting transforms raw logs into actionable intelligence. It is the mechanism through which suspicious events are detected, prioritized, and escalated to the people or systems capable of responding.

In a mature security architecture, logging produces the raw telemetry, while alerting converts that telemetry into security awareness.


From Logs to Security Intelligence

Modern software systems produce enormous quantities of telemetry. A single API gateway may process tens of thousands of requests per second, while a Kubernetes cluster might generate hundreds of infrastructure events every minute. Each of these activities produces logs.

A simplified example of application logging may look like the following Python implementation.

import logging
import json
from datetime import datetime

logging.basicConfig(level=logging.INFO)

def log_api_request(user_id, endpoint, ip):
    event = {
        "event_type": "api_request",
        "user_id": user_id,
        "endpoint": endpoint,
        "source_ip": ip,
        "timestamp": datetime.utcnow().isoformat()
    }

    logging.info(json.dumps(event))

This code produces useful telemetry, but by itself it does not provide any defense capability. The system will continue logging events even if an attacker performs malicious actions.

To detect attacks, systems must interpret patterns in logs.

For example, a single failed login attempt is normal. Hundreds of failed login attempts from the same IP address within seconds indicate a brute-force attack.

A detection rule might analyze logs to identify such behavior.

def detect_bruteforce_attempt(log_events):
    failed_attempts = {}

    for event in log_events:
        if event["event_type"] == "authentication_failure":
            ip = event["source_ip"]
            failed_attempts[ip] = failed_attempts.get(ip, 0) + 1

            # Alert once, when the threshold is first exceeded
            if failed_attempts[ip] == 11:
                print(f"ALERT: Possible brute force attack from {ip}")

In practice, such analysis occurs inside centralized analytics platforms rather than application code. The example illustrates how raw telemetry becomes security intelligence only when interpreted.

Detection Engineering

Detection engineering is the discipline of designing rules and analytics that transform logs into security detections.

A detection rule describes a pattern that indicates suspicious activity.

Consider a typical credential stuffing scenario where attackers attempt to log into many accounts using stolen credentials.

A detection rule may look like the following example expressed in pseudo-SIEM query language.

SELECT source_ip, COUNT(*) AS failures
FROM authentication_logs
WHERE event_type = 'authentication_failure'
AND timestamp > NOW() - INTERVAL '5 minutes'
GROUP BY source_ip
HAVING COUNT(*) > 50

If the query returns results, it means a single IP address generated more than fifty authentication failures within five minutes. This pattern strongly suggests automated attack activity.

Detection engineering often involves continuous refinement. As attackers change tactics, detection rules must evolve to identify new patterns.

Another example might detect privilege escalation events.

SELECT user_id, COUNT(*) AS role_changes
FROM audit_logs
WHERE event_type = 'role_assignment'
AND timestamp > NOW() - INTERVAL '10 minutes'
GROUP BY user_id
HAVING COUNT(*) > 5

Multiple role changes within a short time window may indicate suspicious administrative activity.

These rules convert raw telemetry into detectable threats.

Security Analytics Platforms

Manual log analysis quickly becomes impossible in modern environments. Organizations therefore rely on specialized security analytics platforms to process telemetry.

Security Information and Event Management systems, commonly known as SIEM platforms, collect logs from multiple systems and apply detection logic.

Logs might be shipped to such a platform using log collectors.

fluent-bit -i tail -p path=/var/log/app.log -o http -p host=siem.example.com -p uri=/ingest

Once ingested, the SIEM platform normalizes logs and runs detection rules.

An example event inside such a system might look like the following JSON structure.

{
  "event_type": "authentication_failure",
  "username": "alice",
  "source_ip": "198.51.100.44",
  "service": "login-api",
  "timestamp": "2026-05-12T14:18:22Z"
}

Behavioral analytics tools can also detect anomalies. Instead of relying only on fixed rules, they analyze historical patterns and identify deviations.

For example, a system might detect that a user account suddenly accesses resources from a different country.

if user_country != last_known_country:
    generate_alert("Geographic anomaly detected")

These analytics systems convert telemetry into meaningful signals that security teams can investigate.

Designing Effective Security Alerts

Not all alerts are useful. Poorly designed alerts generate noise rather than insight.

A meaningful alert must provide enough information for responders to understand the event.

A minimal alert message might include context such as the affected user, IP address, and event type.

{
  "alert_type": "brute_force_attack",
  "source_ip": "203.0.113.10",
  "failed_attempts": 120,
  "time_window": "5 minutes",
  "timestamp": "2026-05-12T15:01:22Z"
}

Such an alert provides investigators with the context required to understand the threat quickly.

Alerts must also avoid excessive false positives. If every minor anomaly generates a notification, responders quickly learn to ignore them.
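One common mitigation is suppression: emit a given alert key at most once per time window, so a burst of identical detections produces a single notification. A minimal sketch might look like the following; the class name is illustrative, and the clock is injectable so the behavior can be tested deterministically.

```python
import time

class AlertSuppressor:
    # Emit a given alert key at most once per suppression window
    def __init__(self, window_seconds=300, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock
        self.last_sent = {}

    def should_send(self, key):
        now = self.clock()
        last = self.last_sent.get(key)
        if last is not None and now - last < self.window:
            return False  # duplicate within the window: suppress
        self.last_sent[key] = now
        return True
```

Keying on a combination of alert type and source (for example, `"brute_force:203.0.113.10"`) keeps distinct incidents visible while collapsing repeats.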

Alert Severity Levels

Security alerts typically fall into several severity levels depending on their impact and urgency.

An informational event may indicate activity worth recording but not immediate action.

A suspicious event suggests unusual behavior requiring investigation.

A critical alert indicates a likely security incident that demands immediate response.

An example alert structure might encode severity explicitly.

{
  "severity": "high",
  "alert": "privilege_escalation_detected",
  "user": "admin_17",
  "action": "granted_admin_role",
  "target_user": "user_448",
  "timestamp": "2026-05-12T16:20:11Z"
}

Severity classification allows security systems to prioritize responses.

Mapping Alerts to Attack Scenarios

Effective alerting requires understanding how attacks unfold.

Credential stuffing attacks generate many login failures.

if failed_attempts > 100:
    alert("Credential stuffing attack suspected")

Privilege escalation attempts involve changes to user roles.

if event["event_type"] == "role_assignment" and event["role"] == "admin":
    alert("Administrative privilege granted")

Data exfiltration often involves unusually large data transfers.

if download_size > 100_000_000:  # threshold in bytes, roughly 100 MB
    alert("Large data export detected")

By mapping alerts to real-world attack behaviors, systems detect threats earlier in the attack lifecycle.

Reducing Alert Fatigue

Alert fatigue is one of the most significant challenges in security monitoring.

When systems generate excessive alerts, security teams become overwhelmed. Investigators cannot review every alert, and critical events may be missed.

Consider an example where every failed login triggers an alert.

if event["event_type"] == "authentication_failure":
    alert("Login failure detected")

Such a rule would generate thousands of alerts per day in a busy system.

Effective alerting must filter noise and focus on patterns.

Tuning Detection Rules

Detection rules often rely on thresholds.

For example, a system might trigger an alert only after repeated failures.

if failed_login_count > 10:
    alert("Multiple failed login attempts detected")

These thresholds must be tuned carefully based on real system behavior.

Too low and alerts become noisy. Too high and attacks may go unnoticed.
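One practical tuning approach is to derive thresholds from observed behavior rather than guessing: collect a baseline of normal activity and alert only on values several standard deviations above the mean. The helper below is a sketch under that assumption; the name and the choice of k=3 are illustrative.

```python
import statistics

def tune_threshold(daily_failure_counts, k=3):
    # Derive an alert threshold from observed behavior: mean + k standard deviations
    mean = statistics.mean(daily_failure_counts)
    stdev = statistics.pstdev(daily_failure_counts)
    return mean + k * stdev

# Hypothetical login failures per day for one service over a week
history = [4, 6, 5, 7, 5, 6, 4]
threshold = tune_threshold(history)
```

Recomputing the threshold periodically lets it track gradual changes in legitimate traffic instead of drifting out of date.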

Behavioral and Anomaly Detection

Static thresholds are not always sufficient. Behavioral detection analyzes historical data to understand what normal activity looks like.

If a user typically downloads a few megabytes of data per day but suddenly exports gigabytes of data, the system may flag this behavior.

if current_download > (average_download * 10):
    alert("Unusual data access pattern detected")

Behavioral analysis allows systems to detect sophisticated attacks that evade simple rules.

Correlation of Multiple Signals

Sophisticated attacks often produce multiple weak signals rather than one obvious indicator.

Correlation combines these signals to detect complex threats.

For example, consider the following sequence:

  1. Multiple failed login attempts
  2. Successful login from new IP address
  3. Administrative privilege change

Individually, each event might seem harmless. Together, they suggest account compromise.

A correlation rule might analyze events across multiple logs.

if failed_logins > 20 and new_ip_login and privilege_change:
    alert("Potential account takeover detected")

Correlation significantly improves detection accuracy.

Real-Time vs Delayed Alerting

Certain threats require instant response.

A brute-force attack against authentication endpoints may require immediate blocking of the source IP.

if failed_login_attempts > 100:
    block_ip(source_ip)

Unauthorized administrative actions may also require real-time intervention.

if event["event_type"] == "admin_role_granted":
    alert("Unauthorized administrative privilege assignment")

Immediate alerts are designed to stop attacks before damage occurs.

Investigative Alerts

Some alerts are better suited for investigation rather than immediate response.

For example, unusual access patterns might require analysis before action is taken.

if login_country != last_known_country:
    alert("User login from new geographic region")

Such alerts inform analysts who then evaluate the context.

Incident Response Integration

Alerting systems often integrate with incident response platforms.

When an alert triggers, it may automatically create a ticket.

def create_incident(alert):
    ticket = {
        "title": alert["alert_type"],
        "severity": alert["severity"],
        "timestamp": alert["timestamp"]
    }
    send_to_incident_system(ticket)

Security orchestration tools may also automate responses.

if alert["severity"] == "critical":
    disable_user_account(alert["user"])

These integrations accelerate response times.

Building an Effective Alerting Workflow

Alert routing ensures that notifications reach the appropriate teams.

Operational alerts may go to infrastructure teams, while security alerts go to security operations.

A routing rule might look like the following configuration.

alerts:
  - type: security
    route: security_team
  - type: infrastructure
    route: ops_team

Routing prevents unnecessary interruptions and ensures that specialists receive relevant alerts.

Escalation Paths

Some alerts require escalation if not addressed quickly.

An alert may first notify an on-call engineer. If unresolved after a defined period, it escalates to a broader response team.

if alert_not_acknowledged(minutes=10):
    escalate_to_security_manager(alert)

Escalation policies ensure that serious incidents receive attention.

Incident Triage and Investigation

Once an alert is received, investigators must determine whether the alert represents a genuine threat.

Triage typically involves examining related logs.

def investigate_alert(alert_id):
    related_events = query_logs(alert_id)
    return analyze(related_events)

Investigators look for supporting evidence such as additional suspicious activity or known attack indicators.

If the alert is confirmed as a security incident, incident response procedures begin.

At this stage, logs become crucial again, allowing investigators to reconstruct the timeline of the attack.
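Timeline reconstruction itself can be sketched simply: merge events from every relevant log source and sort them by timestamp. The function below assumes the structured schema used throughout this chapter, where every event carries an ISO 8601 UTC timestamp.

```python
def reconstruct_timeline(*event_sources):
    # Merge events from several log sources into one chronologically ordered timeline
    merged = [event for source in event_sources for event in source]
    # ISO 8601 UTC timestamps sort correctly as plain strings
    return sorted(merged, key=lambda e: e["timestamp"])
```

Fed with authentication logs, network telemetry, and audit events, the merged timeline shows investigators what the attacker did, in order, across system boundaries.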


Alerting transforms passive logging systems into active defense mechanisms. When detection rules are carefully engineered, alerts become the early warning system that reveals threats before attackers achieve their objectives.