Eyes Everywhere: Secure Logging and Alerting for Modern Systems – Part I


Modern software systems generate an enormous stream of operational data. Every authentication attempt, database query, API request, container deployment, and network connection leaves a digital trace somewhere inside the infrastructure. Historically, developers treated these traces primarily as troubleshooting aids—temporary clues to diagnose bugs when something went wrong. In contemporary security engineering, however, logs serve a far more profound purpose.

Logs are now a primary sensor layer for detecting attacks.


The Role of Logging in Modern Security

In early software systems, logging existed primarily to support debugging. Developers would emit messages describing program execution so they could understand failures during development or diagnose issues in production.

A typical early logging statement in an application might look like the following example in Python:

import logging

logging.basicConfig(level=logging.INFO)

def process_payment(user_id, amount):
    logging.info(f"Processing payment for user {user_id}")
    # Payment logic
    logging.info("Payment completed successfully")

The purpose of these messages was operational clarity. If a bug occurred, developers could read the logs to understand where the program failed.

However, as systems became distributed, internet-facing, and heavily targeted by attackers, the meaning of logs changed. Logs became not just operational telemetry but security signals.

Consider a login endpoint in a modern web application. Each authentication attempt tells a story about user behavior. A single failed login may be harmless. Hundreds of failures from the same IP address could indicate a brute-force attack.

A secure system logs authentication attempts with sufficient detail to allow analysis.

const logger = require("pino")();

function login(username, password, ipAddress) {
  logger.info({
    event: "authentication_attempt",
    username: username,
    source_ip: ipAddress,
    timestamp: new Date().toISOString()
  });

  if (authenticateUser(username, password)) {
    logger.info({
      event: "authentication_success",
      username: username,
      source_ip: ipAddress
    });

    return generateToken(username);
  }

  logger.warn({
    event: "authentication_failure",
    username: username,
    source_ip: ipAddress
  });

  throw new Error("Invalid credentials");
}

This logging pattern transforms a simple debug trace into structured telemetry that security monitoring tools can analyze.

Attackers inevitably interact with systems. Every probe, exploit attempt, or unauthorized access generates signals. Logging captures those signals.

Logs as Evidence

When a security incident occurs, logs become the primary source of truth.

They provide a chronological record of system behavior that allows investigators to reconstruct what happened. Without logs, incident response becomes guesswork.

Consider a scenario where an attacker gains access to an administrative account. If detailed logs exist, investigators can determine:

  • when the account was accessed
  • from which IP address
  • which actions were performed
  • which resources were accessed

A well-designed audit log might capture this activity as structured data.

{
  "event": "admin_privilege_used",
  "user_id": "admin_42",
  "action": "delete_user_account",
  "target_user": "user_9812",
  "timestamp": "2026-04-17T14:23:41Z",
  "source_ip": "185.91.203.44",
  "request_id": "req-3f92c2"
}

During forensic investigation, this information allows analysts to trace the chain of events.

Logs also serve an important role in compliance and legal accountability. Many regulatory frameworks require organizations to maintain detailed audit trails. Standards such as PCI DSS, ISO 27001, HIPAA, and SOC 2 mandate logging of security-relevant activity.

A system that cannot produce logs explaining who accessed sensitive data may fail compliance audits and expose organizations to legal liability.

Security Visibility and Attack Detection

Modern attacks rarely involve a single action. Instead, attackers move through several stages:

  1. reconnaissance
  2. credential compromise
  3. privilege escalation
  4. lateral movement
  5. data exfiltration

Each stage generates observable events.

For example, a privilege escalation attempt might involve modifying a user’s role. A secure system logs such changes explicitly.

_logger.LogWarning(
    "Privilege escalation attempt detected. User {UserId} attempted to assign role {Role} to {TargetUser}",
    currentUserId,
    role,
    targetUserId
);

These events allow security monitoring systems to detect suspicious activity in real time.

Logs also reveal unusual system behavior such as abnormal access patterns, excessive API usage, or connections from suspicious geographic regions.

Without logging, such events remain invisible.

With proper logging, they become detectable signals that trigger investigation.
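As a sketch of how structured logs become detectable signals, the following Python snippet counts authentication failures per source IP and flags addresses that cross a threshold. The event names mirror the earlier examples; the threshold of five attempts is purely illustrative.

```python
import json
from collections import Counter

def detect_bruteforce(log_lines, threshold=5):
    """Flag source IPs whose failure count reaches the threshold."""
    failures = Counter()
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("event") == "authentication_failure":
            failures[entry["source_ip"]] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

# Six failures from one IP plus one unrelated success.
logs = [
    json.dumps({"event": "authentication_failure", "source_ip": "203.0.113.7"})
    for _ in range(6)
] + [json.dumps({"event": "authentication_success", "source_ip": "198.51.100.2"})]

print(detect_bruteforce(logs))  # ['203.0.113.7']
```

Real detection pipelines add time windows and allow-lists, but the core idea is the same: structured fields make aggregation trivial.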

What Should Be Logged in a Secure System

Security-focused logging begins with a fundamental design question: what events must the system record?

Not every event deserves a log entry. Logging must focus on security-relevant activities.

Authentication and Identity Events

Identity events form the backbone of most security investigations. Since compromised credentials remain one of the most common attack vectors, authentication activity must be logged comprehensively.

A secure authentication system records login attempts, whether successful or failed.

import json
from datetime import datetime, timezone

def log_login_event(username, success, ip):
    log_entry = {
        "event_type": "login_attempt",
        "username": username,
        "success": success,
        "source_ip": ip,
        "timestamp": datetime.now(timezone.utc).isoformat()
    }

    print(json.dumps(log_entry))

Password reset requests also deserve careful monitoring because attackers frequently exploit password reset flows.

{
  "event": "password_reset_requested",
  "user": "alice",
  "ip": "203.0.113.21",
  "timestamp": "2026-04-17T15:00:10Z"
}

Multi-factor authentication challenges and token issuance events must also be recorded. These logs reveal whether attackers are attempting to bypass authentication mechanisms.

Authorization and Privilege Changes

Authentication answers the question of who the user is. Authorization determines what they can do.

Privilege changes therefore represent high-risk events that must always be logged.

Consider a system where administrators assign roles to users.

public void assignRole(String adminUser, String targetUser, String role) {
    logger.info("ROLE_ASSIGNMENT admin={} target={} role={}",
        adminUser,
        targetUser,
        role
    );

    roleService.assignRole(targetUser, role);
}

In many security incidents, attackers escalate privileges before executing destructive actions. If these events are logged, the escalation step becomes visible.

Data Access and Sensitive Operations

Data access events often reveal the true objective of an attacker.

Organizations must log operations involving sensitive information, including database queries, file downloads, and export operations.

For example, a file download event might produce the following structured log.

{
  "event": "file_download",
  "user_id": "user_8821",
  "file_name": "customer_database.csv",
  "timestamp": "2026-04-17T16:21:09Z",
  "ip": "198.51.100.45"
}

Security teams can use this information to detect unusual access patterns, such as a user suddenly downloading large volumes of sensitive data.
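A minimal detection sketch in Python, assuming download events shaped like the entry above; the per-user limit of ten files is a hypothetical placeholder, not a recommendation.

```python
from collections import Counter

def flag_bulk_downloaders(events, limit=10):
    """Flag users whose download count exceeds the per-window limit."""
    counts = Counter(e["user_id"] for e in events if e["event"] == "file_download")
    return sorted(user for user, n in counts.items() if n > limit)

# Twelve downloads by one user within a single window trips the flag.
events = [{"event": "file_download", "user_id": "user_8821"} for _ in range(12)]
print(flag_bulk_downloaders(events))  # ['user_8821']
```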

Application Behavior and Business Logic Events

Many attacks exploit business logic rather than technical vulnerabilities.

For instance, an attacker may attempt to manipulate financial transactions or modify account settings.

Logging such events allows systems to detect anomalies.

log.Printf(
    "ORDER_CREATED user=%s order_id=%s total=%.2f",
    userID,
    orderID,
    totalAmount,
)

These business-level logs provide insight into actions that may indicate fraud or abuse.

Infrastructure and Platform Logs

Application logs alone are insufficient. Infrastructure events often reveal the earliest signs of compromise.

Operating systems generate logs when processes start, services stop, or users log into machines.

On Linux systems, authentication activity appears in the system authentication log.

Failed password for invalid user admin from 192.168.1.45 port 52234 ssh2

Container platforms such as Kubernetes generate additional security telemetry.

kubectl logs -n kube-system kube-apiserver-<node-name>

Network infrastructure also produces logs showing connection attempts.

{
  "event": "network_connection_attempt",
  "source_ip": "10.12.8.54",
  "destination_port": 22,
  "protocol": "TCP",
  "timestamp": "2026-04-17T17:10:03Z"
}

These infrastructure-level signals often provide the first evidence of scanning or intrusion attempts.

Logging Design Principles for Secure Systems

Logging becomes useful for security only when it is structured, consistent, and context-rich.

Structured Logging

Traditional logs often appear as free-form text.

User John logged in from 10.2.1.4

Such logs are easy for humans to read but difficult for machines to analyze.

Structured logging uses machine-readable formats such as JSON.

import json

log = {
    "event": "user_login",
    "username": "john",
    "ip": "10.2.1.4"
}

print(json.dumps(log))

Structured logs enable automated detection systems to search, filter, and correlate events.

Consistent Event Schema

Logs must follow a consistent schema so that security tools can analyze them reliably.

A typical event schema may include standardized fields.

{
  "timestamp": "2026-04-17T17:22:44Z",
  "event_type": "api_request",
  "user_id": "user_123",
  "request_id": "req_98721",
  "source_ip": "192.0.2.14",
  "service": "payment-api"
}

Correlation identifiers such as request IDs are particularly valuable in distributed systems. They allow investigators to trace a single request across multiple services.
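To illustrate, here is a small Python sketch in which two services tag their entries with the same correlation ID. The X-Request-ID header mentioned in the comment is a common convention, not a requirement, and the helper function is hypothetical.

```python
import json
import uuid

def make_event(event_type, request_id, **fields):
    """Build a structured entry that carries the correlation ID."""
    return {"event_type": event_type, "request_id": request_id, **fields}

# The ID is generated once at the edge and propagated downstream,
# for example via an X-Request-ID header.
request_id = str(uuid.uuid4())
gateway_event = make_event("api_request", request_id, service="payment-api")
db_event = make_event("db_query", request_id, service="payment-db")

# Both entries share the same request_id, so an investigator can
# stitch the request together across services.
print(json.dumps(gateway_event))
print(json.dumps(db_event))
```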

Context-Rich Logging

A useful log entry answers several essential questions.

  • Who performed the action?
  • What action occurred?
  • Where did it originate?
  • When did it happen?
  • How was it performed?

A context-rich event might look like the following example.

{
  "event": "account_update",
  "user": "user_842",
  "changed_field": "email_address",
  "old_value": "old@example.com",
  "new_value": "new@example.com",
  "ip": "203.0.113.44",
  "timestamp": "2026-04-17T17:45:22Z"
}

Without sufficient context, logs cannot support meaningful investigation.

Avoiding Excessive Logging

Logging every possible event may seem attractive, but excessive logging introduces performance overhead and creates overwhelming volumes of data.

A poorly designed logging system may generate millions of entries per minute, making analysis difficult.

The goal is not to log everything but to log the events that matter for security.

Well-designed logging focuses on high-value events that reveal authentication activity, privilege changes, and sensitive data access.

Security Risks of Poor Logging Practices

Logging itself can introduce security vulnerabilities if implemented carelessly.

Logging Sensitive Data

One of the most common mistakes is logging confidential information.

Consider a naive authentication implementation.

logger.info("User login attempt", {
  username: username,
  password: password
});

This code logs the password directly, which is extremely dangerous. If logs are compromised, attackers gain access to credentials.

Secure implementations must redact or omit sensitive fields.

logger.info("User login attempt", {
  username: username,
  password: "[REDACTED]"
});

Similarly, logs should never store API keys, authentication tokens, or personal data unnecessarily.
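One way to enforce this consistently is a redaction helper applied before entries are emitted. This Python sketch masks a small, hypothetical set of sensitive keys; a production deny-list would be broader and driven by policy.

```python
# Illustrative deny-list; a real one would be policy-driven.
SENSITIVE_KEYS = {"password", "api_key", "token", "authorization"}

def redact(entry):
    """Return a copy of a log entry with sensitive fields masked."""
    return {
        key: "[REDACTED]" if key.lower() in SENSITIVE_KEYS else value
        for key, value in entry.items()
    }

print(redact({"username": "alice", "password": "hunter2"}))
# {'username': 'alice', 'password': '[REDACTED]'}
```

Centralizing redaction in one helper means a forgotten field is a one-line fix rather than a hunt through every logging call site.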

Log Injection Attacks

Logs may also become targets of attack.

If user-controlled input is written directly into logs, attackers may inject malicious content.

For example:

username=alice
username=attacker\nERROR: system compromised

If not sanitized, this input could corrupt log records or mislead investigators.

Secure logging systems sanitize input before recording it.

def sanitize(value):
    return value.replace("\n", "_").replace("\r", "_")

Missing Audit Trails

Perhaps the most dangerous logging failure is the absence of logs entirely.

If a system performs sensitive actions without recording them, investigators cannot reconstruct events during an incident.

For example, deleting a user account without logging the event removes accountability.

Every critical action must leave an audit trail.

Secure Log Storage and Integrity

Logging security does not end when an event is recorded. Logs themselves must be protected.

Centralized Logging Architecture

Modern systems rarely store logs locally. Instead, they forward logs to centralized aggregation platforms.

Applications often ship logs using collectors such as Fluent Bit, Fluentd, or Logstash. For example, Fluent Bit can tail an application log and forward it to Elasticsearch:

fluent-bit -i tail -p path=/var/log/app.log -o es

Centralization enables correlation of events across multiple systems.

Cloud platforms also provide native logging systems. For example, a service running in a cloud environment may send logs directly to a managed logging platform.

import logging
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
client.setup_logging()

logging.info("Application started")

Centralized logs provide a unified view of system activity.

Tamper Protection

Attackers often attempt to erase logs to hide their tracks.

Secure logging systems protect against tampering through append-only storage and cryptographic verification.

One approach involves hashing each log entry.

import hashlib

def hash_log_entry(entry):
    return hashlib.sha256(entry.encode()).hexdigest()

Each entry can be chained to the previous one, forming a cryptographic log chain similar to a blockchain structure.

If an attacker modifies an entry, the hash chain breaks, revealing the tampering.
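A minimal Python sketch of such a chain, with each entry's digest folded into the next; the all-zeros genesis value is an arbitrary convention.

```python
import hashlib

def chain_logs(entries):
    """Build a hash chain: each digest covers the previous digest plus
    the current entry, so modifying any entry changes every later digest."""
    digests = []
    prev = "0" * 64  # arbitrary genesis value
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        digests.append(prev)
    return digests

original = chain_logs(["login alice", "delete user_9812"])
tampered = chain_logs(["login alice", "delete user_9999"])
print(original[0] == tampered[0])  # True: shared prefix is untouched
print(original[1] != tampered[1])  # True: tampering breaks the chain
```

Production systems pair this with periodic anchoring of the latest digest to external, write-once storage so an attacker cannot simply recompute the whole chain.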

Retention and Compliance

Log retention policies determine how long logs remain stored.

Different regulations impose different requirements. Financial systems may require years of audit history, while operational logs may be retained for shorter periods.

Retention systems must also enforce secure deletion policies to ensure expired data does not remain accessible.

In practice, organizations define retention rules in centralized logging platforms.

For example, a cloud logging system might retain security logs for one year while keeping application logs for thirty days.

Retention policies must balance legal requirements, investigative needs, and storage costs.
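As a sketch of the underlying logic, the snippet below drops entries older than a retention window; real platforms enforce this server-side, and the 365-day window is illustrative.

```python
from datetime import datetime, timedelta, timezone

def purge_expired(entries, retention_days):
    """Keep only entries newer than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return [e for e in entries if datetime.fromisoformat(e["timestamp"]) >= cutoff]

now = datetime.now(timezone.utc)
entries = [
    {"event": "old", "timestamp": (now - timedelta(days=400)).isoformat()},
    {"event": "recent", "timestamp": (now - timedelta(days=5)).isoformat()},
]
print([e["event"] for e in purge_expired(entries, 365)])  # ['recent']
```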


Secure logging transforms software systems into observability platforms capable of detecting and investigating threats. By capturing meaningful events, structuring logs for analysis, and protecting log integrity, organizations create the foundation for the next critical layer of security monitoring: alerting and detection, which transforms raw telemetry into actionable intelligence.

Build and Deploy Anywhere with GPT-5.3-Codex


Software engineering has always evolved alongside its tools. Compilers turned human ideas into executable programs. Integrated development environments improved productivity and debugging. Version control systems enabled collaboration at scale. Continuous delivery pipelines made rapid and reliable deployment possible.

In early 2026, another major step appeared: agentic coding systems capable of participating in the engineering process itself.


One of the most advanced examples of this new class of tools is GPT-5.3-Codex, OpenAI’s latest coding-focused model designed to reason across repositories, plan multi-step changes, execute development tasks, and collaborate with engineers across the full software lifecycle.

Unlike traditional autocomplete-style coding assistants, GPT-5.3-Codex is capable of operating across entire workflows: scaffolding projects, editing multiple files, generating diffs, interacting with terminal commands, and assisting in code review or refactoring tasks.

This article walks through a realistic development scenario using Codex from the command line — illustrating how an agentic coding system can participate in building a modern data-driven web application.

Beginning the Journey: Launching Codex

The process begins in a terminal.

The developer installs the Codex CLI and launches it inside a working directory.

npm install -g @openai/codex
mkdir poddata
cd poddata
codex

(Alternatively, Codex can be invoked without installation using npx @openai/codex.)

Once started, the CLI connects to the gpt-5.3-codex model and begins operating within the current workspace.

The Codex sandbox is a controlled execution environment where the Codex agent can safely read files, generate code, and run commands while limiting what it can access on your system. Think of it as a temporary “mini computer” or container that Codex uses to perform coding tasks without risking your machine or infrastructure.

You can change the model and reasoning effort with the /model command.

The developer can then issue a high-level request:

Create a React project that analyses podcast metrics from data/data.csv, using D3 to build several charts.

Instead of producing a single snippet of code, Codex analyses the request in the context of the workspace and begins incrementally constructing the project. If it doesn’t find the specified data.csv file, it creates a sample one.

The terminal displays the actions it performs. When finished, Codex asks permission to install dependencies, then attempts to run the project for verification, and finally presents a summary of what it built. As with any LLM-based tool, results may vary widely between runs.

If the project is not yet under version control, initializing Git is recommended so Codex can produce structured diffs during future iterations:

git init
git add .
git commit -m "Initial scaffold"

You might run into an error complaining about “dubious ownership” of the repository.

This comes from Git’s security feature introduced after the CVE-2022-24765 vulnerability. Git refuses to operate on repositories whose ownership differs from the current user, because this could allow a malicious repo to execute hooks or config under another user.

This happens frequently when:

  • using containers / sandboxes
  • using WSL
  • using Docker volumes
  • running tools like Codex CLI
  • accessing repos created by another user or root
  • mounting drives from another OS

The solution is in the error message itself: mark the repository as safe by running git config --global --add safe.directory with the repository path.

Initializing a Git repository is a simple step that dramatically improves the agent’s ability to reason about code changes.

Structured Diffs and Multi-File Refactoring

A major strength of agentic coding systems is the ability to modify multiple files simultaneously.

For example, suppose the developer requests:

Add zoom and pan behaviour to the charts.

Codex analyses the existing chart components and introduces a reusable utility.

Codex then presents the resulting diff along with a final summary of the change.

Visual Editing and Multimodal Context

When Codex is used within environments that support multimodal inputs — such as IDE integrations or visual interfaces — it can incorporate annotated screenshots into its reasoning process.

For example, imagine the dashboard contains an introductory paragraph that the developer wants removed. An annotated screenshot pointing to the text may produce a patch like this:

--- a/src/App.jsx
+++ b/src/App.jsx
@@ -8,9 +8,6 @@ export default function App() {
     <div className="container">
       <h1>Podcast Metrics Dashboard</h1>
 
-      <p className="intro">
-        This dashboard explores episode performance and guest behavior.
-      </p>
       <Charts />
 
     </div>

This workflow illustrates how modern coding agents can bridge visual design feedback and source code modifications.

Feature Development: Search by Guest

To explore a full feature lifecycle, the developer asks Codex to implement a guest search filter, and Codex goes to work.

Once implemented, the system rebuilds and the dashboard now filters charts dynamically based on the selected guest. In real development environments, Codex can assist with generating diffs, running builds, and suggesting improvements such as input debouncing for large datasets.

Parallel Implementations and Experimentation

In cloud-based development pipelines, agentic systems can be used to generate multiple candidate implementations of a feature.

For example, when adding tooltip interactions to charts, two implementations might be explored: a simple static tooltip component and a dynamic cursor-tracking tooltip approach using a useTooltip hook. Developers can evaluate these alternatives in preview environments before selecting the preferred implementation.

This workflow transforms feature development from a single-attempt process into iterative experimentation.

Code Review and Collaboration

Agentic models can also assist during code review. When examining a pull request, Codex may analyze diffs and flag potential issues.

For example:

The SVG overlay used for zoom interaction appears above the chart elements and may intercept pointer events, preventing hover detection. Consider adjusting element ordering or pointer-events settings.

These types of observations mirror issues often caught during human code reviews. The difference is that the analysis can occur immediately after changes are generated, helping developers identify problems earlier in the workflow.

The Emergence of an Engineering Partner

Across the scenarios described in this article — scaffolding projects, generating visualization components, performing multi-file refactoring, integrating UI feedback, and assisting with review — one theme becomes clear: modern coding systems like GPT-5.3-Codex do not simply generate snippets of code. They participate in the engineering process itself.

The developer remains the architect and decision-maker, but the agent becomes a powerful collaborator capable of analysing repository context, generating structured diffs, coordinating multi-file changes, assisting with debugging and review, and accelerating experimentation.

For many years, AI coding tools were judged primarily by the quality of individual code suggestions. Today the bar is higher.

The new question is not:

Can an AI write code?

The real question is:

Can it participate meaningfully in the engineering workflow?

GPT-5.3-Codex represents a step toward that future. By combining reasoning, repository awareness, and tool interaction, it moves beyond simple autocomplete and toward a model that can collaborate with developers throughout the lifecycle of software creation.

The result is not automation replacing engineers — but a new kind of human-agent engineering partnership.