Today, CISOs and security teams face a rapidly growing volume of logs from a variety of sources, all arriving in different formats. They write and maintain detection rules, build pipelines, and investigate threats across multiple environments and applications. Efficiently maintaining their security posture across multiple products and data formats has become increasingly challenging. Spending time normalizing data is especially frustrating for these teams, who are often smaller, may lack vendor-specific expertise, and operate on tighter budgets.
To address these security data management challenges, over 120 leading organizations collaborated to develop the Open Cybersecurity Schema Framework (OCSF). OCSF is an open source, vendor-neutral schema designed to standardize event formats for security data, streamlining migrations between platforms and improving cross-tool interoperability. Launched in 2022, OCSF establishes a common taxonomy that simplifies the correlation of Tactics, Techniques, and Procedures (TTPs) and enables modular schemas. This improves how teams stop bad actors, derive actionable threat insights, and identify Indicators of Compromise (IoCs).
Datadog Observability Pipelines enables you to aggregate, process, and route logs to multiple destinations, giving organizations control over log volume and data flow so they can enrich logs, direct traffic to different destinations, extract metrics, scan for sensitive data, and more. Now, Observability Pipelines supports OCSF transformation on stream, enabling you to send remapped logs to your preferred security destinations.
In this post, we’ll cover how Observability Pipelines enables you to easily remap your logs to OCSF and standardize your security data.
Remap logs to OCSF within Observability Pipelines and send them to your desired security destinations or data lakes
Logs come from various sources and applications, and teams enrich, transform, and route them to different destinations based on their use cases and budgets. For security teams, these logs often arrive at their security solutions in many different formats, and those teams need high-quality, standardized logs with complete coverage as quickly as possible to protect against threats. Teams spend critical time and effort normalizing data before they're able to detect and investigate attacks. In-house data standardization can be hard to maintain and requires constant attention to ensure that data translation between security platforms is scalable and consistent.
Using Datadog Observability Pipelines to transform logs into OCSF format can help you standardize your security data on stream to support your taxonomy requirements and send it to security vendors such as Splunk, Datadog Cloud SIEM, Amazon Security Lake, Google SecOps (Chronicle), Microsoft Sentinel, SentinelOne, and CrowdStrike.
Once you select a supported log type, Observability Pipelines automatically remaps those logs to OCSF format from sources including AWS, Google, Microsoft, Palo Alto Networks, Okta, GitHub, and more. Routing logs to Datadog Cloud SIEM provides intuitive, graph-based visualizations to surface actionable security insights across your cloud environments and includes Content Packs with out-of-the-box resources for popular security integrations such as AWS CloudTrail, Okta, and Microsoft 365.
Mapping logs is a manual, complex process that diverts security teams' focus from threat detection and remediation; Observability Pipelines instead handles OCSF transformation on stream and enables you to send your logs anywhere. Because Observability Pipelines acts as a universal log forwarder, you don't have to worry about vendor lock-in or rely on your security tool's ability to manage OCSF transformation.
For example, let’s say you’re a CISO at a financial services company specializing in insurance or healthcare. Your company uses Datadog for DevOps troubleshooting and a different solution for security, such as Splunk or Amazon Security Lake. Managing the split of the logs across different products and mapping each log source to OCSF is time consuming, costly, and can result in taxonomy inconsistencies. With Observability Pipelines, you can transform your data to OCSF format before it leaves your environment, route your security logs to your desired destination, and keep your DevOps logs flowing to Datadog. Logs sent to Amazon Security Lake are automatically encoded in Parquet to meet AWS requirements.
How logs are remapped to OCSF
There are three major components to the OCSF model: Data Types, Attributes, and Arrays; Event Categories and Classes; and Profiles and Extensions.
Data Types, Attributes, and Arrays
- Data Types define the structure for each event class, including strings, numbers, booleans, arrays, and dictionaries.
- Attributes are individual data points such as unique identifiers, timestamps, and values for standardizing objects. The attribute dictionary provides a consistent technical foundation for how attributes are applied across datasets.
- Objects and arrays encapsulate related attributes, with objects acting as organized sets and arrays representing lists of similar items.
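To make these building blocks concrete, here is a minimal Python sketch of an event composed of attributes, an object, and an array. The attribute names echo OCSF conventions, but a class's exact fields are defined by the schema's attribute dictionary, so treat these as illustrative.

```python
# Illustrative event showing the three OCSF building blocks.
event = {
    # Attributes: individual typed data points (number, string, boolean, ...)
    "time": 1731177790367,
    "message": "User login to Okta",
    # Object: an organized set of related attributes
    "actor": {
        "user": {"name": "Laser Lemon", "uid": "gj388103mlfkae83"},
    },
    # Array: a list of similar items
    "observables": [
        {"name": "src_endpoint.ip", "value": "172.16.254.1"},
    ],
}
```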
Event Categories and Classes
- Categories are high-level groupings of classes that segment an event’s domain and make querying and reporting manageable. For example, an Okta User Account Lock log would map to the IAM (3) category.
- Classes are specific sets of attributes that determine an activity type. Each class has unique names and identifiers that correspond to specific log types. For example, a Palo Alto Networks Firewall Traffic log would map to the specific Network Activity (4001) event class.
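The two examples above can be sketched as a simple lookup from a raw log type to its OCSF category and class identifiers. The log-type keys below are hypothetical labels, and the map is limited to the IDs named in this post; a real pipeline would cover many more classes.

```python
# Hypothetical lookup from raw log type to OCSF category/class IDs,
# restricted to the examples discussed in this post.
OCSF_CLASS_MAP = {
    "okta:user.session.start": {
        "category_uid": 3,          # Identity & Access Management
        "class_uid": 3002,
        "class_name": "Authentication",
    },
    "pan:firewall.traffic": {
        "category_uid": 4,          # Network Activity
        "class_uid": 4001,
        "class_name": "Network Activity",
    },
}

def classify(log_type: str) -> dict:
    """Return the OCSF category and class for a known log type."""
    return OCSF_CLASS_MAP[log_type]

print(classify("pan:firewall.traffic")["class_uid"])  # 4001
```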
Profiles and Extensions
- Profiles are overlays that add specific attributes to event classes and objects. Profiles give granular details such as malware detection for endpoint detection tools. For example, the Cloud profile adds cloud and api attributes to each applicable schema.
- Extensions enable customization: you can create new schemas or modify existing ones for specific security tool needs.
Using a sample Okta User Session Start log and the OCSF schema for Authentication (3002) class, we map the raw log’s attributes into the required, recommended, and optional segments of OCSF. In addition to remapping existing values, OCSF introduces its own enriched attributes into the final log.
{
"actor": {
"id": "gj388103mlfkae83",
"type": "User",
"alternateId": "Laser.lemon@myspace.com",
"displayName": "Laser Lemon",
"detailEntry": null
},
"client": {
"userAgent": {
"rawUserAgent": "Mozilla/5.0 (Linux; Android 12; Pixel 6 Pro Build/SQ3A.220705.003) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Mobile Safari/537.36",
"os": "Android 12",
"browser": "CHROME"
},
"zone": "us-west-2",
"device": "Mobile Phone",
"id": "PX6PRO998877",
"ipAddress": "172.16.254.1",
"geographicalContext": {
"city": "San Francisco",
"state": "California",
"country": "United States",
"postalCode": "94109",
"geolocation": {
"lat": 37.792,
"lon": -122.481
}
}
},
"device": {
"id": "aK89jLmX4zTqPvYcW2b3",
"name": "PixelBookGo",
"os_platform": "ChromeOS",
"os_version": "118.0.5993.170",
"managed": true,
"registered": false,
"device_integrator": "Google Admin Console",
"disk_encryption_type": "FILE_BASED_ENCRYPTION",
"screen_lock_type": "PASSWORD",
"jailbreak": false,
"secure_hardware_present": false
},
"authenticationContext": {
"authenticationProvider": null,
"credentialProvider": null,
"credentialType": null,
"issuer": null,
"interface": null,
"authenticationStep": 0,
"rootSessionId": "RtSes98yG5opLMkVnZ347WxE",
"externalSessionId": "ExSes45QrTy9ZNkLpMwG72oX"
},
"displayMessage": "User login to Okta",
"eventType": "user.session.start",
"outcome": {
"result": "SUCCESS",
"reason": null
},
"published": "2024-11-09T18:43:10.367Z",
"securityContext": {
"asNumber": 539846,
"asOrg": "ASN 1023",
"isp": "at&t",
"domain": null,
"isProxy": false
},
"severity": "INFO",
"debugContext": {
"debugData": {
"requestId": "ca709c1fe22d52a84cdcbf7392d8ab01",
"requestUri": "/idp/idx/authenticators/poll",
"url": "/idp/idx/authenticators/poll"
}
},
"legacyEventType": "core.user_auth.login_success",
"transaction": {
"type": "WEB",
"id": "bc709c1fe22d52a84cdcbf7392d8ac02",
"detail": null
},
"uuid": "a23fd2d0-597c-12ef-8549-5b5295bf8d8b",
"version": 1,
"request": {
"ipChain": [
{
"ip": "172.16.254.1",
"geographicalContext": {
"city": "San Francisco",
"state": "California",
"country": "United States",
"postalCode": "94109",
"geolocation": {
"lat": 37.792,
"lon": -122.481
}
},
"version": "V4",
"source": null
}
]
},
"target": [
{
"id": "pfvbxqlyj8RZfhJ3k3d9",
"type": "AuthenticatorEnrollment",
"alternateId": "unknown",
"displayName": "Okta Verify",
"detailEntry": null
},
{
"id": "0oaqw7lef9gUvvqInq2t5",
"type": "AppInstance",
"alternateId": "Okta Admin Console",
"displayName": "Okta Admin Console",
"detailEntry": null
}
]
}
OCSF-specific fields, such as the class name (authentication), severity ID (1), and class ID (3002), are appended to the transformed log. Raw log fields such as client.ipAddress and published are mapped using OCSF's attribute dictionary to src_endpoint.ip (recommended) and time (required), respectively. Fields that don't align with the schema, such as legacyEventType, are placed into the unmapped section. Additional attributes, such as the actor and device objects, are injected into the log from the Host profile during the mapping process.
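The mapping just described can be sketched as a small transformation function. This is a simplified illustration of the general technique, not the actual Observability Pipelines processor: it moves a few of the Okta fields named above into their OCSF Authentication (3002) attributes and parks unmatched fields under unmapped. The real schema defines many more required and recommended attributes.

```python
def to_ocsf_auth(raw: dict) -> dict:
    """Remap a raw Okta User Session Start log to a partial OCSF
    Authentication (3002) event (illustrative subset of the schema)."""
    ocsf = {
        "class_name": "Authentication",
        "class_uid": 3002,
        "category_uid": 3,                      # IAM category
        "severity_id": 1,                       # Informational
        "time": raw.get("published"),           # required attribute
        "src_endpoint": {                       # recommended attribute
            "ip": raw.get("client", {}).get("ipAddress"),
        },
        "unmapped": {},
    }
    # Fields with no schema counterpart land in the unmapped section
    for key in ("legacyEventType",):
        if key in raw:
            ocsf["unmapped"][key] = raw[key]
    return ocsf

sample = {
    "published": "2024-11-09T18:43:10.367Z",
    "client": {"ipAddress": "172.16.254.1"},
    "legacyEventType": "core.user_auth.login_success",
}
print(to_ocsf_auth(sample)["src_endpoint"]["ip"])  # 172.16.254.1
```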
Route logs to security vendors in OCSF format with Observability Pipelines
Datadog Observability Pipelines can transform your logs into OCSF format, supporting your taxonomy requirements and security strategies with enhanced analytics capabilities, improved threat detection, and no vendor lock-in. On-stream processing for popular log sources lets you take advantage of OCSF without spending costly time normalizing data.
Routing logs in OCSF format is available by request in Preview to Observability Pipelines users. Observability Pipelines also enables Grok parsing with over 150 preconfigured parsing rules and custom parsing capabilities, GeoIP enrichment, JSON parsing, and more. Additionally, you can benefit from starting with Datadog Cloud SIEM, built on the most advanced log management solution, to elevate your organization’s threat detection and investigation for dynamic, cloud-scale environments.
Datadog Observability Pipelines can be used without requiring any subscription to Datadog Log Management or Datadog Cloud SIEM. For more information, visit our documentation. If you’re new to Datadog, you can sign up for a 14-day free trial.