Formats and Protocols
This compendium of IT and Information Security formats and protocols is summarised from the original sources for information only. Readers must consult the authoritative sources for the actual guidelines.
Open automation formats and protocols are essential tools that facilitate interoperability and integration among various systems, devices, and software applications. Designed to enable standardised communication, these formats and protocols provide a common language for systems in industries such as cyber security, telecommunications, and IT, allowing for seamless automation processes across diverse platforms. Those most relevant to cyber security are described in the subsequent paragraphs, along with their purpose and usage.
STIX
Overview: To share cyber threat intelligence (CTI), the underlying platform must convert the data into a form that is ready to be sent over the network. To that end, a process is required that converts the data object (a combination of code and data) into a form in which the state of the object is saved in a transmittable format. This is known as Serialization. Once the data is received at the destination in serialized form, it is deserialized before further processing. STIX stands for Structured Threat Information Expression. It is an open source “language and serialization format used to exchange” CTI; STIX 1.x was XML based, while the current STIX 2.x versions serialize to JSON. STIX is a standardized language for representing and sharing CTI in a consistent, machine-readable format. Developed by MITRE and now managed by OASIS, STIX enables organisations to share detailed threat information, including attack patterns, tactics, techniques, and procedures (TTPs), indicators of compromise (IOCs), and threat actor profiles. The structured nature of STIX allows cybersecurity tools to process threat data automatically, enhancing detection, analysis, and response capabilities across systems.
Purpose: Leveraging STIX, organisations can collaborate effectively while exchanging CTI because of the consistency and machine readability that STIX provides. Consequently, security communities benefit as anticipation of attacks improves and response mechanisms become more agile. Further, its design augments capabilities such as “collaborative threat analysis, automated threat exchange, automated detection and response”.
Usage: At its core architectural layer, STIX contains “observables” that it refers to as “stateful properties or measurable events” pertaining to the “operation of computers and networks”. For example, an observable could be the stopping of a service, the reboot of a system or the establishment of a connection. Storage of these “observables” is done in XML using the “CybOX language, another MITRE project”. Further, as part of the framework, these “observables can be linked to indicators, incidents, TTPs, specific threat actors, adversary campaigns, specific targets, data markings, and courses of action”, so that STIX evolves into a “complete threat intelligence management system”.
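As an illustration of the machine-readable representation, the following sketch builds a minimal STIX 2.1 indicator and wraps it in a bundle for exchange. STIX 2.x serializes to JSON (the XML/CybOX representation belongs to STIX 1.x), and the identifiers, pattern and name used here are invented for the example.

import json
import uuid
from datetime import datetime, timezone

# Illustrative STIX 2.1 indicator for a (made-up) suspected C2 IP address.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected C2 IP address",
    "indicator_types": ["malicious-activity"],
    "pattern": "[ipv4-addr:value = '198.51.100.23']",
    "pattern_type": "stix",
    "valid_from": now,
}

# A bundle groups STIX objects for exchange over a transport such as TAXII.
bundle = {"type": "bundle", "id": f"bundle--{uuid.uuid4()}", "objects": [indicator]}

print(json.dumps(bundle, indent=2))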
TAXII
Overview: TAXII stands for Trusted Automated Exchange of Intelligence Information. It is an “application protocol for exchanging CTI over HTTPS”, defining “a RESTful API (a set of services and message exchanges) and a set of requirements for TAXII Clients and Servers”. An API, or Application Programming Interface, plays the role of a mediator between the Client and the Web Service. The advantage here is that information sharing is done while “maintaining security, control, and authentication, determining who gets access to what”. A RESTful (or REST, representational state transfer) API “conforms to the constraints of REST architectural style and allows for interaction with RESTful web services”. For an API to be a RESTful API, it needs to comply with certain criteria like “Client-server architecture” with HTTP-based requests, “Stateless Client-Server communication”, “Cacheable data” etc.
Purpose: It enables sharing CTI “across product, service and organisational boundaries” and acts as a “transport vehicle for STIX structured threat information”.
Usage: The two primary services TAXII provides are: (a) Collection and (b) Channel. A TAXII server hosts a repository of CTI objects on behalf of its users, which consumers can access; a Collection is the interface to this repository. A Channel can be viewed as a many-to-many interface that allows a producer to push data to many consumers and a consumer to receive data from many producers.
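To make the Collection interface concrete, the sketch below shows a TAXII 2.1 client polling a Collection for STIX objects over HTTPS. The server URL, collection ID and credentials are placeholders; the media type and the objects endpoint follow the TAXII 2.1 specification.

import requests  # third-party HTTP client library

# Placeholder TAXII 2.1 API root and collection; substitute real values.
API_ROOT = "https://taxii.example.org/api1"
COLLECTION_ID = "3f9e2c1a-0000-0000-0000-000000000000"

headers = {"Accept": "application/taxii+json;version=2.1"}

# Retrieve objects published to the collection (the Collection service).
resp = requests.get(
    f"{API_ROOT}/collections/{COLLECTION_ID}/objects/",
    headers=headers,
    auth=("username", "password"),  # basic authentication, for illustration
    timeout=30,
)
resp.raise_for_status()

envelope = resp.json()  # TAXII 2.1 returns an envelope of STIX objects
for obj in envelope.get("objects", []):
    print(obj["type"], obj["id"])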
Traffic Light Protocol (TLP)
Information Sharing
The critical success factors for cyber resilience and sustenance of a digital ecosystem are: a) The speed and quality of detection of compromise and cyber-attacks within the entities. b) The rapidity by which information and guidance is disseminated to all the actual and potential targets. c) The speed and effectiveness of response within entities as well as across sectors and cross-sectors.
Conventional paper-based and email-based methods of sharing are not capable of achieving the required speed in information sharing amongst all the stakeholders and participants, since they are less amenable to automated processing. The Traffic Light Protocol is a standardised mechanism for sharing information. TLP version 2.0, standardised by FIRST, is the current version. The Traffic Light Protocol (TLP) was created to facilitate greater sharing of potentially sensitive information and more effective collaboration. Information sharing happens from an information source towards one or more recipients. TLP is a set of four labels used to indicate the sharing boundaries to be applied by the recipients.
TLP definitions
Community: Under TLP, a community is a group who share common goals, practices, and informal trust relationships. A community can be as broad as all cybersecurity practitioners in a country (or in a sector or region).
Organisation: Under TLP, an organisation is a group who share a common affiliation by formal membership and are bound by common policies set by the organisation. An organisation can be as broad as all members of an information sharing organisation, but rarely broader.
Clients: Under TLP, clients are those people or entities that receive cybersecurity services from an organisation. Clients are by default included in TLP:AMBER so that the recipients may share information further downstream in order for clients to take action to protect themselves. For teams with national responsibility this definition includes stakeholders and constituents.
TLP Labels
TLP:RED = For the eyes and ears of individual recipients only, no further disclosure. Sources may use TLP:RED when information cannot be effectively acted upon without significant risk for the privacy, reputation, or operations of the organisations involved. Recipients may therefore not share TLP:RED information with anyone else. In the context of a meeting, for example, TLP:RED information is limited to those present at the meeting.
TLP:AMBER = Limited disclosure, recipients can only spread this on a need-to-know basis within their organisation and its clients. Note that TLP:AMBER+STRICT restricts sharing to the organisation only. Sources may use TLP:AMBER when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organisations involved. Recipients may share TLP:AMBER information with members of their own organisation and its clients, but only on a need-to-know basis to protect their organisation and its clients and prevent further harm. Note: if the source wants to restrict sharing to the organisation only, they must specify TLP:AMBER+STRICT.
TLP:GREEN = Limited disclosure, recipients can spread this within their community. Sources may use TLP:GREEN when information is useful to increase awareness within their wider community. Recipients may share TLP:GREEN information with peers and partner organisations within their community, but not via publicly accessible channels. TLP:GREEN information may not be shared outside of the community. Note: when “community” is not defined, assume the cybersecurity/defence community.
TLP:CLEAR = Recipients can spread this to the world, there is no limit on disclosure. Sources may use TLP:CLEAR when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release. Subject to standard copyright rules, TLP:CLEAR information may be shared without restriction.
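As a simple illustration of how the labels translate into sharing boundaries, the sketch below encodes the widest permitted audience for each TLP 2.0 label as a lookup table. It is a deliberate simplification (it does not, for example, model the need-to-know qualifier or community definitions) and is not part of the TLP standard itself.

# Illustrative mapping of TLP 2.0 labels to the widest permitted audience.
TLP_AUDIENCE = {
    "TLP:RED": "named recipients only, no further disclosure",
    "TLP:AMBER+STRICT": "recipient organisation only, on a need-to-know basis",
    "TLP:AMBER": "recipient organisation and its clients, on a need-to-know basis",
    "TLP:GREEN": "recipient community, but not publicly accessible channels",
    "TLP:CLEAR": "no limit on disclosure",
}

def widest_audience(label: str) -> str:
    """Return the widest audience a recipient may share with for a TLP label."""
    try:
        return TLP_AUDIENCE[label.strip().upper()]
    except KeyError:
        raise ValueError(f"Unknown TLP label: {label}")

print(widest_audience("TLP:AMBER"))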
OSCAL
Overview: In the context of NIST, an ATO or Authorization (to Operate) is defined as “The official management decision given by a senior organisational official to authorize operation of an information system and to explicitly accept the risk to organisational operations (including mission, functions, image, or reputation), organisational assets, individuals, other organisations, and the Nation based on the implementation of an agreed-upon set of security controls”. In essence, there has to be a formal declaration by a designated authority that authorises operation of an IT product, which includes all the caveats around risk. Apart from the inherent complexity in handling cyber threats, addressing ATO requirements coupled with the extensive manual processes in handling risk management increases the challenges of an entity’s information security team. The key challenges in this regard stated by NIST are: - (a) Complexity of Information Technology (b) Omnipresent Vulnerabilities (c) Burdensome nature of Regulatory Frameworks (d) Esoteric nature of Risk Management (e) Accelerated rate of obsolescence of Documentation.
Purpose: The open source Open Security Controls Assessment Language, or OSCAL, was developed by NIST in collaboration with industry to address the concerns around this space. To overcome the challenges around the use of word processing and spreadsheet utilities to handle RMF requirements, improved automation was an imperative. There was a need, therefore, to evolve “a shared language for authorization systems to communicate”.
Usage: OSCAL uses XML, JSON, and YAML formats, providing “machine-readable representations of control catalogs, control baselines, system security plans, and assessment plans and results”. The advantages of OSCAL formats as stated by NIST are: - “Easily access control information from security and privacy control catalogs, Establish and share machine-readable control baselines, Maintain and share actionable, up-to-date information about how controls are implemented in your systems, Automate the monitoring and assessment of your system control implementation effectiveness”.
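For instance, a control catalog published in OSCAL’s JSON format can be traversed with a few lines of code. The sketch below assumes the documented OSCAL catalog layout (a top-level “catalog” object containing “groups” of “controls”, with nested controls for enhancements); the file name is a placeholder.

import json

# Load an OSCAL catalog, e.g. a JSON rendering of a control catalog such as
# NIST SP 800-53. "catalog.json" is a placeholder path.
with open("catalog.json", encoding="utf-8") as fh:
    catalog = json.load(fh)["catalog"]

def walk_controls(controls):
    # Yield (id, title) for each control, including nested enhancements.
    for control in controls or []:
        yield control["id"], control.get("title", "")
        yield from walk_controls(control.get("controls"))

for group in catalog.get("groups", []):
    for control_id, title in walk_controls(group.get("controls")):
        print(f"{control_id}: {title}")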
OpenSCAP
Overview: In any enterprise, the IT systems deployed need to be constantly monitored for vulnerabilities so that remediation measures, in terms of configuration changes, patch updates and / or application of upgrades, are carried out in a timely manner. This minimizes the risk exposure of the assets within, and the enterprise is better guarded against security threats. The Security Content Automation Protocol, or SCAP, is a collection of “a number of open standards that are widely used to enumerate software flaws and configuration issues related to security”. Applications written for security monitoring leverage these standards in order to associate a metric with, or measure, the possible impact of the vulnerabilities within a system. Thus the “SCAP suite of specifications standardize the nomenclature and formats” leveraged by these vulnerability management tools.
Purpose: OpenSCAP is an auditing tool that utilizes security checklist content in a defined format called the Extensible Configuration Checklist Description Format (XCCDF). Combining this with other specifications creates a SCAP-expressed checklist that can be processed by SCAP-validated products.
Audience: IT / IT Security Administrators, Auditors
Usage: The OpenSCAP stack consists of different products that can help with “assessment, measurement, and enforcement of security baselines”, maintaining “flexibility and interoperability” and “reducing the costs of performing security audits”. A few of the tools in the stack include: - (a) OpenSCAP Base: A command-line tool for performing configuration and vulnerability scanning (b) OpenSCAP Daemon: Evaluates the infrastructure’s compliance with SCAP policy (c) SCAP Workbench: Helps create a custom security profile and facilitates remote scanning.
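A typical evaluation run invokes the OpenSCAP Base command-line tool (oscap) against SCAP content. The sketch below wraps such a run in Python; the data-stream path and profile identifier are examples that vary by distribution and by the SCAP Security Guide content installed.

import subprocess

# Placeholder SCAP data stream and XCCDF profile; adjust to the local content.
DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml"
PROFILE = "xccdf_org.ssgproject.content_profile_cis"

# Evaluate the checklist, writing machine-readable results and an HTML report.
cmd = [
    "oscap", "xccdf", "eval",
    "--profile", PROFILE,
    "--results", "results.xml",
    "--report", "report.html",
    DATASTREAM,
]
# Note: oscap returns a non-zero exit code when rules fail, so the return
# code is inspected rather than treated as a fatal error.
completed = subprocess.run(cmd, capture_output=True, text=True)
print(completed.stdout)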
Open Command & Control (OpenC2)
Overview: Attackers are in a position to conduct and coordinate the various stages of an attack with remarkable sophistication and speed. If defenders are to sustain their operations, they need to respond with equal alacrity, planning and conducting defense operations in a timeframe that matches the adversary’s, so as to ensure that the latter fails to attain their objectives. But the penetrability of an attack surface is determined by a range of factors that include “complexity, functionality, connectivity” etc. Further, most enterprises do not have an integrated cyber defense implementation; the systems operate in silos and generally have static configurations. This results in very inefficient cybersecurity operations. Such an environment by default creates a lag in incident response and remediation, which gives an adversary an “asymmetric time advantage” to ensure the success of their attack. In a large enterprise there would be a number of heterogeneous platforms, technologies, devices, and systems. Effective incident response, getting to the root cause, assessing remediation measures, and mitigating them all become complex. In such a context, a few challenges of integrating the various components of cyber defense include: - (a) Prohibitive costs (b) Tailored communication interfaces (c) Extent of tight coupling and cohesiveness of the chosen products for integration (d) Need for a middleware etc. As a consequence, scaling such a solution would depend on the ability to get suitable interfaces written, or on easily available APIs for the different products in use. Paradoxically though, while OEM diversity of solutions does provide an additional layer of security, since an attack on one system does not guarantee an attack on another, achieving cyber defense integration becomes complex and prohibitive in terms of costs. From an attack perspective, when an attacker penetrates a system and installs malware that is leveraged to send remote commands from a C2 (Command & Control) server to infected devices, it is termed a C2 attack. In other words, “command and control infrastructure” is used by attackers to issue “command and control instructions” to their victims. From a cyber-defense viewpoint, an assessment of the C2 approach and methods that an attacker uses could be leveraged to identify and associate attackers and attacks, and to disrupt ongoing malicious activity.
Purpose: The limitations in integrating cyber defense solutions, stated earlier, were the trigger for the development of Open Command and Control (OpenC2). It “is a suite of specifications that enable command and control of cyber defense systems and components”, facilitating “machine to machine communication” through a “common language”. More importantly, it does so at very high speed and in a way “that is agnostic of the overall underlying technologies utilized”. Without this standard, every new device would need manual configuration or individual commands sent in real time, which, apart from increasing the lag in incident response times, would also require human intervention, making the process error prone.
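To make the idea of a common command language concrete, the sketch below constructs an OpenC2 “deny” command such as a producer might send to a packet-filtering actuator. The action and target vocabulary follow the OpenC2 Language Specification, while the network range and command identifier are invented for the example.

import json
import uuid

# Illustrative OpenC2 command: ask an actuator to deny traffic to a
# (made-up) suspicious network range.
command = {
    "action": "deny",
    "target": {"ipv4_net": "198.51.100.0/24"},
    "args": {"response_requested": "complete"},
    "command_id": str(uuid.uuid4()),
}

# The consumer would reply with a response message carrying a status code.
print(json.dumps(command, indent=2))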
Software Bill of Materials (SBOM)
Overview: The entire source code of a software product is referred to as its codebase. It is common practice for developers to integrate into their code third-party components that are licensed and either commercially available or open source. An SBOM, or Software Bill of Materials, is an inventory of all the “open source and third-party components” present within the codebase of a software application / product, providing details of their versions and patch status. Information security teams can use this information to assess security and license related risks. Akin to the manufacturing industry’s Bill of Materials (from which the SBOM is derived), where the distinct codification schemes of the parts provided by OEMs and suppliers allow a defective part to be traced to the correct supplier, here too a trail to the exact third party is established by virtue of the SBOM. There are four stages of an SBOM life cycle. They are: (a) Produce SBOM - Collating the information required for an SBOM at each stage of the software lifecycle from existing tools and solutions. Such solutions normally include: - “intellectual property review”, “license management workflow tools”, “version control systems” etc. (b) Deliver - In the case of open source tools, the SBOM information can be held as metadata, while for compiled versions the SBOM and the software product are bundled together. (c) Update - If the software is updated in any manner, then any changes to the SBOM too need to be recorded. (d) Consume - In order to utilise an SBOM, the SBOM data should be in a machine-readable format; the appropriate format therefore needs to be chosen. The three standard and common formats in use are CycloneDX, SWID and SPDX.
CycloneDX
Purpose: “OWASP CycloneDX is a lightweight” SBOM specification “designed for use in application security contexts and supply chain component analysis”, created in 2017. The intent was to create a “fully automatable, security-focused SBOM standard”. Existing specifications like Package URL, CPE, SWID etc. have been incorporated into CycloneDX. Each such SBOM can have: (a) Bill of Materials metadata (the supplier, manufacturer, and the component, e.g. firmware, that it describes) (b) Components - a detailed inventory of the software components, which includes type, identity, license etc. (c) Services - a description of external APIs that the utility may invoke (d) Dependencies - interrelationships between components (e) Compositions - describing the ‘completeness’ of the SBOM (f) Extensions - for rapid prototyping of enhancements.
Usage: Software inventory description; analysis of vulnerabilities and their mitigation; software supply chain risk assessments; compliance with license requirements; traceability of updates and modifications; integrity checks of signed components; etc.
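A minimal CycloneDX BOM, reduced to its metadata and a single component, could be assembled as below; the application, the library, its version, licence and Package URL are all fictitious, and the structure follows the CycloneDX 1.4 JSON format.

import json
import uuid

# Skeleton CycloneDX 1.4 SBOM containing one (fictitious) library component.
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "metadata": {
        "component": {"type": "application", "name": "example-app", "version": "2.3.1"}
    },
    "components": [
        {
            "type": "library",
            "bom-ref": "pkg:pypi/acme-crypto@1.0.4",
            "name": "acme-crypto",
            "version": "1.0.4",
            "purl": "pkg:pypi/acme-crypto@1.0.4",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        }
    ],
    "dependencies": [{"ref": "pkg:pypi/acme-crypto@1.0.4", "dependsOn": []}],
}

print(json.dumps(bom, indent=2))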
SWID
Purpose: To enhance transparency in monitoring software installed within the enterprise, Software Identification (SWID) tags were conceptualised and defined originally by ISO and later updated as an ISO/IEC standard (2015). The description of a specific software release is held in SWID Tag files. The standard further “defines a lifecycle where a SWID Tag is added to an endpoint as part of the software product’s installation process and deleted by the product’s uninstall process”. In order to capture the lifecycle of a software component, the SWID specification defines the following types of Tags: (a) Primary Tag (b) Patch Tag (c) Corpus Tag and (d) Supplemental Tag. A SWID Tag that identifies and describes: (a) a software product installed on a computing device is called a Primary Tag (b) a patch that was installed to change the software installed on the device is referred to as a Patch Tag (c) the pre-installation state of a software product is called a Corpus Tag; and finally, a SWID Tag that permits “additional information to be associated with any referenced SWID tag” is referred to as a Supplemental Tag.
Usage: A few use cases are: (a) It can be used as a Software Bill of Materials for the software products and applications installed (b) The inventory of installed software can be regularly monitored (c) Software vulnerability identification at endpoints can be carried out (d) It ensures proper patch management (e) It detects unauthorized or corrupt software prior to installation and prevents its installation.
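A SWID tag is an XML document. The sketch below generates the skeleton of a primary tag for an entirely fictional product using the ISO/IEC 19770-2:2015 schema namespace; the product name, tagId and registration id are invented, and a real tag would carry further elements such as payload and hash evidence.

import xml.etree.ElementTree as ET

# ISO/IEC 19770-2:2015 SWID schema namespace.
NS = "http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
ET.register_namespace("", NS)

# Primary tag describing a fictional installed product.
tag = ET.Element(f"{{{NS}}}SoftwareIdentity", {
    "name": "Example Product",
    "tagId": "example.com-ExampleProduct-4.2.0",
    "version": "4.2.0",
})
ET.SubElement(tag, f"{{{NS}}}Entity", {
    "name": "Example Corp",
    "regid": "example.com",
    "role": "tagCreator softwareCreator",
})

print(ET.tostring(tag, encoding="unicode"))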
SPDX
Purpose: SPDX, or Software Package Data Exchange, is an ISO/IEC standard. It is used for communicating software bill of materials information that includes components, licenses, copyright and security information. It is an evolving “set of data exchange standards” that assists enterprises with their software supply chain ecosystem by allowing each of them “to share human-readable and machine-processable software metadata” with one another. As in the case of SWID, stated earlier, work here too began because of limitations that existed in the early days of sharing open source components, resulting in duplication of effort; collaboration was also difficult then. The SPDX project created “a common metadata exchange format” which allowed details about software packages and associated content to be collected and shared, and made effective automation of the related tasks feasible. The file formats used here include JSON, XML and YAML. A valid SPDX document has a defined structure with defined sections as stated in the SPDX specification. The section that is mandatory is the “Creation Information” (version numbers, license of data etc.), primarily provided for forward and backward compatibility of the tools. The others include “Package Information”, “File Information”, “Snippet Information” and “Other Licensing Information”.
Usage: Description of software components and their interrelationships; tracking licensing and copyright information related to IP; tracing executables to their source files etc.
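The sketch below shows the shape of a minimal SPDX document in its JSON rendering: document creation information, one package and a DESCRIBES relationship. Field names follow the SPDX 2.3 specification; the package and namespace values are fictitious.

import json
from datetime import datetime, timezone

# Minimal SPDX 2.3 document (JSON form) describing one fictitious package.
doc = {
    "spdxVersion": "SPDX-2.3",
    "dataLicense": "CC0-1.0",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-app-sbom",
    "documentNamespace": "https://example.com/spdx/example-app-2.3.1",
    "creationInfo": {
        "created": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "creators": ["Tool: example-sbom-generator"],
    },
    "packages": [
        {
            "SPDXID": "SPDXRef-Package-acme-crypto",
            "name": "acme-crypto",
            "versionInfo": "1.0.4",
            "downloadLocation": "NOASSERTION",
            "licenseConcluded": "Apache-2.0",
            "copyrightText": "NOASSERTION",
        }
    ],
    "relationships": [
        {
            "spdxElementId": "SPDXRef-DOCUMENT",
            "relationshipType": "DESCRIBES",
            "relatedSpdxElement": "SPDXRef-Package-acme-crypto",
        }
    ],
}

print(json.dumps(doc, indent=2))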
OpenIOC
Overview: Just as an investigator collates evidence, analyses it and then weaves it together to reconstruct a crime scene, a cybersecurity analyst too, as part of her function, tries to gather data around different events related to new and emerging threats. The nature of this data could be “metadata elements or much more complex malicious code and content samples that require advanced reverse engineering”. The final output of this activity is what is generally referred to as IOCs, or Indicators of Compromise. In other words, an IOC can be defined as “a forensic artifact or remnant of an intrusion that can be identified on a host or network”.
Purpose: OpenIOC is an XML based open framework used for sharing CTI.
Usage: The format used is machine-readable and assists in sharing the latest IOCs with other enterprises, helping achieve real-time protection. OpenIOC provides a pre-defined set of indicators as part of the framework. The extensible nature of the base schema allows add-on indicators from different sources. The IOC use cases include: (a) Finding Malware - Used to find known malware (b) Methodology - These IOCs help find suspicious utilities that are unknown; for instance, writing an IOC for the specific condition of an unsigned DLL loaded from a folder that does not have “windows\system32” in its path (c) Bulk - Used for exact matches (d) Investigative - Tracking information such as metadata related to the “installation of backdoors”, “execution of tools” etc. in an IOC and using that to prioritise the systems to be taken up for analysis.
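The fragment below sketches how such an indicator might be expressed in OpenIOC’s XML schema, combining a file hash and a file name under an OR operator. The element and attribute names follow the commonly published Mandiant OpenIOC 1.0 schema as understood here, and all identifiers and values are invented; the authoritative schema should be consulted before use.

# Illustrative OpenIOC 1.0 indicator: flag a file matching either a known
# MD5 hash or a suspicious file name. All identifiers and values are made up.
OPENIOC_EXAMPLE = """\
<ioc xmlns="http://schemas.mandiant.com/2010/ioc"
     id="a1b2c3d4-0000-0000-0000-000000000000" last-modified="2024-01-01T00:00:00">
  <short_description>Example dropper</short_description>
  <definition>
    <Indicator operator="OR" id="e5f6a7b8-0000-0000-0000-000000000000">
      <IndicatorItem id="c9d0e1f2-0000-0000-0000-000000000000" condition="is">
        <Context document="FileItem" search="FileItem/Md5sum" type="mir"/>
        <Content type="md5">d41d8cd98f00b204e9800998ecf8427e</Content>
      </IndicatorItem>
      <IndicatorItem id="a3b4c5d6-0000-0000-0000-000000000000" condition="contains">
        <Context document="FileItem" search="FileItem/FileName" type="mir"/>
        <Content type="string">dropper.dll</Content>
      </IndicatorItem>
    </Indicator>
  </definition>
</ioc>
"""

print(OPENIOC_EXAMPLE)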
OpenEDR
Overview: Endpoint detection and response, or EDR, is a technology that focuses on endpoints and network-related events. The agent on the host system helps log the information generated and stores it in a database. This repository is what is leveraged for event monitoring and generating reports. Different EDR solutions package their functions and features differently: some focus more at the agent level, some have different ways of handling schedules for collection and analysis, and others carry out integration tasks differently. The functions of EDR are: (a) Monitoring traffic and data from endpoints and checking for anomalies hinting at a breach. (b) Automated responses - removal of malicious files and alerts to information security teams. (c) Searching for threats and their signatures through analytics. The core functions of all tools in this space are the same: “continuous monitoring and analysis” so as to respond proactively to threats.
Purpose: OpenEDR is an EDR tool that is free and whose source code is available in the public domain. It consists of the following components: - (a) Core Library (b) Service (c) Process Monitoring (d) System Monitor (e) File-System Mini-Filter (f) Network Monitor (g) Low-Level Registry Monitoring Component (h) Self-Protection Provider (i) Low-Level Process Monitoring Component.
Usage: One of OpenEDR’s strengths is that it enables a user to effect an accurate root cause analysis (RCA). Without a clear understanding of root cause, mitigation measures would prove inadequate. The output from the tool provides “actionable knowledge” which helps speed up response. Further, its “Integrated Security Architecture” helps deliver “Full Attack Vector Visibility including MITRE Framework”.
OpenDXL
Overview: The proliferation over time of different technologies from various IT infrastructure and security vendors, with different upgrade paths, coupled with several home-grown initiatives, all connected through an underlying network, has made IT security management a complex task. Achieving seamless integration between security products from different vendors has been a constant challenge. API-driven integration comes with major maintenance issues. Integration of products from different vendors also has administrative overheads that include external dependencies: for instance, getting buy-in from each of the vendors, their prioritization, commercial interests etc. An application’s limited ability to easily access emerging CTI data becomes an impediment to applying insights on emerging threats, making the environment prone to vulnerabilities. As a result, the cyber resilience posture is adversely affected. The mechanism through which security solutions are connected and integration of disparate security systems is achieved is termed Orchestration. Since security tools currently create a substantial volume of output logs, SOCs tend to miss intrusions. Security orchestration addresses this concern.
Purpose: The Open Data Exchange Layer, or OpenDXL, has been developed with the intent of enabling “security devices to share intelligence and orchestrate security operations in real time”. It enables the developer community to establish handshakes between security products and to minimize the lead times of threat defense by improving the “context of analysis”, shortening “workflows of the threat defense lifecycle” etc. The key goals of OpenDXL can be summarized as follows: (a) Enhance flexibility (b) Make integration easier (c) Accelerate development by providing appropriate toolsets (d) Increase SecOps’ ability to create “cross-product, cross-vendor security automation” and (e) Ensure that API changes by vendors do not adversely impact integration.
Usage: Use cases include: “Deception threat events, File and application reputation changes, Mobile devices and assets discovered, Network and user behavior changes, High-fidelity alerts, Vulnerability and indicator of compromise (IoC) data”.
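The OpenDXL Python client gives a sense of how a product publishes intelligence onto the fabric. The sketch below assumes the dxlclient package and a client configuration file (broker addresses and certificates) already provisioned for the local broker; the topic name and payload are arbitrary examples.

from dxlclient.client import DxlClient
from dxlclient.client_config import DxlClientConfig
from dxlclient.message import Event

# Client configuration previously provisioned; "dxlclient.config" is a
# placeholder path to the generated configuration file.
config = DxlClientConfig.create_dxl_config_from_file("dxlclient.config")

with DxlClient(config) as client:
    client.connect()

    # Publish a simple event onto an arbitrary topic on the DXL fabric;
    # any subscribed service receives it in near real time.
    event = Event("/example/security/high-fidelity-alert")
    event.payload = b'{"alert": "suspicious process", "host": "ws-042"}'
    client.send_event(event)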
Infrastructure as Code (IaC)
Overview: Like all disciplines in IT, managing IT infrastructure has gone through its own evolution. In the past it was customary to manually address all aspects of configuration, whether of the hardware or of the software on which applications were required to run. Cost, scalability and availability were all challenges that organisations had to surmount. Monitoring the performance of IT infrastructure, or pinpointing the problem during troubleshooting, continued to be an Achilles heel for the IT infrastructure team. Due to the human-intensive nature of deploying configurations, inconsistencies crept in. There was therefore a need to explore the possibility of automating configuration management and thereby managing and provisioning IT infrastructure through code. In that context, two file formats are defined here: (a) JSON: JSON or JavaScript Object Notation originated as a page scripting language in the Netscape Navigator era, for its web browser. It is a format with a defined set of rules that is “text-based, human-readable”, leveraged for data exchange between “web clients and web servers”, and sometimes used as an alternative to XML. (b) YAML: YAML (originally “Yet Another Markup Language”, now a recursive acronym for “YAML Ain’t Markup Language”) is a human-readable, easily comprehensible “data serialization language”, used for data rather than documents, and regularly leveraged when writing configuration files.
Purpose: Infrastructure as Code, or IaC, is defined as “The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools”. In effect it leverages a DevOps approach to deploy IT infrastructure - networks, VMs etc. Akin to a codebase generating an identical binary every time, here too the same environment is generated during every deployment. There are two approaches to IaC: (a) Declarative - where the required state of the system is defined. An environment would need a tailored definition with regard to its components and configuration; the definition file describes that. Depending on the file formats that the platform supports, be they JSON, YAML or XML, the syntax for describing IaC can be decided. (b) Imperative - “defines the specific commands needed to achieve the desired configuration”.
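The declarative style, and the difference between the two file formats defined above, can be seen from a small example. The sketch below describes a hypothetical environment as a Python structure and renders the same desired state as both JSON and YAML; the keys are invented and do not follow any particular platform’s schema.

import json
import yaml  # PyYAML, a third-party library

# A hypothetical declarative definition of a small environment.
environment = {
    "name": "web-tier",
    "virtual_machines": [
        {"name": "web-01", "image": "ubuntu-22.04", "size": "2vcpu-4gb"},
        {"name": "web-02", "image": "ubuntu-22.04", "size": "2vcpu-4gb"},
    ],
    "network": {"cidr": "10.0.1.0/24", "public": False},
}

# The same desired state expressed in the two common formats.
print(json.dumps(environment, indent=2))
print(yaml.safe_dump(environment, sort_keys=False))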
Usage: Some of the advantages of using IaC are: (a) Reduced cost (b) Accelerated deployment (c) Fewer errors (d) Enhanced consistency (e) “Eliminate configuration drift”. More insights can be gleaned from studying server automation and configuration management tools like Chef, Puppet, Saltstack etc.
Policy-as-Code
Overview: In a growing tech landscape, it is inevitable that automation is considered when rolling out policies. As IT-driven solutions proliferate, protecting the systems from threats is an imperative. To that end, rules and procedures are defined as part of security policy formulation - be it for access control, authorization, network security etc. In essence, a policy is a rule that stipulates the criteria for code deployment against the backdrop of the security controls implemented, or a set of steps to be executed automatically when a particular security event occurs.
Purpose: Policy-as-code is the use of a high-level declarative language to automate the central management of rules and policies. Leveraging this approach, users write the policies using Python, YAML or Rego - a high-level declarative language “built for expressing policies over complex hierarchical data structures” - depending on the kind of policy-as-code tool being used. While IaC benefits the IT operations team where automated provisioning of infrastructure is involved, policy-as-code focuses on security operations and compliance management. To leverage policy-as-code the user may choose a tool like Open Policy Agent, or OPA, the purpose of which is to provide a common framework for applying policy-as-code to any domain. OPA is a “general-purpose policy engine that enables unified, context-aware policy enforcement across the entire stack”. It allows the authoring of policies through Rego. OPA can be leveraged to outsource policy decisions from the relevant service, e.g. Boolean decisions on whether an API call should be allowed, the quota left for a user, the updates to be applied to a particular resource etc.
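As an example of outsourcing a decision, a service can POST its decision input to OPA’s REST data API and act on the Boolean result. The policy package path and the input fields below are illustrative, and the sketch assumes an OPA instance listening on its default port with a matching Rego policy already loaded.

import json
import urllib.request

# Ask a locally running OPA (default port 8181) whether an API call is allowed.
# The policy package "httpapi.authz" and the input fields are illustrative.
decision_input = {
    "input": {
        "user": "alice",
        "method": "GET",
        "path": ["finance", "salary", "alice"],
    }
}

req = urllib.request.Request(
    "http://localhost:8181/v1/data/httpapi/authz/allow",
    data=json.dumps(decision_input).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# OPA returns {"result": true} or {"result": false} for a Boolean rule.
print("allowed" if result.get("result") else "denied")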
Usage: The advantages of using policy-as-code are: (a) Enhanced efficiency of enforcement and sharing of policies vis-a-vis the manual route (b) Accelerated operations because of automated policy enforcement (c) Swifter reviewing of policies, as engineers do not need to discuss them but need only examine the code (d) Improved collaboration within and across teams (e) Accuracy (f) Well-defined version control of the policies and the ability to roll back to a previous version (g) Reduction of critical errors in production environments.
OMG Requirements Interchange Format (ReqIF)
Overview: Requirements management has been in vogue across different industries for a while. The manufacturing industry, car manufacturing for instance, uses dedicated authoring tools for its requirements management. Since collaboration requirements are intense, requirements management crosses organisational boundaries, and different entities using different authoring tools find it a challenge to work on the same requirements repository. There was a compelling case, therefore, for a “generic, non-proprietary format” for information exchange. Hence, the Requirements Interchange Format, or ReqIF, was created for enterprises to exchange requirements data by transferring XML documents that comply with the ReqIF format.
Purpose: Any product development has a requirements gathering phase during its initial stages. This is where ReqIF finds a key use, as multiple entities could be involved. ReqIF facilitates seamless sharing of requirements between agencies, even if each uses a different tool. It is far more efficient than formats like Word, Excel or PDF as it “allows for a loss-free exchange”.
Usage: A few of the advantages of using this format are: improved collaboration between entities; obviating the need to use the same authoring tools; and agencies interacting with multiple entities (e.g. suppliers) not needing to stack up the authoring tools that their respective customers use.
OASIS CACAO for Cybersecurity
Overview: It is apparent that in order to defend against cybersecurity threats, a structured set of rules needs to be defined. A cybersecurity playbook is generally considered to be a comprehensive manual that spells out the actions to be taken when there is theft or loss of data. To that end, enterprises need to “identify, create, and document the prevention, mitigation, and remediation steps”, which in most cases is currently done manually. These steps, considered together, constitute a course of action playbook. As in other areas of the cybersecurity domain, here too the lack of a standard mechanism to “document and share these playbooks across organisational boundaries”, compounded by inadequate automation, proves to be a challenge.
Purpose: The OASIS Collaborative Automated Course of Action Operations (CACAO) for Cyber Security Technical Committee (TC) tackles this challenge by “defining a sequence of cyber defense actions that can be executed for each type of playbook”.
Usage: This will help entities to: (a) “create course of action playbooks in a structured machine-readable format” (b) “digitally sign course of action playbooks” (c) “securely share course of action playbooks across organisational boundaries and technological solutions” and (d) “document processing instructions for course of action playbooks in a machine-readable format”.
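To give a sense of the structure, the sketch below outlines a CACAO-style playbook skeleton with a start step, one action step and an end step. The property names follow the OASIS CACAO Security Playbooks specification as understood here, but the identifiers, timestamps, command and workflow are invented and the skeleton omits several required metadata properties; the authoritative specification should be consulted for the exact schema.

import json
import uuid

def step_id(step_type: str) -> str:
    # CACAO step identifiers are prefixed with the step type.
    return f"{step_type}--{uuid.uuid4()}"

start, action, end = step_id("start"), step_id("action"), step_id("end")

# Illustrative CACAO-style playbook skeleton; values are invented.
playbook = {
    "type": "playbook",
    "spec_version": "cacao-2.0",
    "id": f"playbook--{uuid.uuid4()}",
    "name": "Contain suspected ransomware host",
    "created": "2024-01-01T00:00:00.000Z",
    "modified": "2024-01-01T00:00:00.000Z",
    "workflow_start": start,
    "workflow": {
        start: {"type": "start", "on_completion": action},
        action: {
            "type": "action",
            "name": "Isolate host from the network",
            "commands": [{"type": "manual", "command": "Disable the switch port for host ws-042"}],
            "on_completion": end,
        },
        end: {"type": "end"},
    },
}

print(json.dumps(playbook, indent=2))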