Get up to speed on the latest terms and trends.
Advanced Persistent Threats (APTs) are prolonged, targeted attacks in which an intruder gains access to a network and remains undetected for an extended period. Because of the level of expertise required, APTs were traditionally the tradecraft of only the most experienced nation-state threat actors and were directed at high-value targets such as the national infrastructure of political enemies or large corporations. However, the use of APTs has since broadened significantly.
Command and Control (C2) is often used by attackers to retain communications with compromised systems within a target network. C2 servers issue commands and controls to compromised systems (as simple as a timed beacon, or as involved as remote control or data mining). The compromised system/host usually initiates communication from inside a network to a command and control server on the public Internet. Establishing a C2 link is often the primary objective of malware.
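On the defensive side, one simple heuristic for spotting timed beacons is to measure how regular a host's outbound connection intervals are. The sketch below is a minimal, hypothetical scoring function; the formula and thresholds are illustrative assumptions, not a production detection rule:

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Score the regularity of a host's outbound connection times.

    A low coefficient of variation in inter-arrival times suggests
    automated check-ins (beaconing) rather than human-driven traffic.
    Returns a value near 1.0 for near-perfect regularity.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0
    cv = pstdev(gaps) / mean(gaps)   # coefficient of variation
    return max(0.0, 1.0 - cv)

# A host checking in every ~60 seconds scores near 1.0; bursty
# human-like traffic scores much lower.
regular = [0, 60, 120, 181, 240, 299]
bursty = [0, 5, 300, 310, 900, 905]
print(beacon_score(regular))
print(beacon_score(bursty))
```

Real beacon detection also accounts for deliberate jitter added by attackers, which is why this kind of score is only one signal among many.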
Data Loss Prevention (DLP) is a strategic technology and business process designed to detect and prevent violations of corporate policies regarding the use, storage and transmission of sensitive data. It also describes software products that help a network administrator control what data end users can transfer. For example, if an employee attempts to upload a corporate file to a consumer cloud storage service, DLP can block the transfer. Some DLP tools can also filter data streams on the corporate network and protect data in motion, as well as monitoring and controlling endpoint activities. Due to increased insider threats and more demanding state privacy laws, many of which have strict data protection or access components, adoption of DLP is increasing.
Distributed Denial of Service (DDoS) is a form of cyber attack in which multiple compromised systems work together to disrupt an online service, server or network by overwhelming the target with malicious traffic. Often large enterprises with high profile sites are targeted (banks, credit card companies, payment gateways). DDoS attacks temporarily disrupt online services by targeting machines or resources with superfluous requests (from many sources) and overloading systems to prevent legitimate requests from being fulfilled. DDoS attacks can be particularly damaging for organizations that conduct a substantial amount of business on web platforms, such as eRetailers or financial services companies. Common types of DDoS attacks include UDP flood, ICMP (Ping) Flood, SYN Flood, Ping of Death, Slowloris, NTP Amplification and HTTP Flood.
Dynamic Application Security Testing (DAST) is a security solution used to uncover vulnerabilities in software during its running state, including when it is actually deployed in production. In this black box methodology, software is tested from the outside-in and attacked just as it would be by a true threat actor. It simulates attacks against the application (typically web-enabled applications and services) and analyzes the application's response to determine if it’s vulnerable. Minimal user interactions are required (once configured with host name, crawling parameters and authentication credentials). Unlike Static Application Security Testing tools (SAST), DAST doesn’t have access to source code. DAST tools (either open source, free or commercially available) are specifically designed to find security vulnerabilities and are sometimes required to comply with various regulatory requirements.
A Configuration Management Database (CMDB) contains all information about an organization’s IT services components (both hardware and software) and allows the IT group to log devices that move in and out of an environment. A CMDB centralizes configuration data and dependencies for IT infrastructure, applications and services, facilitating easier targeting and patching of potential security vulnerabilities. These databases can be used for business intelligence, software and hardware builds, inventory, impact analysis, change management and incident management. With an accurate inventory of critical assets, a CMDB tool can support IT planning, security, compliance, and auditing. CMDBs help organizations understand the relationship between components of a system and track configurations.
Cyber Threat Intelligence (CTI) refers to insight gained by analyzing the tactics, techniques and procedures (TTPs) of threat actors. Based on a collection of intelligence using Open Source Intelligence (OSINT), Social Media Intelligence (SOCMINT), Human Intelligence (HUMINT), technical intelligence or intelligence from the Deep Web and Dark Web, CTI insight allows security operations teams to take proactive defensive action by prioritizing the remediation of known vulnerabilities in their environment against vulnerabilities which are actually being exploited in the wild by threat actors. Threat intel can be strategic (describing the who and why), operational (describing the how and where) or tactical (describing the what) in characterizing threat actor activity.
Governance, Risk and Compliance (GRC) is an organization's coordinated strategy for managing the broad issues of corporate governance, enterprise risk management and corporate compliance with regard to regulatory requirements. It describes technology platforms and business processes applied to monitor, inform and manage an organization's: governance relative to specific legal, contractual, internal, social and ethical parameters; compliance with relevant industry regulations; and comprehensive risk management efforts.
Fileless Attacks achieve their objectives and thwart detection without writing malware to disk. With traditional file-based malware, the attacker must write a file to the local drive of the targeted device, an action that is more easily detected by modern security controls. By contrast, fileless attacks inject malicious code only into RAM (hence fileless) and exploit approved applications on targeted devices, which makes them far more difficult to detect. Fileless attacks commonly exploit administrative utilities such as Windows PowerShell or Windows Management Instrumentation (WMI).
Encryption is a method in which plaintext or other data is converted from readable form to an encoded version that can only be decrypted with a key. It’s the most effective way to achieve end-to-end security for sensitive data transmitted across networks (data in motion) as well as for sensitive data held in various systems and devices (data at rest). Popular and effective security protocols using public-key encryption include Secure Sockets Layer (SSL) and Transport Layer Security (TLS), its successor. Today's most widely used encryption algorithms fall into two categories: symmetric (single-key, faster but requires sender to share the key with the recipient) and asymmetric (public-key cryptography, uses two linked keys - one public and one private). These technologies are commonly used by browsers and Internet servers when transmitting confidential data, such as financial transactions. Many regulatory organizations and standards bodies (such as PCI-DSS) require encryption.
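The defining property of symmetric encryption, a single shared key that both encrypts and decrypts, can be shown with a deliberately insecure toy cipher. This sketch only illustrates the shared-key concept and must never be used in place of vetted algorithms such as AES:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: the SAME key encrypts and decrypts.

    Purely illustrative -- a repeating-key XOR is trivially broken.
    Real systems use vetted algorithms such as AES (symmetric) or
    RSA (asymmetric/public-key).
    """
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
plaintext = b"wire transfer: $500"

ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                    # unreadable without the key
assert xor_cipher(ciphertext, key) == plaintext   # the same key decrypts
```

Asymmetric encryption removes the need to pre-share that key: anyone can encrypt with the recipient's public key, but only the holder of the private key can decrypt.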
Dwell Time represents the length of time cyber attackers have free rein in an environment from the time they get in until they’re eradicated. Sometimes referred to as the breach detection gap, Dwell Time is determined by adding Mean Time to Detect (MTTD) and Mean Time to Respond/Remediate (MTTR) and is usually measured in days. Lengthy Dwell Times give attackers more opportunity to access private data, siphon funds and observe/record user and network behavior. Long Dwell Times also afford intruders more opportunity to plant secondary malware or APTs. Dwell Times are a top concern for all organizations since the issue impacts brand reputation and creates potential legal repercussions.
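A single-incident calculation makes the arithmetic concrete. Assuming the intrusion, detection and remediation dates are known, Dwell Time is simply the detection gap plus the remediation gap:

```python
from datetime import datetime

def dwell_time_days(intrusion: str, detected: str, remediated: str) -> int:
    """Dwell Time = MTTD + MTTR, computed here for one incident."""
    fmt = "%Y-%m-%d"
    t0, t1, t2 = (datetime.strptime(t, fmt) for t in (intrusion, detected, remediated))
    mttd = (t1 - t0).days   # days until the intrusion was detected
    mttr = (t2 - t1).days   # days until it was remediated
    return mttd + mttr

# 70 days to detect plus 14 days to remediate = 84 days of dwell time.
print(dwell_time_days("2024-01-01", "2024-03-11", "2024-03-25"))  # 84
```

Across many incidents, MTTD and MTTR are averaged separately, which is why the two metrics are tracked (and improved) independently.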
Network Access Control (NAC) is a security solution stack that provides visibility and control of devices accessing a corporate network. NAC provides visibility of IT resources accessing corporate networks and enforces granular role-based or policy-based access to those networks. Demand for NAC has grown in recent years, partially in response to the proliferation of BYOD (bring your own device) and IoT devices which are interacting with corporate networks.
Mean Time to Respond/Remediate (MTTR) is the average amount of time it takes an organization to fix an identified threat or failure within their network environment. A threat is a malicious intrusion / infiltration into a system to steal information, disrupt operations or damage hardware or software, and threat remediation is the process organizations use to identify and resolve threats to their network environment. Ideally technological systems are augmented by “eyes on glass,” highly trained people adding human experience and insight to the processes. Having processes, plans and defined responsibilities builds confidence and empowers the right groups to act and remediate.
Mean Time to Detect (MTTD) is the average length of time it takes a cybersecurity team to discover incidents in its environment. The lower an organization's MTTD the more likely it will be to limit damage done by an intruder. Things that can help organizations lower MTTD include security orchestration, automation and response (SOAR) technologies, understanding the risk profile, robust cyber resilience programs and incident response plans. Lowering MTTD helps reduce Dwell Time.
Managed Security Service Providers (MSSPs) are IT service providers that perform any number of cybersecurity-related activities for clients on an outsourced basis. Services offered by MSSPs can include management of security tools, threat management, incident response and forensics. They typically serve multiple clients and utilize high-availability security operations centers that are staffed 24/7. Outsourcing to an MSSP can be an easy way for an organization to add specific security expertise it may lack, create cost savings by eliminating the need to hire full-time in-house resources or augment in-house capabilities to accomplish 24/7 security monitoring.
Managed Security Services (MSS) are security service functions (such as management of security tools, threat management, incident response and forensics) that have been outsourced to an external service provider. A company providing these services is referred to as a managed security service provider (MSSP). Outsourcing can be an easy way for an organization to add specific security expertise they may lack. It can create cost savings by eliminating the need to hire full-time in-house resources and it can also be used to augment in-house capabilities to accomplish 24/7 security monitoring.
Integrated Risk Management (IRM) is a new approach to risk management that incorporates risk activities from across an organization to promote better and more sustainable strategic decision making. Gartner coined the term in 2016 to describe the evolution of technologies and processes beyond what they now consider legacy GRC approaches. Gartner differentiates IRM from GRC by suggesting GRC is primarily compliance-focused, confined within organizational silos and used by technical practitioners. By contrast, IRM is risk-focused, comprehensive and used by business leaders. IRM considers comprehensive operational and IT risk posture to drive strategic decision making.
Incident Response (IR) refers to actions a company takes to manage the aftermath of a security breach or cyberattack. Ideally, organizations have a plan to manage these situations in a way that reduces recovery time and costs and limits damage to both technology infrastructure and corporate reputation. The most effective IR plans have been formalized and rehearsed (perhaps through tabletop simulations) in advance of a true emergency. Common activities in incident response include identifying / containing / eradicating the threat and recovering the impacted data and systems. IR may also involve PR and Legal teams if public breach notification is required or some sort of legal risk is created. Finally, a good IR plan involves noting lessons learned and using that knowledge to iterate the plan, helping prevent future incidents or enhance the response to them.
Indicators of Compromise (IOCs) are pieces of forensic data, system log entries or files that can be considered unusual and may identify potentially malicious activity on a system or network. Virus signatures and IP addresses, MD5 hashes of malware files or URLs/domain names of botnet command and control servers are some classic IOCs. Other IOCs include unusual outbound network traffic, anomalies in privileged user account activity, other login red flags (to accounts that don't exist, or after-hours), swells in database read volume, HTML response sizes (if SQL injection is used to extract data), large numbers of requests for the same file (indicating trial and error), mismatched port-application traffic (unusual ports), suspicious registry or system file changes, DNS request anomalies (large spikes) and geographical irregularities.
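Hash-based IOCs are among the easiest to operationalize: compute a file's hash and compare it against a threat-intelligence feed. The sketch below uses a feed fabricated for the demo, and MD5 only because it remains common in IOC feeds despite being cryptographically weak:

```python
import hashlib

def matches_ioc(data: bytes, bad_hashes: set) -> bool:
    """Check file contents against a feed of known-bad MD5 hashes."""
    return hashlib.md5(data).hexdigest() in bad_hashes

# Build a tiny demo feed from a sample payload, then scan two files.
payload = b"malicious-looking sample"
feed = {hashlib.md5(payload).hexdigest()}

print(matches_ioc(payload, feed))           # True  -> raise an alert
print(matches_ioc(b"harmless file", feed))  # False -> no match
```

Hash matching only catches exact copies of known malware, which is why it is combined with the behavioral IOCs listed above (traffic anomalies, login red flags, registry changes).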
Insider Threat refers to risk posed to an organization’s systems and protected data emanating from people within the organization or trusted third parties. Insiders can be current or former employees as well as current or former contractors or vendors. By leveraging legitimate permissions relating to their job functions malicious insiders are often able to evade detection, and their knowledge of the organization’s systems and policies mean they can potentially cause more damage compared to external threat actors. Insider threats can be intentional (sabotage, IP theft, espionage, fraud) or unintentional (human error, bad judgment, phishing, malware, unintentional aiding and abetting, stolen credentials).
Identity Governance and Administration (IGA) is an Identity Access Management (IAM) program component that ensures authorized users have access to applications at the appropriate time (and restricts access for unauthorized users). IGA tools manage digital identity and access rights across multiple systems. The more heterogeneous the environment in terms of applications and systems, the more complicated the challenge is for IGA. Hence, organizations often need an IGA tool to simplify identity management and help demonstrate compliance with identity management regulations. Common IGA functions include identity lifecycle and entitlement management (to keep track of user roles and which applications they should have access to); automated workflows to manage access rights (ensuring necessary approvals are granted for new service access requests); provisioning (to propagate changes initiated by the IGA tool to all affected systems); and auditing and reporting (to demonstrate the veracity of all the items above).
Identity and Access Management (IAM) describes the processes (technological and manual) used to create, manage, authenticate, control and remove the permissions a user (internal, external and customer) has to corporate technology resources. Internal IT users need to be granted access to corporate systems and applications to perform their job functions, and similarly an organization's customers need access to web-facing applications in order to complete any number of approved interactions. In all these instances, IAM helps manage user interactions and prevent cybersecurity incidents related to the abuse of credentials. Some of the most common components of an IAM program include single sign on (SSO), access management and user authentication.
Cloud Access Security Broker (CASB) describes technology platforms that help organizations better secure the use of cloud-delivered applications (SaaS) and infrastructure. SaaS applications’ ease of use has made them more accessible, as buyers can now purchase them without explicit vetting by the IT security department (creating shadow IT). This creates obvious security risks, which CASBs were designed to address. CASBs help organizations gain visibility to their shadow IT, enforce data protection policies and boost threat protection related to cloud resources.
Continuous Adaptive Risk and Trust Assessment (CARTA) is a Gartner-defined methodology that recognizes risk and trust in the digital world are dynamic and not fixed. Because of this, the risk and trust posture of each interaction and entity (user, system, application, etc.) in the environment should be continuously monitored and assessed. The ultimate goal is to have security controls applied adaptively. Indications of elevated risk or diminished trust lead to immediate reductions in access or entitlements. Lower concern allows for less restrictive security guardrails to optimize user experience and accelerate processes.
Browser Isolation is a security approach that executes web browsing activity in an isolated environment, such as a remote server or a local sandbox, rather than directly on the user's device. Desktop web browsers represent a primary threat vector due to inherent vulnerabilities in both the browsers themselves and their related plug-ins. Through the browser isolation process, any related security threats are thereby confined, or "isolated," to the dedicated server or service sandbox.
Botnet (combination of 'robot' and 'network') is a collection of Internet-connected devices, such as PCs, servers, mobile devices and IoT devices, that are controlled as a group. Bots can be useful or damaging. Productive uses include search engine crawling, web site health monitoring and vulnerability scanning. Malicious uses include web site scraping, DDoS attacks and comment spam. An increasingly common hacking technique involves taking control of multiple users’ devices and combining them into a more powerful botnet to launch malicious campaigns aimed at breaching targeted resources.
Blockchain is a decentralized, distributed and public digital ledger used to record transactions across many computers in a way that assures the record can’t be altered retroactively without additionally changing all successive blocks and the consent of the network. Initially developed to serve as the ledger system for the cryptocurrency Bitcoin, blockchain provides high security by design: transactions are verified with advanced cryptography and spread across many computers in a peer-to-peer network (distributed ledger). Today the applications for blockchain are expanding beyond cryptocurrency.
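The tamper-evidence property comes from each block storing the hash of its predecessor. A toy chain, with real-world details such as proof-of-work and peer-to-peer consensus deliberately omitted, shows how editing an early block breaks every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(transactions):
    """Build a chain where each block records its predecessor's hash."""
    chain, prev = [], "0" * 64
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    """Re-walk the chain, verifying every hash link."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob 5", "bob->carol 2"])
assert is_valid(chain)

chain[0]["tx"] = "alice->bob 500"  # tamper with an early block...
assert not is_valid(chain)         # ...and the later hash links break
```

In a real blockchain an attacker would also need to recompute every subsequent block and win the consent of the network, which is what makes retroactive alteration impractical.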
Big Data describes new structures and techniques to harness - and distill insight from - massive quantities of data. The digital economy is generating data with substantial volume and velocity, and organizations can develop tremendous insight from analyzing these growing data assets. However, these data sets are often too large or complex to be managed by traditional data-processing applications. Current usage of the term tends to refer to the use of predictive analytics, user behavior analytics or certain other advanced data analytics methods that extract value from data as opposed to the particular size of data set.
Breach and Attack Simulation (BAS) Tools allow enterprises to simulate complex cyber attacks on demand, helping expose gaps to be remediated before a real attacker can exploit the gaps. While BAS (a term coined by Gartner) cannot fully simulate the ingenuity and capability of legitimate threat actors - or human-driven penetration testers - it can help identify security vulnerabilities on a continuous basis for proactive remediation. In addition, BAS can be used to instrument security infrastructure to ensure that it is functioning as intended.
Application Containerization enables a logical packaging system in which applications can be abstracted from the environment (including the operating system) in which they actually run. Containerization allows applications to be developed once and easily deployed across virtually any environment regardless of operating system, virtual machine or bare metal, on-prem data centers or public cloud. This provides obvious advantages in terms of application flexibility and agility. Containerization can also provide performance and scale improvements relative to virtual machine approaches. However, containers also create their own unique set of security considerations.
Artificial Intelligence (AI) describes technology that mimics "cognitive" functions that humans associate with the human mind, such as learning and problem solving. AI appears to emulate human behavior in that it can continually learn and draw its own conclusions (even based on novel or abstract concepts), engage in natural dialog with people and/or replace people in the execution of more complex (non-routine) tasks. Artificial intelligence may allow computer systems to improve their capabilities based on experience without specific programming. Potential cybersecurity applications for artificial intelligence are extensive. A problem that can take a human days to understand may only take minutes for an AI. Moreover, an AI’s algorithms can also learn and predict based on experience and results, actually lessening the time to detection. AI is expected to exert a profound impact on both malicious and defensive cybersecurity applications.
Adaptive Authentication is a method for selecting the appropriate authentication factors depending on a user's risk profile and tendencies (it adapts the authentication type to each situation). This new approach to authentication adds additional security while minimizing the intrusiveness of the process on users. There are two key components: Threat and risk checks and Multi-factor Authentication (MFA). Risk checks are performed in the background without the user’s awareness. MFA is required only if a risk is detected. For example, during a login attempt a risk check may evaluate the reputation of the network the user is on, the geographic location of the source or device characteristics. If an anomaly is detected, MFA kicks in to help authenticate the login attempt.
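A minimal sketch of the background risk check might look like the following. The signals, weights and threshold here are all illustrative assumptions; real products draw on far richer telemetry such as IP reputation feeds and behavioral baselines:

```python
def requires_mfa(login: dict, risk_threshold: float = 0.5) -> bool:
    """Hypothetical adaptive-authentication risk check.

    Each anomalous signal adds to a risk score; MFA is triggered only
    when the combined score crosses the threshold.
    """
    score = 0.0
    if login.get("network_reputation") == "bad":
        score += 0.4   # login from a network with poor reputation
    if login.get("country") != login.get("usual_country"):
        score += 0.3   # geographic anomaly
    if not login.get("known_device", False):
        score += 0.3   # unrecognized device characteristics
    return score >= risk_threshold

# Familiar device on the usual network: no extra friction.
print(requires_mfa({"network_reputation": "good", "country": "US",
                    "usual_country": "US", "known_device": True}))   # False

# New device from an unusual country: step-up authentication kicks in.
print(requires_mfa({"network_reputation": "good", "country": "RO",
                    "usual_country": "US", "known_device": False}))  # True
```

The design goal is exactly the trade-off described above: low-risk logins stay frictionless, while anomalies trigger MFA.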
2-factor Authentication (2FA) and Multi-factor Authentication (MFA) are computer access control methods requiring multiple pieces of authentication. 2FA requires both knowledge (usually a password) and something tangible (such as a hardware or software authentication system) to access a protected system. MFA may require something unique to the user’s physical being in addition (for example, a fingerprint or retina scan). However, “MFA” is often used generically when there are only two factors. In traditional 2FA, the second authentication can be either hardware-based (like a token) or software based (such as a mobile app). The authentication device generates a unique, temporary cryptographic code that must be input, in addition to a password, to gain access to a computer resource. Without the code, a hacker who has stolen a user's password won’t be able to access a protected system, which makes 2FA a minimum security best practice.
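The temporary codes these systems generate typically follow the HOTP (RFC 4226) and TOTP (RFC 6238) standards: an HMAC over a counter (or the current 30-second time step) is truncated to a short decimal code. A compact implementation, verifiable against the published RFC test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): the core of hardware tokens and authenticator apps."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                     # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP driven by a 30-second time counter."""
    return hotp(key, int(time.time()) // step)

# RFC 4226 test vectors for the shared secret "12345678901234567890":
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
print(totp(b"12345678901234567890"))  # current 6-digit code
```

Because the server and the token derive the same code independently from the shared secret, the code never travels anywhere until the user types it in, and it expires within seconds.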
Cloud-Delivered Security (also referred to as security-as-a-service) secures and monitors critical infrastructure, applications and data that exist in both the on-prem and cloud environments of an organization. Cloud-delivered security can be easier to implement and maintain since the enterprise delivering the technology is responsible for updates and maintenance. It can also be less expensive as it’s usually sold on a subscription basis. Cloud-delivered security also has the advantage of more closely matching the high elasticity of cloud environments, so capacity can be more dynamically scaled based on need. It also has some drawbacks, including less functionality, privacy considerations (since a third party manages it) and possible data residency and compliance issues. Demand for cloud-delivered security formats varies by segment: at one end of the spectrum, identity and access management functionality and email security are readily delivered via the cloud; at the other, firewall technology has been relatively slow to move to cloud delivery.
Interactive Application Security Testing (IAST) is an emerging application security testing approach which combines elements of its more established siblings in SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing). IAST instruments the application binary, which can enable both DAST-like confirmation of exploit success and SAST-like coverage of the application code. In some cases, IAST allows security testing as part of general application testing process, which provides significant benefits to DevOps approaches. IAST holds the potential to drive tests with fewer false positives/negatives and higher speed than SAST and DAST.
Hardware Authentication is a user authentication method relying on a dedicated physical device (such as a token, which generates a unique, temporary cryptographic code that must be input), in addition to a basic password, to grant access to computer resources. The combination of the hardware authentication device and the password constitute a 2FA (2-factor Authentication) system. Without the code, a hacker who has stolen a user's password will not be able to gain access to a protected system. One problem with hardware-based authentication devices is they can be lost or stolen, creating login issues for legitimate users. Some alternatives to hardware-based authentication include software-based (which uses an app on a mobile phone or desktop to create the temporary codes) or SMS (which sends a text message to an associated mobile phone for user confirmation before an identity can be authenticated). Each system of authentication has unique advantages and disadvantages.
Endpoint Detection and Response (EDR) solutions record key endpoint activity and provide security analysts with necessary information to conduct both reactive and proactive threat investigations. Traditional endpoint security technologies focused on preventing potential attacks. However, as attacks grew more sophisticated and pierced even the best prevention technology, new technologies were required to both enable detection and guide the response for attacks. These factors gave rise to EDR.
DevSecOps is an enterprise application development best practice that embraces the inherent agility benefits of DevOps, but recognizes the security organization needs to be integrated as an early participant in the DevOps process. Focused on creating new solutions for complex software development in an agile framework, DevSecOps bridges gaps between IT and security while ensuring fast and safe code delivery. The process improves communication and strengthens shared responsibility for data security through all phases of the delivery process. It assumes everyone is responsible for security and empowers teams to make decisions as quickly as possible without sacrificing safety. Security testing is done in iterations without slowing down delivery, and any security issues are dealt with as they are identified throughout the development process rather than after a threat or compromise has happened.
Deception Platforms are decoy targets within an organization's technology infrastructure designed to lure bad actors so defenders can collect intelligence about their tactics and intentions. Intelligence captured from attacker interactions with deception tools can then be rapidly pushed out to reinforce the preventative capabilities of other security tools, proactively blocking these tactics from working elsewhere. Deception platforms are especially valuable at helping organizations gain insight into zero-day and advanced attacks.
The Dark Web is the part of the World Wide Web only accessible by means of special software (most commonly the Tor anonymity software), allowing users and site operators to remain significantly more anonymous. This collection of sites exists on an encrypted network and cannot be found by using standard search engines or visited by traditional Internet browsers. Web sites also use a scrambled naming structure that makes URLs effectively impossible to remember. The greater anonymity of the Dark Web provides a more secure foundation for cyber criminals to operate and collaborate. However, it also has a legal side and even its own version of social media.
Data Access Governance (DAG) is a data security technology allowing enterprises to gain visibility to sensitive unstructured data across the organization and to enforce policies controlling access to that data. Unstructured data consists of human-generated files (spreadsheets, presentations, PDFs, etc.). Traditionally, sensitive data was relatively well protected in structured systems in applications. However, in modern businesses an increasing amount of sensitive data is included in unstructured formats. This data often finds its way into less secured storage solutions like file shares, collaboration portals (such as SharePoint), cloud storage systems (OneDrive or Box) or email. DAG helps organizations gain visibility to this data no matter where it resides and it then helps enforce policies to assure it’s available only to users that should have access to it. DAG can complement a Data Loss Prevention (DLP) program.
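Content discovery in DAG and DLP tools often begins with pattern matching over unstructured text. The patterns below are simplified illustrations; real classifiers add validation logic (such as checksum tests on card numbers) to cut false positives:

```python
import re

# Illustrative patterns for sensitive content in unstructured files.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US Social Security number
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),  # 16-digit card number
}

def classify(text: str):
    """Return the sensitive-data categories found in a document."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

print(classify("Employee SSN: 123-45-6789"))          # ['ssn']
print(classify("Card: 4111 1111 1111 1111 on file"))  # ['credit_card']
print(classify("Quarterly roadmap slides"))           # []
```

Once files are classified this way, access policies can be applied to the flagged documents wherever they reside, whether on file shares, in SharePoint or in cloud storage.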
Cloud Workload Protection Platform (CWPP) is a term developed by Gartner to describe an emerging category of technology solutions primarily used to secure server workloads in public cloud infrastructure-as-a-service (IaaS) environments. CWPP capabilities vary across vendor platforms, but commonly include functions such as system hardening, vulnerability management, host-based segmentation, system integrity monitoring and application whitelisting. CWPPs enable visibility and security control management across multiple public cloud environments from a single console.
Cryptocurrency is a digital asset / virtual currency designed to work as a medium of exchange using strong cryptography to secure financial transactions, control the creation of additional units and verify the transfer of assets. Cryptocurrencies utilize a decentralized system of control rather than being controlled by a centralized banking authority as is typical for traditional currencies. The decentralized control of each cryptocurrency works through distributed ledger technology, typically a blockchain. Bitcoin is currently the most popular cryptocurrency.
SQL Injection (SQLi) is a type of application exploit known as a code injection technique. In an SQLi hack, an attacker adds malicious Structured Query Language (SQL) code to a web form input box to access targeted resources. This technique can also be used for database manipulation or to access information not intended for viewing, including sensitive company data, user lists or private customer details. SQLi is one of the most common forms of attack and can remain undetected for long periods.
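The standard defense is to use parameterized queries, so user input is always treated as data rather than executable SQL. The contrast can be shown with an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"  # classic SQLi payload typed into a login form

# VULNERABLE: string concatenation lets the payload rewrite the query,
# turning it into: SELECT * FROM users WHERE name = '' OR '1'='1'
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + attack + "'").fetchall()
print(len(rows))  # 1 -- the injected OR clause matched every row

# SAFE: a parameterized query treats the payload as a literal string.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attack,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "' OR '1'='1"
```

The same placeholder pattern (with syntax varying by driver) applies to every SQL database, and is why input concatenation into queries is prohibited by secure coding standards.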
Security Operations Centers (SOCs) are formalized functions in a company focusing on preventing, detecting, analyzing and responding to cybersecurity incidents. The development of a formal SOC is a typical step toward improving the maturity and effectiveness of a cyberdefense program. Regulations may require a 24/7 security monitoring program, which can be fulfilled through the development of a SOC with either internal staffing or the utilization of outsourced resources.
Security Information and Event Management (SIEM) is a software tool that allows security operations teams to identify potential incidents by consolidating and correlating log data from many other tools in the environment. SIEM technology commonly ingests log data from IDS/IPS, firewalls and endpoint security solutions as well as numerous other sources. SIEMs then use rule sets, which can be customized by the security operations team, to correlate the log data and trigger alerts when violations of the rule sets occur. Many regulations require that companies store and regularly review log data as part of their cyberthreat defense program. SIEMs are increasingly integrating User and Entity Behavior Analytics (UEBA) to provide advanced analytics related to activity in an environment, as well as Security Orchestration, Automation and Response (SOAR) technology to help streamline (or automate) the alert triage and incident response process.
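A correlation rule can be sketched in a few lines. The toy rule below (alert when one source IP generates three failed logins inside a sixty-second window) stands in for the customizable rule sets SIEMs provide; real rules span many log sources and event types:

```python
from collections import defaultdict

def failed_login_alerts(events, threshold=3, window=60):
    """Toy SIEM correlation rule over (timestamp, source_ip, outcome) events.

    Emits an alert when one source IP produces `threshold` failed
    logins within `window` seconds.
    """
    alerts, attempts = [], defaultdict(list)
    for ts, ip, outcome in sorted(events):
        if outcome != "FAIL":
            continue
        # Keep only failures still inside the sliding window.
        attempts[ip] = [t for t in attempts[ip] if ts - t < window] + [ts]
        if len(attempts[ip]) >= threshold:
            alerts.append((ip, ts))
            attempts[ip].clear()  # reset after alerting
    return alerts

events = [(0, "10.0.0.5", "FAIL"), (10, "10.0.0.5", "FAIL"),
          (20, "10.0.0.5", "FAIL"), (25, "192.168.1.9", "OK"),
          (500, "10.0.0.5", "FAIL")]
print(failed_login_alerts(events))  # [('10.0.0.5', 20)]
```

Correlating across sources this way is the SIEM's core value: no single firewall or endpoint log line is alarming on its own, but the aggregate pattern is.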
Shadow IT, also called Stealth IT or Client IT, is hardware or software used within an organization without official approval. Since the cloud economy has made massive amounts of technology readily accessible to a huge base of corporate buyers, it’s now common to see smaller departments within a company - or even individuals - independently procuring and utilizing SaaS solutions without explicit support from the IT department. Shadow IT can introduce security risks and compliance concerns because it hasn’t been officially vetted by the IT security team.
Software Development Lifecycle (SDLC) is a framework used to detail commonly accepted discrete phases - and associated requirements - comprising the full software development process. The formalized SDLC approach enables efficient delivery of high-quality applications and services. There are many SDLC models in general use, with DevOps and DevSecOps gaining in popularity due to their inherent agility and the ability to integrate security testing early in the process.
Static Application Security Testing (SAST) uncovers vulnerabilities in software during its static (not-running) state by analyzing such things as source code, byte code or binary code. SAST is employed during the programming and/or testing phase of the software development lifecycle. SAST is a white box testing methodology where the software is tested from the inside-out by examining the code for conditions that indicate a vulnerability might be present.
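A toy SAST-style check can be sketched with Python’s ast module, which parses source without ever executing it (the single flagged-call rule here is a simplified assumption; real SAST tools apply far richer data-flow and taint analyses):

```python
import ast

# Toy static rule: flag direct calls to eval/exec, a common SAST finding
DANGEROUS = {"eval", "exec"}

def scan_source(source: str):
    """Parse source in its static state and report risky call sites
    as (line_number, function_name) findings."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS:
                findings.append((node.lineno, node.func.id))
    return findings
```

Note that the scanner reports a *condition indicating a vulnerability might be present*; it cannot confirm exploitability, which is why SAST findings are typically triaged by a human.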
Ransomware is a type of malicious software, or malware, designed to deny access to, or "lock," a computer system until a sum of money (ransom) is paid. It’s often distributed as a trojan (malware disguised as a legitimate file) through phishing emails or links on an infected web site. Once a system is infected, the demand is typically displayed on the lock screen with directions on how to pay the ransom (which can be from hundreds to thousands of dollars, with cryptocurrency, which is difficult to trace, often being the preferred form of payment). Unfortunately, paying the ransom does not always result in restored access to files or removal of the ransomware. Some of the most damaging recent examples of ransomware include WannaCry, Petya and Locky.
Public Key Infrastructure (PKI) consists of a set of roles, hardware, software, policies, processes and procedures for creating, managing, distributing, using, storing and revoking digital certificates and managing public-key encryption. PKI authenticates and secures communications between two systems and is the foundation for the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, which protect and authenticate traffic over the Internet. It underpins the trusted business environment for digital signature and e-Commerce applications and is increasingly being utilized to secure IoT. Some experts say blockchain could become competitive with PKI.
Phishing is a fraudulent attempt to trick individuals into divulging sensitive information (usernames, passwords and banking details) by pretending to be a trusted source, often through an email communication. A phishing email may look legitimate and official, perhaps even containing the authentic logo of the supposed source, but it will include links to a fraudulent web site or some type of malware. Spear phishing is even more targeted and personalized in the way it’s presented to the victim. The success of spear phishing depends upon three things: the apparent source must appear to be a known and trusted individual; the message contains information supporting its validity; and the request seems to have a logical basis. To avoid being victimized by phishing attacks, organizations must train employees to be suspicious of unexpected requests for confidential information, to avoid divulging personal data in emails and to never click links in messages unless they are 100% sure of the source.
Patching is the modification of software, or the underlying computer system, to fix a security vulnerability, resolve a performance issue (bug) or add new features. Patching is usually a part of lifecycle management, and a formal patch management strategy and plan can improve the usability and performance of an environment. From a cyberthreat resilience standpoint, a robust patch management program is crucial, as a significant percentage of security incidents arise from systems with a known vulnerability for which a patch already exists.
Security Orchestration, Automation and Response (SOAR) describes technology platforms that aggregate security intelligence and context from disparate systems and apply machine intelligence to streamline (or completely automate) the incident detection and response process. One of SOAR’s primary functions relates to security orchestration and automation. Security orchestration integrates and streamlines workflows across various tools in order to improve security analyst efficiency and threat detection and response. Security automation is used to execute security operations tasks without human intervention. Many of the day-to-day processes in a Security Operations Center (SOC) are repetitive and time-consuming when performed manually. For example, the process of investigating a typical alert can be a mundane, highly labor-intensive effort, requiring the analyst to pivot between numerous tools to aggregate necessary data. SOAR platforms help SOCs deal with the acute shortage of security talent and overwhelming flow of security alerts that they must process.
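The orchestration idea, chaining disparate tools into one automated workflow, can be sketched as a toy playbook (the step functions and the local threat-intel table are hypothetical stand-ins for real product integrations):

```python
# Minimal playbook sketch: each step is a function representing one
# integrated tool; the playbook passes an evolving incident record
# between them without analyst intervention.
def enrich_with_threat_intel(incident):
    # Stand-in for a threat-intel lookup (assumed local table;
    # 203.0.113.0/24 is a reserved documentation range)
    known_bad = {"203.0.113.7"}
    incident["malicious"] = incident["src_ip"] in known_bad
    return incident

def contain(incident):
    # Decide the response action; a real playbook would call a
    # firewall or EDR API here
    incident["action"] = "block_ip" if incident["malicious"] else "close_as_benign"
    return incident

PLAYBOOK = [enrich_with_threat_intel, contain]

def run_playbook(incident):
    for step in PLAYBOOK:
        incident = step(incident)
    return incident
```

The value is in removing the manual pivot between tools: the enrichment and the response decision that an analyst would perform by hand run end to end automatically.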
Serverless Computing is an emerging cloud computing paradigm that allows application developers to focus on building applications and services without concern for the underlying server resources. In the serverless public cloud model, the cloud service provider (CSP) manages the underlying servers, infrastructure, operating system and application runtime, including associated requirements such as patching, versioning and availability. The serverless computing model has significant implications for security.
Security Orchestration is a method of integrating and streamlining workflows across disparate tools to improve both security analyst efficiency and threat detection and response. Modern Security Operations Centers (SOCs) typically use dozens of security tools to detect, investigate and remediate threats. More often than not, these tools do not "talk" to one another, requiring security teams to learn a variety of systems and navigate multiple dashboards to do their jobs effectively. Security orchestration addresses these challenges by integrating these tools and creating a more efficient threat detection and response workflow that typically requires input from each of these tools. Security orchestration is one part of a complete Security Orchestration, Automation and Response (SOAR) solution.
Software Defined Networking (SDN) and Software Defined WAN (SD-WAN) are enterprise LAN/data center networking approaches that use software to abstract underlying network elements and logically centralize network intelligence and control. Under traditional approaches to networking, individual devices function relatively autonomously and with only limited awareness of the state of the overall network. SDN provides centralized network awareness and allows for the development of more intelligent policies that can optimize functions such as bandwidth management, security, restoration, etc. SD-WAN takes SDN principles and extends them to the enterprise's Wide Area Network (WAN). SD-WAN has found application within enterprises that have a significant branch office footprint to simplify the deployment and management of network services across locations. In addition, it can lower bandwidth costs by reducing reliance on expensive Multiprotocol Label Switching (MPLS) links. SD-WAN can also serve as the foundation from which service providers offer value-added services such as firewall-as-a-service.
Red Team is an independent group that challenges an organization to improve its security effectiveness by assuming an adversarial role or point of view. The Red Team term is derived from war gaming, where the Red Team represents the aggressor whose job it is to test the capabilities of those on defense (Blue Team). In penetration testing, Red Team refers to a testing approach in which there are no restrictions imposed related to systems in scope or time windows to attack. This differs from typical pen tests, which often set limits around both of these parameters. Due to the more open-ended nature, Red Team tests provide the most accurate simulation of real-world attack scenarios.
Runtime Application Self-Protection (RASP) is a term popularized by Gartner to describe an emerging application security technology where specific functionality is built directly into an application or added into its runtime environment or underlying operating system. By instrumenting an application's code from this position, RASP is capable of monitoring application behavior while it’s running and can take real-time actions to minimize malicious exploits.
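The instrumentation idea behind RASP can be illustrated with a toy sketch that wraps a sensitive function at runtime so every call is inspected before it executes (the sink function and block-list here are hypothetical; a real RASP agent instruments the runtime far more deeply):

```python
import functools

def guard(func, blocked_substrings):
    """Wrap a sensitive function so each call is inspected at runtime."""
    @functools.wraps(func)
    def wrapper(arg):
        for bad in blocked_substrings:
            if bad in arg:
                # Real-time action: refuse the call instead of executing it
                raise PermissionError(f"blocked call: {arg!r}")
        return func(arg)
    return wrapper

def run_command(cmd: str) -> str:
    # Stand-in for a dangerous sink (e.g. a shell invocation)
    return f"executed: {cmd}"

# Instrument the sink in place, as a RASP agent would at load time
run_command = guard(run_command, blocked_substrings=["rm -rf", ";"])
```

Because the guard sits inside the application itself, it observes the actual values reaching the sink at runtime, which is precisely the vantage point RASP exploits.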
Operational Technology, Industrial Control Systems and Supervisory Control and Data Acquisition Systems (OT/ICS/SCADA) - OT represents systems used to monitor and manage the manufacturing equipment or industrial process assets of an organization. OT is differentiated from IT, which represents the information technology assets of an organization. OT is closely related to ICS (industrial control systems) and SCADA (supervisory control and data acquisition systems).
Network Traffic Analysis (NTA) and Network Behavior Analysis (NBA) are related terms describing technologies that use advanced analytics, machine learning and rule-based techniques to detect suspicious activity on enterprise networks. NTA analyzes raw traffic and/or flow records (for example, NetFlow), builds models reflecting normal behavior of network devices and users and triggers alerts when traffic deviates from the normal baseline. NetFlow data only indicates which devices are communicating over the network and the volume of the conversations, and is lower fidelity compared to raw traffic (the actual content of the conversations themselves). Unfortunately, raw traffic is increasingly being encrypted, even within enterprise networks. Therefore, network analytics often requires decrypting ubiquitous SSL/TLS traffic for inspection without compromising the security of the underlying data. Network analysis tools can be used to monitor both north-south and east-west traffic (sometimes called lateral communications) between systems in a data center.
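The baseline-and-deviation principle can be sketched minimally (assuming per-minute traffic-volume samples for one device; real NTA products model many more dimensions than volume alone):

```python
import statistics

def baseline(samples):
    """Model 'normal' as the mean and standard deviation of
    observed traffic volumes (e.g. bytes per minute)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, k=3.0):
    """Flag a new observation that deviates more than k standard
    deviations from the learned baseline."""
    return abs(value - mean) > k * stdev
```

A sudden large transfer from a normally quiet host would fall far outside the baseline and trigger an alert, even though no signature for the activity exists.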
Machine Learning (ML) is the study of algorithms and statistical models computer systems employ in performing a task by relying on patterns and inference, and it’s currently the most common type of artificial intelligence (AI) application. ML distills insight from large data sets, a process which wouldn’t be feasible using only human-guided analysis. Through machine learning, computers can autonomously (without continuous human guidance) "learn" to improve the quality of their analysis as they accumulate more experience in processing a particular problem.
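A minimal illustration of learning from data is a toy one-dimensional nearest-centroid classifier, which improves simply by averaging more labeled examples (the example values and labels are hypothetical):

```python
def train(examples):
    """examples: list of (value, label) pairs. 'Learn' one centroid
    (average value) per label; more data refines the centroids."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Classify a new value by the nearest learned centroid."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))
```

No rule was hand-written for where the class boundary lies; it emerges from the statistics of the examples, which is the essential idea ML scales up to vastly larger models and data sets.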
Microsegmentation is an emerging IT security best practice that implements granular isolation (segmentation) policies between data center workloads. Microsegmentation is a key tenet of the Zero Trust model, which recommends workloads be isolated from one another to enhance security and simplify security policy management.
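In practice a microsegmentation policy reduces to a default-deny allow-list between workload roles, as in this minimal sketch (the roles and ports are hypothetical):

```python
# Zero Trust-style policy: traffic between workloads is denied unless
# a rule explicitly permits it (default deny).
ALLOW = {
    ("web", "app", 8080),   # web tier may reach the app tier
    ("app", "db", 5432),    # app tier may reach the database
}

def is_permitted(src_role, dst_role, port) -> bool:
    """Granular isolation check between two data center workloads."""
    return (src_role, dst_role, port) in ALLOW
```

Note that the web tier cannot reach the database directly, so a compromised front end cannot move laterally to the data store; that containment of east-west traffic is the point of microsegmentation.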