Glossary of Terms

A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P  Q  R  S  T  U  V  W  X  Y  Z


Abuse Case — A scenario in which software is used in ways other than its intended purpose to violate some policy and (presumably) benefit the perpetrator. Although typically documented as a counterpoint to software use cases to drive security thinking into the development life cycle requirements gathering and design phases, abuse case scenarios are also useful in secure design reviews of existing systems.

Advanced Encryption Standard (AES) — The U.S. National Institute of Standards and Technology (NIST) FIPS 197 specification standardizing certain configurations of the Rijndael symmetric block cipher for encrypting electronic data. AES specifies use of a 128-bit fixed block size with 128-, 192-, and 256-bit keys. The US government adopted AES in 2001 to supersede the Data Encryption Standard (DES) originally published in 1977.

Agile Development — An alternative to traditional software development methodology that helps teams respond to unpredictability with cyclical, incremental, and iterative work cadences. Compare with Waterfall Methodology.

American Standard Code for Information Interchange (ASCII) — A 128-character encoding scheme commonly used to represent text as 8-bit characters in text files and computer programs. Extended ASCII, EBCDIC, ISO Latin 1, and UNICODE are similar schemes for different character sets.
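A quick illustration of the character-to-number mapping, using Python's built-in functions:

```python
# ASCII maps each of 128 characters to a number in the range 0-127.
assert ord("A") == 65          # character to code point
assert chr(97) == "a"          # code point to character

# Encoding text as ASCII yields one byte per character.
assert list("Hello".encode("ascii")) == [72, 101, 108, 108, 111]
```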

Application Framework — A library of code used to implement the fundamental structure of an application for a particular target environment, such as a specific operating system, or for a Web or mobile application.

Application Programming Interface (API) — A set of functions, objects, and protocols that specifies how one application or module can access the resources of another application or module.

Application Security (AppSec) — The use of people, processes, and technology to protect software (e.g., Web and mobile applications) from successful attack, including efforts such as creating fewer defects, training, outreach, and tooling. AppSec is increasingly referred to as “software security” to be more inclusive of software that doesn’t fit the definition of “applications,” such as device firmware, operating systems, components, middleware, etc.

Application Security Testing — The use of security analysis techniques on a specified (i.e., designed) or functional (i.e., running) instance of a software application to identify potential vulnerabilities or other risks. See also: Architecture Risk Analysis (ARA), Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), and Static Application Security Testing (SAST).

Architecture Risk Analysis (ARA) — An interview and document review-driven process for identifying design-level security vulnerabilities (known as “flaws”) in a software architecture to determine the asset risk resulting from those flaws.

Arithmetic Errors — A programming error that produces calculation results greater in magnitude than what a given register or storage location can contain or represent.
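A minimal sketch of such an overflow, simulating an 8-bit register in Python by masking results to 8 bits:

```python
# An 8-bit unsigned register can hold 0-255; results beyond that
# wrap around (overflow). Masking with 0xFF simulates the register.
def add_u8(a, b):
    return (a + b) & 0xFF

assert add_u8(200, 100) == 44   # the true result, 300, wraps past 255
```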

Array Overflow / Array Overrun — Software flaws similar to Buffer Overflow.

Attack Pattern — A grouping of security attacks that apply the same general approach or have similar objectives. General attack pattern examples include the use of a Trojan Horse and phishing. Software attack pattern examples include injection and race conditions. Attack patterns are useful for organizing the proliferation of publicized tactics into more manageable groupings (to inform an exercise such as threat modeling, for example), as well as to derive customized variant tactics that may be more effective at achieving the same objective (as might be performed in penetration testing, for example).

Attack Surface — In software security, the collection of interface points that may be attacked and potentially penetrated by a threat agent (i.e., attacker) to obtain unauthorized access to an asset. An attack surface contains the sum of entry points, but we are most interested in those that are exploitable (i.e., vulnerable) to obtain such access via attack vectors (i.e., paths to exploit the vulnerabilities).

Attack Vector — A pathway through which an attacker can exploit a vulnerability in a given piece of software.

Authentication — A mechanism for confirming a user’s (or other entity’s) claimed identity via the provided credentials. In software, this is typically performed by comparing credentials (e.g., username and password) provided by a user to a trusted store of such credentials (e.g., username/password database).
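A minimal sketch of comparing provided credentials against a trusted store, using Python's standard library (the function names here are illustrative, not from any particular framework):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a slow, salted hash suitable for storage in a credential store.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)
stored = hash_password("s3cret", salt)   # what the trusted store holds
assert verify("s3cret", salt, stored)
assert not verify("wrong", salt, stored)
```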

Authentication Token — A piece of information given to a user after successful authentication that subsequently acts as proof of authentication, relieving the user from having to re-authenticate upon each new attempted action within a system or application.

Automated Ethical Hack (AEH) — See Vulnerability Assessment.

Automotive Safety Integrity Level (ASIL) — A risk classification scheme used by the automotive industry and defined by ISO 26262 - Functional Safety for Road Vehicles standard.


Backdoor — (1) A method of bypassing established authentication or other security processes to obtain access to a system. Typically designed into a system for purposes of providing access to a select group of users (e.g., vendor maintenance personnel or law enforcement) in an undisclosed, undocumented fashion. (2) A method of bypassing established authentication or other security processes placed without approval for the purposes of facilitating ongoing unauthorized access.

Binary Code — Code that has been compiled into a binary format so that a computer can read and execute it; not human-readable. Compare with Source Code.

Bill of Materials (BOM) — For software, this is a list of third-party code components inside a software package or firmware.

Black Box — A type of security analysis or testing that assumes the assessor/tester/adversary possesses little or no prior knowledge of the target system beyond that which can be gleaned from easily accessible sources (e.g., the Internet). Contrast with White Box.

Black Hat — A hacker who attempts to gain unauthorized access to a computer or computer network or data asset.  Compare with White Hat.

Buffer Overflow / Buffer Overrun — A programming error related to the mismanagement of memory that may result in unauthorized memory access. This often results in a vulnerability that can be exploited by inputting carefully crafted data that “overflows” into adjacent areas of memory, potentially altering existing code or hijacking control of software execution. The programming languages C and C++, which provide no native protection against accessing or overwriting data in any part of memory, are most susceptible. Bounds checking in these languages may help prevent buffer overflows.

Bug — An implementation-level software problem or defect in code. Contrast with Flaw.


Caching — The temporary storage of content (e.g., Web pages, CPU instructions) to accelerate subsequent performance (i.e., by being able to access the content faster), usually in some high-speed memory or disk space (i.e., cache).

CAN Bus — An automotive standard that allows microcontrollers and devices to communicate with each other without a host computer.

Chief Information Security Officer (CISO) — An organizational executive usually responsible for protecting the data and information security of the organization.

Clickjacking — An attack technique that consists of tricking a targeted user into clicking something different than what they think they’re clicking, causing the victim to perform a sensitive operation on behalf of the attacker.

Client-Side Trust — A software design practice that automatically authorizes actions performed by an endpoint component (i.e., client) deemed trustworthy by a server. This is poor software design because it typically bypasses explicit—and more reliable—mechanisms for authorizing activity, such as server-side authentication. It also carelessly relies on the integrity of the computing environment that surrounds the client, which in many cases is unreliable (e.g., PCs infected with malware, or mobile devices that are easily physically accessible to attackers). In contrast, a better practice is to validate requests on the server-side (e.g., by comparing with more trustworthy values that have, for example, been stored in a well-managed data center).

Cloud Computing — A model of enabling on-demand network and computing capabilities via a pool of shared configurable resources. These resources include networks, storage appliances, software applications, and services.

Cloud Security — The sum of all the security risk management techniques applied to protect applications and data stored and executed “over the Internet” (i.e., distributed across computing infrastructure usually maintained by a third party). For example, consider adaptations to security in the software development life cycle (SDLC) applied to applications developed or deployed via cloud.

Code Coverage — The amount of source code exercised by a specific test suite. High code coverage tests more of the code than low code coverage.

Code Decay — A process by which an application gradually deteriorates over time as a result of individual components becoming more and more vulnerable.

Common Vulnerabilities and Exposures (CVE) — From The MITRE Corporation, a publicly available dictionary of common names for publicly known information security vulnerabilities, in the form of the year of disclosure followed by an assigned number (e.g., CVE-2016-0345).

Common Weakness Enumeration (CWE) — From The MITRE Corporation, a publicly available dictionary of common software weaknesses that may lead to vulnerabilities.

Component — An individual item in a Bill of Materials report.

Concurrency Bugs — A programming property where several computations are executing simultaneously, some with errors.
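A sketch of the classic shared-counter case in Python; without the lock, the read-modify-write steps of concurrent threads may interleave and silently lose updates:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without this lock, two threads can both read the same value,
        # both add one, and both write it back, losing one update.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 400_000    # deterministic only because of the lock
```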

Container Security — The protection of containers, which provide an isolated, discrete, and separate environment for applications in cloud computing. Containers include only what is necessary to run the given app, hence the nickname JeOS, or "Just enough OS."

Continuous Integration/Continuous Delivery (CI/CD) — A software engineering practice that combines moving developer changes to the main software repository frequently (e.g., one or more times per day) and ensuring the application can be moved into production at any time.

Cost — The total cost of the impact to an organization as the result of a particular threat experienced by a vulnerable target. Part of the assessment equation Risk = Threat x Vulnerability x Cost. Compare with Risk.

Cross-Site Request Forgery (CSRF) — A type of attack where an attacker causes a user to perform an unwanted or unintended action on an application where the user is already authenticated. These attacks usually intend to cause application state changes because the attacker cannot see the application’s response.

Cross-Site Scripting (XSS) — A type of attack where an attacker injects malicious executable scripts into the code of a trusted application or website. This is typically exploited by tricking a user into executing a maliciously-crafted link that appears to be legitimate, but due to a vulnerability in the affected application, executes code of the attacker’s choice on the user’s system.
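The usual defense is to escape untrusted input before placing it into a page, as in this Python sketch using the standard library's `html.escape`:

```python
import html

user_input = "<script>alert('xss')</script>"

# Escaping turns markup characters into harmless entities, so the
# browser renders the payload as text instead of executing it.
safe = html.escape(user_input)
assert safe == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```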

Cascading Style Sheets (CSS) — A language used to define the style elements and presentation of a Web page, along with the precedence order in case of a conflict.

Cybersecurity — The sum of all security risk management techniques applied to protect a collection of information systems from exploitation that causes unauthorized access or harm.

Cyber Threat — A possible (but not actual) malicious attempt to damage or disrupt a computer system.

Cyber Warfare — An attack or set of attacks on technology (e.g., a network or piece of software) implemented by a nation-state or equivalent actor for purposes of furthering its interests, whether territorial, political, psychological, etc.


Data Breach — Also known as a data leak. Any unauthorized or otherwise improper release of sensitive information.

Data Security — The sum of all security risk management techniques applied to protect a collection of data from unauthorized access or harm.

Dead Code — A programming term for code that executes but whose results are never used by any other computation, wasting computation cycles and potentially consuming memory.

Defect — A problem in the hardware, software, or related processes for a system. The problem may never be reachable by attackers, may never be discovered even if reachable, and may not cause any harm even if exploited. On the other hand, a defect may be easily exploitable and exploitation may be catastrophic. Implementation vulnerabilities (bugs) and design vulnerabilities (flaws) are both defects.

Denial of Service (DoS) — The act of preventing legitimate users of a system (or entities on a system) from accessing or using the system or its resources. Localized disruption of a single compromised computer system may include a software crash.

Design Pattern — A repeatable solution to a commonly occurring design problem (e.g., a particular flaw) in systems, software, networks, data models, etc.

DevOps — A shortened term for the combined practice of software development and IT operations, emphasizing close collaboration between the two.

Distributed Denial of Service (DDoS) — A coordinated disruption of legitimate access to a specific online service involving more than one compromised computer system often via a botnet.

Division by Zero — A programming error that occurs when a computer attempts to divide by 0, which can halt execution or produce cascading failures. This is especially true in some languages such as C or C++. In Java, integer division by zero throws an exception.
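In Python, for example, the error surfaces as a catchable exception rather than a crash:

```python
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:    # Python raises rather than halting the process
        return None

assert safe_divide(10, 2) == 5.0
assert safe_divide(10, 0) is None
```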

Dumpster Diving —  The act of sifting through an organization’s discarded or recyclable materials in search of improperly disposed informational assets that can be subsequently leveraged in an attack.

Dynamic Analysis — A testing technique that involves executing an application and providing inputs or other failure conditions meant to find defects in real time. Contrast with Static Analysis.

Dynamic Application Security Testing (DAST) — A form of dynamic analysis performed specifically to find security defects, usually in Web applications.


Electronic Control Unit (ECU) — An embedded system that controls one or more of the electrical systems or subsystems in a motor vehicle, often to control a specific function such as brakes or the engine.

Electronic Design Automation (EDA) — Software tools for the design of electronic systems such as printed circuit boards and integrated circuits.

Encryption — The transformation of plaintext data into indecipherable data (ciphertext) in a manner that permits access only to authorized users (e.g., those who possess a corresponding decryption key that can reverse the transformation to produce the original plaintext).
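As a toy illustration only (not a real cipher; production code should use a vetted algorithm such as AES via an established library), a symmetric XOR transformation shows the plaintext-to-ciphertext round trip:

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy keystream: repeatedly hash the key until it covers the message.
    stream = b""
    block = key
    while len(stream) < len(data):
        block = hashlib.sha256(block).digest()
        stream += block
    # XOR is symmetric: applying it twice restores the original input.
    return bytes(d ^ s for d, s in zip(data, stream))

key = b"shared secret"
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                    # indecipherable without the key
assert xor_cipher(ciphertext, key) == plaintext   # decryption reverses it
```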

Encryption Key — A piece of information, usually a string of bits that determines the functional output of an encryption algorithm or cipher.

Enterprise Security Architecture — The result of enterprise architecture efforts that implement security-related people, process, and technology controls throughout the enterprise.

Ethical Hacking — An authorized attempt to gain unauthorized access to a computer system, application, or data, often by duplicating strategies and actions of malicious attackers.

Exploit — (noun) A combination of technology, technique, and knowledge that can be applied to a vulnerability and cause a security compromise.

Exploit — (verb) The execution of a vulnerability to cause a security compromise.


Failure Mode and Effects Analysis (FMEA) — One of the first system techniques for failure analysis within systems, used in the automotive industry.

False Negative — A defect that is a real vulnerability, but is not found during security testing that could be reasonably expected to have found it (e.g., a simple SQL injection vulnerability that a DAST tool misses or doesn’t report). Sometimes referred to as a ‘Type II’ error.

False Positive — Any circumstance, whether a defect or not, that is reported as a vulnerability during security testing, but is not actually a vulnerability (e.g., a DAST-reported security defect that isn’t an actual security defect). Sometimes referred to as a ‘Type I’ error.

Fault Injection — A technique for improving the coverage of a test by introducing faults to test code paths, in particular error handling code paths that might otherwise rarely be followed.

File Transfer Protocol (FTP) — A common network protocol used to transfer data from one system to another. It operates over an unencrypted channel where files can be easily viewed and tampered with during transit.

Firewall (Network) — A hardware or software device used to filter or control network traffic, usually between a trusted zone (e.g., an internal network) and an untrusted zone (e.g., the Internet).

Flaw — A design-level or architectural defect in specifications or software.

Free and Open Source Software (FOSS) — Software in which the source code is publicly available and free. See Open Source Software.

Fuzz Testing — A software testing method where malformed input is used to trigger a software crash or unexpected result.
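A minimal fuzzer sketch in Python; `first_field` is a hypothetical buggy target that assumes well-formed input:

```python
import random

def first_field(record):
    # Hypothetical parser with a bug: assumes a comma is always present.
    return record.split(",")[1]

random.seed(7)   # fixed seed keeps the example deterministic
findings = []
for _ in range(200):
    fuzz = "".join(chr(random.randrange(32, 127)) for _ in range(6))
    try:
        first_field(fuzz)
    except Exception as exc:
        # An unhandled crash on malformed input is a fuzzing finding.
        findings.append((fuzz, type(exc).__name__))

assert any(name == "IndexError" for _, name in findings)
```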


Governance, Risk Management, and Compliance (GRC) — A common enterprise term where Governance is established and executed by the board of directors (BOD) toward achieving specific goals. Risk management is anything that may hinder the organization’s ability to achieve its objectives. Compliance ensures that the organization’s policies and procedures, laws and regulations, and strong and efficient governance contribute to the organization's overall success.


Hacker — Someone who takes something apart. Commonly, a computer hacker who attempts to deconstruct code or network traffic to learn more about it.  Compare with Black Hat  and White Hat.


Information Security (InfoSec) — The practice of protecting information and information systems from unauthorized access or harm. The term is commonly used to describe all aspects of computer security.

Information Sharing and Analysis Organization (ISAO) — A group formed to gather, analyze, and disseminate critical information as outlined in Presidential Policy Directive 21. They are different from ISACs in that they are not tied directly to critical infrastructure sectors.

Infrastructure as a Service (IaaS) — A cloud service model that delivers “over the Internet” computing infrastructure as a service.

Injection Attack — An attack in which specially crafted input triggers the exploitation of a software or computer vulnerability, most often found in SQL, LDAP, XPath, and NoSQL queries, OS commands, XML parsers, SMTP headers, and program arguments. Can be detected by fuzz testing.

Integrated Development Environment (IDE) — A software tool that facilitates software development. An IDE generally consists of a source code editor, build automation tools, and a debugger.

Interactive Application Security Testing (IAST) — The combination of SAST and DAST into an interactive testing solution. IAST typically encompasses the use of software test harnesses (i.e., agents) to monitor an application being tested using DAST, as well as the use of corresponding SAST output to further tune testing—enhancing the overall application security testing in terms of coverage, speed, and accuracy.

Internet of Things (IoT) — The entire network of devices—such as cars, sensors, appliances, buildings, cameras, etc.—that have the technology and protocols allowing these devices to collect and share data.


Jailbreaking — The process of removing the limitations imposed by Apple on devices running iOS by using a custom-built kernel or other attacks to obtain root access. Equivalent to “rooting” an Android device.

Java — A programming language that is concurrent, class-based, and object-oriented.

JavaScript — A dynamic, weakly-typed, object-oriented, cross-platform scripting language commonly used in Web pages to execute client-side functions in a Web browser.


Key Management (Cryptography) — The process of managing cryptographic keys (e.g., for encryption or signatures), including generation, exchange, storage, use, revocation, and replacement of the keys.


Malicious Code — Code intended to cause undesirable effects in the software or system within which it runs, including effects such as denial of service, unauthenticated access, data exfiltration, participation in a botnet, etc. See Malware.

Malware — Short for “malicious software.” A general term for any program, script, or other software designed to disrupt system operations, gather sensitive information, gain unauthorized privileges, or perform any other unwanted action. See Malicious Code.

Managed Security Services — Outsourced security functions operated by a third party, usually used for the purpose of cutting costs.

Man-in-the-Middle (MitM) Attack — A form of active eavesdropping in which the attacker sits in the middle of an existing communication between victims, or makes independent connections with the victims, and relays and possibly alters messages between them. A successful MitM attack makes the victims believe they are talking directly to each other over a private connection when, in fact, the entire conversation is controlled by the attacker.

Manual Ethical Hack (MEH)  — See Vulnerability Assessment  and Penetration Testing.

Memory Leaks — A programming failure to release memory that is no longer needed, which can degrade performance or eventually exhaust available memory.

Mitigation — Reducing the severity or impact of an issue or vulnerability discovered in a security test, often through compensating controls such as log monitoring, application firewalls, and temporarily removing access or functionality. Contrast with Remediation.

Mobile Application Security Framework — A set of technologies that, when used correctly, offer additional security capabilities to mobile applications, such as advanced authentication and root access detection.


National Institute of Standards and Technology (NIST) — The federal technology agency that works with industries to develop and apply technology, measurements, and standards.

National Vulnerability Database (NVD) — A NIST database that maintains security checklists, security related software flaws, misconfigurations, product names, and impact metrics.

Network Security — The process of preventing unauthorized activity across a computer infrastructure or network.

Node.js — An open-source, cross-platform runtime environment used for developing server-side Web applications.

Null Pointer Bugs — A programming term for dereferences of null, dangling, or wild pointers that do not point to a valid object, producing crashes or unpredictable behavior.


Open Source Software (OSS) — Generally, source code that is available for use, modification, and distribution by anyone for any purpose. The definition has evolved significantly over the past 20 years and continues to do so.

OpenSSL — An open source implementation of the SSL and TLS protocols.

OWASP — The Open Web Application Security Project, a nonprofit community focused on improving software security.

OWASP Top Ten — An OWASP effort that presents a list of what working groups consider the most critical Web application security vulnerabilities.


Path Manipulation — A class of attacks related to the abuse of file system paths, typically misdirecting a query to obtain unauthorized access to data. Also called path traversal attacks or directory traversal attacks. Common examples include the “dot dot slash” attacks that permit attackers to backtrack through a Web server directory structure and obtain sensitive data.
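A common defense is to resolve the requested path and confirm it remains under the intended base directory; a Python sketch (the base directory here is hypothetical):

```python
import os

BASE = os.path.realpath("/var/www/files")   # hypothetical content root

def is_safe(requested: str) -> bool:
    # Resolve the full path, then confirm it stays under the base directory.
    full = os.path.realpath(os.path.join(BASE, requested))
    return os.path.commonpath([BASE, full]) == BASE

assert is_safe("report.pdf")
assert not is_safe("../../etc/passwd")      # "dot dot slash" backtracking
```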

Payment Card Industry Security Standards Council (PCI SSC) — An industry self-regulating organization formed by Visa, MasterCard, American Express, Discover, and JCB to improve the security of credit card practices.

Payment Card Industry Data Security Standard (PCI DSS) — A set of security regulations for businesses that process credit or debit cards.

Penetration Testing — Goal-oriented security testing that emphasizes an adversarial approach (i.e., simulating attacker methods) in pursuit of one or more specific objectives (e.g., capture the flag). Contrast with Vulnerability Assessment.

Personal Identification Number (PIN) — A numeric sequence used to assist in verifying a user’s identity. PINs are usually a “second factor” in authentication (something you know) used in conjunction with, for example, a credit card or smartphone (something you have).

Phishing — A strategy to deceptively get someone to perform an otherwise undesirable action (e.g., divulge sensitive information or transfer assets) by posing as a trusted party in an electronic communication.

Predictable Session Identifiers — A vulnerability in which Web applications produce guessable session identifiers, facilitating attacks such as session hijacking.
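The fix is to draw identifiers from a cryptographically secure source, e.g. Python's `secrets` module, rather than a counter or timestamp:

```python
import secrets

# Weak: sequential identifiers let an attacker guess a neighbor's session.
weak_ids = [f"session-{n}" for n in range(3)]   # trivially predictable

# Strong: identifiers drawn from a cryptographically secure source.
strong_id = secrets.token_urlsafe(32)           # unguessable, URL-safe string
assert len(strong_id) >= 32
assert len({secrets.token_urlsafe(32) for _ in range(100)}) == 100  # no repeats
```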

Production Part Approval Process (PPAP) — An automotive supply chain process for establishing confidence in component suppliers and their production processes.

Procurement Language — A set of expectations set forth by the acquirer when obtaining third-party components or software.

Python — A general-purpose, interpreted, dynamic programming language that was in fact named after Monty Python.


Red Teaming — Goal-based, adversarial testing in which a person or group (the red team) evaluates the ability of an organization’s people, processes, and technologies to withstand a targeted attack that may use a variety of techniques across multiple organizational aspects (e.g., physical, personnel, network, operations, process, etc.)

Remediation — Fixing an issue or vulnerability identified in a security test. Contrast with Mitigation.

Requirement (Software) — A description of a need that must be met by software. Requirements are often categorized as functional (e.g., a requirement to use a certain type of authentication) or non-functional (e.g., expressing the emergent behavior of a system to address a negative situation or specific security function).

Reverse Engineering — The “bottom-up” analysis and discovery of technological principles and functionality of a device, object, or system through analysis of its structure, function, and operation.

Risk — (1) The probability that an undesirable event will actually occur. (2) A measure of the potential impact given an undesirable event occurring. Risk assessment is a combination of threat, vulnerability, and cost; if any of these is 0, then the risk is 0. Risk = Threat x Vulnerability x Cost.

Risk-Based Security Testing — A type of software testing that prioritizes the security testing of features and functions based on the associated security risk.

Risk Management  — The ongoing business process of identifying and prioritizing issues based on the risk they represent, followed by the concerted application of resources to reduce or monitor the risk.

Rooting — The act of removing protections put in place on Android devices, allowing rogue code to be downloaded and executed on a device. Compare with Jailbreaking.

RSA — An asymmetric encryption system (the encryption key and the decryption key are different) where security is based on the difficulty of factoring the product of two large prime numbers. Key generation creates a public key that can be freely distributed and used for encryption, and a private key that is retained as a secret and used for decryption.
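A toy walk-through with tiny primes (real keys use primes hundreds of digits long) shows the public/private round trip:

```python
# Toy RSA for illustration only; never use such small numbers in practice.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key
```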

Ruby — An object-oriented, general-purpose programming language influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp.

Runtime Application Self-Protection (RASP) — An application security approach that encompasses a number of technological techniques to instrument an application so attacks can be monitored as they execute and, ideally, blocked in real time. In concept, RASP promises to leverage an application’s unique awareness of anomalous activity; but in practice, such anomaly detection-oriented approaches depend heavily upon baselining “normal” in order to successfully catch “anomalies.”


SANS Institute — The common business name of the Escal Institute of Advanced Technologies, Inc., which provides InfoSec training and awareness.

SANS Top 25 — A list of the 25 most dangerous software errors, identified by the SANS Institute in collaboration with MITRE (the CWE/SANS Top 25).

Satellite — An internal group indirectly responsible for software security. A satellite is often a virtual group that interacts directly with application teams in collaboration with and in addition to the software security group (SSG), without directly reporting to the SSG. A satellite is an important contributing factor to a successful software security initiative (SSI).

Secure Coding — The practice of writing software in such a way that it is resistant to attack by malicious people or programs.

Secure Design — The practice of constructing a software foundation that is resistant to attack by malicious people or programs, usually by following well-known secure design patterns and accounting for relevant risks that affect the given system.

Security Operations — A set of people, processes, and technology focused on monitoring, finding, and responding to security issues in operational environments.

Security Policy — A set of mandatory rules (e.g., constraints on behavior and decisions) aimed at governing certain security aspects of a system or organization.

Signoff — A process of gating software development so that code is tested throughout the software development life cycle and not only at the very end.

Smart Grid — An electrical grid that includes various types of computerized equipment (e.g., meters, appliances, generators, batteries, etc.) and the digital communications that allow them to interoperate safely and efficiently.

Social Engineering — A low-tech attack strategy relying on deceiving humans to bypass security controls.

Software Maturity Model — A process used to analyze maturity in a given business process, in this case software development.

Software Security — The overall process of designing, engineering, and testing software so that it continues to function correctly (i.e., as expected) even under malicious attack. A superset of Application Security.

Software Security Group (SSG)  — An internal organizational group directly charged with managing and/or executing software security efforts to achieve the SSI objectives. See also Software Security Initiative  and Satellite.

Software Security Initiative (SSI) — All of the activities undertaken for the purpose of building secure software, encompassing business, social, and organizational aspects, as well as process and technology.

Software as a Service (SaaS) — A cloud service model that makes software (and applications) hosted by a third party available over a network, usually the Internet.

Software Development Life Cycle (SDLC) — A framework defining the activities performed throughout the software development (or application) life cycle, usually spanning planning, creation, testing, deployment, maintenance, and eventual removal.

Source Code — The human-readable form of a program as written by its developers. Before it can run, source code must be compiled into binary code or processed by an interpreter. Compare with Binary Code.

Source Code Review (SCR) — Review of software code using automated or manual approaches to identify potential security vulnerabilities. See also Static Analysis and SAST.

Spoofing — The act of faking an action or request on behalf of a legitimate source (e.g., an email message with a falsified sender address spoofing the source of the message, or assuming another user’s identity for the purpose of gaining unauthorized access to a resource).

SQL Injection — An injection attack against SQL-based applications. The attacker inserts SQL command sequences into the application through an unexpected interface (e.g., a Web form field expecting a username). Instead of treating the input purely as data, the application executes the embedded commands. The impact of SQL injection attacks can be quite severe because the injected commands often run at a highly privileged level (e.g., as the SQL database administrator).
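
The difference between splicing input into SQL text and parameterizing it can be sketched with Python's built-in sqlite3 module (the table and data below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

username = "x' OR '1'='1"  # attacker-supplied "username"

# VULNERABLE: the input is spliced into the SQL text, so the
# OR '1'='1' clause matches every row instead of one username.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + username + "'"
).fetchall()
print(len(rows))  # 1 -- the injected predicate matched the whole table

# SAFE: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (username,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "x' OR '1'='1"
```

The safe form works because the database driver binds the parameter after the statement is parsed, so the input can never change the query's structure.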

Spyware — Malware designed to secretly monitor a user’s activities on their computer, either reporting the user’s behavior to the malware’s designer or taking some malicious action based on the information acquired.

Static Analysis — Code review that attempts to identify security vulnerabilities in “static” software, i.e., software examined in its source or object code form rather than while executing. See also Source Code Review and SAST.

Static Application Security Testing (SAST) — Analysis of an application's source code to identify vulnerabilities without executing the program. Compare with DAST and IAST; see also Static Analysis.
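
The core idea can be illustrated with a toy checker built on Python's ast module: it inspects source text for direct calls to eval(), a common injection sink, without ever running the program. Real SAST tools are far more sophisticated; this is only a sketch of the principle.

```python
import ast

# Hypothetical source under review (never executed by the checker).
SOURCE = """
user_input = input()
result = eval(user_input)  # dangerous: executes attacker-controlled text
"""

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of direct eval() calls in the source text."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

print(find_eval_calls(SOURCE))  # [3]
```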

Structured Query Language (SQL) — A specialized programming language designed for managing data in relational database management systems.


Tailgating (Piggybacking) — Gaining unauthorized physical access to a building by following an authorized individual into the premises.

Taint Checking — A feature of programming languages such as Perl and Ruby that marks data derived from untrusted input as “tainted” and prevents it from being used in sensitive operations (e.g., shell commands or database queries) until it has been explicitly sanitized, guarding against attacks such as SQL injection and command injection.
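
The idea can be sketched in Python with a hypothetical Tainted wrapper. Perl and Ruby implement taint tracking inside the interpreter itself; this toy version only illustrates the concept.

```python
class Tainted(str):
    """A string marked as originating from an untrusted source."""

def run_query(sql):
    """A sensitive 'sink' that refuses tainted input outright."""
    if isinstance(sql, Tainted):
        raise ValueError("refusing to execute tainted input")
    return "executed: " + sql

def sanitize(value):
    """Toy sanitizer: drop characters with special meaning in SQL
    literals. Joining produces a plain str, which clears the taint."""
    return "".join(ch for ch in value if ch not in "'\";")

user_input = Tainted("alice'; DROP TABLE users;--")
try:
    run_query(user_input)                # rejected while still tainted
except ValueError as err:
    print(err)
print(run_query(sanitize(user_input)))   # allowed once sanitized
```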

Test-Driven Development (TDD) — A software development process in which a component’s desired behavior is first captured in unit tests; the implementation is then written, and refined, to make those tests pass.
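
A minimal Python sketch of the rhythm, using a hypothetical slugify() function:

```python
def slugify(title: str) -> str:
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

# Step 1 (red): these tests were written before slugify() existed and
# initially failed.
# Step 2 (green): the one-line implementation above makes them pass.
# Step 3 (refactor): with the tests as a safety net, the code can be
# restructured freely without changing its specified behavior.
assert slugify("Hello World") == "hello-world"
assert slugify("  Secure   Coding  ") == "secure-coding"
```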

Threat — A composite of a threat agent, threat motivation, threat objective, threat method, and one or more attacks. Given dozens or hundreds of types of each (actors, motivations, objectives, methods, and attack actions), it is not feasible to enumerate all threats generically, and even for a specific attack surface point on a specific system the list may be very long. A specific combination of threat, vulnerability, and controls yields risk (Risk = Threat x Vulnerability x Cost) that the threat can bypass the controls and exploit the vulnerability to achieve a goal undesirable to the system owners or other stakeholders. Compare with Risk.

Threat Agent — Malicious principal (typically a human being) that has one or more motives to cause harm to a system or its users. Synonymous with Threat Actor.

Threat Modeling — A type of security analysis that documents threat agents, skill, motivation, attack surface, attack vectors, discoverability, probability, impact, and mitigation information for a system to facilitate risk analysis. Identified risks are used as input to change design and to improve downstream security activities like penetration testing and secure code review.

Traceability Matrix — A table used to track the completeness and show the correctness of many-to-many relationships (e.g., software requirements vs. test cases). In threat modeling, a threat traceability matrix maps factors including threat agents, skill, motivation, attack surface, attack vectors, discoverability, probability, impact, and mitigation, providing a more formal correlation of these attributes to enhance the outcome of the threat modeling exercise.
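
As a sketch, a requirements-to-tests traceability matrix can be represented as a simple mapping and checked for coverage gaps. All requirement and test-case IDs here are hypothetical.

```python
# Each requirement maps to the test cases that verify it.
matrix = {
    "REQ-1 (login requires password)": ["TC-101", "TC-102"],
    "REQ-2 (passwords stored hashed)": ["TC-201"],
    "REQ-3 (sessions expire)":         [],   # gap: no covering test
}

# Completeness check: flag requirements with no covering test case.
uncovered = [req for req, tests in matrix.items() if not tests]
print(uncovered)  # ['REQ-3 (sessions expire)']
```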

Trojan — Software that masquerades as a beneficial program while surreptitiously performing malicious actions, such as destroying data or damaging the system.


Unit Testing — The process of testing individual units of software (e.g., functions, methods, or classes) in isolation to verify that each behaves as expected.

Unsafe Environment Variable — A string used to configure operating system or program execution whose use or interpretation is done incorrectly (or cannot ever be done safely) and therefore represents a vulnerability.
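
As a sketch (REPORT_DIR is a hypothetical variable an attacker could influence), consider a program that splices an environment variable into a shell command line: the ';' inside the value would start a second command. Validating the value, and invoking the tool with an argument list rather than through a shell, avoids the problem.

```python
def build_command_unsafe(report_dir):
    # UNSAFE if handed to a shell (e.g., subprocess.run(cmd, shell=True)):
    # the shell parses the variable's content as code, not as a filename.
    return "ls " + report_dir

def validate_dir(report_dir):
    # SAFER: reject unexpected shell metacharacters, and run the tool
    # with an argument list (subprocess.run(["ls", report_dir])) so no
    # shell ever parses the value.
    if any(ch in report_dir for ch in ";|&$`\n"):
        raise ValueError("unexpected characters in directory value")
    return report_dir

hostile = "/tmp/reports; rm -rf ~"   # e.g., from os.environ["REPORT_DIR"]
print(build_command_unsafe(hostile)) # ls /tmp/reports; rm -rf ~
try:
    validate_dir(hostile)
except ValueError as err:
    print(err)
```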

Unsafe System Call — (1) A call into kernel software that can block, causing the CPU to become non-responsive until the call returns, effectively rendering the system inoperable. (2) A call into kernel software that inappropriately allows the caller to escalate execution or memory access privileges.


Vendor Assessment (Third-Party Assessment) — An assessment of the risk associated with using software built by a third party given their processes, maturity, resources, and the state of the software they’ve provided.

Virtual Private Network (VPN) — A method of securely connecting two local private networks (e.g., your home and your office) over a public network (e.g., the Internet), enabling systems to safely send and receive data as if they were directly connected.

Virus — A malicious software program specifically designed to replicate to other resources or systems once installed on a computer. Viruses often interfere with a computer’s operation and/or copy, corrupt, or delete data.

Vulnerability — (1) A bug in code or a flaw in software design that can be exploited to cause harm. Exploitation is usually perpetrated by an attacker, but can also occur as a result of authorized actions. (2) A lapse in security procedures or a weakness in internal controls that allows exploitation that would result in a security breach.

Vulnerability Assessment — A testing process used to identify and assign severity levels to as many security defects as possible in a given timeframe. The process may involve automated and manual techniques with varying degrees of rigor. The emphasis is on comprehensive coverage. Vulnerability assessments might be targeted at different layers of technology, the most common being host-, network-, and application-layer assessments. Contrast with Penetration Testing.


Waterfall Model — A sequential software development process in which progress is achieved steadily downwards through the different phases of the process, often in a single pass with no iteration. These phases are Requirements, Architecture, Design, Code, Test, Deployment, and Operations. Contrast with Agile Development.

Web 2.0 — A category of Web technologies and applications that offer advanced information sharing, collaboration, and interoperability capabilities based on evolution from static to dynamic Web pages, user-generated content, and the weaving in of social media.

Web Application (Web App) — A client-server application where the client is a Web browser and the server is an application reached through an HTTP-based protocol.

White Box — A type of analysis or testing in which full information about the target system is used by the analysts or testers. This typically includes access to both source code and detailed design documentation, and possibly interviews with personnel involved in architecture and engineering. Contrast with Black Box assessments or tests.

White Hat — A hacker with permission to take apart or otherwise gain access to sensitive assets on a network for non-malicious reasons. Compare with Black Hat.

Worm — A self-propagating piece of malicious software that usually requires minimal interaction from victims to spread (e.g., the Morris Worm spread autonomously by scanning networks, finding specific vulnerabilities in network-exposed services, exploiting them, and then copying itself to the compromised machine).