National Vulnerability Database


NVD

The National Vulnerability Database is an online database operated by the National Institute of Standards and Technology (NIST). It can be found at http://nvd.nist.gov/nvd.cfm. The NVD uses the Security Content Automation Protocol (SCAP), a set of standards designed to support automation of VM, compliance management, and other security functions. We have already discussed some of those standards, including OVAL, CVE, and CVSS. There are three items we have not discussed: CCE and CPE, which concern target enumeration, and XCCDF, which provides checklists for target evaluation and standard formats for reporting.
CCE refers to Common Configuration Enumeration identifiers. These identifiers are used to correlate checks performed on system configurations with documents and tools that provide related information. CCE identifiers will not be discussed in depth here beyond suggesting a review of the CCE lists provided by the MITRE Corporation.

CPE

Common Platform Enumeration identifiers provide a standard naming scheme for technology systems and components. In practical VM terms, CPE identifiers indicate which systems or components are subject to a particular vulnerability. When a new vulnerability is announced, the first question asked is "which systems are vulnerable?" CPE is intended to document a platform clearly so that the applicability of a vulnerability announcement can easily be determined through both automated and human methods.
A particular computer system can be assigned a CPE name, which represents the complete enumeration of that platform in terms of what is installed. This includes the hardware, the OS, and applications. It does not include detailed configuration options such as the status of particular switches or security policies. The first thing that might occur to you is that there are millions of combinations that can be enumerated. This is quite correct, but CPE has a basic matching rule that addresses the issue. If a vulnerability is announced for the CPE name "cpe:/o:microsoft:windows_xp," then a system enumerated with a more specific name such as "cpe:/o:microsoft:windows_xp::sp2" is subject to that vulnerability. This is a "grouping" approach to enumerating the systems subject to vulnerabilities, which in CPE parlance is called the prefix property. The enumeration of a platform typically requires multiple CPE names, since a platform can be composed of many parts.
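To make the prefix property concrete, here is a minimal sketch in Python. The helper name cpe_prefix_match and the component-splitting logic are illustrative assumptions, not part of the CPE specification.

```python
def cpe_prefix_match(announced: str, installed: str) -> bool:
    """Illustrative sketch of the CPE "prefix property": a vulnerability
    announced for a general name applies to any more specific name
    that begins with the same components."""
    # Split "cpe:/o:microsoft:windows_xp::sp2" into its components.
    ann = announced.rstrip(":").split(":")
    ins = installed.rstrip(":").split(":")
    if len(ann) > len(ins):
        return False
    # Every component of the announced name must match the installed
    # name; an empty component acts as a wildcard.
    return all(a == i or a == "" for a, i in zip(ann, ins))

# The general Windows XP announcement matches the more specific SP2 name:
print(cpe_prefix_match("cpe:/o:microsoft:windows_xp",
                       "cpe:/o:microsoft:windows_xp::sp2"))  # True
```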

Encoding

The encoding of CPE names is logically structured just as described in the previous section: hardware, OS, and application. The encoding follows the URI format, although the scheme is not officially registered with IANA, the body that governs assigned names and numbers on the Internet, including the URI schemes found in URLs. This format is used for convenience and to leverage an established convention that works well for naming a resource on the Internet. Here is the basic structure of a CPE name:
  • cpe:/ {part} : {vendor} : {product} : {version} : {update} : {edition} : {language}
So, there are seven parts to this format (a short parsing sketch follows the list):
  • Part: The part is defined as either hardware “h,” operating system “o,” or application “a.” The people at MITRE have left the door open for other parts as well, such as driver “d.”
  • Vendor: This is usually specified as a portion of the domain name of the vendor; mozilla.org would have the vendor name "mozilla." Strictly speaking, it is the highest organization level of the vendor's DNS name. If more than one organization shares this DNS name, then the entire DNS name is used.
  • Product: CPE uses the common abbreviation for the product provided by the vendor. For example, computer users commonly use "IE" to indicate Internet Explorer, so that is the abbreviation used.
  • Version: This is a version number for the product. For example, “5.0.”
  • Update: These are specific updates that may be applied by the vendor to a particular version. Use of this field is sporadic, depending on how the vendor issues releases and updates; some vendors tend to issue smaller point releases (versions) to perform updates.
  • Edition: The edition field is typically used to distinguish among the various flavors of a product. So, Windows Vista® would have several editions such as Home Basic, Home Premium, Business, and Ultimate.
  • Language: Intuitively, this is the language of the product being named. This makes it easier to identify systems that are vulnerable based on their installed language pack.
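To make the seven-field layout concrete, the sketch below splits a CPE 2.2-style URI into named fields. The function name parse_cpe and the padding of missing trailing fields are illustrative assumptions, not part of the specification.

```python
# Illustrative only: split a CPE 2.2-style URI into its seven fields.
FIELDS = ("part", "vendor", "product", "version",
          "update", "edition", "language")

def parse_cpe(name: str) -> dict:
    # Drop the "cpe:/" prefix, then split the remainder on ":".
    values = name[len("cpe:/"):].split(":")
    # Pad missing trailing fields with empty strings.
    values += [""] * (len(FIELDS) - len(values))
    return dict(zip(FIELDS, values))

print(parse_cpe("cpe:/o:microsoft:windows_xp::sp1:professional"))
# {'part': 'o', 'vendor': 'microsoft', 'product': 'windows_xp',
#  'version': '', 'update': 'sp1', 'edition': 'professional', 'language': ''}
```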

Examples of CPE Names

Obviously, cpe:/a:adobe:flash_player:9.0.20.0 represents Adobe Flash Player version 9.0.20.0. But the name is not specific to any OS: a vulnerability using this name would apply to all version 9.0.20.0 instances of Adobe Flash Player on any OS. If we modify the name to apply only to Windows XP, then we could use the following name: cpe:/a:adobe:flash_player:9.0.20.0::windows_xp. If a specific OS is to be named for a vulnerability (e.g., Windows XP, all versions), then the following name would be used: cpe:/o:microsoft:windows_xp. However, if we wanted to be more specific (e.g., Windows XP SP1 Professional), we would use cpe:/o:microsoft:windows_xp::sp1:professional.
Hardware is identified in a similar fashion: cpe:/h:cisco:ip_phone_7960 represents a Cisco IP Phone model 7960. Notice that the model number, and not a version, is built into the product name. This is because Cisco distinguishes product versions by model number. When identifying hardware, it is possible to identify not only a complete computer system but also a specific component such as the Intel D845WN motherboard (cpe:/h:intel:d845wn_motherboard).


XCCDF

Earlier, we discussed the need to standardize vulnerability testing methods using OVAL, and we have discussed how a data structure might look in a vulnerability scanner. Similarly, the Extensible Configuration Checklist Description Format (XCCDF) is an XML-based format for specifying checklists that validate security compliance for various types of target systems. XCCDF also specifies a standard format for reporting compliance and scoring, which simplifies the interoperability of various security systems. It is not a substitute for OVAL but rather a supporting technology that can extend OVAL and enhance its interoperability with proprietary technologies.
XCCDF's primary use case is the definition of compliance checks, compliant machine states, and results reporting. The language is designed to allow the definition of security benchmarks that are reflected in detailed configuration item settings. Checks can be developed and submitted to the NIST checklist program for review; if accepted, they become available to anyone in the world whose product supports XCCDF. In practice, that is very few vendors. XCCDF is primarily used in U.S. government security in support of NIST Special Publication 800-53 and FIPS 199. The big benefit is a standard mechanism for validating and reporting security compliance against checklists and rules across multiple vendors and security systems. XCCDF supports the idea of VM as an integrated part of configuration management.
At first, you might think that any such language must be confining when it comes to defining checks and scoring. However, XCCDF is designed to be customizable and flexible for achieving consistent results for a variety of systems. For example, one option is “selectability.” A particular XCCDF document will contain a set of rules describing the state of a target in order to be compliant. Those rules can be selectively turned on or off (selected), depending on the target under scrutiny. Similarly, parameters can be substituted to accommodate flexible rules. For example, the size of an encryption key for a VPN configuration may be 256 bits for a system communicating insignificant data, and 4096 bits for one carrying sensitive information.
XCCDF has four types of objects (a rough data-structure sketch follows the list):
  • Benchmark is a master container for everything else in the document. It is similar to the definitions in OVAL.
  • Item is similar to an object or test in OVAL. It contains a description and identifier. There are three types or classes of item: group (holds other items), rule (holds checks, scoring weights, and remediation information), and value (provides the previously mentioned substitution ability).
  • Profile provides references to item objects. It contains many of the values needed for a particular profile of a system. This significantly helps apply asset classification to the values applied to the rules where appropriate.
  • Test result holds the results of the test performed. Most significant in this object are the "rule-result" and "target-facts" elements. This object contains the actual results of the tests performed and is very informative during reporting and remediation processes. The target-facts element can contain sensitive details about the target and about the specific findings that led to compliance or non-compliance.
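Since so few products implement XCCDF, it may help to picture the object model abstractly. The sketch below renders the four object types as simple Python data structures; all class and field names are illustrative assumptions and do not follow the official XCCDF schema.

```python
# A rough mental model of the four XCCDF object types.
# Names are illustrative; they do not follow the official schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Item:
    """An XCCDF item: a group, rule, or value."""
    item_id: str
    description: str
    item_class: str                      # "group", "rule", or "value"
    children: List["Item"] = field(default_factory=list)  # groups hold items
    checks: List[str] = field(default_factory=list)       # rules reference checks

@dataclass
class Profile:
    """Selects and parameterizes items for a class of systems."""
    profile_id: str
    selected_rules: List[str]            # ids of rules turned on
    values: Dict[str, str]               # substitutable parameters

@dataclass
class TestResult:
    """Outcome of evaluating a benchmark against one target."""
    rule_results: Dict[str, str]         # rule id -> "pass" / "fail"
    target_facts: Dict[str, str]         # facts gathered about the target

@dataclass
class Benchmark:
    """Master container for everything else in the document."""
    items: List[Item]
    profiles: List[Profile]
    results: List[TestResult]
```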
An interesting attribute of XCCDF is that it can reference the content of OVAL in a rule. A child element of a rule called "check" allows the document author to name the system from which a check is obtained, for example '<check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">'; this is then followed by the specific reference of the check within the source system, such as '<check-content-ref href="oval-definitions.xml" name="oval:org.example:def:100"/>' (the href and name values here are illustrative). This is only a reference to the OVAL rule and not the actual rule. Software that performs the checks must properly interpret and execute these references. The use of OVAL is not a requirement for any system unless the goal is SCAP compliance.
Overall, XCCDF is a great idea for standardizing configuration compliance checks across vendors and organizations. However, its use is mostly restricted to the U.S. Department of Defense. If you work for the government or are a vendor, more details can be found in the document entitled “Specification for the Extensible Configuration Checklist Description Format (XCCDF) Version 1.1.3 (Draft)” by Neal Ziring and Stephen D. Quinn.

The Standard for Vulnerability Severity Rating



A very important part of evaluating a vulnerability is knowing the impact or risk to the organization. Many vendors have their own evaluation methods, but there must be some standard on which all software makers and vulnerability researchers can agree for rating severity. The Common Vulnerability Scoring System (CVSS), maintained by the Forum of Incident Response and Security Teams (FIRST), was developed to provide a standard framework for assessing the basic characteristics and impact of a vulnerability. Although its contents and methodology are not the complete picture, they help to assess risk by doing much of the technical work in advance. FIRST is a non-profit group of vendors, researchers, and other volunteers who work to enhance security incident response practices.
CVSS provides relevant vulnerability metrics that the user can look at and quickly determine whether further action is necessary to address risk. These metrics are organized into three groups: base, temporal, and environmental. For each of these groups, a score is calculated. Each group has metrics that are combined to calculate the score for that group. Figure 1 shows the relationships among the metric groups, metrics, and equations. You can follow this discussion by referring periodically to the figure. For clarification, items indicated with dashed lines are subequations or subgroups that only provide intermediate values or logical groupings of metrics.

 
Figure 1: Metric relationships in the Common Vulnerability Scoring System (CVSS).
Base metrics are constant; they are fundamental and do not change over time. The metrics of the base group are access vector (AV), access complexity (AC), authentication (AU), confidentiality impact (C), integrity impact (I), and availability impact (A). As any CISSP® should know, confidentiality, integrity, and availability (CIA) form the classic triad of security, so it is no surprise that they are included here.
Each of these metrics of the base metrics group has a value, depending on the severity or impact. For example, AV indicates what kind of access an attacker must have for the vulnerability to be exploited. If the vulnerability requires that the attacker be physically present and touch the keyboard (i.e., local access), then the value of this metric is 0.395. If the vulnerability can be exploited over the network (i.e., remote access), then the value of this metric is 1.0. This process is repeated for all of the metrics that apply to the vulnerability. The CIA metrics together are referred to as impact metrics and are combined in calculations to determine the total impact, which is then applied in equations for the base score. The equations are where all the work is performed to produce a total score for the group.
The reason for each metric's particular value involves an understanding of the relative effect an exploit would have and how significant that metric is in the calculation of the group score. The greater the effect, the higher the value. But not all metrics are created equal. The access vector may be more important to the overall severity of a vulnerability than the complexity of the exploit. The difference between high complexity (0.35) and low complexity (0.71) is 0.36, but the difference between requiring local access (0.395) and network accessibility (1.0) is 0.605. AV, AC, and AU are three base metrics that work together to determine the overall exploitability (E) of a vulnerability. The equation for exploitability is E = 20 * AV * AC * AU. This may seem like a lot of trouble, but the formulas and values of the metrics have already been worked out for you and save a lot of time.
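For the curious, the published CVSS v2 base equation can be reproduced in a few lines. This is a minimal sketch: the function name base_score is illustrative, while the constants (10.41, 20, 0.6, 0.4, 1.5, 1.176) come from the CVSS v2 specification.

```python
# Minimal sketch of the CVSS v2 base equation as published by FIRST.
def base_score(av, ac, au, conf, integ, avail):
    impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
    exploitability = 20 * av * ac * au           # E = 20 * AV * AC * AU
    f = 0.0 if impact == 0 else 1.176            # f(impact) from the spec
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Worst case: network accessible (1.0), low complexity (0.71),
# no authentication (0.704), complete C/I/A impact (0.660 each):
print(base_score(1.0, 0.71, 0.704, 0.660, 0.660, 0.660))  # 10.0
```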
Temporal metrics are optional and have values that can change over time. Base metrics are used as input to the temporal calculation, which yields a score that may more accurately reflect the current risk on a scale of 0 to 10. For example, a vulnerability may be in the proof-of-concept phase, which is less of a threat, and therefore is assigned a value of 0.9. As time passes, an automated script may become widely available that makes exploitation so simple a script kiddie can do it; the value of this metric then becomes 1.0. The temporal equation adjusts the score from the base equations by multiplying it by the previously mentioned metrics, as sketched below. Table 1 details the CVSS metrics and their values. Note that these numerical values are subject to change as the equations, described later, are refined.
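The temporal adjustment itself is just a product of the base score and the three temporal metrics, following the published CVSS v2 temporal equation; a brief sketch with an illustrative function name:

```python
# Sketch of the CVSS v2 temporal equation: the base score is scaled
# by the three temporal metrics (each 1.0 when "not defined").
def temporal_score(base, exploitability, remediation, confidence):
    return round(base * exploitability * remediation * confidence, 1)

# A 10.0 base score with a proof-of-concept exploit (0.90), an
# official fix available (0.87), and confirmed reports (1.00):
print(temporal_score(10.0, 0.90, 0.87, 1.00))  # 7.8
```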
Table 1: CVSS Metrics

Metric Type | CVSS Metric | Description | Value
Base | AccessVector | Requires local access | 0.395
 | | Adjacent network accessible | 0.646
 | | Network accessible | 1.0
 | AccessComplexity | High | 0.35
 | | Medium | 0.61
 | | Low | 0.71
 | Authentication | Requires multiple instances of authentication | 0.45
 | | Requires single instance of authentication | 0.56
 | | Requires no authentication | 0.704
 | ConfImpact | None | 0.0
 | | Partial | 0.275
 | | Complete | 0.660
 | IntegImpact | None | 0.0
 | | Partial | 0.275
 | | Complete | 0.660
 | AvailImpact | None | 0.0
 | | Partial | 0.275
 | | Complete | 0.660
Temporal | Exploitability | Unproven | 0.85
 | | Proof-of-concept | 0.90
 | | Functional | 0.95
 | | High | 1.00
 | | Not defined | 1.00
 | RemediationLevel | Official fix | 0.87
 | | Temporary fix | 0.90
 | | Workaround | 0.95
 | | Unavailable | 1.00
 | | Not defined | 1.00
 | ReportConfidence | Unconfirmed | 0.90
 | | Uncorroborated | 0.95
 | | Confirmed | 1.00
 | | Not defined | 1.00
Environmental | CollateralDamagePotential | None | 0.0
 | | Low | 0.1
 | | Low–Medium | 0.3
 | | Medium–High | 0.4
 | | High | 0.5
 | | Not defined | 0.0
 | TargetDistribution | None | 0.0
 | | Low | 0.25
 | | Medium | 0.75
 | | High | 1.0
 | | Not defined | 1.0
 | ConfReq | Low | 0.5
 | | Medium | 1.0
 | | High | 1.51
 | | Not defined | 1.0
 | IntegReq | Low | 0.5
 | | Medium | 1.0
 | | High | 1.51
 | | Not defined | 1.0
 | AvailReq | Low | 0.5
 | | Medium | 1.0
 | | High | 1.51
 | | Not defined | 1.0
The environmental metric group is another optional group that can be very useful. The metrics in this group are designed to work outside of, but as a complement to, the other metric groups. If it is not used, this group has no effect on the weight of the other metrics; it is there for you, the CVSS user, to employ as you see fit. It is, however, structured with guidelines so that it is uniformly interpreted. The environmental metrics group includes collateral damage potential (CDP), target distribution (TD), and the security requirements: confidentiality (CR), integrity (IR), and availability (AR). It also factors in an adjusted impact score from the base metrics and an adjusted temporal score.
CDP is a classic risk-management-style metric that measures the potential for financial damage or physical harm should the vulnerability be exploited. In risk-management terms, it is analogous to single loss expectancy (SLE). For those who are not formally trained security professionals, SLE is how much you expect a loss to cost each time it occurs. Although it is a measure of damage potential, CDP is a scale from 0 to 0.5 and does not equate to a dollar amount.
TD is the measure of what percentage of the organization is vulnerable. It helps you assess the scope of the threat in your environment. If 50 percent of the target hosts have a particular vulnerability, then this metric has a value of medium. When TD is used in an equation, high = 1.0, medium = 0.75, low = 0.25, none = 0.0, and, interestingly, not defined = 1.0. This is interesting because if you do not know the TD, the assumption is "high," which allows for conservative estimates of damage potential. I recommend that if you know the exact distribution of hosts with a vulnerability in your organization, based on the results of a vulnerability assessment, you use that percentage in exact decimal form. This approach is outside the CVSS guidelines, but it is more precise than the high/medium/low approach.
The environmental security requirements metrics are unique. They apply a weight to the base CIA impact metrics. If your particular environment puts a high value on the confidentiality of data, for example, then that value is increased; if the requirement is medium, the weight of C is neutral. The security requirements are used to reweight the impact calculation in the base score, which modifies the base metric group score according to the requirements of your organization. However, if an impact metric from the base group is 0 (i.e., not a factor), the resulting modified impact score is unaffected by the requirement, because the equation for the adjusted impact multiplies each security requirement by the corresponding impact value from the base group:

AdjustedImpact = min(10, 10.41 * (1 - (1 - ConfImpact * ConfReq) * (1 - IntegImpact * IntegReq) * (1 - AvailImpact * AvailReq)))
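Continuing the sketch, the adjusted impact above replaces the impact term in the base equation, and the environmental equation then folds in collateral damage potential and target distribution. The function names below are illustrative; the constants and structure follow the published CVSS v2 guide.

```python
# Sketch of the CVSS v2 environmental equations (names illustrative).
def adjusted_impact(conf, cr, integ, ir, avail, ar):
    # Each base impact value is reweighted by its security requirement;
    # the result is capped at 10.
    return min(10.0, 10.41 * (1 - (1 - conf * cr)
                                * (1 - integ * ir)
                                * (1 - avail * ar)))

def environmental_score(adjusted_temporal, cdp, td):
    # Collateral damage potential pushes the score toward 10;
    # target distribution scales it by how widespread the flaw is.
    return round((adjusted_temporal + (10 - adjusted_temporal) * cdp) * td, 1)

# A high confidentiality requirement (1.51) amplifying a complete
# confidentiality impact (0.660), with neutral I and A requirements:
print(adjusted_impact(0.660, 1.51, 0.660, 1.0, 0.660, 1.0))  # 10.0 (capped)
# An adjusted temporal score of 7.8 with medium-high CDP (0.4)
# and medium target distribution (0.75):
print(environmental_score(7.8, 0.4, 0.75))  # 6.5
```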
For each of the metric groups, an equation has been designed to calculate a score based on a set of mathematical rules. The equations are based on a rationale that varies depending on the type of metric. The merits of each of these equations are widely analyzed and debated, and there is little benefit in discussing them here. CVSS is explained in more detail at http://www.first.org/cvss/cvss-guide.html.
If you want to calculate your own CVSS scores, you can try some Web-based calculators. One popular calculator can be found at http://nvd.nist.gov/cvss.cfm?calculator&adv&version=2. You can enter whatever values you like and receive a set of CVSS scores. To understand the impact of a particular metric on the overall score, try changing only one and then recalculate. You will begin to get a feel for what score is good and what is really bad. I also suggest that you omit the environmental components so that you can become familiar with the CVSS scores you will find in the NVD.
