A close examination of the scoring method employed on the VM system is essential. Following is a list of basic requirements of a scoring system as it relates to security operations:
- Take asset value into consideration. More valuable assets in your organization should receive higher-priority handling than those of little value. Either the scoring method or the alerting capability should account for asset value, in numeric or category terms. Valuing assets can be time-consuming if the values cannot be obtained from an existing asset management system.
- Severity can be logically or intuitively determined from the score. If the number seems arbitrary, or meaningful only relative to zero, then it is difficult to determine whether a score of 300 is severe, moderate, or informational. At some point, either from experience with the system or from knowledge of how the score is derived, you should be able to determine the category of severity.
- Current knowledge of available exploits should be reflected in the severity. Vendors should revise their scores when a new, easily scripted exploit is released to the general public. The score should be dynamic to keep pace with a dynamic threat environment.
- A standards-based primary or secondary score, such as CVSS, should be included for comparison with public databases. This prevents confusion about the meaning of a score derived from a proprietary scheme.
- Optionally, a score should have a cardinality component; that is, the score varies depending on the source of the assessment. A vulnerability that can be detected over the network from the public Internet should score higher than one that can be detected only from the local segment. (A sketch of asset-value and source weighting follows this list.)
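The first and last requirements above can be made concrete in a few lines of code. The following Python sketch is illustrative only: the category names, weight values, and the adjusted_score function are assumptions chosen for this example, not features of any particular product.

```python
# Hypothetical asset-value weights (assumed categories, not from any product).
ASSET_WEIGHT = {"critical": 1.5, "high": 1.25, "medium": 1.0, "low": 0.75}

# Hypothetical source weights: a finding visible from the public Internet
# outweighs the same finding visible only from the local segment.
SOURCE_WEIGHT = {"internet": 1.25, "dmz": 1.1, "local_segment": 1.0}

def adjusted_score(base: float, asset_value: str, source: str) -> float:
    """Scale a base vulnerability score by asset value and scan vantage point."""
    return base * ASSET_WEIGHT[asset_value] * SOURCE_WEIGHT[source]

# The same base score of 50 ranks very differently on a critical,
# Internet-facing asset than on a low-value internal host.
print(adjusted_score(50, "critical", "internet"))   # 93.75
print(adjusted_score(50, "low", "local_segment"))   # 37.5
```

Whatever weights are chosen, the point is that asset value and assessment source modify the ranking without being buried inside the base score itself.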
Determining the appropriate score for a vulnerability is partly mathematical and partly a matter of requirements. It is not unlike the computation of risk:

Risk = p(x) × ε × ρ

where p(x) is the probability of occurrence, ε is the loss expectancy from a single event, and ρ is the rate of occurrence per year. An example of a vulnerability score computation might be based on the following variables:
Severity of compromise (α):
- Remote control of system = 100
- Remote access to system = 75
- Remote reconnaissance = 50
- Local control of system = 50
- Local access = 40
- Local reconnaissance = 30

Ease of attack (β):
- Easy (scripted) = 1
- Medium = 0.75
- Difficult = 0.5

Existence of exploit code in the wild (χ):
- Exists = 1
- Proof-of-concept only = 0.5
- None = 0.25
So, the computation is a simple multiplication formula:

Score = α × β × χ

This simple approach confines the score to a value between 3.75 and 100. That is, if “local reconnaissance” (30) is the severity, “difficult” (0.5) is the ease of attack, and “none” (0.25) is the state of exploit code availability, then:

Score = 30 × 0.5 × 0.25 = 3.75
Should the severity of compromise become “remote control of system,” which is very bad, and all other factors remain the same, then the score rises to 12.5. By extension of this simple approach, the worst possible score is 100, with the β and χ factors equal to 1 and “remote control of the system” being the outcome.
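As a sanity check, the whole computation fits in a few lines of Python. This is only a transcription of the factor tables above into dictionaries; the shortened key names are mine.

```python
# Factor tables from the example above (keys abbreviated for readability).
SEVERITY = {  # α: severity of compromise
    "remote control": 100, "remote access": 75, "remote recon": 50,
    "local control": 50, "local access": 40, "local recon": 30,
}
EASE = {"easy": 1.0, "medium": 0.75, "difficult": 0.5}            # β
EXPLOIT = {"exists": 1.0, "proof-of-concept": 0.5, "none": 0.25}  # χ

def score(severity: str, ease: str, exploit: str) -> float:
    """Score = α × β × χ; bounded between 3.75 and 100 by the tables above."""
    return SEVERITY[severity] * EASE[ease] * EXPLOIT[exploit]

print(score("local recon", "difficult", "none"))     # 3.75  (lowest possible)
print(score("remote control", "difficult", "none"))  # 12.5
print(score("remote control", "easy", "exists"))     # 100.0 (worst case)
```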
Many scoring methods are more sophisticated than this and consider additional factors, such as the length of time the vulnerability has been in existence. Furthermore, some scoring methods are not confined to a simple scale of 0 to 100 but have no upper limit; this occurs naturally when unbounded values such as the age of a vulnerability are included. The example given here also has a very limited range because it uses simple multiplication, whereas other methods deliberately widen the distinction between what is truly severe and what is comparatively low risk. Operators such as squares and factorials can drive the score very high for greater risk distinction, though such scores may be more difficult to interpret. Whatever method is preferred, be sure it can be transformed into a form suitable for input into whatever risk assessment methodology is in use.
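As one hedged illustration of that trade-off, the sketch below squares the severity term to stretch the distance between low and high risk, and adds an unbounded age multiplier. The exponent and the age term are arbitrary assumptions for this example, and the second function shows one simple way to normalize such a score back into a 0-to-1 range for input into a risk assessment methodology.

```python
def unbounded_score(alpha: float, beta: float, chi: float, age_days: int) -> float:
    # Squaring α widens the gap between low- and high-severity findings;
    # the age multiplier grows without limit, so the score has no ceiling.
    return (alpha ** 2) * beta * chi * (1 + age_days / 365)

def normalized(raw: float, worst_observed: float) -> float:
    # Map an unbounded score into [0, 1] by dividing by the worst score
    # seen so far (assumes worst_observed > 0).
    return min(raw / worst_observed, 1.0)

worst = unbounded_score(100, 1.0, 1.0, 730)  # 30000.0: severe, two years old
mild = unbounded_score(30, 0.5, 0.25, 0)     # 112.5: mild, brand new
print(normalized(mild, worst))               # 0.00375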