Active Scanner Deployment: Physical


The first step in implementing a strategy for deploying physical scanners is to select a central location to which all scanners will report in the future. In the case where there may be multiple central reporting servers, select a location with the most hosts on the local network, and preferably one that has unfettered network access to other major locations. Verify that there is sufficient bandwidth available to support a schedule structured using the method previously discussed. Then, follow this iterative process:
  1. Perform test scanning and remediation reporting on the local network where the reporting server is installed, or on another, less critical network. A 24-bit classless inter-domain routing (CIDR) block (a /24) would be sufficient.
  2. Review the report results and any unexpected impact on the environment.
  3. Adjust the system and scan parameters to compensate. Be sure that any adjustments you make are scalable to hundreds or even thousands of networks once the deployment proceeds.
  4. Add as many similar network ranges in the local office as possible, one at a time.
  5. Repeat the previous steps to validate the reports and impact on the environment. Adjust accordingly.
While waiting for a few cycles of this activity to complete in the local area, plan and coordinate the next phase of scanning over the WAN to other offices in the scope of your scanning strategy. These are offices that will not receive equipment but can be scanned through their WAN connection without impacting operations. Again, repeat the evaluation and refinement process and update the scanning standards documentation accordingly.
As the expansion of scanning begins outside of the local office, begin generating and refining the management reports. At this stage, management will become quite curious about the results of the new system. Be sure that the reports are reconcilable and that questions about their content can be addressed. If the pre-acquisition testing has been done properly, this should be straightforward, with at most minor assistance from the vendor. Having an executive sponsor who is willing to preview these reports before they go to a wider audience can help identify discrepancies and questions earlier.
At this point, as much as 75 percent of the initial zone of scanning should be deployed, and processes should be taking firm, repeatable shape. The next zone of deployment should then be planned by selecting another major office with as many good connections to other target offices as possible. Then, repeat the gradual expansion process discussed in the previous steps.

Deployment Methods



There are many ways to deploy a system, and the needs of the operating environment will affect the solution chosen. In some ways, this is related to architecture, as discussed earlier in this chapter. But there are other considerations related to how you will deploy a system. In this section, we will discuss the major issues affecting deployment.
The goal in the approach to deployment is to achieve maximum effectiveness in the shortest period of time with the least investment. Following is the basic deployment strategy to achieve this:
  1. Establish a foothold with maximum access to target systems.
  2. Test processes on a small scale and refine.
  3. Expand deployment by adding targets until the foothold is 75 percent deployed.
  4. Simultaneously with step 3, create and refine management reports at all levels.
  5. Take additional steps in other locations.
We will review each of these steps for active physical scanners, active virtual scanners, passive analyzers, and agents.

Access Control



Access control is an essential feature of any application. Depending on the infrastructure standards, it is valuable to insist on an external authentication mechanism. Roles and access rights will likely have to be defined in the VM system, but external authentication should be possible.

Active Directory

Active Directory® is one of the most commonly used authentication mechanisms for Windows systems. Later versions support lightweight directory access protocol (LDAP) and LDAP over SSL for directory loading. Kerberos and NTLM are common options for authentication. Since Active Directory capabilities are so common in the corporate environment and standards are available to interface with other systems, this is a good choice. However, any LDAP directory service should work. There are two common approaches to Active Directory integration.
One method synchronizes directory information periodically, looking for additions and deletions. A copy of the directory entries is stored in the VM database for quick reference to access privileges. This is the most common and most compatible approach, and it uses LDAP. Usually, special credentials have to be created to log into the directory system and retrieve the basic information about the users. Using LDAP also affords the system the option of portability to other directory services platforms.
Later, when a user attempts to log into the VM system, the credentials supplied by the user are sent to the authentication system using NTLM or Kerberos. Once the credentials are accepted, the VM system will apply the privileges stored in the VM database for that user.
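To make the first approach concrete, the sketch below uses the open-source ldap3 Python library to pull basic user entries from a directory over LDAP for caching in the VM database. The host name, service account, search base, and the idea of upserting into a VM database are illustrative assumptions, not any particular product's interface; the attributes queried are standard Active Directory attributes.

```python
from ldap3 import Server, Connection, ALL, SUBTREE

# Hypothetical connection details; a dedicated, least-privilege
# service account should be created for the synchronization job.
server = Server("ldaps://dc1.example.com", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\svc-vm-sync",  # placeholder service account
    password="change-me",
    auto_bind=True,
)

# Retrieve only the attributes needed to map users to VM-system
# privileges; group membership (memberOf) typically drives roles.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(&(objectClass=user)(objectCategory=person))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "displayName", "memberOf"],
)

for entry in conn.entries:
    # A real job would upsert these entries into the VM database
    # and mark users deleted from the directory for removal.
    print(entry.sAMAccountName, entry.displayName)
```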
A second approach is to natively integrate with Active Directory using the Active Directory Application Mode (ADAM) capability that comes with Windows Server 2003. This enables the VM application to have its own instance of a directory service with schema extensions and built-in attributes but still participate in the security structure of the Active Directory domain. Naturally, the services that support this capability must run on a Microsoft-technology-based server. This provides a tightly integrated directory product for Microsoft-directory-committed organizations. A significant advantage of this approach is that Active Directory groups can be used to grant privileges in the VM system rather than creating an internal set of roles or user groups. The disadvantage is that you may be committed to the Active Directory platform.

RADIUS and TACACS+

In a more network-centric environment, RADIUS is a very common protocol option. It is an older protocol typically used for dial-in systems, but it is no less useful for network equipment. With RADIUS and TACACS+, however, the user ID must be entered into the VM system, since no directory service is provided. This user ID must exactly match the one expected by the authentication server. The most significant difference between RADIUS and TACACS+ is the use of UDP versus TCP, respectively. TCP has the security advantage that its handshake makes it much harder to spoof the authenticating source.
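As an illustration of the protocol flow, a minimal RADIUS check using the open-source pyrad Python library might look like the following. The server address, shared secret, and user ID are placeholders; after an Access-Accept, the VM system would still apply the privileges stored for that user ID, which must match the directory-less entry described above.

```python
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

# Placeholder RADIUS server and shared secret.
client = Client(
    server="192.0.2.10",
    secret=b"shared-secret",
    dict=Dictionary("dictionary"),  # standard RADIUS attribute dictionary file
)

# The User-Name must exactly match the ID entered into the VM system.
request = client.CreateAuthPacket(
    code=pyrad.packet.AccessRequest,
    User_Name="jdoe",
)
request["User-Password"] = request.PwCrypt("users-password")

reply = client.SendPacket(request)
if reply.code == pyrad.packet.AccessAccept:
    print("authenticated; apply stored VM-system privileges for jdoe")
else:
    print("access rejected")
```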
Some vendors of network authentication products go so far as to support RADIUS, LDAP, Kerberos, TACACS+, and other methods. These authentication products are able to effectively “glue” together various authorities and protocols, allowing a variety of methods to be employed.
Changing from one authentication method to another can be difficult, depending on the implementation. To avoid complications in the future, select a method from the beginning and stay with it.

Authorization

Authorization capabilities should be able to meet the various roles you have planned for operations. This is one key reason that requirements must be defined prior to beginning the RFP process. One of those requirements is a definition of who will perform what functions and what capabilities are needed. Role-based access control is the mechanism that will have to be vetted during the selection process. You should consider some of the following capabilities:
  • Separation by network: Some users will be permitted only to take actions against certain networks. Local IT personnel in Mumbai should not be able to examine any vulnerability reports for Chicago, for example. Scan parameters should not be modifiable by anyone except the VM administrator.
  • Actions: Closer to the definition of a role, there are the specific activities to be performed, such as defining a network or administering IP ranges.
  • Running reports: Many people are likely to be able to perform this function. Being able to generate a report is fundamental.
  • Conducting a scan: Few people should be able to conduct a scan. Active scanning capabilities are potentially disruptive to operations, and therefore should remain in the custody of those who are qualified to assess the impact on the network and the need for current audit results.
  • Maintaining the health of the overall system and available audit parameters: The parameters of audits should rarely be tampered with unless extensive testing has been done. Adding TCP ports or additional checks can impact the entire scanning schedule. Only a few people should be able to make changes to this. These people likely have a combination of security and compliance roles.
A good way to document authorization requirements is with a permissions grid. This grid should indicate the capabilities across the top and the roles down the side. See Table 1 for an example. Where a particular function can be performed, a “Y” or “N” is indicated. If access type is specified, then an “R” and/or a “W” are specified for “Read” and “Write,” respectively.
Table 1: Permissions Grid Example

ROLE/CAPABILITY             REPORT   SCAN   SCHEDULE   MAINTENANCE   PARAMETERS
System administrator        N        N      Y          Y             R
Local IT                    Y        N      N          N             N
Regional security manager   Y        N      N          N             N
Global security             Y        Y      Y          N             RW
Global compliance           Y        N      N          N             R
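The grid translates naturally into a simple role-to-capability lookup. The sketch below encodes Table 1 directly; the role and capability names are taken from the table, while the function itself is merely one illustrative way such a check could be enforced.

```python
# Permissions grid from Table 1: "Y"/"N" for allow/deny,
# "R"/"W"/"RW" for read/write access where access type applies.
PERMISSIONS = {
    "system_administrator":      {"report": "N", "scan": "N", "schedule": "Y", "maintenance": "Y", "parameters": "R"},
    "local_it":                  {"report": "Y", "scan": "N", "schedule": "N", "maintenance": "N", "parameters": "N"},
    "regional_security_manager": {"report": "Y", "scan": "N", "schedule": "N", "maintenance": "N", "parameters": "N"},
    "global_security":           {"report": "Y", "scan": "Y", "schedule": "Y", "maintenance": "N", "parameters": "RW"},
    "global_compliance":         {"report": "Y", "scan": "N", "schedule": "N", "maintenance": "N", "parameters": "R"},
}

def is_permitted(role: str, capability: str, access: str = "Y") -> bool:
    """Return True if the role may exercise the capability.

    For 'parameters', pass access='R' or 'W'; the other
    capabilities are simple yes/no grants."""
    grant = PERMISSIONS.get(role, {}).get(capability, "N")
    if access in ("R", "W"):
        return access in grant
    return grant == "Y"

assert is_permitted("global_security", "scan")
assert is_permitted("global_compliance", "parameters", "R")
assert not is_permitted("local_it", "parameters", "W")
```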

Scoring Method



A close examination of the scoring method employed on the VM system is essential. Following is a list of basic requirements of a scoring system as it relates to security operations:
  • Take asset value into consideration. More valuable assets in your organization should receive higher-priority handling than those with little value. Somewhere in either the scoring method or alerting capabilities should be a consideration of asset value, in either numeric or category terms. The process of valuing assets may be time-consuming if the values cannot be obtained from an existing asset management system.
  • Severity can be logically or intuitively determined from the score. If the number seems arbitrary or relative to zero, then it is difficult to determine whether a score of 300 is severe, moderate, or informational. At some point, either from experience with the system or from knowledge of the method by which the score is derived, you should be able to determine the category of severity.
  • Current knowledge of available exploits should be included in adjusting the severity. Vendors should revise their scoring when a new, easily scripted exploit is released to the general public. The score should be dynamic to keep pace with a dynamic threat environment.
  • A standards-based primary or secondary score such as CVSS should be included for comparison to public databases. This will prevent any confusion about the meaning of a score that may be derived from a proprietary scheme.
  • Optionally, a score should have a cardinality component. This means that the score will vary, depending on the source of the assessment. If a vulnerability can be detected over the network from the public Internet, then it should have a higher score than a vulnerability detected from the local segment.
Determining the appropriate score for a vulnerability is partly mathematical and partly a matter of requirements. It is not unlike the computation of risk:

risk = p(x) × ε × ρ

where p(x) is the probability of occurrence, ε is the loss expectancy from a single event, and ρ is the rate of occurrence per year. An example of a vulnerability score computation might be based on the following variables:
Severity of compromise (α)
Remote control of system = 100
Remote access to system = 75
Remote reconnaissance = 50
Local control of system = 50
Local access = 40
Local reconnaissance = 30
Ease of attack (β)
Easy (scripted) = 1
Medium = .75
Difficult = .5
Existence of exploit code in the wild (χ)
Exists = 1
Proof-of-concept only = .5
None = .25
So, the computation is a simple multiplication formula:

score = α × β × χ

This simple approach confines the score to a value between 3.75 and 100. That is, if “local reconnaissance” (30) is the severity, “difficult” (.5) is the ease of attack, and “none” (.25) is the state of exploit code availability, then:

score = 30 × .5 × .25 = 3.75
Should the severity of compromise become “remote control of system,” which is very bad, and all other factors remain the same, then the score rises to 12.5. By extension of this simple approach, the worst possible score is 100, with the β and χ factors equal to 1 and “remote control of the system” being the outcome.
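A minimal sketch of this scoring formula, using only the factor values defined above, reproduces the worked examples:

```python
# Factor tables from the example above.
SEVERITY = {                       # alpha
    "remote_control": 100,
    "remote_access": 75,
    "remote_recon": 50,
    "local_control": 50,
    "local_access": 40,
    "local_recon": 30,
}
EASE = {"easy": 1.0, "medium": 0.75, "difficult": 0.5}            # beta
EXPLOIT = {"exists": 1.0, "proof_of_concept": 0.5, "none": 0.25}  # chi

def score(severity: str, ease: str, exploit: str) -> float:
    """score = alpha * beta * chi"""
    return SEVERITY[severity] * EASE[ease] * EXPLOIT[exploit]

print(score("local_recon", "difficult", "none"))     # 3.75, the floor
print(score("remote_control", "difficult", "none"))  # 12.5
print(score("remote_control", "easy", "exists"))     # 100.0, the ceiling
```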
Many scoring methods are more sophisticated than this and consider additional factors, such as the length of time the vulnerability has been in existence. Furthermore, other scoring methods do not confine themselves to a simple scale of 0 to 100, but rather have no upper limit. This will naturally occur when unbounded numbers such as the age of a vulnerability are considered. Also, the example given here has very limited bounds because it uses simple multiplication; other methods prefer to make a clear distinction between that which is really bad and that which is of low risk by comparison. Operations such as squares and factorials can drive the score very high for greater risk distinction, but such scores may be more difficult to interpret. Whatever method is preferred, be sure that it can be transformed into a form suitable for input into any risk assessment methodology in use.

Customization and Integration



Although frequently overlooked or treated as an afterthought check box, custom development is a significant area for specifying requirements. Very often, those who implement VM initiatives discover too late that significant customization is needed to maximize the potential of their security systems. This can happen when there are change management systems, often custom-developed, around which entrenched internal processes are built. Since it is unlikely that a new change management system will be purchased, the ability to efficiently extract information from the VM system and initiate a change and/or incident process is critical.
At a high level, as previously described, when interfacing to a change or incident management system, the following items will need to be handled by the interfaces:
  • Extract vulnerability data from the VM system.
  • Initiate a change or incident in the internal system while retaining the reference provided by the VM system.
  • Provisionally close the incident and update the VM system. This will optionally start a verification assessment.
  • Let an event from the VM system trigger closure of the incident, or its reinitiation, depending on the outcome of verification.
Depending on the process and systems in your enterprise, these steps will vary, as will the data structures and processing required in the interface code. A clear understanding of the interface capabilities of the candidate vulnerability systems is essential prior to purchase. Once they are understood, assess what coding requirements there will be for each candidate. This will help highlight any potential shortcomings that may significantly limit overall effectiveness. For example, the planned internal VM processes may require that detailed recommendations on remediation be transmitted from the VM system to the change management system. If this critical data element is not provided in the ticket generation process, it may cause a critical gap in achieving IT security service goals. A sketch of such an interface follows.
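As an illustrative outline only, the following sketch follows the four interface steps listed above. The endpoint URLs, field names, and REST-style ticket API are hypothetical stand-ins for whatever the VM and change management products actually expose.

```python
import requests  # assumes the 'requests' package is installed

VM_API = "https://vm.example.com/api"        # hypothetical VM system endpoint
CHANGE_API = "https://itsm.example.com/api"  # hypothetical change system endpoint

def open_changes_for_new_vulnerabilities() -> None:
    # 1. Extract vulnerability data from the VM system.
    vulns = requests.get(f"{VM_API}/vulnerabilities?status=new").json()

    for v in vulns:
        # 2. Initiate a change, retaining the VM system's reference ID
        #    so the two systems can be reconciled later. The remediation
        #    recommendation is the kind of data element whose absence
        #    can create a critical gap.
        requests.post(f"{CHANGE_API}/changes", json={
            "vm_reference": v["id"],
            "summary": v["title"],
            "remediation": v["recommendation"],
        })

def on_change_closed(change: dict) -> None:
    # 3. Provisional closure: tell the VM system remediation is done,
    #    which can trigger a verification assessment.
    requests.post(f"{VM_API}/vulnerabilities/{change['vm_reference']}/verify")

def on_verification_result(vm_ref: str, still_present: bool) -> None:
    # 4. Close the change for good, or reopen it if the
    #    vulnerability is still present after verification.
    action = "reopen" if still_present else "close"
    requests.post(f"{CHANGE_API}/changes/{vm_ref}/{action}")
```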

System Integration



It is not unusual for an organization to simply purchase and install technology without consideration of its active role in the rest of the IT infrastructure. The end result is often a system with greatly limited technical functionality requiring significantly greater effort to integrate with other systems or major process changes to compensate.
There are numerous ways that a VM system can provide data to maximize the benefits to security and operations. Among those systems to be considered are change and incident management. These systems are commonly found in mid-size to large organizations trying to maturely and consistently extract higher performance from IT services.


Change Management

Change management is a critical component of the remediation process. As previously discussed, when a vulnerability is found, the details can be sent automatically to a change management system to initiate a change process.
The ability of the VM system to be interfaced with change management is never as straightforward as vendors tend to suggest. Some custom development is almost always needed. Development of this type is typically for the conversion of data types, format, and communication method. The vendor’s product may deliver SNMP traps for newly discovered vulnerabilities; yet the change system will require XML or use an e-mail listener process. Someone will inevitably have to code an interface between these two completely different technologies. The following common data elements are exchanged, whatever the interface method:
  • vulnerability details sent to change system,
  • vulnerability event identifier sent to the change system,
  • change status update sent to vulnerability system once remediation is complete, and
  • reopening or re-creating the change if the vulnerability is still present.


Incident Management

Similar to interfacing to change management, incident management may be a part of the portfolio of an operational support system in your organization. Interfacing issues are similar and will vary by process. In some cases, organizations prefer to handle changes in a change system but incidents are reserved for a very specific set of circumstances.
On the other hand, it is not unusual to generate an incident for tracking the vulnerability and remediation process, and then to use the change management system to track only the impact on systems, resources, and processes when that change is made.
In either case, and as previously mentioned, the completion of a change will initiate either the closure of an incident or the immediate notification of completion to the VM system. The VM system will then reassess the target to confirm that remediation was successful.


Intrusion Prevention

Some vendors have attempted to integrate their products with IPSs, only to find that IPS vendors have ambitions of competing against the VM vendor. This has led to a few failed integration attempts that would otherwise have greatly benefitted customers. If you are lucky enough to have an IPS that is compatible with vulnerability data from your selected vendor, then by all means take a hard look at the benefits; otherwise, expect many obstacles.
Standards of format compatibility are a significant obstacle. Although we discuss standards at length in this book, few vendors fully support them in the IPS world, or the vulnerability world, for that matter. At this time, the two industries are so far apart in interoperability that only a demanding customer base will be able to influence change. However, the basic idea is that if a new vulnerability is discovered on a target system, then the appropriate upstream IPS will be notified to activate the signature that would protect the asset until it is properly remediated.
There are two major benefits to this type of integration. First, the vulnerable asset is shielded until full remediation can be completed, which lowers the overall dynamic vulnerability level in the environment. Second, the IPS optimizes its performance, since only the necessary rules are activated above the standard policy implementation. This is particularly important when a very expensive IPS is heavily loaded on a busy DMZ segment and a hardware upgrade does not offer sufficient cost–benefit.
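Absent a common standard, such an integration is usually a small custom bridge. The sketch below is purely illustrative: the IPS management endpoint and the mapping from vulnerability ID to shielding signature are hypothetical, and building that mapping is, in practice, the hard part.

```python
import requests

IPS_API = "https://ips.example.com/api"  # hypothetical IPS management endpoint

# Hypothetical mapping from vulnerability identifiers to the IPS
# signatures that shield them while remediation is pending.
CVE_TO_SIGNATURE = {
    "CVE-2024-0001": "sig-10423",  # placeholder entry
}

def shield_until_remediated(cve_id: str, asset_ip: str) -> None:
    """Activate the protecting signature on the IPS upstream of the asset."""
    sig = CVE_TO_SIGNATURE.get(cve_id)
    if sig is None:
        return  # no shielding signature known for this vulnerability
    requests.post(f"{IPS_API}/signatures/{sig}/activate",
                  json={"scope": asset_ip})

def on_remediation_verified(cve_id: str, asset_ip: str) -> None:
    """Deactivate the extra rule once remediation is verified, keeping
    the active rule set minimal for IPS performance."""
    sig = CVE_TO_SIGNATURE.get(cve_id)
    if sig is not None:
        requests.delete(f"{IPS_API}/signatures/{sig}/activate",
                        params={"scope": asset_ip})
```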


SEIM

Security event and incident management (SEIM) integration is generally easier to accomplish. SEIM vendors make compatibility with myriad data sources an important selling point. The collection of data is their strongest suit and it is very likely that they will accept data from VM systems with little modification.
If your organization has an SEIM program, you would be remiss in not providing it this important data feed. Where IPS integration is not possible, the SEIM program can at least use the data to determine the severity of an incident and escalate it accordingly. If your SEIM vendor does not readily support the major VM products, then that vendor was likely a poor choice.

Active Scanning Architecture



The basic architecture of active scanning, and the considerations concerning consumption of bandwidth, were discussed earlier. One other key consideration is the management of several hardware devices. While this may seem trivial to an organization with hundreds or even thousands of servers, the staff maintaining the VM system is usually small and requires unique training. They rely on resources at other locations and in other departments to maintain devices with which those personnel are generally unfamiliar. Many of these devices have command lines, serial ports, and network requirements that may not be fully understood. Although many of the administrative responsibilities can be centralized and automated, there are inevitably malfunctions in the device or the environment that need to be corrected on-site.
For example, network connectivity can be lost at some point between the management server and the device itself. Despite all of the available tools, it may not be possible to determine the cause of the failure. A local engineer will have to check the physical connections, switch configuration, device power, and logical network configuration. This may involve plugging in a serial cable, configuring a terminal, logging in with local administrator privileges, and performing command-line functions. In all likelihood, the engineer has not done this in the six or eight months since the device was deployed. The VM operator will have to provide written or verbal instructions and receive some feedback. The use of a network KVM (keyboard video mouse) device is helpful but not perfect. The physical environment may still require inspection. If the network connection to the site is lost, then little can be done remotely.
Additionally, replacement of devices that have failed may be difficult but not for want of technical expertise. Some countries have import duties and restrictions on technologies that can extend the replacement cycle for months. Certain locations seem particularly unfriendly to commerce, particularly where technology is concerned. Russia, Venezuela, and even Mexico can be very resistant to receiving technology, to the detriment of their own citizens. It is even possible that final delivery in some locations may call for a small bribe to the delivery person. Ultimately, a virtual machine version of a product, if available, can be sent electronically and made operational overnight.
With an understanding of these issues, your plan will have to carefully consider the number of devices, location of each, skills of local personnel, languages, reliability of network, available bandwidth, power requirements, environmental conditions, customs procedures, and vendor presence and inventory. There are some clever ways to accommodate deficiencies in many of these components. Depending on your specific challenges, one or more of these strategies will help.


Determining the Number of Devices

Determine the number of devices required and include a growth factor; volume is simply more difficult to manage. Determine the number of networks, the strength of network connections, and the number of targets. From this information, an estimate can be made of the total load for scanning:

total scan load ≈ Σ over all networks (hosts × time per target + overhead)
Then, estimate the amount of time required to scan a set of those targets with the candidate vendor’s product. A test of a scan under specific conditions provides the average time per host. Extensive testing is recommended since every environment is unique and will respond differently to the various approaches of vendors.
For example, a company has several networks in separate physical locations, as shown in Table 1. This table shows the amount of bandwidth provisioned, used, and available. To get an accurate estimate of the amount of time required to audit a network, two tests are performed at these sites to determine the average amount of time required to scan a host. Some simple math then yields a close estimate of how much time is overhead for gathering and transmitting results to the reporting server. Two tests are required, with the largest practical target sample size. The key is to make sure the difference between the two samples is substantial enough to meaningfully calculate the per-target scan time. Following is the overall process for determining the number of required scanners:
  1. Review the networks and select a representative sample of sites, WAN connection types, bandwidths, and host types. The type of WAN connection and its bandwidth will affect the response time of scan activities and, on a large scale, the total time. Connections such as frame relay will show longer response times than higher-speed Asynchronous Transfer Mode (ATM) circuits or dedicated private lines. Bandwidth will have an impact as well, but only up to a limit: scanning activity is bounded by device hardware and protocol connection limits.
  2. Select a sample size (S1) of targets to scan in the representative networks. This sample size should be at least 10 percent of the total number of hosts but not less than 10. A second sample size (S2) should also be taken that is larger than the first by at least 50 percent and not less than 20. This will ensure that the samples are sufficient to show a meaningful difference in the results.
  3. Capture measurements to complete Table 1. Calculate the per-target scan time (TT) by subtracting the time to scan the first sample (T1) from the time to scan the second sample (T2) and dividing by the difference in sample sizes: TT = (T2 − T1) / (S2 − S1). This is the time required to scan a single host, absent the overhead for gathering, formatting, and transmitting results (see the sketch after this list).
  4. Calculate the amount of time required for the scanning overhead (OH) mentioned earlier. The overhead for the larger sample is the total time required to audit S2 minus the time spent scanning the S2 targets themselves: OH = T2 − (S2 × TT). The point is that there is a significant difference between the time to scan targets and the time to complete all of the activities of the audit.
  5. Extrapolate the time to audit an entire network by multiplying the total number of hosts in the network by TT and adding the overhead: X = (hosts × TT) + OH. The final column in Table 1 shows this number. The sum of these figures, converted to hours, shows the time required if a single scanner were to audit all of the networks from a single location, one after another. Although this scenario almost never happens with so broad a geographical distribution, it does provide a feel for the amount of device time required.
  6. Decide on how often an audit of every target in the organization is needed. This will indicate the amount of time you have to allocate a single scanner for scanning targets and ultimately lead to the total number of scanners needed. A typical network should have audits performed at least weekly to be sure that newly discovered or reported vulnerabilities are captured on reports and that remediation activities are taking place in a timely fashion.
  7. Create a proposed schedule for auditing. Since many targets will have to be in use during a scan, a schedule will have to be created with the information from the previous step. With a schedule, you can determine how many targets can be audited by a single scanner over a day or week. Note that you will have to consider local time, office customs, and target availability when making this schedule. Figure 1 shows a sample schedule. Looking at each time slot for audits will show how many audits must take place at a given time. This sample has four time slots per day, each spanning a six-hour period on a Greenwich Mean Time (GMT) clock.

Figure 1: Audit schedule GMT chart.
  8. Get the vendor’s recommendation on how many simultaneous audits can be performed by one device and how much impact that will have on scan performance. For example, a scan of network A may take 45 minutes when performed alone. But, when performed with another simultaneous scan of a network of equal specification, the scan may take 30 percent longer on average, assuming none of the network devices in between introduces more delay.
  9. Estimate the number of devices needed. Referring again to Table 1, you can assess how many scanners are required by counting the number of simultaneous audits to be performed and dividing by the recommended number per device. In some cases, you can reduce the number of scanners by rearranging the audit schedule. N.B.: Always leave room for error and growth. This arrangement shows us that if an audit of these networks is performed from a central location, it is probably manageable with a single device, provided the device can conduct two audits simultaneously without significant performance degradation.
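The sketch referenced in steps 3 through 5 follows; it reproduces the two-sample arithmetic, using the HQ row of Table 1 as input.

```python
def per_target_time(t1: float, t2: float, s1: int, s2: int) -> float:
    """Step 3: TT = (T2 - T1) / (S2 - S1), in minutes per host."""
    return (t2 - t1) / (s2 - s1)

def overhead(t2: float, s2: int, tt: float) -> float:
    """Step 4: OH = T2 - (S2 * TT), the fixed cost of gathering,
    formatting, and transmitting results."""
    return t2 - s2 * tt

def network_estimate(hosts: int, tt: float, oh: float) -> float:
    """Step 5: X = (hosts * TT) + OH, time to audit a whole network."""
    return hosts * tt + oh

# HQ row from Table 1: 10 targets in 9 min, 20 targets in 11.5 min.
tt = per_target_time(9, 11.5, 10, 20)  # 0.25 min per target
oh = overhead(11.5, 20, tt)            # 6.5 min of overhead
print(network_estimate(450, tt, oh))   # 119.0 min for all 450 HQ targets
```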
Table 1: Scan Time Estimation Chart

LOCATION        TARGETS   BWDTH    UTIL   AVAIL    AUDIT TIME,        AUDIT TIME,        TIME PER       ESTIMATE, ALL
                          (kbps)   (%)    (kbps)   10 TARGETS (min)   20 TARGETS (min)   TARGET (min)   TARGETS (min)
HQ              450       N/A      N/A    N/A      9                  11.5               0.25           119.0
Dallas          260       2000     72     560      13                 15                 0.2            63.0
Hong Kong       241       4000     45     2200     12                 16                 0.4            104.4
Santiago        75        384      80     76.8     11                 15                 0.4            37.0
Mexico City     245       4000     52     1920     8                  13                 0.5            125.5
Chicago         325       4000     62     1520     11                 14.5               0.35           121.3
Atlanta         175       1544     70     463.2    13                 16                 0.3            62.5
San Francisco   310       4000     55     1800     9                  12                 0.3            99.0

Total hours required: 12.2
If it were determined that, due to operational constraints, the audit of “Chicago-Mfg” could only be performed on Wednesdays, then there may be contention for a time slot. Negotiating compromises among the operational requirements of each site can resolve such issues and further minimize the number of devices. Also note that server networks can often be audited on weekends and late evenings, providing some relief so that user workstation networks can be audited during the daytime.
Every network is unique, and results may vary. Testing is an effective tool but not a perfect predictor of the idiosyncrasies of networks and systems. Leaving some flexibility in requirements for adaptation is essential. Some advance planning and testing can save a lot of money when making a purchase. It will also enhance the credibility of the system and its operators in the eyes of the user population. Once the scanners are deployed, any top-end VM system should provide a means to report on scanner resource utilization so that the system manager can reallocate resources in a changing environment.


Select Locations Carefully

The location selection should be based on:
  • the number of targets to be scanned locally,
  • the bandwidth available to other adjacent networks,
  • the skill availability and skill level of the support staff for the location, and
  • the regulatory restrictions on IT issues such as privacy and union work rules.
Naturally, other factors such as shipping costs and taxes should also be considered. Basic deployment logistics such as power supplies, rack space, and physical network layout should be considered as well but are typically minor factors.

Agent-Based Architecture



With agents, it is necessary to install software on every host to be evaluated, with the exception of some agents capable of performing network-based audits of adjacent systems. In some cases, vendors will charge a higher price for server agents than for desktop agents. There is no significant functional difference between the two that merits this pricing model; it is rather a matter of sales volume and cost recovery. Some VM architecture strategies deploy agents solely on servers to minimize the impact on network connections, on the assumption that the agent itself will not impact other software. Vendors often recognize this, and pricing is understandably higher.
The use of agents can become more complex when virtual machines are used. The agent can be installed on several guest OSs, yet they are deployed on a single physical server. The impact on the hardware is multiplied. One should seek some guidance from the vendor on how significantly the agent can impact the CPU and memory resources of a virtual machine and the underlying host OS and hardware. It is also likely that the vendor will charge for each OS and not per CPU core.
Since agents have to be deployed and maintained on every host under assessment, the solution is less prone to network limitations; the challenge is instead operating the software on every host. Agents can complicate installation and deployment but virtually eliminate shipping costs. Organizations with a large, mobile sales force will benefit greatly from an agent-based system, since the WAN connection speed is unpredictable and visits to a local office are infrequent.

Architecture



The architecture of a VM system is an important part of its ability to work within your corporate physical structure. It will also probably be the largest factor affecting the cost of the system. Considerations such as geography, office size, network equipment, security architecture, WAN bandwidth, and government regulations will greatly impact your decision regarding vendors, type of product, and cost.
The last item, cost, is our starting point for making a decision on vendors. Begin with a survey of the VM market, pricing, and products. Determine the approximate cost per active IP address for your organization. An estimate of the number of IP addresses is sufficient if you can stay within 10 percent. Multiply the average price per IP in the market by the number of IP addresses in your organization, then add 20 percent for annual maintenance. Then add 10 percent for shipping and customs if you have foreign offices, and another 10 percent for consulting services if you have more than 15,000 IPs. For example, 20,000 IPs × $8 per IP = $160,000; then add 10 percent for shipping and 10 percent for consulting = $32,000. Total fixed costs: $192,000. Then calculate the recurring cost of maintenance: $32,000 annually ($160,000 × 0.20). The total cost of the system for the first year is $224,000.
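The arithmetic of this rule of thumb is easy to reproduce. The sketch below uses the same illustrative percentages and per-IP price as the example in the text.

```python
def first_year_cost(ips: int, price_per_ip: float,
                    foreign_offices: bool = True) -> float:
    """Rough first-year cost following the rule of thumb above."""
    license_cost = ips * price_per_ip
    shipping = 0.10 * license_cost if foreign_offices else 0.0
    consulting = 0.10 * license_cost if ips > 15_000 else 0.0
    maintenance = 0.20 * license_cost  # recurring annual maintenance
    return license_cost + shipping + consulting + maintenance

# The example from the text: 20,000 IPs at $8 per IP.
print(first_year_cost(20_000, 8.0))  # 224000.0
```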
This is only a cost estimate and will be impacted significantly by the previously mentioned architectural factors. In an active scanning system, physical scanners must be purchased and installed in good vantage points in the network. In an extreme example, if your environment has 100 offices with the only good scanning point being in each office, that is, no strong WAN links, then you will end up purchasing up to 100 devices. These devices can be expensive, depending on the product. There are sometimes less expensive solutions, including a virtual machine version of the product or the use of host agents with no per-instance charge. However, there will be some hardware costs for the buyer who supplies his or her own. It may be more cost-effective if CPU cycles are available in a virtual machine.


Passive Architecture

Passive scanners that observe network traffic are even more subject to placement constraints, since the traffic on each inspected network switch must be copied to the device. For large but complex central offices, a passive scanner is a very workable solution: connections there are typically very fast and can withstand the loads imposed by the remote switched port analyzer (RSPAN) function. For organizations that have several large offices with ample WAN bandwidth, active scanners can be an excellent solution. Since bandwidth is becoming less expensive and more abundant in remote areas of the world, careful planning and scheduling of audits can make scanning more cost-effective, with fewer scanners and lower shipping and customs costs.
Centralizing the VM function as much as possible can result in considerable savings.
