Agent Architecture



Agents typically execute one or more services in the background of a system, with system privileges sufficient to carry out their purposes. These services normally consume very few CPU resources except when asked to perform a major task. Usually, at least two services are running at any given time, with additional services active depending on the architecture of the product. Vulnerability assessment agents are inextricably linked to the audit of their target, whereas appliances can be used for more than one audit method.
As shown in Figure 1, one service listens on the network for configuration and assessment instructions from a controlling server. This same service or an additional service may be used to communicate assessment results back to the server. The second service is one that performs the actual vulnerability assessment of the local host and, in some cases, adjacent hosts on the network.


Figure 1: Agent architecture.
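The sketch below, in Python, illustrates this two-service arrangement under simplifying assumptions: a hypothetical line-oriented JSON protocol on TCP port 5001 and a placeholder run_local_checks routine stand in for the vendor-specific, authenticated, and encrypted channel a real agent would use.

# Minimal sketch of the two-service agent design described above. The protocol,
# port, and check logic are assumptions for illustration only.
import json
import queue
import socketserver
import threading

jobs = queue.Queue()          # instructions received from the controlling server
results = queue.Queue()       # assessment results awaiting delivery

class InstructionHandler(socketserver.StreamRequestHandler):
    """Service 1: listens on the network for configuration/assessment instructions."""
    def handle(self):
        for line in self.rfile:
            if not line.strip():
                continue
            jobs.put(json.loads(line))        # queue the instruction for the assessor
            self.wfile.write(b'{"status": "accepted"}\n')

def assessment_worker():
    """Service 2: performs the actual vulnerability assessment of the local host."""
    while True:
        job = jobs.get()                      # block until an instruction arrives
        findings = run_local_checks(job)      # placeholder for the real audit logic
        results.put({"job": job.get("id"), "findings": findings})
        # Delivery of queued results back to the server is omitted from this sketch.

def run_local_checks(job):
    # Stand-in for real checks (patch levels, configuration items, and so on).
    return [{"check": "example-placeholder", "vulnerable": False}]

if __name__ == "__main__":
    threading.Thread(target=assessment_worker, daemon=True).start()
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5001), InstructionHandler) as srv:
        srv.serve_forever()                   # supervisory/listening service

Keeping the listener and the assessor in separate threads mirrors the separation described above: the supervisory service stays responsive while audits run.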
The basic kinds of agents include the following:
  • Autonomous: They do not require constant input and operation by another system or individual.
  • Adaptive: They respond to changes in their environment according to some specified rules. Depending on the level of sophistication, some agents are more adaptive than others.
  • Distributed: Agents are not confined to a single system or even a network.
  • Self-updating: Some consider this point not to be unique to agents, but for VM it is an important capability. Agents must be able to retrieve and apply the latest vulnerability checks and auditing capabilities (a minimal sketch of this behavior follows this list).
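As a minimal sketch of the self-updating behavior, the following routine checks a hypothetical content feed, verifies a SHA-256 digest, and returns new check content for the agent to apply. The feed URL, JSON layout, and naive version comparison are all assumptions; commercial agents use signed, vendor-specific update channels.

# Sketch of agent self-update against a hypothetical feed; not any vendor's API.
import hashlib
import json
import urllib.request

FEED_URL = "https://vm.example.com/agent/checks.json"   # hypothetical feed location

def fetch_latest_checks(current_version: str):
    """Download the latest vulnerability checks if they are newer than ours."""
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        feed = json.loads(resp.read())
    if feed["version"] <= current_version:               # naive string comparison for the sketch
        return None                                      # already up to date
    body = bytes.fromhex(feed["payload_hex"])            # hypothetical packaging of the content
    if hashlib.sha256(body).hexdigest() != feed["sha256"]:
        raise ValueError("check content failed integrity verification")
    return feed["version"], body                         # caller applies the new content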
A VM agent is a software system, tightly linked to the inner workings of a host, that recognizes and responds to changes in the environment that may constitute a vulnerability. VM agents function in two basic roles. First, they monitor the state of system software and configuration for vulnerabilities. Second, they perform vulnerability assessments of nearby systems on behalf of a controller. By definition, agents act in a semiautonomous fashion: they are given a set of parameters that govern their behavior and carry out those actions without further instruction. An agent does not need to be told each time it should assess the state of the current machine. It may not even be necessary to instruct the agent to audit adjacent systems.
Unlike agents, network-based vulnerability scanners are typically provided detailed instructions about when and how to conduct an audit. The specifics of each audit are communicated every time one is initiated. By design, agents are loosely coupled to the overall VM system so they can minimize the load and dependency on a single server.
The method of implementation involves one or more system services along with a few on-demand programs for functions not required on a continuous basis. For example, the agent requires a continuous supervisory and communication capability on the host. This enables it to receive instructions, deliver results, and execute audits as needed. Such capabilities take very little memory and few processor cycles.
Specialized programs are invoked as needed to perform more CPU-intensive activities such as local or remote network audits. These programs in effect perform most of the functions found in a network vulnerability scanner. Once an audit completes, the information gathered is passed on to the supervisory service, which relays it to the central reporting and management server.
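A rough sketch of that hand-off follows. The netaudit program name, its --json flag, and the results URL are invented for illustration, but the shape is the same: launch the CPU-intensive scan only when needed, then forward whatever it produces.

# Sketch of the supervisory service invoking an on-demand scan program and
# relaying its output; the scanner binary and server endpoint are assumptions.
import json
import subprocess
import urllib.request

def run_scan_and_report(targets, server_url="https://vm.example.com/results"):
    """Launch the CPU-intensive audit only when needed, then forward the results."""
    proc = subprocess.run(
        ["/opt/agent/bin/netaudit", "--json", *targets],   # hypothetical scan program
        capture_output=True, text=True, timeout=3600,
    )
    findings = json.loads(proc.stdout or "[]")
    req = urllib.request.Request(
        server_url,
        data=json.dumps({"targets": targets, "findings": findings}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.status                                 # 200 means the server accepted it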
The detection of local host vulnerabilities is sometimes carried out by auditing all configuration items on the target host in a single, defined process during a specific time window. An alternative approach is to monitor the configuration state of the machine continuously. When a change is made, the intervening vulnerability assessment software evaluates the change for vulnerabilities and immediately reports it to the management server. This capability is intertwined with the growing endpoint security market. The detection of configuration changes, combined with the added capability of applying security policy, blurs the lines among endpoint protection, configuration compliance, and vulnerability auditing. This combination will ultimately lead to tighter, more responsive security.
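A continuous-monitoring loop can be sketched as simply as hashing a set of watched configuration items and reporting any difference immediately. The watched paths below are examples only, and a production agent would hook the operating system's change-notification facilities rather than polling files on a timer.

# Sketch of continuous configuration monitoring via hash comparison; the watched
# items and reporting callback are placeholders.
import hashlib
import time
from pathlib import Path

WATCHED = [Path("/etc/ssh/sshd_config"), Path("/etc/sudoers")]  # example items

def snapshot():
    """Hash each watched configuration item so changes are detectable."""
    return {p: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in WATCHED if p.exists()}

def monitor(report, interval=30):
    """On any change, evaluate and report immediately rather than waiting
    for the next scheduled audit window."""
    baseline = snapshot()
    while True:
        time.sleep(interval)
        current = snapshot()
        for path, digest in current.items():
            if baseline.get(path) != digest:
                report({"item": str(path), "event": "configuration changed"})
        baseline = current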

Hardware: The Appliance Model



The hardware appliance model is exactly that: hardware with built-in software to perform the desired vulnerability scans. The devices are typically placed throughout a network and report back to a central server. The scanning appliances are usually complete but simple computer systems. A typical design has an operating system (OS), supporting software modules, and specialized code written by the developers to perform scans and communicate results. Some vendors use open-source tools, while others use a commercial OS and components.
One major advantage of a hardware-based system is that the vendor will have in-depth knowledge about the configuration of the host. The vendor takes responsibility for the maintenance and stability of that configuration. Any failure of the software to perform as advertised should be addressed in the client–vendor relationship.
In deployment, the hardware approach has the disadvantage of having to be shipped to the location and installed by someone who may not be qualified to do so. In most cases, however, deployment is not so complex. If the local technologist can configure a typical host computer, he or she can configure a vulnerability scanner. If you are uncertain about the capabilities of local personnel, then you may be well-advised to preconfigure the device and provide simple installation instructions.
In most designs, each scanner reports back to a central server. The vulnerability and compliance information collected is transmitted to the server for analysis and reporting. Devices also receive assessment instructions over the network. Those instructions may be delivered by polling, on-demand connection, or reverse polling. The differences among these strategies are small but can be important, depending on your network security architecture.
Polling is the process by which the central server periodically contacts the vulnerability scanners associated with it. Each scanner is typically contacted through a TCP port using authentication methods that keep the entire conversation encrypted. The devices polled may be only those for which the server has a job prepared or in progress. The server checks the status of each scanner to see whether any data is available or the unit is ready to accept a job. This approach can be cumbersome but has the advantage of requiring only connections that originate from the server. In some implementations, a scanner is polled only when it has scheduled work, which means the server does not know the scanner's status until that time. Most vendors that poll, however, poll all scanners. Figure 2 illustrates the simple polling approach.
 
Figure 2: The simple polling approach.
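The following server-side sketch captures the polling pattern: the server originates every connection, asks each scanner for its status, and hands out a queued job if the scanner is ready. The scanner hostnames, port, and JSON status/dispatch exchange are assumptions made for illustration.

# Sketch of server-side polling over TLS; the message format is hypothetical.
import json
import socket
import ssl

SCANNERS = [("scanner1.example.com", 5001), ("scanner2.example.com", 5001)]
PENDING_JOBS = {"scanner1.example.com": {"id": 42, "targets": ["10.0.0.0/24"]}}

def poll_all():
    """The server originates every connection: ask each scanner for its status,
    then hand out a job if one is queued and the scanner is ready."""
    ctx = ssl.create_default_context()
    for host, port in SCANNERS:
        with socket.create_connection((host, port), timeout=10) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as conn:
                conn.sendall(b'{"op": "status"}\n')
                status = json.loads(conn.recv(65536))     # one recv suffices for this sketch
                if status.get("ready") and host in PENDING_JOBS:
                    conn.sendall(json.dumps(
                        {"op": "run", "job": PENDING_JOBS.pop(host)}).encode() + b"\n")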
Reverse polling is the process whereby each scanner contacts the server on a regular basis. Should there be a job scheduled for the scanner, it would then be provided. The same strong authentication and encryption methods apply. The scanner will send the results of the scan back to the central server either during the scan or at the conclusion, depending on the software designer’s choice. This approach has the added advantage of allowing the scanner to complete a local job even if the connection with the server is lost. The scan results may simply be cached until a connection can be re-established.
Reverse polling also has an advantage when the scanner is deployed in a secure zone where inbound connections to the scanner are undesirable, in order to limit possible external connections. It becomes a disadvantage, however, when the scanner is deployed outside the organization's boundaries, because the security infrastructure must then accommodate connections originating from the scanner.
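A scanner-side sketch of reverse polling, including the result caching described above, might look like the following. The /checkin and /results endpoints and the JSON payloads are assumptions, not any vendor's actual protocol.

# Sketch of scanner-side reverse polling with local result caching; endpoints
# and payloads are hypothetical.
import json
import time
import urllib.error
import urllib.request

SERVER = "https://vm.example.com"        # assumed server base URL
CACHE = []                               # results held while the server is unreachable

def post(path, payload):
    req = urllib.request.Request(
        SERVER + path, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read() or b"null")

def checkin_loop(scanner_id, run_job, interval=300):
    """The scanner contacts the server on a schedule, flushes any cached
    results, and runs whatever job the server has queued for it."""
    while True:
        try:
            while CACHE:                         # re-deliver results cached during an outage
                post("/results", CACHE[0])
                CACHE.pop(0)
            job = post("/checkin", {"scanner": scanner_id})
            if job:                              # a job was scheduled for this scanner
                CACHE.append({"scanner": scanner_id, "findings": run_job(job)})
        except urllib.error.URLError:
            pass                                 # connection lost; results stay cached
        time.sleep(interval)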
