A path through the network with a low combined historical immunity value represents a security issue; if such a path exists, the system administrator should consider risk mitigation strategies focused on that path.

To do that, we build a minimum spanning tree for host d over its segment of the SCG. The weight of this minimum spanning tree represents how vulnerable this segment is to an attack.

The more vulnerable the system is, the higher this weight will be. There are several algorithms for finding a minimum spanning tree in a directed graph [5], running in O(n^2) time in the worst case. This weight actually denotes the damage possible through d. We use the following equation to calculate SBE_d. If the service s_d present in host d is patched, then this probability can be derived directly from the expected risk ER(s_d) defined previously in (5).
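As an illustration of the spanning-tree weighting, the following is a minimal sketch. It assumes each edge of the host's SCG segment carries a damage-related weight (so a heavier tree indicates a more exposed segment) and, for simplicity, runs Prim's algorithm over an undirected view of the segment rather than a directed-MST (arborescence) algorithm such as Chu-Liu/Edmonds; all node names and weights are hypothetical.

```python
import heapq

def mst_weight(adj):
    """Total weight of a minimum spanning tree of an undirected weighted graph.

    adj maps a node to a list of (neighbour, weight) pairs. Here a node is a
    host in d's segment of the SCG and a weight is the damage-related value
    attached to that connection, so a heavier tree means a more exposed segment.
    """
    start = next(iter(adj))
    visited = set()
    frontier = [(0.0, start)]  # (edge weight, node)
    total = 0.0
    while frontier:
        w, u = heapq.heappop(frontier)
        if u in visited:
            continue
        visited.add(u)
        total += w
        for v, wv in adj[u]:
            if v not in visited:
                heapq.heappush(frontier, (wv, v))
    return total

# Hypothetical three-host segment of the SCG
segment = {
    "web": [("app", 0.2), ("db", 0.7)],
    "app": [("web", 0.2), ("db", 0.4)],
    "db":  [("web", 0.7), ("app", 0.4)],
}
print(mst_weight(segment))  # 0.2 + 0.4 = 0.6
```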

If the service is unpatched and there are existing vulnerabilities in service s_d of host d, then the value of P(d) is 1. The equation is formulated such that it provides us with the expected cost resulting from attack propagation within the network. Although spurious traffic may not exploit vulnerabilities, it can potentially cause flooding and DDoS attacks and therefore poses a security threat to the network.
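A minimal sketch of this breach-probability rule and an illustrative cost combination follows; the clamp of ER(s_d) into [0, 1] and the weighted-sum form of the propagation cost are assumptions, since the paper's equation (5) and the SBE_d formula are not reproduced here.

```python
def breach_probability(patched, has_known_vulns, expected_risk):
    """P(d): probability that host d can be breached through service s_d.

    Rule from the text: an unpatched service with existing vulnerabilities is
    treated as certainly exploitable (P = 1); otherwise the probability is
    derived from the expected risk ER(s_d). Clamping ER into [0, 1] is an
    assumption made for this sketch.
    """
    if not patched and has_known_vulns:
        return 1.0
    return max(0.0, min(1.0, expected_risk))

def expected_propagation_cost(hosts):
    """Illustrative expected cost of attack propagation: the sum of each
    host's cost value weighted by its breach probability (not the paper's
    exact SBE_d equation)."""
    return sum(h["cost"] * breach_probability(h["patched"],
                                              h["has_vulns"],
                                              h["expected_risk"])
               for h in hosts)

hosts = [
    {"cost": 100.0, "patched": False, "has_vulns": True,  "expected_risk": 0.0},
    {"cost": 40.0,  "patched": True,  "has_vulns": False, "expected_risk": 0.3},
]
print(expected_propagation_cost(hosts))  # 100*1.0 + 40*0.3 = 112.0
```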

In order to accurately estimate the risk of spurious traffic, we must consider how much spurious traffic can reach the internal network, the ratio of spurious traffic to total capacity, and the average available bandwidth used in each internal subnet, taking into account the maximum per-flow bandwidth allowed by the firewall policer. (We exclude here spurious traffic that might be forwarded only to honeynets or sandboxes for analysis purposes.)

We take the minimum of all the residual capacities associated with host d to reflect the maximum amount of spurious traffic entering the network, thereby measuring the worst-case scenario. This is the spurious residual risk after considering the use of per-flow traffic policing as a counter-measure and assuming that each host within the network has a different cost value associated with it.
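The following minimal sketch illustrates this worst-case bottleneck calculation; the way the per-host cost and subnet capacity are combined into a single score is an assumption, not the paper's exact equation, and all values are hypothetical.

```python
def max_spurious_inflow(residual_capacities_for_host):
    """Worst-case spurious traffic that can reach a host: the minimum of all
    residual capacities associated with it (the bottleneck still admits this
    much spurious traffic)."""
    return min(residual_capacities_for_host)

def spurious_residual_risk(hosts, subnet_capacity):
    """Illustrative spurious-risk score: scale each host's cost by the
    fraction of subnet capacity that worst-case spurious traffic could
    consume. `hosts` maps a host name to (residual capacities, cost)."""
    risk = 0.0
    for residuals, cost in hosts.values():
        risk += cost * (max_spurious_inflow(residuals) / subnet_capacity)
    return risk

# Hypothetical example: two hosts behind a per-flow policer (capacities in Mbps)
hosts = {
    "web": ([40.0, 25.0, 60.0], 0.8),
    "db":  ([10.0, 35.0], 1.0),
}
print(spurious_residual_risk(hosts, subnet_capacity=100.0))  # 0.8*0.25 + 1.0*0.10 = 0.3
```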

When we use the severity score of a vulnerability, it should be multiplied by the EF to account for the exposure of the service to the network.

We call this the Network Exposure Factor (EF), and it is calculated from the fraction of the total address space to which a service is exposed. A service that is exposed to the entire possible address space therefore receives the maximum value. Thus the range of this factor starts from a minimum value of 1 for a service totally hidden from the network and reaches a maximum value of 2 for a service exposed to the whole address space.

Of course, if we can identify which parts of the address space are riskier than others, we can modify the numerator of (17) to be a weighted sum. Therefore, if this metric has a high value, it indicates that the network should be partitioned to minimize communication within the network, and attack propagation as well.
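A small sketch of the exposure factor follows; the form EF = 1 + (exposed fraction) is an assumption consistent with the stated range of 1 (fully hidden) to 2 (fully exposed), and the weighted variant mirrors the weighted-sum suggestion above. Block names and weights are hypothetical.

```python
def network_exposure_factor(exposed_fraction):
    """EF sketch: 1 plus the fraction of the address space the service is
    exposed to, matching the stated range of 1 (fully hidden) to 2 (fully
    exposed)."""
    if not 0.0 <= exposed_fraction <= 1.0:
        raise ValueError("exposed_fraction must lie in [0, 1]")
    return 1.0 + exposed_fraction

def weighted_exposed_fraction(exposed_blocks, block_risk_weights):
    """Weighted variant of the numerator suggested in the text: riskier parts
    of the address space count for more."""
    total = sum(block_risk_weights.values())
    exposed = sum(block_risk_weights[b] for b in exposed_blocks)
    return exposed / total

weights = {"internet": 3.0, "partner-net": 1.0, "internal": 0.5}
f = weighted_exposed_fraction(["internet"], weights)   # 3.0 / 4.5
print(network_exposure_factor(f))                      # about 1.67
```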

The possible risk mitigation measures that can be taken include (1) redesigning the network by introducing virtual LANs. In the case of SPR, a high score for this metric indicates that the firewall rules and policies require fine tuning. When the score of this measure rises beyond the threshold level, ROCONA adds additional rules to the firewall.

The allowable IP addresses and ports need to be checked thoroughly so that no IP address or port is allowed unintentionally.

We have used the Java programming language for this implementation. The tool can run as a daemon process and therefore provide system administrators with periodic risk updates.

It provides risk scores for each of the five components. The measures are presented as risk gauges, with the risk shown as a value on a fixed scale, as can be seen from the corresponding figure. Users can set up different profiles for each run of the risk measurement. Such parameters include the following (illustrated in the sketch after this list):

1. The a1 and a2 weights required for the measurement of EVM.
2. The decay factor b used in the measurement of AHVM.
3. The network topology file describing the interconnections between the nodes within the network.
4. Options for manually providing the list of CVE vulnerabilities present in the system, or for automatically extracting them from third-party vulnerability scanning software.
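As a rough illustration only, a run profile covering these parameters might look like the following; the keys, file name, and values are hypothetical, and the actual configuration format of the Java tool is not described in the paper.

```python
# Hypothetical run profile; parameter names mirror the list above.
profile = {
    "evm_weights": {"a1": 0.6, "a2": 0.4},    # a1, a2 weights for EVM
    "ahvm_decay_b": 0.005,                    # decay factor b for AHVM
    "topology_file": "network-topology.xml",  # node interconnection description
    "vulnerability_source": {
        "mode": "scanner",                    # "manual" or "scanner"
        "scanner": "nessus",                  # only Nessus is supported
    },
}
```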

The current version can only use Nessus for the automatic option. After the tool completes its run, it provides the system administrator with a detailed view of the measurement process; all the details can be seen in the Details tab, and all this information can be saved for future reference or for comparison purposes.

The client side is installed on the individual computers in the system, whereas the server part is installed for the administrator of the system.

Each client communicates with the server to get the parameter values provided by the administrator. Third-party software must also be installed so that the client program can extract the services currently running on a particular computer. The client side then sends the scores for the individual components to the server, where all the scores are combined to provide a unified score for the whole network system.

We deployed our tool on two of our computer systems in The University of Texas at Dallas Database and Data Mining Laboratory to perform a comparative analysis of the risk for both machines. Since both machines are in the same network and under the same firewall rules, the AP and SPR values have been ignored.

AOL Active Security Monitor (ASM) provides a score based on seven factors, including firewall, virus protection, spyware protection, and p2p software. Nessus, on the other hand, scans for open ports and checks how those ports can be used to compromise the system.

Based on this, Nessus provides a report warning the user of the potential threats. The comparison is provided in Table 1. As can be seen from the table, System B is more vulnerable than System A, and the same trend can be seen using the other two tools as well. We also performed comparisons between systems where both had the same services and software, but one was updated and the other was not. Since new vulnerabilities are found frequently, our measurements for the same system state change over time.

In such a case, however, both Nessus and AOL Active Security Monitor would provide the same scores, as their criteria of security measurement would remain unchanged. To the best of our knowledge, there is no existing tool that can perform such dynamic risk measurement, and therefore we limit ourselves to a comparison with these two tools. Here the scores themselves may not provide much information about the risk, but they can provide a comparison over time regarding the state of security of the system.

This allows effective monitoring of the risk to the system.

The component values assigned by this equation are monotonically decreasing functions of the components of the Total Vulnerability Measure of the system.

The parameters c1, c2, c3, c4 and c5 provide control over how fast the components of the Quality of Protection Metric (QoPM) decrease with the risk factors. The QoPM can be converted from a vector value to a scalar value by a suitable transformation, such as taking the norm or using weighted averaging. Intuitively, one way to combine these factors is to choose the maximum risk factor as the dominating one. Although we advocate generating a combined metric, we also believe that this combination framework should be customizable to accommodate user preferences.
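The following sketch shows one way such a metric could be computed; the exponential form of the per-component decrease is an assumption (the text only requires monotone decrease controlled by c1..c5), and the risk values and weights are hypothetical.

```python
import math

def qopm_components(risk_factors, c):
    """QoPM component sketch: each component is a monotonically decreasing
    function of the corresponding risk factor. The exponential form is an
    assumption; c_i sets how fast component i falls."""
    return [math.exp(-ci * ri) for ci, ri in zip(c, risk_factors)]

def qopm_scalar(components, weights=None, mode="weighted"):
    """Collapse the QoPM vector to a scalar: a weighted average, the Euclidean
    norm, or the minimum component (which corresponds to letting the maximum
    risk factor dominate)."""
    if mode == "norm":
        return math.sqrt(sum(x * x for x in components))
    if mode == "min":
        return min(components)
    weights = weights or [1.0 / len(components)] * len(components)
    return sum(w * x for w, x in zip(weights, components))

# Five hypothetical risk factors and decay parameters c1..c5
risks = [0.3, 0.8, 0.1, 0.5, 0.2]
c = [1.0, 1.5, 1.0, 2.0, 1.0]
print(qopm_scalar(qopm_components(risks, c), mode="min"))
```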

Another important aspect of this score is that, for the vulnerability measures, a higher value indicates higher risk, whereas in the case of QoPM a higher score indicates a higher level of security, i.e., lower risk to the system.

In contrast to other similar research works that present studies on a few specific systems and products, we experimented using publicly available vulnerability databases. We evaluated and tested our metric, both component-wise and as a whole, on a large number of services and randomly generated policies.

In our evaluation process, we divided the data into training sets and test sets. In the following sections, we describe our experiments and present their results. The National Vulnerability Database (NVD) provides a rich array of information that makes it the vulnerability database of choice. For each vulnerability, the NVD provides the products and versions affected, descriptions, impacts, cross-references, solutions, loss types, vulnerability types, the severity class and score, etc. The NVD severity score has a range of 0 to 10. We present some summary statistics about the NVD database snapshot that we used in our experiments in Table 2.

The severity score is calculated using the Common Vulnerability Scoring System (CVSS), which provides a base score depending on several factors like impact, access complexity, required authentication level, etc.

We varied b so that the decay function falls off over different time horizons. Here, we first chose services with at least 10 vulnerabilities in their lifetimes, then gradually increased this lower limit and observed that the accuracy increases with the lower limit. As expected of a historical measure, better results were found when more history was available for the services, and the maximum accuracy was observed for the services with the most history; the corresponding graph illustrates this trend.
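To make the role of b concrete, here is a minimal sketch of a historically decayed vulnerability measure; the exponential decay form, the per-day time unit, and the sample history are assumptions, since the paper's exact AHVM formula is defined in its earlier sections.

```python
import math

def decayed_vulnerability_measure(history, b, now):
    """Historically decayed vulnerability measure in the spirit of AHVM: each
    past vulnerability contributes its severity score damped exponentially by
    its age, so older vulnerabilities count for less."""
    return sum(severity * math.exp(-b * (now - released))
               for released, severity in history)

# Hypothetical history: (release day index, CVSS severity) pairs, "now" = day 730
history = [(30, 7.5), (400, 9.0), (700, 5.0)]
print(decayed_vulnerability_measure(history, b=0.005, now=730))
```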

First, we conducted experiments to evaluate the different ways of calculating the probability in (3), comparing the accuracies obtained by the exponential distribution, the empirical distribution, and the time series analysis method. Here, we obtained the most accurate and stable results using the exponential CDF. The data used in the experiment for the exponential CDF were the interarrival times of the vulnerability exposures for the services in the database.

We varied the lengths of the training data and the test data set. We only considered those services that have at least 10 distinct vulnerability release dates in the 48-month training period. For Expected Severity, we used a similar approach.

For evaluating Expected Risk (ER), we combined the data sets for the probability calculation methods and the data sets of the expected severity. In the experiment for the exponential CDF, we constructed an exponential distribution for the interarrival time data and computed (3) using the formula in (7).
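The essence of that computation can be sketched as follows; the rate estimate (reciprocal of the mean interarrival time) and the sample gaps are illustrative, and the paper's equation numbers are not reproduced.

```python
import math

def vuln_probability_within(interarrival_days, horizon_days):
    """Probability that a new vulnerability appears within the given horizon,
    using an exponential distribution fitted to the historical interarrival
    times (rate = 1 / mean interarrival). This mirrors the exponential-CDF
    approach described in the text."""
    mean_gap = sum(interarrival_days) / len(interarrival_days)
    rate = 1.0 / mean_gap
    return 1.0 - math.exp(-rate * horizon_days)

gaps = [30, 75, 20, 130, 55]          # days between consecutive disclosures
print(vuln_probability_within(gaps, horizon_days=90))
```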

For each training set, we varied the value of T and ran validation for each value of T against the test data set. For the test data set size of 12 months, we observed the highest accuracy. The results of the Expected Severity experiment and of the Expected Risk experiment, together with the maximum accuracies obtained, are presented in the corresponding figures. It can be observed from these figures that the accuracy varies only slightly with the size of the training data set.

This implies that this method is not sensitive to the volume of training data available to it. For the Expected Severity, on the other hand, the accuracy increases quite sharply with decreasing training data set size. This means that the expectation calculated from the most recent data is actually the best model for the expected severity in the test data.

In the absence of other comparable measures of system security, we used the following hypothesis: if system A has a better QoPM than system B based on training period data, then system A will have fewer vulnerabilities than system B in the test period. We assume that the EVM component of the measure will be 0, as any existing vulnerability can be removed. In the experiment, we generated a set of random policies and evaluated this hypothesis for each policy. In our experiment, we varied the number of policies, starting from 50 and increasing in fixed steps, and in generating the policies we varied the number of services per system from 2 to 20 in increments of 2.
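One simple way to score how often this hypothesis holds over a set of systems or policies is a pairwise comparison, sketched below; the pairwise form, the tie handling, and the sample numbers are assumptions about the validation procedure, not the paper's code.

```python
from itertools import combinations

def pairwise_hypothesis_accuracy(systems):
    """Fraction of system pairs for which the hypothesis holds: the system
    with the better (higher) training-period QoPM also has fewer
    vulnerabilities in the test period. `systems` is a list of
    (qopm_train, test_vuln_count) pairs; ties are skipped."""
    correct = considered = 0
    for (qa, va), (qb, vb) in combinations(systems, 2):
        if qa == qb or va == vb:
            continue
        considered += 1
        if (qa > qb) == (va < vb):
            correct += 1
    return correct / considered if considered else float("nan")

# Hypothetical policies: (QoPM from training data, vulnerabilities in test period)
policies = [(0.82, 3), (0.55, 9), (0.71, 5), (0.40, 14)]
print(pairwise_hypothesis_accuracy(policies))  # 1.0 for this toy data
```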

We present the results obtained by the experiment in the corresponding figure. As mentioned previously, a policy can be regarded as a set of rules indicating which services are allowed to receive network traffic.

We set up different service combinations and consider them as separate policies for our experiments. We can observe from the graph that the accuracy is largely insensitive to the number of policies. However, the accuracy does vary with the number of services per policy: it decreases with an increasing number of services per policy.

This trend is more clearly illustrated in the corresponding figure. We also measured the running time of the algorithm; the simulations were run on a single machine, the running time was calculated for several hosts within each network, and the average running time per host was used in the results. The highest running time per host, even for the largest network tested, was very reasonable at less than 5 seconds. Thus, the algorithm scales gracefully in practice and is feasible to run for a wide range of network sizes.

Keeping this in mind, many organizational standards have evolved to evaluate the security of an organization. Details regarding the methodology can be found in [10]. In [11], NIST provides guidance to measure and strengthen security through the development and use of metrics, but their guideline is focused on individual organizations and does not provide any general scheme for quality evaluation of a policy.

There are also professional organizations as well as vulnerability assessment tools, including Nessus, NRAT, Retina, Bastille, and others [12].

These tools try to find vulnerabilities from the configuration information of the network in question. However, all of these approaches usually provide a report describing what should be done to keep the organization secure; they do not consider the vulnerability history of the deployed services or the policy structure. There has also been a great deal of research on security policy evaluation and verification. Attack graphs are another well-developed technique for assessing the risks associated with network exploits.

The implementations normally require intimate knowledge of the steps of the attacks to be analyzed for every host in the network [14, 15]. In [16], however, the authors provide a way to do so even when the information is incomplete.

Still, this setup causes the modeling and analysis using this model to be highly complex and costly. Mehta et al. follow this line of work, but their approach does not give any prediction of future risks associated with the system, and it does not consider the policy resistance of firewalls and IDS. There has been some research focusing on the attack surface of a network, including work by Mandhata et al.; another work based on the attack surface has been done by Howard et al.

Atzeni et al. also discuss security metrics. In [22], Pamula et al. propose a security metric based on the weakest adversary, i.e., the least amount of adversary capability required to compromise the network. Alhazmi et al. present related measurements for specific systems; our work is more general in this respect and utilizes publicly available data. There has also been some research that focuses on hardening the network. Wang et al. work in this direction and also attempt to predict future alerts in multistep attacks using attack graphs [25]. A previous work on hardening the network was done by Noel et al.

They use the graphs to find initial conditions that, when disabled, achieve the purpose of hardening the network. Sahinoglu et al. take a related quantitative approach. However, all of these works do not give the total picture, as they predominantly try to find existing risk and do not address how risky the system will be in the near future or how the policy structure impacts security.

Their analysis regarding security policies cannot be regarded as complete, and they lack flexibility in evaluating them. A preliminary investigation measuring existing vulnerabilities and some historical trends was presented in a previous work [29]; that work was still limited in analysis and scope. In this paper, we present a proactive approach to quantitatively evaluating the security of network systems by identifying, formulating, and validating several important factors that greatly affect it.

Our experiments validate our hypothesis that if a service has a highly vulnerability-prone history, then there is a higher probability that the service will become vulnerable again in the near future.

These metrics also indicate how the internal firewall policies affect the security of the network as a whole. Our experiments provide very promising results regarding our metric.

The accuracies obtained in these experiments vindicate our claims about the components of our metric and about the metric as a whole. Combining all the measures into a single metric and performing experiments based on this single metric also gives us an idea of its effectiveness.

Acknowledgments. The authors would like to thank Muhammad Abedin and Syeda Nessa of The University of Texas at Dallas for their help with the formalization and the experiments that made this work possible.

References

Alhazmi, O. In: Proceedings of the Reliability and Maintainability Symposium.
Lee, S.
Abedin, M.
Bock, F. Gordon and Breach.
Al-Shaer, E., Hamed, H.
Schiffman, M.
Rogers, R., Dykstra, T. Syngress Publishing, Inc.
Swanson, M.
Kamara, S.
Ammann, P.

Network security is defined as an activity designed to secure the usability and integrity of the network and information.

It includes both hardware and software technologies; its function is to target the various threats and block them from entering or spreading within the network.

Network security protects the network of an organization or firm that is furnishing the required services to its customers and employees.

Along with that, network security helps protect proprietary data from attack, and ultimately it protects people's reputations. When the internet had around 13 million users, only those experienced with exploit programs had the ability to attack a network, but in the present situation anyone can play the role of a hacker by downloading software from the internet. There are several common types of network attacks, and likewise many types of network security measures.

Thus, everyone should have knowledge of protective tools, because people can then at least protect their own networks from all such attacks. To be secure in the present threat environment, there is a need to provide more than just basic network security; defenses need to be equipped with technologies like deep learning, AI, etc.

This will make sure that upcoming threats are tackled adequately.


