Return On Security Investment (ROSI): A Practical Quantitative Model
Wes Sonnenreich
SageSecure, LLC
116 W. 23rd St. 5th Floor, NYC, NY 10011
A summary of Research and Development conducted at SageSecure by:
Wes Sonnenreich, Jason Albanese () and Bruce Stout ()
ABSTRACT
Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results of strategic importance to both decision makers and technology implementers.
1. INTRODUCTION
In a world where hackers, computer viruses and cyber-terrorists are making headlines daily, security has become a priority in all aspects of life, including business. But how does a business become secure? How much security is enough? How does a business know when its security level is reasonable? Most importantly, what's the right amount of money and time to invest in security?
Executive decision-makers don't really care whether firewalls or lawn gnomes protect their company's servers. Rather, they want to know the impact security is having on the bottom line. In order to know how much they should spend on security, they need to know:
• How much is the lack of security costing the business?
• What impact is lack of security having on productivity?
• What impact would a catastrophic security breach have?
• What are the most cost-effective solutions?
• What impact will the solutions have on productivity?
Before spending money on a product or service, decision-makers want to know that the investment is financially justified. Security is no different -- it has to make business sense. What decision-makers need are security metrics that show how security expenditures impact the bottom line. There's no point in implementing a solution if its true cost is greater than the risk exposure. This paper will present a model for calculating the financial value of security expenditures, and will look at techniques for obtaining the data necessary to complete the model.
2. A RETURN ON INVESTMENT MODEL FOR SECURITY
"Which of these options gives me the most value for my money?" That's the fundamental question that Return On Investment (ROI) is designed to answer. ROI is frequently used to compare alternative investment strategies. For example, a company might use ROI as a factor when deciding whether to invest in developing a new technology or extend the capabilities of their existing technology.
ROI = (Expected Returns - Cost of Investment) / Cost of Investment   (1)
To calculate ROI, the cost of a purchase is weighed against the expected returns over the life of the item (1). An overly simplistic example: if a new production facility will cost $1M and is expected to bring in $5M over the course of three years, the ROI for the three year period is 400% (net earnings of 4x the initial investment).
A simple equation for calculating the Return on Investment for a security investment (ROSI) is as follows:
ROSI = (Risk Exposure * % Risk Mitigated - Solution Cost) / Solution Cost   (2)
Let's see how this equation works by looking at the ROI profile for a virus scanner. ViriCorp has gotten viruses before. It estimates that the average cost in damages and lost productivity due to a virus infection is $25,000. Currently, ViriCorp gets four of these viruses per year. ViriCorp expects to catch at least 3 of the 4 viruses per year by implementing a $25,000 virus scanner.
Risk Exposure: $25,000, 4x per year = $100,000
Risk Mitigated: 75%
Solution Cost: $25,000
ROSI = ($100,000 * 75% - $25,000) / $25,000 = 200%   (3)
The virus scanner appears to be worth the investment, but only because we're assuming that the cost of a disaster is $25,000, that the scanner will catch 75% of the viruses and that the cost of the scanner is truly $25,000. In reality, none of these numbers are likely to be very accurate. What if three of the four viruses cost $5,000 in damages but one cost $85,000? The average cost is still $25,000. Which one of those four viruses is going to get past the scanner? If it's a $5,000 one, the ROSI increases to nearly 300% -- but if it's the expensive one, the ROSI becomes negative!
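This sensitivity is easy to check numerically. A minimal sketch of equation (2), using the hypothetical ViriCorp incident costs above and assuming the scanner always stops three of the four incidents:

```python
def rosi(risk_exposure, risk_mitigated, solution_cost):
    """Equation (2): ROSI = (exposure * mitigated - cost) / cost."""
    return (risk_exposure * risk_mitigated - solution_cost) / solution_cost

incidents = [5000, 5000, 5000, 85000]   # four incidents, average $25,000
exposure = sum(incidents)                # $100,000 annual risk exposure
cost = 25000                             # scanner price

# Headline figure: 75% of incidents (by count) assumed mitigated.
print(f"Average-case ROSI: {rosi(exposure, 0.75, cost):.0%}")  # 200%

# Sensitivity: the result depends on *which* incident slips past the scanner.
for missed in (5000, 85000):
    mitigated = (exposure - missed) / exposure  # fraction of dollars stopped
    print(f"Missed ${missed:,} incident -> ROSI {rosi(exposure, mitigated, cost):.0%}")
```

Missing a $5,000 incident leaves 95% of the exposure mitigated (ROSI 280%); missing the $85,000 incident leaves only 15% mitigated (ROSI -40%).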
Coming up with meaningful values for the factors in the ROSI equation is no simple task. At the time of writing, there is no "standard" model for determining the financial risk associated with security incidents. Likewise, there are no standardized methods for determining the risk-mitigating effectiveness of security solutions. Even methods for figuring out the cost of solutions can vary greatly. Some only include hardware, software and service costs, while others factor in internal costs, including indirect overhead, and long-term impacts on productivity.
There are techniques for quantitatively measuring risk exposure, but the results tend to vary in accuracy. For most types of risk, the exposure can be found by consulting actuarial tables built from decades of claims and demographic statistics. Unfortunately, similar data on security risk does not yet exist. Furthermore, the variability in exposure costs can lead to misleading results when predicting based on actuarial data. In the ViriCorp example, the exposure cost is misleading -- the average cost of $25,000 doesn't reflect the fact that most incidents cost very little while some cost quite a lot.
Is there any point to calculating ROSI if the underlying data is inaccurate? Apparently so, since some industries have been successfully using inaccurate ROI metrics for decades. The advertising industry is one such example. Ads are priced based on the number of potential viewers, which is often extrapolated from circulation data and demographics. The ad buyers assume that the true number of ad viewers is directly correlated to the number of potential viewers; if the viewer base doubles, roughly twice as many people will probably see the ad. Therefore, even though they may never know the true number of viewers, ad buyers can nonetheless make informed purchasing decisions based on other, more reliable measurements.
If the method for determining ROSI produces repeatable and consistent results, ROSI can serve as a useful tool for comparing security solutions based on relative value. In the absence of pure accuracy, an alternate approach is to find consistent measurements for the ROSI factors that return comparably meaningful results. This task is much easier, and breaks through the barrier of accuracy that has kept ROSI in the domain of academic curiosity.
KEY POINT: Repeatable and consistent metrics can be extremely valuable -- even if they're "inaccurate".
2.1. Quantifying Risk Exposure
A simple analytical method of calculating risk exposure is to multiply the projected cost of a security incident (Single Loss Exposure, or SLE) by its estimated annual rate of occurrence (ARO). The resulting figure is called the Annual Loss Exposure (ALE).
While there are no standard methods for estimating SLE or ARO, there are actuarial tables that give average statistical values based on real-world damage reports. These tables are created from insurance claim data, academic research, or independent surveys.
Risk Exposure = ALE = SLE * ARO   (4)
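As a quick sanity check, plugging the ViriCorp numbers from section 2 into equation (4) recovers the $100,000 risk exposure figure:

```python
def annual_loss_exposure(sle, aro):
    """Equation (4): ALE = SLE * ARO."""
    return sle * aro

# ViriCorp: $25,000 average incident cost, four incidents per year.
print(annual_loss_exposure(25000, 4))  # 100000
```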
It's very difficult to obtain data about the true cost of a security incident (the SLE). This is because few companies successfully track security incidents. Security breaches that have no immediate impact on day-to-day business often go completely unnoticed. When a breach does get noticed, the organization is usually too busy fixing the problem to worry about how much the incident actually costs. After the disaster, internal embarrassment and/or concerns about public image often result in the whole incident getting swept under the rug. As a result of this "ostrich response" to security incidents, the volume of data behind existing actuarial tables is woefully inadequate.
Currently, the "best" actuarial data comes from efforts such as the annual survey of businesses conducted by the Computer Security Institute (CSI) and the U.S. Federal Bureau of Investigation (FBI). The businesses are asked to estimate the cost of security incidents in various categories over the course of a year. Unfortunately, the methods used to calculate these costs vary from business to business. For example, one business might value a stolen laptop at its replacement cost. Another might factor in the lost productivity and IT support time, and yet another might factor in lost intellectual property costs. As a result, some businesses value a laptop theft at $3,000; others put it down as $100,000+. The final number is more likely to be influenced by business factors (how much will insurance reimburse, what are the tax implications, what impact will a large loss have on the stock price) than by financial reality.
For the purposes of ROSI, the accuracy of the incident cost isn't as important as a consistent methodology for calculating and reporting the cost, as previously discussed. It would be quite challenging to get companies to agree upon a standard technique for tabulating the internal cost of a security incident. Therefore, the focus must be on cost factors that are independently measurable and directly correlate to the severity of the security incident.
One potentially significant cost is the loss of highly confidential information. In organizations valued for their intellectual property, a security breach resulting in theft of information might create a significant loss for the business yet not impact productivity. The cost of a security incident in this case is the estimated value of the intellectual property that is at risk, using industry-standard accounting and valuation models. For most industries, analysts are already externally measuring this value. If an organization doesn't already estimate the value of its IP assets, it probably doesn't need to consider this cost.
Another significant cost is the productivity loss associated with a security incident. For many organizations the cost in lost productivity is far greater than the cost of data recovery or system repair. Security can be directly connected to an organization's financial health by including lost productivity in the cost of a disaster. This approach automatically forces security projects to improve business efficiency and eliminates those projects justified solely by fear of the unknown.
Lost productivity can have a serious impact on the bottom line. Just ten minutes of downtime a day per employee can add up to a significant amount pretty quickly, as illustrated in Table 1.
Table 1: Lost Productivity Adds Up
1000 employees * 44 hours/year security-related "downtime" * $20 per hour average wage = $880,000 per year in lost productivity
Whether an organization uses lost productivity, intellectual property value or a combination of both as a measurement of risk exposure depends on whether it's more worried about theft of data, availability of data, or both. Professional service firms such as law and accounting firms tend to be more sensitive toward data availability -- if they can't access critical files they can't bill effectively. This directly impacts the bottom line. R&D-intensive organizations such as biotech labs will be much more concerned about data theft -- the information might enable a competitor to gain an edge on time-to-market. The disaster spectrum diagram below further illustrates this concept.
Analysts and accountants can provide consistent valuations of intellectual property, but how can lost productivity be measured? Internally, productivity is often measured using a combination of performance appraisals and profit/loss metrics. The problem with this approach is that isolating security's impact on productivity from other factors (such as poor performance) is impossible. Technical measurements of system downtime are also not adequate, since system downtime is only relevant when it prevents someone from doing their job. An hour of server downtime at 3am usually doesn't have a significant impact on productivity. It's much more important to measure the end-user's perception of downtime, since this directly corresponds to their productivity. Measuring employee perception of downtime can be accomplished with a survey. If the survey is correctly constructed, there will be a strong correlation between the survey score and financial performance. Specifically, if a department shows a decrease in perceived downtime, it should also show an increase in productivity on the internal balance sheets.
A good survey will ask the employees questions that have coarse quantitative answers, or answers that imply a quantitative value. For example, one question might be, "How much spam do you receive each day?" The employee might have to choose between four answers: less than 10, 10-30, 30-50 or more than 50. Average minutes of downtime can be associated with each answer. For example, dealing with 30-50 spam messages per day can cause up to ten minutes of downtime, especially if it's hard to tell the difference between spam and desired messages.
The key to getting consistent results from a survey that measures employee perception is to ensure that the questions are quantitative, clear and answerable without too much thought. For example, a bad question would be "Estimate the amount of downtime you had this month," since few people could answer this without logging every event as it happens. A better question is to ask, "How often is the fileserver unavailable for more than 10 minutes (daily, weekly, monthly, rarely)?" A person who experiences weekly fileserver problems is unlikely to put down "daily" unless the problem is extremely frequent.
Once the survey answers are scored, the result will be an indication of monthly downtime. This can be converted into a dollar amount of lost productivity by using salaries expressed as hourly rates. For example, if the average salary for a department is $75/hour and the average downtime is 30 hours per month, then the company is losing $2,250 in non-productive time per employee due to security-related issues. In a professional service firm, the employees might also generate revenue. The hourly billable rate multiplied by the revenue realization rate and the monthly downtime gives an additional quantification of lost revenue opportunity. Tuning the productivity survey so that the calculated loss exhibits stronger correlation with internal financial measurements of profit and loss can increase accuracy.
KEY POINT: With a good survey and scoring system for productivity, combined with external measurements of intellectual property value, it becomes possible to quantify risk exposure in a repeatable and consistent manner.
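The scoring step above can be sketched in a few lines. The answer-to-minutes mapping and the 22-workday month are illustrative assumptions, not part of any published scoring standard:

```python
# Hypothetical mapping from a survey answer to estimated daily downtime minutes.
SPAM_MINUTES = {"<10": 2, "10-30": 5, "30-50": 10, ">50": 15}

def monthly_downtime_cost(daily_minutes, hourly_rate, workdays=22):
    """Convert perceived daily downtime into a monthly dollar loss per employee."""
    hours_per_month = daily_minutes * workdays / 60
    return hours_per_month * hourly_rate

# Example from the text: 30 hours/month of downtime at $75/hour.
print(30 * 75)  # 2250 -- $2,250 lost per employee per month

# Survey-driven estimate: an employee reporting 30-50 spam messages per day.
print(monthly_downtime_cost(SPAM_MINUTES["30-50"], 75))  # 10 min/day -> ~$275
```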
A downtime assessment can provide a post-mortem analysis of lost productivity during a security incident. The loss measured can be used when calculating the ROI of security solutions designed to prevent similar problems in the future. Unfortunately, there has yet to be a study combining such analyses into an actuarial table associating productivity loss with particular security incidents. This means that unless a particular incident has already happened to an organization, it can't rely on commonly available statistics for estimating loss.
It is possible to use a downtime assessment to estimate the productivity loss associated with an incident that hasn't yet happened. If an organization wanted to predict the impact of a virus, it might conduct a downtime assessment to gain a baseline measurement of productivity. It would then take the assessment results and vary the responses to questions dealing with lost data, bandwidth issues, etc. The result would be a range of potential productivity loss, which could be used to calculate a maximum and minimum ROI for a solution preventing a virus outbreak. A useful tool for this type of analysis is a Monte Carlo simulation, which automates the process of varying a number of factors at the same time and returns a range of potential results.
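The Monte Carlo step might look like the following sketch. The distributions here are invented for illustration; in practice they would come from the perturbed survey responses described above:

```python
import random

def rosi(exposure, mitigated, cost):
    # Equation (2): (exposure * mitigated - cost) / cost
    return (exposure * mitigated - cost) / cost

def simulate_rosi(trials=10000, cost=25000, seed=42):
    """Vary incident cost, frequency and mitigation together;
    return the resulting range of possible ROSI values."""
    rng = random.Random(seed)  # fixed seed -> repeatable results
    results = []
    for _ in range(trials):
        sle = rng.triangular(5000, 85000, 25000)  # per-incident loss ($)
        aro = rng.randint(2, 6)                   # incidents per year
        mitigated = rng.uniform(0.6, 0.9)         # fraction of loss stopped
        results.append(rosi(sle * aro, mitigated, cost))
    return min(results), max(results)

low, high = simulate_rosi()
print(f"ROSI range: {low:.0%} to {high:.0%}")
```

The minimum and maximum give exactly the "maximum and minimum ROI" bracket the text describes; with these assumed distributions the range spans from a negative return to a strongly positive one.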
Another useful application of a downtime assessment is examining the general impact of security on organizational productivity. Minor, everyday security breaches and technology failures can cause significant productivity loss when aggregated over time. Table 2 (below) shows just a handful of factors that can eat up a few minutes here and a few minutes there. The average company will generally have at least five of these problems, which accounts for an hour of downtime per day. The Return on Security Investment equation takes on a new meaning if everyday productivity loss is used as the risk exposure figure. The implication is that a secure organization will have fewer minor breaches and technology failures, and therefore less lost productivity. The risk due to a major breach is ignored. This completely sidesteps the problem of calculating ROSI for an event that might not happen by focusing on problems that are constantly happening. If a security solution can improve overall security while eliminating some of these problems, it will actually have a positive ROSI, even if it never stops a serious incident.
KEY POINT: There are a number of ways in which lost productivity can provide a meaningful estimate of risk exposure, any of which can be used to calculate ROSI.
Table 2: Potential Daily Causes of Lost Productivity [1]

Problem                                            Average Downtime (minutes)
Application and System related crashes             10
Email Filtering, Sorting and Spam                  15
Bandwidth Efficiency and Throughput                10
Inefficient and ineffective Security Policies      10
Enforcement of Security Policies                   10
System related rollouts and upgrades from IT       10
Security patches for OS and applications           10
Insecure and Inefficient Network Topology          15
Viruses, Virus Scanning                            10
Worms                                              10
Trojans, Key logging                               10
Spyware, System Trackers                           10
Popup Ads                                          10
Compatibility Issues - Hardware and Software       15
Permissions based Security Problems (User/Pass)    15
File System Disorganization                        10
Corrupt or inaccessible data                       15
Hacked or stolen system information and data       15
Backup / Restoration                               15
Application Usage Issues                           15
Total Time                                         240
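As a rough illustration of how Table 2's everyday losses become a risk exposure figure, suppose (hypothetically) a firm suffers five of the listed problems, totalling about an hour per employee per day as the text suggests, with the 1,000-employee, $20/hour workforce of Table 1 and an assumed 250-workday year:

```python
# Everyday downtime as risk exposure: five assumed Table 2 problems.
daily_minutes = 10 + 15 + 10 + 10 + 15          # one hour per employee per day
employees, wage, workdays = 1000, 20, 250       # Table 1 firm; 250 days assumed

annual_hours = daily_minutes / 60 * workdays    # per employee
exposure = annual_hours * wage * employees
print(f"{annual_hours:.0f} h/employee/year -> ${exposure:,.0f} annual exposure")
```

Under these assumptions the everyday-loss exposure dwarfs the cost of most point solutions, which is why a solution that trims even a few of these minutes can show a positive ROSI without ever stopping a major incident.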
2.2. Quantifying Risk Mitigated
Determining the risk-mitigating benefits of a security device is as difficult as measuring risk exposure. Most of the problems stem from the fact that security doesn't directly create anything tangible -- rather, it prevents loss. A loss that's prevented is a loss that you probably won't know about. For example, a company's intrusion detection system might show that there were 10 successful break-ins last year, but only 5 this year. Was it due to the new security device the company bought, or were there just 5 fewer hackers attacking the network?
[1] Based on aggregate SecureMark results and analysis

What is the amount of damage that might occur if a security solution fails? While a few breaches may be the result of direct attacks by those with harmful or criminal intent, most are not intentionally malicious -- they're the result of automated programs and curious hackers. Significant damage, while rarely intended by the hackers, is nevertheless a possibility. This damage is not just confined to systems and data -- serious incidents can lead to a loss in customer/investor confidence. The following argument has been used to justify a simple, fixed percentage for risk mitigation:
• A security solution is designed to mitigate certain risks.
• If the solution is functioning properly, it will mitigate nearly 100% of those risks (85% to be conservative).
• Therefore, the amount of risk mitigation is 85%.
Unfortunately, there are a number of serious problems with this "logic":
• Risks are not isolatable -- a well-locked door mitigates 0% of risk if the window next to it is open.
• Security solutions do not work in isolation -- the existence and effectiveness of other solutions will have a major impact.
• Security solutions are rarely implemented to be as effective as possible, due to unacceptable impact on productivity.
• Security solutions become less effective over time, as hackers find ways to work around them and create new risks.
A better approach is to conduct a security assessment and "score" the assessment based on some consistent algorithm. This score can represent the amount of risk currently being mitigated. By evaluating risk mitigation within the context of the network's overall security, the two problems of isolation mentioned above are avoided. A good assessment will also capture the impact of implementation choices made for the sake of usability and productivity. Likewise, a good scoring algorithm will factor in the time impact on solution effectiveness.
When evaluating a security solution, the assessment can be conducted as if the solution were already in place. The difference between this score and the actual score is the amount of risk being mitigated due to the solution. When calculating ROSI, the predicted score (not the difference) should be used as the overall risk mitigation.
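A small sketch of how assessment scores would feed equation (2) under this approach. The scores themselves are hypothetical placeholders for whatever scored assessment is in use:

```python
def rosi_from_scores(current_score, predicted_score, exposure, cost):
    """Use the predicted post-solution score (0..1) as overall risk
    mitigation, per the text -- not the score delta."""
    uplift = predicted_score - current_score  # mitigation added by the solution
    rosi = (exposure * predicted_score - cost) / cost
    return uplift, rosi

# Hypothetical assessment: 55% mitigated today, 70% with the new solution.
uplift, rosi = rosi_from_scores(0.55, 0.70, exposure=100000, cost=25000)
print(f"Solution adds {uplift:.0%} mitigation; ROSI = {rosi:.0%}")
```

Because the predicted score is evaluated against the whole network rather than the device in isolation, two solutions can be compared simply by comparing the ROSI each predicted score produces.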
The accuracy of the score as a measurement of mitigated risk is dependent on the quality of the assessment and scoring algorithm. Following assessment guidelines published by standard-setting groups such as the Information Security Forum (ISF), the National Institute of Standards and Technology (NIST), and the International Organization for Standardization (ISO) will lead to the creation of good assessments. Artificial Neural Networks can be used to create particularly good scoring algorithms, the details of which will be discussed in a forthcoming paper.
KEY POINT: Even with an inaccurate scoring algorithm, using a scored assessment as a method of determining risk mitigation is effective because the scores are repeatable and consistent, and therefore can be used to compare the ROI of different security solutions.
2.3. Quantifying Solution Cost
By this point, it should be apparent that the cost of a solution is not just what's written on its price tag. At the very least, the internal costs associated with implementing the solution also need to be taken into consideration. But this is also not enough. Once again, productivity is going to rear its ugly head and demand accountability.
Productivity is important because security almost always comes at the cost of convenience. Most security solutions end up creating hurdles that employees need to jump over in order to do their jobs. Depending on the size and frequency of these "hurdles", the lost productivity cost can seriously add up. Table 3 shows how time can easily be lost due to problems actually created by the very solutions designed to fix other security problems:

Table 3: Productivity Loss Due to Security Solutions

Problem                                            Average Downtime
Application and System related crashes             10 mins
Bandwidth Efficiency and Throughput                10 mins
Over-restrictive Security Policies                 10 mins
Enforcement of Security Policies                   10 mins
System related rollouts and upgrades from IT       10 mins
Security patches for OS and applications           10 mins
Trouble Downloading Files Due to Virus Scanning    10 mins
Compatibility Issues - Hardware and Software       15 mins
Too Many Passwords/Permissions Security Problems   15 mins
It is also possible for a security solution to increase productivity. This happens when a side effect of the solution happens to eliminate other significant problems that were hampering productivity. For example, implementing a firewall might require a network restructuring. The new structure might solve serious bandwidth problems that were previously creating extensive downtime.
This productivity impact can be measured by re-running the productivity surveys used to estimate risk exposure. The given answers are adjusted to assume that the solution has been put into place. The difference between the current and projected productivity is the impact factor that needs to be included in this calculation.
Let's factor productivity into our earlier example with ViriCorp's virus scanner. We can see that if the cost of the solution exceeds $60,000, the ROI is 0% and therefore it's not worth purchasing. Assuming the full cost of the system remains at $30,000, there's a margin of $30,000. For 100 employees earning an average of $20/hour, that margin equates to 3.5 minutes per day of downtime. If implementing the virus scanner creates more than 3.5 minutes of downtime each day, it's more cost effective to not purchase the scanner. On the other hand, if the scanner can eliminate downtime by minimizing the impact of viruses, it could make the scanner quite attractive in terms of ROI.
KEY POINT: The cost of a solution must include the impact of the solution on productivity, since this number is often large enough to make or break the viability of a given solution.
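The break-even downtime figure above can be reproduced with a little arithmetic. The text does not state a working-day count, so the 250 workdays/year here is an assumption (which lands at roughly 3.6 minutes, close to the paper's 3.5):

```python
def breakeven_minutes_per_day(margin, employees, hourly_wage, workdays=250):
    """Daily per-employee downtime that would consume a solution's margin."""
    cost_per_minute = employees * hourly_wage / 60  # company-wide $ per minute
    annual_minutes = margin / cost_per_minute        # minutes the margin buys
    return annual_minutes / workdays

# ViriCorp: $30,000 margin, 100 employees at $20/hour.
print(f"{breakeven_minutes_per_day(30000, 100, 20):.1f} minutes/day")
```

If the scanner's usability hurdles cost each employee more than this many minutes a day, the solution destroys more value than it protects.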
2.4. Taking A Long-Term View
For long-term investments, most financial professionals will want to factor in the time-value of money. The money spent on the investment is money that could have been invested in other places. For example, imagine that you must choose between two functionally equivalent solutions where one costs $100,000 up-front, and the other $50,000 per year for two years. Both solutions ultimately cost $100,000. But the second solution is preferable because you can invest the other $50,000 in something else for a year. The true cost of the second solution is actually less than $100,000 when the investment potential is factored in. This "adjusted" cost is called the Net Present Value (NPV).
One of the important factors in calculating Net Present Value is the "discount rate" -- the estimated rate of return that you could get by putting the money in some other form of investment. Another interesting piece of information can be obtained by figuring out what discount rate is necessary to result in an NPV of zero. This is called the Internal Rate of Return (IRR) and basically tells you what rate the investment is effectively earning. In general, having an IRR above the discount rate is a good sign.
In most cases, Net Present Value and the Internal Rate of Return are better indicators than a simple Return on Investment calculation. But if you can't accurately predict the timing or magnitude of the costs and benefits over the lifetime of the investment, you will get misleading results. To illustrate the problem, let's look at the NPV and IRR of a $10,000 network security device. In the first example, the device prevents a $50,000 disaster in the fifth year after it's installed. In the second example, the same disaster is prevented during the first year:

     Rate   Cost     Y1     Y2     Y3     Y4     Y5      NPV      IRR   ROI
#1   0.05   -10000   0      0      0      0      50000   $27,786  38%   400%
#2   0.05   -10000   50000  0      0      0      0       $35,827  400%  400%

Unfortunately, nobody can predict when a security device will prevent a problem. As a result, one solution is to spread the savings out across the predicted lifetime of the device. You could also "front-load" the savings, under the assumption that the device will be most effective at the beginning of its life, and lose effectiveness as the years progress and hackers figure out how to bypass the device:

     Rate   Cost     Y1     Y2     Y3     Y4     Y5      NPV      IRR   ROI
#3   0.05   -10000   10000  10000  10000  10000  10000   $31,709  97%   400%
#4   0.05   -10000   17500  15000  10000  5000   2500    $33,316  153%  400%

The problem with using Net Present Value for security investments is that accuracy is quite critical to obtaining comparatively meaningful results. While ROSI doesn't factor in the time value of money, it can at least provide comparable figures with inaccurate (but consistent) data. This may be a case where it's better to be meaningful than precise.
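The figures in the tables appear to follow the spreadsheet-style NPV convention in which every cash flow, including the initial cost, is discounted from the end of period 1. A sketch under that assumption, with IRR found by bisection:

```python
def npv(rate, cashflows):
    """Spreadsheet-style NPV: flow t is discounted by (1 + rate)^(t + 1)."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    """Bisect for the rate where NPV crosses zero (assumes one sign change,
    with NPV decreasing as the rate rises -- true for cost-then-savings flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return lo

scenario_1 = [-10000, 0, 0, 0, 0, 50000]   # disaster averted in year 5
scenario_2 = [-10000, 50000, 0, 0, 0, 0]   # disaster averted in year 1

print(round(npv(0.05, scenario_1)))  # ~27787, matching table row #1
print(round(npv(0.05, scenario_2)))  # ~35828, matching table row #2
print(f"{irr(scenario_1):.0%}")      # ~38%
print(f"{irr(scenario_2):.0%}")      # ~400%
```

Both scenarios have the same 400% simple ROI; only NPV and IRR reveal how much the timing of the prevented loss matters.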
2.5. Putting It All Together: The SecureMark System
The research and theories put forth in this article are not the result of academic study -- they are the foundation and result of a business venture. SageSecure was founded with the goal of enabling businesses to financially justify their security spending. After studying many different theoretical models and finding no standard practical models, we decided to develop our own. After a year of development and successful field use, we believe that our system is on the right track.
The SecureMark system is a real-world implementation of the concepts put forth in this article. Its goal is to provide a trustworthy standard for security benchmarking, one that produces consistently repeatable results that are strongly correlated to financial performance. SecureMark scores can truly be used to compare security expenditures based on meaningful Return on Security Investment calculations. Our scoring model is constantly improving and approaching its ultimate goal of providing meaningful, accurate and consistent results.
SecureMark's assessment surveys are based on NIST and ISF standards. All major areas recommended by these standards are covered by questions found in the SecureMark survey. There is even the ability to provide an alternate scoring that quantifies compliance with NIST and ISF recommendations. This is not a standard focus of SecureMark, however, since we believe that 100% compliance with NIST and ISF does not necessarily equate to ideal security, and would certainly create serious productivity issues in most organizations. We believe that specific compliance goals are dependent on the industry and size of an organization. Achieving 95% compliance with a standard is not impressive if the missing 5% is in areas of critical importance.
A particularly unique approach taken by SecureMark is its focus on productivity. Risk exposure is measured as the productivity loss due to existing security issues. Solutions are presented that minimize this loss and therefore provide instantly realizable returns, as opposed to returns that only happen if the security solution prevents a major disaster. Our assumption is that serious disasters are rare and hard to quantify, but everyday incidents create a significant amount of aggregate loss. Solving these problems provides real returns and improves security at the same time, which has the side effect of preventing some of those major disasters. That said, SecureMark could also be used to measure the productivity loss due to a major disaster. This figure can be used as an especially accurate risk exposure figure when comparing the return on security investment of preventative solutions for that particular type of incident. Either way, productivity is a critical factor and is the cornerstone of SecureMark's analysis.
Not only is productivity a major factor in calculating risk exposure, but it's also a significant factor in the cost of a solution. Security solutions can have a positive, negative or neutral influence on organizational productivity. This influence can be significant, and must be factored into the cost of the solution. SecureMark can estimate the impact a given solution will have on overall productivity. This impact is factored in when prioritizing underlying problems and their respective solutions.[2]
The resulting SecureMark scorecard gives all the factors necessary to calculate the Return On a Security Investment: Risk Exposure expressed in dollars of lost productivity, and the percentage of risk currently mitigated expressed as a SecureMark Score. The analysis indicates the top problems, prioritized by their impact on risk exposure and lost productivity. Likewise, the solutions presented are selected based on their predicted ability to mitigate risk and minimize lost productivity.
In a few years, the data accumulated by SecureMark will allow an unprecedented amount of accuracy in its scoring and analysis. For now, we have not yet collected enough data to begin eliminating subjectivity from SecureMark's scoring and analysis. That said, our system is still consistent, which allows for meaningful comparison of solutions. It also allows for meaningful industry comparisons -- a company can tell if its score is above or below the industry average. Until the system can automatically provide accurate results, SageSecure security experts review all scores and analyses to ensure consistency and accuracy. The result is the only automated, repeatable and consistent ROSI benchmarking system available to date.
3. CONCLUSION
In this paper we've presented an analysis of the problem of determining a meaningful Return on Security Investment for security expenditures. We presented a model for calculating ROSI, and then showed how the various factors could be obtained. Some unique approaches to measuring Risk Exposure and Risk Mitigation were explored, specifically those that focused on lost productivity as a critical factor. The importance of factoring productivity into both exposure and solution cost was stressed. The suitability of using Net Present Value in this context was explored, and a real-world implementation of the entire model (SecureMark) was examined.
We hope the concepts discussed in this paper will encourage further research into the connection between productivity and security. We feel that this is one of the most promising areas in which a strong connection can be made between security and financial performance. The authors are reachable for comment and discussion at:
[2] It might appear that the productivity impact of a security solution is getting factored in twice: once because Risk Mitigated * Risk Exposure gives a dollar figure for productivity savings, and a second time when factored into the cost. These are actually two different ways in which productivity affects ROSI. The first shows that any security improvement will minimize the chance of productivity-draining incidents, and therefore reclaims some lost productivity, proportional to the increase in risk mitigation. The second is the impact that the solution itself will directly have on productivity loss. For example, implementing a spam filter will marginally improve overall security by stopping a number of different email-borne threats. This will impact overall productivity by minimizing downtime due to these threats, and this impact will be captured by the increase in risk mitigation. The spam filter may also save employees up to 15 minutes per day by improving their email usage efficiency. Factoring the productivity impact into the cost of the solution will capture this gain. In some cases there is a small amount of overlap between the two influences, but this is generally inconsequential and can be further minimized by adjusting the scoring system.