This article was translated by the author from the original Dutch as it appeared in the magazine of the Platform for Information Security.
Trust is considered a good characteristic. Trust means someone or something is highly reliable. But because trust can be abused, from a security point of view it is a bad characteristic.
People usually trust their gut feelings when deciding whether or not to trust something. This makes it virtually impossible to state how trustworthy something or someone is.
- On the website of a web shop you read comments from others who already bought the item. If so many people write about their positive experiences, the item just has to be good.
- By adding subtle phrases to an article, like “Brain scans showed …”, readers experience a dramatic increase in the perceived reliability of the article.
Most research on trust is about how we experience trust. Reliability is analyzed the way wine tasters rate wine: the results are judged by the subjective taste and prejudices of a group of experts who represent our interests. We do not always elect those experts; they often consult others, and their conclusions are forced upon us.
The concept of trust is so old that you find it in proverbs and sayings:
- I trust him as far as I can throw him.
- I trust him with my money, but not with my wife.
- Freely translated from Arabic: Trust in God, but tie your camel.
- Trust is good, control is better.
Relationship therapists deal a lot with trust and distrust. The reason for distrust does not always lie in the point of dispute, but often goes back to something that happened a long time ago, sometimes in a completely different context. The end result of this approach to the perceived confidence gap is often that someone accepts that some problems cannot be controlled.
Most people think trust is an all-or-nothing characteristic: there is trust or distrust, something is reliable or unreliable. Something in between is hardly conceivable. We don’t even have an appropriate word for it in our language (Dutch).
Sometimes you can’t avoid it: you have to decide whether or not to trust someone, or something for that matter. This is called operational trust.
Approaching operational trust intuitively is similar to solving security problems intuitively. Unfortunately, most of what we understand about trust is based on experience, on how it makes us feel. Therefore we are often unable to quantify the amount of trust.
We use illogical arguments when making decisions about trust:
- if you don’t trust anyone, you’re not trustworthy yourself
- you can’t possibly trust someone who looks like that
- you can’t possibly trust someone who lives like that
- seatbelts don’t help, because a friend of mine had a fatal accident even though he was wearing one
- so many positive and so few negative comments, this movie just has to be really good
To determine trust in an objective manner, we have to get rid of the intuitive approach. In 2006 the European Union started the OpenTC project, Open Trusted Computing. During this project the reasons to trust something were investigated. After analysis it turned out that many of these reasons were based on the same principle.
After everything was ordered, 10 reasons to trust remained. These ten reasons are called the trust properties. (Unfortunately, there really are only 10. People sometimes think these are the 10 most important ones and that the others are simply not discussed, but we just have not found an eleventh reason yet.)
If you apply these trust properties to all your trust decisions, not only do you get an idea of the level of trust you can put into something, but also of what is lacking.
The 10 trust properties are explained at the end of the article. In this explanation a source and a subject are mentioned. The source is the one who has to trust the subject.
Determining the Trust Level
The trust level can be determined as follows. For each of the trust properties you determine, as objectively as possible, whether the element deserves a lot of trust (100%), a little (0%), or something in between these extremes.
E.g.: symmetry of trust is 0% if the source depends on the subject and not the other way around. If source and subject are equally dependent on each other, a 50% score is used. For transparency you can use a 75% score if there are four operational components of which three are visible and one is not.
Following this procedure you can determine a score for each of the trust properties. The average of these scores (add everything up and divide by 10) gives the level of trust you can put in the subject.
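This averaging procedure can be sketched in a few lines of Python (a minimal illustration; the identifier names and the example scores are mine, not prescribed by the method):

```python
# Score each of the 10 trust properties between 0.0 (no trust)
# and 1.0 (full trust), then average them to get the trust level.
PROPERTIES = [
    "size", "symmetry", "transparency", "control", "consistency",
    "integrity", "offset", "value_of_reward", "components", "porosity",
]

def trust_level(scores: dict) -> float:
    """Average the 10 property scores; properties without a score count as 0%."""
    if not set(scores) <= set(PROPERTIES):
        raise ValueError("unknown trust property")
    return sum(scores.get(p, 0.0) for p in PROPERTIES) / len(PROPERTIES)

# Hypothetical subject scoring 50% on every property:
example = {p: 0.5 for p in PROPERTIES}
print(f"{trust_level(example):.0%}")  # prints "50%"
```

Counting an unscored property as 0% matches the cautious spirit of the method: what you cannot assess deserves no trust yet.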
If the end result is disappointing, you can find out which trust properties can be increased. You can do that because you now know on which elements the subject is more reliable and on which it is less. You find the solutions by looking at the low scores, because these are easier to increase than a score that is already high.
To get a fast impression you can use “speed trust” with the RACA method: Remove, Apply, Count, Average.
Remove: all properties for which we cannot think of a fast way to determine them are eliminated; they score 0%. All certainties score 100%.
Apply: for all other properties you look for facts. Focus on absolute numbers, so you can do simple calculations. If you cannot determine numbers, use a low score or even 0%.
Count: count the numbers and convert them to a percentage. E.g. for transparency: if someone works 5 days a week, 8 hours a day, then you know what he or she is doing for 40 of the 168 available hours per week. The score is then calculated as 40/168 × 100 ≈ 24%.
Average: if you found multiple scores for an element, take the average.
Finally you calculate the average of all values. This is the trust level: the amount of trust the source can put in the subject.
This method is fast and the speed is achieved on expense of accuracy. Most of the times this doesn’t matter, because now you have an idea about the order of magnitude of the trust and you have insight on which points you should take measures to increase the trust.
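The RACA steps above can be sketched as a small helper (my own illustration, assuming scores are expressed in percent; a property mapped to None is one we cannot determine quickly, and properties not listed at all also count as 0%):

```python
# Minimal RACA sketch: Remove, Apply, Count, Average.
# `observations` maps a property name to a list of counted percentages,
# or to None when no fast way to determine it exists.
def raca(observations: dict, n_properties: int = 10) -> float:
    scores = []
    for values in observations.values():
        if values is None:
            scores.append(0.0)               # Remove: undeterminable -> 0%
        else:
            scores.append(sum(values) / len(values))  # Average per property
    scores += [0.0] * (n_properties - len(observations))  # unlisted -> 0%
    return sum(scores) / n_properties        # overall average

# Example using the transparency count from the text: 40 of 168 hours.
obs = {"transparency": [40 / 168 * 100], "control": None}
print(round(raca(obs), 1))  # prints 2.4
```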
Let’s apply this to an example. In the original article I used the Dutch Electronic Medical File (Elektronisch Patienten Dossier or EPD) as the example. The EPD is world famous in The Netherlands, but I can imagine that outside the Dutch borders it is not.
The EPD is an initiative for an electronic medical file for all Dutch citizens, where all medical data of all patients is accessible nationwide to all doctors, dentists, pharmacies and other health care providers.
When I wrote this article in October 2010, the system was designed and already largely built. It was even being tested on a small scale. In May 2011 the Dutch parliament nevertheless decided on a “no go” for the EPD.
1. Size
There are many health care providers adding data to the EPD and just as many who retrieve data from it. The size is big: a lot of people have to be trusted and a lot of people can abuse this trust. Therefore the score is low: 0%.
2. Symmetry of trust
The source depends on the subjects. The subject barely depends on the source in this relation. Again a low score: 0%.
3. Transparency
The number of operational parts and processes is high. The design does include some level of monitoring: the first time a health care provider accesses data, the patient is informed about this. This information is supplied after the data is accessed, and only the first time it is accessed. Nothing is known about other monitoring. Because some monitoring does exist by design, I chose a 20% score.
The data in the EPD is not accessible to the patient. On this part the EPD scores low: 0%.
4. Control
The source has no control over the subject. The requirement to permit access to the data is not counted, because only influence that can actually be exercised is taken into consideration. Control is 0%.
5. Consistency
This is a tough one. The EPD has no past yet, and there are no events that prove the reliability or unreliability of the system. Since most people act in good faith, you can assume this is also true for health care providers.
Then consistency would score high. But a health care provider also knows the EPD can contain data that can lead to a better result. Therefore there is a realistic chance the health care provider violates the rules in the interest of the patient. The health care provider will probably consider this acting in good faith.
For this calculation I tend to estimate that value high, but not the full 100%. I put it at 90%.
Please note this estimate is not objective. A more accurate estimate is perhaps possible, but that would cost time. The end value will not exceed 100% and probably will not drop below 80% either. Since this is one of ten properties being averaged, the resulting inaccuracy in the end result is less than ±1%.
6. Integrity
Even before the EPD was fully operational, there was speculation about future expansions, like access for health insurance companies, scientific research and the like. Although such usage is prohibited for strong reasons, it is realistic to assume extra areas of application will be permitted at some point in the future.
But this is only a personal expectation, and nothing of it has happened yet. If none of it ever happens, the EPD deserves a high score for integrity. I determine the end score as 50%.
7. Offset
Verdicts of the Dutch medical disciplinary tribunal are not famous for giving sufficient satisfaction or guarantees to victims of medical failure. From this point of view it cannot be expected that the subject will compensate the source, or that the subject will receive an appropriate penalty if the trust is abused. Offset becomes 0%.
8. Value of Reward
Trusting the EPD can result in good medical treatment, and thereby in good health. The reward for trust is therefore very high: 100%.
9. Components
When data is retrieved from the system, it can originate from multiple health care providers; in number, these can easily be five or more. Therefore the score for components will not exceed 20%.
10. Porosity
Porosity can be determined once the EPD is tested for operational security. Since this value is not known yet, it has to be estimated. Experience shows the value of porosity lies between 80% (for badly controlled systems) and 98% (for well controlled systems). We estimate porosity between those two values: 90%.
Trust Property          Remarks                          Example Value
1. Size                                                  0%
2. Symmetry of trust                                     0%
3. Transparency         avg. of 20% and 0%               10%
4. Control                                               0%
5. Consistency                                           90%
6. Integrity            avg. of high and low score       50%
7. Offset                                                0%
8. Value of reward                                       100%
9. Components                                            20%
10. Porosity            avg. of abt. 80% and abt. 98%    90%
The total of all scores is 360%, so the average is 36%. This is the order of magnitude of the amount of trust you can put in the EPD: 36% trustworthiness. One could translate this into 100% − 36% = 64% untrustworthiness.
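As a sanity check, the ten scores determined above can be totalled in a few lines of Python (the values are taken directly from the example):

```python
# The ten EPD property scores from the example, in percent.
epd_scores = {
    "size": 0, "symmetry": 0,
    "transparency": (20 + 0) / 2,  # average of the two observations
    "control": 0, "consistency": 90, "integrity": 50, "offset": 0,
    "value_of_reward": 100, "components": 20, "porosity": 90,
}

total = sum(epd_scores.values())
trust = total / len(epd_scores)
print(total, trust, 100 - trust)  # prints 360.0 36.0 64.0
```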
We found that a lot of people fall for fallacies when determining the trust properties. The most important fallacies are:
The first is trusting the majority. The amount of trust is determined based on the opinion of a lot of people, even if it is not certain these people are reliable or knowledgeable. It is a human characteristic to copy the behavior of others, especially if those others belong to the same group as the decision maker. But none of this makes their opinion right.
Example: smoking can’t be that bad, because the 25% of this country that smokes can’t all be wrong.
The second is a chain of trust: taking over the opinion of someone you trust yourself.
Example: your mother trusts her doctor, so you trust that doctor too.
Often the trust level has to be determined only once, for a specific case. But for a recurring business process it can be useful to set up formal rules for determining the trust level. By following these formal rules, the results of several subjects can be compared to each other, or it can be determined whether the trust level of a subject changes over time. Examples of such business processes are hiring new personnel or selecting vendors.
For each of the trust properties, rules are created to assist in determining the proper trust value. For hiring new personnel, for example, you could estimate the amount of time the new employee will work alone, without supervision, compared to the total amount of time at work. For symmetry, the ratio can be determined between the number of colleagues who have to trust the new employee and the number of colleagues the new employee has to trust.
Such trust rules have to be quantifiable, objective, and understandable for normal humans (not just for security people). Furthermore, they must result in a percentage of trustworthiness. It is also necessary that they can be verified in a specific and objective manner.
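Such rules for the hiring example could be sketched as small functions that turn verifiable facts into a percentage (the formulas below are my own illustration, not part of the method):

```python
# Hypothetical trust rules for hiring: each rule converts observable,
# verifiable facts into a trustworthiness percentage.

def transparency_rule(supervised_hours: float, total_hours: float) -> float:
    """Share of working time spent under supervision (visible to others)."""
    return 100.0 * supervised_hours / total_hours

def symmetry_rule(colleagues_trusting: int, colleagues_trusted: int) -> float:
    """Ratio between colleagues the new employee has to trust and
    colleagues who have to trust the new employee (capped at 100%)."""
    if colleagues_trusting == 0:
        return 100.0  # nobody needs to trust the employee
    return min(100.0, 100.0 * colleagues_trusted / colleagues_trusting)

# Example: 10 of 40 weekly hours supervised; trusts 4 colleagues, trusted by 8.
print(transparency_rule(10, 40))  # prints 25.0
print(symmetry_rule(8, 4))        # prints 50.0
```

Rules written this way satisfy the requirements above: they are quantifiable, objective, verifiable, and they yield a percentage.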
This approach to trust, distrust, reliability and unreliability is totally different from what we are used to, so this manner of doing a trust audit takes some getting used to. But the result is worth the effort: finally there is a way to objectively determine the amount of trust you can put into something or someone.
And it is about time. More and more companies depend on third parties for running their core business. Think for instance of recent trends like SaaS, IaaS, PaaS, DaaS and cloud computing, but also of things we have been used to for some time, like outsourcing and offshoring.
Although trust audits still have to prove themselves, we can expect added value from them. The trust audits themselves, and especially setting up trust rules, give more certainty than what we had until now: a “good feeling” about a vendor, a non-disclosure agreement and the like. Even auditors will frown less on all this external involvement, because trust audits support their motto: trust is good, control is better.
The 10 Trust Properties
1. Size
Size is about the number of subjects to be trusted. Does the source have to trust only one subject, or more? And do those subjects depend on trusting third parties themselves? The bigger the size, the bigger the trust has to be and the more controls are needed.
2. Symmetry of Trust
This is about the direction of the trust. Do you depend on someone else, or does someone depend on you? If you don’t depend on someone, there is no reason to distrust. With mutual dependency, the other party has to take into account the consequences of abusing the trust.
3. Transparency
Transparency is about the visibility of all operational elements and processes of the subject in its environment, not necessarily the visibility of the subject itself. If the source knows the subject can be visible to third parties every now and then, the trust can already increase. The visibility is taken into consideration as far as it is relevant to the trust, like during business hours or at specific locations.
4. Control
Control is the amount of influence the source can exercise over the subject. Often this influence is limited in time, like a work relationship (manager/employee) or imprisonment (guard/inmate). During an audit, only the amount of influence actually exercised is counted; the mere possibility to control the subject is not.
5. Consistency
Consistency is about historical evidence of compromise or corruption of the subject. Past behavior of the subject can be an indication of future behavior. How often did a subject violate the trust in the past?
Don’t only count the instances that prove unreliability; also count the facts that prove reliability. Pay attention to frequency as well, and don’t focus on absolute numbers only. Is the behavior increasing or decreasing? Is there a correlation with other events?
6. Integrity
Integrity tells how the behavior of the subject changes over time. It is normal that everything changes over time, so pay attention to indications of changing behavior of the subject. These indications can be indirect. If the changes are brought to your attention by third parties, take the trustworthiness of those third parties into consideration.
7. Offset
Offset requires sufficient guarantees of financial compensation to the source, or a fine for the subject, when the trust is violated.
8. Value of Reward
The financial profit or value of the reward is high enough to compensate for the risk of trusting. This is about personal benefit for the source, not about a means of forcing or punishing the subject, as is the case with offset.
9. Components
Components are the elements that provide services to the subject and on which the subject depends. Even if a computer is completely reliable, this is not necessarily true for the data, the power supply, the user input, etc.
10. Porosity
Porosity is the state of security of the subject. It shows the balance between all the access, security controls and limitations.
About the Author
Cor Rosielle works as a security consultant at Lab106, a sister organization of Outpost24 Benelux BV and affiliate partner of ISECOM (Institute for Security and Open Methodologies). The Trust Metrics are the result of an ISECOM project. Cor Rosielle is one of the first people to be certified by ISECOM as a Trust Analyst.