Summary: Public-interest vulnerability disclosure depends not only on clear rules but on the moral character of those who apply them. Virtue is not an addition to ethics but its precondition. Without honesty, restraint, and care, deontology becomes bureaucracy and consequentialism becomes opportunism. This post explores how the cultivation of virtue through self-control, courage, humility, and practical wisdom makes responsible disclosure possible. Trust, not permission, is what turns unsolicited hacking into civic service.
In the ethics of vulnerability disclosure, most debates revolve around rules and risks. We ask when it is acceptable to exploit a vulnerability, how to disclose it, and when to hold back. Frameworks such as Coordinated Vulnerability Disclosure (CVD), Threshold Deontology, and Principlism help answer these questions by balancing autonomy, fairness, harm prevention, and accountability.
Yet all of these approaches assume something more fundamental: that the people applying them already possess the judgment and restraint to do so well. Ethical decision-making in cybersecurity begins not with the rulebook but with the kind of person who opens it.
Vulnerability disclosure, especially when unsolicited, depends on trust: the trust that a researcher will act with care, that their motives are civic rather than self-serving, and that they will stop where help ends and intrusion begins. That kind of trust cannot be mandated by law or codified in policy. It comes from character, from a moral culture that values restraint as much as curiosity.
The text below explores that foundation. Virtue ethics, an older tradition often overshadowed by rules and consequences, can help us understand why public-interest disclosure works in some contexts and not in others. It shifts the question from the justification of an act to the motivation and discipline of the person performing it.
From Rules to Character
Virtue ethics does not begin with isolated actions but with habits of moral excellence. Rather than prescribing what to do, it asks who we should become and how repeated practice forms reliable judgment. In Aristotle’s words, virtue is a cultivated disposition to choose the mean between extremes, guided by phronesis, or practical wisdom.
This way of thinking fits naturally with the craft of ethical hacking.
Good vulnerability disclosure is not a formula to follow but a skill to refine. It demands sensitivity to context, proportion, and consequence. Just as a craftsperson improves through careful repetition, security researchers sharpen their restraint, transparency, and fairness with each disclosure that is made, discussed, and reviewed.
Shannon Vallor argues in Technology and the Virtues that modern technological practice requires more than compliance with rules. It calls for the cultivation of technomoral virtues: moral habits that help us act well under conditions of uncertainty. Vallor identifies twelve: honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and wisdom.
Not all of these are equally central to vulnerability disclosure.
The work of securing systems depends most on honesty, self-control, justice, courage, care, humility, and practical wisdom. These virtues shape how researchers handle risk, power, and uncertainty: when to probe further, when to stop, and how to communicate what they find. Others such as civility, magnanimity, and empathy are not absent, but they belong more to the broader social negotiation of technology than to the focused ethics of discovery and disclosure.
Together, these core virtues give disclosure its moral texture:
- Honesty and justice ground communication in fairness and accuracy.
- Self-control prevents curiosity from becoming intrusion.
- Courage allows action where inaction would leave others at risk.
- Humility and care temper expertise with respect for those affected.
- And phronesis, or practical wisdom, binds them together by helping researchers recognise when a strict rule would itself cause harm.
Truly ethical vulnerability disclosure is therefore not only about applying a deontological or consequentialist view. It embodies the virtues that make those frameworks meaningful. Threshold Deontology, for instance, depends on actors who can tell when restraint turns into negligence. That kind of judgment cannot be automated or legislated. It must be cultivated through the discipline of balancing duty with care.
Virtue, in that sense, is the condition that makes ethics work at all. Without the habits of restraint, honesty, and care, deontology becomes bureaucracy and consequentialism becomes opportunism. Virtue keeps both from collapsing into moral convenience.
The Virtues Behind Vulnerability Disclosure
Virtue becomes visible through trust: the social capital that allows unsolicited disclosure to be read as civic contribution rather than intrusion, as help rather than harm. It is the reason a report from a volunteer hacker can reach a national authority instead of only a prosecutor’s desk. When researchers act with restraint, communicate clearly, and document their methods transparently, they make their motives legible. Over time, that legibility becomes a form of moral infrastructure. It gives institutions and vendors confidence that those who uncover vulnerabilities do so in service of the public, not for profit or prestige.
This dynamic is visible in places where disclosure has matured into a civic practice. In the Netherlands, for example, cooperation between the hacking community and public agencies has produced a steady accumulation of social and moral capital. Each well-handled disclosure reinforces the expectation that unsolicited research is guided by care and proportion. That expectation, in turn, lowers the perceived risk of collaboration. Ethical trust becomes procedural trust.
Still, it would be too simple to say that this success rests only on virtue. The Dutch model also reflects legal pragmatism and institutional learning. Agencies interpret good-faith intrusion charitably not only because they trust researchers’ intent but because the arrangement has proven socially efficient. In that sense, moral and institutional trust strengthen each other.
The visibility of virtue also depends on the context that surrounds it. In high-trust societies such as the Netherlands, restraint and transparency are interpreted as signs of integrity. In lower-trust contexts, such as the United States, where cooperation between researchers and the state is more contested, similar gestures are often read through contractual mechanisms like bug-bounty programs. Social capital does not replace virtue, but it shapes whether virtue is recognised as such. And in low-trust environments, individual integrity becomes even more important to bridge institutional suspicion.
When Virtue Meets Policy
Yet trust alone cannot guarantee legitimacy; laws exist for good reason. Law and policy can formalise processes, but they cannot create virtue. Article 12 of the NIS2 Directive promotes Coordinated Vulnerability Disclosure and defines expectations for the handling of notifications: it prescribes contact points, timelines, and reporting channels. Yet none of these instruments can replace the moral judgment required to act responsibly under uncertainty or without consent.
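To make the idea of a contact point concrete: one widely used building block is the security.txt convention (RFC 9116), through which an organisation publishes where and how vulnerability reports should be sent. A minimal illustrative sketch, with hypothetical values:

```
# Served at https://example.org/.well-known/security.txt
# All values below are illustrative, not a real organisation's policy.
Contact: mailto:security@example.org
Expires: 2026-12-31T23:00:00.000Z
Encryption: https://example.org/pgp-key.txt
Policy: https://example.org/disclosure-policy
Preferred-Languages: en, nl
```

A file like this lowers the threshold for good-faith notification, but, as argued above, it cannot substitute for the judgment of the person who uses it.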
Where virtue ethics explains why one should act, laws and policies provide the conditions under which such action is recognised as legitimate. Virtue without structure risks arbitrariness, while structure without virtue can produce proceduralism and distrust. The two do not always harmonise, but when they do, moral reliability sustains legal legitimacy.
The Dutch model shows how this balance can emerge. The hacking community, academic researchers, and public agencies have built a cooperative disclosure culture that functions largely on mutual trust. It works because institutions are willing to interpret good-faith intrusion as a civic contribution rather than an offence. Yet this success also hides a vulnerability of its own. In 2020, the Dutch Public Prosecution Service published a non-prosecution policy that defines proportionality and subsidiarity as requirements for ethical intrusion. Both depend on prior moral orientation, yet the model assumes that those who act in the public interest already know where the boundaries lie. In practice, that orientation is rarely taught. Well-intentioned researchers therefore make mistakes that expose them to legal liability, and such missteps can erode the general willingness to notify at all.
We cannot expect everyone to possess the same virtues or to navigate the same moral thresholds intuitively. Ethical restraint, transparency, and proportionality are learned dispositions rather than natural instincts. Cultivating virtue requires more than ethical instruction. It requires opportunities to exercise moral judgment within safe and guided contexts. Encouraging people to take initiative in vulnerability disclosure without first teaching them how to handle ethical and legal risks breeds confusion, and sometimes harm. The capacity for moral judgment does not appear automatically with technical skill.
If the Netherlands wants to maintain its reputation for responsible disclosure, it must invest not only in frameworks and contact points but in education and mentorship as well. The same institutions that rely on civic virtue must also help cultivate it. Law can protect those acting in good faith, but it should also help form that good faith through guidance and training.
Across Europe, frameworks now define how vulnerabilities should be received, but not how they should be found or reported. Without guidance for the notifying side, accountability becomes uneven and moral initiative is discouraged. A sustainable disclosure culture cannot depend solely on trust in moral instinct. It must also provide pathways for people to learn how to act responsibly before they act at all.
Virtue Before Permission
Vulnerability disclosure began as a moral exception: an act taken without invitation, justified by necessity. Over time it evolved into a structured practice, but the underlying tension remains the same. Acting in the public interest often means acting before one is formally allowed to. The legitimacy of such acts does not come from the absence of rules but from the presence of trust in those who cross boundaries carefully.
That trust is fragile. It depends on the visible presence of character, on the quiet confidence that those who discover a vulnerability will handle it with restraint, honesty, and care. Policy can protect this space, but it cannot fill it. Virtue does not justify disregard for law, but it helps interpret whether a boundary was crossed in service of the public or against it. Without virtue, permission becomes meaningless.
If disclosure is to remain a civic act rather than a criminal risk, it will require more than frameworks and points of contact. It requires societies that can recognise ethical intent, and researchers who can demonstrate it. This is what virtue before permission means: not that ethics should replace law, but that the law should trust ethics enough to leave room for judgment and intervene only when that judgment clearly strays from the public interest.
Truly ethical vulnerability disclosure begins long before a report is sent or a patch is deployed. It begins in the cultivation of moral skill. That ability, once widespread, becomes its own kind of security: a shared understanding that acting with care is not a privilege but a responsibility. Before permission, there must be virtue, and before virtue, the willingness to cultivate it.
