No one wants to be a proverbial guinea pig, least of all developers who give their time and energy to make the world a better place. You would think that, with all the recent discussion about consent, researchers would look more closely at ethical boundaries. Yet a group of researchers from the University of Minnesota not only crossed the line, but crossed it defiantly.
In response, the Linux Foundation, which sits at the core of the open source community, took the unprecedented step of banning the entire University of Minnesota from contributing to the Linux kernel. The open source community is built on the principles of trust, cooperation and transparency. This community donates valuable industry time and skills to create, maintain and improve free, widely adopted software in the interest of making technology more accessible. Linux is a widely used operating system found in everything from servers to mobile phones.
However, a group of researchers abused the trust of this community by not only introducing vulnerabilities into the code base, but effectively bragging about it in the name of research. In February 2021, a UMN team published a research paper explaining how they systematically and stealthily introduced vulnerabilities into open source software. They did so through patches that appeared beneficial but actually introduced critical vulnerabilities. While they claimed the research was aimed at open source as a whole, much of the researchers' attention was directed at the Linux kernel. The kernel is the foundation of the operating system and manages the interactions between hardware and applications.
"Experiments" like this, conducted without informed consent, violate ethical principles that even the most novice cybersecurity professionals learn. Worse, after publishing the paper, the researchers continued this non-consensual testing until they were publicly called out on the Linux kernel mailing list, where reviewers had noticed that numerous bad patches were still coming in. When confronted, the researchers dismissed the concerns, stating that the code submissions came from a static analyzer they were still developing.
The kernel maintainers responded, noting that the patches "obviously were not created by a static analysis tool of any intelligence, as they are all the result of totally different patterns and obviously don't even fix anything."
In light of what appears to be blatant deception and an unwillingness to take responsibility, the kernel maintainers had no choice but to draw a hard line in the sand. They noted that the experiments were non-consensual and that submissions generated by experimental tools like this are normally flagged explicitly as such. This research team's history of unrepentant abuse, and the failure of escalations to their university to remedy it, left the kernel maintainers little choice. They banned future contributions from the entire university and are working to revert all previous submissions.
This whole situation revolves around ethics within the cybersecurity profession. When conducting cybersecurity research, how do you request consent when consent may alter the results? Do the ends justify the means?
The importance of ethics in cybersecurity research
While I can (and will) list ways the researchers in this situation could have behaved more ethically and still pursued their research, what is clearly missing in this case is an understanding of the crucial role that ethics plays in cybersecurity.
They had more ethical choices
One option would have been for the research team to start a new open source project that they themselves owned and managed. As project owners, they would have overseen the final commit process. This would have allowed them to inject subversive code for general review, alongside submissions from others, to see what survived the process, which was the goal of their research. Once submissions reached the final approval gate, they could have removed the bad ones and prevented anything dangerous from going into production.
Alternatively, they could have worked with the Linux Foundation to conduct this research as a controlled experiment. Obtaining the Foundation's consent would have meant that maintainers knew which submissions were subversive, allowing them to be filtered out before being published. While both options would have reduced the risk of vulnerabilities reaching a live product that people depend on, they still fall into an ethical gray area: each amounts to a social experiment on people who gave their time and skill in good faith. Either would certainly have been better than the path the researchers chose, but neither is fully ethical. Experimenting on humans or with human behavior is always a complicated proposition.
As it is, the path they chose and the reasoning behind it are reminiscent of the early days of technology, when the line between bona fide security testing and cybercriminal activity was blurred. That blurring was the impetus for legislative intervention and a code of ethics within the hacking/cybersecurity community. Ethics is the critical line that differentiates a white hat hacker from a black hat bad actor.
Consent to security
To differentiate ourselves from the criminal element and to be taken seriously, the cybersecurity community openly adopts an ethical approach to all of our activities. SANS and EC-Council publish explicit codes of this kind; the latter explicitly requires a third party's consent for certain research and investigative activities. This is vital because, even in the best of circumstances, security researchers who carry out their work responsibly and appropriately may still be forced to defend themselves against legal action. The case of the Coalfire team in 2019 demonstrated how this can happen.
Finding security vulnerabilities is essential to developing security products and processes, because the last thing anyone wants is for criminals to reveal our weaknesses through an attack. However, ignoring consent risks painting the research as cybercrime. Banks definitely want to know if there are weaknesses in their security, but they are not willing to bear the stress and expense of an incident. No one wants their security team mobilized, customer accounts locked, and systems taken offline because a random "researcher" was running tests.
This is why penetration testing companies exist. The role of a penetration tester is clearly defined in both scale and scope. When one is contracted, the parameters and limitations of the tests to be performed are clearly defined. There is even a solid ethical component throughout the education and training required to become a certified penetration tester.
As in the security field, the scientific community subscribes to a set of ethical guidelines for conducting its research. UMN specifically has an Institutional Review Board (IRB), which defines what research on human subjects is acceptable and is meant to review such studies, approving them or denying them if they do not conform to ethical guidelines.
Apparently, UMN's IRB does not consider the Linux kernel developer community to be human subjects; according to the research paper, the IRB granted the team an exemption. I'm not sure how investigating how a development team reacts to subversive behavior is not a study of humans or human behavior. Then again, my expertise is in cybersecurity for a reason. We should also consider the possibility that the UMN IRB was deceived. The fact that UMN has recently initiated an investigation seems to support this possibility.
Regardless of the IRB's decision, the researchers still knowingly chose to conduct a study on unwilling participants. This raises the question of whether they believed the final results of this research justified the questionable means of achieving them. Throughout the history of science, this has been a long-running debate. Institutions established IRBs to prevent the kinds of abuses that can occur without consent, as in cases such as the Tuskegee syphilis study or the atrocities of Holocaust-era experimentation. While this incident of non-consensual experimentation pales in comparison to those historical cases, the fact remains that it is a slippery slope from one to the other.