
Emergent technologies, divergent frames: differences in regulator vs. developer views on innovation

Abstract

Technology innovation is inherently uncertain. The risk–benefit divide for such innovation is a classical debate within scholarly literature and is often framed on a monetary scale where innovation approval is granted if benefit outweighs risk. However, such discussion leaves out a critical yet subjective vein of discussion within the innovation evaluation process: stakeholder context. Specifically, regulators and technology developers are often described as having motivations that are at odds with one another. In theory, efforts towards balancing risk and benefit for technology evaluation should be driven by relatively efficient, inexpensive, and robust methods and processes. In practice, however, technology evaluation is often expensive, slow, and of questionable quality for new and emerging technologies. Literature often frames the innovation-regulation tradeoff as a zero-sum game between regulators and developers who are inherently at odds with one another. However, we argue that such a relationship is actually worse than zero-sum and is a classic framing problem as described by Kahneman and Tversky. Specifically, the divergent frames adopted by regulators and technology developers can drastically affect their perception of risk and tolerance for further development and commercialization of a given technology. There are known and natural solutions to such problems that can smooth the path towards realizing the societal potential of emerging technologies.

Introduction

Technology innovation is a key driver of scientific and industrial development and enables us to generate new and exciting breakthroughs in fields ranging from medicine to communication. However, the process of developing technological innovation is rife with uncertainty [7]. Such innovation is generally posited to provide evolutionary or revolutionary benefits over existing conventional applications yet may also possess varying degrees of risk to humans or the environment. Differing stakeholder views regarding the risks associated with emerging technologies can create a stumbling block on the path towards attaining their benefits [1]. This paper provides an explanation for differing stakeholder views and suggests a way forward.

The risk–benefit divide is a classical debate within scholarly literature and overall project development, and it is often scaled such that innovation approval is granted if proposed benefits outweigh risks. In this task, regulatory authorities serve as a critical gatekeeper of how and whether certain innovations can enter the marketplace [12].

While this risk–benefit assessment offers a clear understanding of regulatory authority as an arbiter of technology approval and commodification, such simplified risk–benefit analyses omit a critical yet subjective obstacle within the innovation evaluation and commercialization process — stakeholder context. Variations in stakeholder context are often posited to be driven by their respective incentives (i.e., private companies seek profit whereas government agencies seek public protection, health, and stability). In addition to incentives driving decisions, the manner in which stakeholders frame any given problem surrounding emerging technology will influence decision-making. The notion that framing, in addition to incentives, can impact decision-making in the context of innovation is a direct application of Kahneman and Tversky’s Prospect Theory [10].

Specifically, regulators and technology developers are often described as having motivations that are at odds with one another [2]. Such motivations are not arbitrary in nature, as they are generally derived from the specific missions that regulators and developers respectively possess. On one hand, regulators are charged with the responsibility to safeguard the public from potentially harmful risks by allowing products into the market only when their safe use procedures have been verified. To accomplish this goal, many regulators conduct statutorily required risk analyses which emphasize unacceptable or highly uncertain threats to human or environmental health. Developers, on the other hand, seek to strengthen their organization’s capacity to survive and profit from their labor and creativity, placing emphasis upon their need to participate within the market by developing new and potentially disruptive technologies and materials that outcompete existing alternatives.

Common sense would indicate the need for balance between a generally risk-averse regulator and a profit-seeking developer in order to best represent the immediate and long-term interests of society. In theory, efforts towards balancing risk and benefit for technology evaluation should be driven by relatively efficient, inexpensive, and robust methods and processes. This logic is further supported by the widespread availability of risk assessment tools and frameworks with decades of use, as well as various other tools to evaluate technological benefit and societal implications. At the conclusion of such evaluation, regulators and decision-makers may then choose to slow or halt the process of technology commodification and adoption, a choice that may benefit developers and the general public on a case-by-case basis.

In practice, however, technology evaluation is often difficult, time-consuming, and ambiguous for new and emerging technologies like nanotechnology, synthetic biology, or artificial intelligence. In these and other examples, there is a growing gap between the rate of innovation in emerging technologies and the rate of effective regulation and governance pertaining to these technologies. While many existing legislative and regulatory instruments can reduce health risks or stave off socially undesirable outcomes, exceedingly burdensome regulation can stifle innovation by disincentivizing research into and experimentation with new commercial opportunities [4]. Literature often frames the innovation-regulation tradeoff as a zero-sum game between regulators and developers who are inherently at odds with one another. However, we argue that such a relationship is actually worse than zero-sum in that it manifests a classic framing problem as described by Tversky and Kahneman [11]. Specifically, the adversarial frames adopted by regulators and developers regarding technology assessment and evaluation neither protect local populations from potential risk nor empower societies to leverage benefits stemming from such breakthroughs, creating an environment of growing scientific uncertainty regarding what best practices and good governance for a given technology or product should be. Many emerging technologies such as synthetic biology are already at risk of falling into such a governance paradox, in which regulators lack robust quantitative insight to assess technology risk, yet developers lack the institutional support and mandate to pursue the research that would generate such missing data.

Prospect theory and the framing effect

An essential idea underlying Prospect Theory (presented as a departure from normative traditional economic theory) is that the way in which a problem is framed influences how a decision-maker evaluates the options, possible outcomes, and contingencies associated with a decision. While traditional economic theory assumes a neoliberal attitude on the part of decision-makers, who are equipped with full information and constant, rational preferences, the framing effect reveals inconsistencies in decision-makers’ preferences across decision problems [5, 11]. These inconsistencies are revealed when decision alternatives and their associated outcomes are framed and reframed in terms of certainty versus uncertainty or gains versus losses. The manner in which alternatives are framed and presented to a decision-maker can drastically affect their perception of the associated risks and benefits of a given decision problem.

At the organizational level, stakeholders face decision problems with their organization’s salient values, motivations, and missions in mind. According to Tversky and Kahneman, “ … the frame that a decision-maker adopts is controlled partly by the norms, habits, and personal characteristics of the decision-maker” [11]. It would be unsurprising for the individuals comprising an organization to hold relatively similar characteristics and daily habits, thereby reinforcing the strength of an organization’s particular mission. Organizational missions inspire the salient actions that help achieve those missions and motivations. Even when technology regulators and developers aim to resolve the same societal issue, their actionable responses often differ with respect to their missions, ultimately producing the discrepancy in how they frame the decision problem and in their preferred solutions.

The mission and values of regulatory organizations emphasize the safety and health of society, which inspires their work in protecting the public and the environment. The notion of low-risk protective mechanisms is salient to regulatory organizations, who are familiar with the benefits of current technologies (herein called “status quo technologies”) and often liable for their failures. For organizations focused on technology development, the mission to create and produce emerging technologies leads to hands-on work with these technologies. Developers are directly exposed to the benefits, both economic and societal, of deploying emerging technologies, which yields their risk-seeking frame that embraces risk at the prospect of outperforming the status quo technology. The differences in organizational frames between regulators and developers largely concentrate on certainty versus uncertainty in gains versus losses, much as proposed by Kahneman and Tversky.

For instance, given a societal problem that calls for stakeholder action, variations in whether the results of actionable responses are viewed in a positive frame (i.e., terms such as “lives saved”) or a negative frame (i.e., terms such as “lives lost”) tend to influence an organization’s preferences. Positive frames are associated with risk-averse behavior. Risk aversion leads decision-makers to choose an alternative with certain gains over an alternative that entails uncertain gambles, even if the gamble has the potential to be more beneficial than the certain gain. Negative frames are associated with risk-seeking behavior, which leads decision-makers to choose an alternative that prevents certain loss. For the specific case of technology innovation, regulators as described above would frame alternatives in terms of the positive and certain gains of the status quo technology, as the status quo has known benefits and low risks. This aligns with the general mission of most regulatory agencies, which are statutorily required to mitigate and manage risk for proposed technologies. Developers, however, recognize when status quo technologies ensure a known loss compared to the better future that they envision. Technology developers create new products that offer new market benefits while also providing new commercial opportunities from which to profit. Thus, developers would be risk seeking in how they frame an alternative, emerging technology relative to the status quo. Should an emerging technology promise to curtail the known loss of the status quo, developers see the potential risks of the emerging technology as justified.
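To make this risk-attitude reversal concrete, the sketch below (an illustration added here, not part of the original analysis) evaluates a certain outcome against a gamble of equal expected value under a stylized prospect-theory value function, using parameter estimates commonly attributed to Tversky and Kahneman’s later cumulative prospect theory work (alpha = beta = 0.88, lambda = 2.25). Described as gains, the certain option is preferred; described as losses, the gamble is preferred.

```python
# Illustrative sketch only: a stylized prospect-theory value function with
# commonly cited parameter estimates (alpha = beta = 0.88, lambda = 2.25).
ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of an outcome x measured relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

# Positive frame: a certain gain of 200 lives vs. a 1/3 chance of gaining 600
# (equal expected value). The certain gain is preferred -- risk aversion.
sure_gain = value(200)                                    # ~105.9
gamble_gain = (1 / 3) * value(600) + (2 / 3) * value(0)   # ~92.8

# Negative frame: a certain loss of 400 lives vs. a 2/3 chance of losing 600
# (equal expected value). The gamble is preferred -- risk seeking.
sure_loss = value(-400)                                    # ~-438.3
gamble_loss = (2 / 3) * value(-600) + (1 / 3) * value(0)   # ~-417.5

print(sure_gain > gamble_gain)    # True: certain option chosen in the positive frame
print(gamble_loss > sure_loss)    # True: gamble chosen in the negative frame
```

Any value function that is concave for gains and convex for losses produces the same reversal; the specific parameter values only affect its magnitude.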

In context, regulators adopt a risk-averse mindset that prefers the status quo technology and emphasizes the need to avoid losses posed by emerging technologies, regardless of the potential benefits that may be accrued as the technology develops. This phenomenon is observed in activity ranging from pharmaceutical and medical device testing to industrial chemical products to cosmetics. Such a mindset is reinforced by agencies’ regulatory requirements to evaluate the risk of products coming to market, as well as by the political fallout that such agencies would receive should they approve products with harmful side effects. Developers frame their objectives such that the near-certain benefits associated with their emerging products are emphasized. This is not to say that developers ignore or are unconcerned with risk; such developers may face a variety of liability concerns should their products induce harm to humans, animals, or the environment. However, developers frame their work by focusing on the benefits of the product or process, and subsequently, commercialization allows the company to derive profits that permit organizational growth and development.

Case study: framing and emerging technology solutions for vector control of bloodborne disease

To better illustrate how differences in framing may impact innovation development and regulation, we explore a case study of the Aedes aegypti mosquito, a species known to spread bloodborne diseases such as Zika virus and dengue fever. Recent advancements in synthetic biology have made it possible to limit the population growth of this mosquito via innovations in genetic engineering. Various engineering approaches have been proposed, ranging from limiting the viability of offspring bred from released engineered Aedes aegypti to using the bacterium Wolbachia to limit the ability of such mosquitos to carry and transmit the dengue virus [8]. Developers of these engineered mosquitos frame the innovation in terms of the additional human lives it will save, and therefore regard it as a worthy pursuit despite any potential risks inherent to the uncertain nature of an emerging technology. The innovation can reduce this mosquito population significantly in a relatively short period of time, and therefore, developers adopt a frame that embraces the potential risks in light of increasing the number of human lives saved.

Conversely, regulators may frame the problem and alternative solutions in terms of the uncertain risks that innovation may bring (i.e., uncertain threats to human health and the ecosystem). Focusing on the uncertain and potentially negative impacts of this innovation, regulators prefer to maintain existing, effective mosquito control methods and technologies, such as distributing mosquito nets to vulnerable areas. Regulators are less concerned with an innovation’s benefits unless the status quo outcomes are already intolerable, which will vary based upon context. Developers and regulators are each incentivized to enhance public health outcomes in the face of bloodborne disease. Yet, given the uncertainty in emerging synthetic biology technologies, neither set of decision-makers can evaluate the decision problem with full information and rational preferences. How the mosquito control problem is framed by these entities in terms of uncertainty and risks versus rewards will skew their mosquito population control preferences.

Let us adapt Tversky and Kahneman’s example to this case. Assume that without the technology, the population will contain 1,000,000 people who acquire bloodborne diseases and 99,000,000 who do not. Of these 1,000,000, 999,600 will survive and 400 will die. The new technology can either work or have harmful effects. If it works, 1,000,000 will live and 0 will die. If it has harmful effects, 999,400 will live and 600 will die. Further assume there is a 1/3 chance the technology will work, and a 2/3 chance it will not.
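The following minimal sketch simply encodes these stated numbers. It is worth noting that the expected number of deaths is 400 whether or not the technology is deployed, so the disagreement derived below is driven entirely by framing rather than by a difference in expected value.

```python
# Hypothetical numbers as stated in the text (per 1,000,000 people who would
# otherwise acquire the disease).
P_WORKS, P_HARMFUL = 1 / 3, 2 / 3   # chances the technology works / has harmful effects

DEATHS_STATUS_QUO = 400   # 999,600 of 1,000,000 survive without the technology
DEATHS_IF_WORKS = 0       # all 1,000,000 survive if the technology works
DEATHS_IF_HARMFUL = 600   # 999,400 survive if the technology has harmful effects

# Expected deaths if the new technology is deployed, before any framing:
expected_deaths_new = P_WORKS * DEATHS_IF_WORKS + P_HARMFUL * DEATHS_IF_HARMFUL
print(expected_deaths_new)   # 400.0 -- identical to the status quo in expectation
```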

The developer takes as a baseline that the emerging technology will work and its benefits will be achieved. The decision frame adopted by the developer is presented in Fig. 1a. If the emerging technology is not used, the developer views this choice as causing 400 people to die as a result of not developing the technology. Conversely, if the technology is developed, there is a 1/3 chance that no one dies but a 2/3 chance that 600 people die. A prospect theoretic utility curve for the developer is shown in Fig. 2a where, because the reference point is 0 people dying, the shape of the curve is concave upward across the range of possible outcomes. In this case, the best case has a utility of 1, the worst case a utility of 0, and the intermediate status quo case (in which 400 people die) a utility of 0.20. For the new technology, the tradeoff between the possible upside and downside yields an expected utility of 0.33, which is greater than, and therefore preferred to, the utility of 0.20 yielded by the status quo technology. In Kahneman and Tversky’s terminology, the developer has a negative frame in that the second and third best cases (400 deaths, 600 deaths) are framed as losses compared to the best case.

Fig. 1 a, b Decision frames for developer and regulator. A hypothetical case study based on status quo technology vs. emerging technology solutions to mosquito control is presented. The projected number of lives gained and lost given each technology and their respective expected utilities influence the final decision outcome

Fig. 2 Utility curves under developer (a) and regulator (b) frames. The utility curves are centered around the respective reference points for the developer and for the regulator

Unlike the developer, the regulator regards the status quo as the baseline (Fig. 1b). Following prospect theory, the utility curve should be concave upward below the reference point, concave downward above it, and steeper below the reference point than above it. Such a curve for a typical regulator is shown in Fig. 2b, with the status quo technology, which yields 0 lives gained or lost, having a utility of 0.50. If the emerging technology is developed, its success will lead to an additional 400 people surviving and thus a utility of 1; however, if it has harmful effects, 600 people will die (those subject to harm under the current technology plus those harmed by the emerging technology), and the regulator’s utility for this result is 0. The regulator thus prefers the status quo with a utility of 0.5 to the new technology with an expected utility of 0.33, because the regulator has a positive frame in which the best case is framed as a gain relative to the reference point, while the worst case is framed as a loss relative to the reference point.
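Putting the two frames side by side, a minimal sketch using only the utility values stated above (0.20 and 0.50 for the status quo under the developer’s and regulator’s frames, respectively) reproduces the opposing preferences.

```python
P_WORKS, P_HARMFUL = 1 / 3, 2 / 3

# Developer frame: reference point is the technology working (0 deaths).
dev_utility = {"works": 1.0, "status_quo": 0.20, "harmful": 0.0}
dev_eu_new = P_WORKS * dev_utility["works"] + P_HARMFUL * dev_utility["harmful"]  # ~0.33
# 0.33 > 0.20, so the developer prefers the emerging technology.

# Regulator frame: reference point is the status quo (400 deaths).
reg_utility = {"works": 1.0, "status_quo": 0.50, "harmful": 0.0}
reg_eu_new = P_WORKS * reg_utility["works"] + P_HARMFUL * reg_utility["harmful"]  # ~0.33
# 0.33 < 0.50, so the regulator prefers the status quo.

print(dev_eu_new > dev_utility["status_quo"])   # True
print(reg_eu_new > reg_utility["status_quo"])   # False
```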

The key points in this example are that the regulator is risk averse when considering the potential gain and views the harm from the potential loss as outweighing the potential gain. In contrast, the developer focuses on the potential losses of the status quo technology and is risk seeking in avoiding these losses. Thus, developers prefer the emerging technology because it offers at least a chance of there being no loss relative to the reference point. Following Kahneman and Tversky’s logic, the developer is risk-seeking in avoiding losses associated with the status quo, while the regulator is risk averse in protecting the known gains of the status quo. Hence, the developer prefers the new technology, while the regulator prefers the old.

Conclusion

Rather than setting the stage for assessing tradeoffs between potential benefits and potential risks, stark differences in decision framing produce tension that makes any compromise between regulator and developer appear to each side to be destroying societal value, separate from the discussion of what portion of that value goes to the developer. This seems like an unavoidable problem because regulators often rely on developers to provide data, but at the same time, the inherent nature of emerging technologies means that developers do not yet have the data to calm the fears of regulators [3]. Therefore, these decision problems will always contain a large degree of uncertainty, which might be alleviated by a cooperative style of governance as suggested by Kelemen [6] if and only if the diverging frames of regulators and developers are addressed at the onset. Such cooperative governing approaches require greater trust and unity of effort to address technological uncertainty, characterize and prioritize technological hazards, and generate data and pertinent qualitative information to assess technological risk [9].

The current default is that developers and regulators are typically left to their own devices to implicitly adopt their own different frames. The known biases associated with framing affect how alternatives will be interpreted. Fortunately, along with identifying the framing bias, Kahneman and Tversky also provide a solution to mitigate it. By mapping this bias to the regulator-developer decision context, we can also map the solution. Using their logic, it is possible to improve technology governance and foster a cooperative governing approach by reducing the biases associated with the actors’ diverging frames. This is done by using a more complete description that lays out the information associated with the potential outcomes of the technology in absolute terms, thereby neutralizing the positive and negative terms. For instance, the situation can be described more neutrally in terms of life and death, e.g., “with the uncertain option, either 0 will die and 600 will survive or 600 will die and 0 will live, while with the status quo 400 will die and 200 will live”. This framing could be used instead of a more compact but also more bias-inducing description such as “pursuing the uncertain option will lead to either 400 lives gained or 200 lives lost” from the positive frame or “with the uncertain option either 0 or 600 lives will be lost, while with the status quo 400 lives will be lost” from the negative frame. This allows for an acknowledgment of differing viewpoints, yet structurally guides regulators and technology developers to collectively adopt an unbiased frame.

Such an approach cannot and should not eliminate all differences and points of contention between the different actors related to technology governance and commercialization. However, by framing technological risks in congruent terms, it can at least bring stakeholders to the same table to allow for debate and contestation around the diverging interests of each stakeholder group.

A key aspect of research is identifying potential technologies and the ways in which they may be utilized, e.g., to solve societal problems or bring benefits. However, charting the course for emerging technology development and deployment also requires anticipating where there will be societal friction of any type. The friction between regulators and business can be especially strong for innovations that have inherent uncertainty in the dimensions of interest to regulators, such as health and safety. This paper proposes an explanation for such frictions and, with this explanation, a standard solution. The simple example here with two actors, two options, and two possibilities incorporates a foundational result in judgment, and this result can be applied directly to situations that match these parameters. Further, this example suggests how futures researchers can deploy the vast literature on judgment and decision-making to the range of nuanced and complex situations that arise as society wrestles with change.

References

  1. Collier ZA, Trump BD, Wood M, Chobanova R, Linkov I (2016) Leveraging stakeholder knowledge in the innovation decision making process. Int J Bus Contin Risk Manag 6:163–81

  2. Jaffe AB, Palmer K (1997) Environmental regulation and innovation: a panel data study. Rev Econ Stat 79(4):610–619


  3. Jasanoff S (2009) The fifth branch: science advisers as policymakers. Harvard University Press, Cambridge

  4. Jones HD (2015) Regulatory uncertainty over genome editing. Nat Plants 1:14011


  5. Tversky A, Kahneman D (1986) Rational choice and the framing of decisions. J Bus 59(4):S251–S278


  6. Kelemen RD (2011) Eurolegalism: the transformation of law and regulation in the European Union. Harvard University Press, Cambridge

  7. Linkov I, Trump B, Keisler J, Cegan J, Foran C, Collier Z, Lambert J, Kuklja M (2020) Signals and metrics identifying partnerships for innovation. IEEE Eng Manag Rev 48(2):39–46


  8. Pan X, Pike A, Joshi D, Bian G, McFadden MJ, Lu P, Liang X, Zhang F, Raikhel AS, Xi Z (2018) The bacterium Wolbachia exploits host innate immunity to establish a symbiotic relationship with the dengue vector mosquito Aedes aegypti. ISME J 12(1):277


  9. Trump BD (2017) Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore. Health Policy 121(11):1139–1146


  10. Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econometrica 47(2):263–291


  11. Tversky A, Kahneman D (1981) The framing of decisions and the psychology of choice. Science 211(4481):453–458


  12. Wilson JQ (1989) Bureaucracy: what government agencies do and why they do it. Basic Books, New York


Acknowledgements

This study was funded in part by the U.S. Army Engineer Research and Development Center FLEX project on Compounding Threats. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the policies or endorsements, either expressed or implied, of their employers.

Author information


Contributions

The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Benjamin D. Trump.



Cite this article

Keisler, J.M., Trump, B.D., Wells, E. et al. Emergent technologies, divergent frames: differences in regulator vs. developer views on innovation. Eur J Futures Res 9, 10 (2021). https://doi.org/10.1186/s40309-021-00180-5
