
Evaluation of technology foresight projects

Abstract

Foresight is a well-known and widely used methodology for the creation of medium- and long-term visions of technological, economic and social development. The need for evaluating foresight projects is unquestionable, yet such evaluations remain rare. The interests of the authors of the paper are focused mainly on foresight impact as one of the principal aspects of foresight evaluation, although they are aware of the numerous other objectives and aspects of foresight evaluation. The authors show the outcomes of case study analyses of selected evaluations conducted with regard to national and transnational foresight projects. Furthermore, current attempts to create systemic foresight evaluation frameworks are presented. They comprise general frameworks meant for evaluating different aspects of foresight project execution, with respect to both the process and the results, including foresight impact as one of the evaluated aspects, as well as frameworks devoted strictly to foresight impact evaluation. Scientific work on foresight evaluation models is still in progress, and the authors of the paper indicate the current stage of the models’ development.

State of the art

The dynamic growth in the importance of the competitiveness and innovativeness of advanced product and process technologies, and the concomitant need for strategic planning of areas for action at the national, regional, and institutional levels, determine the actions of the most developed economies directed at the identification of developmental changes and trends, particularly in the medium and the long term. Due to the growing complexity of relations between science, technology, environment, and society, the analysis of market trends is becoming an increasingly difficult process. At the same time, determining and shaping future R&D priorities seems indispensable for effective investment in science and new technologies, aimed, in the long term, at improving the quality of life of the entire society. Since the cost of scientific and technological development has increased, the need has emerged for systems for the early identification of change tendencies, the evaluation of risk, and the indication of possible opportunities that science and technology development may bring. Therefore, both R&D organisations and enterprises should be equipped with appropriate tools supporting the identification of the directions of the future development of advanced technologies, because such tools would help them increase their competitiveness and innovation performance. Foresight constitutes a very effective tool for achieving such objectives.

According to the dictionary (http://dictionary.reference.com/browse/foresight), foresight means “prescience,” “the act or power of foreseeing,” “an act of looking forward.” Before the term “foresight” started to be widely used with reference to future studies in the 1990s, the term “forecasting,” developed in the late 1940s and the 1950s, was more common. Other labels such as ‘anticipation’ or ‘la prospective’ were also used [1]. With the emphasis on technology foresight, R&D funding priorities have often been of central concern, as well as the Science, Technology and Innovation system of the country, or specific technological challenges [2].

One of the most often quoted definitions was the one formulated by Ben Martin in 1995: Foresight is “the process involved in systematically attempting to look into the longer-term future of science, technology, the economy and society with the aim of identifying the areas of strategic research and the emerging generic technologies likely to yield the greatest economic and social benefits” [3].

The key elements of foresight concerning prospective studies and policymaking are stressed in numerous definitions, among others by J. F. Coates [4]: “Foresight is a process by which one comes to a fuller understanding of the forces shaping the long-term future which should be taken into account in policy formulation, planning and decision-making… Foresight includes qualitative and quantitative means for monitoring clues and indicators of evolving trends and developments.”

In contemporary specialist terminology, apart from the aspects mentioned above, emphasis is also placed on the interaction that results from the active participation, in the execution of such projects, of representatives of different spheres: R&D, industry, government, media, and society:

“The key notions related to foresight are policy making, public participation, learning, alternatives, complex socio-technical systems, and science-society relationships” [5].

Foresight projects have already been in use for a few decades. The first foresight projects date back to the 1960s and 1970s and were introduced in the USA and Japan. The growing interest in national foresight projects could first be noticed in Western Europe in the 1990s [6]. Foresight has won general acclaim and has been performed on a large scale to direct science, technology, and innovation policies in a number of countries and on many different organisational levels, including supra-national, national, sectoral, regional, and corporate (used by both public and private organisations) [5].

Foresight practitioners usually focus on developing methodologies and conducting foresight exercises and do not have an influence on the implementation of their results. Foresight sponsors (either public or private institutions) seldom contract further research aimed at the implementation of the achieved results (examples include the National Foresight Programme “Poland 2020” [7]). Nevertheless, some foresight projects have ended with the practical application of their results, i.e. for shaping national and regional policies on innovation and planning, e.g. in Japan and South Africa [8], or for launching new research programmes at the national or regional level, e.g. in Great Britain, the Czech Republic, and Poland [9–11]. Additionally, foresight results were applied to introduce changes in existing project financing mechanisms, e.g. in Germany and France [12–14].

Although foresight is presently seen as a well-established tool by policy makers and managers, it has not been systematically evaluated as an instrument of science and innovation policy [15]. The first attempts to evaluate foresight projects were undertaken in the late 1990s [16]. Given the role of foresight projects in creating long-term strategies, and the fact that they are instruments of public policy which consume time as well as human and financial resources, it seems justified that they should undergo evaluation.

Research on foresight evaluation has been carried out in the recent past, in a fragmentary way, and by a limited number of foresight researchers. The aspects of foresight evaluation are described mainly by M. van der Steen and P. van der Duin [17], K. Cuhls [18], R. Popper, L. Georghiou, I. Miles, M. Keenan [19], S.-S. Li, M.-H. Kang, L.-C. Lee [20], M. Butter, F. Brandes, M. Keenan, R. Popper [21], P. Destatte [22], L. Georghiou [23], L. Georghiou and M. Keenan [24], and A. Havas et al. [25]. The subjects of research papers related to foresight evaluation comprise mostly factors of foresight success, areas of foresight impact, and different aspects of the foresight process [26]. Evaluation is “the activity that consists simply in the gathering and combining of performance data with a weighted set of goal scales to yield either comparative or numerical ratings” [27]. Evaluation concerns different kinds of undertakings, including research programmes [28–30] and foresight studies, which are implemented in the form of projects or programmes [16].

Taking into account the time of conducting the evaluation, the following types can be distinguished: ex-ante (conducted before launching a project and aimed at supporting the decision on whether the project should be launched) [31]; mid-term or on-going (conducted in the course of the project execution and focused on the project progress and problems) [32]; and ex-post and follow-up (conducted at the end of the project or a few years after its close-down, respectively, and focused on final achievements and results) [33, 34].

Evaluation is mainly executed with the use of standard criteria comprising [35]: relevance, effectiveness, efficiency, appropriateness, utility, impact, complementarity, complexity, and sustainability. Three basic tests can be applied for the generalised evaluation framework of foresight projects: accountability (efficiency of the activities conducted), justification (foresight effects), and learning (ways of improving the foresight process) [15]. The impact of foresight activities is the principal indicator of foresight evaluation [16]. It is stressed by numerous authors of recent foresight literature (among others J. Smith [36], A. Havas et al. [25], and L. Georghiou and M. Keenan [24]).

At the same time, measuring impact has been identified in the foresight literature as difficult. Problems with measuring the impact, including the time lag between the foresight project and the occurrence of its results, as well as the possibility of direct and indirect impact, were stressed by C. Cagnin et al. [5], J. Calof and J. Smith [8], A. Havas et al. [25], and others. Furthermore, it was stressed that the impact of foresight depends on its relevance to major problems faced by society, as well as on foresight timing and the quality of the achieved results.

Practical examples of foresight results

Despite the importance of evaluating foresight project results and the activities that have been undertaken in this area, the development of a coherent and agreed framework for foresight impact evaluation has progressed very slowly.

Because foresight projects are carried out in specific macro- and microeconomic conditions, numerous factors determine their execution and the effects they bring. Therefore, several criteria that can be used for the evaluation of such undertakings can be identified. Some attempts to evaluate foresight projects have been undertaken, and selected examples concerning the evaluation of foresight projects at the national and transnational levels are presented in this article. The main rationale for selecting the case studies was whether the aspect of foresight impact was taken into account in the course of their evaluation. All studies, apart from one case, concern ex-post evaluation.

The analysis of the scope of the evaluation of national foresight projects in Germany, Hungary, the United Kingdom, and Colombia, as well as of transnational foresight activities carried out by the European Commission and UNESCO (Table 1), has demonstrated that the assessment mainly concerned the efficiency of (1) the foresight process (e.g. methodology, expert engagement, organisational structure, management procedures, financial contribution) and (2) the foresight outputs (e.g. the products and services, tangible and intangible, which result from the foresight exercise).

Table 1 Selected examples of the evaluated national foresight projects

The case study analysis covers the following aspects:

  • The scope of evaluation,

  • The methods of evaluation, and

  • The results of evaluation.

In Germany, the evaluation process was designed to analyse both the process and the results of the national foresight programme Futur (2001–2005) [18]. Two evaluations took place: the first in 2002–2003 and the second in 2004–2005. In each case, the evaluation was executed by an International Panel of Foresight Evaluation Experts led by a renowned foresight scientist (L. Georghiou and A. Salo, respectively) and composed of foresight practitioners and two representatives of the German academic and business communities. The first evaluation (2002–2003) covered the achievement of Futur objectives, the adequacy and efficiency of the Futur foresight methodology, and the correlation of Futur outputs with strategic research programmes launched by the German Federal Ministry of Education and Research (BMBF). The methodology of the evaluation process was developed by the Fraunhofer Institute for Systems and Innovation Research and included such methods as online questionnaires, interviews, document analyses, participatory observations, and expert panels. The key remarks from the evaluation report included the following:

  • The over-complicated structure of the process and the under-engagement of the foresight sponsor (BMBF) as disadvantages of Futur, and

  • The high efficiency in the development of leading visions and the special fund assigned by BMBF to execute interdisciplinary and interdepartmental research projects in the priority areas as advantages of Futur.

The Hungarian Technology Foresight Programme (TEP), executed in the 1997–2000 period, was evaluated in 2004. The evaluation covered two issues: assessing the degree to which the aims were achieved and developing recommendations for future foresight projects in Hungary [37]. As in Germany, the evaluation was carried out by an international panel of evaluation experts, led by the Manchester Institute of Innovation Research. The main evaluation methods included questionnaires, interviews, and document analyses. The questionnaire was answered by ca. 60 TEP experts, who assessed such issues as the TEP methodology, the TEP organisational structure, and the TEP results and impacts. There were two main messages from the evaluation of the TEP programme: (1) The results of the Hungarian national foresight programme had not been implemented into science and technology policymaking (disadvantage). (2) The TEP participants (representatives of the science, industry, and public administration communities) changed their attitude towards thinking about the future in favour of more complex and interdisciplinary approaches (advantage). In general, stakeholders supported the idea of launching the next national foresight programme (advantage), but at that time there was no such support on the governmental (political) level (disadvantage).

The United Kingdom Foresight Programme executed in 2002 was evaluated in 2006 by the Manchester Institute of Innovation Research; the evaluation addressed the impact of the Programme and its constituent projects, its cost-effectiveness, and its management. The main instruments of the evaluation were interviews with 8 foresight team members and 28 stakeholders. Interviewees were asked to assess the impact of UK Foresight (its immediate outputs and its influence on medium- to long-term policy making) and the effectiveness of process management. Additional methods included a discussion forum, web-based consultations, and benchmarking (the comparison of the UK Foresight Programme with national foresight exercises executed in Sweden, Denmark, Germany, Japan, Spain, and France).

The overall conclusion [38] was that the UK Foresight Programme achieved its objectives of identifying ways in which future science and technology could address future challenges for society, and of identifying potential opportunities. All projects were successful in mobilising diverse groups of specialists (senior policymakers, scientists, business representatives) to work in a multidisciplinary framework and across disciplinary boundaries. The evaluation report stated that such mobilisation could probably not have been achieved by conventional research programmes. With regard to cost-effectiveness, it was underlined that the approach adopted in the overall process was “fit-for-purpose,” delivered high-quality outputs, and offered good value for money. However, at the same time, it was emphasised that some projects were moderately under-resourced and that, as a result, the most costly ones had the greatest impact. According to the evaluation team, the management structure also needed improvement. There was an imbalance between the supply of projects and the demand, resulting in the under-exploitation of foresight potential. A modest expansion was recommended, with one more project undertaken each year and a commensurate expansion of management and executive resources. To sum up, UK Foresight created a process for original thinking in government, particularly for the application of science-based evidence and foresight techniques to policy issues.

The evaluation of the Colombian Technology Foresight Programme (CTFP) [39] focused on the second cycle (2005–2008) of the CTFP. It was also executed by the Manchester Institute of Innovation Research, in 2010. The methods of evaluation were similar to the instruments applied in the case studies described above and included interviews, document analyses, international evaluation panels, online stakeholder surveys, and benchmarking the CTFP practices against Europe and South America. The 20 evaluation criteria included, among others, measures for the assessment of the management process, the level of achievement of objectives, cost-effectiveness, the efficiency of methods, the engagement of participants, the impact of the results on public and private beneficiaries and stakeholders, and the level of quality and novelty of the outcomes. On the whole, the CTFP objectives were assessed as appropriate and successfully achieved. The evaluation report concluded that the CTFP introduced a wide and effective portfolio of forward-looking approaches and tools. With regard to management and cost-effectiveness, it was stated that the total cost was too low for the amount of work carried out by the Programme. Although the scale of the programme showed excellent value for money, it was recommended that future programmes find ways either to increase funding or to reduce the number of simultaneous projects. The evaluation team recommended that wider participation of the general public in the CTFP should be encouraged. In addition, the composition of the expert panels could be improved, because there was an overrepresentation of figures from administration. Nevertheless, the CTFP positively influenced the activities of key governmental programmes and agencies that define Science and Technology policies and research agendas. The most significant influence of the CTFP on public policy was the work on the STI Vision 2019, used for the preparation of the National STI Plan 2019.

The evaluation of activities within the UNESCO Anticipation and Foresight programme undertaken over the period 1999–2005 was part of the Evaluation Plan and was aimed at assessing the results and impact of UNESCO’s activities in this area [40]. The evaluation, conducted in 2005, focused on analysing the results achieved and the lessons learned in the course of the execution of the Anticipation and Foresight programme, as well as on orientating future activities within the programme. The evaluation criteria included relevance, effectiveness, efficiency, and impact. The evaluation was carried out with the use of the following methods: desk research (document analysis) and semi-structured in-depth phone and face-to-face interviews with UNESCO staff and a variety of stakeholders.

The evaluation indicated some challenges for the Anticipation and Foresight programme:

  • There was no evidence that foresight activities had influenced strategy formulation and policy design in UNESCO.

  • Suggestions were made as to how the programme and its impact might be improved, among others by taking into account Member States’ needs in terms of foresight on UNESCO-related issues and by applying more ways of disseminating foresight results.

The foresight activities implemented through EU indirect actions under FP5 and FP6 and managed by unit K2 in DG Research of the European Commission were the subject of a Mid-Term Assessment carried out in 2004 by a panel of independent experts [41]. The evaluation was aimed at assessing the initial results and the potential impact of the expected outcomes of the foresight activities. It covered three types of foresight activities: STRATA projects; High Level Expert Groups; and conferences, seminars, and workshops. Each type of action was evaluated against three criteria: relevance, quality, and impact. The methods applied comprised document analysis and interviews held in five Member States. The evaluation resulted in critical and positive comments, as well as recommendations for improvements. The following are of particular interest here:

  • The products and results of EC foresight activities were assessed as satisfactory to highly satisfactory.

  • Direct impact on decision making concerning Science and Technology, both in the Member States and in the European Commission, could not be easily identified; however, the evaluators assumed that indirect impact had occurred.

  • Foresight should be treated as an instrument to be applied in the Science and Technology decision-making process at different levels: the European Commission, national governments, and independent R&D organisations, including private industry.

  • The relevance and benefits of applying foresight as a policy tool should be promoted to a greater extent.

The national and transnational case studies discussed above demonstrate the similarity of the evaluation approaches with regard to the following:

  • The coverage of evaluation (foresight process, its outcomes and impact),

  • The organisational structure (evaluation expert panel), and

  • Methodology (the main methods: interviews and document analysis).

The applied evaluation approaches should be extended because, for example, they did not fully succeed in measuring the impacts of foresight results on project beneficiaries and stakeholders. In the authors’ opinion, assessing the impact of foresight undertakings is extremely important for decision-makers and thus remains an essential challenge for the foresight research community.

Evaluation models

The importance of the issue of foresight evaluation, together with the practical attempts to evaluate this type of project, indicates the need to develop systemic foresight evaluation models. Such attempts have been undertaken by researchers and comprise, among others, a foresight evaluation framework by P. Destatte [22], a framework of the foresight evaluation process by E. A. Makarova and A. V. Sokolova [16], an evaluation framework by S.-S. Li, M.-H. Kang, and L.-C. Lee [20], an input–output–impact schema of foresight by K. A. Piirainen, R. A. Gonzales, and J. Bragge [42], a Foresight Impact Schema by R. Johnston [43], and an instrument for measuring the impact of foresight by R. Johnston and J. Smith [36].

The proposed models and frameworks for foresight evaluation take into account different aspects of foresight project execution, including its impact. Two of them, the models by R. Johnston and by R. Johnston and J. Smith, are devoted strictly to the evaluation of impact.

The assessment criteria vary depending on the objectives of individual foresight projects [44]. The foresight evaluation framework by P. Destatte [22] is based on two dimensions: the process and the outcomes. The following foresight evaluation criteria, concerning different elements of the model, are proposed: effectiveness, efficiency, utility, relevance and appropriateness, sustainability, fairness, and behavioural additionality. In the model, the impact is understood as the consequences of the foresight exercise for the addressees after the achievement of the strategy. The author recognises direct impact (on the direct addressees) and indirect impact (on other winning and losing addressees). He distinguishes long-term impacts, which can be called “sustainable impacts.”

The framework for foresight evaluation developed by S.-S. Li, M.-H. Kang, and L.-C. Lee [20] applies two dimensions, process and outcome, and uses four of the criteria proposed by P. Destatte [22]: effectiveness, efficiency, relevance and appropriateness, and behavioural additionality; it also proposes evaluation indicators. The framework comprises both process and result evaluations. The authors propose measured items (evaluation key points) and define indicators for them. As an example, with regard to measuring the impact, the following indicators are proposed (a schematic sketch of such a rubric follows the list below):

  • Moving towards interdisciplinary thinking,

  • Establishing a long-term perspective, and

  • Increasing the level of foresight culture and cognition [20].
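
The structure of such a framework, with dimensions broken down into criteria and measurable indicators, can be pictured as a simple nested rubric. The following minimal Python sketch is purely illustrative and is not part of the framework itself: the indicator names under the process dimension and the 1–5 rating scale are hypothetical assumptions, while the behavioural-additionality indicators repeat those quoted above from [20].

    # Illustrative sketch only: a nested rubric of dimensions -> criteria -> indicators,
    # loosely following the process/outcome framework of Li, Kang and Lee [20].
    # Indicator names under "process" and the 1-5 rating scale are hypothetical.
    from statistics import mean

    rubric = {
        "process": {
            "efficiency": ["methods fit for purpose", "resources used economically"],
            "relevance & appropriateness": ["stakeholder coverage", "timing of the exercise"],
        },
        "outcome": {
            "effectiveness": ["objectives achieved", "results used in decision-making"],
            "behavioural additionality": [
                "moving towards interdisciplinary thinking",
                "establishing a long-term perspective",
                "increasing the level of foresight culture and cognition",
            ],
        },
    }

    def score_project(ratings: dict) -> dict:
        """Average hypothetical 1-5 ratings per criterion; unrated indicators default to 3."""
        return {
            dimension: {
                criterion: mean(ratings.get(indicator, 3) for indicator in indicators)
                for criterion, indicators in criteria.items()
            }
            for dimension, criteria in rubric.items()
        }

    # Example call with invented ratings, e.g. aggregated from an expert questionnaire.
    print(score_project({"objectives achieved": 4, "establishing a long-term perspective": 5}))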

The input–output–impact schema of foresight developed by K. A. Piirainen, R. A. Gonzales, and J. Bragge [42] can be used from the phase of foresight planning (ex-ante evaluation) to the evaluation of the finished project (ex-post evaluation). It is advisable to use the framework from the phase of project design up to its completion, because this supports continuous evaluation throughout the entire lifecycle of the project and covers its results. The developed schema takes into account three levels of evaluation: the utility and delivery level (fulfilment of objectives, the quality of the process, and the content and delivery of the foresight project as important factors in creating the impact of foresight), the technical level (the technical quality of execution, data quality, sustainability of methods), and the ethical level (the ethical dimension of futures studies). The proposed framework is quite detailed and contains multiple perspectives. However, the authors of the schema assume that, for individual foresight projects, relevant and specific evaluation criteria and measures should be developed.

Moreover, they stress the possibility and the potential benefits of applying the proposed framework for evaluating the input, the output, and the sustained impact of foresight projects. For each level of evaluation and each stage of foresight project execution, sets of “pre-activity” evaluation questions are proposed (a schematic sketch of this level-by-stage organisation follows the question list below). As an example, for evaluating the sustained impact at the utility and delivery level, the following questions were proposed:

  • Was the perspective the one that was needed?

  • Did the analysis prove sufficient to support answering the question/problem?

  • What limitations did the analysis uncover?

  • Did the prospection answer the question?

  • Were the results satisfying to the stakeholders?

  • Was it engaging and inspiring enough to have impact on the imagination of the readers?

  • Were the strategies feasible, and were they based on the foresight?

  • Were they implemented?

  • Were the forecasts on the right level of analysis and depth to support strategising [42]?
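
To make the level-by-stage organisation of these question sets concrete, the sketch below arranges them in a simple lookup keyed by evaluation level and project stage. It is an illustration only: the levels, stages, and the three sample questions come from the schema and the list above, whereas the dictionary layout and the helper function are assumptions and not part of [42].

    # Illustrative sketch only: organising the "pre-activity" evaluation questions of the
    # input-output-impact schema [42] by (evaluation level, project stage). The dictionary
    # layout and the helper function are assumptions made for illustration.
    LEVELS = ("utility and delivery", "technical", "ethical")
    STAGES = ("input", "output", "sustained impact")

    questions = {
        ("utility and delivery", "sustained impact"): [
            "Was the perspective the one that was needed?",
            "Did the prospection answer the question?",
            "Were the strategies feasible, and were they based on the foresight?",
        ],
        # ... the remaining (level, stage) cells would be filled in analogously
    }

    def checklist(level: str, stage: str) -> list:
        """Return the evaluation questions registered for a given level and stage."""
        assert level in LEVELS and stage in STAGES, "unknown level or stage"
        return questions.get((level, stage), [])

    for question in checklist("utility and delivery", "sustained impact"):
        print("-", question)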

Although the proposed framework can be treated as a useful tool for evaluation, its authors indicate that there is a lack of evidence confirming its applicability; thus, it is necessary to evaluate the evaluation framework itself.

Apart from general frameworks for foresight evaluation, models devoted strictly to foresight impact evaluation can also be indicated. In such cases, the selection of evaluation criteria depends mostly on the types of impacts.

Classifications of foresight impacts were proposed by O. Da Costa et al. [45], A. Havas et al. [46], L. Georghiou and M. Keenan [47], and M. Ladikas and M. Decker [48]. The authors of the paper quote the classification developed by A. Havas, D. Schartinger, and M. Weber [46] (Table 2), in which different planned and unintended impacts are grouped according to the foresight functions in relation to policy-making processes (one of the most important aspects of foresight projects) and the time lag at which an impact occurs.

Table 2 A Framework to classify impacts of foresight activities

These types of impacts were taken into account while building the Foresight Impact Schema by R. Johnston [43]. It was developed with the objective of guiding practitioners in the appropriate design and execution of foresight projects so as to maximise the impact of such undertakings. The Foresight Impact Schema is quite general; it focuses on activities mainly undertaken within existing organisations and structures and can be applied to different types of foresight projects, on the condition that the design of a detailed impact assessment protocol is based on precise details that take into account the objectives and characteristics of a particular project. The Schema needs to be tailored to the specific character and requirements of any particular foresight project. It considers four types of impacts, taking into account classifications developed by other researchers, including A. Havas et al. [46]. The four types of impacts are as follows:

  1. Awareness raising – increasing the understanding of target audiences with regard to the need, value, etc. of foresight;

  2. Informing – providing information that can be useful for the improvement of planning and decision-making;

  3. Enabling – providing tools for the better management of uncertainty associated with the future; and,

  4. Influencing – shaping the policy, strategies, research priorities, etc.

For each type of impact, typical outcomes and possible metrics are identified. As an example, possible metrics for the “influencing” type of impact are as follows (a schematic sketch of the schema follows the list below):

  • The number of departments influenced by a particular report;

  • The extent of influence (e.g. major, moderate, and minor) reported;

  • The number and scale of follow-on and spin-off foresight projects;

  • National comparative performance in high value added goods and services;

  • The contribution of research to major national and international issues; and,

  • Public confidence in research [43].
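
One way to picture the Schema is as a small catalogue of impact types, each carrying its typical outcomes and candidate metrics, which is then pruned and extended for a specific project. The sketch below is an assumption-laden illustration rather than Johnston’s own representation: the dataclass, its fields, and the tailoring helper are hypothetical, while the impact-type names and the three metrics for “influencing” are quoted from the text above.

    # Illustrative sketch only: the four impact types of the Foresight Impact Schema [43]
    # represented as a small catalogue that can be tailored to a particular project.
    # The dataclass, its fields, and the tailoring helper are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ImpactType:
        name: str
        description: str
        metrics: List[str] = field(default_factory=list)

    schema = [
        ImpactType("awareness raising", "understanding of the need and value of foresight"),
        ImpactType("informing", "information useful for planning and decision-making"),
        ImpactType("enabling", "tools for better management of uncertainty about the future"),
        ImpactType("influencing", "shaping policy, strategies and research priorities",
                   ["number of departments influenced by a particular report",
                    "extent of influence reported (major, moderate, minor)",
                    "number and scale of follow-on and spin-off foresight projects"]),
    ]

    def tailor(catalogue, selected_names):
        """Keep only the impact types chosen for a specific project's assessment protocol."""
        return [impact for impact in catalogue if impact.name in selected_names]

    for impact in tailor(schema, {"influencing"}):
        print(impact.name, "->", impact.metrics)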

The Foresight Impact Schema has been applied in two case studies (UK Foresight Programme, foresight project on irrigated agriculture in Australia).

The instrument for measuring the impact of foresight developed by R. Johnston and J. Smith [36] comprises 54 measures grouped into several different lenses (measure groupings) aimed at measuring the impacts of foresight. The following lenses (levels of impact interest) are distinguished:

  • The key role of foresight and client impact;

  • General benefits from foresight for those directly involved;

  • Critical success factors for foresight process designers and planners;

  • Meta measures connected primarily with training and skills development and secondarily with risk management;

  • Pre-policy measures (design and planning);

  • Policy implementation measures (policy support impact);

  • Post-policy measures (implementation impact).

The instrument was pilot-tested on two Canadian foresight programmes.

Since there are many methodologies for conducting foresight, there are also various frameworks recommended for their evaluation. The general evaluation frameworks presented here can be applied for evaluating the process and results of foresight projects, taking their impact into account as one of the aspects. On the other hand, frameworks devoted strictly to evaluating foresight impact assume the evaluation of different aspects of impact and propose possible measures. However, proving the impact of foresight on policy using the measures available in the present literature is very difficult. Many of the proposed measures are qualitative in character and thus are difficult to collect, analyse, and derive specific conclusions from. The authors of the models acknowledge their general character and the need to adjust them to the specificity of particular foresight projects. More extensive empirical verification of the proposed frameworks should result in improving their theoretical assumptions.

Conclusions

As shown in the case studies analysed in the paper, initiatives undertaken in order to evaluate national and transnational foresight projects focused mainly on the evaluation of the organisational and methodological aspects of the programmes and on whether the planned objectives were achieved. However, proving and measuring the value and impact of foresight studies is presently becoming the critical challenge. This challenge has been recognised by some researchers, who have made the effort to develop models and frameworks for the evaluation of foresight projects, including the aspect of foresight impact. Some general foresight evaluation frameworks treat impact as one of the evaluated aspects, like those developed by K. A. Piirainen et al. [42], E. A. Makarova and A. V. Sokolova [16], S.-S. Li et al. [20], and P. Destatte [22]. Other frameworks are devoted exclusively to foresight impact evaluation (R. Johnston [43], and R. Johnston and J. Smith [36]). As shown in the literature review presented in this paper, the scientific work on foresight evaluation models is still in progress.

The interests of the authors of the paper are focused mainly on foresight impact as one of the principal aspects of foresight evaluation, although they are aware of the numerous other objectives and aspects of foresight evaluation. The authors intend to continue their analyses and, taking into account the possible types of impacts of foresight activities, to propose sets of measures that can be applied for their evaluation. This is a complex and challenging task, but definitely a needed one, especially in times of economic crisis, when public funds should be used efficiently and selectively for publicly funded research.

References

  1. Martin BR (2010) The origins of the concept of ‘foresight’ in science and technology: An insider’s perspective. Technological Forecasting and Social Change 77:1438–1447

  2. Miles I (2008) Introduction to technology foresight, Paper for UNIDO Workshop “Technology Foresight for Practitioners (Roadmapping)” Prague, November

  3. Martin BR (1995) Foresight in science and technology. Technology Analysis & Strategic Management 7:139–168

  4. Coates JF (1985) Foresight in federal government policymaking, Futures Research Quarterly, pp. 29–53

  5. Cagnin C, Keenan M, Johnston R, Scapolo F, Barré R (eds) (2008) Future-Oriented Technology Analysis. Strategic Intelligence for an Innovative Economy. Springer, Germany

  6. United Nations Industrial Development Organisation (2005) Technology foresight manual, Vol. 1–2, Vienna

  7. Czaplicka-Kolarz K (2011) From National Polish Foresight (NPF 2020) towards it implementation - to foster smart growth, European Forum on Forward Looking Activities (EFFLA), Second meeting, Warsaw, 16–17 November

  8. Calof J, Smith JE (2012) Foresight impacts from around the world: a special issue. Foresight 14(1):5–14

  9. Keenan M, Marvin S, Winters C (2002) Mobilising the regional foresight potential for enlarged European Union, United Kingdom Country Report, Brussels

  10. Klusacek K (2003) “Technology Foresight in the Czech Republic”, Discussion Paper Series, article 03–15

  11. Mazurkiewicz A, Sacio-Szymańska A, Poteralska B (2012) Setting priority R&D directions for the strategic research institutes, The XXIII ISPIM Conference – Action for Innovation: Innovating from Experience, Barcelona, Spain, 17–20 June

  12. Giesecke S (2007) Futur – The German Research Dialogue, Foresight Brief No.1, The European Foresight Monitoring Network EFMN

  13. Cadiou Y (2003) From key-technologies to key-competencies. Scientific and technological competencies at the regional level related to the French “Key-Technologies” exercises, The Second International Conference on Technology Foresight, Tokyo

  14. Hoffmann B, Rader M (2003) Review and analysis of national foresight. Case study France - technologies clés 2005, Forschungszentrum Karlsruhe GmbH in der Helmholtz-Gemeinschaft, Institut für Technikfolgenabschätzung und Systemanalyse

  15. Georghiou L (2003) Evaluating Foresight and Lessons for Its Future Impact., paper presented at The Second International Conference on Technology Foresight, 27–28 February, Tokyo

  16. Makarova EA, Sokolova AV (2012) Foresight evaluation: lessons from project management. Basic Research Program. Working Papers. National Research University, Higher School of Economics

  17. van der Steen M, van der Duin P (2012) Learning ahead of time: how evaluation of foresight may add to increased trust, organizational learning and future oriented policy and strategy. Futures, Special Issue: Looking back on looking forward, Vol. 44, Issue 5

  18. Cuhls K (2011) Foresight as ex-ante evaluation – the case of the BMBF foresight process, Fraunhofer ISI

  19. Popper R, Georghiou L, Miles I, Keenan M (2010) Evaluating Foresight: Fully-Fledged Evaluation of the Colombian Technology Foresight Programme (CTFP), Cali: Universidad del Valle, ISBN 978-958-670-842-5, http://community.iknowfutures.eu/pg/file/popper/view/2204/evaluating-foresight-fullyfledged-evaluation-of-ctfp

  20. Li S-S, Kang M-H, Lee L-C (2009) Developing the evaluation framework of technology foresight program: lesson learned from European countries, Atlanta Conference on Science and Innovation Policy

  21. Butter M, Brandes F, Keenan M, Popper R (2008) Evaluating Foresight: an introduction to the European Foresight Monitoring Network. Foresight 10(6):3–15

  22. Destatte P (2007) Evaluation of Foresight: how to Take long term impacts into consideration? FOR-LEARN Mutual Learning Workshop Evaluation of Foresight, Brussels, IPTS-DG RTD, http://forlearn.jrc.ec.europa.eu/guide/6_follow-up/documents/0709%20Destatte%20Evaluation%20of%20Foresight.pdf

  23. Georghiou L (2008) Advances in the Organisation of Foresight and the Evaluation of Foresight, The University of Manchester

  24. Georghiou L, Keenan M (2006) Evaluation of national foresight activities: Assessing rationale, process and impact. Technological Forecasting & Social Change 73:761–777

  25. Havas A, Schartinger D, Weber M (2010) The impact of foresight on innovation policy-making: recent experiences and future perspectives. Research Evaluation 19(2):91–104

  26. Rossi PH, Freeman HE, Lipsey MW (2004) Evaluation. A systematic approach, 7th edn. Sage publication, Thousand Oaks

  27. Scriven M (1967) The methodology of evaluation. Rand McNally, Chicago

  28. Cronbach L, Ambron S, Dornbusch S, Hess R, Hornik R, Phillips D, Walker D, Weiner S (1980) Towards Reform of Program Evaluation. Jossey-Bass, San Francisco, CA

  29. Patton M (1997) Utilization focused evaluation: the new century text. Sage Publication, London

  30. Owen JM (2006) Program evaluation: forms and approaches. Allen & Unwin, Australia

  31. European Commission (2013) Guidelines for the ex-ante evaluation of 2014 – 2020 EMFF OPs. September

  32. EVALSED (2008) The Resource for the Evaluation of Socio-Economic Development, Guide, European Commission

  33. Evaluating EU Expenditure Programmes: A Guide – Ex post and intermediate evaluation (1997) XIX/02, Budgetary overview and evaluation, Directorate-General XIX – Budgets, European Commission

  34. ASTD (2008) Measurement & Evaluation: Essentials for Measuring Training Success. Tips, tools and intelligence for trainers. The American Society for Training and Development

  35. Haber A (2007) Ewaluacja ex-post. Teoria i praktyka badawcza. Polska Agencja Rozwoju Przedsiębiorczości, Warsaw

  36. Smith J (2012) Measuring Foresight Impact, EFP Brief No. 249

  37. Evaluation of the Hungarian Technology Foresight Programme (TEP) (2004) Report of an International Panel, May. www.nih.gov.hu/english/technology-foresight/evaluation-of-the-080519

  38. Evaluation of the United Kingdom Foresight Programme (2006) Final Report, PREST, Manchester Business School, University of Manchester, http://www.techforesight.ca/InternationalForesightReports/UKForesightProgramEvaluation2006.pdf

  39. Popper R, Georghiou L, Miles I and Keenan M (2010) Evaluating Foresight: Fully-Fledged Evaluation of the Colombian Technology Foresight Programme (CTFP), Cali: Universidad del Valle, ISBN 978-958-670-842-5. http://community.iknowfutures.eu/pg/file/popper/view/2204/evaluating-foresight-fullyfledged-evaluation-of-ctfp

  40. De Laat B, Dani S (2006) Evaluation of UNESCO Anticipation and Foresight Programme, Internal Oversight Service Evaluation Section, UNESCO, July

  41. Mid-Term Assessment of Foresight Activities (2004) Report to Directorate General Research of the European Commission, October

  42. Piirainen KA, Gonzales RA, Bragge J (2012) A systemic evaluation framework for futures research. Futures 44

  43. Johnston R (2012) Developing the capacity to assess the impact of foresight. Foresight, vol. 14, no. 1, Emerald Group Publishing Limited

  44. Miles I (2012) Dynamic foresight evaluation. Foresight, vol. 14, no. 1, Emerald Group Publishing Limited

  45. Da Costa O, Warnke P, Cagnin C, Scapolo F (2008) The impact of foresight on policy-making: insights from the FORLEARN mutual learning process. Technology Analysis and Strategic Management, vol. 20, no. 3, pp. 369–387

  46. Havas A, Schartinger D, Weber M (2007) Experiences and Practices of Technology Foresight in the European Region, Technology Foresight Summit

  47. Georghiou L, Keenan M (2008) Evaluation and impact of foresight. In: Georghiou L, Cassingena Harper J, Keenan M, Miles I, Popper R (eds) The handbook of technology foresight: concepts and practices. Edward Elgar, Cheltenham

  48. Ladikas M, Decker M (2004) Assessing the impact of future-oriented technology assessment, paper presented at EU-US Seminar: New Technology Foresight, Forecasting & Assessment Methods, Seville, 13–14 May

Author information

Correspondence to Beata Poteralska.

Rights and permissions

This article is published under license to BioMed Central Ltd. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

About this article

Cite this article

Poteralska, B., Sacio-Szymańska, A. Evaluation of technology foresight projects. Eur J Futures Res 2, 26 (2014). https://doi.org/10.1007/s40309-013-0026-1

  • DOI: https://doi.org/10.1007/s40309-013-0026-1

Keywords