- Original Article
- Open Access
Evaluation of technology foresight projects
© The Author(s) 2013
- Received: 28 June 2013
- Accepted: 10 October 2013
- Published: 24 November 2013
Foresight is a well-known and widely used methodology for the creation of medium- and long-term visions of technological, economic, and social development. The need for evaluating foresight projects is unquestionable, but such evaluation is still rare. The interests of the authors of the paper are focused mainly on foresight impact as one of the principal aspects of foresight evaluation, although they are aware of the numerous other objectives and aspects of foresight evaluation. The authors show the outcomes of case study analyses of selected evaluations conducted with regard to national and transnational foresight projects. Furthermore, current attempts to create systemic foresight evaluation frameworks are presented. They comprise general evaluation frameworks meant for the evaluation of different aspects of foresight project execution, with respect to both the process and the results, including foresight impact as one of the evaluated aspects, as well as frameworks devoted strictly to foresight impact evaluation. Scientific work on foresight evaluation models is still in progress, and the authors of the paper indicate the current stage of the models' development.
- Technology foresight
- Foresight evaluation
- Foresight impact
- Systemic foresight evaluation models
The dynamic growth in the importance of the competitiveness and innovativeness of advanced product and process technologies, and the concomitant need for strategic planning of areas for action at the national, regional, and institutional levels, determine the actions of the most developed economies directed at the identification of developmental changes and trends, particularly in the medium and the long term. Due to the growing complexity of relations between science, technology, environment, and society, the analysis of market trends is becoming an increasingly difficult process. Concurrently, determining and shaping the R&D priorities of the future seems much more indispensable for effective investment in science and new technologies, aimed, in the long term, at the improvement of the quality of life of the entire society. Since the cost of scientific and technological development has increased, the need has emerged for systems for the early identification of change tendencies, the evaluation of risk, and the indication of possible opportunities that science and technology development may bring. Therefore, both R&D organisations and enterprises should be equipped with appropriate tools supporting the identification of the directions of future development of advanced technologies, because such tools would help them increase their competitiveness and innovation performance. Foresight constitutes a very effective tool for achieving such objectives.
Foresight means "prescience," "the act or power of foreseeing," "an act of looking forward" (http://dictionary.reference.com/browse/foresight). Before the term "foresight" started to be widely used with reference to future studies in the 1990s, the term "forecasting," developed in the late 1940s and the 1950s, was more common. Other labels such as "anticipation" or "la prospective" were also used. With the emphasis on technology foresight, R&D funding priorities have often been of central concern, as well as the Science, Technology and Innovation system of the country, or specific technological challenges.
One of the most often quoted definitions was the one formulated by Ben Martin in 1995: foresight is "the process involved in systematically attempting to look into the longer-term future of science, technology, the economy and society with the aim of identifying the areas of strategic research and the emerging generic technologies likely to yield the greatest economic and social benefits".
The key elements of foresight concerning prospective studies and policymaking are stressed in numerous definitions, among others by J. F. Coates: "Foresight is a process by which one comes to a fuller understanding of the forces shaping the long-term future which should be taken into account in policy formulation, planning and decision-making… Foresight includes qualitative and quantitative means for monitoring clues and indicators of evolving trends and developments."
"The key notions related to foresight are policy making, public participation, learning, alternatives, complex socio-technical systems, and science-society relationships".
Foresight projects have already been in use for several decades. The first foresight projects date back to the 1960s and 1970s and were introduced in the USA and Japan. Growing interest in national foresight projects could first be noticed in Western Europe in the 1990s. Foresight has won general acclaim and has been performed on a large scale to direct science, technology, and innovation policies in a number of countries and at many different organisational levels, including supra-national, national, sectoral, regional, and corporate (used by both public and private organisations).
Foresight practitioners usually focus on developing methodologies and conducting foresight exercises and do not have an influence on the implementation of their results. Foresight sponsors (either public or private institutions) seldom contract further research aimed at the implementation of the achieved results (examples include the National Foresight Programme "Poland 2020"). Nevertheless, some foresight projects have ended with the practical application of their results, e.g. for shaping national and regional policies on innovation and planning, as in Japan and South Africa, or for launching new research programmes at the national or regional level, as in Great Britain, the Czech Republic, and Poland (e.g., [9–11]). Additionally, the results of foresight exercises were applied to introduce changes in existing project financing mechanisms, e.g. in Germany and France [12–14].
Although foresight is presently seen as a well-established tool by policy makers and managers, it has not been systematically evaluated as an instrument of science and innovation policy. The first attempts to evaluate foresight projects were undertaken in the late 1990s. Due to the role of foresight projects in creating long-term strategies, as well as the fact that foresight projects are instruments of public policy which consume time as well as human and financial resources, it seems justified that they should undergo evaluation.
Research on foresight evaluation has been carried out in the recent past, in a fragmentary way, and by a limited number of foresight researchers. The aspects of foresight evaluation are described mainly by M. van der Steen and P. van der Duin, K. Cuhls, R. Popper, L. Georghiou, I. Miles, M. Keenan, S.-S. Li, M.-H. Kang, L.-C. Lee, M. Butter, F. Brandes, P. Destatte, L. Georghiou and M. Keenan, and A. Havas et al. The subjects of research papers related to foresight evaluation comprise mostly factors of foresight success, areas of foresight impact, and different aspects of the foresight process. Evaluation is "the activity that consists simply in the gathering and combining of performance data with a weighted set of goal scales to yield either comparative or numerical ratings." Evaluation concerns different kinds of undertakings, including research programmes [28–30] and foresight studies, which are implemented in the form of projects or programmes.
Taking into account the time of conducting the evaluation, the following types can be distinguished: ex-ante (conducted before launching a project, aimed at supporting the decision whether the project should be launched), mid-term/on-going (conducted in the course of project execution, focused on the project's progress and problems), and ex-post and follow-up (conducted at the end of the project or a few years after its close-down, respectively, focused on final achievements and results) [33, 34].
Evaluation is mainly executed with the use of standard criteria comprising: relevance, effectiveness, efficiency, appropriateness, utility, impact, complementarity, complexity, and sustainability. Three basic tests can be applied in a generalised evaluation framework for foresight projects: accountability (efficiency of the activities conducted), justification (foresight effects), and learning (ways of improving the foresight process). The impact of foresight activities is the principal indicator of foresight evaluation. This is stressed by numerous authors of recent foresight literature (among others J. Smith, A. Havas et al., and L. Georghiou and M. Keenan).
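Scriven's notion of evaluation quoted earlier, combining performance data with a weighted set of goal scales into a numerical rating, can be illustrated with a minimal sketch. The criterion names are taken from the standard criteria listed above, but the scores and weights are hypothetical examples chosen purely for illustration; none of the cited evaluations prescribes this particular calculation:

```python
# Purely illustrative sketch of a weighted numerical rating, in the
# spirit of Scriven's definition of evaluation. The criteria, scores
# (0-5 scale), and weights below are hypothetical examples.

def weighted_rating(scores, weights):
    """Combine per-criterion scores with a weighted set of goal scales."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same criteria")
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

scores = {"relevance": 4, "effectiveness": 3, "efficiency": 5, "impact": 2}
weights = {"relevance": 0.2, "effectiveness": 0.3, "efficiency": 0.2, "impact": 0.3}

print(round(weighted_rating(scores, weights), 2))  # prints 3.3
```

Such quantitative aggregation is, of course, only one side of evaluation; many foresight evaluation measures are qualitative and resist this kind of scoring.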
At the same time, measuring impact has been identified in the foresight literature as difficult. Problems with measuring impact, including the time lag between a foresight project and the occurrence of its results, as well as the possibility of both direct and indirect impact, were stressed by C. Cagnin et al., J. Calof and J. Smith, A. Havas et al., and others. Furthermore, it was stressed that the impact of foresight depends on its relevance to major problems faced by society, as well as on foresight timing and the quality of the achieved results.
Despite the importance of the evaluation of foresight project results and activities that have been undertaken in this area, the development of a coherent and agreed framework for foresight impact evaluation has progressed very slowly.
Because foresight projects are carried out in specific macro- and microeconomic conditions, numerous factors determine their execution and the effects they bring. Therefore, several criteria that can be used for the evaluation of such undertakings can be identified. Some attempts to evaluate foresight projects have been undertaken, and selected examples concerning the evaluation of foresight projects at the national and transnational levels are presented in this article. The main rationale for selecting the case studies was whether the aspect of foresight impact was taken into account in the course of their evaluation. All studies, apart from one case, concern ex-post evaluation.
Selected examples of the evaluated national and transnational foresight projects (the dates of project realisation and of project evaluation are given in the respective case study descriptions):
- Technology Foresight Programme (TEP)
- UK Foresight Programme
- Colombian Technology Foresight Programme (CTFP)
- Foresight activities implemented by the EU indirect actions under FP5 and FP6 and managed by unit K2 in DG Research of the European Commission
- Anticipation and Foresight Programme
Each case study is described with respect to:
- The scope of evaluation,
- The methods of evaluation, and
- The results of evaluation.
The evaluation of the German Futur programme pointed out:
- The over-complicated structure of the process and the under-engagement of the foresight sponsor (BMBF) as disadvantages of Futur, and
- The high efficiency in the development of leading visions and the special fund assigned by BMBF to execute interdisciplinary and interdepartmental research projects in the priority areas as advantages of Futur.
The Hungarian Technology Foresight Programme (TEP), executed in the 1997–2000 period, was evaluated in 2004. The evaluation covered two issues: assessing the degree to which the aims were achieved and developing recommendations for future foresight projects in Hungary. As in Germany, the evaluation was carried out by an international panel of evaluation experts led by the Manchester Institute of Innovation Research. The main evaluation methods included questionnaires, interviews, and document analyses. The questionnaire was answered by ca. 60 TEP experts, who assessed such issues as the TEP methodology, the TEP organisational structure, and the TEP results and impacts. There were two main messages from the evaluation of the TEP programme: (1) The results of the Hungarian national foresight programme have not been implemented in science and technology policymaking (disadvantage). (2) The TEP participants (representatives of the science, industry, and public administration communities) changed their attitude towards thinking about the future in favour of more complex and interdisciplinary approaches (advantage). In general, stakeholders support the idea of launching the next national foresight programme (advantage), but, at this time, there is no such support at the governmental (political) level (disadvantage).
The United Kingdom Foresight Programme executed in 2002 was evaluated in 2006 by the Manchester Institute of Innovation Research; the evaluation addressed the impact of the Programme and its constituent projects, its cost-effectiveness, and its management. The main instruments of the evaluation were interviews with 8 foresight team members and 28 stakeholders. Interviewees were asked to assess the impact of UK Foresight (its immediate outputs and influence on medium- to long-term policy making) and the effectiveness of process management. Additional methods included a discussion forum, web-based consultations, and benchmarking (the comparison of the UK Foresight Programme with national foresight exercises executed in Sweden, Denmark, Germany, Japan, Spain, and France).
The overall conclusion was that the UK Foresight Programme achieved its objectives of identifying ways in which future science and technology could address future challenges for society and of identifying potential opportunities. All projects were successful in mobilising diverse groups of specialists (senior policymakers, scientists, business representatives) to work in a multidisciplinary framework and across disciplinary boundaries. The evaluation report stated that such mobilisation could probably not have been achieved by conventional research programmes. With regard to cost-effectiveness, it was underlined that the approach adopted in the overall process was "fit for purpose," delivered high-quality outputs, and offered good value for money. However, at the same time, it was emphasised that some projects were moderately under-resourced and that, as a result, the most costly ones had the greatest impact. According to the evaluation team, the management structure also needed improvement: there was an imbalance between the supply of projects and the demand, resulting in the under-exploitation of foresight potential. A modest expansion was recommended, with one more project undertaken each year and a commensurate expansion of management and executive resources. To sum up, UK Foresight created a process for original thinking in government, particularly for the application of science-based evidence and foresight techniques to policy issues.
The evaluation of the Colombian Technology Foresight Programme (CTFP) focused on the second cycle (2005–08) of the CTFP. It was also executed by the Manchester Institute of Innovation Research, in 2010. The methods of evaluation were similar to the instruments applied in the case studies described above and included interviews, document analyses, international evaluation panels, online stakeholder surveys, and benchmarking of the CTFP practices against Europe and South America. The twenty evaluation criteria included, among others, measures for the assessment of the management process, the level of achievement of objectives, cost-effectiveness, the efficiency of methods, the engagement of participants, the impact of the results on public and private beneficiaries and stakeholders, and the level of quality and novelty of the outcomes. On the whole, the CTFP objectives were assessed as appropriate and successfully achieved. The evaluation report concluded that the CTFP introduced a wide and effective portfolio of forward-looking approaches and tools. With regard to management and cost-effectiveness, it was stated that the total cost was too low for the amount of work carried out by the Programme. Although the scale of the programme showed excellent value for money, it was recommended that future programmes find ways either to increase funding or to reduce the number of simultaneous projects. The evaluation team recommended that wider participation of the general public in the CTFP should be encouraged. In addition, the composition of the expert panels could be improved, because there was an overrepresentation of figures from administration. Nevertheless, the CTFP positively influenced the activities of key governmental programmes and agencies that define Science and Technology policies and research agendas. The most significant influence of the CTFP on public policy was the work on the STI Vision 2019, used for the preparation of the National STI Plan 2019.
The evaluation of activities within the UNESCO Anticipation and Foresight programme undertaken over the period 1999–2005 was part of the Evaluation Plan and was aimed at the assessment of the results and impact of UNESCO's activities in this area. The evaluation, conducted in 2005, focused on analysing the results achieved and the lessons learned in the course of the execution of the Anticipation and Foresight programme, as well as on orienting future activities within the programme. The evaluation criteria included relevance, effectiveness, efficiency, and impact. The evaluation was carried out with the use of the following methods: desk research (document analysis) and semi-structured in-depth phone and face-to-face interviews with UNESCO staff and a variety of stakeholders.
The main conclusions of the evaluation were as follows:
- There was no evidence that foresight activities had influenced strategy formulation and policy design in UNESCO.
- Suggestions were made on how the programme and its impact might be improved, among others by taking into account Member States' needs in terms of foresight on UNESCO-related issues and by applying more ways of disseminating foresight results.
The evaluation of the foresight activities implemented under FP5 and FP6 led to the following conclusions:
- The products and results of EC foresight activities were assessed as satisfactory to highly satisfactory.
- Direct impact on decision-making concerning Science and Technology, both in the Member States and in the European Commission, could not be easily identified; however, the evaluators assume the occurrence of indirect impact.
- Foresight should be treated as an instrument to be applied in the Science and Technology decision-making process at different levels: the European Commission, national governments, and independent R&D organisations, including private industry.
- The relevance and benefits of applying foresight as a policy tool should be promoted to a greater extent.
The analysed evaluations of foresight projects were similar with regard to:
- The coverage of evaluation (foresight process, its outcomes and impact),
- The organisational structure (evaluation expert panel), and
- Methodology (the main methods: interviews and document analysis).
The applied evaluation approaches should be extended, because they did not, for example, fully succeed in measuring the impacts of foresight results on project beneficiaries and stakeholders. In the authors' opinion, assessing the impact of foresight undertakings is extremely important for decision-makers and thus remains an essential challenge for the foresight research community.
The importance of the issue of foresight evaluation, together with the practical attempts to evaluate this type of projects, indicates the need to develop systemic foresight evaluation models. Such trials have been undertaken by researchers and comprise, among others, a foresight evaluation framework by P. Destatte, a framework of the foresight evaluation process by E. A. Makarova and A. V. Sokolova, an evaluation framework by S.-S. Li, M.-H. Kang, and L.-C. Lee, an input–output–impact schema of foresight by K. A. Piirainen, R. A. Gonzales, and J. Bragge, a Foresight Impact Schema by R. Johnston, and an instrument for measuring the impact of foresight by R. Johnston and J. Smith.
The proposed models and frameworks for foresight evaluation take into account different aspects of foresight project execution, including its impact. Two of them, the models by R. Johnston and by R. Johnston and J. Smith, are devoted strictly to the evaluation of impact.
The assessment criteria change along with the objectives of individual foresight projects. The foresight evaluation framework by P. Destatte is based on two dimensions: the process and the outcomes. The following foresight evaluation criteria, concerning different elements of the model, are proposed: effectiveness, efficiency, utility, relevance and appropriateness, sustainability, fairness, and behavioural additionality. In the model, impact is understood as the consequences of the foresight exercise for the addressees after the achievement of the strategy. The author recognises direct impact (on the direct addressees) and indirect impact (on other winning and losing addressees). He also distinguishes long-term impacts, which can be called "sustainable impacts."
Examples of such sustainable impacts include:
- Moving towards interdisciplinary thinking,
- Establishing a long-term perspective, and
- Increasing the level of foresight culture and cognition.
The input–output–impact schema of foresight developed by K. A. Piirainen, R. A. Gonzales, and J. Bragge can be used from the phase of foresight planning (ex-ante evaluation) to the evaluation of the finished project (ex-post evaluation). It is advisable to use the framework from the design phase up to completion, because this supports continuous evaluation throughout the entire lifecycle of the project and covers its results. The developed schema takes into account three levels of evaluation: the utility and delivery level (fulfilment of objectives, the quality of the process, and the content and delivery of the foresight project as important factors in creating the impact of foresight), the technical level (the technical quality of execution, data quality, and the sustainability of methods), and the ethical level (the ethical dimension of future studies). The proposed framework is quite detailed and contains multiple perspectives. However, the authors of the schema assume that, for individual foresight projects, relevant and specific evaluation criteria and measures should be developed.
The evaluation questions proposed in the schema include the following:
- Was the perspective the one that was needed?
- Did the analysis prove sufficient to support answering the question/problem?
- What limitations did the analysis uncover?
- Did the prospection answer the question?
- Were the results satisfying to the stakeholders?
- Was it engaging and inspiring enough to have an impact on the imagination of the readers?
- Were the strategies feasible, and were they based on the foresight?
- Were they implemented?
- Were the forecasts on the right level of analysis and depth to support strategising?
Although the proposed framework can be treated as a useful tool for evaluation, its authors indicate that there is a lack of evidence confirming its applicability; thus, it is necessary to evaluate the evaluation framework itself.
Apart from general frameworks for foresight evaluation, there are also models devoted strictly to foresight impact evaluation. In such cases, the selection of evaluation criteria depends mostly on the types of impacts.
A framework to classify the targeted and/or unintended impacts of foresight activities comprises:
• Increased recognition of a topic area
• Awareness of science, technology and innovation among players, creating a database
• Awareness of systemic character
• Training of participants in foresight matters
• New combinations of experts and stakeholders, shared understanding (knowledge network)
• Articulation of joint visions of the future, establishing longer-term perspectives
• Integration of able new actors in the community
• Making hidden agendas and objectives explicit
• Devising recommendations and identifying options for action
• Activating and supporting fast policy learning and policy unlearning processes
• Identifying hidden obstacles to the introduction of more informed, transparent, open participatory processes to governance
• Influence on the (research/policy) agendas of actors, both public and private (as revealed, for instance, in policy strategies and programmes)
• Incorporation of forward-looking elements in organisations' internal procedures
• Effective actions taken
• Formation of action networks
• Creation of follow-up activities
• Adoption of foresight results in the research and teaching agendas of organisations; foresight spin-off activities in various disciplines
• Improved coherence of policies
• Cultural changes towards longer-term, holistic, and systemic thinking
Four types of foresight impact are distinguished:
- Awareness raising – increasing the understanding of target audiences with regard to the need, value, etc. of foresight;
- Informing – providing information that can be useful for the improvement of planning and decision-making;
- Enabling – providing tools for the better management of uncertainty associated with the future; and
- Influencing – shaping the policy, strategies, research priorities, etc.
The proposed measures of foresight impact include:
- The number of departments influenced by a particular report;
- The extent of influence (e.g. major, moderate, and minor) reported;
- The number and scale of follow-on and spin-off foresight projects;
- National comparative performance in high-value-added goods and services;
- The contribution of research to major national and international issues; and
- Public confidence in research.
The Foresight Impact Schema has been applied in two case studies (UK Foresight Programme, foresight project on irrigated agriculture in Australia).
The instrument comprises the following groups of measures:
- The key role of foresight and client impact;
- General benefits from foresight for those directly involved;
- Critical success factors for foresight process designers and planners;
- Meta-measures connected primarily with training and skills development and secondarily with risk management;
- Pre-policy measures (design and planning);
- Policy implementation measures (policy support impact);
- Post-policy measures (implementation impact).
The instrument was pilot-tested on two Canadian foresight programmes.
Since there are many methodologies for conducting foresight, there are also various frameworks recommended for their evaluation. The presented general evaluation frameworks can be applied to evaluating the process and results of foresight projects, taking their impact into account as one of the aspects. On the other hand, the frameworks devoted strictly to evaluating foresight impact assume the evaluation of different aspects of impact and propose possible measures. However, proving the impact of foresight on policy using the measures available in the present literature is very difficult. Many of the proposed measures have a qualitative character and are thus difficult to collect, analyse, and derive specific conclusions from. The authors of the models indicate their general character and the need to adjust them to the specificity of particular foresight projects. More extensive empirical verification of the proposed frameworks should result in improving their theoretical assumptions.
As shown in the case studies analysed in the paper, initiatives undertaken in order to evaluate national and transnational foresight projects focused mainly on the evaluation of the organisational and methodological aspects of the programmes and on whether the planned objectives were achieved. However, proving and measuring the value and impact of foresight studies is presently becoming the critical challenge. The challenge has been recognised by some researchers who have taken up the effort to develop models and frameworks for the evaluation of foresight projects, including the aspect of foresight impact. Some general foresight evaluation frameworks treat impact as one of several evaluated aspects, like those developed by K. A. Piirainen et al., E. A. Makarova and A. V. Sokolova, S.-S. Li et al., and P. Destatte. Others are devoted exclusively to foresight impact evaluation (by R. Johnston, and by R. Johnston and J. Smith). As shown in the literature review presented in this paper, the scientific work on foresight evaluation models is still in progress.
The interests of the authors of the paper are focused mainly on foresight impact as one of the principal aspects of foresight evaluation, although they are aware of the numerous other objectives and aspects of foresight evaluation. The authors intend to continue their analyses and, taking into account the possible types of impacts of foresight activities, to propose sets of measures that can be applied for their evaluation. It is a complex and challenging task, but a definitely needed one, especially in times of economic crisis, when public funds should be used most efficiently and selectively with regard to publicly funded research.
This article is published under license to BioMed Central Ltd. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
- Martin BR (2010) The origins of the concept of ‘foresight’ in science and technology: An insider’s perspective. Technological Forecasting and Social Change 77:1438–1447View ArticleGoogle Scholar
- Miles I (2008) Introduction to technology foresight, Paper for UNIDO Workshop “Technology Foresight for Practitioners (Roadmapping)” Prague, NovemberGoogle Scholar
- Martin BR (1995) Foresight in science and technology. Technology Analysis & Strategic Management 7:139–168View ArticleGoogle Scholar
- Coates JF (1985) Foresight in federal government policymaking, Futures Research Quarterly, pp. 29–53Google Scholar
- Cagnin C, Keenan M, Johnston R, Scopolo F, Barré R (eds) (2008) Future-Oriented Technology Analysis. Strategic Intelligence for an Innovative Economy. Springer, GermanyGoogle Scholar
- United Nations Industrial Development Organisation (2005) Technology foresight manual, Vol. 1–2, ViennaGoogle Scholar
- Czaplicka-Kolarz K (2011) From National Polish Foresight (NPF 2020) towards its implementation - to foster smart growth, European Forum on Forward Looking Activities (EFFLA), Second meeting, Warsaw, 16–17 November
- Calof J, Smith JE (2012) Foresight impacts from around the world: a special issue. Foresight 14(1):5–14
- Keenan M, Marvin S, Winters C (2002) Mobilising the regional foresight potential for an enlarged European Union, United Kingdom Country Report, Brussels
- Klusacek K (2003) Technology foresight in the Czech Republic, Discussion Paper Series, article 03–15
- Mazurkiewicz A, Sacio-Szymańska A, Poteralska B (2012) Setting priority R&D directions for the strategic research institutes, The XXIII ISPIM Conference – Action for Innovation: Innovating from Experience, Barcelona, Spain, 17–20 June
- Giesecke S (2007) Futur – the German research dialogue, Foresight Brief No. 1, The European Foresight Monitoring Network (EFMN)
- Cadiou Y (2003) From key-technologies to key-competencies. Scientific and technological competencies at the regional level related to the French "Key-Technologies" exercises, The Second International Conference on Technology Foresight, Tokyo
- Hoffmann B, Rader M (2003) Review and analysis of national foresight. Case study France – technologies clés 2005, Forschungszentrum Karlsruhe GmbH in der Helmholtz-Gemeinschaft, Institut für Technikfolgenabschätzung und Systemanalyse
- Georghiou L (2003) Evaluating foresight and lessons for its future impact, paper presented at The Second International Conference on Technology Foresight, 27–28 February, Tokyo
- Makarova EA, Sokolova AV (2012) Foresight evaluation: lessons from project management. Basic Research Program Working Papers. National Research University Higher School of Economics
- Van der Steen M, Van der Duin P (2012) Learning ahead of time: how evaluation of foresight may add to increased trust, organizational learning and future oriented policy and strategy. Futures 44(5) (Special issue: Looking back on looking forward)
- Cuhls K (2011) Foresight as ex-ante evaluation – the case of the BMBF foresight process, Fraunhofer ISI
- Popper R, Georghiou L, Miles I, Keenan M (2010) Evaluating foresight: fully-fledged evaluation of the Colombian Technology Foresight Programme (CTFP). Universidad del Valle, Cali, ISBN 978-958-670-842-5, http://community.iknowfutures.eu/pg/file/popper/view/2204/evaluating-foresight-fullyfledged-evaluation-of-ctfp
- Li S-S, Kang M-H, Lee L-C (2009) Developing the evaluation framework of technology foresight program: lesson learned from European countries, Atlanta Conference on Science and Innovation Policy, Atlanta
- Butter M, Brandes F, Keenan M, Popper R (2008) Evaluating foresight: an introduction to the European Foresight Monitoring Network. Foresight 10(6):3–15
- Destatte P (2007) Evaluation of foresight: how to take long-term impacts into consideration? FOR-LEARN Mutual Learning Workshop Evaluation of Foresight, Brussels, IPTS-DG RTD, http://forlearn.jrc.ec.europa.eu/guide/6_follow-up/documents/0709%20Destatte%20Evaluation%20of%20Foresight.pdf
- Georghiou L (2008) Advances in the organisation of foresight and the evaluation of foresight, The University of Manchester
- Georghiou L, Keenan M (2006) Evaluation of national foresight activities: assessing rationale, process and impact. Technological Forecasting & Social Change 73:761–777
- Havas A, Schartinger D, Weber M (2010) The impact of foresight on innovation policy-making: recent experiences and future perspectives. Research Evaluation 19(2):91–104
- Rossi PH, Freeman HE, Lipsey MW (2004) Evaluation: a systematic approach, 7th edn. Sage Publications, Thousand Oaks
- Scriven M (1967) The methodology of evaluation. Rand McNally, Chicago
- Cronbach L, Ambron S, Dornbusch S, Hess R, Hornik R, Phillips D, Walker D, Weiner S (1980) Towards reform of program evaluation. Jossey-Bass, San Francisco, CA
- Patton M (1997) Utilization-focused evaluation: the new century text. Sage Publications, London
- Owen JM (2006) Program evaluation: forms and approaches. Allen & Unwin, Australia
- European Commission (2013) Guidelines for the ex-ante evaluation of 2014–2020 EMFF OPs. September
- EVALSED (2008) The resource for the evaluation of socio-economic development: guide. European Commission
- European Commission (1997) Evaluating EU expenditure programmes: a guide. Ex post and intermediate evaluation. XIX/02 – Budgetary overview and evaluation, Directorate-General XIX – Budgets
- ASTD (2008) Measurement & evaluation: essentials for measuring training success. Tips, tools and intelligence for trainers. The American Society for Training and Development
- Haber A (2007) Ewaluacja ex-post. Teoria i praktyka badawcza [Ex-post evaluation: theory and research practice]. Polska Agencja Rozwoju Przedsiębiorczości, Warsaw
- Smith J (2012) Measuring foresight impact, EFP Brief No. 249
- Evaluation of the Hungarian Technology Foresight Programme (TEP) (2004) Report of an International Panel, May. www.nih.gov.hu/english/technology-foresight/evaluation-of-the-080519
- Evaluation of the United Kingdom Foresight Programme (2006) Final Report, PREST, Manchester Business School, University of Manchester, http://www.techforesight.ca/InternationalForesightReports/UKForesightProgramEvaluation2006.pdf
- De Laat B, Dani S (2006) Evaluation of the UNESCO Anticipation and Foresight Programme, Internal Oversight Service Evaluation Section, UNESCO, July
- Mid-Term Assessment of Foresight Activities (2004) Report to Directorate General Research of the European Commission, October
- Piirainen KA, Gonzales RA, Bragge J (2012) A systemic evaluation framework for futures research. Futures 44
- Johnston R (2012) Developing the capacity to assess the impact of foresight. Foresight 14(1)
- Miles I (2012) Dynamic foresight evaluation. Foresight 14(1)
- Da Costa O, Warnke P, Cagnin C, Scapolo F (2008) The impact of foresight on policy-making: insights from the FORLEARN mutual learning process. Technology Analysis and Strategic Management 20(3):369–387
- Havas A, Schartinger D, Weber M (2007) Experiences and practices of technology foresight in the European region, Technology Foresight Summit
- Georghiou L, Keenan M (2008) Evaluation and impact of foresight. In: Georghiou L, Cassingena Harper J, Keenan M, Miles I, Popper R (eds) The handbook of technology foresight: concepts and practices. Edward Elgar, Cheltenham
- Ladikas M, Decker M (2004) Assessing the impact of future-oriented technology assessment, paper presented at EU-US Seminar: New Technology Foresight, Forecasting & Assessment Methods, Seville, 13–14 May