Innovation Canada: A Call to Action

5. Program Effectiveness

This chapter addresses the first question in the government's charge to the Panel:
"What federal initiatives are most effective in increasing business research and development (R&D) and facilitating commercially relevant R&D partnerships?"

While the Panel found that Canada is regarded as being among the leaders in program assessment,1 it was concerned to learn, in the course of briefings with federal officials, that the tools are not in place to undertake comparative assessments of the kind contemplated in this question. In the federal framework for program evaluation, "effectiveness" is defined by the Treasury Board of Canada Secretariat (2009) as "the extent to which a program is achieving expected outcomes." There is, however, no common evaluation framework in place for determining relative program effectiveness across departmental lines. As a result, standardized performance and outcome indicators do not exist for the roughly $5 billion in business innovation programs covered by this review, and the supporting information is not retained in a common form or database.

This changes the nature of the advice that the Panel is able to offer. Instead of assessing the relative effectiveness of the 60 programs described in Chapter 3, the Panel is making recommendations that respond to stakeholder issues and concerns and that, if implemented, will establish the missing framework needed to shape a comprehensive and consistent evaluation of R&D program effectiveness going forward.

To establish a context for the Panel's recommendations, the following sections summarize (i) the relevant evaluation machinery already in place in the federal government, (ii) some international experience in respect of the evaluation and comparative assessment of programs that support business innovation and (iii) what the Panel heard from stakeholders regarding the effectiveness (and shortcomings) of innovation support programs in Canada.

Existing Assessment Procedures for Federal Programs

There are several mechanisms in place for assessing federal program expenditures, including audits by the Auditor General, strategic reviews and ongoing program evaluations. Performance assessment has a well-defined role within the government's expenditure management system (EMS) — the overall framework for decision making on spending. In recent years, the EMS has evolved to put greater focus on results. The 2006 Federal Accountability Act requires departments and agencies to review the relevance and effectiveness of their grants and contributions every five years. Budget 2007 announced a new EMS that includes a requirement for spending proposals to clearly define expected results, and for departments to manage against these results and formally evaluate program performance (Department of Finance 2007). Budget 2007 also introduced strategic reviews of departmental expenditures on a four-year cycle to determine whether they are achieving their intended results and are aligned with the government's priorities. Because these strategic reviews are Cabinet documents, their results were not available to the Panel. In Budget 2011, the government announced a strategic and operating review that will assess $80 billion in direct program spending across the federal government in order to achieve at least $4 billion in ongoing annual savings by 2014–15 (Department of Finance 2011).

Assessment of Program "Effectiveness"

Effectiveness, as noted earlier, is defined by the Treasury Board Secretariat as simply "the extent to which a program is achieving expected outcomes." Based on the Panel's assessment of programs, it is clear that individual programs' "expected outcomes" are as varied as the programs themselves, and to a large extent are incommensurable. For example, the Scientific Research and Experimental Development (SR&ED) program, because it is delivered through the tax system, does not have a results-based accountability framework and its performance is not evaluated on a regular basis. However, the Department of Finance has occasionally undertaken economic assessments of the program from a net benefit perspective — that is, estimating the economy-wide benefits and netting out the estimated costs of administration, compliance and the imposition of taxes (for the most recent assessment, see Parsons and Phillips 2007). Work undertaken for this review applied the net public benefit methodology to a sample of other programs but, for reasons outlined in Chapter 6, the Panel has concluded that, at this stage, the method is not sufficiently precise or well developed for general use in assessing the comparative effectiveness of programs.
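
As a stylized illustration of the net benefit logic described above (this is a simplified sketch of the general approach, not the specification used by Parsons and Phillips 2007, and all figures in the example are placeholders), the calculation can be expressed as follows:

```python
# Stylized net public benefit of an R&D support program. This is a simplified
# illustration of the approach described in the text, not the specification
# used in Parsons and Phillips (2007); all parameter values are placeholders.

def net_public_benefit(spillover_benefits, program_cost,
                       admin_costs, compliance_costs,
                       marginal_excess_burden=0.2):
    """Economy-wide benefits net of delivery and financing costs.

    marginal_excess_burden is an assumed efficiency cost of raising an
    additional dollar of tax revenue to finance the program.
    """
    financing_cost = marginal_excess_burden * program_cost
    return spillover_benefits - (admin_costs + compliance_costs + financing_cost)

# Hypothetical example (all dollar figures are placeholders):
print(net_public_benefit(spillover_benefits=1.4e9, program_cost=1.0e9,
                         admin_costs=0.05e9, compliance_costs=0.15e9))
```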

The more sector-focussed programs, such as the SD Tech Fund and FPInnovations, have intended final outcomes that are relatively narrow in scope — for example, increased market growth, sector competitiveness and specific environmental benefits. On the other hand, many programs include outcome objectives at the scale of the entire economy — for example, productivity growth or the overall prosperity of Canadians. Such ultimate impacts of individual programs are effectively impossible to measure, since the specific contribution of the program in question can rarely be isolated from the myriad factors that affect all macroeconomic outcomes.

Intermediate outcomes, which occur closer to a program's point of influence, are obviously easier to measure and attribution is stronger, although almost never definitive. The Panel has observed that intermediate outcomes are typically identified according to each program's specific objectives. Internship programs, for example, have as desired outcomes increased job opportunities in the business sector for graduates. Other programs seek to increase partnerships and collaboration, establish networks, foster an entrepreneurial culture, develop and retain researchers in Canada, or advance the commercialization of new products and processes. All of these contribute in varying degrees to business innovation, and thus to business competitiveness and prosperity for Canadians, but the linkage to such ultimately desired outcomes is usually indirect and long term. In the end, the linkage must be assessed based on a combination of econometric analysis, anecdote, accumulated experience and intuitive plausibility.

The diversity of outcomes in the portfolio of R&D programs in this review further complicates comparison of their relative impact on business innovation and commercially relevant R&D partnerships. It is nevertheless possible to conceive of common intermediate outcomes for similar program types, and potentially to evaluate the comparative effectiveness of those programs in achieving the common outcomes. This is explained later in this chapter in Recommendation 1.6.
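
As a purely hypothetical sketch of what common intermediate outcomes for similar program types might look like (the program groupings and indicator names below are illustrative only; they are not measures proposed in this report or drawn from any existing federal framework), a shared evaluation framework could record a small set of standardized indicators for each program type:

```python
# Hypothetical standardized intermediate-outcome indicators by program type.
# The groupings and indicator names are illustrative only.
COMMON_INDICATORS = {
    "internship programs": [
        "graduates placed in business-sector R&D jobs",
        "share of interns retained by host firms after 12 months",
    ],
    "collaboration and network programs": [
        "new firm-institution R&D partnerships formed",
        "industry cash and in-kind contributions leveraged",
    ],
    "direct funding programs": [
        "incremental business R&D spending reported by recipients",
        "new products or processes brought to market",
    ],
}

def record_outcomes(program_type, values):
    """Pair a program type's common indicators with measured values."""
    return dict(zip(COMMON_INDICATORS[program_type], values))

# Example: two internship programs reported against the same indicators,
# which is what would make a comparative assessment possible.
print(record_outcomes("internship programs", [120, 0.45]))
print(record_outcomes("internship programs", [85, 0.60]))
```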

International Practices in Comparative Program Assessment

The Panel met with government representatives from the United Kingdom (UK), Germany, the United States, Australia, Singapore, Finland and the Organisation for Economic Co-operation and Development (OECD) to discuss approaches to performance measurement and the comparative review of programs. It concluded that all jurisdictions recognize the importance of performance measurement and regularly evaluate individual programs within specific ministerial accountabilities. However, whole-of-government assessments of innovation programs for comparative effectiveness are either still in development or not contemplated at all. To the extent that comparative assessments are undertaken, they look at the broad performance of innovation systems and policy — for example, the recent study of the Finnish innovation system (Finland 2009) — and not at the relative effectiveness of individual programs.

Other jurisdictions are thus confronted with the same issues as Canada in assessing the value of their innovation support measures. In a forthcoming publication on business innovation policies, the OECD notes that, while growing attention is being paid to evaluation internationally, the overall evaluation record remains "patchy" (OECD forthcoming). The Panel believes that Canada should encourage, through the OECD, focussed collaboration regarding analysis and best practices in the evaluation of innovation policy and associated suites of programs.

Consultations with Stakeholders

In the absence of the data and methodologies needed for objective and consistent assessment of relative program effectiveness, the opinions of stakeholders provide some basis for judgment on comparative effectiveness. For example, programs that have little uptake or awareness by target clients — even though they may be well designed — are not able to have a significant impact on business innovation nationally.

The Panel heard the opinions of domestic stakeholders through (i) consultations, in the form of in-person group sessions held in nine cities across Canada and a call for written submissions that elicited 228 responses, and (ii) a survey of firms conducted by EKOS Research Associates Inc., which generated responses from more than one thousand R&D-performing businesses of varying sizes, spanning a range of sectors and provinces. The full results of this survey will be made available through the Panel's website at www.rd-review.ca.

These activities shed valuable light on aspects of the comparative effectiveness of the programs under review. It must of course be borne in mind that stakeholders, by definition, have vested interests. They are beneficiaries of the programs being discussed. Inevitably, some may be excessively complimentary or critical, depending on individual objectives and experience. Nevertheless, because the Panel's consultations — both in person and via written submissions — were extensive and reasonably representative of the span of interests, it is likely that the most oft-repeated views have good grounding in reality.

Consultations

Three federal programs that support R&D were most often mentioned in the Panel's consultations and survey: SR&ED tax credits, the Industrial Research Assistance Program (IRAP) and, to a lesser extent, the Natural Sciences and Engineering Research Council's (NSERC) suite of business-facing programs in support of internships, networking and collaboration. Some corporate stakeholders called the SR&ED program and IRAP the lifelines that saved their businesses; without them, their companies would have foundered.

The SR&ED program, in view of its scale and scope, drew considerable commentary. Much was positive: the program is seen to encourage new investment in R&D, offset the high cost of exploratory work, directly support operations, generate cash flow, and facilitate access to credit, while leaving the specific choice of R&D activity up to the individual business. At the same time, reflecting the fact that it is the best known of the programs being reviewed, the SR&ED program also drew more critical commentary than any other R&D program. Many stakeholders called the claims process cumbersome, complex and time-consuming. Uncertainties associated with qualification and timing are sometimes so great that the SR&ED program is excluded from R&D investment decisions. Many smaller businesses find the claims process so unwieldy that they are forced to engage SR&ED "consultants," sometimes surrendering significant percentages of their refunds as contingency fees. (These issues, as well as the striking preponderance of SR&ED tax credits in the total mix of business R&D support, are addressed in detail in Chapter 6.)

IRAP was widely praised as an effective, well-run program that provides industry with non-repayable contributions, mentorship and technical business advice. The main criticism is the exhaustion of funds very early in the fiscal year, but this is also one indication of the high level of demand for the program. Some believe that the amounts of IRAP funding awards are too small to be effective and that the application process is excessively difficult for first-time applicants.

A frequently raised issue was the need for more efficient and targeted collaboration between post-secondary institutions and businesses, particularly as it relates to mobilizing academic contributions for commercialization. NSERC's current suite of industry-facing programs is extensive, but is not well known by the population of firms it is meant to support. Those stakeholders who were aware of NSERC programs were generally pleased with them. Others, however, urged that the programs be marketed more widely, because businesses often do not know that they can access R&D through post-secondary institutions. This lack of awareness is clearly related to the small scale of most of NSERC's business-facing programs, and it limits their overall impact. The Panel emphasizes that, although program budgets may be fully subscribed, it is still important for the programs to be widely known in order to attract more applicants. The selection process would then be more competitive, leading to better overall outcomes for any given program budget.

Regarding the supply of graduates and their skills, the Panel heard that businesses require a full spectrum of skills, at all levels, to support their R&D and innovation activities. This is consistent with a recent OECD report on workforce skills and innovation:

As with the broader concept of innovation, there is great diversity in the range of activities undertaken within R&D and, consequently, considerable diversity in the occupational structure of the R&D workforce… The great bulk of business R&D expenditure is devoted to Development, not Research; that is to say, it is directed not at fundamental or basic research, but to improve existing products, services and production methods… Such activities are a key function of trade and technician occupations.

(Toner 2011, pp. 24–25)

Participants in the consultations suggested that greater use of co-op placements and internships would improve the market readiness of higher education graduates. To this end, they called for more support for such programs and for broader eligibility that includes students at colleges and polytechnics. They also endorsed wider participation in programs that blend science and/or technology skills with management training.

Many provincial collaboration programs were endorsed in the consultations, including the Ontario Network of Excellence, the Colleges Ontario Network for Industry Innovation (CONII), MaRS Innovation centres, the College Centres for the Transfer of Technologies of the Quebec-based Cégeps, BCIC New Ventures, and Alberta Innovates. While stakeholders urged the federal government to include elements of these programs in its own R&D supports, they cautioned against duplication of existing provincial initiatives. Instead, they said, the federal government should consider whether its programs could become accessible to more firms if they were delivered through existing provincial and local organizations and, to this end, they encouraged greater collaboration between the two orders of government.

In addition to feedback on specific programs, stakeholders also commented on the program landscape as a whole. Some frequent themes in this regard were that (i) many programs are not known to as many businesses as they should be and (ii) businesses that learn of the existence of programs are often bewildered by the array of choices across many departments and agencies. Stakeholders called for programs to be consolidated and delivered by fewer organizations. Another oft-raised comment was that the government's business innovation support is heavily oriented toward R&D and is in fact dominated by the SR&ED tax credit. Consequently, there is a need to provide complementary support for other kinds of activities along the continuum from idea to commercial success.

Survey of R&D-Performing Firms

The survey questionnaire enabled the Panel to obtain more comprehensive feedback from R&D-performing firms (see also Box 2.4 in Chapter 2). The findings emerging from this work provide an additional layer of insight.

Surveyed firms that performed in-house R&D were most likely to have R&D employees who held undergraduate degrees (62 percent of firms in the sample) or graduate degrees (59 percent) or who were technicians or technologists (52 percent). By comparison, only 18 percent of responding firms had PhD holders working as R&D employees (Figure 5.1).

Of the 1009 R&D-performing firms surveyed, about one-third (331 respondents) reported never having attempted to access a federal program, including tax credits, that supports business or commercially oriented R&D. When asked to select, from a prompted list, the reasons why they had never participated, more than half said they were not aware of any available programs. Slightly more than a third claimed the application process was too burdensome. All other responses were mentioned much less frequently (Figure 5.2).

Figure 5.1 Types of R&D Performers Employed by Firm

Two-thirds of the 1009 firms surveyed reported having used a federal R&D program sometime in the past and 72 percent of those (488 firms) had accessed such a program in the past three years. Of the 678 firms that were past or present program users, 58 percent stated that their firms expended more on R&D as a result of receiving federal support. This included 79 percent of IRAP users and 71 percent of SR&ED program users — further evidence of the impact of these two large-scale programs on the propensity of companies to undertake R&D. (This of course still does not permit conclusions about the comparative effectiveness of the SR&ED program and IRAP, nor objective quantification of their net public benefit.)
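
For readers tracking the arithmetic behind these subgroup figures (all counts and percentages are taken from the survey results quoted above; the rounding is ours), the relationships can be verified as follows:

```python
# Illustrative check of the survey arithmetic quoted above. The counts come
# from the EKOS survey as reported in the text; rounding is approximate.
total_firms = 1009           # R&D-performing firms surveyed
never_accessed = 331         # about one-third never tried a federal program
past_or_present_users = 678  # firms reporting past or present program use
recent_users = 488           # accessed a program in the past three years

print(never_accessed + past_or_present_users)  # 1009, the full sample
print(past_or_present_users / total_firms)     # ~0.67, i.e. two-thirds
print(recent_users / past_or_present_users)    # ~0.72, i.e. 72 percent
print(round(0.58 * past_or_present_users))     # ~393 firms spending more on R&D
```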

Among the 488 survey respondents that had accessed a federal R&D program in the past three years, 73 percent reported using the SR&ED program and 17 percent IRAP. No other program was identified (unprompted) by more than 1 percent of the companies (Figure 5.3). This strongly suggests that other federal programs are not well known to firms. Moreover, it implies that the survey responses to questions about firms' experiences in using federal R&D programs for the most part concern the SR&ED program and/or IRAP, with the SR&ED program the more prevalent by a ratio of more than four to one.

Bearing this significant qualification in mind, recent program users were also asked to rate their satisfaction in relation to ten aspects of the programs they had used (Figure 5.4). Generally speaking, the surveyed firms expressed high levels of satisfaction with federal R&D support programs — in effect, the SR&ED program and IRAP. More than seven in ten were satisfied with the overall quality of program delivery and the form of support. Roughly two-thirds were also satisfied with the conditions on eligibility, eligible expenses and length of time between decision and receipt of funds. At the bottom of the list, although still garnering majority satisfaction ratings, were the reporting requirements, the length of time between application and decision, and the appropriateness of the selection process.

Figure 5.2 Reasons for Not Participating in R&D Programs

Drawing on the overview of the 60 programs in Chapter 3, the discussion above of the existing "infrastructure" for evaluating program effectiveness, and the stakeholder views on program effectiveness gathered through the consultations and survey, the Panel believes that change is required to improve the effectiveness and impact of federal programs in support of business innovation in Canada.

Specifically, the rest of this chapter sets out a comprehensive and transformative agenda of recommendations that will, over time, greatly improve the impact of federal direct support programs for business innovation. Of foremost importance, the Government of Canada must build a focal point for direct support programming — a new funding and delivery agency for business innovation support.

Figure 5.3 Program from Which Funding Received
Figure 5.4 Satisfaction with Various Aspects of the Program

1 More specifically, the net public benefit evaluation of the SR&ED tax credit by the Department of Finance in 2007 (Parsons and Phillips 2007) is considered to be state-of-the-art for program assessments of its type.