
Baldrige Excellence Framework: Examiners’ Frequently Asked Questions

The questions below are the ones most frequently asked by members of the Baldrige Board of Examiners.
Please submit additional questions to baldrige@nist.gov.

Comment Writing

“NERD” (Nugget—Examples—Relevance—Done) is a handy way to remember the three elements (NER) to include in a comment. But writing every comment in the NER order may (1) put the applicant to sleep and (2) more important, make your comments less effective. Consider whether the order NRE; R tucked inside the N, then E; or even E tucked inside the N, followed by R, makes for a stronger message. See Baldrige case study feedback reports for examples. And you aren’t necessarily Done! Read the comment for effectiveness and clarity.
Starting an OFI comment with a long “although/while” statement focused on a strength can send a mixed message to the applicant (“Is this a strength or an OFI?”). Instead, we recommend making the comment immediately actionable by pinpointing the OFI. For example, instead of “The applicant uses comparative data to assess its performance. However, it is not evident how organizations and performance dimensions are selected for comparisons,” we recommend something like this: “It is not evident how the applicant selects organizations and performance dimensions to include in the comparative data it uses.” That way, the OFI is upfront and clear, and the other information is simply the background for the OFI (if it’s needed at all).
We ask you to write "around 6" comments to focus the organization on its most important strengths and OFIs without overwhelming the organization. (This is different from the practice in some Baldrige-based programs, which may ask for as few as 3 and as many as 12 comments.)
Baldrige feedback at the national level is actionable, but nonprescriptive and nonpredictive. Use “may” instead of “will”: “Doing X may help the organization do Y” or “Not doing X may result in Y,” or something similar.
No. A mathematical approach doesn't take the importance of results into account. Instead, look at the results that are relevant to the area to address you are writing about and to the organization's key factors, and make a holistic determination of how well they respond to the requirement and the evaluation factors (LeTCI).

Criteria Generally

When the Criteria glossary doesn’t include a term, that means the Criteria usage doesn’t significantly differ from the common, dictionary definition. “Strategy” is one of those terms: “a careful plan or method for achieving a particular goal, usually over a long period of time; a plan, method, or series of maneuvers or stratagems for obtaining a specific goal or result” (Merriam-Webster). At a higher level, someone well acquainted with the Baldrige Criteria might say, “an organization’s approach to addressing the future.”
When the Criteria glossary doesn’t include a term, that means the Criteria usage doesn’t significantly differ from the common, dictionary definition. “Transformational change” is one of those terms: change that disrupts the status quo in an organization, forces people out of their comfort zones, and likely causes a change in cultural norms for the organization. It is generally organization-wide and enacted over some time, but it is not the same as looking back over many years of evolutionary change and realizing there has been a transformation. Transformational change is leadership driven. Drivers might be a change in the business model, organizational strategy, or work systems due to dramatic regulatory changes (think health care today), disruptive innovation in the marketplace (think digital cameras), or new market opportunities (think joint ventures).
The “what” questions set the context for the item, and in many cases, for other areas to address. If an organization doesn’t give responses to them, that certainly could be grounds for an OFI—as could a response to “What are your key strategic objectives?” that doesn’t appear to address the strategic challenges given in the Organizational Profile. As for accounting for the answers to “what” questions in scoring, even if you are considering the 50–65% range (with approaches responsive to the overall requirements), you should score the item holistically: consider the responses to overall requirements (“how” as well as “what” questions) as a group, just as you would for the multiple requirements.
There are numerous places where the Criteria enable or imply timeliness and effectiveness of decision making. (1) Category 1: creating a successful organization; organizational agility; focus on action; setting the vision and values (to enable empowered and aligned workforce decision making). (2) Category 2: addressing change; agility; flexibility; strategic objectives with timelines; action plans with timelines; resource allocation; action plan modification. (3) Category 4: data, information, and systems to enable decision making; measurement agility; performance analysis and review; responding to rapidly changing organizational needs.
You would relate the OFI to a Criteria requirement, such as those found in 1.1c(1), 2.1a(1), 2.2b, 4.1a(4), and 6.1a(3). The Criteria don't treat agility like a process. It's a characteristic (related to a core value). The Criteria ask how organizations do the things that enable organizational agility. So an OFI would be around the relevant process and how it could improve in creating, addressing, or enabling agility. That said, note that this is a prime example of how key factors can influence the relative importance of Criteria requirements. Some industries and organizations have a much greater need for agility than others.
P.2a(1) asks how many and what types of competitors an organization has, but the Criteria don’t ask how competitors are determined. As the notes make clear, competition may exist for customers, resources, and visibility, for example. An OFI might relate to Criteria requirements that touch on this issue, such as 2.1a(3), on potential blind spots in strategic planning (see the notes to 2.1a[3]), and 3.2a, on market segments.
2.1b(1) asks the organization to identify its key strategic objectives. This set of requirements provides context for other questions in the item and in other items. The responses are evaluated only for their relationship to other item requirements or key factors, and for their presence or absence. So why aren’t these questions in the Organizational Profile? The Organizational Profile is often an organization’s first Baldrige-based assessment. The questions in 2.1b(1) are too difficult for a first assessment, and they would be out of context without the rest of the questions in 2.1.
The Criteria ask, “What are your goals …?” in several places: 1.2b(1) related to regulatory and legal compliance, 2.1b(1) related to strategic objectives, and 5.1b(1) related to workforce environment improvement goals. Goals are not specifically asked for in category 7, but your assessment of the associated results for these areas will consider how well an organization is meeting or exceeding its stated goals.
First, it is fair to give an OFI on failure to achieve a stated goal. Strengths for meeting a goal are not always appropriate unless the goals are anchored to objective high performance, such as top-decile performance. Consistent performance around the top 10% is very good performance and likely worthy of strength comments. However, other factors besides achieving or not achieving the top 10%, such as trend data (have the results been improving or not?), competitor performance (are the results better or worse than competitors’?), and the stated importance of achieving the top 10% (are the measures critical ones for the organization?), should influence your feedback.
To a limited degree. Note the definition of “effective”: “How well a process or a measure addresses its intended purpose. Determining effectiveness requires (1) evaluating how well the process is aligned with the organization’s needs and how well it is deployed, or (2) evaluating the outcome of the measure as an indicator of process or product performance.” Performance certainly is an indicator that something is working well or not so well, but other factors also impact performance. You should not assume that unfavorable results come only from ineffective processes, any more than you would assume that favorable results automatically mean that processes are systematic, well deployed, regularly evaluated and improved, and well integrated and aligned. If results performance were due only to the maturity and effectiveness of processes, there would be no need to evaluate anything other than results.
To answer in reverse: yes, the parent is a stakeholder. The significance of that relationship will vary from organization to organization. Having a parent organization can be a mixed blessing. The parent may provide resources, support, and processes that the subunit needs; the parent may also require the subunit to use a corporate process that is less than ideal. Ultimately, the applicant is responsible for the efficacy and outcomes of the processes it uses. Therefore, sometimes a subunit will deserve a strength for something the parent prescribes, and sometimes the subunit has to accommodate a challenging process. Before you say that this isn't fair, keep in mind that examiners can’t exclude parts of the Criteria from consideration just because the parent has a strategy or process that requires a subunit to do something a certain way. The applicant is being evaluated against the standard of excellence. If the applicant uses a less-than-optimal process, it should do everything it can to optimize it, including working upstream with the parent organization. But also keep in mind the relative importance of that process. Whether you write an OFI, and how strong that OFI is, should reflect how important that part of the operation is to success and sustainability.
This type of response has grown in use, but it is risky for the organization because it calls for “benefit of the doubt.” You should give benefit of the doubt when the organization has provided sufficient evidence that it's warranted. Therefore, the applicant needs to provide enough evidence of its processes and results for you to confidently give the organization benefit of the doubt. For example, a blanket statement that comparative data or segmented data are available on-site without any evidence that the organization tracks and uses such data would likely not warrant benefit of the doubt. However, if such data are presented across several charts/graphs or even several items, and the statement is made that additional data are available on-site, then benefit of the doubt may be warranted. Whether benefit of the doubt is appropriate should be a team discussion topic during consensus.
Keep in mind that the “Considerations …” sheets are essentially reminders of the potential impact of certain key factors. Not all small organizations are the same, and you shouldn't assume that all those considerations automatically apply to every small organization you review. In this case, having a large volunteer workforce, especially relative to the size of the organization, will certainly affect your expectations for that organization. Some elements from the Considerations for Small Organizations may still apply, and others may not. Many other key factors will affect your determination of which should apply, including these: Is the organization part of a large, well-resourced parent organization? What type of work do the volunteers perform? How significant is that work? The possibilities are endless, and the Considerations for Small Organizations sheet is by no means comprehensive or prescriptive.
1.2c(2) elicits what an organization is doing above and beyond normal operations to support and strengthen its key communities. In fact, a note in the Business/Nonprofit Criteria speaks to this issue for nonprofit organizations. Does this support have to be a volunteer effort? No. Does it have to be related to the organization’s mission? No. Businesses supporting local schools, health care systems providing reading tutors, and schools hosting after-school sports camps are all examples that may not be directly related to mission.
You would refer to P.1b(2), which asks for key customer requirements and expectations. Engagement comes from meeting these requirements and exceeding expectations, and from other aspects of building relationships (3.2b).
The difference in context has led to the use of slightly different terms. In 4.2b(1), the topic is managing organizational knowledge. Blending data from different sources refers to the need to handle, analyze, and use data and information of varying types, such as data tables, text, or even video. The question asks how the organization gleans information and findings from such sources, correlates (determines the relationships among) data, and combines them into accurate and actionable knowledge. In 4.1a(1), the context is performance measures. The question is how the organization selects, collects, aligns, and integrates data and information. Here the meaning is integrating/incorporating data from different datasets/data sources into a new, single dataset that might provide deeper insights and knowledge than analyzing them separately would.
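To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (using pandas), with hypothetical data, unit names, and column names that are not drawn from the Criteria or any applicant. The merge shows the 4.1a(1) sense (integrating two structured datasets into one new dataset); the last step shows the 4.2b(1) sense (blending findings gleaned from sources of different types, here a data table and free-text notes, into actionable knowledge).

import pandas as pd

# Two hypothetical structured datasets from different sources.
survey = pd.DataFrame({"unit": ["A", "B", "C"], "satisfaction": [4.2, 3.8, 4.5]})
operations = pd.DataFrame({"unit": ["A", "B", "C"], "on_time_rate": [0.97, 0.91, 0.99]})

# 4.1a(1) sense: integrate the datasets into a single new dataset for deeper analysis.
integrated = survey.merge(operations, on="unit")

# 4.2b(1) sense: blend a finding from the table with a finding gleaned from free text.
complaint_notes = ["late delivery", "friendly staff", "late delivery"]
knowledge = {
    "unit_with_lowest_on_time_rate": integrated.loc[integrated["on_time_rate"].idxmin(), "unit"],
    "most_common_complaint_theme": max(set(complaint_notes), key=complaint_notes.count),
}
print(integrated)
print(knowledge)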
First, note that diversity denotes more than race, ethnicity, religion, gender, and national origin. It also includes age, skill characteristics, ideas, thinking, academic disciplines, and perspectives. Also note that diversity in the Criteria is relative, not absolute. The Criteria ask about diversity in relation to the organization’s hiring and customer (student, patient) communities. Because the workforce does all the work, having an appropriately diverse workforce benefits the organization in everything it does, across all categories. It also impacts workforce engagement, customer engagement, community engagement, organizational learning, innovation, and agility.
The questions are built into category 3 in all versions, but this extra section is included in the Health Care Criteria because patient requirements and preferences are particularly critical for the design and delivery of key work processes (patient care), often on a patient-by-patient basis.
First, remember that no matter where an organization reports information on processes or results, you should consider it wherever it is relevant to your evaluation. You would expect to see information on processes to ensure patient safety in 6.1, as patient safety is a health care service process requirement and a patient expectation. 6.2c(1) deals with workplace safety (as it does in the Business/Nonprofit and Education Criteria). As for results, the same distinction applies. 7.1b(2) is focused on operational workplace safety results (as it is in the Business/Nonprofit and Education Criteria). Generally speaking, you would expect to see patient safety results in 7.1a. Results for patient satisfaction with safety might be in 7.2a(1), and, as noted in the Criteria Commentary, 7.4a(1) may include results related to leaders’ efforts to create and promote a culture of patient safety.

Innovation

Yes to both. Innovation can be found in any aspect of an organization or its operations, from specific processes, to products and services, to work systems, to business models. The Baldrige definition encompasses all this: “making meaningful change to improve products, processes, or organizational effectiveness and create new value for stakeholders.” Innovation, though, is more than incremental process improvement. Process innovation means that the process is novel—brand new or new in its application to that type of business/industry.
The nature of the industry and the role innovation plays in sustainability vs. strategic advantage will affect the extent to which you expect to see a robust process for pursuing opportunities for innovation and how much influence that has on your scoring. Obviously, by including innovation management in the overall requirements, the Criteria are saying that pursuit of opportunities for innovation is important to success and long-term sustainability. However, your expectations for what that looks like will vary from organization to organization. For example, in the 2015 Casey Comprehensive Care Center for Veterans Case Study, you might not expect many opportunities for innovation in the cemetery administration’s products and services. But the organization might be able to improve its operational processes, enhance efficiency, and improve effectiveness through innovation.
The elements that may differ are shown in caps: Innovation is making meaningful change to IMPROVE products, processes, or organizational effectiveness and CREATE NEW VALUE for stakeholders. In addition, an innovation may not be completely new, just NEW TO ITS PROPOSED APPLICATION. The outcome is a DISCONTINUOUS IMPROVEMENT in results, products, or processes.
The extent of innovation ("making meaningful change to improve products, processes ... and create new value for stakeholders") is included in the Learning evaluation factor beginning at the 50–65% scoring range. (Some Baldrige-based programs break up the definition of innovation, with two different types parsed into two scoring ranges. The national program DOES NOT use this approach to scoring.)
Yes, it can. Innovation is only one element of Learning, which is only one of the four evaluation factors you take into account in scoring an item. The other elements in the Learning evaluation factor are evaluation and improvement, and organizational learning. No single evaluation factor, or as in this case, single element within an evaluation factor, should be used as a gate to prohibit the possibility of scoring in any particular range. As always, consider the applicant’s responsiveness holistically and choose the range for the item that is the most descriptive of the organization's performance, using the lens of the organization's key factors in making this judgment.

Scoring

In scoring, we ask you to determine the scoring range that is “most descriptive” of performance. We chose the term “holistic” for this determination deliberately, with attention to the dictionary definition—the idea that the whole is more than merely the sum of its parts. An analogy might be the blind men/elephant story, where each man is aware of one part of a complex whole. Depending on what part of the elephant each man observes, he comes up with a different description of a very complex animal—and none of these descriptions is accurate. Holistic scoring is not an exact science; nor is it meant to be. It is interpretive. Total consistency of individual scoring at Independent Review is not the goal. Variable IR scoring—by examiners from a variety of backgrounds—leads to rich discussion during Consensus Review. It is this discussion among the examiners that leads to a more complete understanding of the applicant and thus more accurate scoring. If scoring were completely consistent at IR, we would not need CR. (Some Baldrige-based programs use "scoring calibration" guidelines and "gates" to block higher scores. The national program DOES NOT use this approach to scoring.)
The added value is increased accuracy—scores that reflect the whole applicant. It’s a matter of aiming for validity, not just reliability. In holistic scoring, no one evaluation factor should serve as a gate that keeps the score out of a higher range. Using a somewhat mathematical formula for scoring (where one factor is a gate) can result in a lower, less accurate score. Also, if a formulaic approach were possible, we wouldn’t need Consensus Review.
Here’s an example: an approach is responsive to the overall requirements (50–65%). It is well deployed with no significant gaps; systematic evaluation and improvement and organizational learning are used as key management tools; and it is integrated with current and future organizational needs (70–85%). The organization might well score 70–85% for this item. This scenario may not be very common, but it is certainly possible. Your score should be the result of a holistic assessment of all four factors to determine which range best describes the applicant’s maturity level. The approach element may be useful as an indicator of where to begin the conversation about which range to choose, but not as a barrier to higher levels of scoring.
Often, the evaluation factors that seem to (wrongly) hold back scores are deployment and learning. For example, an organization may have a systematic approach that is integrated with organizational needs, but deployment to a remote location or a recently acquired unit is in the early stages. Some examiners may (wrongly) keep the applicant out of a higher range because of these minor gaps in deployment. Similarly, an approach may be effective and systematic, well deployed, and integrated with organizational needs, but there is no innovation associated with it. Allowing this factor to depress the score is also inaccurate. Comments support and supplement the score. Together, they tell the applicant where it stands. (Unlike some Baldrige-based programs, the national program DOES NOT ask you to identify "blocking OFIs" that can cap scores.)
No. We expect, and even want, some variation across team members during Independent Review. This process requires judgment and interpretation, and Consensus Review completes the process of figuring out where the organization stands with regard to scoring. The concern comes when the variation is excessive. We do pay attention to variation and have been working to decrease it through training.
There is a misunderstanding here. The bolded overall requirements ARE multiple requirements. They are the most important and/or foundational of the multiple requirements, so we labeled them “overall,” but they are still multiple requirements. In areas to address with only bolded overall requirements, an organization that meets the overall requirement also meets the multiple requirements. This might increase scores—except for the fact that examiners don’t score organizations requirement by requirement. They look at the whole item and all the scoring factors, not just the “approach” factor.
The multiple Criteria requirements explicitly ask for comparisons only where they are important for all organizations (e.g., customer results in 7.2a[1], product and service results in 7.1a, and process effectiveness/efficiency in 7.1b[1]). In addition, they are inherently part of marketplace performance, 7.5a(2). Comparisons are a sign of maturity. They are referenced in the scoring guidelines as early as the 10–25% range, but they really start to kick in at 50–65%, which references addressing the overall requirements. Said another way, they are “required” or “expected” when an organization addresses the multiple requirements (i.e., the 70–85% range), but an organization can get credit for having some in the 50–65% range. It’s useful to step back and note where you might not expect comparisons for a given organization or type of data until it is very mature. (Examples might be governance and workforce development.) Having them as part of the scoring guidelines rather than the Criteria requirements allows for and communicates flexibility around whether a given organization should be expected to track and use such information.
The Criteria call for competitive comparisons in areas where they are especially important. Customer satisfaction is one of those areas, and therefore the use of competitive comparisons is part of the multiple requirements for 7.2a(1). Comparisons are also an evaluation factor in the scoring guidelines. Their use in other areas is a matter of maturity. The 50–65% range calls for “relevant comparisons”; in most cases, these will include competitive comparisons. The 70–85% range and above call for “areas of leadership,” and “leadership” includes competitive comparisons by definition.
No. “Fully responsive to the multiple requirements” reflects the approach description in the 90–100% range. That means that an organization scoring 70–85% would probably have some gaps. The significance of those gaps will impact where within the range the score falls, but you should not expect an organization to be fully responsive to the multiple requirements to score in the 70–85% range. Of course, you would look at the organization’s performance on all the evaluation factors and choose the most descriptive range. You wouldn’t choose the score based only on the approach descriptor.
The first note in 7.4 explains that levels and trends are not always asked for because organizations may be reporting some measures or indicators that are not quantitative and/or not amenable to trending. In this case, you should consider whether the reported results/indicators are appropriate and responsive to the basic, overall, or multiple requirements, and whether they address organizational needs and key stakeholders' expectations (the Integration factor in LeTCI). However, when the applicant reports quantitative data, you should evaluate levels and trends (also per note 1).
You probably wouldn’t give an organization an OFI for not providing comparisons for measures that are consistently performing at 100%. But access to those comparisons would give you a better idea of the significance of the strength: Are competitors also at 100%, or are they floating around 75%? If achieving 100% appears to be no big deal, then while it’s certainly a strength, it may not be quite as impressive as a scenario where most other organizations struggle to achieve 75%.
The scoring guidelines refer to comparative information, such as benchmarks. They do not specifically call for competitive comparisons. However, results items specifically call for competitive comparisons in areas where they are important for an organization (e.g., 7.1a, 7.1b[1], 7.2a[1]). Even then, sometimes competitive data aren't available. In such a case, we'd still expect the organization to use the best available comparative data.

Contact
Baldrige Customer Service:
Phone: 301.975.2036
Fax: 301.948.3716
Email: baldrige@nist.gov