Getting the Truth into Workplace Surveys

Unfortunately, not all assessments produce such useful information, and some of the failures are spectacular. In 1997, for instance, United Parcel Service was hit by a costly strike just ten months after receiving impressive marks on its regular annual survey of worker morale. Although the survey had found that overall employee satisfaction was very high, it had failed to uncover bitter complaints about the proliferation of part-time jobs within the company, a central issue during the strike. In other cases, the questionnaire itself causes the company’s problems. Dayton Hudson Corporation, one of the nation’s largest retailers, reached an out-of-court settlement with a group of employees who had won an injunction against the company’s use of a standardized personality test, which they viewed as an invasion of privacy.

What makes the difference between a good workplace survey and a bad one? The difference, quite simply, is careful and informed design. And it’s an unfortunate truth that too many managers and HR professionals have fallen behind advances in survey design. Although the last decade has brought dramatic changes in the field and seen a fivefold increase in the number of publications describing survey results in corporations, many managers still apply design principles formulated 40 or 50 years ago.

In this article, we’ll explore some of the more glaring failures in design and provide 16 guidelines to help companies improve their workplace surveys. These guidelines are based on peer-reviewed research from education and the behavioral sciences, general knowledge in the field of survey design, and our company’s experience designing and revising assessments for large corporations. Managers can use these rules either as a primer for developing their own questionnaires or as a reference to assess the quality of work they commission. These recommendations are not intended to serve as absolute rules. But applied judiciously, they will increase response rates and popular support along with accuracy and usefulness. Two years ago, International Truck and Engine Corporation (hereafter called “International”) revised its annual workplace survey using our guidelines and saw a leap in the response rate from 33% to 66% of the workforce. These guidelines—and the problems they address—fall into five areas: content, format, language, measurement, and administration.

Guidelines for Content

1. Ask questions about observable behavior rather than thoughts or motives. Many surveys, particularly those designed to assess performance or leadership skill, ask respondents to speculate about the character traits or ideas of other individuals. Our recent work with Duke Energy’s Talent Management Group, for example, showed that the working notes for a leadership assessment asked respondents to rate the extent to which their project leader “understands the business and the marketplace.” Another question asked respondents to rate the person’s ability to “think globally.”

While interest in the answers to those questions is understandable, the company is unlikely to obtain the answers by asking the questions directly. For a start, the results of such opinion-based questions are too easy to dispute. Leaders whose understanding of the marketplace was criticized could quite reasonably argue that they understood the company’s customers and market better than the respondents imagined. More important, though, the responses to such questions are often biased by associations about the person being evaluated. For example, a substantial body of research shows that people with symmetrical faces, babyish facial features, and large eyes are often perceived to be relatively honest. Indeed, inferences based on appearance are remarkably common, as the prevalence of stereotypes suggests.

The best way around these problems is to ask questions about specific, observable behavior and let respondents draw on their own firsthand experience. This minimizes the potential for distortion. Referring again to the Duke Energy assessment, we revised the question on understanding the marketplace so that it asked respondents to estimate how often the leader “resolves complaints from customers quickly and thoroughly.” Although the change did not completely remove the subjectivity of the evaluation—raters and leaders might disagree about what constitutes quick and thorough resolution—at least responses could be tied to discrete events and behaviors that could be tabulated, analyzed, and discussed.

2. Include some items that can be independently verified. Clearly, if there is no relation between survey responses and verifiable facts, something is amiss. Conversely, verifiable responses allow you to reach conclusions about the survey’s validity, which is particularly important if the survey measures something new or unusual. For example, we formulated a customized 360-degree assessment tool to evaluate leadership skill at the technology services company EDS. In order to be sure that the test results were valid, we asked (among other validity checks) if the leader “establishes loyal and enduring relationships” with colleagues and staff; we then compared these scores with objective measures, such as staff retention data, from the leader’s unit. The high correlation of these measures, along with others, allowed us to prove the assessment’s validity when we reported the results and claimed that the survey actually measured what it was designed to measure. In other assessments, we frequently also ask respondents to rate the profitability of their units, which we can then compare with actual profits.

In another case, we designed an anonymous skill assessment for the training department of one of the nation’s largest vehicle manufacturers and found that 76% of the engineers believed their skills were above the company average. No more than half of any group can be above the median, of course, so the survey showed how far employee perceptions about this aspect of their work were out of step with reality. The results were invaluable for promoting enrollment in the company’s voluntary training program, because few people could argue with the conclusion that 26% of the respondents—nearly 8,000 engineers—had a mistakenly favorable view of their skills.
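The arithmetic behind that conclusion can be checked in a few lines. The sketch below (in Python) uses the figures reported in the text; the implied respondent count is derived here for illustration and is not a number reported in the original survey.

```python
# Back-of-the-envelope check of the self-assessment gap described above.
# The 76% and 8,000 figures come from the text; the implied total number
# of respondents is our derivation, not a reported statistic.

share_above_average = 0.76   # engineers rating themselves above the company average
ceiling = 0.50               # at most half the group can actually be above the median
overconfident_share = share_above_average - ceiling   # the excess: 26%

overconfident_engineers = 8_000   # "nearly 8,000 engineers," per the text
implied_respondents = overconfident_engineers / overconfident_share

print(f"Overconfident share: {overconfident_share:.0%}")
print(f"Implied number of survey respondents: {implied_respondents:,.0f}")
```

Working backward this way (8,000 engineers being roughly 26% of respondents) implies a respondent pool on the order of 30,000, which is a useful sanity check on the reported numbers.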

In addition to posing questions with verifiable answers, asking qualitative questions in a quantitative survey, although counterintuitive, can provide a way to validate the results. In an employee survey we analyzed for EDS in 2000, we engaged independent, objective readers to classify the topic and valence (positive, negative, or neutral) of all written comments—45,000 of them. We then examined the correlation between these classifications and the quantitative data contained in the survey ratings from all 66,000 respondents. The tight correlation between ratings and comments in each section of the survey—high ratings accompanying positive comments—gave us strong evidence of the survey’s validity.

3. Measure only behaviors that have a recognized link to your company’s performance. This rule may seem obvious, but as many as three-quarters of the questions in the surveys we review (such as “I know about my company’s new office of internal affairs”) have no clear link to any business outcome or to job performance. This shortcoming explains many of the more startling survey failures. Most often, the problem arises because questions have not been systematically chosen. To avoid this, we use a two-step process to select question topics. First, we interview informed stakeholders, asking them to describe the main problems and what they think their causes are. Then, we review published research to identify known pairings of problems and causes.

For instance, to build a survey for International, we interviewed nearly 100 managers, employees, union representatives, and executives in the workforce of 18,000. We asked each to specify what aspect of performance they thought most needed improvement and what they believed was its primary cause. Interviewees all agreed that the defect rate required improvement but were less certain in identifying behaviors possibly causing the problem. Research on quality, however, seemed to confirm the suspicion of some stakeholders that improving communication would lower the defect rate.

As a result, we included a number of questions about communication in the survey. One question asked respondents to indicate how often “In our department, we receive all the information we need to get our jobs done.” The results confirmed that poor communication was indeed associated with the defect rate. The company then implemented a pilot program at one of its larger manufacturing facilities to improve communication within and between departments. Following this intervention, communication scores at the pilot site rose 9.5% while defects fell 19%. Although any of a number of factors may have been behind the defect rate, it was incontestable that the more communication improved, the more the defect rate fell.

Guidelines for Format

4. Keep sections of the survey unlabeled and uninterrupted by page breaks. Boxes, topic labels, and other innocuous-looking details on surveys can skew responses subtly and even substantially. The reason is relatively straightforward: As extensive research shows, respondents tend to respond similarly to questions they think relate to each other. Several years ago, we were asked to revise an employee questionnaire for a large parcel-delivery service based in Europe. The survey contained approximately 120 questions divided into 25 sections, with each section having its own label (“benefits,” “communication,” and so on) and set off in its own box. When we looked at the results, we spotted some unlikely correlations between average scores for certain sections and corresponding performance measures. For example, teamwork seemed to be negatively correlated with on-time delivery.

A statistical test revealed the source of the problem. Questions in some sections spanned two pages and therefore appeared in two separate boxes. Consequently, respondents treated the material in each box as if it addressed a separate topic. We solved the problem by simply removing the boxes, labels, and page breaks that interrupted some sections. The changes in formatting encouraged respondents to consider each question on its own merits; although the changes were subtle, they had a profound impact on the survey results.

5. Design sections to contain a similar number of items, and questions a similar number of words. Research and our own experience show that the more questions you ask, the higher the resulting scores for the entire section tend to be. Similarly, respondents often give higher ratings to questions that contain more words and require more time for reflection. Maintaining fairly equal question and section lengths provides the highest probability that you’ll obtain compatible survey responses across all questions.

A customer satisfaction questionnaire used by a large retailer in the Northwest illustrates those dangers. In evaluating the survey, we found that longer questions and longer sections evoked higher ratings, regardless of the product being evaluated. Together, response biases produced by these two question characteristics elevated scores on the survey’s final question (“How likely is it that you will repurchase from us?”) and lowered the overall accuracy of the survey’s findings. The company could have avoided both of these problems by maintaining consistent question and section length.

The same response bias—wherein scores increase with question and section length—will also elevate scores in excessively long surveys. In addition, the average score for survey questions increases as a respondent works through a questionnaire: It is not unusual to see the average score on a 100-question survey climb by 5%. At the same time, research and our experience show that the range of responses (the standard deviation) usually becomes smaller.

6. Place questions about respondent demographics last in employee surveys but first in performance appraisals. An optional section on demographics is a staple of customer questionnaires, and its value is incontestable. Questions about demographics also frequently appear in employee surveys, since managers believe the information they generate can produce useful general data about workforce trends. Of course, it is imperative to avoid demographic questions that can seem invasive or irrelevant.

Including demographic questions, however, can dramatically depress employee response rates, especially when respondents feel that their anonymity may be jeopardized. A survey carried out in 1999 by one of the nation’s largest appliance manufacturers began by asking respondents whether they belonged to a union. Most of the union employees stopped filling out their surveys at this point; they reportedly feared that the data would be used to make misleading comparisons with unrepresented workers and that those comparisons could weaken the union’s position during future contract negotiations.

In employee surveys, it’s generally best to put demographic questions at the end, make them optional, and minimize their number. Such placement avoids creating an initial negative reaction at the very moment when readers are deciding whether to participate. A 1990 study by M. T. Roberson and E. Sundstrom found that moving demographic questions to the end of an employee survey improves response rates by around 8%.

Guidelines for Language

7. Avoid terms that have strong associations. This rule is one of the most frequently ignored. Metaphor plays a prominent role in descriptions of management, but it can also trigger associations that bias responses. A leadership evaluation conducted in the mid-1990s by one of the nation’s largest manufacturers of photographic equipment asked respondents whether their team leader “takes bold strides” and “has a strong grasp” of complicated issues. While such phrases are commonly used to describe leadership qualities, they are counterproductive in surveys because they can trigger associations favoring males, whose stride length and grip strength, on average, exceed those of women. As a result, the leadership ratings of male leaders for this assessment were unfairly elevated. Here, simple revisions in wording solved the problem: “Has a strong grasp of complex problems” was changed to “Discusses complex problems with precision and clarity.” Subsequently, we found—as published research leads us to expect—no significant difference between the average scores of male and female leaders. We have observed similar results when words that trigger ethnic and religious associations have been changed.

8. Change the wording in about one-third of questions so that the desired answer is negative. One of the best-documented response biases is the tendency of respondents to agree with questions, a tendency that becomes more pronounced as respondents work through a survey. The best way to overcome this bias is to periodically introduce questions that are phrased negatively. It’s possible to transform almost any question or statement (“In my department, we do a good job of resolving conflicts”) to its opposite (“In my department, we do a poor job of resolving conflicts”) without creating tortuous wording, double negatives, or the like. This practice is quite common. When airline personnel ask passengers about their baggage, they usually ask one question so the desired answer is yes and another so the answer is no. For instance, “Did you pack your bags yourself?” might be followed by “Have your bags been out of your control since they were packed?”

It is also important to describe reverse wording in the instructions to the survey and to clearly signal its presence to respondents. Readers can easily miss minor word changes; a statement such as “My leader makes unfair hiring decisions” might be misread as “My leader makes fair hiring decisions.” So the wording of the negative questions must be carefully considered. One good way to prepare readers for this possibility within the questionnaire is to introduce a simple reversed item early on, in the third or fourth question. This reminds respondents about the presence of these kinds of queries throughout the survey. In our experience, we’ve found a good rule of thumb is to change the wording in about one-third of the questions.

9. Avoid merging two disconnected topics into one question. Many survey questions combine two elements. When items are associated, it makes sense to minimize the length of the survey by combining them, but at other times, merging two elements can be problematic. For example, a leadership assessment at a telecommunications company in the late 1990s asked employees to rate their leader’s skill at “hiring staff and setting compensation.” Clearly, data from such a question would result in little insight about a leader’s specific skill in each of the two related but distinct tasks. In determining whether to include two related elements in the same question, decide whether the behaviors associated with them will require the same intervention if they need to be fixed. It can be quite reasonable to ask employees whether they think a leader both “provides and responds to constructive feedback” because both processes (to various degrees) require insight, tact, candor, flexibility, and a willingness to learn. But asking about hiring and compensation at the same time will probably elicit muddied responses of little specific usefulness.

Guidelines for Measurement

10. Create a response scale with numbers at regularly spaced intervals and words only at each end. Many surveys invite respondents to evaluate an item by selecting words that best fit their own reactions. For instance, a global computer company’s annual performance appraisal asked managers to evaluate employees by ticking one of five boxes labeled “unacceptable” to “far exceeds expectations.” (See the top of the exhibit “Numbers Are Better than Words.”)

Numbers Are Better than Words

The results of this kind of evaluation, however, are notoriously unreliable because they are influenced by a variety of extraneous factors. The biggest problem is that each response option on the scale contains different words, and so it is difficult to place the responses on an evenly spaced mathematical continuum in order to conduct statistical tests. Although the labels may be in a plausible order, the distance between each pair of classifications on the continuum remains unknown. For many people, for instance, “unacceptable” and “does not meet expectations” may be closer to each other than “meets expectations” and “exceeds expectations” are to each other. In addition, the response scale uses words that overlap (“exceeds” and “far exceeds”) and that may mean different things to different people over time. Therefore, it is difficult to compare ratings on these scales from different managers in different years or to compare ratings from different departments, geographic regions, and even seasons.

You can avoid these and other distortions created by word labels by using a scale with only two word labels, one at either end with a range of numbers in between. Questions answered with numerical scales may not appear to be very different from those with word answers, but the responses to them are far more reliable and can be submitted to a much more informative statistical analysis.

11. If possible, use a response scale that asks respondents to estimate a frequency. Relying on a numerical scale is only part of the story. There can still be a great deal of subjectivity in the question or in the words at each end of the scale that you’ll need to eliminate. For instance, an employee survey we reviewed in the late 1990s asked respondents how much they agreed with the question: “Are you dedicated to quality in all that you do?” People were asked to tick a box on a scale between “disagree strongly” and “agree strongly.” But questions that invite respondents to measure extent of agreement often produce biased responses. The bias may be especially pronounced if, as in our example above, disagreement would be unflattering to the respondent. After all, who would say that they were not dedicated to quality? Naturally, responses to this survey question were clustered at the high end of the scale.

The best way around the problem, we’ve found, is to invite respondents to provide an estimate of frequency, with percentages or ratings between “never” and “always,” as shown in the lower part of the exhibit “Numbers Are Better than Words.” For example, in conducting a nationwide benchmark survey of employee motivation, we asked: “What percent of the teams in your company produce high-quality work?” In contrast to the agree-disagree question on quality mentioned above, we used a rating scale with numbers and obtained a normal curve of responses (see the results for both types of surveys in the exhibit “Well-Designed Surveys Produce Normal Results”), indicating that the responses were unbiased. What’s more, a large body of research confirms that respondents’ frequency estimations are typically quite reliable and accurate, even if they’d never consciously kept track of the behaviors examined in the survey.

Well-Designed Surveys Produce Normal Results

Well-designed surveys generate data that follow the normal bell curve: A small number of the results lie near the low end of the scale, most are average, and a few are exceptional. Poorly designed surveys generate skewed data that depict overly high or low responses.

12. Use only one response scale that offers an odd number of options. Many surveys have a jumble of different response scales, jumping from one to another without warning. A survey currently being used by a large hotel chain asks respondents to rate the service’s friendliness on a scale from “very unfriendly” to “very friendly,” then the service’s efficiency on a scale from “very inefficient” to “very efficient,” and so on for dozens of questions about the hotel’s service. One response scale, such as “never” to “always” with numbered ratings in between, allows for an easy comparison of responses and is simpler for respondents. Single-scale surveys take less time to complete, provide more reliable data, and make quantitative comparisons between different items much easier than multiple-scale surveys.

We find that it’s advisable to provide an odd number of response alternatives, so that respondents have the option of registering a neutral opinion. We also advocate including a “don’t know” or “not applicable” answer (preferably made to look different from the other answer options, as illustrated in the exhibit). Without that option, respondents may feel compelled to provide answers that they know are worthless. Including this option enhances response rates and makes it less likely that respondents will leave blanks or abandon the survey in the middle.

Take care not to offer too many or too few response options. In its annual employee survey, one of the nation’s largest oil companies asks employees about attitudes and offers them only two response alternatives: “agree” or “disagree.” Inevitably, managers complain that the results are simplistic and difficult to interpret. We have found that a graded response scale with seven or 11 alternatives (the latter for scales from 0% to 100% in increments of ten) furnishes sufficiently detailed results.

13. Avoid questions that require rankings. Many surveys require respondents to rank a number of items in order of preference. A survey we reviewed in 1997 asked people to “Rank in ascending order of severity the problems threatening productivity in your department: on-the-job injuries, absenteeism, attrition, out-of-specification materials from vendors, lack of tools.” Research shows, however, that responses to such questions are biased by a host of factors—most prominently the number, order, and selection of items. Respondents will best remember a list’s first and last items and will tend to assign them the top and bottom ranks. Moreover, other research shows that a ranking question can disrupt ratings on subsequent questions, presumably because respondents become sensitized to the topic of the ranking question.

Guidelines for Administration

14. Make workplace surveys individually anonymous and demonstrate that they remain so. As we have already pointed out, respondents are much more likely to participate in surveys if they are confident that personal anonymity is guaranteed. In our employee survey for International, we told employees that the anonymous surveys contained no hidden marks and that we would never be able to connect any individual survey to a specific employee. We backed up this claim by having boxes of spare surveys (under minimal supervision, to discourage people from submitting more than one questionnaire) at every facility. Access to all those loose surveys went a long way toward reassuring people about our commitment to anonymity.

The desire of respondents for anonymity explains why many companies prefer using paper-based surveys, even when all employees have access to a computer network. Most workers are savvy enough to know that each computer has a unique fingerprint and that passwords can be easily decrypted or overridden. A 2001 pilot test of a leadership assessment at Duke Energy illustrates the problems of administering surveys electronically. Duke ran, in parallel, an electronic and a paper-based version of its 360-degree leadership assessment so that the company could complete a cost-benefit analysis of the two methods.

Analysis of the pilot data revealed that ratings administered via the company’s e-mail system had a higher mean, a narrower range, and more blanks than ratings taken from optically scanned paper forms. The distribution of the scores was also markedly different: Paper-based ratings were distributed along a normal bell curve, indicating reliable and valid results, while ratings from the company server were strongly skewed toward favorable answers. These results suggested that respondents were reluctant to provide anything other than unrealistically favorable ratings of their leader and peers when they knew that their responses were being compiled somewhere on the company mainframe. Duke now lets participants choose the format they prefer for the survey: a conventional paper form or a new Web-enabled version running on an external server owned by a third party.
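The pattern described here, a higher mean and a compressed, top-heavy distribution on the identifiable channel, can be illustrated with simulated data. Everything below is invented for the sketch; none of it is Duke Energy’s pilot data.

```python
# Hypothetical illustration of the pilot-test finding: ratings collected
# on an identifiable channel pile up at the favorable end of the scale,
# while anonymous paper ratings follow a roughly normal curve.
import random

random.seed(42)

# Simulate 1,000 ratings on a 1-7 scale for each channel. The means and
# spreads are assumptions chosen to mimic the reported pattern.
paper = [min(7, max(1, round(random.gauss(4.0, 1.2)))) for _ in range(1000)]
electronic = [min(7, max(1, round(random.gauss(6.2, 0.8)))) for _ in range(1000)]

def summarize(name, scores):
    mean = sum(scores) / len(scores)
    top_box = sum(s >= 6 for s in scores) / len(scores)
    print(f"{name:>10}: mean = {mean:.2f}, share rating 6 or 7: {top_box:.0%}")

summarize("paper", paper)
summarize("electronic", electronic)
# The electronic channel's higher mean and top-heavy distribution are the
# signature of respondents protecting themselves with favorable answers.
```

Comparing the mean and the share of top-box scores between channels is one simple way to detect the skew before committing to an administration method.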

15. In large organizations, make the department the primary unit of analysis for company surveys. While the need to retain anonymity is paramount, large corporations still need to organize and analyze the results of internal surveys at the department or operating unit level because they assess performance at those levels. Clearly, surveys that are undifferentiated by department will be limited in their usefulness. In designing large surveys, therefore, it is useful to add a check-off sheet (or a list of codes) identifying a respondent’s facility and department. This feature helps you put together customized feedback reports that cluster departments and divisions into the precise groupings you need. Adding this feature to a large survey for International enabled us to deliver nearly 400 customized reports—some summarizing a single department’s results, others summarizing sectors (a cluster of departments), facilities, or entire divisions—only one month after we collected surveys from more than 10,000 employees.

16. Make sure that employees can complete the survey in about 20 minutes. Employees are busy, and nobody really likes surveys and assessments. If a questionnaire appears excessively time-consuming, only people with a lot of time (hardly a representative sample) will participate, and the response rate will fall dramatically. We’ve already seen that when surveys are long, respondents’ answers become automatic and overly positive. In general, we’ve found that surveys that can be finished in 20 minutes can provide substantial results for a company.

A sign at the auto parts store in my hometown states: “The wrong information will get you the wrong part…every time.” Good surveys accurately home in on the problems the company wants information about. They are designed so that as many people as possible actually respond. And good survey design ensures that the spectrum of responses is unbiased. Following these guidelines will make it more likely that the information from your workplace survey will be unbiased, representative, and useful.

Palmer Morrel-Samuels, a research psychologist, is a former research scientist at IBM and the University of Michigan Business School. He is president of Employee Motivation and Performance Assessment, in Ann Arbor, Michigan. He is the author of “Getting the Truth into Workplace Surveys” (HBR, February 2002).

6 Critical Success Factors for Volatility

Here are six critical success factors for embracing volatility in ways that will allow the organization to thrive:

1. Courageous leaders
I define courageous leaders as individuals who focus relentlessly on achieving the organization’s mission, especially when they are surrounded by chaos, uncertainty, and fear-based opposition. They make and implement the tough decisions required when faced with the reality of extraordinary shortages of resources. They seek and find the opportunities inherent in the volatility rather than succumb to the obstacles that less bold leaders point to as reasons to hide under their desks until the storm blows over.

2. Concrete and unambiguous definition of the playing field
Every organizational stakeholder must know, with certainty and precision, what “game” they are playing, who the players are, the rules of the game, the specific roles that they fill, and the desired end result. They must know what is “in” and what is “out” of play so they can concentrate on the former and not waste time on the latter.

3. Clear priorities
In order to allocate scarce resources most effectively, there must be unmistakable priorities. Leaders must model priorities-based decision-making.

4. Ability to release people and things that no longer serve the organization’s mission well
Everything that is done must contribute directly to the organization’s mission. Anything that is not mission-critical must be jettisoned if the organization is to thrive. This is a time to gain exceptional clarity about what the organization does, and why.

5. Accountability
Although accountability was a critical success factor for thriving in more stable times, it often fell by the wayside when leaders were willing to settle for less than excellence, or for a lower standard of performance, when resources were more plentiful. In volatile times, however, the ability to thrive demands accountability at all levels.

6. Creativity and innovation
About the only thing we know for certain about these volatile times is that there will be continued turbulence. Things we cannot forecast with any certainty include what new challenges and opportunities will present themselves, and how we can handle them. What we can do is encourage and reward those who apply their creativity and innovation to address the opportunities in ways that help the organization succeed.

Three Secrets of Organizational Effectiveness

When the leaders of a major retail pharmacy chain set out to enhance customer satisfaction, market research told them that the number one determinant would be friendly and courteous service. This meant changing the organizational culture in hundreds of locations—creating an open, welcoming atmosphere where regular customers and employees knew one another’s names, and any question was quickly and cheerfully answered.

If you’re trying to instill this kind of organizational change in your company, then you face not just a logistical shift, but a cultural challenge as well. Employees will have to think differently, see people differently, and act in new ways: going the extra mile for shoppers, helping them articulate what they’re looking for, and working harder to keep items from getting out of stock. Employees also need to continually reinforce the right habits in one another so that the customer experience is on their minds everywhere, not just at the pharmacy or checkout counter, but in the aisles and back room as well. Conventional efforts to make this happen by “changing the organizational culture” in a programmatic fashion won’t get the job done.

One method that can help is known as pride building. This is a cultural intervention in which leaders seek out a few employees who are already known to be master motivators, adept at inspiring strategic awareness among their colleagues. These master motivators are invited to recommend specific measures that enable better ways of working. It’s noteworthy that pride builders in a wide variety of companies and industries tend to recommend three specific measures time and time again: (1) giving more autonomy to frontline workers, (2) clearly explaining to staff members the significance and value (the “why”) of everyday work, and (3) providing better recognition and rewards for employee contributions.

These are, of course, widely appreciated management methods for raising performance. But they’re rarely put into practice. Perhaps it’s because they feel counterintuitive to many managers. Even the leaders who use them, and whose enterprises benefit from the results, don’t know why they work. So the value of these powerful practices is often overlooked.

That’s where neuroscience comes in. Breakthroughs in human brain research (using conventional experimental psychology research in addition to relatively new technologies like CT scans and magnetic resonance imaging) are revealing new insights about cognitive processes. With a little knowledge of how these three underused practices affect the brain, you can use them to generate a more energizing culture.

Autonomy at the Front Line

At the pharmacy chain, the pride builders were employees with a knack for exceptional service. When asked how to spread that knack to others, they suggested giving clerks more leeway to do things on their own. For instance, the clerks could resolve customer complaints by issuing refunds on the spot, and they could try out their own product promotion ideas. In the past, store managers had been quick to step in and correct mistakes in an abrupt and sharp-tongued manner. Now they would be more positive, collaborative, and interactive with customers and colleagues.

The company set up a pilot program to train some store managers and track results. Almost immediately, there were encouraging comments from the front line: “[My store manager is] now open to suggestions, big or small. I know that my opinion counts with her.” Customer ratings and the amount spent per visit also rose, perhaps because giving employees the freedom to stretch and to shape their work directly improved the customer experience.

Why did autonomy make such a difference? Because micromanagement, the opposite of autonomy and the default behavior for many managers, puts people in a threatened state. The resulting feelings of fear and anxiety, even when people consciously choose to disregard them, interfere with performance. Specifically, a reduction in autonomy is experienced by the brain in much the same way as a physical attack. This “fight-or-flight” reaction, triggered when a perceived threat activates a brain region called the amygdala, includes muscle contractions, the release of hormones, and other autonomic activity that makes people reactive: They are now attuned to threat and assault, and primed to respond quickly and emotionally. An ever-growing body of research, summarized by neuroscientist Christine Cox of New York University, has found that when this fight-or-flight reaction kicks in, even if there is no visible response, productivity falls and the quality of decisions is diminished. Neuroscientists such as Matthew Lieberman of the University of California at Los Angeles have also shown that when the neural circuits for being reactive drive behavior, some other neural circuits become less active—those associated with executive thinking, that is, controlling oneself, paying attention, innovating, planning, and problem solving.

By giving employees some genuine autonomy, a company can reduce the frequency, duration, and intensity of this threat state. Indeed, as Mauricio Delgado and his social and affective neuroscience research laboratory at Rutgers University have found, the perception of increased choice in itself activates reward-related circuits in the brain, making people feel more at ease.

In the long run, a sustained lack of autonomy is an ongoing source of stress, which in itself can lead the brain to become habitually more reactive than reflective. Sustained stress can also decrease the performance of important learning and memory circuits in the brain, as well as the performance of the prefrontal cortex, which is so important for reflection.

To return to our drugstore example, when a customer complains about being overcharged, a clerk in a fight-or-flight state might respond counterproductively—for example, by arguing. But a clerk accustomed to autonomy would be more likely to understand and to try to solve the problem in an empathetic way. If the company leaders try to enforce better customer service through strict rules that make clerks feel micromanaged, the physiological state associated with the fight-or-flight reaction would probably lead to the opposite outcome: driving customers away.

The “Why” of Everyday Work

A regional health insurance company, adapting to the U.S. Affordable Care Act, resolved to create more brand loyalty in an attempt to attract customers. One of the first trouble spots was the call center that managed claims. Customer satisfaction with health insurance call centers is notoriously low, often with good reason. There are not always good options for resolving claims. Staff members are typically judged on how rapidly and economically they can get people off the phone. The technology is often unsophisticated, catching callers in irritating voice-mail loops. At this company, call center employees saw consumers as their enemies—as complainers who berated the employees and blamed them for a miserable system that wasn’t their fault. All the training in the world could not overcome their fight-or-flight reaction. This, in turn, led to low levels of effectiveness and high turnover rates. From a neuroscience perspective, the system couldn’t have been better designed to bring out the worst in everybody.

Despite all this, some supervisors in the call centers regularly managed to mobilize service reps to deliver great customer care. The company was eager to learn how. When they brought these supervisors together, it turned out they had all, independently, discovered the same technique: taking the time to help sales reps and other call takers see and fully understand the “why” of their everyday work. This often took the form of explaining (or, better yet, demonstrating) the significant value of daily tasks, so that the reps understood their impact as part of a larger health ecosystem that supported people during difficult and stressful times. In the words of one pride builder, “I tell my team that it’s not just a claim on the other end of the call; it’s a family. You do more than answer the phone. You are a part of these folks’ lives.”

Here, too, neuroscience helps illuminate why the explicit invoking of significance and empathy is so effective. Helping a family member who is concerned about a medical issue (generally one with financial ramifications) is a different challenge from dealing with a customer trying to get more money. In neuroscience, these would be called different schemas: patterns of thought that organize experiences.

People do not have just one way of operating. They have different modes of social behavior that vary from one context to the next. The rules for social interaction are quite different when out for a drink with friends than when at a parent–teacher meeting. Schemas reflect these changes of context; thus, when a call center employee is operating in a help-a-family schema, the kinds of behaviors that are appropriate are quite different from those in a deal-with-a-customer schema.

Elliot Berkman of the University of Oregon, one of the leading researchers into the neuroscience of goal setting and habit formation, has proposed another reason why explanations of this sort are powerful motivators. When people know the reason that a goal exists, it is easier to form a “goal hierarchy”: a mental structure in which priorities can be considered as complements rather than obstacles to one another. This makes it more likely that people will follow through.

Consider the job of helping people who call for information about their insurance policy. The employee’s goal is tightly connected with the purpose of the job. If the goal is to help families, the employee would ask about the family’s challenges and describe how the company’s policies could help. If the goal is to get people off the phone quickly, the employee would try to convince callers that the company was already doing everything it could. Employees will favor the former goal only if they see how it fits the company’s strategy, and if they are confident that pursuing it will be regarded as right by their leaders and peers.

Finally, stressing the “why” to employees helps companies deploy the cognitive power of altruism. Studies show that the brain’s reward system is directly activated by helping others. At the University of British Columbia, Elizabeth Dunn and her colleagues found that people report feeling happier after giving money to others than after spending it on themselves. Similarly, when it’s clear to employees that they’re helping others through their work, their intrinsic motivation rapidly expands. Management by objectives is a far more limited mental schema than management by aspiration.

For all these reasons, once the “why” of their jobs had been explained to them, call center employees transformed the way they dealt with customers. This mitigated a prevalent pain point and accelerated the changes that the company needed to make.

Recognition and Rewards

When the global automobile industry began to recover from the severe slump of 2008–10, the leaders of one major automaker recognized the need to refocus their orientation from survival to growth. Employees already knew how to make the production line work better. Now, could they do the same in their customer interactions, particularly with car buyers in showrooms?

The company found the solution in its pride builders. North America, Europe, and Asia had been affected differently by the recession, so these master motivators had to adapt their approach to regional business conditions, cultural differences, and employee attitudes. One theme was common to everyone: recognizing employee success in a skillful and considered way. This did not mean heaping undeserved praise on people; it meant celebrating a job well done while keeping the bar high. One example is this note from a team member about a supervisor: “She is a demanding manager in a fast-paced job, but she knows the importance of keeping the work fun and rewarding.”

The most effective supervisors all turned out to have similar pride-builder-style approaches for conveying recognition and, where possible, rewarding people for good customer interactions. They relayed positive feedback from customers; they took care to contact each team member’s manager when giving thanks and recognition; and they personalized the messages. “Maria knows what kinds of recognition each person appreciates most,” a team member observed about his boss. “She might take one person out to coffee or lunch as a form of recognition. Or she might encourage people to work from home one day per week so they can spend more time with their kids.”

Neuroscience explains the importance of the personal touch in delivering recognition that matters. When a manager recognizes an employee’s strengths before the group, it lights up the same regions of the employee’s brain as would winning a large sum of money. Rewards of all kinds, including social rewards, tend to release the neurotransmitter dopamine, which produces good feelings. These reward circuits encourage people to repeatedly behave the same way.

One framework of social motivators is SCARF: David Rock, cofounder of the NeuroLeadership Institute, proposes that people at work are highly motivated by five types of social rewards: status boosts (S); increases in certainty (C); gaining autonomy (A); enhancing relatedness, or being part of the group (R); and demonstrating fairness (F) (see “Managing with the Brain in Mind,” by David Rock, s+b, Aug. 27, 2009). Public personal recognition provides three of these rewards: it increases social status, enhances the sense of being a valued member of the group, and shows that hard work will be fairly recognized. Most people’s neural circuits respond directly to these rewards, and the automaker’s employees were no exception. This, in turn, made it more likely that they would continue behaving in productive ways. The automaker thus laid the cultural foundations to support a shift from financial peril to growth.

Pride and the Imitation Process

The three management approaches described here—autonomy, purpose, and recognition—can create a climate of trust that spirals upward through the ecosystem of the organization. That’s because people in just about any social setting tend to pick up the mood and attitudes of others nearby, generally to a degree that they don’t consciously realize.

This process, which neuroscientists call imitation, has been studied extensively. For example, Elaine Hatfield’s work at the University of Hawaii on “emotional contagion” has shown how one person’s emotions can rapidly influence those of a group. The brain also has a process known as mirror neuron activity: When people see others act in a certain way, circuits in their brain are activated as if they had taken the actions themselves, even if they don’t directly imitate that behavior. Moreover, according to research led by Andreas Olsson, now at the Karolinska Institutet in Stockholm, observation can at times substitute for personal experience. Watching someone else in a situation can have an impact on the brain similar to that of experiencing it directly.

The workplace is a natural medium for viral behavior, transmitted through observation. As long as people see the difference it makes, a change in a few individuals’ neural patterns can move rapidly through the enterprise. Social scientists sometimes refer to this phenomenon as social proof or the bandwagon effect, and it has long been documented as a vehicle for social change. Indeed, this could be why the pride building method itself is so effective.

There is enormous potential for combining neuroscience theory with efforts to help companies improve the positive impact of their culture. The more people who understand the value of fostering autonomy, purpose, and recognition—and who translate these principles into practice—the more others will mirror them and the more widespread these practices will become. By providing scientific evidence of the power of the pride builder behaviors, neuroscience can help leaders see the value of constructive organizational culture change, and deploy more effective ways to accomplish it.



5 Signs of an Ineffective Structure

An organization is defined as a group of people working towards a common goal. An effective organization is one that delivers on its common goal consistently with the resources available. Clarity, Competency, Consistency and Efficiency are hallmarks of an effective organization. While simple in theory, this is something most organizations grapple with. One key component is the Organizational Structure (also known as “Boxes and Wires”). The structure is put in place to provide formal authority for decision-making and to drive accountability. It enables an organization to perform the five critical functions of management – planning, organizing, staffing, directing and controlling.

Building an effective organization structure is a leadership competency. A leader I greatly admire once shared: “No structure is permanent and no structure is perfect. You need to keep adjusting it to build an effective organization.” So how do we know if our structure has a problem? Here are five simple signs that may help.

  1. You need sign-off from three or more people for a single decision – The evolution of the matrix structure, fueled by global operating models, has created a complex operating environment: a mix of functional experts and business revenue owners. Multiple parts of the organization are focused on multiple priorities, and some of them may be at odds with each other. The matrix structure distributes the ‘power’ to make decisions and aims to drive a ‘healthy tension’ (functional maximization vs. business optimization). However, when decisions need to be signed off by three or more people, you have an ineffective structure. This is reflected in the ‘time to decide’ metric. Decision delays and multiple reviews where the same information is shared are telltale signs of an ineffective structure.
  2. You have more than eight levels or layers of hierarchy – I have done a ‘blank sheet approach’ with a number of senior leaders. I start by asking them to pen down their ideal levels or layers of hierarchy, starting from the top; each level has to have a distinct responsibility. The typical 7 levels they come up with for a large multinational, multi-product organization are: CXO – Business/Functional Global Leader – Product/Department Leader – Senior Manager – Manager – Supervisor – Execution Level. This is then compared with the actual levels/layers in the hierarchy, and we are typically off by 2 or more levels. Typical rationales provided include talent pipeline, development roles and career opportunities. All are valid reasons. However, what started as a short-term focus becomes a permanent fixture and creates a tall, hierarchical structure. What these additional layers create is an overlap or dilution of responsibilities and accountabilities. A simple way to measure this is to look at the number of layers/levels present in a meeting. If you look around and find two or more levels in the same review meeting, you know you have a problem.
  3. “When was that decided? Nobody told me” – Any time you hear this, you know your structure is inefficient. The power of management lies in its ability to coordinate efforts, and the key is effective communication. When you have multiple stakeholders focusing on multiple priorities at different speeds, there are bound to be ‘misses’. Your processes and structure should ensure that communication stays effective. While there is a ‘want’ to be informed about everything that goes on, what people ‘need’ is information that impacts the achievement of their goals. If you are spending time in this communication tsunami and seem to be going around in circles with it, you know you have an ineffective structure.
  4. “I need to check with my boss for this decision, let me get back to you” – This statement is like a slow-spreading cancer in an effective organization. Managers and leaders are expected to make decisions that further the organization; that is their primary duty. So in any forum, meeting or review where critical decisions need to be taken, the leaders in the room should have the courage and competence to decide. If they do not, something is wrong. It could be a person dependency or a structural redundancy. Either way, it is not a healthy sign.
  5. “Let me introduce myself and explain what I do” or “We need a new position” – A common mistake leaders make is to confuse new work (a job) with a dedicated role (a position). Each new type of work does not require a dedicated new role; it should first be explored whether the work can be done with existing resources. While a dedicated role has its advantages – like focus and clear accountability – it also comes with baggage. People are not certain about what the new role will accomplish or how it will change current interactions, and it takes precious time away from critical priorities and directs it toward settling the new role. This is typical in a global operating model or a matrix organization, where functional areas create their own setup rather than leveraging the existing structure.

Any time you find yourself in a situation described above, it’s time to pause and reflect. Building an effective organization is a leadership competency, one that is becoming highly valuable in today’s complex business environment.

(Views expressed herein are my own and do not represent any organization.)

From Sanjay Gawde.

ServiceMax Titanium


SANTA CLARA, CA – September 14, 2010 – ServiceMax today announced ServiceMax Titanium, providing small manufacturers and service-based businesses a revolutionary new way of automating their field service organizations. The new service brings together a pre-configured version of ServiceMax and Salesforce Service Cloud 2. Built natively on the Force.com cloud-computing platform, ServiceMax Titanium delivers industry-leading CRM features with best-in-class field service capabilities and industry best practices to help companies rethink field service. Learn more during a 30-minute Titanium webinar on October 5 at 8 a.m. PDT.

Comments on the news

“For the past two decades, our industry has neglected the needs of small service organizations. Field service teams, the backbone of their organizations, frequently still use spreadsheets and whiteboards to run their field service operations,” said Dave Yarnold, CEO of ServiceMax. “We are proud to team with salesforce.com to deliver a streamlined yet complete field service solution to companies with fewer than 100 users. Titanium – the most complete field service solution available today – is easy to use, inexpensive to implement, loaded with industry best practices, and built on the world’s most powerful cloud platform.”
“Today’s customers aren’t waiting to get their solutions in the mail; they are on Cloud 2 – they’re mobile, using social networks to collaborate, and demanding real-time answers,” said Kendall Collins of salesforce.com. “With ServiceMax Titanium, Service Cloud 2 can now deliver success to small- and mid-sized field service companies like never before.”
“Our research confirms that small- and mid-size service organizations are looking to grow revenue and increase productivity without sacrificing customer service and retention,” said Sumair Dutta, Sr. Research Analyst, Service Management, Aberdeen Group. “The availability of enterprise-class field service management capabilities in a low cost cloud computing model, delivered by solutions such as ServiceMax Titanium, will provide SMBs the opportunity to meet and exceed their service initiatives.”
ServiceMax Titanium brings the innovations of Force.com, salesforce.com’s enterprise cloud computing platform, to post-sales field service, making the solution cost-efficient, easy to use, and deployable in days, not weeks. Because Titanium is delivered through the cloud, businesses no longer have to rely on clunky, outdated legacy software systems. Instead, they can focus on customer relationships and immediate customer needs, not on software maintenance or piles of work orders.

Titanium includes, from a single vendor, all the tools small service organizations need to reinvent field service, including:

Installed Base and Entitlements
Full entitlement tracking on all equipment under warranty or service contracts. Real-time, complete access to all relevant customer and contractual information that allows companies to ensure they meet each individual customer’s needs while maximizing service revenues.
Advanced Scheduling and Workforce Optimization
Interactive, automatic and/or cost optimal assignment of work-orders to technicians. Optimize the schedules of any number of technicians with a click of a button and without any additional IT resource investment.
Work-order Management & Issue Tracking
Create work orders and assign technicians to close issues quickly. Easy-to-use tools make it simple to manage field technicians and automate the creation, assignment, execution and closure of cases and work orders.
Inventory and Parts Logistics
Complete logistics and reverse logistics for organizations whose field operations include parts movement functions and depot repair activities. Manage infinite locations of spare parts inventories, including van stock and inventory depots.
Salesforce Chatter
Real-time, secure communication and collaboration across the organization – from field service to engineering to sales to execs. Illuminate service issues immediately via Salesforce Chatter, before the customer even knows there is a problem.
Reports and Dashboards
Create reports and dashboards that give service staff the business intelligence they need to run a profitable and competitive service operation – making it easy to see profits, losses, service levels, and much more.
Mobile and Offline Solutions
Titanium Mobile brings the real-time collaboration of Cloud 2 to field technicians on the go, providing easy access across many mobile platforms, including BlackBerry, iPhone and Windows Mobile.
Customer Portal
The customer portal offers self-service capabilities to customers so they can directly interact with a business for their field service needs by creating, tracking and managing their own work-orders.
Force.com Platform
Titanium is built and delivered on the robust and trusted Force.com enterprise cloud computing platform, with the unparalleled scalability and speed that thousands of small, medium, and large companies have come to rely upon.



Create Organizational Paths to Success


Are you building the infrastructure and human capital you need for the long haul? How well do your employees’ individual road maps for professional development align with your long-term plans for the company, and how can you strengthen those connections? To create your path to long-term success, consider:


  • Giving your people increasing responsibility. It’s a balancing act: tap employees for new responsibilities without overloading them with work that isn’t tied to a near-term opportunity to advance. Continually explore their capabilities and potential for growth, and you’ll uncover new talents among your team members.
  • Communicating as a hedge against risk. Speak with people whose roles are evolving to determine which responsibilities can be shifted away from them as they take on new tasks. Even if you can’t offer them an immediate raise or change in title, they’ll feel valued and see that you’re not trying to take advantage of them.
  • Planning for your own evolution. Consider the ways your own role will change as the company grows, and ensure that you continue to guide employees regardless of how big your team becomes.

As a leader, you must ensure that your core values remain a prominent part of your culture as the company grows. This empowers the business and individual members of your team to advance together with a shared sense of purpose and a strong, enduring commitment to realizing your performance goals.

Read the third in our series of Connections to Growth: Team guides, Tactics and Tech to Put Your People on Paths to Growth, to learn how growing as an organization leads to long-term success.
