Dude poses as a donut delivery man to sneak his résumé into agencies

Free donuts: the key to getting a new job?

Lukas Yla, a marketer searching for work in San Francisco, has devised a tasty way to get the attention of prospective employers. He poses as a Postmates delivery person, then drops off a box of free donuts at the agency he’s interested in. When they open the box, whoa! They find a résumé.

Guess whose résumé it is. Go on, guess.

Despite how deeply hokey this approach might sound, it actually seems to be working. According to Adweek, he scored 10 interviews after dropping off donuts at around 40 agencies. Hmm.

Such is the power of donuts. Imagine what a cupcake could land you.

by Chloe Bryan, Mashable


“73.6% of All Statistics Are Made Up”


by Mark Suster

The headlines in the media are filled with the latest stats. Stats sell. The stats are often quoted from the latest reports. People then parrot them around like they’re fact when most of them are complete bullsh*t. People throw them around at cocktail parties. Often when they do I throw out my favorite statistic: 73.6% of all statistics are made up. I say it deadpan. Often I’ll get some people looking at me like, “really?” “It’s true. Nielsen just released the number last month.”

No. It’s irony.

Or, as in the quote Mark Twain popularized, most often attributed to the British Prime Minister Benjamin Disraeli: “there are three kinds of lies: lies, damn lies and statistics.” The quote is meant to highlight the deceptive but persuasive power of numbers.

So, where is this all coming from, Mark? What are you on about? Anyone with a great deal of experience in dealing with numbers knows to be careful about their seduction. I’m writing this post to make sure you’re all on the same playing field.


Here’s how I learned my lesson:

I started my life as a consultant. Fortunately I was mostly a technology consultant, which meant that I coded computers, designed databases and planned system integration projects. OK, yes. It was originally COBOL and DB2 – so what? But for my sins I got an MBA and did “strategy” consulting. One of our core tasks was “market analysis,” which consisted of: market sizing, market forecasts, competitive analysis and then instructing customers on which direction to take.

It’s strange to me to think that customers with years of experience would ever listen to twenty-something smarties from great MBA programs who have never worked in their industry before – but that’s a different story. Numbers are important. I’d rather make decisions with uncertain numbers than no numbers. But you have to understand how to interpret your numbers.

In 1999 I was in Japan doing a strategy project for the board of directors of Sony. We were looking at all sorts of strategic decisions that Sony was considering, which required analysis and data on broadband networks, Internet portals and mobile handsets/networks. I was leading the analysis with a team of 14 people: 12 Japanese, 1 German and 1 Turk. I was the only one whose Japanese was limited to just a sushi menu.

I was in the midst of sizing the mobile handset markets in 3 regions: US, Europe and Asia. I had reports from Gartner Group, Yankee Group, IDC, Goldman Sachs, Morgan Stanley and a couple of others. I had to read each report, synthesize it and then come up with our best estimate of the markets going forward. In data analysis you want to look for “primary” research, which means the person who initially gathered the data.

But all of the data projections were so different that I decided to call some of the research companies and ask how they derived their data. I got the analyst who wrote one of the reports on the phone and asked how he got his projections. He must have been about 24. He said, literally, I sh*t you not, “well, my report was due and I didn’t have much time. My boss told me to look at the growth rate average over the past 3 years and increase it by 2% because mobile penetration is increasing.” There you go. As scientific as that.

I called another agency. They were more scientific. They had interviewed telecom operators, handset manufacturers and corporate buyers. They had come up with a CAGR (compounded annual growth rate) that was 3% higher than the other report, which in a few years makes a huge difference. I grilled the analyst a bit. I said, “So you interviewed the people to get a plausible story line and then just did a simple estimation of the numbers going forward?”

“Yes. Pretty much.”

Me, sarcastically, “And you had to show higher growth because nobody buys reports that just show that next year the same thing is going to happen that happened last year?” Her, “um, basically.”

“For real?” “Well, yeah, we know it’s going to grow faster but nobody can be sure by how much.” Me, “And I suppose you don’t have a degree in econometrics or statistics?” Her, “No.”

I know it sounds like I’m making this sh*t up but I’m not. I told this story to every consultant I knew at the time. Nobody was surprised. I wish it ended there.
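For a sense of why that 3% difference in CAGR matters so much, here is a quick illustration with made-up numbers (the 10% base rate and 5-year horizon are mine, purely for the sketch):

```python
# How much a 3-percentage-point CAGR gap matters -- illustrative numbers only.
base = 100.0          # market size today (indexed)
years = 5

low = base * (1 + 0.10) ** years    # e.g. a 10% CAGR forecast
high = base * (1 + 0.13) ** years   # the same forecast, 3 points higher

print(f"after {years} years: {low:.1f} vs {high:.1f} "
      f"({(high / low - 1):.0%} larger market)")
# after 5 years: 161.1 vs 184.2 (14% larger market)
```

A seemingly minor tweak to the growth assumption compounds into a double-digit gap in the market-size headline within a few years.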

The problem of amplification:

The problem got worse as the data flowed out to the “bulge bracket” investment banks. They, too, were staffed with super smart twenty somethings. But these people went to slightly better schools (Harvard, Stanford, Wharton, University of Chicago) and got slightly better grades. They took the data from the analysts. So did the super bright consultants at McKinsey, Bain and BCG. We all took that data as the basis for our reports.

Then the data got amplified. The bankers and consultants weren’t paid to do too much primary research. So they took 3 reports, read them, put them into their own spreadsheet, made fancier graphs, had professional PowerPoint departments make killer pages and then at the bottom of the graph they typed, “Research Company Data and Consulting Company Analysis” (fill in brand names) or some derivative. But you couldn’t just publish exactly what Gartner Group had said so these reports ended up slightly amplified in message.

Even more so with journalists. I’m not picking on them. They were as hoodwinked as everybody else was. They got the data feed either from the research company or from the investment bank. And if anybody can’t publish something saying “just in, next year looks like a repeat of last year,” it’s a newspaper. So you end up with superlative amplification: “Mobile penetration set to double next year, reaching all-time highs,” “venture capital market set to implode next year – more than 70% of firms may disappear” or “drug use in California growing at an alarming rate.” We buy headlines. Unless it’s a major publication there’s no time to fact-check data in a report. And even then …

The problem of skewing results:

Amplification is one thing. It’s taking flawed data and making it more extreme. But what worries me much more is skewed data. It is very common for firms (from small ones to prestigious ones) to take data and use it conveniently to make the point that they want to make. I have seen this so many times I consider it routine, which is why I question ALL data that I read.

How is it skewed? There are so many ways to present data to tell the story you want that I can’t even list every way data is skewed. Here are some examples:

– You ask too small a sample set, so the data isn’t statistically significant. This is often naivete rather than malice.
– You ask a group that is not unbiased. For example, you ask a group of prisoners what they think of the penal system, you ask college students what they think about the drinking age or you ask a group of your existing customers what they think about your product rather than people who canceled their subscription. This type of statistical error is known as “selection bias.”
– Also common, you look at a large data set of questions asked about consumer preferences. You pick out the answers that support your findings and leave out the ones that don’t support it from your report. This is an “error of omission.”
– You change the specific words asked in the survey such that you subtly change the meaning for the person reading your conclusions. But subtle changes in words can totally change the way that the reader interprets the results.
– Also common is that the survey itself asks questions in a way that leads the responder to a specific answer.
– There are malicious data such as on Yelp where you might have a competitor that types in bad results on your survey to bring you down or maliciously positive like on the Salesforce.com AppExchange where you get your friends to rate your app 5 out of 5 so you can drive your score up.

That doesn’t happen? “I’m shocked, shocked to find that gambling is going on here.” We all know it happens. As my MBA statistics professor used to say, “seek disconfirming evidence.” That always stuck with me.
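To make the selection-bias point above concrete, here is a toy calculation; all the numbers are invented for illustration, and the point is that the sampling frame alone moves the headline figure:

```python
# Toy population -- all numbers invented for illustration.
current = [1] * 800 + [0] * 200   # 1 = "satisfied" current customer
churned = [1] * 200 + [0] * 800   # churned customers skew unsatisfied

# Surveying only current customers (the biased frame).
biased = sum(current) / len(current)

# Including the people who canceled (the honest frame).
everyone = current + churned
unbiased = sum(everyone) / len(everyone)

print(f"only current customers: {biased:.0%} satisfied")   # 80%
print(f"current + churned:      {unbiased:.0%} satisfied") # 50%
```

Nothing in the arithmetic is wrong; choosing who gets asked shifted the “satisfaction” headline by 30 points.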

Believing your own hype:

And this data subtly sinks into the psyche of your company. It becomes folklore. 13% of GDP is construction – the largest industry. 40% of costs are labor, 40% are materials and 20% are overheads. 23% of all costs are inefficient. 18% of all errors come from people using the wrong documents. 0.8 hours are spent every day by workers searching for documents.

It’s important to quantify the value of your product or service. I encourage it.

You’ll do your best to market the benefits ethically while still emphasizing your strong points. Every investment banker I know is “number 1” in something. They just define their category tightly enough that they win it. And then they market the F out of that result. That’s OK. With no numbers as proof points few people will buy your products.

Obviously try to derive data that is as accurate as possible. And be careful that you don’t spin the numbers for so long and so hard that you can’t separate out marketing estimates from reality. Continually seek the truth in the form of better customer surveys, more insightful market analyses and more accurate ROI calculations. And be careful not to believe your own hype. It can happen. Being the number one investment bank in a greatly reduced data set shouldn’t stop you from wanting to broaden the definition of “number 1” next year.

Here’s how to interpret data:

In the end make sure you’re suspicious of all data. Ask yourself the obvious questions:

– who did the primary research on this analysis?

– who paid them? Nobody does this stuff for free. You’re either paid up front (“sponsored research”) or you’re paid on the back end, in terms of clients buying research reports.

– what motives might these people have had?

– who was in the sample set? how big was it? was it inclusive enough?

– and the important thing about data for me … I ingest it religiously. I use it as one source of figuring out my version of the truth. And then I triangulate. I look for more sources if I want a truer picture. I always try to think to myself, “what would the opposing side of this data analysis use to argue its weaknesses?”

Statistics aren’t evil. They’re just a bit like the weather – hard to really predict.

And as they say about economists and weathermen – they’re the only two jobs you can keep while being wrong nearly 100% of the time.




Organize, Collaborate, Execute

The video is a collection of clips from the USS Eisenhower. When I discuss service procedures and the need for a process, I often use them as a way of putting the words Organize, Collaborate, and Execute into context, and as an example of high-stakes teamwork. I am sure there are other examples, but few are more mesmerizing than the complicated dance of launch and recovery operations. It also shows the need for these organizational skills in the life-and-death environment of a flight deck. Recovering aircraft is just as exciting as launching. Equipment and personnel must work in harmony to make sure that absolutely every aircraft is recovered safely. Not 99%, not most of them. Each and every one of them. Below are some pictures of the equipment used in this recovery.

Photo caption: PACIFIC OCEAN (March 31, 2008) Sailors position a wire support under an arresting wire aboard the nuclear-powered aircraft carrier USS Nimitz (CVN 68). The wire support lifts the arresting wire off the deck, enabling an aircraft tail hook to catch the wire during an arrested landing. Nimitz is deployed to the U.S. 7th Fleet operating in the western Pacific and Indian oceans. U.S. Navy photo by Mass Communication Specialist 1st Class David Mercil (Released)

In March of this year, this equipment and the procedures to safeguard its operation failed with disastrous results.

Failures will occur, but I was particularly interested in the findings of the Navy investigation: “…that procedure lacked warnings, other notations and wasn’t ‘user friendly,’ Navy investigators found. As a result, while those personnel failed to comply with a ‘technically correct written procedure,’ the Navy found their error understandable because the procedure didn’t explain the basis for its steps, lacked supervisory controls and ‘failed to warn users of the critical nature’ of the valve’s realignment.”

I was impressed by this finding. Not only did it call out the error that had occurred, but it also addressed the underlying reasons why. This “total” approach to the malfunction and the circumstances around it is an example of how to truly resolve a process or set of circumstances. Service departments benefit from the same type of insight.

Organize, Collaborate, Execute.







Virgin Atlantic Tested 3 Ways to Change Employee Behavior

An estimated 21% of carbon emissions in the United States are attributable to companies, and yet to date there is scant research on how to make firm operations more efficient in terms of reducing pollution. However, the numbers suggest that getting employees to change their behavior could significantly impact climate change. So we partnered with Virgin Atlantic Airways on a field experiment to understand how the behavior of employees—in this case, airline captains—influences fuel efficiency, and how low-cost company interventions can influence their behavior.

Studying 335 captains across 40,000 flights, we found that informing captains that their fuel performance was being monitored and giving them personalized performance targets dramatically increased their fuel efficiency—in other words, they made flying decisions that made the operations more efficient. Changes in their behavior led to both lower carbon dioxide emissions (by 21,500 metric tons) and an estimated $5.4 million reduction in fuel costs for the firm over the eight-month study period. Clearly, positive environmental impact can be quite profitable.

For many years, teams at Virgin Atlantic have been testing ways to motivate efficient decision-making in the cockpit. They had identified three important and measurable behaviors that capture captains’ fuel-related decision making during pre-flight, in-flight, and post-flight phases: (1) calculating and implementing the correct amount of fuel needed for the flight prior to takeoff (this was called “Efficient Fuel Load”); (2) using fuel efficiently during flight, for example by flying at optimal speeds and altitudes (“Efficient Flight”); and (3) turning off at least one engine when taxiing to the gate after landing—an action that is not mandated but saves fuel (“Efficient Taxi-In”).

At the end of 2013, we randomly allocated the captains to three treatment groups and one control group, and in January 2014, all captains were told that their flight and fuel behavior would be monitored for the next eight months. From February to October 2014, each captain in the first treatment group was given a monthly summary of his or her flight performance, which included the percentage of flights flown in the prior month in which they completed the aforementioned three behaviors—this was the feedback group.

Captains in the second treatment group received this performance information, as well as a personalized monthly performance target that was 25% above their pre-experiment baseline performance. (For example, a captain achieving Efficient Fuel Load on 20% of flights prior to the experiment would receive a target for achieving 45%. These targets were capped at 90% to allow captains some psychological flexibility.) This was the targets group.

The third treatment group received monthly performance information, targets, and an incentive for achieving their targets (£10 donated to their charity of choice per target achieved)—this was the prosocial incentives group. These interventions were sent to the home address of each captain in the middle of every month for eight months.
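The targets-group arithmetic described above (25% above each captain’s pre-experiment baseline, capped at 90%) can be sketched in a few lines; the function name is mine, not the study’s:

```python
def monthly_target(baseline_pct: float, uplift: float = 25.0, cap: float = 90.0) -> float:
    """Personalized monthly target: baseline plus a fixed uplift, capped.

    baseline_pct is the share of flights (0-100) on which the captain
    achieved the behavior before the experiment began.
    """
    return min(baseline_pct + uplift, cap)

# The article's example: a 20% Efficient Fuel Load baseline yields a 45% target.
print(monthly_target(20.0))  # 45.0
# High performers hit the 90% cap, preserving some psychological flexibility.
print(monthly_target(80.0))  # 90.0
```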

We analyzed how captains responded to these interventions by looking at data from more than 40,000 flights during 2013 and 2014, and comparing the behavior of each captain before and after the study began. We found that:

  • The vast majority of captains, from all groups, engaged in more fuel-efficient decision-making, such as optimizing in-flight procedure and fueling precision. This suggests that informing captains that their behavior was being monitored significantly improved their performance. This is consistent with a well-documented social science phenomenon called the Hawthorne effect, whereby people change their behavior as a result of knowing they are being observed.
  • Challenging captains to meet higher performance targets proved to be the most cost-effective intervention. Those in this group demonstrated improved fueling precision, in-flight efficiency measures, and efficient taxiing practices by 9% to 20%, at an extremely low cost to the study administrators. We attribute this strong effect to a challenging of the captains’ status quo: by changing the expectation for satisfactory job performance, captains successfully adjusted their habits to meet it.
  • Contrary to prior studies that have suggested that prosocial incentives can lead to increased effort, our intervention of offering charitable contributions for meeting targets did not lead to more behavior change; the fuel efficiency improvement in this group was very similar to that in the targets group.  Importantly, however, captains in this group reported 6.5% higher job satisfaction than captains in the control group in a post-study questionnaire.

Most of the gains came from what we identified as the Hawthorne effect – the sheer awareness of being monitored influenced captains’ fuel efficiency dramatically, whether in the control group or in any of the treatment groups. In fact, a vast majority of captains improved on those three fuel-relevant behaviors immediately once the study began. Receiving targets for achieving them provided additional motivation above and beyond the observed Hawthorne effect, increasing implementation of the pre-flight, in-flight, and post-flight behaviors by 4, 18, and 22 percentage points, respectively.

We estimate a fuel cost savings of $5.4 million for Virgin Atlantic, resulting in reduced emissions of more than 21,500 metric tons of carbon dioxide (CO2) over the course of the study.  Moreover, our study appeared to induce a longer-term change in habits, as the captains continued to demonstrate these fuel-efficient behaviors after the study ended (for at least six months).

As these behavioral interventions were practically costless and still resulted in large value savings, we estimate that Virgin Atlantic saved about $250 for each metric ton of CO2 abated. For comparison, the lowest-cost technology aimed at reducing carbon emissions—efficient residential lighting—saves the economy approximately $180 per metric ton of CO2 abatement. As policymakers and firms grapple with how to introduce policies to combat climate change, this study clearly demonstrates the potential of influencing employees to make subtle behavioral changes that can improve energy efficiency. The study did not increase captain absenteeism, nor did it increase flight times, which provides further support for the position that such behavioral interventions can provide gains on a number of workplace dimensions without producing negative effects.
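The $250-per-ton figure follows directly from the two totals reported earlier in the piece; a quick back-of-the-envelope check:

```python
# Reported results from the eight-month study.
fuel_savings_usd = 5_400_000   # estimated fuel cost savings
co2_abated_tons = 21_500       # metric tons of CO2 abated

savings_per_ton = fuel_savings_usd / co2_abated_tons
print(f"${savings_per_ton:.0f} saved per metric ton of CO2")  # $251, i.e. about $250
```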

Academics, companies, and policymakers should look for similar partnership opportunities as they can provide low-cost solutions to issues such as air pollution and climate change. Capitalizing on the knowledge and methods emanating from the burgeoning field of behavioral science, these partnerships can achieve a private sector trifecta: increased profits, heightened employee well-being, and beneficial environmental impact.

Greer Gosnell is a Ph.D. researcher of environmental economics in the Grantham Research Institute at LSE. Her research combines experimental and behavioral economics to reveal cost-effective climate change mitigation strategies at the microeconomic level.  Her current projects focus on the contexts of commercial fuel efficiency, residential energy and resource use, and climate change negotiations.


The Service Economy

Chart: 86% of US jobs today involve offering services instead of making things (source: http://www.businessinsider.com/39-ways-the-american-workforce-is-changing-2015-6)


The graph shows how industries are changing their focus from goods-producing jobs to service-producing jobs. Nowhere is this more important than in the field service segment. Slowly, manufacturers are shifting from produce and sell to produce, sell and service. The new economic trend called “servitization” is a smart step to retain brand identity and have influence on the customer experience. Servitization places the routine and emergency service of those goods in the hands of factory representatives. What better way to ensure that they are being maintained in the best way possible? I had reviewed labor statistics previously but felt this was an eye-opening set of data.

Launching Field Service Management Software


Employees who have better tools at their disposal are more productive and effective, both on the job and in the field.

Advancements in management software development, machine-to-machine communications and the Internet of Things are equipping mobile workforces with the ability to make more proactive decisions when it comes to diagnostics, maintenance and repair.

According to research from the Aberdeen Group, 82% of field service organizations planned to implement or expand mobile initiatives in 2015. You may be considering implementing a mobile solution, but aren’t sure how to approach it. Before getting started, be sure you’ve explored the value you hope to provide. Having “shiny and new” mobile software is sufficient for some organizations, but without a deeply intentional purpose, the applications won’t gain adoption or provide return on investment (ROI).

So where to begin?

Start with your end users.
What do they want in a tool? What are their biggest pain points while in the field? Can a software solution address any of these challenges?

“The biggest mistake made by executives purchasing field service management software is not talking to experienced field technicians to properly assess their requirements.” – DeWayne Lehman, Independent IT Consultant to Fortune 500s

Don’t just interview field technicians; observe them too. Ride along with them in the field to truly get a sense of what their day looks like, how much time is wasted on repeatable tasks, where processes can be improved and where efficiencies can be gained.

Campaign for stakeholder support.
Who are the internal audiences that will be affected by implementing software? Who is required to sign off on the project? What do they want to achieve from the software? Do these audiences want to boost efficiency and productivity? Have company leaders mandated IT departments to reduce costs or increase profitability?

Be sure your stakeholders all agree on the business objectives your product intends to address immediately and in the future. Developing a product that is architected to scale over time is critical if you intend to add more users or significantly increase features in the future.

Determine your budget.
Unless you have internal development teams with the capacity to take on new projects, you will likely need a development partner. Make sure you have the budget set aside to adequately address the needs of the project.

Compile your requirements.
Now that you’ve determined what your budget is and what your users want to see in an application, it’s time to decide what features are critical to success. Do you have a tight deadline? Maybe you want to integrate smart forms preloaded with customer information that allow field service employees to reduce the time they spend on routine tasks. Perhaps you want to leverage a software solution that optimizes scheduling or provides electronic proof of attendance. Whatever you want to achieve, deciding what features are mission-critical up front will help your team both deliver the product on time and plan for future phases of development.

Plan for back-end systems integration.
Enterprise software rarely works in isolation. Likely, your field management mobile product will need to share information across multiple systems. But what are those systems? Are they all proprietary or are there third-party integration considerations? For example, should other departments be alerted in order to have job-specific parts ready for the field?

Choose a development partner.
This is a tricky task, no matter the size of the project. For non-technical managers, evaluating the expertise of developers is almost impossible. Meet with the team in person, if possible. If you are going to work with the developers over a long period of time, you’ll want to make sure the chemistry and communication are good between your two teams. Ask for references. Have previous client projects been delivered on time and within budget and scope? Did the development team go above and beyond to ensure their client enjoyed the experience? How sound is the product?

Beyond customer satisfaction, does the development team have the capacity and scalability to work with you over time? Do they have full-stack capabilities? What processes do they have in place to deliver a quality product? Make sure they can plan, design, develop and deliver within your timeline. Ask lots of questions – development is a big commitment! For more on how to choose a development partner, read this piece.

Don’t forget about training.
No matter how good the end product is, unless employees understand how it functions, how to use it and why it matters, they won’t realize the software’s full value or integrate it into their workflows.

As software replaces archaic processes, it will take employees time to transition from “the old way of doing things.” Look for management experts in your organization who can help you socialize the product rollout. A large-scale rollout of a new product can impact company culture – so make sure you first ask for employees’ buy-in, then teach them how to use it and finally, demonstrate the product’s potential.

What does success look like? Benchmark baseline data and determine what key performance indicators (KPIs) are important to your employees and decision-makers. If you know how to measure the effectiveness of the app, it will be easier to design and develop a solution that will be seen as a good investment.

Assign ongoing maintenance.
Software requires updates, bug fixes, general maintenance and new feature releases during the lifetime of a product. If you’ve outsourced your product development, your internal teams should be able to handle ongoing maintenance once a strong architectural approach is developed by your external product team.



Sarah Woodward brings more than 16 years of experience managing client relationships and business development efforts to her role as director of business development for stable|kernel. Her strengths lie in bringing together the right people with the right expertise to the right business opportunities. Sarah’s favorite part of her job is evangelizing stable|kernel’s story and finding new ways to help new clients dream big.

Original article: Approaching Field Service Management

Expanding Service Department Metrics



In our work with dealers in multiple industries we see the same anxiety regarding the management, development and execution of the service department. Especially in periods of service market growth, the anxiety is heightened because of the diverse forces that play on service performance. These forces range from recruiting, selecting, onboarding and training cycles, to billing, expense management and measurement processes. In today’s more sophisticated business environment we need to bring the analysis of service and service metrics to a deeper level.

Almost all dealers have relatively complete data processing systems that provide reasonable information on the performance of the service department. However, in today’s fast-paced data processing world, more than just standard reports are needed. All dealers should be engaged in data mining activities that allow us to dig deeper and analyze more specific operational statistics. Certainly, we have to start with a base set of service metrics – the traditional metrics – and supplement them with the new, deeper, richer analysis of service performance.

Traditionally, we have examined a number of key ratios. Service department gross profit benchmark is 65 percent, which is calculated by the following formula:

Overhead expenses in the service department are benchmarked at 35 percent of labor revenue. These expenses include personnel expense at 20 percent, operating expense at 10 percent and occupancy expense at 5 percent.

Many dealers also measure technician productivity, which is calculated by:

More critically, an extension of technician productivity that we find more valuable is technician efficiency, which is measured by:

These are service department metrics that have been key measurements within the industry for a number of years. The expectation is that all reasonably run service departments have these metrics and are working with them continuously.
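A minimal sketch of these three ratios, using the definitions commonly applied in dealer service accounting; the function names and exact formulas here are assumptions on my part, since the article’s own formulas are not reproduced above:

```python
def gross_profit_pct(labor_revenue: float, technician_cost: float) -> float:
    """Service gross profit %: (labor revenue - direct technician cost) / labor revenue."""
    return (labor_revenue - technician_cost) / labor_revenue * 100

def productivity_pct(billed_hours: float, available_hours: float) -> float:
    """Technician productivity: billed hours as a share of hours available."""
    return billed_hours / available_hours * 100

def efficiency_pct(billed_hours: float, worked_hours: float) -> float:
    """Technician efficiency: billed hours as a share of hours actually spent on jobs."""
    return billed_hours / worked_hours * 100

# Example: $100k of labor revenue at $35k of technician cost hits the 65% benchmark.
print(gross_profit_pct(100_000, 35_000))  # 65.0
```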

As we work with individual service departments, we have focused many of them on a group of supplemental data points to improve our management and, ultimately, to greatly improve bottom line results.

One of the most critical measures we see in managing service departments is revenue per technician per month. This measure correlates very strongly with service profitability in all of the analyses we’ve done. It actually is the most strongly correlated – even more so than productivity or efficiency. So, our first recommendation is that you create a descending list by technician of revenue per technician per month. Further, accumulate this data over a period of time so that you can analyze the average revenue per tech per month and see if your trend is improving or not.

A competent technician should yield at least $12,000 per month. Better technicians can produce close to $15,000 per month. So, the first issue to be examined is, “How many technicians do we have above $15,000 per month, how many between $12,000 and $15,000, and how many under $12,000?” Frequently, we only see the average number, which tends to limit our expectations to the average. If, in fact, one-third of the technicians are close to $15,000 per month, then that understanding will push us to drive the other technicians toward $15,000 per month. In other words, let’s use the benchmark that high-performing technicians have proven achievable to drive the performance of the rest of the team.
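The descending list and the benchmark bands described above can be sketched in a few lines. The technician names and revenue figures are invented for illustration; the $12,000 and $15,000 thresholds come from the article:

```python
def band_technicians(revenue_by_tech):
    """Rank technicians by monthly revenue (descending) and band them
    against the $12,000 and $15,000 benchmarks."""
    ranked = sorted(revenue_by_tech.items(), key=lambda kv: kv[1], reverse=True)
    bands = {"over_15k": [], "12k_to_15k": [], "under_12k": []}
    for tech, revenue in ranked:
        if revenue >= 15_000:
            bands["over_15k"].append(tech)
        elif revenue >= 12_000:
            bands["12k_to_15k"].append(tech)
        else:
            bands["under_12k"].append(tech)
    return ranked, bands

ranked, bands = band_technicians(
    {"Ann": 15_800, "Bob": 13_200, "Cal": 10_900, "Dee": 12_400})
```

Accumulating `ranked` month over month gives the trend line the article recommends.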

Another valuable measure, and one that is derivative of this analysis, is to take the revenue per technician per month and divide it by the number of billed hours in that month. That analysis produces a yield of revenue per hour for that technician. Let’s assume that your published service rate is $85 per hour. If this analysis shows that you have some technicians yielding $89 per hour, some yielding $82 per hour and some yielding $70 per hour, then the process is to analyze whether those differences are execution related, customer related, type of work related or related to some other factor.
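The per-technician yield calculation is a one-liner; a minimal sketch, assuming the $85-per-hour published rate used in the example above:

```python
def tech_yield(monthly_revenue, billed_hours, posted_rate=85.0):
    """Yield per billed hour for one technician, plus the gap
    against the published service rate."""
    per_hour = monthly_revenue / billed_hours
    return per_hour, per_hour - posted_rate

# Illustrative: $12,600 of labor revenue over 140 billed hours
per_hour, gap = tech_yield(12_600, 140)
```

A positive gap suggests the technician is out-earning the posted rate; a negative one prompts the execution/customer/work-type analysis the article describes.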

One more critical measure we are using with dealers when consulting on service departments addresses the yield per hour per customer. Here, we want to begin with the descending customer list. We know in analyzing data in dealerships that the Pareto Principle applies. In fact, in most instances, even in service, 10 percent of the customers are 70 percent of the business. So, first complete a descending sales list, but cut off this analysis for the top 10 percent only, or the top 70 percent of revenue, whichever you prefer. Take this short list, divide labor revenue by billed hours and develop your yield per hour per customer. Again ask, how many are greater than $85 per hour, how many are at $85 per hour and how many are less than $85 per hour – more critically, why are they at this level?
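The Pareto cut and per-customer yield described above might be sketched like this. The customer names and figures are made up for illustration, and the cut here keeps the top 10 percent of customers (the alternative top-70-percent-of-revenue cut would need a cumulative-sum loop instead):

```python
def top_customer_yield(labor_revenue, billed_hours, top_pct=0.10):
    """Yield per hour for the top customers by labor revenue
    (Pareto cut: keep only the top 10% of the descending list)."""
    ranked = sorted(labor_revenue, key=labor_revenue.get, reverse=True)
    keep = max(1, round(len(ranked) * top_pct))
    return {c: labor_revenue[c] / billed_hours[c] for c in ranked[:keep]}

# 20 invented customers, each yielding $82.50 per billed hour
revenue = {f"cust{i}": 1_000 * i for i in range(1, 21)}
hours = {c: r / 82.5 for c, r in revenue.items()}
yields = top_customer_yield(revenue, hours)
```

Comparing each entry in `yields` to the $85 published rate produces the over/at/under buckets the article asks about.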

So, as we examine service departments we want to reconfirm the traditional measures around which we drive service success. Plus, we want to introduce these three new measures to see what additional insight they give us into driving service performance.

Data is available. Be creative in seeking information that answers key questions. Don’t be surprised if this data also helps answer questions you haven’t even asked.


Matthew Hicks is with Currie Management Consultants, Inc., located in Worcester, Massachusetts, and on the web at www.curriemanagement.com.

posted at MHEDA Journal

If It Matters, Measure It: Tips to Determine Field Service KPIs

There are several sayings out there about measuring company performance:

“What gets measured gets done.”

“You can’t get what you don’t measure.”

“An acre of performance is worth a whole world of promise.”

And, my personal favorite, “If It Matters, Measure It.”

As your company continues to grow, quantifiable data will become more and more important to owners, customers, and other stakeholders of the business. By measuring key performance indicators in all areas of your field service business now, you can count on more predictable results in the future.

What are KPIs?

Key Performance Indicators (KPIs) provide a means to quantify and measure business performance toward the attainment of organizational goals. Goals vary by industry and company. In the service industry, businesses may evaluate themselves on KPIs that measure customer satisfaction, technician performance, and operational efficiency.

No matter what KPIs your company chooses to measure, the most important factor is that the metrics align with organizational goals. What are you looking to accomplish in the next quarter, 6 months, or year?

Goals Drive KPIs

The goals you set will drive the performance metrics you choose. For example, if your goal is to improve technician performance, the metrics you measure could include average time to respond (AVR), Mean Time to Repair (MTTR), and number of calls or revenue generated per technician.  A standard rule of thumb is to choose 4-5 KPIs. Otherwise, your metrics, at some point, will start to contradict each other.
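For instance, Mean Time to Repair is just the average duration across closed tickets. A minimal sketch using Python's standard library; the ticket format here is hypothetical:

```python
from datetime import datetime, timedelta

def mean_time_to_repair(tickets):
    """MTTR: average of (closed - opened) across (opened, closed) pairs."""
    durations = [closed - opened for opened, closed in tickets]
    return sum(durations, timedelta()) / len(durations)

mttr = mean_time_to_repair([
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 11, 0)),  # 2 hours
    (datetime(2023, 5, 2, 8, 0), datetime(2023, 5, 2, 12, 0)),  # 4 hours
])
```

Average time to respond works the same way, just measured from ticket open to first technician contact instead of to closure.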

KPIs Must Be Quantifiable

In addition to aligning metrics with organizational goals, make sure the chosen metrics are quantifiable. It is extremely difficult to measure success if targets are not quantitative in nature or associated with a number on a rating scale. For example, if you’re looking to improve customer service from good to great, it would be difficult to distinguish one level of satisfaction from the other. By associating each level of satisfaction with a number, such as 1-5, you can quickly determine the average customer satisfaction rate.
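Converting those 1-5 responses into an average score is then trivial; a minimal sketch:

```python
def average_satisfaction(ratings):
    """Average customer satisfaction on a 1-5 survey scale."""
    return round(sum(ratings) / len(ratings), 2)

score = average_satisfaction([5, 4, 4, 3, 5])
```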

Compiling KPIs

The ability to obtain these figures is another important factor when measuring KPIs. Most field service managers do not have time to review multiple pages or spreadsheets of data. Dashboards and reports that are systematic and automated provide easy-to-digest information that can be processed quickly so leaders can make informed business decisions. Larger organizations have the luxury of collaborating with IT to build customized dashboards and reports. Small to mid-sized field service companies can utilize field service management software that comes with built-in reports. Since the company manages all aspects of field service within the software, managers can quickly pull reports on open invoices, revenue generated by technician, and customer satisfaction surveys.

Gaining Employee Buy-In

None of the above matters if your team does not buy into the organizational goals. By incenting service technicians to complete individual goals, such as upselling, renewing service contracts, or completing a certain number of jobs per week, the business can ensure that all employees are working to meet the company’s overall goals. For example, a FieldLocate customer supplies weekly bonuses to his contractors when they surpass their regular job quota. This information is stored in FieldLocate so the manager can quickly run a job report to determine the bonus amount.
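The bonus rule described for the FieldLocate customer can be sketched as a simple quota check. The quota and bonus amounts below are invented for illustration, not figures from the article:

```python
def weekly_bonus(jobs_completed, quota=25, bonus_per_job=40):
    """Hypothetical incentive: a flat bonus for each job completed
    over the weekly quota; nothing if the quota isn't met."""
    return max(0, jobs_completed - quota) * bonus_per_job

bonus = weekly_bonus(31)  # six jobs over quota
```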

Does your organization measure KPIs? If so, which do you find most important for your business?


Five Golden Rules of Measuring Performance


Measuring may not be the thing that really excites you as a leader. At the same time, you have probably heard people say over and over again that what gets measured gets done. Any business that is serious about achieving results needs to measure performance. So what are the five golden rules of measuring performance?


Rule 1: Be clear on what you want to achieve

If you don’t know what the end destination is, just about any direction will do. Yet the reality is that the clearer you can be about what you want to achieve from your business, the easier it will be to develop and implement measures. Taking the time to define, in clear and straightforward terms, what you want to achieve is like laying the foundations for a house.

Rule 2: Separate the things to do from the things that are critical

Filling up your schedule with things to do is not difficult. It is pretty easy to think that it is volume that matters. As a leader, you know that what’s important is to be clear on the things that drive results rather than the things that fill up your schedule. Do you know what those 5 critical things are in your organisation?

Rule 3: Watch out for those who focus on the data

Information is rarely 100% complete and accurate. As a result, people sometimes spend their time and effort picking holes in the reports rather than engaging with the underlying messages. Aim to keep people focused on the big picture and key messages rather than the petty bits of detail.

Rule 4: Eliminate the “I thought” discussions

What do I mean by the “I thought discussion”? Basically it’s when 6 people have 4 different views on what a particular measure is telling them or how it is calculated. To overcome this, make the basis of calculation and scope of any measurement crystal clear.
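One lightweight way to make the basis of calculation crystal clear is to publish the definition alongside the number itself, for example as a shared metric dictionary. The metric and its definition below are hypothetical:

```python
# A shared, version-controlled source of truth for how each measure
# is calculated, so six people can't hold four different views of it.
METRIC_DEFINITIONS = {
    "first_time_fix_rate": {
        "formula": "jobs resolved on the first visit / total jobs closed",
        "scope": "all field jobs closed in the calendar month",
        "excludes": "cancelled appointments and warranty recalls",
        "owner": "service operations",
    },
}
```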

Rule 5: Focus on action

Measurement is only worthwhile if it results in choices and action. Keep the focus on what the organisation is going to do as a result of what the performance measurement reveals.

Performance measurement can be a huge asset in achieving success as a leader. The question is: are you ready to leverage those benefits?


From Measuring Management by Anthony Ewing

Five Ways To Improve the Customer Experience in the Field


When customers have great experiences with companies, they buy more, they tell others, and they are less likely to leave. Forrester research (The Revenue Impact of Customer Experience) and dozens of other studies have calculated that those positive feelings can amount to hundreds of millions and even billions in additional revenue. If you’ve looked at reviews of companies or asked friends for suggestions, you’ve been part of that engine. Despite what seems like common sense, the usual field services experience doesn’t conjure delight. Instead, one can feel a stomach knot thinking about the ensuing hassles across many B2C and B2B industries, including telecommunications, utilities and manufacturing. Typically the interactions include:

  • Scheduling gauntlets. First, customers must navigate an IVR, contact center or web self-service gauntlet to get service. Equipment rarely comes with contact center numbers of any use, and serial numbers rarely connect to customer data in any way that would help. The worried chef at one company put it this way: “Sysco seems to bring the order whenever they want…in the middle of lunch rush. Is there a way to establish specific days for delivery?”
  • Unwieldy delivery and service windows. Next, customers must deal with delivery windows that can range up to half a day – or in the case of a moving company, up to 5 days. While this may provide firms with flexibility, it fails to account for the inconvenience, time and money (use of vacation time or unpaid leave) customers expend for the company’s sake.
  • Uninformed, unprepared & unprofessional technicians. Finally, customers will likely have to work with outsourced third-party technicians or service staff who have little visibility into the customer’s overall interaction history (from the point of sale to any past service calls), making customers repeat basic histories. Without that knowledge, technicians often fail to show up with the right equipment, tools or replacement parts, wasting time and money for both parties. And to top it all off, they come with little empathy for customers, measured on how fast they close tickets rather than on whether customers feel well taken care of. Forget all that brand-building effort…these brand ambassadors leave the biggest impression.



Rethink field services as a form of customer engagement.

Businesses and consumers alike view on-site visits as a disruption. Make the visit worthwhile by using it as an opportunity to build a personal relationship with the customer. Improve the customer experience and recognize the fantastic opportunity to engage customers. To do this:

  • Be Proactive: Harness big data to create proactive service. Smart firms will use big data and predictive analytics to understand the root causes of service call variance and take action to mitigate them. Even better, they will pre-empt problems entirely and focus on customer success with products and services, whether that’s through the crucial first 90 days or as part of an on-going relationship. Internet-connected sensors embedded in machines allow firms like Abbott Diagnostics to collect terabytes of data monthly, enabling the firm to predict when a machine is about to fail and proactively dispatch a technician with the right equipment ahead of time to prevent the failure. Preventing problems not only has a big impact on the overall experience, it can positively impact your clients’ bottom lines.
  • Be Different: Re-think dispatch. When Uber and Lyft created their ride-sharing businesses, they didn’t just create a mobile app – they re-invented dispatch to better match resources with customer needs, and along the way essentially eliminated the contact center. Design self-service with relevant customer data to automate dispatch and make it easier to schedule requests with flexible service levels. Examine the routing of field-based crews and re-think the dispatch model to maximize customer-facing time with tools like Salesforce Field Service Lightning. Every time a field tech needs to return to the office or warehouse for a part or tool, they lose critical customer-facing time. Consider mid-air refueling: rather than stopping to get gas, the gas comes to you and you stay in the field. When feasible, bring the right material to the field to save time.
  • Be transparent: Create flexible and visible delivery windows. With 1-Hour delivery windows, Safeway shoppers can schedule a grocery delivery at an available time that perfectly fits a tight schedule. Shoppers can also save money by selecting an available 2-Hour window or an environmentally friendly “Green” 4-Hour window. Comcast’s “Uber-esque” mobile app sends customers an alert when a technician is about 30 minutes away…and allows users to track the technician on a map (Graphic source: Comcast.com). Build in redundant back-ups, such as additional technicians to pick up slack when one visit goes long.



  • Be empathetic: Move beyond installation to customer success. While she was the VP of branded customer experience at Time Warner Cable, Catherine Cattrell initiated a pilot aimed at improving the customer experience out in the field. She recruited a set of high-performing technicians, gave them a more professional uniform, provided soft-skills training to help them empathize with customers’ needs, and instructed them not just to install the cable services but to make sure the customer could do what they wanted with them. While these house calls took longer, they also reduced future calls to the contact center and increased the speed at which customers experienced the value of the services they had purchased.
  • Be attuned: Change to customer-focused metrics. Internal performance metrics often have little connection with the actual customer experience. Think about “on time departure” for airlines. How many times do airlines pull back from the gate to comply with an on-time departure target, only to sit 10 feet from the gate for 90 minutes due to an air traffic delay? Leading-edge contact centers at firms like American Express have eliminated outdated, internally focused metrics like call handling time in favor of more engagement and better experiences that lead to future revenue – field services organizations should follow suit. Elevate customer-focused metrics like “ease of doing business” and “likelihood to recommend” (Net Promoter Score) while putting resolution time on the back burner.
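Net Promoter Score, mentioned above, is computed from 0-10 "likelihood to recommend" survey responses; a minimal sketch:

```python
def net_promoter_score(ratings):
    """NPS: percent promoters (9-10) minus percent detractors (0-6)
    on a 0-10 likelihood-to-recommend scale; 7-8 are passives."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100

nps = net_promoter_score([10, 10, 9, 7, 2])  # 3 promoters, 1 detractor
```

The score ranges from -100 (all detractors) to +100 (all promoters), which is why it rewards resolving the customer's problem rather than merely closing the ticket fast.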

Your customers’ expectations – regardless of industry, and even if you are a business-to-business company – are being set by Amazon, Uber and Apple. They don’t care about the complexity of your internal operational environment or the challenges of your old and outdated IT systems. They want interactions to be easy and not to waste their valuable time. Firms that succeed see customers who do more business with them and who become powerful marketing advocates through word of mouth.