
Saturday, May 29, 2021

Management Basics: Determining a Performance Rating

Originally posted on LeadDev.com

One of the most stressful parts of the end-of-year process for managers is the dreaded performance rating. This process forces you to boil down all of the work that a person did over the year, all of their accomplishments and misses, into a numeric score (often from 1–5) that may also come with words like ‘meets expectations’, ‘exceeds expectations’, or the unhappy ‘misses expectations’.

If you work for a company that has a ‘pay for performance’ model, the rating you assign will influence the employee’s compensation. It may be used as an input for promotions and, yes, as a factor in firing or laying off employees. So while this process is painful, every manager needs to get comfortable with assigning ratings to their engineers, and with justifying those ratings both to the person receiving the rating and, potentially, to their peers, boss, or other stakeholders who have a say in the distribution of rating scores at the company.

The rating you assign as a manager is a combination of measurable inputs and your own judgment. Today, I’m going to focus on how you can start to get comfortable with this step by developing your own judgment and the supporting metrics you can use to guide your ratings.

Evaluations start long before it’s time to actually determine a rating

The first major input to any kind of fair evaluation is the work the employee needed to accomplish, and the work they actually accomplished. This process starts with goal-setting and a review of the expectations of the role, as well as any areas you already know they need to improve on. This may seem obvious but, as we’ll see, it is actually a tricky thing to get right!

As an example of using goal-setting as an input in performance rating, let’s explore a method that splits goals up into three categories: must be achieved, stretch goals, and moonshot goals.

By setting ‘must be achieved’ goals, you tell the employee what the important work for the year should be, given their level and role. Then you collaboratively think about what stretch goals might look like, including a goal or two that would be really outstanding to achieve: a moonshot goal. When you have your whole organization calibrated on how to set these kinds of goals, you can use them as a critical input for performance rating. If someone met the base goals, they probably met expectations. If they met the base goals and some of the stretch goals, they exceeded or substantially exceeded expectations. If they achieved the base goals, the stretch goals, and a moonshot goal, that would imply a year of incredible achievement that supports the highest rating of all.
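To make that mapping concrete, here is a minimal sketch in Python of how the goal-tier logic might be encoded as a starting point. The function name, tier inputs, and rating labels are illustrative assumptions, not a formula anyone prescribes, and the output is only a default to reason from, never the rating itself.

    # Hypothetical sketch: turning goal-tier outcomes into a starting-point rating.
    # The labels and thresholds below are assumptions for illustration only;
    # the judgment about WHY goals were hit or missed still belongs to the manager.

    def suggest_rating(base_met: bool, stretch_hit: int, moonshot_hit: bool) -> str:
        """Return a default rating label from goal outcomes."""
        if not base_met:
            return "misses expectations"        # base goals are the floor
        if moonshot_hit and stretch_hit > 0:
            return "substantially exceeds expectations"  # base + stretch + moonshot
        if stretch_hit > 0:
            return "exceeds expectations"       # base plus some stretch goals
        return "meets expectations"             # base goals only

    # Example: met the base goals and two stretch goals, no moonshot.
    print(suggest_rating(base_met=True, stretch_hit=2, moonshot_hit=False))
    # -> exceeds expectations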

Of course, the downside of goal-setting is that goals often become irrelevant in the course of a year. This is where your judgment must come into play. When they missed a goal, was it because it became irrelevant, because they failed to execute, or because they were unavoidably busy with unplanned critical work? Ideally you revisit goals regularly and adjust them when they become irrelevant, but I’m a realist and know that many people forget to do this or get too busy to bother. It’s a lot of work to constantly track this! So most of us must get comfortable with looking across the scope of goals (hit, missed, and deferred) and valuing them as they are.

Judgment is a major part of evaluating the goals of more senior people, particularly senior managers. On the one hand, they will tend to be responsible for the achievements of a team of people, and many things can happen to a team in a given year to derail their goals. On the other hand, the more senior a manager gets, the more they are expected to anticipate ways in which their team may fail to achieve their goals; this anticipation and course correction is part of performing well as an experienced manager. Only you can tell the difference between a manager who was blindsided by events and a manager who just failed to plan well, or who couldn’t course correct their team effectively.

Decide on your own axes of evaluation and attempt to apply them evenly

A manager I used to work with had a very methodical approach that he used to evaluate managers on his team. He had seven characteristics of management that he considered essential to doing the job, and would score each manager on each area, then roughly average them to get a final score. A different manager that I worked with at Rent the Runway did something similar with our four engineering ladder attributes. She would grade each person based on their level and role, and use that to justify how she rated her team.

I find this approach to be a helpful part of my ratings process. Looking across a set of attributes that I believe are important forces me to think about all of the skills a person brings to the table, and how they seem to be doing at each of them. This helps me identify strengths and weaknesses, and structures my thinking when I am evaluating across people in similar roles.

Rating by category works when you have a lot of clarity about what is important (as in the seven characteristics of managers), or a ladder that is well-written enough to support this. But it does have its downsides, and these will become apparent when you try to put it into practice. People are not easy to put into boxes. You may have a manager who is incredibly weak in one area and incredibly strong in another. Does this average out? Or are they actually underperforming, because their weak area is so essential that it means they aren’t doing their job? Or on the flip side, are they over-performing because, for this role and team, the weak area doesn’t matter so much? Now you have to add judgment into the mix.
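As a thought experiment, here is a rough Python sketch of what that scoring and averaging might look like, including one way to treat an essential area differently rather than letting a high average hide it. Every attribute name, weight, and threshold here is a made-up assumption for illustration, not a recommended scheme.

    # Hypothetical sketch: combining per-attribute scores (1-5) into one number.
    # Attribute names, weights, and the 'essential floor' rule are assumptions
    # made purely for illustration; they are not a standard or recommended formula.

    scores = {
        "people development": 4,
        "delivery": 5,
        "technical judgment": 2,
        "communication": 4,
    }

    weights = {
        "people development": 0.3,
        "delivery": 0.3,
        "technical judgment": 0.2,
        "communication": 0.2,
    }

    essential = {"people development"}  # areas where a very low score caps the result

    weighted_avg = sum(scores[a] * weights[a] for a in scores)

    # A weak essential area shouldn't be hidden by strengths elsewhere.
    if any(scores[a] <= 2 for a in essential):
        final = min(weighted_avg, 2.5)
    else:
        final = weighted_avg

    print(round(final, 2))  # 3.9 for the numbers above

Whether a weak area caps the rating or gets averaged away is exactly the judgment call the arithmetic cannot make for you; a sketch like this only makes the choice explicit.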

It’s also hard to apply this model when you have a team of people who all have somewhat different jobs. It’s very hard to write level criteria that work well for both front-end and back-end engineers, let alone systems specialists, reliability engineers, and DBAs. Then add in the fact that you may be managing someone who is in a bespoke role, say developer relations or technical writing. Now you may not have enough data points to make sure that you are rating the person fairly, because there is really no one else for you to compare them to.

The final component has to be your own judgment

This brings us to the final aspect of performance rating: manager judgment. As with all things in the world of people management, there is no perfect algorithm you can apply to ensure total fairness and accuracy. You do not want to set yourself up to deny good ratings to people doing good work when they don’t fit perfectly in a box, or when they had all of their goals upended by a strategic shakeup halfway through the year. But you also don’t want to unfairly reward or punish people through an arbitrary process that relies on how much you like or dislike them.

Setting clarity at the beginning of the performance evaluation period via goal-setting and job alignment is important because it gives your employees a clearer idea of what they are going to be evaluated against. Breaking roles down into components and rating each person against each component helps you consider a balanced picture of someone’s strengths and weaknesses across the areas that matter. But ultimately, these inputs are merely some of the data you need, and you must consider the full picture of their work against the ever-changing requirements and challenges of your workplace.

The most interesting and useful part of this exercise is comparing what you get from this data against your gut reaction to the rating you think someone should get. If you are open-minded, the data will show you that your gut runs too high in some cases and too low in others.

This is an opportunity for you to broaden your thinking: what aspects of this role are really important but unstated in our level guidelines? It will force you to postmortem your own leadership: how did we miss on the goals here so badly? And sometimes, it will force you to acknowledge your own bias: why do I always want to give this kind of person a lower rating even though, objectively, their work looks as good as that of this other kind of person?

Your own ratings are rarely the final step. Most companies force an alignment across teams and managers via a calibration exercise, in order to ensure ratings are fairly applied across the company. But if you spend good time up-front getting very clear in your own mind about the rating you believe someone deserves, and the reasoning behind that rating, you will be well-prepared to go into that calibration exercise and defend your ratings with thoughtfulness and care. And you want to be prepared, because once the rating is set, you’ll have what may be the hardest conversation of all: the one where you share the rating with the employee.

Enjoy this post? You might like my book, The Manager’s Path, available on Amazon and Safari Online!
