Saturday, June 12, 2021

An incomplete list of skills senior engineers need, beyond coding

For varying levels of seniority, from senior to staff and beyond.

  1. How to run a meeting, and no, being the person who talks the most in the meeting is not the same thing as running it
  2. How to write a design doc, take feedback, and drive it to resolution, in a reasonable period of time
  3. How to mentor an early-career teammate, a mid-career engineer, a new manager who needs technical advice
  4. How to indulge a senior manager who wants to talk about technical stuff that they don’t really understand, without rolling your eyes or making them feel stupid
  5. How to explain a technical concept behind closed doors to a senior person too embarrassed to openly admit that they don’t understand it
  6. How to influence another team to use your solution instead of writing their own
  7. How to get another engineer to do something for you by asking for help in a way that makes them feel appreciated
  8. How to lead a project even though you don’t manage any of the people working on the project
  9. How to get other engineers to listen to your ideas without making them feel threatened
  10. How to listen to other engineers’ ideas without feeling threatened
  11. How to give up your baby, that project that you built into something great, so you can do something else
  12. How to teach another engineer to care about that thing you really care about (operations, correctness, testing, code quality, performance, simplicity, etc.)
  13. How to communicate project status to stakeholders
  14. How to convince management that they need to invest in a non-trivial technical project
  15. How to build software while delivering incremental value in the process
  16. How to craft a project proposal, socialize it, and get buy-in to execute it
  17. How to repeat yourself enough that people start to listen
  18. How to pick your battles
  19. How to help someone get promoted
  20. How to get information about what’s really happening (how to gossip, how to network)
  21. How to find interesting work on your own, instead of waiting for someone to bring it to you
  22. How to tell someone they’re wrong without making them feel ashamed
  23. How to take negative feedback gracefully

Enjoy this post? You might like my book, The Manager’s Path, available on Amazon and Safari Online!

Saturday, May 29, 2021

Management Basics: Determining a Performance Rating

 originally posted on LeadDev.com

One of the most stressful parts of the end-of-year process for managers is the dreaded performance rating. This process forces you to boil down all of the work that a person did over the year, all of their accomplishments and misses, into a numeric score (often from 1–5) that may also come with words like ‘meets expectations’, ‘exceeds expectations’, or the unhappy ‘misses expectations’.

If you work for a company that has a ‘pay for performance’ model, your rating will influence the employee’s compensation. It may be used as an input for promotions and, yes, as a factor in firing or laying off employees. And so while this process is painful, every manager needs to get comfortable with assigning ratings to their engineers and justifying those ratings, both to the person receiving the rating and, potentially, to your peers, boss, or other stakeholders who have a say in how rating scores are distributed at the company.

The rating you assign is a combination of measurable inputs and your own judgment. Today, I’m going to focus on how you can start to get comfortable with this step by developing your own judgment and the supporting metrics you can use to guide your ratings.

Evaluations start long before it’s time to actually determine a rating

The first major input into any kind of fair evaluation is the work the employee needed to accomplish and the work they actually did accomplish. This process starts with both goal-setting and a review of the expectations of the role, as well as any areas you already know they need to improve on. This may seem obvious but, as we’ll see, it is actually a tricky thing to get right!

As an example of using goal-setting as an input in performance rating, let’s explore a method that splits goals up into three categories: must be achieved, stretch goals, and moonshot goals.

By setting ‘must be achieved’ goals, you inform the employee of what the important work for the year should be, given their level and role. Then you collaboratively think about what stretch goals might look like, including a goal or two that would be really outstanding if they could achieve it: a moonshot goal. When you have your whole organization calibrated on how to set these kinds of goals, you can use them as a critical input for performance ratings. If someone met the base goals, they probably met expectations. If they met the base goals and some of the stretch goals, they exceeded or substantially exceeded expectations. If they achieved the base goals, the stretch goals, and a moonshot goal, that would imply a year of incredible achievement that supports the highest rating of all.
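
To make that mapping concrete, here is a minimal sketch of the rubric as code. The tier names, labels, and thresholds are illustrative assumptions rather than a formula from this post, and the result is only a starting point for the judgment discussed below.

    # Hypothetical sketch: turn goal outcomes into a starting-point rating.
    # Labels and thresholds are illustrative; judgment still has to account
    # for goals that became irrelevant or were displaced by unplanned work.
    def suggest_rating(base_met: bool, stretch_met: int, moonshot_met: bool) -> str:
        if not base_met:
            return "misses expectations"   # unless the misses were outside their control
        if moonshot_met and stretch_met > 0:
            return "highest rating"        # base + stretch + a moonshot
        if stretch_met > 0:
            return "exceeds expectations"  # base goals plus some stretch goals
        return "meets expectations"        # base goals only

    print(suggest_rating(base_met=True, stretch_met=2, moonshot_met=False))
    # prints: exceeds expectations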

Of course, the downside of goal-setting is that goals often become irrelevant over the course of a year. This is where your judgment must come into play. When someone missed a goal, was it because the goal became irrelevant, because they failed to execute, or because they were unavoidably busy with unplanned critical work? Ideally you revisit goals regularly and adjust them when they become irrelevant, but I’m a realist and know that many people forget to do this or get too busy to bother. It’s a lot of work to constantly track this! So most of us must get comfortable with looking across the full set of goals (hit, missed, and deferred) and valuing them as they are.

Judgment is a major part of evaluating more senior people’s goals, particularly those of more senior managers. On the one hand, they will tend to be responsible for the achievements of a team of people, and many things can happen to a team in a given year to derail its goals. On the other hand, the more senior a manager gets, the more they are expected to anticipate ways in which their team may fail to achieve its goals; this anticipation and course correction is part of performing well as an experienced manager. Only you can tell the difference between a manager who was blindsided by events and one who simply failed to plan well, or who couldn’t course correct their team effectively.

Decide on your own axes of evaluation and attempt to apply them evenly

A manager I used to work with had a very methodical approach to evaluating the managers on his team. He had seven characteristics of management that he considered essential to doing the job, and he would score each manager in each area, then roughly average the scores to get a final number. A different manager I worked with at Rent the Runway did something similar with our four engineering ladder attributes. She would grade each person based on their level and role, and use that to justify how she rated her team.

I find this approach to be a helpful part of my ratings process. Looking across a set of attributes that I believe are important forces me to think about all of the skills a person brings to the table, and how they seem to be doing at each of them. This helps me identify strengths and weaknesses, and structures my thinking when I am evaluating across people in similar roles.
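
As a purely illustrative sketch (the attribute names below are invented, not the seven characteristics or the Rent the Runway ladder attributes), the mechanics amount to scoring each person per attribute and averaging:

    # Hypothetical sketch of rating by category: score each attribute 1-5,
    # then average. Attribute names are made up for illustration, and the
    # average is only one input to judgment, not the rating itself.
    ATTRIBUTES = ["execution", "people development", "technical judgment", "communication"]

    def average_score(scores):
        return sum(scores[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

    alex = {"execution": 5, "people development": 3,
            "technical judgment": 4, "communication": 4}
    print(average_score(alex))  # prints 4.0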

Rating by category works when you have a lot of clarity about what is important (as in the seven characteristics of managers), or a ladder that is well-written to support this. But it does have its downsides, and these will become apparent when you try to put this into practice. People are not easy to put into boxes. You may have a manager that is incredibly weak in one area and incredibly strong in another. Does this average out? Or are they actually underperforming, because their weak area is so essential that it means they aren’t doing their job? Or on the flip side, are they over-performing because, for this role and team, the weak area doesn’t matter so much? Now you have to add judgment into the mix.

It’s also hard to apply this model when you have a team of people who all have somewhat different jobs. It’s very hard to write level criteria that work well for both front-end and back-end engineers, let alone systems specialists, reliability engineers, and DBAs. Then add in the fact that you may be managing someone in a bespoke role, say developer relations or technical writing. Now you may not have enough data points to be sure you are rating the person fairly, because there is really no one else for you to compare them to.

The final component has to be your own judgment

This brings us to the final aspect of performance rating: manager judgment. As with all things in the world of people management, there is no perfect algorithm you can apply to ensure total fairness and accuracy. You do not want to set yourself up to deny good ratings to people doing good work when they don’t fit perfectly in a box, or when they had all of their goals upended by a strategic shakeup halfway through the year. But you also don’t want to unfairly reward or punish people through an arbitrary process that relies on how much you like or dislike them.

Setting clarity at the beginning of the performance evaluation period via goal-setting and job alignment is important because it gives your employees a clearer idea of what they are going to be evaluated against. Breaking roles down into components and rating each person against each component helps you consider a balanced picture of someone’s strengths and weaknesses across the areas that matter. But ultimately, these inputs are merely some of the data you need, and you must consider the full picture of their work against the ever-changing requirements and challenges of your workplace.

The most interesting and useful part of this exercise is comparing what you get from this data against your gut reaction to the rating you think someone should get. If you are open-minded, the data will show you cases where your gut is too high and cases where it is too low.

This is an opportunity for you to broaden your thinking: what aspects of this role are really important but unstated in our level guidelines? It will force you to postmortem your own leadership: how did we miss on the goals here so badly? And sometimes, it will force you to acknowledge your own bias: why do I always want to give this kind of person a lower rating even though, objectively, their work looks as good as that of this other kind of person?

Your own ratings are rarely the final step. Most companies force alignment across teams and managers via a calibration exercise, in order to ensure ratings are applied fairly across the company. But if you spend good time up front getting very clear in your own mind about the rating you believe someone deserves, and the reasoning behind that rating, you will be well prepared to go into that calibration exercise and defend your ratings with thoughtfulness and care. And you want to be prepared, because once the rating is set, you’ll have what may be the hardest conversation of all: the one where you share the rating with the employee.

Enjoy this post? You might like my book, The Manager’s Path, available on Amazon and Safari Online!

Sunday, January 24, 2021

Make Boring Plans

You’re probably familiar with the concept of Choose Boring Technology. If you’re not, I’ll wait for you to read the excellent blog post by Dan McKinley that inspired a much-needed correction in tech to balance “innovation” with stability. I’m here to take this to the next level, and talk about how “boring” should apply not just to your technology choices, but to your plans.

I spoke to someone several months ago who was frustrated with their management chain. They were anxious because the management chain was always pushing on delivery in an unpredictable way. The team was under really high pressure, even though the projects they were working on were all part of long-running infrastructure renovations. Why was this so stressful? Why, they asked, was the plan not already laid out? Why isn’t this boring?

Why isn’t this boring?

It might say something about the area that I focus on, Platform Engineering*, that “why isn’t this boring” would ever come up. You see, usually when people are in this situation, they blame everything but the lack of planning for their problems. It is a common belief in engineering that, with a clear enough vision, the rest of the pieces of work will fall into place. With a well-understood goal and smart engineers, the idea is that you can trust that people will work towards that vision faithfully and deliver something great. And this does, in rare cases, seem to work. After all, half of the hiring wisdom of the past has been “hire smart people and get out of their way.” Magic can happen with a small, highly-motivated group of people building a new thing towards a clear goal.

However, this concept of building towards a grand vision falls apart when you are building the underlying software that other engineers rely on. For better and for worse, Platform often has to be the place where we push new things to the rest of the company. A big change in the platform is the definition of an innovation token being spent. You want to move to Kubernetes? You’re gonna spend a lot of time figuring out how to operate it well in your environment, to start. You want to support a massive monorepo for the whole company? Hello innovation tokens everywhere, as you try to make it scale and perform well for all of your engineers and all of the languages they want to use. Speaking of new languages, you want to introduce Rust, or OCaml, or even just C++17? The platform will have to support it.

Before you go blaming the Platform team for spending all of the company’s innovation tokens, remember that these initiatives are often driven by someone else. If Platform doesn’t support Kubernetes, some team will decide to build shadow infrastructure because they’re convinced it will solve the problems they have managing their tens of microservices, and then it will land in the lap of Platform after a year, with none of the work done to make it easy to operate but all of the operational expectations anyway. Our goal is to build just enough ahead of you so that when you realize you need the capacity, it’s there, or can be with minimal fuss, and it’s reliable to boot.

Novel Technology Deserves Boring Plans

Since we often end up in the land of novel technology, we owe it to ourselves and our customers to be boring in other ways. And the most important way that a Platform team can be boring is by writing boring plans.

It’s great to have a vision for the future of the platform. But achieving that vision is not just about building big, scalable, complex new software infrastructure; a non-trivial amount of our job is moving everyone from the last generation of this software infrastructure to the next generation. Upgrade the programming languages, the operating systems, the libraries. Move from OpenStack to Kubernetes, from on-prem to the cloud, from maven to bazel, from svn to git. Migrate from the old storage system that was optimized for a rare legacy use case to a new storage system with higher availability and performance.

Making these changes happen, under the covers, has both interesting parts and boring parts. If you’re not a platform engineer, you shouldn’t see the interesting parts. The interesting parts are where we go and tune the kernel to perform well for our workloads. The interesting parts are where we build out automatic failover, so that we can meet the availability needs of the workloads. The interesting parts are the many patches we might contribute back to the inevitably-broken open source projects that hold half the world together but still don’t seem to understand how to work with FQDNs. The interesting parts are where we understand deeply the dependencies of our technology stack, the opportunities and limitations, and build solutions for our customers that fix limitations and unlock new opportunities.

When we don’t attend to the boring parts by making our plans predictable, the interesting parts turn into extra stress on top of the overwhelming anxiety of juggling these moves. When you make plans that start and end with the vision “we will move everyone to the public cloud, and it will be great,” you find yourself in the exhausting situation of running all of your old infrastructure, trying to figure out the new cloud stuff, and dealing with customers who are confused and angry that the thing they want to do doesn’t seem to quite work in either world.

Contrast this to the team that turns that vision into boring plans. They start with a small proof of concept, migrating perhaps a single application and learning in the process. Then they do the work of looking across other applications on the old platform, to see which ones are similar to the one that is now in the cloud. They work with those users to get them migrated and running, all the while gaining comfort with this new environment and uncovering the interesting gotchas. They write down what they’re learning, so that each new step in the migration builds on the last, and others can be pulled in without a huge knowledge transfer. The team focuses on the hard parts of the moment, whether they are figuring out data mirroring, or fixing a bug in a popular open source project, and they are free from the anxious overhead of wondering what is happening tomorrow. The users are also free from the stress of wondering when the work they need will be delivered, because the team has communicated plans that account for this process of iteration, learning, and gradual migration.

A Strategic Plan Is Obvious and Simple, Even Boring

Making boring plans is a foundational step in getting good at setting engineering strategy. Strategy is often confused with innovation and vision in tech circles, but they are far from the same thing. Having a future vision and recognizing the potential of innovations is valuable in building great strategy, but strategies that rely on unproven magic bullets are not good strategies. Good strategy identifies a problem with the current situation, proposes a principled approach to overcome it, and then shows you a coherent roadmap to follow. Strategy is not in the business of razzle-dazzle, it’s in the business of getting to the core of the issues so that the solution becomes simple and obvious. Good strategy provides the clarity that enables boring plans.

To become great at technology strategy, start by getting good at making boring plans. Get clear about the problem you are overcoming with your plans. Make the principles of the work at each stage clear:

  • How do we know when we’re in exploration mode, and how do we know when we’re ready to commit to a direction?
  • Have we talked to our users? Do we understand how they are using our systems, and have we made plans that account for their needs?
  • What are the problems we’re focused on solving right now, and which problems are we leaving to worry about another day?
  • How do we know if we’re on the wrong track? What are the guardrails, milestones, or metrics that tell us whether the plan needs review?

Your teams need more than a clear idea of the end state and the hope that smart engineers will inevitably get you there. Plans that are formed around hope are failing plans; hope is not a plan. Plans that change constantly are failing plans. When your plans are constantly changing, it is a sign that you are either making plans that express a certainty you don’t have, or that you haven’t done the research to earn that certainty. Either way, you are wasting time and putting unnecessary stress on the team.

So leaders, you owe it to your teams, and to your users, to free them from the tyranny and stress of uncertainty. You must do the work to go beyond vision, create concrete actions, and make boring plans.