Developer (and no-code developer) productivity is a mess. Devs are sick and tired of being told to hit some random number of commits or story points that don't mean jack when it comes to delivering real value to customers. And when managers try to shove productivity metrics down their throats, devs just roll their eyes and resist, making it seem like they don't care about being held accountable. But that's not true. Devs care about delivering value just as much as the suits upstairs. The problem is, no one can agree on how to actually measure progress.
Let’s dive into this and explore some ways to tackle it head-on. We'll talk about how to measure productivity in a way that actually makes sense, how to find and fix the bottlenecks that are holding your team back, and how to make changes without pissing everyone off.
First off, can developer productivity even be quantified? Well, it depends on how you define it. If you're just looking at things like lines of code or number of commits, sure, you can measure that. But what the hell does that actually tell you? How many lines of code does it take to create a great user experience? How many commits should a productive dev be churning out each day? Nobody knows.
There's been a ton of pushback against trying to boil productivity down to just numbers. So let's flip the script and ask outcome-oriented questions instead: Is the team actually delivering value to customers? Is it getting easier or harder for them to do that work?
These questions are trickier to answer with just numbers, but they'll get you a lot closer to measuring the kind of productivity that actually makes a difference. We'll dive into some ways to measure productivity from both angles, but first, let's be real about which metrics are meaningless on their own. Tracking this stuff can still be useful, but you have to put it in context.
Goodhart's Law is pretty damn straightforward: "When a measure becomes a target, it ceases to be a good measure." In other words, as soon as you start using some metric as a goal, it becomes useless as an actual way to measure anything meaningful.
So, are activity metrics ever useful for measuring how productive your engineers are? Well, yes and no.
Tracking how many hours your employees are sitting at their desks each week isn't going to tell you anything about whether they're actually getting important work done. And just counting up commits, lines of code, or any other single metric by itself is probably not going to give you a good idea of whether your teams are being productive in ways that actually make a difference for your company. A lot of the real work in programming and architecture happens inside people's heads, not just when they're typing on a keyboard.
But that doesn't mean these activity metrics are completely useless. They can give you a general sense of where to start digging to find potential problems or bottlenecks.
So yeah, activity metrics can be a starting point, but don't rely on them too heavily or you'll end up chasing the wrong things.
When it comes to productivity, it's all about the team, not the individual. Productivity metrics and dashboards should be used to track how the team is doing as a whole, not to call out individual developers. Even if you decide to use certain metrics as targets instead of just as a way to get a sense of what's going on, you don't want to create a situation where people start optimizing for the number instead of the outcome, or where the dashboard turns into a leaderboard for ranking individuals.
So how are engineering managers actually tracking productivity without causing an issue? Well, there are tons of possible metrics, but the ones you choose should depend on what your company is trying to optimize for. Here are some common approaches:
DORA metrics: The DORA researchers (Dr. Nicole Forsgren, Jez Humble, and Gene Kim) wrote a book called "Accelerate" where they identify four key metrics that high-performing teams track: deployment frequency, lead time for changes, time to restore service (MTTR), and change failure rate.
The first two are about how fast you're developing, while the last two are about how stable your shit is.
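To make those four numbers concrete, here's a rough sketch in Python of how you might compute them, assuming you can export simple deployment and incident records from your CI/CD and incident tooling (the Deployment and Incident shapes below are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

# Hypothetical record shapes; real data would come from your CI/CD and incident tools.
@dataclass
class Deployment:
    commit_time: datetime    # when the change was committed
    deploy_time: datetime    # when it reached production
    caused_failure: bool     # did it trigger an incident or rollback?

@dataclass
class Incident:
    started: datetime
    restored: datetime

def dora_metrics(deployments: list[Deployment], incidents: list[Incident], days: int):
    """Rough DORA numbers over a trailing window of `days` days."""
    deploy_frequency = len(deployments) / days                        # deploys per day
    lead_times = [d.deploy_time - d.commit_time for d in deployments]
    lead_time = median(lead_times) if lead_times else timedelta(0)    # lead time for changes
    restore_times = [i.restored - i.started for i in incidents]
    mttr = median(restore_times) if restore_times else timedelta(0)   # time to restore service
    failure_rate = (sum(d.caused_failure for d in deployments) / len(deployments)
                    if deployments else 0.0)                          # change failure rate
    return deploy_frequency, lead_time, mttr, failure_rate
```

Even a crude export like this is enough to spot trends; the point is direction over time, not decimal precision.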
The SPACE framework: This framework is all about looking at productivity from a bunch of different angles, not just one or two metrics in isolation. It proposes five dimensions of developer productivity: satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow.
For each dimension, there are suggestions for metrics you could track and how to gather them. But the authors recommend capturing several metrics across at least three of the dimensions, and combining that with qualitative data like surveys.
The point is, don't just fixate on one number and call it a day. Look at the big picture and gather data from multiple angles to really understand how your team is doing.
If you want to get a real sense of what's slowing your team down, you have to talk to your people and run some surveys. The data you get from those conversations is like a big, flashing neon sign pointing you in the right direction, but it's not the whole story. You'll probably need a mix of hard numbers and subjective feedback to really get to the bottom of what's blocking your team.
So what kind of numbers should you be looking at? Aggregate metrics can give you a high-level view of trends and bottlenecks without singling anyone out: think pull request review times, deployment frequency, or focus hours vs. meeting hours, all of which you can pull from data you already have.
But don't go overboard trying to track every possible metric under the sun. That'll just make things worse. Every decision involves trade-offs, and if you're not clear on what's most important, you can argue for pretty much anything. Start by picking a few metrics that actually matter for what your company is trying to achieve.
And hey, if your metrics are aligned with your goals, there's nothing wrong with devs gaming the system a bit to hit their targets.
But choose your targets and incentives poorly, and you'll get some nasty unintended consequences. Just look at Sears' hourly sales targets that led to overcharging customers and useless busy work, or Ford's fuel-efficient car that had a teensy little flaw—it could burst into flames on impact. These horror stories are a reminder to really think through how your metrics could be gamed and whether those side effects are actually helping or hurting your overall goals.
Code reviews can be a major pain in the behind when it comes to shipping code. If your pull requests are just sitting there collecting dust and you want your team to actually help each other out, you can set a target like "X% of pull requests are reviewed within Y hours." That way, the metric is actually tied to your goal, and by aiming for a percentage instead of "all," you're leaving some wiggle room for the inevitable outliers.
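As a sketch of how you might track that kind of target, assuming you can pull each PR's opened-at and first-review timestamps from your Git host's API (the data shape here is hypothetical):

```python
from datetime import datetime, timedelta
from typing import Optional

# Each entry: (opened_at, first_review_at); first_review_at is None if no review yet.
def pct_reviewed_within(prs: list[tuple[datetime, Optional[datetime]]], hours: float) -> float:
    """Share of pull requests that received a first review within `hours` hours."""
    if not prs:
        return 0.0
    window = timedelta(hours=hours)
    on_time = sum(
        1 for opened_at, first_review_at in prs
        if first_review_at is not None and first_review_at - opened_at <= window
    )
    return 100.0 * on_time / len(prs)
```

Run it over each week's PRs and watch the trend rather than fixating on a single reading.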
If you're trying to build a culture of shipping early and often, setting a target for more pull requests might actually encourage engineers to 'game' the system by breaking their PRs down into even smaller chunks than usual. But guess what? In this case, that's exactly the behavior you want to see, so it's a win-win.
Keeping an eye on both the touchy-feely qualitative data and the cold, hard aggregate numbers can make productivity metrics feel a little less threatening to developers. Watching those aggregate metrics can point you towards the bottlenecks gumming up the works, while actually asking your devs about their experiences helps them feel heard and can surface inefficiencies that might not show up in pure activity metrics.
So, how do you actually figure out what's blocking your team? You're looking for clues about what work is harder than it needs to be, what's taking way too damn long, and if there are any tasks that could be automated or just straight-up eliminated.
1:1s: When you're meeting with your reports, it's the perfect time to take their temperature on how stressed they're feeling and where they're getting stuck.
Surveys: Want more consistent data to compare? Survey your team members. Ask them to point out tasks that take longer than they should, or processes that involve a ton of manual bullshit.
Job shadowing: "The best way to find out how broken a system design is is to try to use it exactly as designed." Try immersing yourself in a sprint cycle and see how it feels.
Retrospectives: "Don't just retro incidents, retro your successes too to understand how they unfolded… when you do that you can know how and when to recreate them."
Once you've got some data and hopefully spotted some potential bottlenecks, you'll probably start to see some patterns. Every org has its own special snowflake challenges, but there are some common themes in the types of problems you'll see, and how you might fix them.
Undifferentiated heavy lifting (UHL)
Jeff Bezos came up with this term for all the work that's essential for building and deploying software, but doesn't actually move the needle on your product or features. We're talking provisioning servers, managing load balancers or data centers, building internal tools—basically, the "price of admission." Consider standardizing, automating, outsourcing, or solving this stuff with tools so your engineers can focus on the work that only your company can do. As you grow, you might hire in-house experts who could theoretically handle these tasks, but you still have to think about the best use of their time and expertise.
Toil
Toil is mind-numbing, repetitive work that doesn't solve any new problems and just gets in the way of moving your business forward. Google coined the term for Site Reliability Engineering, but you'll find examples of toil all over the software development lifecycle: any time-consuming or repetitive tasks that have no lasting value and could be automated or solved with tools. Toil doesn't just waste time, it can also lead to resentment and burnout in team members who have to deal with too much of it. You could outsource these tasks, but it's often more efficient to find ways to standardize and reuse solutions so you can get rid of toil altogether.
Most scaling companies eventually face this catch-22: Toil can often be eliminated with internal tools, but building those tools is toilsome. Tasks like building a UI from scratch take forever without actually addressing the underlying business needs.
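As one small illustration of automating toil instead of living with it, here's a sketch, assuming your releases are cut from git tags, that drafts the release notes someone would otherwise paste together by hand:

```python
import subprocess

def draft_release_notes(since_tag: str) -> str:
    """Collect commit subjects since the last release tag as a bulleted draft."""
    result = subprocess.run(
        ["git", "log", f"{since_tag}..HEAD", "--pretty=format:- %s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Hypothetical tag name; pass your own last release tag here.
    print(draft_release_notes("v1.2.0"))
```

Ten lines like these won't transform your velocity on their own, but every chore you script is one less thing grinding down the person who used to do it by hand.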
Technical blockers
Even if your team is working exclusively on truly differentiated business efforts, they can still get slowed down by technical bottlenecks, like a slow CI system (a common problem as you scale) or a badly designed architecture with tightly coupled components. If you don't invest in monitoring and observability, your team will end up spending way too much time on root cause analysis debugging (which will tank your MTTR score if you're tracking DORA metrics).
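If slow CI is a suspect, the first step is making the slowness visible. Here's a minimal sketch, assuming a Python build script you control, that times each stage so the worst offenders show up in the logs:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(stage: str):
    """Print how long a build stage takes so slow steps are obvious in CI logs."""
    start = time.monotonic()
    try:
        yield
    finally:
        print(f"{stage}: {time.monotonic() - start:.1f}s")

# Usage inside the build script:
# with timed("unit tests"):
#     run_unit_tests()
# with timed("integration tests"):
#     run_integration_tests()
```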
Alright, let's talk about the cultural stuff that can really slow your team down.
Moving too fast
When you're a small startup with a handful of people, you're usually pretty nimble, quick to jump on new tech, and free of all the red tape that can slow you down. But as you grow, the downside rears its ugly head: without established processes (or any documentation of those processes), people can end up wasting a ton of time reinventing the wheel.
It's harder to borrow from your teammates' work when no one is writing down what tools they're using, what dependencies they've found, or what tests they've written. This can lead to a lot of wasted engineering time, and over time, the effects of cutting corners or duplicating work can snowball until your application or service is about as reliable as a drunken toddler. Small startups rarely prioritize or incentivize things like documentation, but it is possible. You can even use technical solutions to solve cultural problems; for example, make documentation a check your tooling enforces rather than a favor you keep asking for.
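As one illustration of that idea, here's a rough sketch of a CI check, assuming a hypothetical services/ directory layout, that fails the build when a service ships without a README:

```python
import sys
from pathlib import Path

def check_readmes(root: str = "services") -> int:
    """Return a non-zero exit code if any service directory is missing a README."""
    base = Path(root)
    if not base.is_dir():
        return 0  # nothing to check in this repo
    missing = [p for p in sorted(base.iterdir())
               if p.is_dir() and not (p / "README.md").exists()]
    for path in missing:
        print(f"missing README: {path}")
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(check_readmes())
```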
Moving too slow
On the other hand, once your company gets big enough, with enough customers bitching when your shit breaks, you start introducing process, compliance requirements, and coordination among groups. Suddenly, it's hard to move fast because some asshole on a different team has to sign off on your project before it can move forward, and they have their own targets and SLAs to meet.
This rigidity isn't inherently bad (checks and balances exist for a reason), but it's a well-known fact that within big companies, small teams tasked with special projects are suddenly, mysteriously able to pull rabbits out of their asses when they're freed up from the usual bullshit. If your company has multiple "North Star" metrics and each group is optimizing for something different, it's worth getting the big bosses to align on the one, paramount goal that everyone should be rallying around—that clarity makes it easier to get unblocked when one team's process is blocking another's.
Meetings…ugh
Meetings have always threatened focused work, but with the growth in remote meetings—Harvard Business Review's research found that there were 60% more remote meetings per employee in 2022 compared to 2020—getting into flow state is proving even more elusive for engineers. Tracking uninterrupted work hours vs. meeting hours should be pretty revealing here. You don't have to go so far as to declare meeting bankruptcy and cancel all recurring group meetings, but you can get them under control, starting with actually measuring the problem.
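Here's a rough sketch of that measurement, assuming you can export one engineer's meetings for a day as start/end pairs from the calendar and that the meetings don't overlap:

```python
from datetime import datetime, timedelta

def focus_vs_meetings(meetings: list[tuple[datetime, datetime]],
                      day_start: datetime, day_end: datetime) -> tuple[float, float]:
    """Return (meeting_hours, longest_uninterrupted_hours) for one working day."""
    meetings = sorted(meetings)
    meeting_time = sum((end - start for start, end in meetings), timedelta())
    # The longest gap between meetings is a rough proxy for available flow time.
    longest_gap = timedelta()
    cursor = day_start
    for start, end in meetings:
        longest_gap = max(longest_gap, start - cursor)
        cursor = max(cursor, end)
    longest_gap = max(longest_gap, day_end - cursor)
    return meeting_time.total_seconds() / 3600, longest_gap.total_seconds() / 3600
```

Aggregate it across the team and across weeks before drawing conclusions; a single brutal Tuesday proves nothing.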
Don't just burn it all down (yet). It's tempting to start from scratch when you're trying to boost productivity, but slow is smooth, and smooth is fast: spending an extra 45 minutes writing tests can save hours of repeatedly fixing a broken build.
Having a robust feature management system in place to enable safe, reliable rollbacks creates both psychological and literal safety when pushing to production. Setting your team up with the right tools means they don't have to build undifferentiated solutions from scratch.
These are all examples of moving slowly at first to enable going fast later. As an engineering manager, you're in a position to zoom out and see where those edges can be smoothed out, and introduce those optimizations for the whole team.
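To make the feature-management point concrete, here's a minimal sketch of a flag with a kill switch, using a hypothetical FEATURE_NEW_CHECKOUT environment variable and stubbed checkout paths; a real team would usually lean on a dedicated feature-management tool rather than hand-rolled flags:

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from the environment, e.g. FEATURE_NEW_CHECKOUT=on."""
    value = os.environ.get(f"FEATURE_{name.upper()}")
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "on")

def new_checkout(cart: list[float]) -> float:     # hypothetical new code path
    return round(sum(cart) * 1.2, 2)              # e.g. new tax handling

def legacy_checkout(cart: list[float]) -> float:  # the known-good fallback
    return round(sum(cart), 2)

def checkout(cart: list[float]) -> float:
    # If the new path misbehaves in production, unset the flag and traffic
    # falls back to the legacy path with no rollback deploy needed.
    return new_checkout(cart) if flag_enabled("new_checkout") else legacy_checkout(cart)
```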
There's no one-size-fits-all solution. "It depends" is the go-to response of senior engineers when you hit them with questions or proposals, and choosing the right levers to boost productivity is no different. What works for one type of bottleneck at a certain scale won't do anything for others.
Onboarding to a big, hairy codebase is a common challenge for engineers. It's time-consuming and tedious, and there's more than one way to attack it.
Which solution works for your org will depend on your company's size, priorities, resources, and whether your team is all in one place or spread out.
Getting everyone on board with changes starts with including your team in the process. By now, hopefully you've already included their input when you were identifying blockers and potential solutions. Next steps:
Start small. If you're adopting new technology, a well-scoped, successful pilot project can help you gain traction by word of mouth as engineers start to see the tangible results from one initial use case. We've seen this time and again as customers trial the platform for building one internal tool, with adoption growing as other teams learn about it. Hearing about how something worked from a peer is always going to carry more weight than being told to use a tool (or process, or system) by the higher-ups.
Transparency is a strong antidote to skepticism. Documenting not only the solution but how you arrived at it is critical. Giving your team this context in written format can help build trust and give the team time to absorb the information on their own. Follow up with a synchronous meeting (multiple, if you're distributed across time zones) so people have a chance to ask questions. You may want to bring in your review panel or team members who took part in a pilot to share their experiences firsthand.
And finally: always be iterating. Even with great success, it's important not to get too attached to your solutions. Your company might not be actively scaling, but other changing factors could impact your team's productivity. Be ready to reevaluate frequently.
Want to learn more about no-coding and how to stay efficient without burning out? Come find us in our communities—the links are in the footer below. Thanks for reading, and stay frosty out there.
Common developer productivity metrics include DORA metrics (deployment frequency, lead time for changes, MTTR, change failure rate), SPACE framework dimensions (satisfaction, performance, activity, communication & collaboration, efficiency & flow), focus time vs meeting time, pull request review time, and more. The key is looking at multiple metrics holistically.
You can uncover productivity blockers through 1:1 conversations with developers, team surveys, job shadowing to experience processes firsthand, and retrospectives on both successes and failures. Common themes that emerge often relate to time wasted on repetitive toil, lack of documentation leading to duplicate work, slow systems/tools, or excessive meetings.
Tactics to boost productivity include: eliminating "undifferentiated heavy lifting" through automation/tools, streamlining meetings and making more async, improving documentation and onboarding, adopting feature management for safer rollouts, gathering team input on solutions, running pilot projects before broad changes, and continuously iterating based on feedback. Providing context and data transparency also helps get buy-in.
Join 22,000+ no-coders using Directual and create something you can be proud of—both faster and cheaper than ever before. It’s easy to start thanks to the visual development UI, and just as easy to scale with powerful, enterprise-grade databases and backend.