Measuring Success in Developer Relations

Developer relations is an integral part of many software companies that hope to win the hearts and minds of developers. You may refer to it as developer evangelism or community outreach, but ultimately it’s a motion dedicated to ensuring that:

  1. You’re proactively listening to what the community needs and looking to see how you can help
  2. You’re providing a conduit for developers to offer you feedback
  3. You have an opportunity to share your vision with the community and hopefully solve some of their problems

In my opinion, this is absolutely the right order to drive in, since it’s important to always think of the needs of the community first.

But the problem with developer relations is that it’s a subjective, somewhat nebulous field that in most cases doesn’t involve tangible “things”. This can make it hard to measure how successful you or your team are and whether you’re hitting the mark with your community.

What Do Developer Advocates Do?

From my experience and through many discussions with my peers, the typical developer advocate tends to focus on several key outreach mechanisms to engage with developers. These are:

  1. Social media engagement, primarily Twitter
  2. Content generation via blogs or third-party sites such as Smashing Magazine
  3. Screencasts
  4. Podcasts
  5. Webinars
  6. Influencer engagement
  7. Local or regional meetups and usergroups
  8. Major conferences including speaking or attending
  9. Hackathons

The need to scale a message means that numbers 1 through 5 will get the most attention, allowing advocates to reach the largest possible audience. They’re less personal but do afford a big megaphone. Numbers 6 through 9 offer opportunities for more direct, one-on-one interaction and engagement: the chance to meet community members in person, work with influential developers so they have an opportunity to affect your product direction, and generally put a face to a name.

Easy Measurements

Of these, the easiest to measure are the first five. In most cases, there are analytics that can offer insight into your motions, allowing you to determine how many people were interested in what you had to say. Tools like Twitter Analytics, Google Analytics and Bitly offer tremendous insight into how well your outreach and engagement efforts are doing. The data can have a profound impact on how you adjust your tone and message, especially in these semi-anonymous mediums where anything can be misconstrued and left to interpretation.

For example, my post yesterday about ngrok went over really well and, based on the analytics, I can say that it really resonated with my audience:

[Screenshot: Twitter Analytics for the ngrok tweet]

I’m less concerned with the engagement rate and percentage than I am with the number of impressions, in this case 5,744. That’s fairly decent reach and shows that a number of developers were interested in this.
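For anyone curious about the arithmetic behind these dashboard numbers, here’s a minimal sketch of how an engagement rate is derived from impressions. The impression count is from the tweet above; the engagement count is a hypothetical placeholder, purely to illustrate the calculation:

```python
# Minimal sketch: engagement rate = engagements / impressions.
# The impression count comes from the tweet above; the engagement count is
# hypothetical, only there to illustrate the calculation.
impressions = 5744
engagements = 93  # hypothetical: likes + retweets + replies + link clicks

engagement_rate = engagements / impressions
print(f"Engagement rate: {engagement_rate:.2%}")  # -> Engagement rate: 1.62%
```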

Not So Easy Measurements

When we get to the lower end of the list, beginning at number 6, it’s hard to quantify how well you’re doing, mainly because these are subjective motions that tend to have qualitative value and thus need to be measured as such. What I mean by this is that it’s very difficult to measure quantitatively the impact, immediate and long-term, that your participation in one of these motions might have. Sure, you can say that you spoke at an event and it was attended by 300 people. But does that really tell you whether your message landed or that you affected product sales? Not really, and this is where the issue comes in with many companies that see developer relations solely as a cost center and not a value-add. The pointy-haired guy wants hard numbers for something that’s more of a soft skill.

In my opinion, measuring this needs to be done through social media tracking of a specific key message you’re trying to land. This means that if you’re participating in an event, you should have an idea of what your goals are so that they can be measured afterwards by analyzing feedback on social mediums, especially Twitter. Twitter is by far the best medium for gathering qualitative measurements of your engagements. The community uses it extensively to discuss the good and bad of what they see, so if your message lands well, Twitter will know about it in most cases. Tools like Sprinklr help to offer insight into this and can help you gather information that you can use to measure your success. I personally use TweetDeck’s multi-column capability to track keywords that are important to me, especially at events or during announcements.
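If you want a lightweight, do-it-yourself version of that keyword tracking, here’s a minimal sketch. It assumes you’ve already collected tweet text into a plain list (for example, from a TweetDeck column or an export); the keywords and tweets below are hypothetical:

```python
# Minimal sketch of keyword tracking for a key message. Assumes tweet text has
# already been collected; the keywords and tweets are hypothetical placeholders.
from collections import Counter

keywords = ["ngrok", "webhooks", "devrel"]  # the key messages you want to land

tweets = [
    "Just read a great post on using ngrok to test webhooks locally",
    "Our devrel team will be at the meetup tonight",
    "ngrok saved me hours of debugging webhooks today",
]

mentions = Counter()
for tweet in tweets:
    text = tweet.lower()
    for keyword in keywords:
        if keyword in text:
            mentions[keyword] += 1

for keyword, count in mentions.most_common():
    print(f"{keyword}: {count} mention(s)")
```

Counting mentions is only the quantitative half; the qualitative half is reading those tweets and saving the ones that show whether the message actually landed.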

As I mentioned, these motions are typically measured qualitatively, which means you need to ensure you save tweets or articles that highlight your motions, whether they’re positive or negative. Yes, even negative feedback is valuable and should be used to determine how well you’re engaging with the community and whether your company or product is of interest. This tweet regarding my ngrok post is a great example of this:

It’s equally important to ensure that the influencers you work with have clarity into your vision and direction. Most developer advocates have lists of influential developers with whom they have regular conversations and briefings so they can get a pulse on what’s trending and whether their company’s product is actually solving a need. Tracking how influencers feel is incredibly important because it tells you whether they understand your vision and what their sentiment is towards your product and company. Because these influencers have a big megaphone and the ears of the community, being able to proactively engage them at the right time ensures they’re getting the most accurate information to share with their followers. And trust me, they will be vocal about both the good and the bad. They’re in a position of trust and thought leadership and rightfully need to express their true feelings about a topic. Measuring your interactions with them is, again, typically qualitative and, from my perspective, a long-tail scenario, since most influencers will take their time to ensure the information is worthwhile to discuss or promote. This is why it’s very hard to measure. The best tool I’ve found for this is Onalytica, which helps surface engagement opportunities with influencers.

How Do You Measure Success?

I’d actually really love to hear from other developer advocates on this topic. I know there are plenty of other ideas out there on how to effectively measure engagements, and it’d be great to be able to pool them all together. I’m hoping you’ll jump into the comments and offer up your experience and perspective so that we can all benefit and do our jobs better.

Rey Bango

19 Comments

  1. I have a little experience directly and indirectly managing these kinds of teams. :)

    Hard metrics for Developer Relations/Evangelism are super hard to get right. You can easily motivate the wrong behaviors OR suppress the natural enthusiasm that makes “great” advocates great.

    In my view, there are two things you can measure: Output/Effort and Results/Impact.

    Output/Effort is something a Developer Advocate team has complete control over. They can create a plan, execute against it and then be measured by their ability to meet/exceed that plan. The impact of that effort is then overlaid primarily to inform which activities should be repeated in the future, and which should be avoided due to insufficient impact.

    Results/Impact is ultimately what the Developer Advocate needs to achieve, but it is also hard for a developer advocate to fully control. A hardworking, talented advocate could do all the right things, but that doesn’t guarantee standing-room only audiences and 150,000 followers.

    Worse, if you overly judge an Advocate’s performance on the latter, it can drive odd behaviors in pursuit of “hitting their numbers.” On paper it will look like “big impact,” but the metrics won’t truly reflect the kind of genuine impact you want Advocates to have.

    Add to that, no two Advocates are the same. Some are GREAT speakers. Some are GREAT writers. Some can do KILLER demos/open source projects. A FEW can do it all. That demands that metrics around DevRel be tailored to the team’s strengths if they are going to be effective.

    After almost a decade in this space, the system I feel best about is one that judges 70% to 80% of a Developer Advocate’s success on their Output/Effort (versus plan) and 20% to 30% on the impact/public success of those actions.

    Butts in seats, Number of conversations at a booth, Number of influencers at a dinner, Twitter followers, Retweets, High-fives…all fine, but not great measures of Developer Advocate success.

    • Awesome feedback, Todd. In terms of Output/Effort, it’d be great to know more about how you see the plan actually making an impact. For example, you can create a plan that says we’re going to target 40 developer events and you could end up doing all 40 you listed plus a couple of extra for good measure. But how do you determine whether those were the right events to have attended and whether you got value out of being there? What are the things you feel make an engagement successful?

      Having worked for you, I definitely agree with you and have a sense of where you’re going but I’d love for others to get this perspective.

      • Good point.

        You must be sure to PLAN for the impact you want. If you want to reach a lot of people, going to 40 small events is probably the wrong plan.

        But even before that, you have to collaborate with marketing, product management and biz to figure out what impact DevRel needs to prioritize. Is it driving awareness of products to help stimulate trials/sales? Is it building relationships with specific community thought leaders? Is it establishing 1st party thought leadership? Is it helping customers be successful implementing (to Paul’s point)?

        If there is a shared understanding across marketing/product/biz about what impact DevRel is pursuing, that helps a lot when it comes time to measure.

        And maybe that’s the biggest point: DevRel goals/metrics don’t mean much if the product/business they’re supporting isn’t trending in the right direction.

        Having lots of followers, speaking at big events, and all the good stuff DevRel does should be a means to an end that benefits the business. It’s easy for DevRel teams to spend too much time doing things that benefit the Advocates more than the business.

        HOW DO YOU KNOW IF YOU GOT THE VALUE?
        Two ways, in my opinion:

        1) Direct measures: These are the classic, inside-out DevRel measures like attendees, eyeballs, 3rd party surveys for mindshare/awareness, social media, etc.

        2) Indirect measures: These are the (ultimately) more critical outside-in business metrics DevRel activity is supposed to be influencing.

        If DevRel gets an “A+” on Direct Measures, but the Indirect Measures haven’t moved the right way, DevRel should still think critically about what it should change/do differently. Maybe it means DevRel needs to collaborate more closely with marketing.

        If DevRel gets an “A+” on Direct Measures, and the Indirect Measures are showing some lift, it’s a gold star for DevRel. The more directly DevRel can link their KPIs to the business KPIs they have impacted (like showing a big spike in downloads after producing a webcast), the bigger the gold star.

  2. Rey,

    Great post!

    This is really a great point to look at and understand, especially in evangelism/advocacy practices. To me, evangelizing is all about managing or approaching the following items in the context of success:

    1. High-level pre-sales engineering, product marketing, and a customer matrix for the relevant client: this helps with onboarding the right clients, driving high product adoption, increasing the number of deals and the amount of revenue, and, most importantly, managing the time it takes to close deals.
    2. How quickly can we get a client checked off? This means high-level support through meetups and presentations (related to the product API – code snippets, documentation, tools, …), support across various channels, and, last but not least, regular follow-up.
    3. In the end, make sure to maintain a high level of customer satisfaction and relationship management for future engagements – this helps with rolling out scalability, product feedback, and key growth strategies.

    That’s it!

    Cheers,
    Ran

    • Thanks Paul. But how do you measure that? Is it GitHub contributions? Downloads? Crawlers that scour the web? And how do you show the value to your company at the end of the day?

      • Well, lots of ways. Browser Metrics. Crawls of the web. I don’t really care for the number of views on an article or video, or downloads of a sample, etc. – they are vanity metrics. Use is the most important thing for us and it is what I encourage my team to focus on.

        One example is Pete LePage’s theme-color post: it had a decent number of views and was talked about on many blogs, but the stat we were looking for was the number of sites that used it and whether developers and users liked it. We ran a crawl of the web and found that after two months over 20k sites were using it (or thereabouts), and that was above what we felt was good.
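        (Purely as an illustration – not the actual crawler described above – a minimal per-site check for theme-color adoption might look something like the sketch below; the URL is just a placeholder.)

        ```python
        # Illustrative sketch only: check whether a page declares a theme-color
        # meta tag. A real adoption crawl would run a check like this across a
        # large list of sites; the URL below is just a placeholder.
        import re
        import urllib.request

        def uses_theme_color(url: str) -> bool:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
            # Look for <meta name="theme-color" ...> anywhere in the markup.
            return re.search(r'<meta[^>]+name=["\']theme-color["\']', html, re.IGNORECASE) is not None

        print(uses_theme_color("https://example.com"))  # placeholder URL
        ```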

        We work side-by-side with eng (because we are eng) and we agree on the goals of the product and we then work out the plan to how we get the adoption it needs.

        The same applies to the manifest and service worker. We are not helping to change the web if people don’t use the APIs we as an industry create, the tools the industry provides and the guidance we design.

        One thing that I like is that we help direct the product and shape the APIs and features, because as Advocates we have a good sense of what developers want and how they would use it. If we don’t think it fits well, we help steer and shape the product – sometimes it works well, other times we have a miss, but the important thing is that as a DA you provide that channel and insight on behalf of the developer community. I look at Jake Archibald as an example of this: he has worked directly on Service Worker (and many other specs), and likewise Ilya Grigorik, because at this early stage we can help shape the APIs and tools that every developer will use…. 6 months ago no usage metric made sense (other than feedback from X early adopters); now it is much more concentrated on implementation-based goals…. At least for the team I run.

        I could talk about this for a long, long time, but Gauntface covers it well too in the comments. I am putting a lot of thought into trying to shape industries based on customized guidance. I recently presented at a finance event; I don’t measure that or the number of people who turned up. What I am setting up is this: do our other teams have the tools to make those sites and partners better, and do those sites change and perform better themselves?

  3. Nice Post Rey.

    I think the key thing you need to consider is what is the end goal?

    – Is it to get developers aware of something?
    – Get developers using something?
    – Get certain companies or industries using something?
    – Is it to get partners who help generate a story to drum up further interest?

    Based on that, you’ll then have an idea of what you want to track and, to some degree, how you track it.

    Awareness – Social media, conferences, podcasts and videos are good bets.
    Use – This can be done by any means you can think of: crawl the top 1000 sites and check implementations/growth of use, crawl GitHub for use, ask developers to ping you with implementations.
    Companies / Industries – Measure yes or no: are they interested, and did they implement?
    Generate stories – Get partner interest and then generate a story and interest. (Measure however you like: social, conference attendees, views on a post, etc.)

    From the list above, the outreach approach would be tailored to try and further that goal.

    I agree with Todd’s sentiments. Chasing numbers is one of the most de-motivating things I can think of for a developer advocate. Having and following implementation metrics is good – you should have them to hand and use them to review how and what kind of outreach should be done – but setting requirements on results feels somewhat counterintuitive to the goal of advocacy, in my head.

    • Yep absolutely agreed. And BTW, Todd is my former manager so we see eye to eye on a lot of things.

      I’m less interested in the metrics side of things than I am in measuring the qualitative impact of efforts. To me, the ability to positively influence perception is an incredibly powerful skill but something that’s equally hard to demonstrate “on paper”. Hence why I recommend saving tweets or articles which demonstrate the impact of your work. While they may not correlate to a hard metric that a pointy-haired manager can point to, they do demonstrate that your actions are resonating.

      I’m hoping to flesh that qualitative measurement out more because to me, that’s more of a reflection on advocacy.

      • I think the intersection of Qualitative opinions and Quantitative metrics (for the bean counters) is best captured with formal surveys.

        Surveys let your audience provide qualitative feedback in a structured way so you can present it as more than anecdotal evidence (which cherry-picked tweets feel like).

        The key is to create a survey that you can run regularly, establishing a benchmark, and then showing progress against that benchmark on “soft measures” like awareness, sentiment, Net Promoter Score, etc.

        If DevRel takes ownership of this surveying process and presenting the results, they can more easily own the trends and defend the actions that support them.
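        (As an aside, one of those “soft measures” does reduce to a simple formula. A minimal sketch of computing a Net Promoter Score from survey responses and comparing it to a benchmark, using hypothetical numbers:)

        ```python
        # Minimal sketch: Net Promoter Score from 0-10 survey responses, compared
        # against a prior benchmark. The responses and benchmark are hypothetical.
        responses = [10, 9, 9, 8, 7, 6, 10, 4, 9, 8, 3, 10]

        promoters = sum(1 for r in responses if r >= 9)   # scores of 9-10
        detractors = sum(1 for r in responses if r <= 6)  # scores of 0-6
        nps = 100 * (promoters - detractors) / len(responses)

        benchmark_nps = 15  # hypothetical: last survey's score
        print(f"NPS this round: {nps:.0f} (benchmark: {benchmark_nps})")
        ```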

    • Inspired by the book, “How to Measure Anything”, I would argue that we have a hard time measuring Developer Advocacy (or Evangelism) because we haven’t sufficiently defined it. More specifically, we don’t have a shared understanding about how our efforts change the world around us.

      I’m relatively new to this role, although I’ve done development, architecture, sales, marketing, product management, and many other roles. I do not yet understand the value of measuring those things you characterized as easy measurements and later as direct measures. For comparison, those seem like measuring developers by lines of code or marketing by emails sent. My experience tells me I want less of those things, not more, to achieve the same result.

      To me, only the things characterized as “not so easy” and indirect make sense. These seem closer to “the end goal” suggested above. But perhaps the term “end goal” is too finite. Rather than measure the above concepts as “end goals”, I would think about them as a flow, the way Dave McClure and Ash Maurya would have startups measure cohorts moving through a funnel. So rather than “end goal”, I prefer the word “impact” and the technique of Impact Mapping as described by Gojko Adzic. In that technique, an “impact” is a behavior change for some group of people.

      As Developer Advocates, we might be trying to influence the same group of people — developers. But working for different companies, the change each of us wants will be different. Doesn’t that mean that each of us should have different measures of success? It seems like this discussion can only be about the heuristics of measurement, not which measures.

      Just a thought.

    • Thanks for that link, Michael. And yes, PR is a close analog to the work we do. In fact, I work closely with our PR team on many initiatives.

  4. I think it depends on your goals. For example, “top of the funnel” type efforts can drive big numbers. If the goal is to reach a broad audience and gain “mindshare” and recognition, then measuring hard stats on things like traffic and social media engagement is useful. However, sometimes developer relations focuses on existing customers. A post about how to do X with our product Y fills a real need customers have, but won’t gain much traction in terms of traffic and social media engagement.

    This isn’t specific to writing. Speaking has much the same issue.

    Once you add in, as Todd said, having to play to the specific “evangelist’s” strengths, I think it becomes tough to come up with number goals. You end up having to rely on qualitative versus quantitative data. Unfortunately, this sort of data doesn’t always translate well up the chain, so DevRel always tends to face the “I’m not exactly sure what they do for us” conundrum – we do a ton, it’s just difficult to measure the impact.

  5. It’s super hard to meaningfully measure success for virtually anything we do. In many ways we are the catalyst for things the community does (or would do) anyway; we can just dedicate time and effort to things that others mostly can’t and, in doing so, hopefully fight fights for them in code and process so that when they *do* try to use new APIs, features, technologies, or processes, they’re not alone. Basically it’s kind of like a combo of R&D and applied sciences in my bald head. Generally, though, the community mostly does a phenomenal job of that anyway, although sometimes a little too well (I’m looking at you, eleventy billion MVC frameworks!)

    Many of the things on your list are exactly right inasmuch as they’re all things most of us do, although I would add one: technical credibility. If you’re not actually building things, how can you be relevant? Empathy helps a lot, and it also helps other developers to know that a) you experience at least some of their day-to-day issues and b) when you implement something well, they can benefit from it (via OSS, etc.).

    Anyway, to the point, which is to say measuring success… I guess I have a question mark over the whole thing in general. A thought experiment: you could persuade 1,000 people that something is a good idea, but not one of them may be able to do anything about it because of other constraints on their time or focus. Equally, you may influence 1 person who then goes on to make a game-changing technology or product. You want the latter, and just measuring numbers doesn’t tell you that you were successful. Often, though, it can be a decent proxy. Ultimately, I just try to do my best and hope that whatever I create will help and inspire others. Not the most solid metric ever devised, but there we go! I do, of course, try and track stats and metrics for whatever I produce, and I try and tie them back to what I’m doing, but I’m never wholly convinced that I’ve reached stats perfection, on account of my aforementioned thought experiment! :D

  6. In a perfect world, teams would be small enough that it would be painfully obvious what everyone was doing at any given point in time and metrics wouldn’t exist outside of “Did we make more money?”. Metrics are a tax, or as Brandon Satrom likes to say, “A pound of flesh”.

    That said, they are necessary to track effectiveness in larger organizations, but often encourage the wrong sort of behavior, even in roles where metrics can be clearly defined. This is how we end up with half-baked bits and sleazy car salesmen. The very DNA of developer relations prohibits this due to the fact that, when done properly, the job is primarily about relationships. You cannot put metrics on a relationship, nor can you normalize a measurement. Is so-and-so a good boyfriend/girlfriend? What are the metrics to be considered a good husband? What numbers do I have to hit? In that context, the concept of metrics is ridiculous, if not offensive.

    If we had to measure it (and we do), I would throw down most of your top five, if not an exact facsimile.

    – How many events did we attend?
    – How many of those events did we speak at?
    – How many articles did we write? (I’m the biggest fan of this one)
    – How many webinars did we do?
    – How many internal initiatives did we support?

    I definitely agree with Todd on measuring the effort as opposed to the output. Of course, I work for him so I guess I have to! :)

    We’re always looking for new ways to gauge the role without gutting it. DevRel has to be given the latitude to be honest, genuine, and vulnerable, primarily because those are the traits that are the foundation of good relationships. Forcing them to hit some number is a guaranteed way to make sure they lose their integrity.

    Thanks to Michael for posting the PR doc.

  7. A major focus is often on engagement with the goal of developer happiness. So your point about tracking influencers is critical because those are the people speaking at local meetups and even building their own local communities.

    A measurement for that can be a checklist of best practices that make them happy and loyal, across the entire list of those key influencers. We just wrote an article about it in case you have feedback :) https://medium.com/@Mobilize/creating-long-term-value-in-developer-relations-4ecb25ef25e2
