Impact Reporting Tensions

Hello, friends. I’m excited to start blogging alongside the most fascinating impact investor you’ve never heard of (though in fairness, if you’re here, I assume you are familiar with him). While I can’t promise I’ll be as interesting, I’m similarly committed to advancing the practice of impact investing.

Pursuant to this aim, I was fortunate that Big Path Capital recently asked me to join a group of fellow investors to identify the fund manager with the best impact reporting metrics. (The winner gets to present their strategy and impact investment capabilities at the Impact Capitalism Summit this week in Nantucket.) We reviewed over a dozen entries, across every asset class, with three funds (hereafter, the Top Three) earning public recognition. Since this blog isn’t about any one of them, and I can’t comment specifically about any of the other entrants, I’ll simply refer you to this press release for the final results. In the meantime, I’d like to comment on four tensions that exist within impact reporting, as evidenced by this coterie of funds.

  • Before vs. After: The impact reports provided by the Top Three are all *really* impressive. This is due in large part to the fact that their impact measurements were not identified ex post. Instead, each manager had clearly developed an intricate Theory of Change, and decided ex ante how they would reflect that Change. Their metric sets were clear, convincing, and interconnected. Each data point had relevance, reinforcing my faith in the execution of their strategy. These managers’ metrics didn’t just enumerate the social and environmental benefits achieved by their investments. Much more importantly, they tied their ongoing impact returns back to the systemic changes that had been sought from the beginning. If intentionality is truly a critical component of impact investing, then we should expect every manager to have a similarly strong link between the upfront impact strategy and the ongoing impact measurement.
  • Depth vs. Consumability: I love impact reporting. Thus, I thoroughly enjoyed reading each and every one of the 84 collective pages devoted to impact measurement by the Top Three. That said, my experience with clients suggests I am in the distinct minority. Frequently, these fancy (and costly) write-ups get deleted or recycled – oftentimes after only a quick perusal. In these cases, the analytical rigor may be commendable, but it may also be overlooked due to page count. Each of the Top Three contained selected highlights, no doubt in acknowledgement of many investors’ low likelihood of reading much beyond. In this case, are impact data nerds like me asking too much of GPs? Do my expectations, as an advisor, flagrantly exceed those of their LPs? And if so, are impact funds unfairly caught in an unsustainable “arms race” to produce the most meticulous (i.e. the least likely to be read) impact reports possible?
  • Industry Standards vs. Proprietary Frameworks: The Top Three are not GIIRS-rated, and they do not track IRIS-compliant metrics. Instead, each manager has its own themes, with success articulated via a bespoke dashboard, radar, and/or set of key performance indicators. Notably, these managers have rejected standardization, seemingly because existing tools don’t allow for adequate expression of their value creation. This isn’t necessarily a critique of the important work that B Lab is doing. Indeed, there were a few funds that relied on their GIIRS rating to tell their impact story. But I don’t believe it is a coincidence that these funds all clustered in the lower half of the aggregate rankings.
  • Impact Metrics vs. Portfolio Statistics: Two fund types were at a distinct disadvantage in this contest.
    • First were those focused on publicly traded securities. As my colleague has convincingly argued here, here, and here, our view on these funds is that they are much better at reflecting values than demonstrating impact. So while one fixed income manager does an exemplary job of classifying, detailing, and rating the use of proceeds for every bond, the fact that the bond was purchased on the secondary market still dilutes the impact story. And the data captured and reported by that manager are not quite impact metrics per se, since the investment didn’t directly generate a social or environmental return.
    • The other type of funds that naturally struggle in these comparisons are those that merely reflect the characteristics of their capital recipients. In these cases, it is the provision of capital that is key; the Theory of Change commonly holds that underserved subsets of the population can be empowered, if only they are provided with access to capital. As such, these funds usually report on the demographic composition (e.g., gender, race, geographic location) of their portfolio investees. To us, these are not impact metrics, but rather, portfolio statistics. They are certainly reflective of impact – and, in the case of one fund, allow for pinpoint detail on loan recipients. However, it is impact via directed inputs, with indiscernible tangible outputs. While there’s insufficient room here to tackle that thorny continuum, suffice it to say that the Top Three all captured a series of inputs, outputs, and outcomes (or at least green shoots thereof).

Feel free to revisit this blog for more of my insights on impact reporting… especially if you’re an impact nerd like I am.