Notes - Social Performance Task Force

BSFP Working Group
Pricing Transparency Webinar Series
Webinar #1: Pricing Data Collection in the Microfinance Sector
Thursday, September 24, 2015
Guest speakers: Emmanuelle Javoy, Beth Rhyne
Tony Sheldon first introduced Emmanuelle Javoy, who wrote a paper on this topic at the
request of the Center for Financial Inclusion (CFI), given her experience and background
in pricing data collection as a former rater and board member of MFTransparency (MFT).
Emmanuelle emphasized that the paper represents her own thoughts and suggestions, and
she encouraged participants to engage in open discussion.
What did we learn from the MFT experience?
- Collecting pricing data was tedious and costly, and costs did not decrease
significantly over time. A significant portion of MFT staff time (roughly
three-quarters) was spent on convincing people to report, rather than on verifying
data. The issue for MFIs was that the costs (time requirements) and risks
(competitive disadvantage if they were the only ones disclosing this information;
reputation risk) of publishing data did not outweigh the benefits (benchmarks,
adherence to ethical standards).
- Good regulation makes the task of collecting pricing data easier.
- Industry support for pricing transparency has not been strong enough to
compensate for the lack of regulation or other incentives – e.g., it has not been a
precondition to funding or to membership in a network. And while data reporting is
integrated into the Client Protection Principles Certification, only a few MFIs have
been certified.
What next?
Emmanuelle suggested looking at MFT’s model to see whether changing any of its key
parameters (e.g., one country at a time, all data verified by one analyst, all data
available for free) would make data collection more feasible. Her recommendations are
the following:
1. Build consensus – make sure all actors collect data in a standard format so that
data can be shared (i.e., provide a methodological guide for data collectors).
   - Determine whether an MFI charges prices that are at, below, or above
market – however, it is not easy to define "the" price charged by an MFI, as
there are usually several prices within each organization.
2. Lower barriers and reduce costs to make it easier for MFIs to submit information.
   a. Require minimal manual intervention/data checks from an analyst. A
tiered system could be put in place to verify data for a fee – this could be
done by data collectors and paid for either by the MFI or by an investor, etc.
But this should be optional, not a requirement.
   b. Allow all MFIs to submit data – then indicate whether the data is
self-reported or has been checked by an analyst, but do not put any obstacles
in the way of an MFI's reporting.
3. Pool efforts – gather all existing reported data into one platform to allow for
benchmarks. This can be done in a way that keeps each MFI's data anonymous.
4. Increase incentives, reduce risks, and limit free riding – only publish data
aggregated at the country/market-segment level, and provide a free benchmark report
to those MFIs that have reported data and to data collectors.
5. Test the model – e.g., use a "mystery shopping" approach in at least two countries
per continent to test the accuracy of the data collected and whether the data being
reported is sufficient for benchmarking.
6. Design a system that is sustainable from the start – direct data collection could
be pre-financed or reimbursed (through the sale of data, although this has proved
hard in the sector). Donations are good but likely not sustainable in the long run.
7. Define the ideal actor to be "the pricing data collector" – this actor must
have a good reputation, be independent, have the technological capacity to do
the work, and have a good understanding of the sector.
Tony then introduced Beth Rhyne, director of the Center for Financial Inclusion (CFI).
Beth mentioned that since Emmanuelle developed her paper, SPTF (Laura Foose),
investors (Jurgen Hammer), and CFI (Anne Hastings and herself) met to discuss this
topic. One of the reporting formats noted by Emmanuelle in her paper has been
incorporated into the SPI4. This creates a framework through which pricing data
collection could take place. The mechanism is already set up for voluntary data
reporting.
- Investors using the SPI4 ALINUS can also encourage their MFI investees to
use the SPI4 to report pricing data.
- Cerise could gather all the data – the SPI4 is already connected to the MIX.
Additionally, from the Smart Certification side, the old model called for reporting to MFT
and looked at whether country regulation required it. As reporting to MFT is no longer
an option, the Smart Campaign would like to test out having people from their staff be
the spot-checkers of data. The next step, Beth noted, would be to do a pilot and answer
some key questions to define how to move forward:
1. Will MFIs participate in a voluntary process and make their pricing data public?
Would the group be big enough to represent critical mass?
2. Will self-reported data with random verification result in sufficiently reliable
data? (Testing whether the quality of the data will be good enough as to rely on
it is a key issue.)
3. Is the proposed system feasible given the roles of various players?
4. Who will use the pricing data? Chuck Waterfield (former CEO of MFT) had
mentioned disappointment over how little pricing data was used for decision
making in the sector.
5. How important is the availability of pricing data for the industry?
6. Who can and should pay for this data to be collected and shared?
Beth asked participants for their reactions to these questions.
Anne Hastings (Microfinance CEO Working Group) mentioned the group's commitment to
this project. While several key actors need to take action for the model to work, she
expressed that the model has the potential to work.
Ben Wallingford (Planet Rating) mentioned that one of his burning questions related
to Beth's question #2 (see above) – how to balance the need for reliable data with a
simple, straightforward approach, given the complexity of data in the sector (e.g.,
if there is no verification of loan documentation). Are spot checks going to be enough?
Emmanuelle mentioned a few ways to work around this issue: in her opinion, the full
database should be checked, or, if only spot checks are possible, then only the
checked data should be included in benchmarks.
Emmanuelle mentioned that in order to have meaningful information that allows one to
see whether an MFI's prices are below, above, or at the market rate, it is important
to organize data by market segment (loan amount and term) so that each MFI's data
can be compared against its segment.
Tony Sheldon went back to the point regarding data verification – the difference
between self-reported data and "checked" data, and its implications. Emmanuelle
mentioned that one reason to publish (while flagging) data that is self-reported is
the ability to display the data right away (and not after the time lag that checking
the data requires). Deeper verification should be taken into consideration, but that
would depend on people being willing to pay for it. A model that allows for
self-reported data with spot verification is less costly and hence more feasible,
at least at the start.
Beth Rhyne clarified the difference between consistency check and verification. A check
is a quick overview to make sure things are in the right category, for example, whereas
verification is a deeper analysis, more in line with what MFT was doing before. Beth
wondered whether verification can be done without an analyst having to be in-country.
This would cut down costs significantly, so we need to understand whether it is feasible.
Jurgen Hammer (Grameen Credit Agricole Foundation, co-chair of the SPTF social
investor working group) agreed with Emmanuelle on the need to simplify the model for
data collection. He mentioned that investors should have a role in this model as many of
them already do a pricing evaluation in their due diligence process. Therefore, if, as
suggested by Emmanuelle, the methodology is adapted to a more "realistic" level and
can be part of an MIV/investor due diligence process, we should be able to find ways
to make quality validation part of the global investor community's monitoring processes
and thus make it viable. He added that all "client-related indicators" in the social
performance evaluation tools (the full SPI4 or the investor due diligence version,
SPI4 ALINUS) are taken from the Client Protection Principles (Smart Campaign). Hence
the transparent pricing question needs to be added at that level.
Tony Sheldon asked Emmanuelle whether investors would make their due diligence
data public. Emmanuelle went back to the point about “pooling efforts” – if data can be
gathered and anonymized, some investors have expressed that it could be feasible to
share it. Most MIVs have non-disclosure agreements with their investees, so making
sure the system allows for data to remain confidential would be key. Investors represent
a way in which data could be collected and also a way in which some checks could be
done on-site. Investors should discuss further whether there is enough support from
their side for this option (more than just a few agreeing to it).
Laura Foose (SPTF) reiterated the importance of the questions of who is using pricing
data, how it is being used, and whether there is willingness to pay for this validation of
the information. She suggested that, as a follow-up, SPTF could develop a brief
survey to get feedback from all participants on the questions posed by Beth to the
group. This would be very helpful for identifying next steps to make sure the model
developed is practical. The feedback from participants will also help the SPTF design the
next two webinars on this topic.
Tony wrapped up by thanking Emmanuelle and Beth in particular and saying that the
SPTF will follow up with the survey mentioned by Laura.
The next two webinars in this series will focus on pricing transparency at the
investor and regulator levels. Details will be shared soon.