Thinking about your feedback loop: what information are you trying to get, from whom, to whom, and to bring about what change?
Our initiative includes three levels of development: 1) designing a set of standards and protocols, 2) building open referral platforms according to those standards, and 3) supporting the development (often through third parties) of applications that consume and contribute to this standardized open data pool. Each of these levels will inform and be informed by the others. We want to know whether each component of our standard is truly necessary and sufficient; we will validate this through the process of building platforms and experimenting with delivering data to users through those platforms.
We expect the platforms to serve three primary use cases (help-seeker, service provider, and analyst), so we will establish user groups that represent these use cases, in order to get direct and appropriate feedback on the platforms and the underlying standard. We want to test the assumption that an ecosystemic approach will work, so we will experiment with various applications and learn from what works and what fails. Finally, we want to ensure that there is a sustainability and governance plan for these standards and the systems that use them — and only through the learning processes above will we get a sophisticated understanding of what those plans should look like.
Through all of this, of course, there's also the actual service directory data itself. Our big vision is an ecosystem surrounding a common dataset that is increasingly improved by feedback from users. We need feedback loops to update, classify, and evaluate service records -- in order to decrease the cost of producing this data, improve its quality, and broaden its scope of use. The process we describe above is largely a set of interpersonal feedback loops, intended to ask and answer questions about how to construct the digital feedback loops that we'll need for a viable shared system.
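The digital feedback loop described above — user reports flowing back into service records to update, verify, and classify them — can be sketched in miniature. This is a hypothetical illustration, not the actual Open Referral data model; all field and class names here are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class Feedback:
    """One user report about a record: a correction or a confirmation."""
    field_name: str
    suggested_value: str
    submitted_at: datetime


@dataclass
class ServiceRecord:
    """A directory entry for a community service (hypothetical schema)."""
    record_id: str
    name: str
    phone: str
    last_verified: Optional[datetime] = None
    feedback: List[Feedback] = field(default_factory=list)


def apply_feedback(record: ServiceRecord, fb: Feedback) -> None:
    """Log the report; if it confirms the current value, mark the record verified."""
    record.feedback.append(fb)
    if getattr(record, fb.field_name, None) == fb.suggested_value:
        record.last_verified = fb.submitted_at


# A user confirms that the listed phone number is still correct,
# which both logs the report and refreshes the verification timestamp.
rec = ServiceRecord("svc-1", "Food Pantry", "555-0100")
apply_feedback(rec, Feedback("phone", "555-0100", datetime.now(timezone.utc)))
```

Even a loop this simple captures the economics at stake: each confirmation postpones a costly manual re-verification, which is how user feedback decreases the cost of producing the data.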
What mediums or mechanisms do you use to collect feedback? (check all that apply)
SMS, Paper, Website, Physical gathering.
Could you briefly describe the way you collect the feedback?
With each of our ‘local teams,’ we will develop a participatory process of research, design, action and evaluation. This process will include the users themselves (likely front-line service providers) as they identify which questions to ask and develop (in partnership with our organizers) the testing methodology. The results of these tests will be written up by the organizers, with recommendations for changes, which will be shared with the standards-development body as input for the next iteration of the process.
If other, please specify
Possibly: in-kind training and other skill-building activities, in hour-for-hour exchange for users’ time
If other, please specify
We're designing this to ensure that the program follows the direction of end-users and frontline service providers!
Give two concrete examples of how feedback loops have brought a program or policy more in line with citizens’ desires.
In our program, sustained dialogue with government, as well as with the United Way, has shifted the conversation among people in positions of programmatic authority towards the development of ‘open systems.’ (However, we will need to demonstrate the viability of these open systems -- i.e. through more feedback.) Meanwhile, the initial design of an open platform — the Ohana project, developed by Code for America’s San Mateo fellows — was a one-way, read-only API that gave users no way to curate the community’s data. Early feedback from user groups has established that write-to functionality is a high priority for future development.
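The read-only versus write-to contrast can be made concrete with a toy sketch. The function names and payload shape below are hypothetical — they are not drawn from the Ohana API — but they show the missing piece: a write path that queues user-proposed edits for curator review rather than only serving data outward.

```python
# In-memory stand-ins for a service directory and a moderation queue.
services = {"svc-1": {"name": "Food Pantry", "phone": "555-0100"}}
pending_corrections = []


def get_service(service_id):
    """Read path: roughly all a one-way API offers."""
    return services[service_id]


def submit_correction(service_id, field, value):
    """Write path: queue a user-proposed edit for curator review."""
    correction = {"service_id": service_id, "field": field,
                  "value": value, "status": "pending"}
    pending_corrections.append(correction)
    return correction


def approve(correction):
    """Curation step: a reviewer accepts the edit into the shared dataset."""
    services[correction["service_id"]][correction["field"]] = correction["value"]
    correction["status"] = "approved"


# A user reports a new phone number; a curator approves it.
c = submit_correction("svc-1", "phone", "555-0199")
approve(c)
```

Keeping a human review step between submission and publication is one plausible design choice for community-curated data; direct writes are another, with different trust trade-offs.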
If there was one thing you could change to increase the impact of your feedback loop, what would it be?
As alluded to above, I wish we had the capacity to offer meaningful non-monetary rewards to end-users and service providers who participate in our design and evaluation process. I’ve explored strategies such as time-banking — in which each hour of participation earns a credit that can be spent on time offered by another member of the community — and I think something like that could be effective, but right now the time-banking community does not seem vibrant enough for this to be a viable strategy. We need other means of valuing the labor of participation without depending upon money (which is both scarce and a distorting force in community-building).
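The hour-for-hour credit mechanism is simple enough to sketch. This is a toy illustration of the time-banking idea, not any existing time-banking platform; member names and function names are invented for the example.

```python
from collections import defaultdict

# Each member's credit balance, in hours; starts at zero.
balances = defaultdict(int)


def log_participation(member, hours):
    """Each hour of participation earns one credit."""
    balances[member] += hours


def spend(member, provider, hours):
    """Transfer credits to the member who provides the time.

    Returns False (and does nothing) if the spender lacks the credits.
    """
    if balances[member] < hours:
        return False
    balances[member] -= hours
    balances[provider] += hours
    return True


# Alice earns three credits by joining design sessions,
# then spends two of them on Bob's time.
log_participation("alice", 3)
spend("alice", "bob", 2)
```

The viability problem noted above shows up directly in this model: credits only hold value if enough members are offering time for them to be spent on.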
What is the one thing you would most like to see changed to improve the competition process?
I'm not sure what we're competing for!
What are you doing to make sure that feedback providers know that they are empowered by the information they can give, and that they know exactly what the information they are providing will be used for?
We are currently seeking to contract a participatory action research consultant, who will work with our regional organizers and stakeholders to design such a process from scratch! Perhaps this contest can enable us to do that.