OpenReferral

Congratulations! This Entry has been selected as a finalist.

OpenReferral

Washington, United States
Year Founded:
2013
Organization type: 
hybrid
Project Stage:
Start-Up
Budget: 
$100,000 - $250,000
Project Summary
Elevator Pitch

Concise Summary: Help us pitch this solution! Provide an explanation within 3-4 short sentences.

What if a standard set of information about the universe of health, human, and social services were openly available?

WHAT IF - Inspiration: Write one sentence that describes a way that your project dares to ask, "WHAT IF?"

What if a standard set of information about the universe of health, human, and social services were openly available?
About Project

Problem: What problem is this project trying to address?

In any given community, you’ll find a number of directories of “community resources”—health, human, and social services—that are available to people seeking help of various kinds. But these directories are produced in silos, and the resulting landscape of information is fragmented, chronically out of date, and not standardized. This costs precious time and energy, and often yields ineffective results.

Solution: What is the proposed solution? Please be specific!

Community resource directory data should be a 'common resource,' maintained and shared by the community that uses it. For this to be possible, we need a) common standards for describing and categorizing services, b) open platforms through which the data can be circulated, and c) realignment of incentives to change behavior among institutions and people. We propose an 'interoperability initiative' that will simultaneously undertake two objectives: 1) develop a set of open standards and protocols for referral information, and 2) support a set of pilot implementations by localized teams who will build open platforms serving comprehensive directory data about and to their communities.
Impact: How Does It Work

Example: Walk us through a specific example(s) of how this solution makes a difference; include its primary activities.

In DC (where I've been organizing around this issue for years), the local government operates a '2-1-1' system that is supposed to refer callers to various services depending on their needs. But the data was not maintained and is now out of date. In the meantime, several local service organizations produce their own internal referral directories (at significant cost); they have good data, because they use it every day. If this data could become interoperable, then distributed resources could be cooperatively reallocated, and the output could be shared. Organizations could refocus their efforts on effectively delivering this data to people in need. Furthermore, such data would be valuable for policy and program analysis, crisis preparedness, and more.

Impact: What is the impact of the work to date? Also describe the projected future impact for the coming years.

Locally, we've consolidated four DC resource directories into a common data pool (yielding the first open dataset to integrate local 2-1-1 data with IRS 990 data). But this is a 'flat file,' not yet structured in a way that is usable. To realize the potential, we need a) a common schema; b) a common taxonomy; and c) standard protocols for the circulation of this data. Globally, I've convened a table of major institutions that have the influence and reach to set new standards. These institutions include the Alliance of Information and Referral Systems (AIRS, which licenses 2-1-1s), Google.org (which has proposed a 'civic services schema' for web markup), Code for America (which has prototyped an open referral API), and entrepreneurs committed to developing an open service taxonomy (openeligibility.org). We have reached provisional buy-in to an initiative called 'OpenReferral.'
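To make the 'common schema' point concrete, here is a minimal sketch of what a standardized service record could look like. The field names and structure are illustrative assumptions for discussion, not the schema that the OpenReferral table will ultimately adopt.

```python
# Illustrative sketch only: a hypothetical standardized service record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Location:
    address: str
    city: str
    state: str
    postal_code: str

@dataclass
class Service:
    id: str                      # stable identifier, so records can be cross-referenced
    name: str
    organization: str            # the provider organization
    description: str
    taxonomy_terms: List[str]    # drawn from a shared, open service taxonomy
    eligibility: str             # plain-language eligibility criteria
    locations: List[Location] = field(default_factory=list)
    phones: List[str] = field(default_factory=list)
    last_verified: str = ""      # ISO 8601 date; stale records are the core data-quality problem

# An example record as it might appear in a shared community data pool:
pantry = Service(
    id="dc-0001",
    name="Emergency Food Pantry",
    organization="Example Community Center",
    description="Walk-in food assistance for District residents.",
    taxonomy_terms=["food", "emergency-food"],
    eligibility="Open to all DC residents.",
    locations=[Location("123 Example St NW", "Washington", "DC", "20001")],
    phones=["202-555-0100"],
    last_verified="2014-01-15",
)
```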

Spread Strategies: Moving forward, what are the main strategies for scaling impact?

OpenReferral will think globally and act locally. During the pilot project, our ‘global table’ will establish interoperable data specs, protocols, and tools, to be implemented by local teams of diverse stakeholders collaborating toward a common goal of open, standardized referral directory datasets. Through iterative development, we can establish sound standards and secure institutional buy-in. Along the way, we will produce demos and evaluation materials to demonstrate the value and suggest future potential. Between Google.org, AIRS, and CfA's networks, we have quite a market already at hand.
Sustainability

Financial Sustainability Plan: What is this solution’s plan to ensure financial sustainability?

Two different questions here: First, how can the data quality of open referral platforms be sustained over time? This question lies at the heart of our challenge strategy: we assume it’s possible, but lack precedents — so the challenge is designed to support local innovators as they try to figure it out. The second question: how will our new standard be managed over time? A core objective of the ‘global table’ is to propose possible answers.

Marketplace: Who else is addressing the problem outlined here? How does the proposed project differ from these approaches?

There are many startups in the referral-data space, such as Purple Binder, Aunt Bertha, and One Degree; some incumbent 2-1-1 systems are innovating as well. But all of these entities currently compete at the level of the data itself, each spending precious resources to collect the same information. By developing open standards and common data pools, we'll flip that script, enabling enterprise resources to be allocated more effectively toward innovation in the delivery of that data. This, in turn, will enable new value propositions to be brokered and evaluated.
Team

Founding Story

More like a long series of 'aha' moments. The first was the realization that my own organization's internal referral directory was an extremely valuable resource that could be pooled with other sources of data and 'opened up' to even greater effect. The second came when we tried to actually move in that direction: I realized that the challenge was not just about the data itself, but about establishing a common language for this data. This seemed technically and politically futile, until a third 'aha' moment, when I realized that we could catalytically converge a set of innovations emerging from different sources — in which case, our experiment in DC could contribute to the evolution of open referral systems around the country and world.

Team

First, a 'global' team, including a table of standard-bearers (AIRS) and cooperating newcomers (Google.org, Code for America, Aunt Bertha), an advisory group of information-and-referral experts, and 1-2 FTEs for project management and technology. Second, local teams, each including representatives from local government, community anchor institutions, an incumbent or startup data provider, and a local funder, plus 1-2 FTEs for project management and technology.
About You
Organization:
OpenReferral
About You
First Name

Gregory

Last Name

Bloom

About Your Organization
Organization Name

OpenReferral

Organization Country

Washington, DC, United States

Country where this project is creating social impact

Washington, DC, United States


Impact
Full Impact Potential: What are the main spread strategies moving forward? (Please consider geographic spread, policy reform, and independent replication/adoption of the idea or other mechanisms.)

OpenReferral will think globally and act locally. During the pilot project, our ‘global table’ will establish interoperable data specs, protocols, and tools, to be implemented by local teams of diverse stakeholders collaborating toward a common goal of open, standardized referral directory datasets. Through iterative development, we can establish sound standards and secure institutional buy-in. Along the way, we will produce demos and evaluation materials to demonstrate the value and suggest future potential. Between Google.org, AIRS, and CfA's networks, we have quite a market already at hand.

Barriers: What barriers might hinder the success of your project and how do you plan to overcome them?

We believe that communities have almost everything they need to establish open directory systems, but breaking past institutional inertia will require political will and logistical support. While the logic of standards development can be self-defeating (see http://xkcd.com/927), we now face a unique circumstance due to Google.org's recent proposal of a 'civic services schema' to the W3C standards body. This establishes a language for publishing such data on the web, and a clear civic prerogative for change. By making space for collaboration, we can ensure this shift is guided by diverse interests.
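For illustration, here is a hedged sketch of what such web markup could look like, expressed here as a Python dictionary serialized to JSON-LD. The type and property names follow schema.org conventions but are assumptions for discussion, not the text of the actual proposal.

```python
# Illustrative sketch of schema.org-style markup for a civic service.
import json

service_markup = {
    "@context": "http://schema.org",
    "@type": "GovernmentService",          # type name assumed for illustration
    "name": "Emergency Food Pantry",
    "serviceOperator": {
        "@type": "Organization",
        "name": "Example Community Center",
    },
    "areaServed": "Washington, DC",
    "audience": "DC residents in need of food assistance",
    "availableChannel": {
        "@type": "ServiceChannel",
        "servicePhone": {"@type": "ContactPoint", "telephone": "202-555-0100"},
    },
}

# Publishing this block on a provider's website would let search engines and
# referral platforms discover and index the service in a consistent way.
print(json.dumps(service_markup, indent=2))
```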

Sustainability
Partnerships: Tell us about your partnerships.

Globally, our team includes a balance of legacy systems, disruptive newcomers, and civic technology incubators, all committed to common solutions. Locally, we recognize that health, human, and social service providers (especially social workers, but also health counselors, legal clinics, educators, librarians, etc.) use this information every day and are the primary stakeholders. In software development terms, they are the 'product owners.'

Closing the Loop
How does your project primarily ensure that feedback delivers results?

Facilitate a conversation that combines wisdom of the crowds with the perspective of experts.

Please elaborate on your answer to the above question.

Most of these options apply. Our standardization process itself would hinge upon iterative cycles of feedback between our teams and users on the ground. In particular, the development of an open, semantic taxonomy will require a lot of folksonomic input. Meanwhile, an open platform can reduce the cost of producing its own data by harvesting updates via user feedback. Finally, the potential for open referral directory data to enable new kinds of feedback loops (e.g. tracking referral outcomes, program evaluation, community planning, crisis response) is immeasurable.

Languages: In what languages are you able to read and write fluently?

English.

2nd Round Questions
Thinking about your feedback loop: what information are you trying to get, from whom, to whom, and to bring about what change?

Our initiative includes three levels of development: 1) designing a set of standards and protocols, 2) building open referral platforms according to those standards, and 3) supporting the development (often through third parties) of applications that consume and contribute to this standardized open data pool. Each one of these levels will inform and be informed by the others. We want to know whether each component of our standard is truly necessary and sufficient; for that validation, we will test through the process of building platforms and experimenting with delivering data to users through those platforms.

We expect the platforms to serve three primary use cases (help-seeker, service provider, and analyst) so we will establish user groups that represent these use cases, in order to get direct and appropriate feedback on the platforms and the underlying standard. We want to test the assumption that an ecosystemic approach will work, so we will experiment with various applications and learn from what works and what fails. Finally, we want to ensure that there is a sustainability and governance plan for these standards and the systems that use them — and only through these learning processes above will we get a sophisticated understanding of what those plans should look like.

Through all of this, of course, there's also the actual service directory data itself. Our big vision is an ecosystem surrounding a common dataset that is increasingly improved by feedback from users. We need feedback loops to update, classify, and evaluate service records -- in order to decrease the cost of producing this data, improve its quality, and broaden its scope of use. The process we describe above is largely a set of interpersonal feedback loops, intended to ask and answer questions about how to construct the digital feedback loops that we'll need for a viable shared system.
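As a concrete illustration of the third level, the sketch below shows how an application might consume directory data from a conforming platform. The endpoint, parameters, and response fields are hypothetical; the point is that any application written against the shared standard could work against any platform that implements it.

```python
# Illustrative sketch: an application reading from a hypothetical open referral platform.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://referral-platform.example.org/api"  # hypothetical platform

def search_services(keyword: str, postal_code: str):
    """Query a conforming platform for services matching a keyword near a postal code."""
    query = urllib.parse.urlencode({"keyword": keyword, "location": postal_code})
    with urllib.request.urlopen(f"{BASE_URL}/services?{query}") as response:
        return json.loads(response.read())

# A help-seeker app, a case-management tool, and an analyst's script could all
# share this same call, because they share the same underlying standard.
if __name__ == "__main__":
    for service in search_services("food", "20001"):
        print(service["name"], "-", service.get("last_verified", "unverified"))
```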

What is the purpose of your feedback loop?

Improve quality of programs

If other, please specify
What mediums or mechanisms do you use to collect feedback? (check all that apply)

SMS, Paper, Website, Physical gathering.

If other, please specify
Could you briefly describe the way you collect the feedback?

With each of our ‘local teams,’ we will develop a participatory process of research, design, action and evaluation. This process will include the users themselves (likely front-line service providers) as they identify which questions to ask and develop (in partnership with our organizers) the testing methodology. The results of these tests will be written up by the organizers, with recommendations for changes, which will be shared with the standards-development body as input for the next iteration of the process.

What mechanisms are in place to protect people from retribution?

None

If other, please specify
What are the immediate benefits or incentives for people to provide feedback?

Other

If other, please specify

Possibly: in-kind training and other skill-building activities, offered in hour-for-hour exchange for users' time.

How do you ensure new and marginalized voices are heard?

Other

If other, please specify

We're designing this to ensure that the program follows the direction of end users and front-line service providers!

What are the incentives for the intended recipient to act on the feedback?

They understand that feedback is necessary

If other, please specify
How does the feedback mechanism close the loop with those who provided feedback in the first place?

Meetings discussing results with providers

If other, please specify
How is feedback published/transparent?

On a website

If other, please specify
Give two concrete examples of how feedback loops have brought a program or policy more in line with citizens’ desires.

In our program, sustained dialogue with government, as well as with the United Way, has shifted the conversation among people in positions of programmatic authority toward the development of 'open systems.' (However, we will need to demonstrate the viability of these open systems, i.e. through more feedback.) Meanwhile, the initial design of an open platform, the Ohana project developed by Code for America's San Mateo fellows, offered only a one-way API that lacked any ability to let users curate the community's data. Early feedback from user groups has established that write-to functionality is a high priority for future development.
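To illustrate the kind of 'write-to' functionality that this feedback prioritized, here is a minimal sketch of how a front-line user's correction might be submitted back to a platform for curation. The endpoint and payload are hypothetical, not the Ohana project's actual API.

```python
# Illustrative sketch: submitting a suggested correction to a hypothetical platform.
import json
import urllib.request

def suggest_update(base_url: str, service_id: str, field_name: str,
                   new_value: str, note: str = ""):
    """Submit a suggested correction to a service record for review by the data's curators."""
    payload = json.dumps({
        "service_id": service_id,
        "field": field_name,
        "suggested_value": new_value,
        "note": note,
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{base_url}/suggestions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# For example, a social worker flags a disconnected phone number during an intake call:
# suggest_update("https://referral-platform.example.org/api", "dc-0001",
#                "phones", "202-555-0199", note="Old number disconnected as of March.")
```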

If there was one thing you could change to increase the impact of your feedback loop, what would it be?

As alluded to above, I wish we had the capacity to offer meaningful non-monetary rewards to end users and service providers who participate in our design and evaluation process. I've explored strategies such as time-banking, in which each hour of participation earns a credit that can be redeemed for time offered by another member of the community, and I think something like that could be effective; but right now, the time-banking community does not seem vibrant enough for this to be a viable strategy. We need other means of valuing the labor of participation without depending upon money, which is both scarce and apt to warp the dynamics of community-building.

What are your biggest challenges or barriers in “closing the feedback loop”?

An “expert paradigm” where the perspectives of “non-experts” are not valued

If other, please specify
Are you aware of The Feedback Store?

No, but I can see myself using it as a resource

What are the main uses you can envision for the Feedback Store?
What is the one thing you would most like to see changed to improve the competition process?

I'm not sure what we're competing for!

What are you doing to make sure that feedback providers know that they are empowered by the information they can give, and that they know exactly what information they are providing?

We are currently seeking to contract a participatory action research consultant, who will work with our regional organizers and stakeholders to design such a process from scratch! Perhaps this contest can enable us to do that.
