Henry Farrell (blogger at Crooked Timber)
Steven Berlin Johnson (author of Emergence, Where Good Ideas Come From, and the forthcoming Future Perfect: The Case for Progress in a Networked Age)
Tom Lee (director of Sunlight Labs at the Sunlight Foundation)
Clay Shirky (author of Here Comes Everybody and Cognitive Surplus)
Tom Slee (author of No-One Makes You Shop at Walmart)
Victoria Stodden (assistant professor of statistics at Columbia, Big Data public intellectual)
Aaron Swartz (in no need of introduction to CT readers)
Matthew Yglesias (author of Slate’s Moneybox column).
Open Data are the basis for government innovation. This isn’t because open data make government more transparent or accountable. Like Tom Slee, I have serious doubts about whether they do either of those things. In any event, shining a light on the misdeeds of ineffective institutions isn’t as imperative as redesigning how they work. Instead, open data can provide the raw material to convene informed conversations inside and outside institutions about what’s broken, and the empirical foundation for developing solutions together.

The ability of third parties to participate is what makes open data truly transformative. The organization that collects and maintains information is not always in the best position to use it well. For example, US regulators have compiled hospital infection rates for a long time. Accessible only to government professionals with limited resources, the information was never put to adequate use. When HHS published the data online in a computable format, Microsoft and Google were able to mash it up with mapping data to create search engines that allow anyone – from the investigative journalist to the parent of a sick child – to decide which hospital to choose (or whether it is safer to stay home). When data are open – that is, legally and technically accessible and capable of being machine-processed – those with technical know-how can create sophisticated and useful tools, visualizations, models and analyses, as well as spot mistakes or mix and mash across datasets to yield insights. As Matt Parker put it: “By making data open, you enable others to bring fresh perspectives, insights, and additional resources to your data, and that’s when it can become really valuable.”
Solving complex challenges requires many people with diverse skills and talents working together. In modern society, we weave our collective expertise together, enabling us to make complex products such as cars and computers that we cannot make alone. The more complex and diverse the products, the more successful the society over time, measured in terms of both wealth and well-being.
Educating our young or curing cancer are the cars and computers of governance. They are complex social problems that require us to bring our diverse talents to bear. But our centralized institutions of government do not adequately leverage our collective knowledge to improve governance and solve problems. We can’t foster complexity if we limit public participation to voting in annual elections or commenting on already written rules. There’s no excuse for failing to take advantage of people’s talents, abilities and desire to play a role in governing ourselves and our own communities.
Hackathons as a Model for Engagement
Open data create obvious new ways for geeky citizens to play a role in governance. All over the world, local transportation authorities are making schedules available for free and then inviting tech-savvy citizens – civic coders – to create iPhone apps that tell commuters when their bus or train is coming. There’s obvious value to the public as well as to institutions from having better data to inform planning, policymaking and the expenditure of resources. But what’s exciting about mashathons, hackathons, data dives and datapaloozas (a Todd Park favorite term) is that these are intelligible models for taking action.
Wikipedia works because we know what tasks are required of us to write an encyclopedia entry. Only the high priesthood of government professionals knows how to write a law, craft a policy, draft procurement RFPs, or appropriate funds. Hackathons aren’t the only model for participatory governance but they are one way for us to get involved that showcases how it might be possible to move away from centralized to distributed action.
Making government more participatory wouldn’t have worked as well if we had only focused on releasing data-as-in-FOIA about the workings of government—politicians’ tax returns, who-met-with-whom, and even spending data. By defining High Value Data to include information: “to increase agency accountability and responsiveness; improve public knowledge of the agency and its operations; further the core mission of the agency; create economic opportunity; or respond to need and demand as identified through public consultation,” the hope was to speak to more people’s interests, talents and abilities. We took a lot of flak at the time from those with passion for specific kinds of data. I have written previously that the “open” in open gov was never meant to suggest data-as-in-FOIA but, rather, meant open as in open innovation and therefore always had to go beyond “civil liberties data” to include all the information that government collects as well as information that citizens might crowdsource and provide to make government smarter.
The Hard Work of Opening Data
Moving toward open innovation as a default way of working in government is not easy. It takes a religious fervor (hence the sense of a movement) for those who want to open up data.
It requires doing the hard and costly work of persuading data owners to shift from paper to digital and machine-readable formats and then to release that data despite political and technical challenges. But to foster engagement also requires curating the guest list for the hackathons to get subject matter experts, stakeholders, data geeks, activists, designers, computer scientists, data junkies and entrepreneurs together.
The host of a good dinner party doesn’t just leave the guests to fend for themselves. He introduces people, points out what they might have in common and seeds the conversation. Transit camps have been so successful because the conversation starts itself. Everyone wants to know when their bus is coming. But give people a data set about freight routes for transshipping goods or Form 990 tax returns and some explanation might be required.
Creating a participatory innovation ecosystem is about a lot more than just publishing data sets. It requires doing the hosting, convening, persuading, and demonstrating involved in inviting diverse people to participate. The institutional players have to be prepared to collaborate with the innovators; those outside government have to know how to collaborate; civil society activists have to ensure that innovators know the problems that need solving; and research is needed to figure out what works.
Using Data to Re-Regulate
The curatorial function is about coming up with strategies for using data to develop innovative solutions to protect consumers and serve the public interest. If we merely throw data over the transom, entrepreneurs, especially large ones, are likely to be the only entities with the wherewithal to do anything with the raw information.
But when we focus on data as a means to the end of bringing people with diverse skills together to solve problems then open data can improve upon the blunt instrument of regulation enforced by litigation.
With open data (also called Smart Disclosure), the US government is experimenting with using light-touch regulation combined with technical innovation (and a firm belief in behavioral economics) to create consumer decision tools. For example, the Department of Transportation enacted a rule requiring airlines to make all their fees and charges transparent. Because the data is open, innovators can create new visualizations to help consumers understand the costs and make informed decisions. No Child Left Behind requires states to gather and report school performance data, which is now being used by GreatSchools.org (in cooperation with the Department of Education) to help parents choose between public schools. The tool is in use by 40 to 50 percent of all K-12 households. The White House Open Educational Data Initiative is spurring university presidents to provide data voluntarily to help students and parents compare college costs and college aid “so they can make more informed decisions about where to enroll.”
But until we stop talking about data and start talking about complex and collaborative governance, we will fail to appreciate how open data can protect consumers, lessen the burdens on entrepreneurs, and catalyze more effective institutions.
Both Congress and the White House have taken initial steps toward creating greater transparency in reporting federal spending. While preliminary, these efforts could have a far-reaching impact on how governments collect and publish data from the entities they regulate.
Done right, new rules can create greater transparency and accountability while reducing the paperwork burden on regulated entities. At present, however, both sides' proposals fall short. They fail to recognize that spending is only one type of data repeatedly and inefficiently collected from regulated entities.
We offer some suggestions for improvement that could reduce compliance and investment costs, improve corporate accountability, strengthen consumer protection, and create new research and reporting test beds to foster data-driven journalism and scholarship about the life of organizations.
The DATA Bill
In mid-June, Rep. Darrell Issa (R-CA) introduced the Digital Accountability and Transparency Act of 2011 (DATA), calling for quarterly reporting of all federal spending, including grants, contracts and subcontracts by both the recipients and the awarding agencies to an independent successor to the board previously established to oversee the implementation of the Recovery Act. DATA would effectively strip the Office of Management and Budget (OMB) of much of its oversight over federal spending reviews. However, DATA also calls for OMB to set standards about which data elements are to be used in reporting and to follow international, open and non-proprietary models such as XBRL. All reported data is to be published online and, to the greatest extent possible, the process automated to maximize transparency.
Executive Order on Accountability
On the same day, the President issued an executive order on Delivering an Efficient, Effective, Accountable Government, calling for agencies to reduce fraud, waste, and abuse and centralizing control over these accountability efforts with the Chief Performance Officer in OMB. The EO, too, calls for the creation of a new Transparency and Accountability Board, in this case, however, comprising agency personnel from the executive branch.
Without opining here on the jurisdictional debate between branches of government over the control, composition and authority of the new board, we note that there are provisions that could be improved, both in the draft legislation and in any subsequent guidance, to serve the bipartisan goals of greater transparency regarding how data is collected and published.
First, the DATA bill provides that reported items include the name and address of the recipient, but there is no requirement that corporate persons identify their beneficial owner or any parent-subsidiary corporate relationships. This week the Securities and Exchange Commission proposed draft "know your counterparty" rules for complex financial transactions known as swaps as part of its new package of rules implementing the Dodd-Frank legislation. In the same way, it is entirely doable to add simple provisions to the DATA bill that would mandate disclosure of the ownership and structure of recipients at whatever level of specificity will best enable the public to know who is really receiving the money and how they relate to other recipients.
Second, entities could be mandated to use consistent legal entity identifiers by, for example, picking their corporate entity from a selection list. This will be useful for building a more consistent, open and standard library of legal entity identifiers within the federal government. By moving toward a standard list of names in the federal spending domain, we will help agencies to amass a library of common corporate names across different regulatory regimes. Currently, one federal agency might refer to a company as ABC Inc. while another uses ABC Corp. We can help solve this problem by mandating open, universal identifiers here rather than exacerbate it by creating yet another IT system with yet another set of disparate naming conventions.
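The selection-list idea above can be sketched in a few lines: agency-entered name variants are normalized and resolved against a single canonical identifier table, so that "ABC Inc." and "ABC Corp." land on the same record. This is a minimal illustration, not any agency's actual system; the identifiers, name variants, and suffix list here are all hypothetical.

```python
# Minimal sketch of canonical legal-entity lookup. Variant names an
# agency might record are normalized (case, punctuation, legal-form
# suffixes stripped) and resolved to one hypothetical identifier.
import re

CANONICAL = {
    "abc": "LEI-0001",           # hypothetical identifier
    "xyz holdings": "LEI-0002",  # hypothetical identifier
}

SUFFIXES = {"inc", "corp", "co", "llc", "ltd", "incorporated", "corporation"}

def normalize(name):
    """Lowercase, strip punctuation, and drop legal-form suffixes."""
    tokens = re.sub(r"[^\w\s]", "", name.lower()).split()
    return " ".join(t for t in tokens if t not in SUFFIXES)

def entity_id(name):
    """Return the canonical identifier for a name variant, or None."""
    return CANONICAL.get(normalize(name))

# Two agencies' different spellings resolve to the same entity:
assert entity_id("ABC Inc.") == entity_id("ABC Corp.") == "LEI-0001"
```

A real system would of course need far more than suffix-stripping (addresses, registration numbers, human review), but the payoff is the same: one identifier per entity, shared across agencies.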
Third, while mandating a single way of naming a legal entity is important, it's not sufficient, because every agency also collects different information, e.g., the names of facilities or securities controlled by that entity. We shouldn't be designing and building a system for reporting spending in a vacuum, focusing only on those limited data elements. Instead, DATA and bills like it should mandate a process that leads toward a single, universal entity identifier for naming firms, with the requirement that additional data fields be open and interoperable. We want the spending data to be able to "talk" to other data collected about corporate compliance and innovation so we can “mash up” data across agency responsibilities - for example, linking patent activity with the data about federal contracting. The DATA bill describes only a limited universe of approved standards and the EO is silent on the topic. Instead, any new requirements should mandate the use of non-proprietary, interoperable data elements not subject to any license fees or restrictions on reuse.
Fourth, data release through the federal data.gov, or via the many data sharing sites being developed by states, cities and tribal governments throughout the US, drives innovation and the development of new startups. This side benefit to our economy should be augmented in data transparency legislation by requiring that new data standards promulgated for use in reporting federal spending be subject to public consultation, letting developers and others help make sure the systems are open. The data should also be available in a machine-readable format to encourage this sharing, with transparency legislation mandating the development of APIs for information sharing. The DATA bill does recognize the need to allow data to be linked, but only in an ambiguous, throwaway reference to Uniform Resource Identifiers (URIs) in 3612(d)(3)(H). Strengthening this requirement would significantly lower the effort required for reporters, economic researchers, and systems developers to reuse this data in our increasingly information-driven economy.
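As a rough illustration of what machine-readable, URI-linked reporting might look like, here is a hypothetical spending record in which each party is identified by a URI that other datasets could join against. The URIs and field names are invented for the example, not drawn from any actual federal schema.

```python
# Hypothetical machine-readable spending record. Identifying the
# recipient and agency by URIs (rather than free-text names) lets
# third parties link this record to other open datasets and dereference
# the identifiers for more detail.
import json

record = {
    "award_id": "https://example.gov/awards/2011-000123",
    "recipient": "https://example.gov/entities/LEI-0001",
    "awarding_agency": "https://example.gov/agencies/transportation",
    "amount_usd": 250000,
    "period": "2011-Q2",
}

print(json.dumps(record, indent=2))
```

The point is not the particular format; it is that every identifying field is a stable, shared key rather than a locally invented string.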
Fifth, there is no authority in either the EO or the DATA bill to create pilot projects and iterate. We don’t understand the problem of inconsistent spending reporting well enough to design -- whether by the legislative or executive branch -- "the" system. Instead, we ought to be running small-scale pilots (potentially funded by prize-backed challenges), seeing what works, and trying again. Further, if the data is made available in machine-readable ways, new systems to make the data more transparent and useful can and will arise outside the government through crowdsourced design and use. This will reduce development costs while allowing more designs to be explored. Inside government, making large-scale “legacy” data systems interoperable is a hard problem that we are trying to retrofit without incurring great expense. This requires more humility and the latitude to try new policies, technologies, rules and standards. That isn't reflected in the legislation or in the composition and role of the Boards proposed. (We note that the UK’s Data Transparency Board includes a combination of government representatives and outside experts from corporations and academia, and would encourage the US government to consider a similar approach.)
Outside the government, we have been advocating the creation of an Open Organizational Data Project (http://dotank.nyls.edu/orgpedia), which is committed to assisting with the development of open, interoperable, non-proprietary standards for reporting data collected by government about firms and other corporate entities. With the support of the Alfred P. Sloan Foundation, we are at the beginning stages of thinking through the legal, policy and technology framework for a data exchange that can facilitate efficient comparison of organizational data across regulatory schemes as well as allowing public reuse and annotation of that data. Currently, we are convening workshops with relevant stakeholders and developing a functional prototype of such a system. As part of this project, we will also continue to curate feedback on legislative and regulatory approaches to achieving greater transparency, efficiency and accountability.
Updated April 19
Clarification: The following are notes of the March 30th workshop at the Sloan Foundation and reflect the views expressed by participants in the workshop, not my opinions.
For more on ORGPedia, see the new ORGPedia project page at http://dotank.nyls.edu/orgpedia/.
On March 30th twenty economists, technologists, and government officials (Download Participant List) convened in person and by telephone at the Sloan Foundation in New York to discuss creating an open numbering scheme and platform to facilitate the comparison of data about organizations across levels of government and agencies in order to:
This ORGPedia project is convening a wide range of experts to inform the design and scope of:
ORGPedia is an experiment in designing an information system that effectively combines authenticated government data with user-contributed information – a hybrid wiki – to enhance public understanding about organizations and firms.
During the March 30th discussion, participants provided their thoughts on the opportunities, challenges, and strategies for implementation, including ideas for how to prototype and pilot a first phase of the system, from the perspective of government and research communities.
This is the first in a series of five planned workshops. The Sunlight Foundation will host a second meeting on April 8th to focus on issues of corporate accountability and compliance. There will be subsequent meetings focused on the needs of those businesses who consume business intelligence; the technology design; and the international opportunities and implications.
The following are notes summarizing the discussion among participants at the March 30th Meeting:
There are 18 million registered legal entities in the United States. Having the ability to compare and track data about them would make it possible to: make information more transparent to the public; facilitate information sharing across agencies and states; and streamline regulatory compliance by pre-populating information requests with information about entities.
Imagine if, as with the Encyclopedia of Life, which creates a page for every organism on earth, we had a system with a page for every legal entity on earth. Imagine if we had an “ISBN number” for every entity. It would enable all kinds of new services and research. This has become possible in the last few years as a result of advances in web technology and policies for opening up access to public data. The challenge is that firms evolve faster than fish and firms can morph into new firms with different names and owners through changes in control.
At root, we must address the fundamental microeconomic problem of identifying the boundaries of the firm. What if Adam Smith’s pin factory had a financing arm? Or an exclusive steel supplier? We now have the technology to represent these relationships and make them transparent.
Benefits to Government:
Having a stable, unique identifier system by means of a single number or a data dictionary to translate across numbering schemes (or both – a single entity identifier plus a way to translate other common fields across schemes) would enable comparison of corporate activity across levels of government, across states and across agencies. Right now we don’t know whether a company doing business in one state is the same as, or related to, a company doing business in another state. So when malfeasance is committed in one place, we miss the opportunity to be on the lookout before it happens in another state. It would be incredibly valuable to have a way to generate early warning signals.
Having a unique identifier or the ability to pull data from a common and authenticated collection of data about an entity would reduce the transaction costs to entities wishing to comply with requirements across multiple states.
The federal government alone spends $3.5 trillion. The public should be able to slice and dice that spending. In order to make the information about how government spends accessible to people, we need to be able to trace this money even when companies change ownership and name. For example, when Boeing acquired McDonnell Douglas, a search today does not connect these two entities to provide an accurate picture.
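A toy sketch of the tracing problem: if succession events (who absorbed whom) were recorded as open data, historical awards could be rolled up to the surviving entity automatically. The award amounts below are invented for illustration; only the Boeing/McDonnell Douglas merger itself is real.

```python
# Sketch of rolling historical spending up to surviving entities.
# SUCCESSOR records "absorbed entity -> surviving entity" events;
# walking the chain attributes old awards to today's owner.
SUCCESSOR = {"McDonnell Douglas": "Boeing"}  # 1997 merger

AWARDS = [                    # (recipient at time of award, $M) - illustrative
    ("McDonnell Douglas", 120),
    ("Boeing", 300),
]

def current_entity(name):
    """Follow succession links until reaching the surviving entity."""
    while name in SUCCESSOR:
        name = SUCCESSOR[name]
    return name

totals = {}
for name, amount in AWARDS:
    owner = current_entity(name)
    totals[owner] = totals.get(owner, 0) + amount

assert totals == {"Boeing": 420}  # both awards now connect to Boeing
```

Without such a succession record, the two awards remain two unrelated rows, which is exactly the disconnect described above.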
Even though we track to the subcontractor level, we have none of the history to connect affiliates and see relationships.
This makes having a unique identifier a priority. If we had the ability to trace changes such as mergers, we could better understand the connection, if any, between government grants/contracts and campaign contributions; we could spot fraud and remove offending companies from the rolls across agencies.
There was some discussion about the need to keep certain information private, especially about the individuals involved, even as we maintain public information at the entity level.
Benefits For Researchers:
Think about scholars working with the firm as the unit of analysis, each incurring the same redundant transaction costs – this cries out for a public data set.
There are huge transaction costs associated with doing work about firms. Data sets tend to be proprietary and limited in scope, and the information is at best outdated and, at worst, just terrible.
Accounting, business strategy, information technology management, finance, and political science scholars are all engaging in the same socially wasteful, redundant activity of trying to clean and match this data. Freeing up some of the time spent on cleaning data would free up researcher capacity.
For example, the New York Times did a Pulitzer Prize-winning piece on worker deaths at a manufacturing firm. It was tremendously labor-intensive, and next to impossible, to investigate the environmental compliance record of the same entity, even though preliminary analysis showed the firm was turning in the same toxic release statements to regulators each year rather than developing new figures.
If we wanted to “mash up” OSHA compliance data with EPA compliance data, we can’t do it today. Researchers have the interest, but the incompleteness of the data makes it extremely hard.
Over 50% of the business outputs in the United States come from intangibles. But there is no way to match up firms with IP output because we can’t connect patent registrations to the entities that hold the IP. At a time when innovation is becoming more important as a driver of the economy, this work is more important, not less.
The field of business history is dying off because of the difficulty of doing empirical research.
Technologically, this problem is not unlike the naming issues we face today in trying to create websites (or banking codes) to identify entities – e.g., sloan.org – where we’re now trying to make sense of the secondary pages like the About page and address page, which search engines know how to do.
We have the ability to map when a firm is taken over, complex interdependencies, who owns what.
Visualizations will help make this data more usable. We can show where data came from, whether it is authenticated government data, or contributed by the public.
The technology platforms for building this kind of site exist. There are no showstoppers. Some work will be needed at the applied-research level to transition the technology from research to practice, but there are existing models.
The Encyclopedia of Life (eol.org), funded by Sloan, provides some important organizational lessons learned about running a system of this type and complexity with a mix of authoritative and open information.
Adding a single field to existing identifier systems (i.e., a universal identifier) might not be hard. Adding several fields to track changes in control, however, could be costly, though there are Web technologies that can mitigate most of this cost if properly deployed.
What is the right role of the government? Should the government own such a system or should it be a stand-alone non-profit? What is the right governance structure to ensure legitimacy?
Pilot and Partners
Three areas of focus for potential pilot/prototype came up:
The National Association of Secretaries of State would be a natural partner for implementing the necessary changes.
Also check out B-Lab at http://www.bcorporation.net/, which certifies a younger, more entrepreneurial set of companies committed to social benefit that might be willing to test contributing more of their data for use in a pilot.
Check out: Bottega and Powell, Creating a Linchpin for Financial Data: Toward a Universal Legal Entity Identifier (http://www.federalreserve.gov/pubs/feds/2011/201107/index.html)
Check out: UK Companies House, which does impose a legal entity identifier but would benefit from the win-win of getting companies to share their data through such a platform – gains for the companies and greater transparency. There will be a June/July paper on corporate reporting.
Check out the book: The Demography of Corporations
The House of Representatives is proposing cutbacks to the E-Gov fund, reducing it to $2 million.
Without the funding, the USA will not be able to maintain the national spending data portal (USASpending.gov) and the national data transparency portal (Data.gov).
These are the tools that make openness real in practice. Without them, transparency becomes merely a toothless slogan.
There is a reason why fourteen other countries, with governments of both left and right, are copying data.gov. Beyond the democratic benefits of facilitating public scrutiny and improving lives, open data of the kind enabled by USASpending and Data.gov save money, create jobs and promote effective and efficient government. As the Economist writes: “Public access to government figures is certain to release economic value and encourage entrepreneurship. That has already happened with weather data and with America’s GPS satellite-navigation system that was opened for full commercial use a decade ago. And many firms make a good living out of searching for or repackaging patent filings.”
For those interested in the topic, there's a longer discussion here in the Open Data, Open Society report. But here are a few, short reasons.
By making available the raw information about how government spends money, these sites afford Congress, among others, the opportunity to analyze the data and spot patterns of fraud, waste and abuse. Here's one example published today. Because of the availability of data on these sites, the US attracts free evaluation by academics and others. This kind of (free) feedback loop aids in analyzing what works and saves the taxpayer money. But we can't streamline government without access to the data.
Moreover, hidden within the troves of public data being made available through data.gov (and in the pipeline on their way to data.gov) is information that could translate into private sector job growth and the next GPS or genomics industry.
Here are a number of examples:
BrightScope has made a profitable business of using government data about 401(k) plans. They’ve raised $2 million in venture capital, hired 30 people, and are likely to double headcount to at least 60 by the end of the year. They did $2M in sales in 2010 and are currently on a $10M+ run rate for 2011.
The National Oceanic and Atmospheric Administration (NOAA) in the United States has a ~$5 billion annual budget. Through the open release of data, NOAA is catalyzing at least 100 times that value in the private sector market of weather and climate services, including market and non-market valuations. As just one example of a market that uses NOAA data, the total value of weather derivative trading has been estimated at $15 billion in 2007-2008.
The ~$1 billion it spends on the National Weather Service enabled weather.com, which has since been sold for $3.5 billion.
The Health datasets (health.data.gov) on Data.Gov are unleashing the wider software development community to build robust tools that stimulate entrepreneurship and help Americans lead healthier lives.
The availability of ten years of Federal Register data sets on Data.gov enabled three young programmers to design the new FederalRegister.gov, the daily gazette of government, and, at the same time, to do business with the Federal government for the first time.
Promoting Innovation and Efficiency
By making government data available through these E-Gov programs, public officials can reach outside of government for creative answers to tough problems, which in turn helps identify strategies that are more effective and save money.
HHS CTO Todd Park gives several examples here of how the 1,170 health data sets now available on data.gov are creating the "rocket fuel" for public sector innovation. In this era when government is trying to curtail spending, E-Gov technology creates opportunities to identify creative solutions for delivering services in new ways. The value from “doing more with less” is potentially the biggest payoff of the kinds of tools supported by the E-Gov fund. And if Congress ever wants to cut the number of regulations, it has to support the availability of data to inform the identification of more efficient strategies.
If we care about saving money, creating jobs and doing more with less, we should ensure that this budget remains intact.
I'm trying something new. As a condition of granting any interviews, I'm now asking, quid pro quo, to interview the interviewer. I find that reporters and writers often have more breadth of knowledge about a field than anyone else. And I want to learn something!
Recently, I talked with Laurence Millar, who was New Zealand government CIO until May 2009 and is the editor-at-large for FutureGov magazine. Here's the first of my comments to him. What follows is what he said to me about open gov in New Zealand:
The current government in New Zealand took some time to cotton on to open government. I think that, in general, the left wing's political values are more naturally attuned to the values of open government. As you point out, the UK has continued the work started under the previous government, so maybe it is more to do with the timing of the Open Government movement.
We have established a group of ICT Ministers, led by Bill English, who is Deputy Prime Minister and Minister of Finance. I think he likes Open Government because he sees it as a way he can enlist the public as agents of change to improve government performance. In NZ, we have a politically neutral public service, so incoming governments always look for levers that they can use to move forward with their policies -- to move the bureaucracy. In New Zealand, open gov has been driven by enthusiastic individuals in the public service who hold open government values personally. It is bottom-up rather than top-down from a manifesto.
Ministers have endorsed the Directions and Priorities for government ICT, which include a statement of support for Open and Transparent Government, with three workstreams
It is not quite as snappy as your mantra – transparency, collaboration and participation.
There is a group of agency Chief Executives, led by Land Information New Zealand, who provide leadership in the area of Open and Transparent Government, and there are champions in each department to push open government. The initial momentum has definitely come from the bureaucrats, bottom up, rather than from the manifesto of the politically elected leaders.
Beth: Tell me about the most interesting and innovative projects, like the Mix-and-Mash Competition.
We discovered that if you find something for people to rally around, it creates momentum. The Mix-and-Mash Competition, for instance, is similar to your Apps for America. The winner was a mashup of walking tracks, using data provided by the Department of Conservation. There was quite a lot of anxiety about publishing data that was not accurate, but what they found was that people were willing to update the map based on their experience on the ground. So we saw a virtuous cycle of crowdsourced data-quality improvement.
We've also had a powerful reminder of the power of crowdsourcing after the Christchurch earthquake. Eq.org.nz is a community-based website fed by e-mail, SMS, Twitter, and Facebook notifications like "this ATM is working," "this supermarket has food," "you can get fresh water here," and "pharmacies are available." The information is then pushed back out via Twitter, RSS, and smartphone apps, and printed maps are distributed at community briefings. They even send out information via teletext. The site built on the work of CrisisCommons and enlisted about 120 volunteers from around the world to do quality assurance on the information, operating 24x7.
We don't have as many people here. We don't have the depth and cross-section of .gov, .org, and .edu people who can work with government data to improve the quality of life, but we are building and growing this community.
Official sources can only process so much information, and they rightly focus on life-and-death, rescue, and infrastructure issues. There is a lot more involved in returning to normal daily life, and so the site extends the information published by official sources. I have been saying that the site provides information that is not important enough to be official information, but is still important to people recovering from a civic emergency. It is the first time I've seen cognitive surplus in action (or, as you call it, civic surplus).
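The eq.org.nz workflow described above (ingest reports from many channels, have volunteers verify them, then republish via feeds) can be sketched in a few lines of Python. This is a minimal illustration, not the site's actual code; the class and method names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str       # channel the report came from: "sms", "twitter", "email", ...
    text: str
    verified: bool = False

class CrisisBoard:
    """Toy model of a crowdsourced crisis-information site."""

    def __init__(self):
        self.queue = []      # raw, unverified reports
        self.published = []  # volunteer-verified reports

    def ingest(self, source, text):
        """Accept a raw report from any channel."""
        self.queue.append(Report(source, text))

    def verify(self, index):
        """A volunteer confirms a queued report and publishes it."""
        report = self.queue.pop(index)
        report.verified = True
        self.published.append(report)
        return report

    def feed(self):
        """Published items, newest first (what the RSS/Twitter feeds would carry)."""
        return [r.text for r in reversed(self.published)]

board = CrisisBoard()
board.ingest("sms", "This ATM is working")
board.ingest("twitter", "Fresh water available at Hagley Park")
board.verify(0)                 # volunteer QA on the first report
print(board.feed())             # ['This ATM is working']
```

The key design point is the two-stage queue: nothing reaches the public feed until a volunteer has looked at it, which is how 120 distributed volunteers could keep quality high around the clock.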
Beth: So what happened with the police wiki?
Not much happened to build on the experience; we did have some other successes in e-participation at the time, but nothing like using a wiki to revise legislation. I guess it was our Bob Beamon moment – it was so far ahead of the thinking at the time, no-one has yet caught up.
Over the last two years, the public sector has begun to experiment with open innovation by releasing data, trying new forms of citizen engagement, pursuing multi-sector partnerships, using prizes as incentives to solve problems and using other techniques to enable government and the public to solve problems together.
Because of the rapid pace of "open gov" and "gov 2.0" innovation, there is an urgent need to figure out what's working and what's not, and to develop metrics that can be put in place at the start of new projects to measure their impact. If governments are to accelerate the pace of innovation, the research community must help ensure that these innovations are improving the functioning of government institutions, empowering citizens, and strengthening democracy.
I am really excited to be speaking at this upcoming event. It's a "Noah's Ark of scholars" in that there are no more than 2 people from each discipline. Should be tremendously interesting and, hopefully, launch a community of researchers interested in and willing to study the future of institutions in the 21st century.
According to the organizers, the conference will be limited to .edu and .gov attendees due to space limitations. But sessions will be videotaped and made available online.
Open Government Research & Development Summit
March 21-22, 2011
Monday 1:00 - 6:30 plus reception
Tuesday 8:30 – 4:45
Please R.S.V.P. by March 16, 2011 to firstname.lastname@example.org
National Archives and Records Administration
McGowan Theater, National Archives Building
700 Pennsylvania Avenue, NW
Washington, DC 20408-0001
The summit will set the foundation for a robust R&D agenda that ensures the benefits of open government are widely realized, with emphasis on how open government can spur economic growth and improve the lives of everyday Americans. The President's Council of Advisers on Science and Technology noted the importance of establishing an R&D agenda for open government in their recent report. This will be the first opportunity for researchers, scholars, and open government professionals to begin a discussion that will continue at academic centers throughout the country over the next few years.
Government innovators will talk about openness in the context of education, health, and economic policy, and international open government. Speakers include Aneesh Chopra, U.S. Chief Technology Officer, Todd Park, Chief Technology Officer of the U.S. Department of Health and Human Services (HHS), and David Ferriero, Archivist of the United States.
Panels made up of scholars, activists, and present and former policymakers will then discuss the questions that researchers must grapple with in order to ensure lasting success in the open government space, such as how to release data safely without creating mosaic effects. Panelists include Jim Hendler (Rensselaer Polytechnic Institute), Noshir Contractor (Northwestern University), Archon Fung (Harvard University), Chris Vein (U.S. Deputy Chief Technology Officer), Beth Noveck (New York Law School), and Susan Crawford (Yeshiva University).
The National Archives and Records Administration (NARA) and Networking and Information Technology Research and Development (NITRD) are hosting this summit, with support from the MacArthur Foundation. The conference is free to attend. We are preparing an agenda for distribution.
From Pew Internet and American Life, a new survey shows that "if citizens feel empowered, communities get benefits in both directions. Those who believe they can impact their community are more likely to be engaged in civic activities and are more likely to be satisfied with their towns."
Here's the exec summary:
Surveys in Philadelphia, San Jose, and Macon show that those who believe city hall is forthcoming are more likely than others to feel good about: the overall quality of their community; the ability of the entire information environment of their community to give them the information that matters; the overall performance of their local government; and the performance of all manner of civic and journalistic institutions ranging from the fire department to the libraries to the local newspaper and TV stations.
In addition, government transparency is associated with residents’ personal feelings of empowerment: Those who think their government shares information well are more likely to say that average citizens can have an impact on government.
The Center for Community Development Investments published, "Building Scale in Community Impact Investing through Nonfinancial Performance Measurement" by Ben Thornley and Colby Dailey. Based on six months of empirical research in 2010, the report is the subject of a new issue of the Community Development Investment Review from the Federal Reserve.
The lack of transparency in community investing is hampering opportunities for innovation and greater effectiveness, not to mention accountability. Given that many highly diverse investors seeking to create social impact with their money must go through Community Development Financial Institutions to invest, it would seem both urgent and manageable to introduce better performance metrics.
Investors are putting money into domestic, low-income communities in the hope of generating both financial and non-financial returns. Despite the fact that "nonfinancial performance measurement directly informs the investment process" and is essential to providing "latent sources of capital with market-level information on the tradeoffs between financial and social return," there has been a lack of effective measurement tools to understand investor preferences in a complex and diverse process that seeks to maximize impact and growth while avoiding risk.
Surprisingly, many impact investors fail to report non-financial impacts at all (pp. 17-18)!
The report examines 4 questions:
1. Does nonfinancial performance measurement really matter for investors?
2. If it does matter, is nonfinancial performance measurement even possible?
3. If nonfinancial performance is possible to measure, what form should it take?
4. How will nonfinancial performance measurement increase community impact investing?
In particular, the authors survey 8 existing but underused measurement tools. They go on to identify both theoretical and practical barriers to effective measurement including those that stem from the diversity of investor preferences, lack of readily available tools, and an absence of accountability in the system.
They conclude that "nonfinancial performance measurement is critical because, simply put, willingness to pay is partly determined by the quality of the information that investors use to make decisions about financial and nonfinancial tradeoffs."
The report does not commit to a single way forward that will lead to adoption. The authors suggest four possible avenues: 1) industry self-regulation; 2) Community Reinvestment Act reform; 3) a CDFI Fund regulatory mandate; and 4) additional federal investment to support innovation in nonfinancial performance measurement.
I'm willing to agree with their conclusion that there is no single tool or silver-bullet policy prescription. Hence I think they should be demanding, loudly and urgently, that all four approaches be tried to see what works. The paper offers more helpful support for the power of open data, as well as a good excursus on how to design law, policy, and technology in tandem to produce greater innovation and effectiveness.
Trying to catch up on reading some new reports on open government that have recently appeared. For those whose "to be read" pile is similarly taller than the "already read" pile, here's a quick take on one.
Timothy Vollmer of Creative Commons published the State of Play: Public Sector Information in the United States, an excellent report on open data in the United States as part of the European Public Sector Information Platform series on information re-use.
In it he provides a concise and accurate primer (with footnotes) on the legal and policy framework for open government data in the US. He describes the varied uses for public sector data.
For instance, some view the dissemination and re-use of PSI as a means to increase the transparency and accountability of government. Others view PSI as primarily a means for improving internal government communication and efficiency. Some view PSI as a vehicle for promoting economic activity and innovation. Others are exploring ways for PSI to be used as a means for international diplomacy and global information sharing. Some see PSI as civic capital, working to increase citizen participation in government activities.
Citing work published by the National Academies, he highlights, in particular, the economic benefits to be gained from open government data:
it promotes new types of research and avoids duplication of research, enables the development of tools that can aid in search and discovery of information, promotes transparency and validation of government funded information, maximizes the return on investment for government funded PSI, promotes interoperability between different sets of government information, and supports socioeconomic and good governance.
Vollmer ends with an uplifting Carl Malamud-ism that underscores the positive economic externalities: “Public data is the raw material of innovation, creating a wealth of business opportunities that drive our economy forward. Government information is a form of infrastructure no less important to our modern life than our roads, electrical grid, or water systems.”
We need more empirical research to draw the connection between open government and a thriving economy. I will be talking more about this when I testify before the Canadian Parliament later this week.