Jenna Shelton, MPP Staff Writer, Brief Policy Perspectives
The following is an op-ed and does not necessarily reflect the views of Policy Perspectives or the Trachtenberg School.
Leveraging science and data in policymaking has extraordinary potential for improving outcomes across sectors, disciplines, and populations. In many cases, better evidence leads to more informed analysis and decision-making. However, even the best and most reliable data can be misused. To ensure culturally sensitive and socially responsible policymaking, policymakers and researchers should consider the following questions when they plan to generate or apply data for policy purposes.
How were these data misused in the past?
While data are useful for making policy decisions, researchers and scientists involved in data collection should consider the purpose of collecting and using data and be mindful of historical abuses. For example, beginning in the 1930s, the Home Owners' Loan Corporation (HOLC) used demographic data on race to classify neighborhoods by perceived lending risk, a practice known as "redlining." The HOLC maps had lasting impacts on credit access for Black Americans, influencing long-term urban development and geographic segregation.
Although researchers are often seen as impartial parties, they are still subject to implicit biases. If we are unable to acknowledge and confront our own biases, they can have long-lasting impacts on social equity.
How will using these data impact vulnerable communities or groups?
HOLC's redlining is not the only dubious use of data by a federal agency. In 2017, the Federal Emergency Management Agency (FEMA) improperly released the personal information of 2.3 million survivors of hurricanes Harvey, Irma, and Maria, as well as survivors of the 2017 California wildfires. FEMA shared individual information, including addresses and personal banking data, with a contractor placing disaster survivors in hotels, exposing the survivors to possible identity theft. In this case, FEMA failed to appropriately oversee secure data access and safeguard the sensitive data of a vulnerable group.
Are these data secure and confidential?
Government agencies like FEMA have a responsibility to act in the public interest. Not only does wrongful disclosure of data contradict the government's mission to effectively and efficiently serve the American public; it also erodes public trust and discourages participation in data collection efforts. Researchers in both the private and public sectors should consider how disclosing individual-level data could impact different communities and respond with deliberate, thoughtful solutions that take context into account.
Some agencies, such as the U.S. Census Bureau, have made notable efforts to safeguard data by improving confidentiality-protecting technology. However, mismanagement of personal data can degrade public trust. Unethical disclosure of data could cause irreparable damage to already marginalized communities and to their trust in the federal government. Researchers must always consider how they will uphold data security and confidentiality.
Were these data generated using ethical guidelines?
There are multiple examples of unethical science and data collection that have shaped the scientific community: the Willowbrook Experiments, the Tuskegee Syphilis Study, the Stanford Prison Experiment, and the more recent example of unauthorized gene editing, to name a few. Each of these studies prioritized scientific discovery at the expense of beneficence, justice, autonomy, and informed consent. To safeguard the scientific ethics specified in the Belmont Report, the scientific community instituted Institutional Review Boards (IRBs), panels meant to provide oversight and enforce ethical standards in human experiments. Scientists and researchers have an obligation to acknowledge and correct their own biases and assumptions when conducting experiments and collecting data, especially when that data informs broader policy debates.
The American Evaluation Association (AEA) developed guiding ethical principles that can be useful in policy contexts beyond evaluation. The guidelines outline themes related to data-based inquiry, competence and professionalism, integrity and honesty, respect for people, cultural responsiveness, and responsibilities for public welfare.
Data must be used to inform policymaking; failing to do so would be unethical in and of itself. Yet for evidence-based policymaking to be effective, it is essential to acknowledge and combat the weaponization of data for nefarious purposes. The scientific community has made commendable progress in recognizing ethical concerns, beginning with the creation of the Belmont Report. However, scientists and researchers still have a pressing obligation to acknowledge internal biases, recognize the history of unethical data use in policymaking, and confront institutional bias, unethical experimentation, and poor data practices in our work.