About
People
Many people have contributed to building our scripting framework and supporting its development. The project is managed by Andrew Klassen, who ensures consistency and coordinates between the different teams and researchers making contributions. Many research assistants, students, postgraduate scholars, and academics have added, or are currently adding, variables. These include Maureen Nyirenda, Joseph Mwanza, Lisa Fenner, Joaquín Concha, Pao Engelbrecht, Matthew Placek, Nathan Fioritti, Giorgio Farace, Demi Chu, Daniella Wenger, Han Isha, and Margot Mollat-Du-Jourdin. Andrew also collaborates with other academics and policy makers to prepare articles, reports, and data outputs.
Research
Access to a large merged database enables original research with unprecedented temporal and geographical coverage. We can track long-term trends more accurately across more regions of the world and investigate small sub-populations that have traditionally been difficult to study statistically. Data on this scale allows new research questions to be answered with greater validity, reliability, and generalizability. We collaborate with other researchers, join teams with complementary skill sets, take on large research projects, expand the scope of analysis, and produce ground-breaking results. Such collaboration makes it possible to deliver ambitious and impactful outputs within limited time frames.
Mission
Our mission is to provide an evidence base for building resilient and stable societies. We aim to measure important concepts such as the legitimacy of institutions, support for democracy, and the happiness of individuals. The objective of HUMAN Surveys is to enable harmonizing any variable from any nationally representative public opinion survey. Our purpose is to learn from the past and help answer important questions about the causes and consequences of events, policies, institutions, values, and behaviors. The goal is to solve the hard problem of managing a complex public opinion dataset and then partner with researchers to provide results and advice to policy makers, media, academic communities, and the public.
History
This project evolved out of work for Andrew Klassen’s 2014 PhD dissertation, which combined five survey sources and included about 121,000 respondents from 80 countries. Andrew continued expanding the resource over the next few years. The name was created during 2016, with credit going to Andrew’s future wife Madusha Weeratunga, as an acronym for Human Understanding Measured Across National Surveys. When Andrew met Roberto Foa in 2018, HUMAN Surveys combined 19 sources and over 8 million respondents from 160 countries. Over the following years, and with support from Roberto Foa, Robert Thomson, and Will Jennings, the framework has expanded to combine over 80 sources and 21 million respondents from 185 countries.
System
We harmonize public opinion surveys that are (1) nationally representative of adult populations and (2) freely available to use. Our Stata scripting framework includes all survey waves from each source, so whatever variables a research project needs can be added. The system is modular, allowing different people to add different variables from the same surveys simultaneously. Each module includes its own variable labeling and harmonization process. Central coordination of variable naming and coding avoids duplication of work and ensures that different modules merge together seamlessly. Researchers who add new variables have exclusive use of those variables until their projects and publications are completed.
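The core idea behind a harmonization module can be illustrated with a minimal sketch. This is in Python rather than the project's actual Stata framework, and the source names, question codes, and recode mappings below are entirely hypothetical: two sources ask the same question on different response scales, and each per-source mapping recodes raw answers onto one common scale before the files are merged.

```python
# Minimal, hypothetical sketch of survey harmonization: two sources measure
# "support for democracy" on different scales, and each per-source mapping
# recodes raw response codes onto a common 0-1 scale before merging.

# Hypothetical source-specific codings for the same underlying question.
RECODE = {
    "source_a": {1: 1.0, 2: 0.0},                     # 1 = support, 2 = oppose
    "source_b": {1: 1.0, 2: 0.67, 3: 0.33, 4: 0.0},   # 4-point agreement scale
}

def harmonize(source, records):
    """Map raw response codes to the common scale; unmapped codes become None."""
    mapping = RECODE[source]
    return [
        {"source": source, "support_democracy": mapping.get(r["q_dem"])}
        for r in records
    ]

# Merge harmonized records from both sources into one dataset.
merged = (
    harmonize("source_a", [{"q_dem": 1}, {"q_dem": 2}])
    + harmonize("source_b", [{"q_dem": 2}, {"q_dem": 9}])  # 9 = "don't know"
)
```

Because every module emits the same harmonized variable name and scale, merging modules built by different contributors stays mechanical, which is the property the central coordination on naming and coding is meant to guarantee.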