
Yale Library

Research Data Management: Organize & Document Data

Research data is loosely defined as information collected, observed, or created for purposes of analysis to produce original research. This guide provides resources for managing your research data no matter the discipline.

What is Data Organization?

Data organization is how you structure the storage of your data. This can involve your filing strategy (folder / directory structure) as well as version control and management. How you organize (and later find) your data can have significant impacts on research efficiency and collaboration, and often has downstream effects on data documentation, storage, sharing, and preservation.
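For example, a consistent folder structure can be scaffolded with a short script at the start of each project. The directory names below are illustrative, not a standard; adapt them to your lab's conventions:

```python
from pathlib import Path

# Illustrative project layout; the names are a common convention, not a requirement.
SUBDIRS = [
    "data/raw",        # original, untouched data files
    "data/processed",  # cleaned or transformed data
    "docs",            # data dictionaries, README files
    "scripts",         # analysis and processing code
    "results",         # figures, tables, outputs
]

def scaffold_project(root: str) -> list[Path]:
    """Create the standard folder structure under `root`."""
    created = []
    for sub in SUBDIRS:
        p = Path(root) / sub
        p.mkdir(parents=True, exist_ok=True)
        created.append(p)
    return created

paths = scaffold_project("my_project")
print(len(paths))  # 5
```

Starting every project from the same layout makes it easier for collaborators to find raw data, code, and outputs without asking.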

Using LabArchives

LabArchives is a cloud-based electronic lab notebook (ELN), licensed by Yale and free for those with a Yale NetID to use.


Key features:

  • Store and organize your research data online (up to 1TB per notebook).
  • Share notebooks across teams, even with external colleagues.
  • Create standard notebook formats and templates for your lab or research group.
  • Integrate your notebook with other services, such as Microsoft and Google products, Canvas, and many more.
  • Access all revisions of notebook entries.

How to get started:

Log in with your NetID.

Learn more at the LabArchives Help Pages. For Yale-specific assistance with LabArchives, email labarchives@yale.edu.


What is Data Documentation?

Data documentation is how you describe your data, first for yourself and your research team, and later, more formally, for a broader community. Data documentation can be as simple as a text document, or it can involve many interwoven applications and systems. Common data documentation methods include data dictionaries, lab notebooks, and qualitative codebooks. Data documentation also often involves using standardized naming and formatting conventions as well as data and metadata standards and ontologies.

Data documentation should capture the following elements:

  • How data were created or obtained (e.g., the methods, instruments, units of measurement, and software used)
  • When, where, why, and by whom data were created
  • What data variables mean
  • How data are organized and where they are stored
  • Whether and how data have been transformed or altered
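The variable-level elements above are often recorded in a simple data dictionary. A minimal sketch, assuming a tabular dataset (the column set and variable names here are illustrative, not a formal standard):

```python
import csv
import io

# Hypothetical data dictionary entries: one row per variable in the dataset.
entries = [
    {"variable": "temp_c", "label": "Water temperature", "unit": "degrees Celsius",
     "type": "float", "source": "field probe readings, 2024 season"},
    {"variable": "site_id", "label": "Sampling site identifier", "unit": "",
     "type": "string", "source": "assigned by field team"},
]

# Write the dictionary as CSV so it can live alongside the data files.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(entries[0]))
writer.writeheader()
writer.writerows(entries)
print(buf.getvalue())
```

Even a two-column version (variable name and plain-language description) is far better than undocumented column headers.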

Data documentation is closely related to data organization, as data organization structures are often recorded in data documentation.

Describing data

Sharing data (with others, or within a lab over time) is impossible without proper data documentation. "Metadata" is data about data: structured information that describes content and makes it easier to find or use. A metadata record can be embedded in the data or stored separately, and any data file in any format can have metadata fields. In the social sciences, this record is called a "codebook" or "data dictionary."

There are many metadata standards, and which one is right for your data will depend on the type, scale, and discipline of your research project. For examples of metadata standards, see the Research Data Alliance Metadata Directory.


If your field doesn't have a metadata standard, or if you just need a simpler system to keep track of data within your own lab, consider that most standards address three main types of metadata:

  • descriptive: describes the resource for identification and discovery
  • structural: describes how objects are related or put together
  • administrative: records creation date, file type, and rights management
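An in-lab metadata record can group its fields by these three types. A hedged sketch (the field names and values are hypothetical, not drawn from any particular standard):

```python
import json

# Hypothetical metadata record for a small dataset, grouped by metadata type.
record = {
    "descriptive": {
        "title": "Stream temperature survey, 2024",
        "creator": "Example Lab",
        "keywords": ["hydrology", "temperature"],
    },
    "structural": {
        "files": ["site_a.csv", "site_b.csv"],
        "relationship": "one file per sampling site; shared column schema",
    },
    "administrative": {
        "created": "2024-06-01",
        "file_type": "text/csv",
        "rights": "CC-BY-4.0",
    },
}

# Stored as a sidecar file (e.g., metadata.json) next to the data it describes.
print(json.dumps(record, indent=2))
```

A plain-text sidecar file like this is easy to read without special software and easy to migrate into a formal standard later.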

Also consider this advice from the UK Data Archive [pdf]:

Good data documentation includes information on:

  • the context of data collection: project history, aim, objectives and hypotheses
  • data collection methods: sampling, data collection process, instruments used, hardware and software used, scale and resolution, temporal and geographic coverage and secondary data sources used
  • dataset structure: data files, study cases, and relationships between files
  • data validation, checking, proofing, cleaning and quality assurance procedures carried out
  • changes made to data over time since their original creation and identification of different versions of data files
  • information on access and use conditions or data confidentiality

At the data level, documentation may include:

  • names, labels and descriptions for variables, records and their values
  • explanation or definition of codes and classification schemes used
  • definitions of specialist terminology or acronyms used
  • codes of, and reasons for, missing values
  • derived data created after collection, with code, algorithm or command file
  • weighting and grossing variables created
  • data listing of annotations for cases, individuals or items
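Value labels and missing-value codes from a list like this can be recorded in machine-readable form so analyses apply them consistently. A minimal sketch, with hypothetical codes for a single survey variable:

```python
# Hypothetical documented coding schemes for one survey variable.
VALUE_LABELS = {"1": "agree", "2": "neutral", "3": "disagree"}
MISSING_CODES = {"-9": "refused", "-8": "not asked"}

def decode(raw: str) -> str:
    """Translate a coded response using the documented schemes above."""
    if raw in MISSING_CODES:
        return f"missing ({MISSING_CODES[raw]})"
    return VALUE_LABELS.get(raw, "undocumented code")

print(decode("2"))   # neutral
print(decode("-9"))  # missing (refused)
```

Keeping codes in one documented place (rather than scattered through analysis scripts) means a later reader can tell a genuine value of -9 from a missing-data flag.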