OSF Backend Setup

Setting up data collection with JSPsych DataPipe and the Open Science Framework

Overview

The Open Science Framework (OSF) provides a robust, free platform for research data management. Combined with JSPsych DataPipe, it offers secure, automated data collection for O-ELIDDI studies, with research best practices such as version control, access management, and metadata handling built in.

Free & Open

Completely free for academic research, with generous storage allowances and no participant limits.

Secure

Data encrypted in transit and at rest, with granular access controls and audit trails.

Research-Ready

Built-in version control, metadata management, and collaboration tools for research teams.

Automatic

No server setup required - data flows automatically from your study to OSF storage.

Prerequisites

Before you begin, ensure you have:

Step-by-Step Setup

Step 1: Create OSF Account

If you don't already have an OSF account:

  1. Visit osf.io
  2. Click "Sign Up"
  3. Use your institutional email address if available
  4. Verify your email address
  5. Complete your profile with research information
Tip: Using an institutional email helps establish credibility and may provide access to additional OSF features.

Step 2: Create a New OSF Project

Set up a project to organize your study data:

  1. Log into OSF and click "Create new project"
  2. Enter a descriptive project title (e.g., "Daily Activity Patterns Study 2024")
  3. Add a project description including:
    • Study objectives and research questions
    • Participant demographics and recruitment
    • Data collection timeline
    • Analysis plan overview
  4. Set appropriate access permissions:
    • Public: For open science projects
    • Private: For sensitive data (can be made public later)
  5. Add relevant tags and subjects for discoverability

Step 3: Access JSPsych DataPipe

Set up automated data collection through DataPipe:

  1. Visit pipe.jspsych.org
  2. Click "Get Started"
  3. Sign in with your OSF credentials
  4. Authorize DataPipe to access your OSF account
Authorization: DataPipe needs access to create files in your OSF projects. This is secure and only allows DataPipe to upload data files to projects you specify.

Step 4: Create DataPipe Experiment

Configure DataPipe for your O-ELIDDI study:

  1. In DataPipe dashboard, click "Create Experiment"
  2. Enter experiment details:
    • Name: Match your OSF project name
    • Description: Brief description of data being collected
    • OSF Project: Select the project you created in Step 2
  3. Configure data settings:
    • Data format: CSV (default, recommended for O-ELIDDI)
    • File naming: Use default pattern
    • Storage location: Confirm correct OSF project
  4. Click "Create Experiment"
  5. Important: Copy the generated Experiment ID - you'll need this for O-ELIDDI configuration
Experiment ID Example: Your ID will look something like eR8ENvJPgQth - save this securely as it's required for data collection.

Step 5: Configure O-ELIDDI

Connect your study to DataPipe by updating the configuration:

  1. Open your O-ELIDDI repository (locally or on GitHub)
  2. Edit settings/activities.json
  3. Update the experimentID field in the general section:
{
  "general": {
    "experimentID": "YOUR_DATAPIPE_EXPERIMENT_ID",
    "app_name": "Your Study Name",
    "version": "1.0.0",
    "author": "Your Name",
    "language": "en",
    "instructions": true,
    "primary_redirect_url": "pages/thank-you.html",
    "fallbackToCSV": true
  },
  // ... rest of configuration
}
Important: Replace YOUR_DATAPIPE_EXPERIMENT_ID with the actual ID from Step 4. Keep fallbackToCSV: true for backup data collection.
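
To confirm the edit before deploying, you can load the file and check the field. The sketch below is a minimal check, assuming it is run from the repository root and that settings/activities.json parses as standard JSON; the field names match the snippet above.

# Python sketch: verify that the DataPipe experiment ID has been set
import json

with open("settings/activities.json", encoding="utf-8") as fp:
    settings = json.load(fp)

experiment_id = settings["general"]["experimentID"]
assert experiment_id != "YOUR_DATAPIPE_EXPERIMENT_ID", "experimentID is still the placeholder"
print("Configured DataPipe experiment ID:", experiment_id)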

Step 6: Test Data Collection

Verify that data flows correctly from your study to OSF (a scripted connectivity check is sketched after the list below):

  1. Deploy your updated O-ELIDDI configuration
  2. Complete a test timeline on your deployed study
  3. Submit the data and note any error messages
  4. Check your OSF project for the uploaded data file
  5. Download and examine the CSV to verify data format
Successful Test Indicators:
  • No error messages during data submission
  • Automatic redirect to thank you page
  • CSV file appears in OSF project within minutes
  • File contains expected timeline data and metadata
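
If you prefer to verify connectivity without clicking through the full study, the sketch below posts one dummy CSV row to DataPipe from Python, using the same endpoint and payload fields as the browser-console test shown later in this guide. The filename connectivity_test.csv is arbitrary, and depending on your DataPipe experiment settings (for example, data validation) the request may be rejected; check the response body either way, and remove the test file from your OSF project afterwards.

# Python sketch: post one dummy row to DataPipe to confirm connectivity
import requests

response = requests.post(
    "https://pipe.jspsych.org/api/data/",
    json={
        "experimentID": "YOUR_EXPERIMENT_ID",   # ID from Step 4
        "filename": "connectivity_test.csv",    # arbitrary test filename
        "data": "test,data\n1,2",
    },
)
print(response.status_code)
print(response.text)  # DataPipe's response body describes success or the error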

Data Management on OSF

File Organization

DataPipe stores each submission as its own file in the linked OSF project, so every completed timeline appears as a separate CSV in the project's storage.

Accessing Your Data

Via OSF Web Interface

  1. Log into OSF and navigate to your project
  2. Click "Files" tab
  3. Browse and download individual files or entire folders
  4. Use built-in preview for CSV files

Via API Access

For automated data processing, use OSF's API:

# Python example using the requests library
import requests

# List the files stored in the project's OSF Storage
project_id = "YOUR_OSF_PROJECT_ID"
url = f"https://api.osf.io/v2/nodes/{project_id}/files/osfstorage/"
response = requests.get(url)
response.raise_for_status()
file_listing = response.json()  # JSON:API document describing the stored files, including download links

# Download a specific file via its download link from the listing above
file_download_url = "DIRECT_FILE_URL_FROM_API"
data = requests.get(file_download_url).content
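
Note: Public OSF projects can be read without authentication. For private projects, create a personal access token under your OSF account settings and send it with each request in an Authorization: Bearer <token> header.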

Data Backup and Security

Privacy and Ethics Configuration

Project Visibility Settings

Visibility Options:

Ethical Considerations

Before collecting data, ensure:

Data Anonymization

O-ELIDDI can collect anonymous data by design:

Collaboration and Team Management

Adding Team Members

  1. In your OSF project, click "Contributors"
  2. Click "Add" and enter collaborator email addresses
  3. Set appropriate permission levels:
    • Administrator: Full project control
    • Read + Write: Can view and upload data
    • Read: View-only access
  4. Send invitations and manage access as needed

DataPipe Access Management

DataPipe Security: Only the OSF account holder who created the DataPipe experiment can modify its settings. Team members access data through OSF project permissions, not DataPipe directly.

Monitoring and Troubleshooting

DataPipe Dashboard

Monitor your data collection through the DataPipe interface:

Common Issues and Solutions

Data Not Appearing in OSF:
Authentication Errors:

Debugging Data Collection

Browser Console Testing

Test DataPipe connectivity directly in the browser console:

// Check the data that O-ELIDDI would submit
console.log(window.timelineManager.study);

// Test the DataPipe endpoint (replace with your experiment ID)
fetch('https://pipe.jspsych.org/api/data/', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({
        experimentID: 'YOUR_EXPERIMENT_ID',
        filename: 'test.csv',
        data: 'test,data\n1,2'
    })
})
    .then(response => response.text())
    .then(body => console.log(body));  // the response body reports success or the error reason

Data Analysis Preparation

Batch Data Download

For analysis, download all participant data at once:

  1. In OSF project, go to Files tab
  2. Select all data files (Ctrl+click or Shift+click)
  3. Click Download to get a ZIP file
  4. Extract CSV files for analysis

Data Aggregation Scripts

Python Example

import pandas as pd
import glob
import os

# Read all CSV files from the downloaded (extracted) OSF data
csv_files = glob.glob("downloaded_data/*.csv")
all_data = []

for file in csv_files:
    df = pd.read_csv(file)
    df["source_file"] = os.path.basename(file)  # record which participant file each row came from
    all_data.append(df)

# Combine all participant data into one data frame
combined_data = pd.concat(all_data, ignore_index=True)

# Save master dataset
combined_data.to_csv("master_timeline_data.csv", index=False)

Quality Control Checks

Recommended data quality checks will vary with your study design, but a few generic checks, such as completeness, missing values, and duplicate submissions, are worth scripting for every dataset.
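
The sketch below assumes the combined dataset produced by the aggregation script above, including its source_file column; adjust any column names to match your actual O-ELIDDI export.

# Python sketch: basic quality checks on the combined dataset
import pandas as pd

combined_data = pd.read_csv("master_timeline_data.csv")

# Completeness: total rows and number of participant files represented
print("Total rows:", len(combined_data))
print("Participant files:", combined_data["source_file"].nunique())

# Missing values per column
print(combined_data.isna().sum())

# Exact duplicate rows (possible double submissions)
print("Duplicate rows:", combined_data.duplicated().sum())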

Advanced Features

OSF Registrations

For enhanced research credibility, consider creating an OSF registration:

Integration with Other Tools

GitHub Integration

Link your OSF project to your GitHub repository for complete reproducibility.

ORCID Integration

Connect your ORCID profile for automatic publication tracking.

Institutional Storage

Add institutional storage accounts for additional backup.

Reference Managers

Export citations directly to Zotero, Mendeley, and other tools.

Best Practices Summary

For successful OSF + DataPipe deployment:
Remember: OSF and DataPipe provide powerful infrastructure for research data management. Take advantage of the built-in features for version control, collaboration, and reproducibility to enhance your study's impact and credibility.