Benchling Integration
Automation and integration patterns with Benchling for streamlined scientific data workflows
Benchling Integration is a community skill for connecting laboratory information systems with Benchling, covering API authentication, entity management, notebook operations, registry queries, and workflow automation for life science research data management.
What Is This?
Overview
Benchling Integration provides patterns for building programmatic integrations with the Benchling platform for life science research. It covers API authentication with tenant-specific configuration, entity CRUD operations for managing sequences, proteins, and custom entities, notebook entry creation and retrieval for electronic lab notebook workflows, registry queries for searching and filtering registered entities, and event-driven automation that triggers actions based on Benchling state changes. The skill enables developers to build lab automation systems that synchronize data with Benchling as the central research platform.
Who Should Use This
This skill serves bioinformatics engineers building lab data pipelines that integrate with Benchling, research teams automating electronic notebook workflows, and developers creating custom LIMS connectors that synchronize with the Benchling registry.
Why Use It?
Problems It Solves
Manual data entry into Benchling from external instruments and analysis pipelines is slow and error-prone. Without API access, querying the Benchling registry for specific entities means navigating the web interface by hand. Lab notebook entries created manually lack consistency in structure and metadata across researchers. Automation workflows that depend on Benchling state need event monitoring to trigger downstream actions.
Core Highlights
API client handles authentication and request formatting for Benchling REST endpoints. Entity management creates, updates, and retrieves biological entities from the registry. Notebook operations generate structured lab notebook entries programmatically. Registry search provides filtered queries across entity types and custom fields.
How to Use It?
Basic Usage
```python
from dataclasses import dataclass

import httpx


@dataclass
class BenchlingConfig:
    tenant_url: str
    api_key: str


class BenchlingClient:
    def __init__(self, config: BenchlingConfig):
        self.config = config
        # Benchling API keys are sent via HTTP Basic auth with the key
        # as the username and an empty password.
        self.http = httpx.Client(
            base_url=f"{config.tenant_url}/api/v2",
            auth=(config.api_key, ""),
            timeout=30,
        )

    def list_entities(self, schema_id: str,
                      page_size: int = 50) -> list[dict]:
        resp = self.http.get(
            "/custom-entities",
            params={"schemaId": schema_id, "pageSize": page_size},
        )
        resp.raise_for_status()
        return resp.json().get("customEntities", [])

    def get_entity(self, entity_id: str) -> dict:
        resp = self.http.get(f"/custom-entities/{entity_id}")
        resp.raise_for_status()
        return resp.json()

    def create_entity(self, schema_id: str, name: str,
                      fields: dict) -> dict:
        resp = self.http.post(
            "/custom-entities",
            json={"schemaId": schema_id,
                  "name": name,
                  "fields": fields},
        )
        resp.raise_for_status()
        return resp.json()
```
Real-World Examples
```python
class NotebookManager:
    def __init__(self, client: BenchlingClient):
        self.client = client

    def create_entry(self, folder_id: str, name: str,
                     content: str) -> dict:
        resp = self.client.http.post(
            "/entries",
            json={"folderId": folder_id,
                  "name": name,
                  "initialContent": content},
        )
        resp.raise_for_status()
        return resp.json()

    def search_entries(self, query: str,
                       limit: int = 20) -> list[dict]:
        resp = self.client.http.get(
            "/entries",
            params={"name": query, "pageSize": limit},
        )
        resp.raise_for_status()
        return resp.json().get("entries", [])


class RegistrySearch:
    def __init__(self, client: BenchlingClient):
        self.client = client

    def find_by_name(self, name: str,
                     schema_id: str = "") -> list[dict]:
        params = {"name": name, "pageSize": 20}
        if schema_id:
            params["schemaId"] = schema_id
        resp = self.client.http.get("/custom-entities", params=params)
        resp.raise_for_status()
        return resp.json().get("customEntities", [])
```
Advanced Tips
Use pagination tokens to retrieve complete result sets when querying registries with more entities than the page size limit. Implement idempotent entity creation by checking for existing entries before creating duplicates. Subscribe to Benchling webhook events to trigger automated analysis pipelines when new data is registered.
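The pagination tip above can be sketched as a generator that follows Benchling's `nextToken` cursor, which list endpoints return alongside each page of results (a sketch assuming a client shaped like the one in Basic Usage, with an `http` attribute exposing `get`):

```python
from typing import Iterator


def iter_entities(client, schema_id: str,
                  page_size: int = 50) -> Iterator[dict]:
    """Yield every registered entity, following pagination tokens."""
    params = {"schemaId": schema_id, "pageSize": page_size}
    while True:
        resp = client.http.get("/custom-entities", params=params)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("customEntities", [])
        token = data.get("nextToken")
        if not token:
            break  # empty or missing token means the last page
        params["nextToken"] = token
```

Because it is a generator, callers can stop early (for example, after the first match) without fetching the remaining pages.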
When to Use It?
Use Cases
Build a data import pipeline that registers experimental results from instruments directly into the Benchling registry. Create a notebook template system that generates standardized entries for recurring experiment types. Implement a search tool that queries Benchling entities across schemas for cross-project data discovery.
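The notebook template use case above can be sketched as a small formatter whose output feeds `NotebookManager.create_entry` (a sketch; the section names and metadata layout are illustrative assumptions, not a Benchling template format):

```python
from datetime import date


def render_entry(experiment_type: str, researcher: str,
                 sections: list[str]) -> str:
    """Render a standardized entry body for a recurring experiment type."""
    header = (f"# {experiment_type}\n"
              f"Date: {date.today().isoformat()}\n"
              f"Researcher: {researcher}\n")
    body = "\n".join(f"## {section}\n(to be completed)"
                     for section in sections)
    return header + "\n" + body
```

Generating the body this way keeps structure and metadata consistent across researchers, which is the gap the Problems It Solves section calls out.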
Related Topics
Laboratory information management systems, electronic lab notebooks, life science data management, REST API integration, and research workflow automation.
Important Notes
Requirements
A Benchling tenant with API access enabled and an API key. Python HTTP client for REST API requests. Knowledge of the Benchling schema configuration for the target tenant.
Usage Recommendations
Do: validate entity data against the schema before creating entries to catch field type errors early. Use pagination for all list endpoints to handle registries with large entity counts. Store API keys securely and never embed them in client-side code.
Don't: create entities without checking for duplicates, which pollutes the registry; ignore the per-tenant API rate limits that Benchling enforces; or modify production notebook entries without version tracking that preserves the audit trail.
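The duplicate check recommended above can be sketched as a get-or-create helper (a sketch assuming a client shaped like the one in Basic Usage; the name filter mirrors the registry search example):

```python
def get_or_create_entity(client, schema_id: str, name: str,
                         fields: dict) -> dict:
    """Return an existing entity with this name, or create a new one."""
    resp = client.http.get(
        "/custom-entities",
        params={"schemaId": schema_id, "name": name, "pageSize": 1},
    )
    resp.raise_for_status()
    existing = resp.json().get("customEntities", [])
    if existing:
        return existing[0]  # already registered; avoid a duplicate
    return client.create_entity(schema_id, name, fields)
```

Note this check-then-create pattern is not atomic; two concurrent imports can still race, so pipelines that run in parallel should also deduplicate on a unique field downstream.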
Limitations
Benchling API access and features vary by subscription tier. Custom entity schemas must be configured through the Benchling web interface before API operations can reference them. Webhook event delivery is not guaranteed and requires retry logic for reliable automation.
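The retry logic the webhook limitation calls for can be sketched as a generic exponential-backoff wrapper around any event-triggered action (a sketch; the attempt count and delays are illustrative assumptions, and real handlers should also deduplicate by event ID since deliveries may repeat):

```python
import time


def with_retries(action, attempts: int = 4, base_delay: float = 1.0):
    """Run an action, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return action()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2 ** attempt)
```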