SecureChatAI is a service-oriented REDCap External Module that provides a unified, policy-controlled gateway to Stanford-approved AI models.
It acts as the foundational AI runtime layer for the REDCap AI ecosystem, enabling chatbots, RAG pipelines, background jobs, and agentic workflows to access multiple LLM providers through a single, auditable interface.
Requires a VPN connection to SOM / SHC.
SecureChatAI is:
- A model-agnostic AI service layer
- A centralized policy and logging boundary
- A runtime for both single-shot and agentic LLM calls
- A secure bridge between REDCap projects and Stanford AI endpoints

SecureChatAI is not:
- A chatbot UI
- A RAG engine
- A workflow engine
- Model-specific business logic

Those responsibilities live in other EMs (e.g., Chatbot EM, REDCap RAG EM).
SecureChatAI is intentionally designed as a shared dependency:
- Chatbot EM (Cappy) → uses SecureChatAI for all LLM calls and optional agent routing
- REDCap RAG EM → uses SecureChatAI for embeddings and downstream generation
- Backend services / cron jobs → use SecureChatAI via the REDCap EM API endpoint
This separation ensures:
- One place to manage credentials
- One place to enforce policy
- One place to log and audit AI usage
Key features:
- Unified model interface: call GPT, Gemini, Claude, Llama, DeepSeek, Whisper, etc. via one method.
- Model-aware parameter filtering: only valid parameters are sent to each model (sketched below).
- Normalized responses: all models return a consistent structure.
- Centralized logging: requests, responses, errors, and token usage are logged.
- Optional agentic workflows: controlled, project-scoped tool invocation with strict limits.
- REDCap EM API support: secure external access without exposing raw model keys.
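As an illustration of model-aware parameter filtering, here is a minimal sketch; the whitelist contents and the function name are hypothetical, not the module's actual implementation:

```php
// Hypothetical sketch of model-aware parameter filtering.
// Whitelists and the function name are illustrative only.
function filterParamsForModel(string $model, array $params): array
{
    $allowed = [
        'gpt-4o'  => ['messages', 'temperature', 'max_tokens', 'top_p'],
        'o3-mini' => ['messages', 'max_completion_tokens'], // reasoning models often reject sampling params
        'ada-002' => ['input'],
    ];
    $whitelist = $allowed[$model] ?? array_keys($params);
    // Drop anything the target model does not accept.
    return array_intersect_key($params, array_flip($whitelist));
}
```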
Supported models:
- Chat / completion: gpt-4o, gpt-4.1, o1, o3-mini, claude, gemini-2.0-flash, gemini-2.5-pro, llama-3.3-70b, llama-Maverick, deepseek
- Embeddings: ada-002
- Audio: whisper, gpt-4o-tts
Standard request flow:
- The caller (EM, UI, or API) prepares messages and parameters.
- SecureChatAI:
  - Applies defaults
  - Filters unsupported parameters
  - Selects the correct model adapter
- The model request is executed via a Stanford-approved endpoint.
- The response is normalized into a common format (see the sketch after this list).
- Usage and metadata are logged for audit and monitoring.
- Normalized response is returned to the caller.
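To make the normalization step concrete, here is a minimal sketch that maps an OpenAI-style chat-completions payload onto the common shape shown later in this document; the function name is hypothetical:

```php
// Hypothetical sketch of response normalization for one provider format.
// The raw field names follow the OpenAI chat-completions schema; the
// output matches the normalized structure shown in the usage example below.
function normalizeOpenAiResponse(string $model, array $raw): array
{
    $message = $raw['choices'][0]['message'] ?? [];
    return [
        'content' => $message['content'] ?? '',
        'role'    => $message['role'] ?? 'assistant',
        'model'   => $model,
        'usage'   => [
            'prompt_tokens'     => $raw['usage']['prompt_tokens'] ?? 0,
            'completion_tokens' => $raw['usage']['completion_tokens'] ?? 0,
            'total_tokens'      => $raw['usage']['total_tokens'] ?? 0,
        ],
    ];
}
```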
When agent mode is explicitly enabled:
- Caller sets agent_mode = true
- SecureChatAI injects:
  - A router system prompt
  - A project-scoped tool catalog
- The model may:
  - Ask for clarification
  - Call a registered tool
  - Produce a final answer
- Tool calls are:
  - Strictly validated
  - Project-scoped
  - Step-limited
- Tool results are injected back as system context
- The loop exits with a final response or error (sketched below)
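A minimal sketch of a step-limited loop of this shape, with all names hypothetical (the module's actual loop is internal):

```php
// Hypothetical sketch of a step-limited agent loop. The $callModel and
// $runTool callables stand in for the module's internal routing.
function agentLoop(array $messages, callable $callModel, callable $runTool, int $maxSteps = 5): array
{
    for ($step = 0; $step < $maxSteps; $step++) {
        $reply = $callModel($messages);

        // Final answer: exit the loop.
        if (empty($reply['tool_call'])) {
            return $reply;
        }

        // Tool call: validate and execute against the project-scoped
        // registry, then inject the result back as system context.
        $result = $runTool($reply['tool_call']);
        $messages[] = ['role' => 'system', 'content' => json_encode($result)];
    }
    return ['error' => 'Step limit reached without a final response'];
}
```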
Agent mode is:
- Opt-in
- Globally toggleable
- Disabled by default
Example usage:

```php
$em = \ExternalModules\ExternalModules::getModuleInstance("secure_chat_ai");

$params = [
    'messages' => [
        ['role' => 'user', 'content' => 'Hello from SecureChatAI']
    ],
    'temperature' => 0.7,
    'max_tokens' => 512
];

$response = $em->callAI("gpt-4o", $params, $project_id);
```

A normalized response looks like:

```php
[
    'content' => 'Model response text',
    'role' => 'assistant',
    'model' => 'gpt-4o',
    'usage' => [
        'prompt_tokens' => 42,
        'completion_tokens' => 128,
        'total_tokens' => 170
    ]
]
```

Embeddings return a numeric vector array.
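For embeddings, the call shape is the same; this is a hedged sketch that assumes the embedding text is passed as an 'input' parameter, which is not confirmed by this document:

```php
// Hypothetical embeddings call; the 'input' parameter name is an assumption.
$em = \ExternalModules\ExternalModules::getModuleInstance("secure_chat_ai");

$vector = $em->callAI("ada-002", [
    'input' => 'Text to embed for a RAG pipeline'
], $project_id);
// $vector is a numeric vector array, e.g. [0.0123, -0.0456, ...]
```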
callAI() is the primary entry point for all model calls. It:
- Handles retries
- Applies model-specific parameter filtering
- Routes to agent mode if requested

Helper methods are also provided that:
- Return plain text from a normalized response
- Return token usage metadata
- Return model-level metadata (ID, model name, usage)
- Fetch logged interactions for admin inspection
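Illustrative usage of those helpers; the method names below are hypothetical placeholders for whatever the module actually defines:

```php
// Hypothetical helper names, shown only to illustrate the call pattern.
$response = $em->callAI("gpt-4o", $params, $project_id);

$text  = $em->extractResponseText($response); // plain text from the normalized response
$usage = $em->extractUsageTokens($response);  // token usage metadata
```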
Tools are defined via system settings and are:
- Project-scoped
- Explicitly registered
- Argument-validated
- Executed via:
  - Module API calls, or
  - REDCap API calls
SecureChatAI does not allow arbitrary or ad-hoc tool execution.
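A minimal sketch of what registration and argument validation could look like; the registry shape and names are hypothetical, since actual tool definitions live in system settings:

```php
// Hypothetical tool registry shape and validation; illustrative only.
$toolRegistry = [
    'get_record_count' => [
        'project_id' => 1234,                  // project-scoped
        'args'       => ['form' => 'string'],  // expected arguments and types
    ],
];

function validateToolCall(array $registry, string $tool, array $args, int $projectId): bool
{
    if (!isset($registry[$tool])) {
        return false; // not explicitly registered
    }
    if ($registry[$tool]['project_id'] !== $projectId) {
        return false; // wrong project scope
    }
    foreach (array_keys($args) as $name) {
        if (!isset($registry[$tool]['args'][$name])) {
            return false; // unexpected argument
        }
    }
    return true;
}
```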
SecureChatAI exposes a REDCap External Module API endpoint for backend services.
Supported action: callAI

Example:

```bash
curl -X POST "https://redcap.stanford.edu/api/" \
  -F "token=YOUR_API_TOKEN" \
  -F "content=externalModule" \
  -F "prefix=secure_chat_ai" \
  -F "action=callAI" \
  -F "prompt=Summarize this RAG pipeline" \
  -F "model=deepseek" \
  -F "format=json"
```

Typical consumers:
- RAG ingestion pipelines
- Scheduled summarization jobs
- Backend AI services running outside REDCap
- Cloud Run / App Engine workers
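The same request from a PHP backend worker, mirroring the curl example above (the token is a placeholder):

```php
// Sketch of calling the EM API endpoint from a backend service,
// using the same fields as the curl example above.
$ch = curl_init("https://redcap.stanford.edu/api/");
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => [
        'token'   => 'YOUR_API_TOKEN', // placeholder
        'content' => 'externalModule',
        'prefix'  => 'secure_chat_ai',
        'action'  => 'callAI',
        'prompt'  => 'Summarize this RAG pipeline',
        'model'   => 'deepseek',
        'format'  => 'json',
    ],
]);
$result = json_decode(curl_exec($ch), true);
curl_close($ch);
```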
Configured entirely via System Settings:
- Model registry (API endpoints, tokens, aliases)
- Default model selection
- Parameter defaults
- Agent mode controls
- Tool registry
- Logging and debug flags
No code changes are required to add or modify models.
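Conceptually, a registry entry bundles the pieces listed above; the shape below is a hypothetical illustration, not the module's actual settings schema:

```php
// Hypothetical shape of a single model registry entry; illustrative only.
$modelRegistryEntry = [
    'alias'    => 'gpt-4o',
    'endpoint' => 'https://example.invalid/v1/chat/completions', // placeholder URL
    'token'    => '***',  // stored credential, never exposed to callers
    'defaults' => ['temperature' => 0.7, 'max_tokens' => 512],
];
```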
Security posture:
- Requires REDCap authentication or an API token
- Project-scoped access enforced
- All interactions are logged
- No PHI is introduced unless present in input
- Agent execution is constrained and auditable
SecureChatAI is the foundation layer for AI inside REDCap:
- One gateway
- Many models
- Consistent behavior
- Controlled agentic expansion
Other EMs build on top of it, not alongside it.