OpenAI Integration
Integrate the Memory Scope API with OpenAI to create personalized AI assistants that remember user preferences, constraints, and communication style.
Installation
Install both the Memory Scope SDK and OpenAI SDK:
# Install both SDKs
pip install memory-scope openai

Quick Start
Create a personalized AI assistant that uses stored memories to provide contextual responses:
from memory_scope import MemoryScopeClient
from memory_scope.exceptions import PolicyDeniedError
import openai
# Initialize clients
memory_client = MemoryScopeClient(api_key="your-memory-api-key")
openai_client = openai.OpenAI(api_key="your-openai-key")
def get_personalized_response(user_id: str, user_message: str) -> str:
    """Generate a personalized AI response using stored memories"""
    # Read user preferences
    prefs_context = ""
    try:
        preferences = memory_client.read_memory(
            user_id=user_id,
            scope="preferences",
            domain=None,
            purpose="generate personalized AI response"
        )
        prefs_context = f"User Preferences: {preferences.summary_struct}"
    except PolicyDeniedError:
        prefs_context = "No preferences available"

    # Read communication preferences
    comm_context = ""
    try:
        communication = memory_client.read_memory(
            user_id=user_id,
            scope="communication",
            domain=None,
            purpose="generate personalized AI response"
        )
        comm_context = f"Communication Style: {communication.summary_struct}"
    except PolicyDeniedError:
        comm_context = "Use default communication style"

    # Build system prompt with context
    system_prompt = f"""You are a helpful assistant.
{prefs_context}
{comm_context}
Adapt your responses to match the user's preferences and communication style."""

    # Generate response
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content

# Use it
response = get_personalized_response("user123", "What should I have for lunch?")
print(response)

Advanced Usage
Using Constraints
Incorporate user constraints to ensure the AI respects boundaries:
# Read constraints
try:
    constraints = memory_client.read_memory(
        user_id=user_id,
        scope="constraints",
        domain="dietary",
        purpose="apply dietary restrictions to recommendations"
    )
    dietary_rules = constraints.summary_struct.get("rules", [])
    constraints_context = f"Dietary Restrictions: {dietary_rules}"
except PolicyDeniedError:
    constraints_context = "No dietary restrictions"

# Include in system prompt
system_prompt = f"""You are a helpful assistant.
{constraints_context}
Always respect these constraints when making recommendations."""
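The constraints context slots into the same prompt-assembly step as the preference and communication contexts. One way to keep that step testable is to pull it into a small helper; the function below is an illustrative sketch, not part of the Memory Scope SDK:

```python
def build_system_prompt(*context_blocks: str) -> str:
    """Join whatever memory contexts are available into one system prompt,
    skipping any blocks that came back empty."""
    parts = ["You are a helpful assistant."]
    parts.extend(block for block in context_blocks if block)
    parts.append("Always respect these constraints when making recommendations.")
    return "\n".join(parts)

# Example: combine constraint and preference contexts before calling OpenAI
system_prompt = build_system_prompt(
    "Dietary Restrictions: ['vegetarian', 'no nuts']",
    "User Preferences: {'cuisine': 'thai'}",
)
```

The returned string drops straight into the `{"role": "system", ...}` message shown in the Quick Start.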
Storing Conversation Context
Store important information from conversations as memories:
def store_conversation_memory(user_id: str, conversation_summary: dict):
    """Store important information from a conversation"""
    # Store preferences mentioned in conversation
    if "preferences" in conversation_summary:
        memory_client.create_memory(
            user_id=user_id,
            scope="preferences",
            domain=conversation_summary.get("domain", None),
            source="conversation",
            value_json=conversation_summary["preferences"]
        )
    # Store constraints mentioned
    if "constraints" in conversation_summary:
        memory_client.create_memory(
            user_id=user_id,
            scope="constraints",
            domain=conversation_summary.get("domain", None),
            source="conversation",
            value_json=conversation_summary["constraints"]
        )

Best Practices
- Always handle PolicyDeniedError gracefully; a denial is an expected policy outcome, not an error in your code
- Use clear, descriptive purpose strings when reading memories
- Store important information from conversations as memories
- Respect user constraints when generating responses
- Cache memory reads when appropriate to reduce API calls
- Provide users with a way to revoke access to their data
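The caching practice above can be sketched as a small in-process TTL cache. The helper name, cache shape, and TTL value here are illustrative choices, not part of the Memory Scope SDK:

```python
import time

_CACHE: dict = {}        # (user_id, scope, domain) -> (timestamp, value)
CACHE_TTL_SECONDS = 60   # illustrative; tune for your freshness requirements

def cached_read(client, user_id, scope, domain, purpose):
    """Read a memory through a small TTL cache to reduce API calls.

    Note: only successful reads are cached, so a PolicyDeniedError is
    re-checked on every call rather than remembered.
    """
    key = (user_id, scope, domain)
    hit = _CACHE.get(key)
    if hit is not None and time.time() - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]
    value = client.read_memory(user_id=user_id, scope=scope,
                               domain=domain, purpose=purpose)
    _CACHE[key] = (time.time(), value)
    return value
```

Keep the TTL short for scopes that change often (e.g. constraints), since a stale cached value defeats the point of reading memories at request time.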
Model Compatibility
This integration works with all OpenAI chat models (gpt-4, gpt-3.5-turbo, etc.). The system prompt approach ensures memories are properly incorporated into the conversation context.
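Because only the messages payload matters, switching models is a one-line change. A small sketch that factors out the shared payload (the helper name is illustrative):

```python
def chat_messages(system_prompt: str, user_message: str) -> list:
    """Build the messages payload shared by all OpenAI chat models."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# The same payload works regardless of model choice, e.g.:
# openai_client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=chat_messages(system_prompt, user_message),
# )
```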
Related Resources