You don't need to rebuild your entire application to benefit from AI.
## Integration Approaches
| Approach | Complexity | Cost | Best For |
|---|---|---|---|
| API-Based | Low | Pay-per-use | Quick wins, variable usage |
| Self-Hosted Models | High | Fixed infrastructure | Privacy, high volume |
| Hybrid | Medium | Mixed | Balanced needs |
## API-Based Integration (Recommended Start)
### OpenAI Integration Example
```typescript
// AI service wrapper around the OpenAI SDK
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export class AIService {
  async generateResponse(prompt: string, context?: string): Promise<string> {
    const messages = [
      {
        role: 'system' as const,
        content: context || 'You are a helpful assistant.',
      },
      {
        role: 'user' as const,
        content: prompt,
      },
    ];

    const response = await openai.chat.completions.create({
      model: 'gpt-4-turbo-preview',
      messages,
      max_tokens: 1000,
      temperature: 0.7,
    });

    return response.choices[0].message.content || '';
  }

  async analyzeText(text: string): Promise<{
    sentiment: string;
    topics: string[];
    summary: string;
  }> {
    const response = await openai.chat.completions.create({
      model: 'gpt-4-turbo-preview',
      messages: [
        {
          role: 'system',
          content: 'Analyze the text and return JSON with sentiment, topics array, and summary.',
        },
        {
          role: 'user',
          content: text,
        },
      ],
      response_format: { type: 'json_object' },
    });

    return JSON.parse(response.choices[0].message.content || '{}');
  }
}
```
### Adding AI to Existing Features
```typescript
// Example: adding AI-powered search suggestions to an existing search service
class SearchService {
  private aiService: AIService;
  private searchIndex: SearchIndex;

  async search(query: string): Promise<SearchResult[]> {
    // Traditional search first
    const results = await this.searchIndex.search(query);

    // AI enhancement: expand the query with related terms when results are sparse
    if (results.length < 5) {
      const expandedQuery = await this.aiService.generateResponse(
        `Suggest 3 related search terms for: "${query}"`,
        'Return only comma-separated terms, no explanation.'
      );
      const additionalResults = await this.searchIndex.search(expandedQuery);
      results.push(...additionalResults);
    }

    return results;
  }
}
```
## Common AI Integration Patterns
### Pattern 1: AI as Enhancement Layer
Architecture:

```
User Request --> Your App --> AI Service --> Enhanced Response
```
Use Cases:
- Smart search suggestions
- Content summarization
- Auto-categorization
Pros:
- Non-invasive to existing code
- Easy to enable/disable
- Gradual rollout possible
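Pattern 1 needs almost no code: wrap the AI call so the base behavior always survives. A minimal sketch, where all names (`withEnhancement`, `fakeSummarize`) are illustrative rather than from any library:

```typescript
// Sketch of an enhancement layer: the AI call is optional, and any failure
// falls back to the unenhanced result. All names here are illustrative.
type Article = { body: string; summary?: string };

async function withEnhancement<T>(
  base: T,
  enhance: (value: T) => Promise<T>,
  enabled: boolean
): Promise<T> {
  if (!enabled) return base; // feature flag: trivially disable the AI path
  try {
    return await enhance(base);
  } catch {
    return base; // an AI outage never breaks the core feature
  }
}

// Stand-in for a real AI summarizer
const fakeSummarize = async (a: Article): Promise<Article> => ({
  ...a,
  summary: a.body.slice(0, 20) + '...',
});

withEnhancement<Article>(
  { body: 'A long article body about AI integration.' },
  fakeSummarize,
  true
).then((r) => console.log(r.summary)); // "A long article body ..."
```

Because the flag and the try/catch live in one place, enabling, disabling, or gradually rolling out the enhancement never touches the feature's core code path.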
### Pattern 2: AI Middleware
```typescript
// Express middleware for AI processing
const aiMiddleware = async (req, res, next) => {
  // Analyze incoming content with AI before the route handler runs
  if (req.body.content) {
    req.aiAnalysis = await aiService.analyzeText(req.body.content);
  }
  next();
};

// Usage in a route
app.post('/support/tickets', aiMiddleware, async (req, res) => {
  const ticket = await createTicket({
    ...req.body,
    category: req.aiAnalysis?.topics[0] || 'general',
    priority: determinePriority(req.aiAnalysis?.sentiment),
  });
  res.json(ticket);
});
```
### Pattern 3: Background AI Processing
```typescript
// Queue-based AI processing with BullMQ
import { Queue, Worker } from 'bullmq';

const aiQueue = new Queue('ai-processing');

// Enqueue a job when content is created
async function onContentCreated(content: Content) {
  await aiQueue.add('analyze', {
    contentId: content.id,
    text: content.body,
  });
}

// Worker processes jobs in the background
const worker = new Worker('ai-processing', async (job) => {
  const { contentId, text } = job.data;
  const analysis = await aiService.analyzeText(text);

  await db.content.update({
    where: { id: contentId },
    data: {
      aiSummary: analysis.summary,
      aiTopics: analysis.topics,
      aiSentiment: analysis.sentiment,
    },
  });
});
```
## Cost Management
| Model | Input Cost | Output Cost | Best For |
|---|---|---|---|
| GPT-4 Turbo | $0.01/1K tokens | $0.03/1K tokens | Complex reasoning |
| GPT-3.5 Turbo | $0.0005/1K tokens | $0.0015/1K tokens | Simple tasks |
| Claude 3 Haiku | $0.00025/1K tokens | $0.00125/1K tokens | Fast, cheap |
| Claude 3 Sonnet | $0.003/1K tokens | $0.015/1K tokens | Balanced |
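To turn these per-token prices into a budget, multiply the per-request cost by your expected traffic. A rough estimator with the table's prices hard-coded (verify current pricing with each provider before relying on the numbers):

```typescript
// Rough monthly cost estimator using the per-1K-token prices from the table
// above. All numbers are illustrative; check current provider pricing.
interface ModelPrice {
  inputPer1K: number;  // USD per 1K input tokens
  outputPer1K: number; // USD per 1K output tokens
}

const PRICES: Record<string, ModelPrice> = {
  'gpt-4-turbo': { inputPer1K: 0.01, outputPer1K: 0.03 },
  'gpt-3.5-turbo': { inputPer1K: 0.0005, outputPer1K: 0.0015 },
  'claude-3-haiku': { inputPer1K: 0.00025, outputPer1K: 0.00125 },
};

function monthlyCost(
  model: string,
  requestsPerDay: number,
  avgInputTokens: number,
  avgOutputTokens: number
): number {
  const p = PRICES[model];
  const perRequest =
    (avgInputTokens / 1000) * p.inputPer1K +
    (avgOutputTokens / 1000) * p.outputPer1K;
  return perRequest * requestsPerDay * 30;
}

// 10,000 requests/day, 500 input + 200 output tokens each, on GPT-3.5 Turbo:
console.log(monthlyCost('gpt-3.5-turbo', 10_000, 500, 200).toFixed(2)); // "165.00"
```

Running the same workload through GPT-4 Turbo costs roughly twenty times more, which is why the model-routing strategy below matters.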
### Cost Optimization Strategies
```typescript
// Strategy 1: route simple tasks to cheaper models
const getModel = (taskComplexity: 'simple' | 'medium' | 'complex') => {
  switch (taskComplexity) {
    case 'simple': return 'gpt-3.5-turbo';
    case 'medium': return 'gpt-4-turbo-preview';
    case 'complex': return 'gpt-4';
  }
};

// Strategy 2: cache responses to identical prompts
const cache = new Map<string, { result: string; timestamp: number }>();
const CACHE_TTL = 3_600_000; // 1 hour in milliseconds

async function cachedAICall(prompt: string): Promise<string> {
  const cacheKey = hashString(prompt);
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.result;
  }

  const result = await aiService.generateResponse(prompt);
  cache.set(cacheKey, { result, timestamp: Date.now() });
  return result;
}

// Strategy 3: batch several items into one request
async function batchAnalyze(items: string[]): Promise<Analysis[]> {
  const batchPrompt = items
    .map((item, i) => `[${i}] ${item}`)
    .join('\n\n');

  const response = await aiService.generateResponse(
    `Analyze each numbered item and return JSON array:\n${batchPrompt}`
  );
  return JSON.parse(response);
}
```
## Implementation Checklist
### Pre-Integration
- [ ] Define specific AI use cases
- [ ] Estimate API costs based on usage
- [ ] Review data privacy requirements
- [ ] Choose appropriate AI providers
### Development
- [ ] Create AI service abstraction layer
- [ ] Implement error handling and fallbacks
- [ ] Add request/response logging
- [ ] Set up cost monitoring
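The "error handling and fallbacks" item deserves special attention: AI APIs are rate-limited and occasionally slow or down. One possible shape is a retry wrapper with exponential backoff that degrades to a non-AI fallback (a sketch; `flakyClassify` is a stand-in for a real AI call):

```typescript
// Hypothetical retry-with-fallback wrapper for AI calls; the function names
// are illustrative, not from a specific library.
async function withRetry<T>(
  fn: () => Promise<T>,
  fallback: () => T,
  retries = 2,
  delayMs = 100
): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch {
      if (attempt === retries) break;
      // Exponential backoff between attempts
      await new Promise((r) => setTimeout(r, delayMs * 2 ** attempt));
    }
  }
  return fallback(); // degrade gracefully instead of failing the request
}

// Usage: fall back to a keyword-based category if the AI call keeps failing
let calls = 0;
const flakyClassify = async () => {
  calls++;
  if (calls < 3) throw new Error('rate limited');
  return 'billing';
};

withRetry(flakyClassify, () => 'general').then((c) => console.log(c)); // "billing"
```

The fallback keeps the feature functional during an outage, which is exactly what the checklist item is asking you to guarantee.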
### Testing
- [ ] Test with edge cases
- [ ] Validate AI output quality
- [ ] Performance testing under load
- [ ] Cost estimation validation
### Deployment
- [ ] Gradual rollout (feature flags)
- [ ] Monitor costs in real-time
- [ ] Set up alerts for anomalies
- [ ] Document AI behavior for users
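Gradual rollout can be as simple as a deterministic percentage flag keyed on a stable user ID, so you can ramp AI traffic from a few percent upward while watching costs and error rates. A sketch (the hash function is illustrative, not production-grade):

```typescript
// Percentage rollout: deterministically bucket users into 0-99 from a stable
// ID, so the same user always gets the same decision. Illustrative only.
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return h % 100;
}

function aiEnabled(userId: string, rolloutPercent: number): boolean {
  return bucket(userId) < rolloutPercent;
}

// Start at 5%, watch cost and quality dashboards, then ramp up
console.log(aiEnabled('user-42', 100)); // true — 100% rollout enables everyone
console.log(aiEnabled('user-42', 0));   // false — 0% disables everyone
```

Because the bucket is derived from the ID rather than stored, the flag needs no database and users never flip between AI and non-AI behavior mid-session.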
## Ready to add AI to your software?
We help companies integrate AI capabilities into existing products quickly and cost-effectively.
Discuss AI Integration