devops/ness

Blog

Practical articles on AI, DevOps, Cloud, Linux, and infrastructure engineering.

Tag: #llm
Operational Checklist: AI Inference Cost Optimization
Kiril Urbonas • yesterday

Practical guidance on AI inference cost optimization for reliable, scalable platform operations.
Operational Checklist: RAG Retrieval Quality Evaluation
Kiril Urbonas • last month

Practical guidance on RAG retrieval quality evaluation for reliable, scalable platform operations.
Operational Checklist: Prompt Versioning and Regression Testing
Kiril Urbonas • last month

Practical guidance on prompt versioning and regression testing for reliable, scalable platform operations.
Operational Checklist: LLM Gateway Design for Multi-Provider Inference
Kiril Urbonas • last month

Practical guidance on LLM gateway design for multi-provider inference, for reliable, scalable platform operations.
Architecture Review: AI Inference Cost Optimization
Kiril Urbonas • 3 months ago

Practical guidance on AI inference cost optimization for reliable, scalable platform operations.
Architecture Review: RAG Retrieval Quality Evaluation
Kiril Urbonas • 4 months ago

Practical guidance on RAG retrieval quality evaluation for reliable, scalable platform operations.
Architecture Review: Prompt Versioning and Regression Testing
Kiril Urbonas • 5 months ago

Practical guidance on prompt versioning and regression testing for reliable, scalable platform operations.
Architecture Review: LLM Gateway Design for Multi-Provider Inference
Kiril Urbonas • 5 months ago

Practical guidance on LLM gateway design for multi-provider inference, for reliable, scalable platform operations.
AI Security and Safety: Protecting Your AI Applications
Kiril Urbonas • 5 months ago

Learn how to secure AI applications against prompt injection, data leakage, and adversarial attacks. Best practices for AI security in production.
Embedding Models Comparison: Choosing the Right Model for Your Use Case
Kiril Urbonas • 5 months ago

Compare popular embedding models including OpenAI, Sentence-BERT, and open-source alternatives. Learn which model fits your RAG, search, or similarity tasks.
AI Cost Optimization: Reducing LLM Inference Costs by 80%
Kiril Urbonas • 5 months ago

Learn proven strategies to reduce AI inference costs, including model quantization, caching, batching, and efficient prompt design. Real-world cost-savings examples.
Fine-tuning vs Few-Shot Learning: When to Use Each Approach
Kiril Urbonas • 5 months ago

Compare fine-tuning and few-shot learning for adapting LLMs. Learn when to use each approach and their trade-offs in cost, performance, and complexity.
Page 1 of 4 • Next