
Open University Moodle Optimization

Backend Engineer • Enterprise LMS modernization

Education Tech

Timeline: 2023–2024

Team context: Cross-functional delivery with backend engineers, Moodle specialists, QA, and platform operations.

The problem

Open University needed to modernize Moodle with AI-assisted learning flows, but strict privacy requirements prohibited sending student conversations to external LLM providers. At the same time, peak semester traffic regularly exposed bottlenecks in API response times, especially on query-heavy tutor interactions.

Responsibilities

  • Owned backend integration architecture between Moodle plugins and AI service endpoints.
  • Defined caching and invalidation strategy for high-frequency recommendation and tutor requests.
  • Partnered with QA and ops to design release gates, rollback plans, and observability baselines.

Architecture highlights

  • Local LLM gateway pattern so Moodle could call internal model-serving endpoints without external data exposure (see the first sketch after this list).
  • Redis cache-aside approach on repeated recommendation queries with targeted TTL and invalidation rules (see the second sketch below).
  • Structured logging and endpoint-level latency instrumentation for release-time regression detection.
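
A minimal sketch of the gateway pattern, assuming a Python/FastAPI service sitting between the Moodle plugins and an internal model server; the route, payload shape, and INTERNAL_LLM_URL are illustrative placeholders, not the production interface:

```python
import os

import httpx
from fastapi import FastAPI
from pydantic import BaseModel

# Internal model-serving endpoint (e.g. a model server on the university
# network); this URL is a placeholder, not the real deployment.
INTERNAL_LLM_URL = os.environ.get("INTERNAL_LLM_URL", "http://llm.internal:8000/generate")

app = FastAPI()


class TutorRequest(BaseModel):
    course_id: int
    prompt: str


@app.post("/v1/tutor")
async def tutor(req: TutorRequest) -> dict:
    # Forward the prompt to the internal model server only; student text
    # never leaves the network boundary for an external LLM provider.
    async with httpx.AsyncClient(timeout=30.0) as client:
        resp = await client.post(INTERNAL_LLM_URL, json={"prompt": req.prompt})
        resp.raise_for_status()
    return {"course_id": req.course_id, "answer": resp.json().get("text", "")}
```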
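
And a minimal cache-aside sketch for the recommendation path. The Redis pattern matches what the project used; the key format, the 300-second TTL, and fetch_recommendations() are hypothetical stand-ins for the real values and query:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

RECO_TTL_SECONDS = 300  # short TTL bounds how stale a cached list can get


def fetch_recommendations(user_id: int) -> list:
    # Stand-in for the expensive Moodle recommendation query.
    return []


def get_recommendations(user_id: int) -> list:
    key = f"reco:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: skip the heavy query
    recos = fetch_recommendations(user_id)  # cache miss: hit the database
    r.setex(key, RECO_TTL_SECONDS, json.dumps(recos))
    return recos


def invalidate_recommendations(user_id: int) -> None:
    # Targeted invalidation: call when the learner's state changes so the
    # next request repopulates the key from fresh data.
    r.delete(f"reco:{user_id}")
```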

Deployment outcomes

  • API and database pressure dropped on critical learner paths during traffic spikes.
  • The AI tutor feature became production-ready for real coursework flows, with stable latency.
  • Stakeholders approved broader AI experimentation thanks to the improved privacy posture and operational control.

Lessons learned

  • AI features in education must be designed as reliability work, not only model integration work.
  • Performance gains compound when the caching strategy matches the repeat patterns of real user journeys.
  • Adoption speed increases when architecture decisions are documented in risk/impact language for non-engineering stakeholders.

Key contributions and impact

Key contributions

  • Built Moodle plugin integrations with an API layer for local LLM orchestration.
  • Implemented Redis-backed cache strategy for repeated tutor and recommendation flows.
  • Introduced safer rollout patterns and monitoring for feature reliability (see the instrumentation sketch below).
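
A minimal sketch of the endpoint-level latency instrumentation, assuming the FastAPI gateway above; the middleware approach and JSON field names are illustrative, not the production schema:

```python
import json
import logging
import time

from fastapi import FastAPI, Request

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("gateway")

app = FastAPI()


@app.middleware("http")
async def log_latency(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # One structured JSON line per request; comparing per-route latency
    # before and after a release is then a simple log query.
    log.info(json.dumps({
        "route": request.url.path,
        "method": request.method,
        "status": response.status_code,
        "latency_ms": round(elapsed_ms, 1),
    }))
    return response
```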

Impact

  • Reduced database load by ~40% on heavy query paths.
  • Improved perceived response time for AI tutor interactions.
  • Enabled privacy-conscious deployment by running models on internal infrastructure.
Open to the right opportunity

Building stable backend systems with clear business impact.

If you need a backend engineer to modernize architecture, improve performance, or ship AI features safely, let's connect.

© 2026 Nguyen Van Hai. All rights reserved.

Built with SvelteKit, prioritizing performance, accessibility, and clarity.