I develop data-driven solutions for various industries using modern tools and best practices.

I provide actionable insights from industrial datasets. I am an expert in analytics, machine learning, statistics, and mathematics. My specialties include time-series analysis, predictive analytics, clustering, regression algorithms, and data visualization. I develop end-to-end artificial intelligence projects for on-premises and cloud platforms.

About

My name is Serdar Gündoğdu. I am a data scientist with a strong background in statistics, mathematics, and mechanical and industrial engineering. I enjoy identifying patterns in industrial systems and using them to inform data-driven decisions. I also enjoy developing end-to-end solutions that provide these insights.

Currently, I work on power plant time-series analysis, decision tree modeling, and performance and condition monitoring, along with the application development that supports them. To become a well-rounded problem solver, I continuously expand my skillset to include natural language processing (NLP), computer vision, robotic process automation (RPA), REST APIs, microservices, Docker, Kubernetes, domain-driven design (DDD), and modern front-end development.

  • Data Science
  • Machine Learning
  • Time Series
  • Clustering Algorithms
  • Clean Code
  • Lean Six Sigma
  • Python
  • NumPy
  • Pandas
  • Matplotlib
  • Seaborn
  • Scikit-Learn
  • TensorFlow
  • PyTorch
  • REST API
  • OOP
  • SQL
  • Java
  • C#
  • C

Projects GitHub Repo

Stage 1 · Frontend Foundations

Descriptive UI Skills

Solidify semantic HTML, accessible layouts, and client-side interactivity tailored to plant operators.

Stage 2 · Core Backend & Data

APIs & CRUD Foundations

Build FastAPI and Flask services that expose clean contracts for assets, KPIs, and operational logs.

  • 6. Power Plants Asset REST API: CRUD with validation, filtering, and OpenAPI docs.
  • 7. Power Plants KPI Calculator API: Encapsulate efficiency and availability formulas.
  • 8. Daily Meeting Logbook: Authenticated Flask app with uploads and search.
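The asset CRUD contract behind project 6 can be sketched without a web framework. This is a minimal in-memory service layer, assuming illustrative names (`Asset`, `AssetRepository`, and the `asset_id`/`name`/`unit`/`status` fields are not the project's actual schema); a FastAPI router would wrap these calls, with the `unit` filter mapping to a query parameter:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    asset_id: str
    name: str
    unit: str               # e.g. "Unit 1"
    status: str = "in_service"

class AssetRepository:
    """In-memory store standing in for the database behind the REST API."""

    def __init__(self) -> None:
        self._assets: dict[str, Asset] = {}

    def create(self, asset: Asset) -> Asset:
        # Validation: reject duplicate IDs instead of silently overwriting.
        if asset.asset_id in self._assets:
            raise ValueError(f"duplicate asset_id: {asset.asset_id}")
        self._assets[asset.asset_id] = asset
        return asset

    def get(self, asset_id: str) -> Optional[Asset]:
        return self._assets.get(asset_id)

    def list(self, unit: Optional[str] = None) -> list[Asset]:
        # Filtering mirrors a ?unit= query parameter on the list endpoint.
        return [a for a in self._assets.values() if unit is None or a.unit == unit]

    def delete(self, asset_id: str) -> bool:
        return self._assets.pop(asset_id, None) is not None
```

Keeping validation and filtering in a plain class like this makes the logic testable before any route or OpenAPI schema exists.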

Stage 3 · Monitoring & Visualization

Dashboards & Telemetry

Translate raw telemetry into diagnostic visuals with Dash and high-throughput ingest endpoints.

  • 9. Condition Monitoring Trends: Multi-series plots with annotations and exports.
  • 10. IoT Ingest Endpoint: Batched telemetry POST with idempotency keys.
  • 11. Auth & Permissions: FastAPI RBAC with JWT scopes for plant roles.
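The idempotency-key idea in project 10 can be shown framework-free. `TelemetryIngest` and its response shape are assumptions standing in for a FastAPI POST handler: a retried batch with the same key is acknowledged but not stored twice:

```python
class TelemetryIngest:
    """Accepts batched telemetry; an idempotency key makes client retries safe."""

    def __init__(self) -> None:
        self._seen_keys: set[str] = set()
        self.points: list[dict] = []

    def ingest(self, idempotency_key: str, batch: list[dict]) -> dict:
        if idempotency_key in self._seen_keys:
            # The batch was already applied; acknowledge without re-storing.
            return {"status": "duplicate", "accepted": 0}
        self._seen_keys.add(idempotency_key)
        self.points.extend(batch)
        return {"status": "ok", "accepted": len(batch)}
```

In a real service the seen-key set would live in a shared store with a TTL, so duplicates are caught across workers and restarts.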

Stage 4 · Realtime & Reliability

Operational Awareness

Deliver live status updates, alarm rationalization, and resilient services that withstand connection drops.

  • 12. Real-Time Unit Status: WebSocket feeds for unit state and alarms.
  • 13. Alarm Catalog: CRUD with severity, mitigation, and audit trail.
  • 14. Maintenance Calendar: Drag-to-reschedule PM tasks with conflict checks.
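Project 13's severity validation and audit trail might look like this in outline; `AlarmCatalog` and its field names are hypothetical, and a real implementation would persist both the catalog and the trail:

```python
from datetime import datetime, timezone

class AlarmCatalog:
    """CRUD over alarm definitions; every write appends to an audit trail."""

    SEVERITIES = ("low", "medium", "high", "critical")

    def __init__(self) -> None:
        self._alarms: dict[str, dict] = {}
        self.audit: list[tuple] = []  # (utc timestamp, action, tag)

    def _log(self, action: str, tag: str) -> None:
        self.audit.append((datetime.now(timezone.utc).isoformat(), action, tag))

    def upsert(self, tag: str, severity: str, mitigation: str) -> dict:
        if severity not in self.SEVERITIES:
            raise ValueError(f"unknown severity: {severity}")
        action = "update" if tag in self._alarms else "create"
        self._alarms[tag] = {"severity": severity, "mitigation": mitigation}
        self._log(action, tag)
        return self._alarms[tag]

    def get(self, tag: str):
        return self._alarms.get(tag)

    def delete(self, tag: str) -> bool:
        if self._alarms.pop(tag, None) is None:
            return False
        self._log("delete", tag)
        return True
```

Logging the action alongside the timestamp keeps the trail queryable: who-did-what reporting reduces to filtering tuples.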

Stage 5 · Predictive & Prescriptive

Applied ML & Decision Support

Introduce anomaly detection and remaining useful life (RUL) modeling with explainable overlays.

  • 15. Anomaly Detection Baseline: Rolling stats alerts with operator context.
  • 16. Predictive Maintenance RUL: Serve scikit-learn models with Dash insights.
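The rolling-stats baseline of project 15 reduces to a trailing-window z-score test. The function name, window size, and threshold `k` below are illustrative defaults, not the project's actual parameters:

```python
from statistics import mean, stdev

def rolling_anomalies(values: list[float], window: int = 10, k: float = 3.0) -> list[bool]:
    """Flag points deviating more than k standard deviations from the trailing window."""
    flags = []
    for i, v in enumerate(values):
        if i < window:
            flags.append(False)  # not enough history yet
            continue
        trailing = values[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        # A flat window has sigma == 0; treat any departure from it as anomalous.
        flags.append(abs(v - mu) > k * sigma if sigma > 0 else v != mu)
    return flags
```

Because each point is judged only against its own trailing window, the same code works on a live feed; the operator context from the project description would be attached when a flag is raised, not inside the detector.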

Stage 6 · Maps, Streaming & Ops

Deployment & Observability

Harden deployments with geospatial awareness, streaming pipelines, and full-stack observability.

  • 17. Geospatial Maintenance Map: Leaflet view of work orders and alarms.
  • 18. Streaming Telemetry Pipeline: Redis pub/sub fan-out for live charts.
  • 19. Observability & SRE: Metrics, logs, and traces to see and trust the system.
  • 20. Secure Deployment & CI/CD: Dockerized stack with automated releases.
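The fan-out pattern behind project 18 can be prototyped in-process before wiring in Redis. `FanOutBroker` below is a stand-in that mimics pub/sub with one queue per subscriber (Redis's `PUBLISH` likewise returns the number of receivers); it is a sketch of the pattern, not a Redis client:

```python
from queue import Queue

class FanOutBroker:
    """In-process stand-in for Redis pub/sub: each subscriber gets its own queue."""

    def __init__(self) -> None:
        self._channels: dict[str, list[Queue]] = {}

    def subscribe(self, channel: str) -> Queue:
        q: Queue = Queue()
        self._channels.setdefault(channel, []).append(q)
        return q

    def publish(self, channel: str, message) -> int:
        subscribers = self._channels.get(channel, [])
        for q in subscribers:
            q.put(message)          # every subscriber receives its own copy
        return len(subscribers)     # mirrors PUBLISH returning the receiver count
```

Swapping the broker for a real Redis connection then only changes the transport; each live chart keeps consuming from its own subscription.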