Compare commits


2 Commits

Author SHA1 Message Date
31add42d6d update report 2025-11-13 11:13:11 +01:00
4de79169a2 update report 2025-11-13 11:11:16 +01:00


@@ -22,12 +22,12 @@ filtering and visualization. New transactions are automatically fetched in the b
## Architecture Overview
Our system is a fullstack web application composed of a React frontend, a FastAPI backend,
a MariaDB database with MaxScale, and asynchronous background workers powered by Celery with RabbitMQ.
The backend exposes REST endpoints for authentication (email/password and OAuth), users, categories,
transactions, exchange rates, and bank APIs. Infrastructure for Kubernetes is managed via Terraform/OpenTofu,
and the application is packaged as a Helm chart. All of this is deployed on a private TalosOS cluster running on
Proxmox VE, with public access over Cloudflare Tunnels. Static files for the frontend are served via Cloudflare Pages.
Other services deployed in the cluster include Longhorn for persistent storage and Prometheus with Grafana for monitoring.
### High-Level Architecture
@@ -50,11 +50,9 @@ The workflow works in the following way:
- Client connects to the frontend. After login, the frontend automatically fetches the stored transactions from
the database via the backend API, and currency rates from the UniRate API.
- When the client opts in to fetching new transactions via the Bank API, a cron job triggers periodic fetching
using a background worker.
- After a successful load, these transactions are stored in the database and displayed to the client.
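The cron-driven periodic fetching can be expressed as a Celery beat schedule. This is a configuration sketch only — the broker URL, task path, and interval are assumptions, not the project's actual settings:

```python
from celery import Celery
from celery.schedules import crontab

# Broker URL is an assumption for illustration.
app = Celery("workers", broker="amqp://guest@rabbitmq//")

app.conf.beat_schedule = {
    "fetch-bank-transactions": {
        "task": "app.workers.tasks.fetch_transactions",  # hypothetical task path
        "schedule": crontab(minute=0, hour="*/6"),       # e.g. every six hours
    },
}
```

With a schedule like this, the beat process enqueues the task onto RabbitMQ on the cron cadence and any available Celery worker picks it up, so the API process never blocks on the Bank API.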
### Features
@@ -62,6 +60,9 @@ The workflow works in the following way:
- For every pull request, the full app is deployed on a separate URL and the tests are run by GitHub CI/CD
- On every push to main, the production app is automatically updated
- UI is responsive on mobile devices
- Slow operations (emails, transaction fetching) are handled in the background by Celery workers.
- The app is monitored via a Prometheus metrics endpoint, and metrics are shown in a Grafana dashboard.
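The metrics endpoint serves Prometheus' plain-text exposition format. In the app this output comes from a client library, but purely to illustrate what a scrape returns (the metric name here is hypothetical):

```python
def render_metrics(counters: dict) -> str:
    """Render counters in the Prometheus text exposition format (minimal sketch)."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")  # metadata line read by Prometheus
        lines.append(f"{name} {value}")         # sample line: <name> <value>
    return "\n".join(lines) + "\n"

# Roughly what GET /metrics might return for one counter:
body = render_metrics({"transactions_fetched_total": 42})
```

Prometheus scrapes this endpoint on an interval, stores the samples as time series, and Grafana then queries Prometheus to draw the dashboards.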
### Components
@@ -69,13 +70,10 @@ The workflow works in the following way:
login/registration, shows latest transactions, filtering, and allows adding transactions.
- Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange
rates and bankAPI. Uses FastAPI Users for auth (JWT + OAuth), SQLAlchemy ORM, and Pydantic v2 schemas.
- Worker service (backend/app/workers): Celery worker handling background tasks (emails, transaction fetching).
- Database (MariaDB with MaxScale): Persists users, categories, transactions; schema managed by Alembic migrations.
- Message Queue (RabbitMQ): Queues background tasks for Celery workers.
- Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Cloudflare tunnel, etc.).
- Deployment Chart (charts/myapp-chart/): Helm chart to deploy the application to Kubernetes.
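The Pydantic v2 schemas mentioned for the Backend API might look roughly like this — the field names and types are assumptions for illustration, not the project's actual schema:

```python
from datetime import date
from decimal import Decimal
from typing import Optional

from pydantic import BaseModel

class TransactionRead(BaseModel):
    # Hypothetical fields; the real schema lives under backend/app.
    id: int
    amount: Decimal
    currency: str
    category: Optional[str] = None
    booked_on: date

# Pydantic v2 validates and coerces raw API/DB data into typed fields.
tx = TransactionRead.model_validate(
    {"id": 1, "amount": "12.50", "currency": "EUR", "booked_on": "2025-11-13"}
)
```

Schemas like this sit between the SQLAlchemy models and the JSON responses: the router validates input and serializes output through them, so the ORM layer never leaks raw rows to the client.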
### Technologies Used
@@ -624,31 +622,34 @@ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
[Reflect on the key technical and collaboration skills learned during this project]
### Challenges Faced
#### Slow cluster performance
This was caused by a single SATA SSD disk running all the VMs. It was solved by adding a second NVMe disk dedicated to the Talos VMs.
[Describe the main challenges and how you overcame them]
### If We Did This Again
#### Different framework
FastAPI lacks usable built-in support for database migrations, and implementing Alembic was a bit tricky.
Integrating the FastAPI auth system with the React frontend was also tricky, since there is no official project template.
Using .NET (which we considered initially) would probably have solved these issues.
[What would you do differently? What worked well that you'd keep?]
### Individual Growth
#### [Lukas]
This course finally forced me to learn Kubernetes (it has been on my TODO list for at least 3 years).
I had some prior experience with Terraform/OpenTofu from work, but this improved my understanding of it.
The biggest challenge for me was time tracking, since I am used to tracking time to projects, not to tasks.
(I am bad even at that :) ).
It was also an interesting experience to be the one responsible for the initial project structure/design/setup,
used not only by myself.
[Personal reflection on growth, challenges, and learning]
@@ -661,4 +662,4 @@ used not only by myself.
---
**Report Completion Date**: [Date]
**Last Updated**: 13.11.2025