mirror of
https://github.com/dat515-2025/Group-8.git
synced 2026-03-22 06:57:47 +01:00
update report
filtering and visualization. New transactions are automatically fetched in the background.

## Architecture Overview

Our system is a full-stack web application composed of a React frontend, a FastAPI backend,
a MariaDB database with MaxScale, and asynchronous background workers powered by Celery with RabbitMQ.
Redis is available for caching/KV and may be used by Celery as a result backend. The backend
exposes REST endpoints for authentication (email/password and OAuth), users, categories,
transactions, exchange rates and bank APIs. A thin controller layer (FastAPI routers) lives under app/api.
Infrastructure for Kubernetes is managed via Terraform/OpenTofu modules and
the application is packaged via a Helm chart. All of this is deployed on a private TalosOS cluster
running on Proxmox VE with CI/CD, and public access is provided over Cloudflare tunnels.
Static files for the frontend are served via Cloudflare Pages. Other services deployed in the
cluster include Longhorn for persistent storage and Prometheus with Grafana for monitoring.

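The "thin controller layer" mentioned above is essentially a mapping from routes to handler functions. A framework-free sketch of that pattern (all names and payloads here are illustrative assumptions, not the project's actual code):

```python
# Framework-free sketch of a thin controller/router layer, in the spirit of
# FastAPI routers under app/api. Handlers and routes are illustrative only.

def list_transactions() -> list[dict]:
    # A real handler would call a service/ORM layer; static data keeps this runnable.
    return [{"id": 1, "amount": -12.5, "currency": "EUR"}]

def list_categories() -> list[dict]:
    return [{"id": 1, "name": "groceries"}]

# The "router": a mapping from (method, path) to handler, which is what a thin
# controller layer boils down to once the framework plumbing is stripped away.
ROUTES = {
    ("GET", "/transactions"): list_transactions,
    ("GET", "/categories"): list_categories,
}

def dispatch(method: str, path: str) -> dict:
    # Look up the handler and wrap its result, or report 404 for unknown routes.
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"status": 404, "body": None}
    return {"status": 200, "body": handler()}

print(dispatch("GET", "/transactions"))
```

Keeping the controllers this thin means the routers only translate HTTP calls into service-layer calls, which is what makes them easy to test.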
### High-Level Architecture

The workflow works in the following way:

- The client connects to the frontend. After login, the frontend automatically fetches the stored transactions from
the database via the backend API, and currency rates from the UniRate API.
- When the client opts to fetch new transactions via the Bank API, the backend delegates the task
to a background worker service via the message queue.
- New transactions are also fetched periodically: a cron schedule triggers the fetch using the same background worker.
- After a successful load, these transactions are stored in the database and displayed to the client.

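The delegate-to-worker step can be sketched without Celery or RabbitMQ: the API enqueues a job and returns immediately, while a worker consumes the queue in the background. This is only an illustration of the pattern; the queue, payloads, and field names are assumptions, not the project's task definitions.

```python
# Stand-in for Celery + RabbitMQ: a queue the API writes to and a background
# worker thread that consumes it. All payloads here are illustrative.
import queue
import threading

task_queue: "queue.Queue[dict]" = queue.Queue()
results: list[dict] = []

def worker() -> None:
    # Drain the queue like a Celery worker consuming from RabbitMQ.
    while True:
        task = task_queue.get()
        if task.get("type") == "stop":
            break
        # Pretend to call the Bank API and store the transactions.
        results.append({"user": task["user"], "fetched": 3})
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# The API endpoint would do exactly this: enqueue the job and return at once.
task_queue.put({"type": "fetch_transactions", "user": "alice"})
task_queue.put({"type": "stop"})
t.join()
print(results)
```

The cron-triggered periodic fetch is the same mechanism with the enqueue step driven by a schedule instead of a client request.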
### Features

- For every pull request, the full app is deployed on a separate URL and the tests are run by GitHub CI/CD.
- On every push to main, the production app is automatically updated.
- The UI is responsive for mobile devices.
- Slow operations (emails, transaction fetching) are handled in the background by Celery workers.
- The app is monitored via a Prometheus metrics endpoint and metrics are shown in a Grafana dashboard.

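The metrics endpoint exposes counters and gauges in the Prometheus plain-text exposition format, which Prometheus scrapes and Grafana visualizes. A minimal sketch of what such a response looks like (the metric names are illustrative assumptions, not the app's real metrics):

```python
# Render a tiny Prometheus-style exposition payload: HELP/TYPE comment lines
# followed by "metric_name value" samples. Metric names are illustrative.

def render_metrics(request_count: int, in_flight: int) -> str:
    lines = [
        "# HELP http_requests_total Total HTTP requests handled.",
        "# TYPE http_requests_total counter",
        f"http_requests_total {request_count}",
        "# HELP http_requests_in_flight Requests currently being processed.",
        "# TYPE http_requests_in_flight gauge",
        f"http_requests_in_flight {in_flight}",
    ]
    # The exposition format is newline-delimited plain text.
    return "\n".join(lines) + "\n"

print(render_metrics(42, 1))
```

In practice a library such as prometheus_client generates this text; the point is only that "a metrics endpoint" is a plain-text route Prometheus polls on a schedule.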
### Components

- Frontend: React app handling login/registration, shows latest transactions, filtering, and allows adding transactions.
- Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange
rates and bankAPI. Uses FastAPI Users for auth (JWT + OAuth), SQLAlchemy ORM, and Pydantic v2 schemas.
- Worker service (backend/app/workers): Celery worker handling background tasks (emails, transaction fetching).
- Database (MariaDB with MaxScale): Persists users, categories, transactions; schema managed by Alembic migrations.
- Message Queue (RabbitMQ): Queues background tasks for Celery workers.
- Cache/Result Store (Redis): Available for caching or as the Celery result backend.
- Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Cloudflare tunnel, etc.).
- Deployment Chart (charts/myapp-chart/): Helm chart to deploy the application to Kubernetes.

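The components above meet in the backend's configuration: one connection URL per service. A sketch of that wiring, where the hostnames and credentials are placeholders but the URL schemes (amqp:// for RabbitMQ, redis:// for Redis, mysql+pymysql:// for MariaDB via SQLAlchemy) are the standard ones for these services:

```python
# Illustrative backend settings tying the components together.
# Hostnames and credentials are placeholders, not the project's values.
settings = {
    # SQLAlchemy connection string for MariaDB (reached through MaxScale).
    "database_url": "mysql+pymysql://app:app@maxscale:3306/myapp",
    # Celery broker: RabbitMQ over AMQP.
    "celery_broker_url": "amqp://guest:guest@rabbitmq:5672//",
    # Optional Celery result backend / cache: Redis.
    "celery_result_backend": "redis://redis:6379/0",
}

# Each URL's scheme identifies which cluster service the backend talks to.
for name, url in settings.items():
    scheme = url.split("://", 1)[0]
    print(f"{name}: {scheme}")
```

In the deployed app these values would come from the Helm chart's environment variables rather than being hard-coded.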
### Technologies Used

curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route

[Reflect on the key technical and collaboration skills learned during this project]

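The curl call above can be mirrored in Python with the standard library. The token value below is a placeholder; the request is only constructed, not sent, so the sketch runs without a live backend:

```python
# Build the same bearer-authenticated request as the curl example.
import urllib.request

TOKEN = "example-token"  # placeholder for the real JWT
req = urllib.request.Request(
    "http://127.0.0.1:8000/authenticated-route",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(req.get_header("Authorization"))
# urllib.request.urlopen(req) would perform the call against a running backend.
```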
### Challenges Faced

#### Slow cluster performance

This was caused by a single SATA SSD disk running all VMs. It was solved by adding a second NVMe disk dedicated to the Talos VMs.

[Describe the main challenges and how you overcame them]

### If We Did This Again

#### Different framework

FastAPI lacks usable built-in support for database migrations, and integrating Alembic was a bit tricky.
Integrating FastAPI's auth system with the React frontend was also tricky, since there is no official project template.
Using .NET (which we considered initially) would probably have solved these issues.

[What would you do differently? What worked well that you'd keep?]

### Individual Growth

#### [Lukas]

This course finally forced me to learn Kubernetes (it has been on my TODO list for at least 3 years).
I had some prior experience with Terraform/OpenTofu from work, but this project improved my understanding of it.

The biggest challenge for me was time tracking, since I am used to tracking time per project, not per task
(and I am bad even at that :) ).

It was also an interesting experience to be the one responsible for the initial project structure/design/setup,
used not only by myself.

[Personal reflection on growth, challenges, and learning]
